I want to run an LSTM over several different sequences in each batch and then combine the last outputs. This is what I have been trying:
from keras.layers import Dense, Input, LSTM, Embedding, TimeDistributed
num_sentences = 4
num_features = 3
num_time_steps = 5
inputs = Input([num_sentences, num_time_steps])
emb_layer = Embedding(10, num_features)
embedded = emb_layer(inputs)
lstm_layer = LSTM(4)
shape = [num_sentences, num_time_steps, num_features]
lstm_outputs = TimeDistributed(lstm_layer, input_shape=shape)(embedded)
This gives me the following error:
Traceback (most recent call last):
File "test.py", line 12, in <module>
lstm_outputs = TimeDistributed(lstm_layer, input_shape=shape)(embedded)
File "/Users/erick/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 546, in __call__
self.build(input_shapes[0])
File "/Users/erick/anaconda2/lib/python2.7/site-packages/keras/layers/wrappers.py", line 94, in build
self.layer.build(child_input_shape)
File "/Users/erick/anaconda2/lib/python2.7/site-packages/keras/layers/recurrent.py", line 702, in build
self.input_dim = input_shape[2]
IndexError: tuple index out of range
I tried omitting the input_shape argument to TimeDistributed, but that did not change anything.
After trying michetonu's answer and getting the same error, I realized my Keras version was probably out of date. I was in fact running Keras 1.2, and the code runs fine on 2.0.
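For reference, here is a minimal sketch of the same pipeline as it builds under Keras 2.x (same layer sizes as in the question; the Model wrapper is added here only to inspect the resulting output shape, and the exact shape shown assumes Keras 2 semantics):

```python
from keras.layers import Input, LSTM, Embedding, TimeDistributed
from keras.models import Model

num_sentences = 4
num_features = 3
num_time_steps = 5

# (batch, sentences, time_steps) of integer token ids
inputs = Input([num_sentences, num_time_steps])
# Embedding maps each id to a vector: (batch, sentences, time_steps, features)
embedded = Embedding(10, num_features)(inputs)
# TimeDistributed applies the LSTM independently to each sentence,
# so each sentence is reduced to its final LSTM state of size 4
lstm_outputs = TimeDistributed(LSTM(4))(embedded)

model = Model(inputs, lstm_outputs)
print(model.output_shape)  # one 4-dim vector per sentence
```

On Keras 2 no input_shape argument is needed on the wrapper; TimeDistributed infers the child layer's input shape from the embedded tensor.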