# Along which dimension does an LSTM model consider the data a sequence?

E.G. Cortes

I know that an LSTM layer expects a 3-dimensional input of shape (samples, timesteps, features). But along which of those dimensions is the data treated as a sequence? From what I have read, it is the timesteps dimension, so I created a simple problem to test this. In this problem, the LSTM model needs to sum the values along the timesteps dimension. Assuming the model considers the previous values along that dimension, it should output the sum of the values.

I tried to fit it with 4 samples and the result was not good. Does my reasoning make sense?

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM

X = np.array([
    [5., 0., -4., 3., 2.],
    [2., -12., 1., 0., 0.],
    [0., 0., 13., 0., -13.],
    [87., -40., 2., 1., 0.]
])
X = X.reshape(4, 5, 1)  # (samples, timesteps, features)
y = np.array([[6.], [-9.], [0.], [50.]])  # sum of each sequence

model = Sequential()
model.add(LSTM(8, input_shape=(5, 1)))       # added: the snippet was missing its layers
model.add(Dense(1))                          # added: single regression output
model.compile(loss='mse', optimizer='adam')  # added: compile before fitting
model.fit(X, y, epochs=1000, batch_size=4, verbose=0)

print(model.predict(np.array([[[0.], [0.], [0.], [0.], [0.]]])))
print(model.predict(np.array([[[10.], [-10.], [10.], [-10.], [0.]]])))
print(model.predict(np.array([[[10.], [20.], [30.], [40.], [50.]]])))
```

output:

```
[[-2.2417212]]
[[7.384143]]
[[0.17088854]]
```
mnis

First of all, yes, you're right: `timesteps` is the dimension treated as the data sequence.

Next, I think there is some confusion about what you mean by this line:

"assuming that the model will consider the previous values of the timestep"

To be precise, the LSTM doesn't take the previous raw input values directly; rather, at each time step it takes the current input together with the output (the hidden state and cell state) from the previous time step.
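To make that recurrence concrete, here is a minimal NumPy sketch. It is not an actual LSTM (a real LSTM adds learned weights and gates), but it follows the same pattern of carrying a state across the timesteps axis; with a plain additive update, the carried state is exactly the running sum your toy problem asks for:

```python
import numpy as np

def recurrent_sum(X):
    """Walk along the timesteps axis of X (samples, timesteps, features),
    carrying a hidden state h from one step to the next. An LSTM follows
    the same pattern, but with learned, gated updates instead of addition."""
    samples, timesteps, features = X.shape
    h = np.zeros((samples, features))  # initial hidden state
    for t in range(timesteps):         # iterate over the sequence dimension
        h = h + X[:, t, :]             # h_t = f(h_{t-1}, x_t); here f is addition
    return h

X = np.array([[5., 0., -4., 3., 2.],
              [2., -12., 1., 0., 0.]]).reshape(2, 5, 1)
print(recurrent_sum(X))  # [[ 6.], [-9.]]
```

The point is only that the state at step `t` is a function of the state at step `t-1` and the current input, not of the raw previous inputs themselves.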

Also, the reason your output is wrong is that you're training the model on a very small dataset. Recall that, no matter which machine-learning algorithm you use, it needs many data points. In your case, 4 samples are not enough to train the model. I trained with slightly more data and parameters, and the results were much better.
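As a sketch of what "more data" could look like (the exact dataset used above isn't shown, so this is a hypothetical generator producing random sequences in a 0 to 50 range, labelled with their sums):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical generator: 1000 random sequences of 5 values in [0, 50),
# each labelled with its sum. A model trained on this only ever sees
# inputs from this range.
n_samples, timesteps = 1000, 5
X_train = rng.uniform(0, 50, size=(n_samples, timesteps, 1))
y_train = X_train.sum(axis=1)  # shape (n_samples, 1): the target sums

print(X_train.shape, y_train.shape)  # (1000, 5, 1) (1000, 1)
```

Feeding `X_train` and `y_train` into the same `model.fit` call as in the question should let even a small LSTM approximate the sum well within that range.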

However, remember that there is a caveat here. I initialised the training data between 0 and 50, so predictions on numbers outside this range won't be accurate anymore. The farther a number is from this range, the lower the accuracy. This is because the task has become more of a function-mapping problem than addition. By function mapping, I mean that your model learns to map the values covered by the training set (provided it's trained for enough epochs) to outputs; it does not extrapolate reliably beyond them. You can learn more about it here.

