How to use Keras TimeseriesGenerator

connor449

I am having trouble implementing Keras TimeseriesGenerator. What I want is to experiment with different values of look_back, the variable that sets how many lagged rows of X are used to predict each y. Right now it is set to 3, but I would like to be able to test multiple values. Essentially, I want to see whether using the last n rows to predict a value increases the accuracy.
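
To make the lag idea concrete, here is a toy sketch (not my real data, just how I understand TimeseriesGenerator) of what it produces with look_back = 3:

import numpy as np
from keras.preprocessing.sequence import TimeseriesGenerator

data = np.arange(6).reshape(6, 1)          # toy series: rows 0..5, one feature each
gen = TimeseriesGenerator(data, data, length=3, batch_size=1)
for i in range(len(gen)):
    x, y = gen[i]
    print(x.ravel(), '->', y.ravel())      # [0 1 2] -> [3], [1 2 3] -> [4], [2 3 4] -> [5]

Here is my actual code: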

### trying with timeseries generator
from keras.preprocessing.sequence import TimeseriesGenerator

look_back = 3

train_data_gen = TimeseriesGenerator(X_train, X_train,
    length=look_back, sampling_rate=1,stride=1,
    batch_size=3)
test_data_gen = TimeseriesGenerator(X_test, X_test,
    length=look_back, sampling_rate=1,stride=1,
    batch_size=1)

### Bi_LSTM
Bi_LSTM = Sequential()
Bi_LSTM.add(layers.Bidirectional(layers.LSTM(512, input_shape=(look_back, 11))))
Bi_LSTM.add(layers.Dropout(.5))
# Bi_LSTM.add(layers.Flatten())
Bi_LSTM.add(Dense(11, activation='softmax'))
Bi_LSTM.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
### fitting a small normal model seems to be necessary for compile
Bi_LSTM.fit(X_train[:1],
              y_train[:1],
              epochs=1,
              batch_size=32,
              validation_data=(X_test[:1], y_test[:1]),
              class_weight=class_weights)
print('ignore above, necessary to run custom generator...')
Bi_LSTM_history = Bi_LSTM.fit_generator(Bi_LSTM.fit_generator(generator,
                                                    steps_per_epoch=1,
                                                    epochs=20,
                                                    verbose=0,
                                                    class_weight=class_weights))

Which yields the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-35-11561ec7fb92> in <module>()
     26               batch_size=32,
     27               validation_data=(X_test[:1], y_test[:1]),
---> 28               class_weight=class_weights)
     29 print('ignore above, necessary to run custom generator...')
     30 Bi_LSTM_history = Bi_LSTM.fit_generator(Bi_LSTM.fit_generator(generator,

2 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    143                             ': expected ' + names[i] + ' to have shape ' +
    144                             str(shape) + ' but got array with shape ' +
--> 145                             str(data_shape))
    146     return data
    147 

ValueError: Error when checking input: expected lstm_16_input to have shape (3, 11) but got array with shape (1, 11)

If I change the Bi_LSTM input shape to (1, 11), as the error above suggests, then I get this error instead:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-36-7360e3790518> in <module>()
     31                                                     epochs=20,
     32                                                     verbose=0,
---> 33                                                     class_weight=class_weights))
     34 

5 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    143                             ': expected ' + names[i] + ' to have shape ' +
    144                             str(shape) + ' but got array with shape ' +
--> 145                             str(data_shape))
    146     return data
    147 

ValueError: Error when checking input: expected lstm_17_input to have shape (1, 11) but got array with shape (3, 11)

What is going on here?

If needed: my data is read from a DataFrame where each row (observation) is a (1, 11) float vector and each label is an int, which I convert to a one-hot vector of shape (1, 11).
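
Roughly, the label conversion looks like this (a sketch; the column name and the use of to_categorical are illustrative, not my exact code):

from keras.utils import to_categorical

labels = df['label'].values                 # hypothetical column of integer classes 0..10
y = to_categorical(labels, num_classes=11)  # one-hot labels, shape (n_samples, 11)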

Marco Cerliani

I found a lot of mistakes in the code, so rather than fix them one by one I will provide a dummy example that you can follow to carry out your task. Pay attention to the original dimensionality of your data and to the dimensionality of the data generated by the TimeseriesGenerator; this is important for understanding how to build the network.

import numpy as np
from keras.preprocessing.sequence import TimeseriesGenerator
from keras.models import Sequential
from keras.layers import Bidirectional, LSTM, Dropout, Dense

# utility variables
look_back = 3
batch_size = 3
n_feat = 11
n_class = 11
n_train = 200
n_test = 60

# data simulation
X_train = np.random.uniform(0,1, (n_train,n_feat)) # 2D!
X_test = np.random.uniform(0,1, (n_test,n_feat)) # 2D!
y_train = np.random.randint(0,2, (n_train,n_class)) # 2D!
y_test = np.random.randint(0,2, (n_test,n_class)) # 2D!


train_data_gen = TimeseriesGenerator(X_train, y_train, length=look_back, batch_size=batch_size)
test_data_gen = TimeseriesGenerator(X_test, y_test, length=look_back, batch_size=batch_size)

# check generator dimensions
for i in range(len(train_data_gen)):
    x, y = train_data_gen[i]
    print(x.shape, y.shape)

Bi_LSTM = Sequential()
Bi_LSTM.add(Bidirectional(LSTM(512), input_shape=(look_back, n_feat)))
Bi_LSTM.add(Dropout(.5))
Bi_LSTM.add(Dense(n_class, activation='softmax'))
print(Bi_LSTM.summary())

Bi_LSTM.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

Bi_LSTM_history = Bi_LSTM.fit_generator(train_data_gen,
                                        steps_per_epoch=50,
                                        epochs=3,
                                        verbose=1,
                                        validation_data=test_data_gen) # class_weight=class_weights)
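
Since the original goal was to compare several look_back values, one possible way (a sketch reusing the dummy data above; make_model is a helper introduced here just for illustration) is to parameterize the setup and loop:

def make_model(look_back, n_feat, n_class):
    model = Sequential()
    model.add(Bidirectional(LSTM(512), input_shape=(look_back, n_feat)))
    model.add(Dropout(.5))
    model.add(Dense(n_class, activation='softmax'))
    model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

for look_back in [1, 3, 5, 10]:   # candidate lag lengths
    tr_gen = TimeseriesGenerator(X_train, y_train, length=look_back, batch_size=batch_size)
    te_gen = TimeseriesGenerator(X_test, y_test, length=look_back, batch_size=batch_size)
    model = make_model(look_back, n_feat, n_class)
    hist = model.fit_generator(tr_gen, epochs=3, verbose=0, validation_data=te_gen)
    # metric key is 'val_acc' in older Keras, 'val_accuracy' in newer versions
    key = 'val_accuracy' if 'val_accuracy' in hist.history else 'val_acc'
    print(look_back, max(hist.history[key]))

Comparing the printed validation accuracies then shows whether a longer lag actually helps.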
