Hi everyone, I'm getting a ValueError when training my model with model.fit(). I've tried several ways to fix it, but nothing worked. Have a look.. note that I did resize all images to (512, 512).
................
................
................
def resizing(image, label):
    image = tf.image.resize(image, (512, 512)) / 255.0
    return image, label

mapped_training_set = train_set.map(resizing)
mapped_testing_set = test_set.map(resizing)
mapped_valid_set = valid_set.map(resizing)
tf.keras.layers.Conv2D(32, (3, 3), input_shape=(512, 512, 3), activation="relu"),
tf.keras.layers.MaxPooling2D((2, 2)),
.........
.........
.........
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation="relu"),
tf.keras.layers.Dense(101, activation="softmax")
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
hist = model.fit(mapped_training_set,
                 epochs=10,
                 validation_data=mapped_valid_set,
                 )
**I get this error:**
<ipython-input-31-1d134652773c> in <module>()
1 hist = model.fit(mapped_training_set,
2 epochs=10,
----> 3 validation_data=mapped_valid_set,
4 )
16 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
235 except Exception as e: # pylint:disable=broad-except
236 if hasattr(e, 'ag_error_metadata'):
--> 237 raise e.ag_error_metadata.to_exception(e)
238 else:
239 raise
ValueError: in converted code:
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py:677 map_fn
batch_size=None)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py:2410 _standardize_tensors
exception_prefix='input')
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py:573 standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected conv2d_32_input to have 4 dimensions, but got array with shape (512, 512, 3)
I've been searching for a fix for over two hours now and haven't found an answer.
None of the results and solutions I found match my problem.
Please help, I'm stuck here.
Thanks in advance.
You need to pass your model inputs of shape `(batch_size, height, width, channels)`; that is why it says it expected 4 dimensions. Instead, you are passing single images of shape `(512, 512, 3)`.
If you want to train the model on single images, you should change the shape of each image with `image = tf.expand_dims(image, axis=0)`. This can be done inside the `resizing` function.
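A sketch of that first option, adding the `expand_dims` call to the asker's `resizing` function (the resize and normalization are unchanged):

```python
import tensorflow as tf

def resizing(image, label):
    # Resize to the model's expected spatial size and normalize to [0, 1]
    image = tf.image.resize(image, (512, 512)) / 255.0
    # Add a leading batch dimension: (512, 512, 3) -> (1, 512, 512, 3)
    image = tf.expand_dims(image, axis=0)
    return image, label
```

Each element of the mapped dataset then already has the 4-D shape the Conv2D input expects, at the cost of an effective batch size of 1.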
If you want to train the model in batches, you should add `mapped_training_set = mapped_training_set.batch(batch_size)` after the `map` call, and then do the same for the other two datasets.
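A minimal sketch of the batching option. The small in-memory dataset here is only a stand-in for the asker's `train_set` (assumed to yield `(image, label)` pairs), and `BATCH_SIZE` is an arbitrary choice:

```python
import tensorflow as tf

def resizing(image, label):
    image = tf.image.resize(image, (512, 512)) / 255.0
    return image, label

# Stand-in for train_set: 4 dummy RGB images with integer labels
train_set = tf.data.Dataset.from_tensor_slices(
    (tf.zeros((4, 64, 64, 3)), tf.range(4, dtype=tf.int64)))

BATCH_SIZE = 2  # choose to fit your memory budget
mapped_training_set = train_set.map(resizing).batch(BATCH_SIZE)

# Each element now has the 4-D shape Keras expects
for images, labels in mapped_training_set.take(1):
    print(images.shape)  # (2, 512, 512, 3)
```

With the datasets batched this way, the original `model.fit(mapped_training_set, ...)` call works unchanged, since `fit` reads the batch size from the dataset itself.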