ValueError: The channel dimension of the inputs should be defined. Found `None`

旺昌西

I'm new to TensorFlow, so I'm not sure what information you need to diagnose my problem. If you need anything else, please let me know.

Basically, I'm trying to run an image classifier built with Sequential, based on the tutorial at https://www.tensorflow.org/tutorials/images/classification, and I'm trying to plug in my own dataset.

I'm currently stuck at running the model with model.fit(), which gives me the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-90-85c03bda7f8f> in <module>
     16 
     17 epochs=1
---> 18 history = model.fit(
     19   train_data,
     20   validation_data=test_data,

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1132                 _r=1):
   1133               callbacks.on_train_batch_begin(step)
-> 1134               tmp_logs = self.train_function(iterator)
   1135               if data_handler.should_sync:
   1136                 context.async_wait()

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    816     tracing_count = self.experimental_get_tracing_count()
    817     with trace.Trace(self._name) as tm:
--> 818       result = self._call(*args, **kwds)
    819       compiler = "xla" if self._jit_compile else "nonXla"
    820       new_tracing_count = self.experimental_get_tracing_count()

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    860       # This is the first call of __call__, so we have to initialize.
    861       initializers = []
--> 862       self._initialize(args, kwds, add_initializers_to=initializers)
    863     finally:
    864       # At this point we know that the initialization is complete (or less

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
    701     self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
    702     self._concrete_stateful_fn = (
--> 703         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
    704             *args, **kwds))
    705 

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   3018       args, kwargs = None, None
   3019     with self._lock:
-> 3020       graph_function, _ = self._maybe_define_function(args, kwargs)
   3021     return graph_function
   3022 

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   3412 
   3413           self._function_cache.missed.add(call_context_key)
-> 3414           graph_function = self._create_graph_function(args, kwargs)
   3415           self._function_cache.primary[cache_key] = graph_function
   3416 

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   3247     arg_names = base_arg_names + missing_arg_names
   3248     graph_function = ConcreteFunction(
-> 3249         func_graph_module.func_graph_from_py_func(
   3250             self._name,
   3251             self._python_function,

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    996         _, original_func = tf_decorator.unwrap(python_func)
    997 
--> 998       func_outputs = python_func(*func_args, **func_kwargs)
    999 
   1000       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    610             xla_context.Exit()
    611         else:
--> 612           out = weak_wrapped_fn().__wrapped__(*args, **kwds)
    613         return out
    614 

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    983           except Exception as e:  # pylint:disable=broad-except
    984             if hasattr(e, "ag_error_metadata"):
--> 985               raise e.ag_error_metadata.to_exception(e)
    986             else:
    987               raise

ValueError: in user code:

    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:839 train_function  *
        return step_function(self, iterator)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:829 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1262 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2734 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:3423 _call_for_each_replica
        return fn(*args, **kwargs)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:822 run_step  **
        outputs = model.train_step(data)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:788 train_step
        y_pred = self(x, training=True)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:1032 __call__
        outputs = call_fn(inputs, *args, **kwargs)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/sequential.py:398 call
        outputs = layer(inputs, **kwargs)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:1028 __call__
        self._maybe_build(inputs)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:2722 _maybe_build
        self.build(input_shapes)  # pylint:disable=not-callable
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/layers/convolutional.py:188 build
        input_channel = self._get_input_channel(input_shape)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/layers/convolutional.py:367 _get_input_channel
        raise ValueError('The channel dimension of the inputs '

    ValueError: The channel dimension of the inputs should be defined. Found `None`.

Here is my model code:

model = Sequential([
  layers.Conv2D(16, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(),
  layers.Conv2D(32, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(),
  layers.Conv2D(64, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(),
  layers.Flatten(),
  layers.Dense(128, activation='relu'),
  layers.Dense(4)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

epochs=10
history = model.fit(
  train_data,
  validation_data=test_data,
  epochs=epochs
)

I understand that the tutorial uses the built-in preprocessing utilities, but I tried to build my own preprocessing function as a learning exercise.

def preprocessing(image, target_size):
    # Extracting labels
    parts = tf.strings.split(image, os.sep)
    label = parts[-2]
    
    # Decoding image file
    path = tf.io.read_file(image)
    image = tf.image.decode_jpeg(path)
    
    # Cropping
    image = tf.image.crop_to_bounding_box(image, offset_height=25, offset_width=25, target_height=target_size, target_width=target_size)
    
    # Normalizing
    image = image / 255
    
    return image, label

list_ds = tf.data.Dataset.list_files(DATA_DIR + '/*/*')
preprocess_function = partial(preprocessing, target_size=image_size)
processed_data = list_ds.map(preprocess_function)
train_data = processed_data.take(8000).batch(batch_size)
test_data = processed_data.skip(8000).batch(batch_size)

Other information I can provide: the images are grayscale, so 1 channel; they are normalized by /255 in the preprocessing function; image_size is 300 and batch_size is 100.

Nicolas Gervais

Try this:

image = tf.image.decode_jpeg(path, channels=1)
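The reason this works: inside Dataset.map your function is traced as a graph, and without `channels` the static shape of `tf.image.decode_jpeg` is `(None, None, None)`, so the first Conv2D layer cannot infer how many input channels to build its kernel for. Passing `channels=1` pins the last dimension at trace time. A minimal sketch illustrating the difference (the tiny synthetic JPEG here is just for demonstration):

```python
import tensorflow as tf

# Encode a tiny 8x8 grayscale image so we have JPEG bytes to decode.
img_bytes = tf.io.encode_jpeg(tf.zeros([8, 8, 1], dtype=tf.uint8))
ds = tf.data.Dataset.from_tensors(img_bytes)

# Without `channels`, the static channel dimension is unknown (None).
no_channels = ds.map(lambda b: tf.image.decode_jpeg(b))
print(no_channels.element_spec.shape)    # (None, None, None)

# With `channels=1`, Keras can see the channel count at build time.
one_channel = ds.map(lambda b: tf.image.decode_jpeg(b, channels=1))
print(one_channel.element_spec.shape)    # (None, None, 1)
```

An alternative (or complementary) fix is to declare the input shape on the model itself, e.g. `layers.Conv2D(16, 3, padding='same', activation='relu', input_shape=(300, 300, 1))`, but fixing the decode step addresses the root cause in the pipeline.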
