I have the following model:
I tried to fit the model using ImageDataGenerator with flow_from_directory and fit_generator, but I get the following error:
ValueError: Input 0 is incompatible with layer model: expected shape=(None, 256, 256, 1), found shape=(None, 400, 400, 1)
I am using the correct target_size, so I don't know why this error occurs. My code is as follows:
model_merged.compile(loss='categorical_crossentropy',
                     optimizer="adam",
                     metrics=['acc'])

train_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.25)

# Training data
train_generator = train_datagen.flow_from_directory(
    '/kaggle/working/images/',  # source directory
    target_size=(256, 256),     # resizes images
    batch_size=batch_size,
    class_mode='categorical',
    subset='training')
epochs = epochs
# Testing data
validation_generator = train_datagen.flow_from_directory(
    '/kaggle/working/images/',
    target_size=(256, 256),
    batch_size=batch_size,
    class_mode='categorical',
    subset='validation')  # set as validation data
# Model fitting for a number of epochs
history = model_merged.fit_generator(
    train_generator,
    steps_per_epoch=steps_train,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=steps_val,
    verbose=1)
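As I understand it, target_size only controls the spatial dimensions; the channel count of each batch comes from color_mode, which defaults to 'rgb' in flow_from_directory. A shape-only sketch with NumPy stand-ins (not the actual generators) of what each setting would yield:

```python
import numpy as np

# Shape-only stand-ins for one batch from flow_from_directory with
# target_size=(256, 256): the default color_mode='rgb' yields 3 channels,
# whereas color_mode='grayscale' would yield 1.
batch_rgb = np.zeros((32, 256, 256, 3), dtype=np.float32)
batch_gray = np.zeros((32, 256, 256, 1), dtype=np.float32)

print(batch_rgb.shape)   # (32, 256, 256, 3)
print(batch_gray.shape)  # (32, 256, 256, 1)
```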
Update
batch_size = 32
epochs = 32
steps_train = 18
steps_val = 3
img_height = 256
img_width = 256
data_dir='/kaggle/working/images/'
model_merged.compile(loss='categorical_crossentropy',
                     optimizer="adam",
                     metrics=['acc'])
train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
val_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
from tensorflow.keras.layers import Permute, Reshape, Resizing

# Scale pixel values between 0 and 1
normalization_layer = tf.keras.layers.Rescaling(1./255)
# Move the 3 channels to the front, collapse them to a single channel,
# and permute back to channels-last: (None, 256, 256, 3) -> (None, 256, 256, 1)
reshape_layer = Reshape((-1, 256, 256))
resize_layer = Resizing(1, 256)
permute_layer = Permute((2, 3, 1))
train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
val_ds = val_ds.map(lambda x, y: (normalization_layer(x), y))
train_ds = train_ds.map(lambda x, y: (reshape_layer(x), y))
val_ds = val_ds.map(lambda x, y: (reshape_layer(x), y))
train_ds = train_ds.map(lambda x, y: (resize_layer(x), y))
val_ds = val_ds.map(lambda x, y: (resize_layer(x), y))
train_ds = train_ds.map(lambda x, y: (permute_layer(x), y))
val_ds = val_ds.map(lambda x, y: (permute_layer(x), y))
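To double-check the mapping chain above, here is a shape-only walk-through using NumPy as a stand-in for the Keras layers (assuming a batch of 4 RGB images; only the shapes are emulated, not the actual resizing interpolation):

```python
import numpy as np

x = np.zeros((4, 256, 256, 3), dtype=np.float32)  # one batch, channels-last

# Reshape((-1, 256, 256)): per-sample (256, 256, 3) -> (3, 256, 256)
x = x.reshape((4, -1, 256, 256))
assert x.shape == (4, 3, 256, 256)

# Resizing(1, 256) treats axes 1 and 2 as height/width and resizes
# (3, 256) -> (1, 256); emulate only the resulting shape here
x = x[:, :1, :, :]
assert x.shape == (4, 1, 256, 256)

# Permute((2, 3, 1)): move the singleton axis back to the channel slot
x = np.transpose(x, (0, 2, 3, 1))
print(x.shape)  # (4, 256, 256, 1)
```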
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
history = model_merged.fit(
    train_ds,
    steps_per_epoch=steps_train,
    epochs=epochs,
    validation_data=val_ds,
    validation_steps=steps_val,
    verbose=1)
The error is the same as above. I added some new layers to change the input from (None, 256, 256, 3) to (None, 256, 256, 1), since that was the original error, but it still does not work. I am not sure where the error comes from, since the training and validation datasets now have the correct dimensions.
Update 2
I removed the concatenation layer from the merged model, since I want the output of model A to be passed to the input of model B; however, even with the new merged model the error still appears.