
I am having an issue with my autoencoder: I am shaping the output incorrectly. Currently the autoencoder is coded like this.

I got this error:

ValueError: Dimensions must be equal, but are 2000 and 3750 for '{{node mean_absolute_error/sub}} = Sub[T=DT_FLOAT](sequential_8/sequential_7/conv1d_transpose_14/BiasAdd, IteratorGetNext:1)' with input shapes: [?,2000,3], [?,3750,3].

Can someone help with adjusting the architecture, if that is possible? I seem to have forgotten the original modifications I made to account for this.

import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping

# Provided encoder
encoder = tf.keras.models.Sequential([
    tf.keras.layers.Reshape([3750, 3], input_shape=[3750, 3]),
    tf.keras.layers.Conv1D(32, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.MaxPool1D(pool_size=2),  # 3750 -> 1875
    tf.keras.layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.MaxPool1D(pool_size=2),  # 1875 -> 937
    tf.keras.layers.Conv1D(128, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.MaxPool1D(pool_size=2),  # 937 -> 468
    tf.keras.layers.Conv1D(256, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.MaxPool1D(pool_size=2),  # 468 -> 234
    tf.keras.layers.Conv1D(512, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.MaxPool1D(pool_size=2),  # 234 -> 117
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512)  # latent vector of size 512
])

#latent space

decoder = tf.keras.models.Sequential([
    tf.keras.layers.Dense(512 * 125, input_shape=[512]),
    tf.keras.layers.Reshape([125, 512]),
    tf.keras.layers.Conv1DTranspose(512, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),  # 125 -> 250
    tf.keras.layers.Conv1DTranspose(256, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),  # 250 -> 500
    tf.keras.layers.Conv1DTranspose(128, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),  # 500 -> 1000
    tf.keras.layers.Conv1DTranspose(64, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),  # 1000 -> 2000
    # Adjust the kernel size and padding to match the input shape
    tf.keras.layers.Conv1DTranspose(3, kernel_size=5, strides=1, padding="same", activation="linear")  # (None, 2000, 3)
])

# Combine encoder and decoder into the full autoencoder
ae = tf.keras.models.Sequential([encoder, decoder])

ae.compile(
    loss="mean_squared_error", 
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.00001)
)
# Define the early stopping criteria
early_stopping = EarlyStopping(monitor='val_loss', patience=30, mode='min')

history = ae.fit(X_train, X_train, batch_size=8, epochs=150, validation_data=(X_val, X_val), callbacks=[early_stopping])

1 Answer


As the error indicates, there is a dimension mismatch in your autoencoder: the decoder outputs sequences of length 2000, while the targets have length 3750, so the loss between the input and the reconstruction cannot be computed. You can check the output shape of your model using model.summary(), which gives the following result:

[model.summary() output: the decoder's final Conv1DTranspose layer has output shape (None, 2000, 3), while the model's input is (None, 3750, 3)]
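
You can also confirm the mismatch without the summary; a minimal check, assuming the encoder and decoder are defined exactly as in your question, is to push a dummy latent vector through the decoder:

encoder.summary()  # ends in Dense -> (None, 512)
decoder.summary()  # ends in Conv1DTranspose -> (None, 2000, 3)

# Or check the decoder output directly with a dummy latent vector:
print(decoder(tf.zeros([1, 512])).shape)  # (1, 2000, 3) -- should be (1, 3750, 3)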

A quick fix would be to add a padding layer. For example, you can use the following architecture for the decoder:

decoder = tf.keras.models.Sequential([
    tf.keras.layers.Dense(512 * 125, input_shape=[512]),
    tf.keras.layers.Reshape([125, 512]),
    tf.keras.layers.Conv1DTranspose(512, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),  # 125 -> 250
    tf.keras.layers.Conv1DTranspose(256, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),  # 250 -> 500
    tf.keras.layers.Conv1DTranspose(128, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),  # 500 -> 1000
    tf.keras.layers.Conv1DTranspose(64, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),  # 1000 -> 2000
    tf.keras.layers.ZeroPadding1D(padding=875),  # pads 875 on each side: 2000 -> 3750
    tf.keras.layers.Conv1DTranspose(3, kernel_size=5, strides=1, padding="same", activation="linear")  # (None, 3750, 3)
])
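
With that change the decoder's output length matches the encoder's input length, which you can confirm before training:

print(decoder.output_shape)  # (None, 3750, 3) -- now matches the input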

Remark: it is better to distribute these padding layers between the upsampling stages instead, so you don't overwhelm your model with a lot of zeros in one place.
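
For example, the same 1750 extra steps can be spread across the stages. The split below is one arbitrary choice (any split works as long as the final length comes out to 3750):

import tensorflow as tf

decoder = tf.keras.models.Sequential([
    tf.keras.layers.Dense(512 * 125, input_shape=[512]),
    tf.keras.layers.Reshape([125, 512]),
    tf.keras.layers.Conv1DTranspose(512, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),        # 125 -> 250
    tf.keras.layers.ZeroPadding1D(padding=50),   # 250 -> 350
    tf.keras.layers.Conv1DTranspose(256, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),        # 350 -> 700
    tf.keras.layers.ZeroPadding1D(padding=50),   # 700 -> 800
    tf.keras.layers.Conv1DTranspose(128, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),        # 800 -> 1600
    tf.keras.layers.ZeroPadding1D(padding=100),  # 1600 -> 1800
    tf.keras.layers.Conv1DTranspose(64, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),        # 1800 -> 3600
    tf.keras.layers.ZeroPadding1D(padding=75),   # 3600 -> 3750
    tf.keras.layers.Conv1DTranspose(3, kernel_size=5, strides=1, padding="same", activation="linear")
])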


1 Comment

Thank you. I've decided to adjust some of the upsampling layers instead, for a cleaner result.
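
The comment doesn't say which sizes were changed, but one such variant needs no zero padding at all: the required length increase is 3750 / 125 = 30 = 2 x 3 x 5, which can be factored directly into the UpSampling1D sizes. A sketch, assuming the same latent size:

decoder = tf.keras.models.Sequential([
    tf.keras.layers.Dense(512 * 125, input_shape=[512]),
    tf.keras.layers.Reshape([125, 512]),
    tf.keras.layers.Conv1DTranspose(512, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=2),  # 125 -> 250
    tf.keras.layers.Conv1DTranspose(256, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=3),  # 250 -> 750
    tf.keras.layers.Conv1DTranspose(128, kernel_size=5, strides=1, padding="same", activation="relu"),
    tf.keras.layers.UpSampling1D(size=5),  # 750 -> 3750
    tf.keras.layers.Conv1DTranspose(3, kernel_size=5, strides=1, padding="same", activation="linear")
])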
