AutoEncoder

A simple autoencoder trained on MNIST.

This model is part of the "Introduction to Generative AI" course.
For more details, visit the GitHub repository.

Model Description

The AutoEncoder is a neural network designed to compress and reconstruct input data. It consists of an encoder that compresses the input into a latent space and a decoder that reconstructs the input from the latent representation.
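The exact layer sizes are not documented here, but a minimal sketch of such an architecture (assuming fully connected layers, 28x28 MNIST inputs, and the latent dimension of 10 listed below; the real model in the repository may differ) could look like:

import torch
from torch import nn

class AutoEncoder(nn.Module):
    """Illustrative fully connected autoencoder for 28x28 MNIST images."""
    def __init__(self, latent_dim: int = 10):
        super().__init__()
        # Encoder: flatten the image and compress it to the latent vector
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: expand the latent vector back to pixel space
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 28 * 28),
            nn.Sigmoid(),
            nn.Unflatten(1, (1, 28, 28)),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z)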

Training Details

  • Dataset: MNIST (handwritten digits)
  • Loss Function: Mean Squared Error (MSE)
  • Optimizer: Adam
  • Learning Rate: 0.001
  • Epochs: 40
  • Latent Dimension: 10
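
A rough outline of the training loop implied by these settings (a sketch only, assuming the illustrative AutoEncoder above and torchvision for loading MNIST; the batch size is an assumption, and the actual training script in the repository may differ):

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hyperparameters taken from the list above
lr, epochs = 0.001, 40

train_loader = DataLoader(
    datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor()),
    batch_size=128,  # assumed; not stated in the card
    shuffle=True,
)

model = AutoEncoder()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)

for epoch in range(epochs):
    for images, _ in train_loader:  # labels are unused for reconstruction
        reconstruction = model(images)
        loss = criterion(reconstruction, images)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()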

Tracking

For detailed training logs and metrics, visit the Weights & Biases run.

Load Model

from model import AutoEncoder
import torch

# Instantiate the architecture and load the trained weights
model = AutoEncoder()
model.load_state_dict(torch.load("model.pth"))

# Switch to evaluation mode for inference
model.eval()
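
Once loaded, reconstructing a test image might look like the following (a sketch, assuming the model accepts a batch of 1x28x28 image tensors as in the architecture sketch above):

from torchvision import datasets, transforms

test_set = datasets.MNIST("data", train=False, download=True, transform=transforms.ToTensor())
image, _ = test_set[0]                          # shape: (1, 28, 28)
with torch.no_grad():
    reconstruction = model(image.unsqueeze(0))  # add a batch dimension
print(reconstruction.shape)                     # expected: torch.Size([1, 1, 28, 28])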

License

This project is licensed under the MIT License. See the LICENSE file for details.
