✦ AE · 3→64→64→2→64→64→3 · ELU · MSE
How Autoencoders Work
A real neural network, trained with backpropagation and AdamW, learns to compress 3D points into a 2D latent space.
Every reset draws new random weights, so the convergence path differs on every run. Customize the architecture below.
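The training loop behind the demo can be sketched in NumPy. This is a minimal sketch, not the page's implementation: it assumes plain gradient descent in place of AdamW, a reduced 3→8→2→8→3 architecture in place of the page's 3→64→64→2→64→64→3, and synthetic Gaussian points in place of the 3D structural descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

def elu(x, a=1.0):
    return np.where(x > 0, x, a * (np.exp(x) - 1))

def elu_grad(x, a=1.0):
    return np.where(x > 0, 1.0, a * np.exp(x))

# Tiny 3 -> 8 -> 2 -> 8 -> 3 autoencoder (the page uses 3->64->64->2->64->64->3).
sizes = [3, 8, 2, 8, 3]
W = [rng.normal(0, 0.5, (sizes[i], sizes[i + 1])) for i in range(4)]
b = [np.zeros(sizes[i + 1]) for i in range(4)]

def forward(X):
    acts, pre = [X], []          # acts[i] is the input to layer i
    h = X
    for i in range(4):
        z = h @ W[i] + b[i]
        pre.append(z)
        h = z if i == 3 else elu(z)   # linear output layer, ELU elsewhere
        acts.append(h)
    return acts, pre

def train_step(X, lr=1e-2):
    acts, pre = forward(X)
    n, d = X.shape
    loss = np.mean((acts[-1] - X) ** 2)
    # Backprop of the MSE through all four layers (plain SGD, not AdamW).
    delta = 2 * (acts[-1] - X) / (n * d)
    for i in reversed(range(4)):
        if i != 3:
            delta = delta * elu_grad(pre[i])
        gW = acts[i].T @ delta
        gb = delta.sum(axis=0)
        delta = delta @ W[i].T        # propagate before updating W[i]
        W[i] -= lr * gW
        b[i] -= lr * gb
    return loss

X = rng.normal(size=(150, 3))         # stand-in for the 3D descriptors
losses = [train_step(X) for _ in range(3000)]
codes = forward(X)[0][2]              # 2D latent codes from the encoder half
print(losses[0], losses[-1], codes.shape)
```

After 3000 steps the loss drops well below its random-initialization value, which is the curve the demo plots live.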
Training Control
0.3×
0.7×
1×
2×
5×
10×
↺ Reset
▶ Play
Step 0: random weights · Step 3000: converged
⚙ Model Configuration ▼
Model Type
AE
β-VAE
β =
1.0
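Switching from AE to β-VAE changes the training objective. Assuming the standard formulation (the page does not spell out its exact loss), the reconstruction term is joined by a KL-divergence term scaled by β, pulling the latent codes toward a unit Gaussian:

```python
import numpy as np

def beta_vae_loss(x, x_hat, mu, log_var, beta=1.0):
    # Reconstruction term: per-sample squared error (the plain AE uses only this).
    recon = np.mean(np.sum((x_hat - x) ** 2, axis=1))
    # KL divergence of N(mu, sigma^2) from the unit Gaussian prior, per sample.
    kl = np.mean(0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1))
    return recon + beta * kl

# Illustrative values: latent already matches the prior, so KL contributes 0.
x = np.zeros((4, 3)); x_hat = np.ones((4, 3))
mu = np.zeros((4, 2)); log_var = np.zeros((4, 2))
print(beta_vae_loss(x, x_hat, mu, log_var, beta=1.0))  # -> 3.0 (pure reconstruction)
```

Raising β trades reconstruction fidelity for a smoother, more disentangled latent space; β = 1.0 recovers the ordinary VAE.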
Hidden Units
32
64
128
256
Activation
ELU
ReLU
Leaky
Tanh
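The activation choices differ mainly in how they treat negative pre-activations. A small comparison, assuming α = 1 for ELU and a 0.01 slope for Leaky ReLU (the page does not state its constants):

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5])
relu  = np.maximum(0.0, x)                   # zeroes all negatives
leaky = np.where(x > 0, x, 0.01 * x)         # small negative slope (0.01 assumed)
elu   = np.where(x > 0, x, np.exp(x) - 1.0)  # smooth, saturates at -1 (alpha = 1 assumed)
tanh  = np.tanh(x)                           # bounded to (-1, 1) on both sides
print(relu, leaky, elu, tanh)
```

ELU's smooth, nonzero response below zero is why it is the default here: gradients keep flowing for negative inputs, unlike plain ReLU.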
Batch Size (live)
32
64
150
300
Full
Learning Rate (live)
1e-4
5e-4
1e-3
3e-3
1e-2
↺ Apply & Restart
1
Input Space
3D structural descriptors (fixed)
⟳ rotating
2
Latent Space — Encoder
2D codes from the actual encoder forward pass
3
Reconstructed Output
Ghost = input · dots = reconstruction · color = error magnitude
⟳ rotating
Training Loss Curve
Real MSE loss, computed from an actual forward pass at each step
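The plotted quantity is plain mean-squared error over the current batch. A one-line sketch (variable names are illustrative, not the page's):

```python
import numpy as np

def mse(x, x_hat):
    # Mean of squared per-coordinate errors over the whole batch.
    return np.mean((x_hat - x) ** 2)

x = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
x_hat = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 1.0]])
print(mse(x, x_hat))  # one coordinate off by 1 across 6 values -> 1/6
```

Logging this value once per optimizer step is what produces the descending curve above.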