✦ AE · 3→64→64→2→64→64→3 · ELU · MSE

How Autoencoders Work

A real neural network, trained with backpropagation and AdamW, learns to compress points in 3D space into a 2D latent space. Every reset draws fresh random weights, so the convergence path differs on every run. The architecture can be customized below.
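The banner architecture (3→64→64→2→64→64→3, ELU activations, MSE loss, AdamW) can be sketched in a few lines. This is a minimal PyTorch sketch, not the demo's actual source; the learning rate, step count, and training data are assumptions.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Layer sizes follow the 3→64→64→2→64→64→3 banner.
encoder = nn.Sequential(
    nn.Linear(3, 64), nn.ELU(),
    nn.Linear(64, 64), nn.ELU(),
    nn.Linear(64, 2),                  # 2D latent codes
)
decoder = nn.Sequential(
    nn.Linear(2, 64), nn.ELU(),
    nn.Linear(64, 64), nn.ELU(),
    nn.Linear(64, 3),                  # back to 3D
)
model = nn.Sequential(encoder, decoder)

opt = torch.optim.AdamW(model.parameters(), lr=1e-3)  # lr is an assumption
x = torch.rand(256, 3)                 # stand-in for the demo's 3D point cloud

losses = []
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)  # reconstruction (MSE) loss
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

A reset simply re-initializes these layers with new random weights, which is why the loss curve looks different on every run.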

[Interactive controls — live readouts: Recon Loss, Grad Steps, KL × β; training slider runs from Step 0 (random weights) to Step 3000 (converged)]
⚙ Model Configuration
1. Input Space — 3D structural descriptors (fixed); rotating view
2. Latent Space — 2D codes from the encoder's actual forward pass
3. Reconstructed Output — ghost points show the input, dots the reconstruction; color encodes error magnitude (low error → high error); rotating view
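The error coloring in panel 3 can be reproduced from raw arrays: the per-point error magnitude is the Euclidean distance between each input point and its reconstruction, normalized onto the low-to-high color ramp. A NumPy sketch; the function names are illustrative, not from the demo.

```python
import numpy as np

def error_magnitudes(inputs: np.ndarray, recons: np.ndarray) -> np.ndarray:
    """Per-point reconstruction error: Euclidean distance in 3D.

    inputs, recons: arrays of shape (n, 3). Returns shape (n,).
    """
    return np.linalg.norm(inputs - recons, axis=1)

def to_color_scale(err: np.ndarray) -> np.ndarray:
    """Normalize errors to [0, 1] for a low-error → high-error color ramp."""
    span = err.max() - err.min()
    return (err - err.min()) / span if span > 0 else np.zeros_like(err)
```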
Training Loss Curve
Real MSE loss, computed from the network's forward pass at each step