β-VAE loss
Balances reconstruction quality and latent regularisation through the β parameter.
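As a minimal sketch of how this trade-off is usually computed (the function name and Gaussian-posterior assumption are ours, not taken from the project):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Illustrative beta-VAE objective.

    recon: mean squared reconstruction error.
    kl:    closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    beta > 1 trades reconstruction fidelity for a more disentangled latent code.
    """
    recon = np.mean((x - x_recon) ** 2)
    kl = -0.5 * np.mean(np.sum(1 + log_var - mu ** 2 - np.exp(log_var), axis=1))
    return recon + beta * kl

# Sanity check: a perfect reconstruction with a standard-normal posterior gives zero loss.
x = np.zeros((2, 4))
mu = np.zeros((2, 3))
log_var = np.zeros((2, 3))
print(beta_vae_loss(x, x, mu, log_var))  # -> 0.0
```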
Core method
A mixture-of-encoders β-VAE framework for learning interpretable, traversable latent spaces in heterogeneous scientific data.
Motivation
A single encoder can force diverse scientific data through one representation pathway. This may be limiting when the data spans multiple regimes, such as healthy versus glaucomatous retinal structure, or developing versus non-developing atmospheric systems.
MixtureBetaVAE uses multiple encoders with a shared decoder so that different latent mappings can cooperate while still producing a common reconstruction space. The aim is to support both predictive performance and interpretability.
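The multiple-encoders / shared-decoder idea can be sketched as follows. All shapes, the linear maps, and the softmax gate are illustrative stand-ins, not the project's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 8-dim input, 3-dim latent, 2 encoder components.
D, Z, K = 8, 3, 2
enc_weights = [rng.standard_normal((D, Z)) * 0.1 for _ in range(K)]  # one encoder per component
dec_weight = rng.standard_normal((Z, D)) * 0.1                       # single shared decoder
gate_weight = rng.standard_normal((D, K)) * 0.1                      # gating network over components

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    # Each encoder proposes a latent code; a per-sample gate mixes the proposals,
    # and the single shared decoder maps the mixed code back to data space.
    gates = softmax(x @ gate_weight)                    # (N, K)
    codes = np.stack([x @ W for W in enc_weights], 1)   # (N, K, Z)
    z = (gates[..., None] * codes).sum(axis=1)          # (N, Z) mixed latent
    return z @ dec_weight                               # common reconstruction space

x = rng.standard_normal((4, D))
print(forward(x).shape)  # (4, 8)
```

Because the decoder is shared, every encoder component must map into a latent space the same decoder can invert, which is what ties the components into one common representation.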
Architecture
Objective
Balances reconstruction quality and latent regularisation through the β parameter.
Encourages encoder components to cooperate in a shared latent representation space.
Optional classification head aligns latent structure with known labels or states.
Optional discriminator and feature matching improve local realism in reconstructions.
Specialised reconstruction weighting can emphasise scientifically important regions.
Optional centre loss can encourage compact class-aware latent regions.
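The optional terms above combine into one training objective. A minimal sketch of that assembly, with illustrative lambda weights of our own choosing:

```python
import numpy as np

def weighted_recon(x, x_recon, region_weight):
    """Region-weighted reconstruction error: up-weight scientifically important pixels."""
    return np.mean(region_weight * (x - x_recon) ** 2)

def centre_loss(z, labels, centres):
    """Pull each latent code toward its class centre, encouraging compact class-aware regions."""
    return np.mean(np.sum((z - centres[labels]) ** 2, axis=1))

def total_loss(recon, kl, beta=4.0, cls=0.0, centre=0.0, lam_cls=1.0, lam_centre=0.1):
    # Weighted sum of the listed components; optional terms default to zero.
    return recon + beta * kl + lam_cls * cls + lam_centre * centre

# Toy numbers only, to show how the terms combine.
print(total_loss(recon=0.5, kl=0.1, cls=0.2, centre=0.3))  # ~ 1.13
```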
Interactive placeholder
This demo is currently conceptual. Later it can be connected to actual generated outputs.
The latent point is moving through an intermediate region. Later, this can display real RNFL/GCIPL reconstructions or cyclone tensor frames.
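Moving a latent point through an intermediate region amounts to decoding along a path between two latent anchors. A hedged sketch, with a stand-in linear decoder in place of the real model:

```python
import numpy as np

def traverse(z_start, z_end, decoder, steps=5):
    """Linearly interpolate between two latent anchors and decode each point.

    decoder is any callable z -> reconstruction; the real model's decoder
    would slot in here.
    """
    ts = np.linspace(0.0, 1.0, steps)
    return [decoder((1 - t) * z_start + t * z_end) for t in ts]

# Stand-in decoder: a fixed linear map from a 3-dim latent to an 8-dim output.
rng = np.random.default_rng(1)
W = rng.standard_normal((3, 8))
frames = traverse(np.zeros(3), np.ones(3), lambda z: z @ W)
print(len(frames), frames[0].shape)  # 5 (8,)
```

For the demo, each decoded frame would be rendered as an RNFL/GCIPL reconstruction or a cyclone tensor frame.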
Applications
Explore how the same representation-learning idea is used for retinal imaging and tropical meteorology.