Project · Research model
A mixture-of-encoders β-VAE framework for interpretable representation learning across heterogeneous scientific domains.
Summary
MixtureBetaVAE is designed for scientific settings where the goal is not merely to classify examples, but to learn latent representations that can be inspected, traversed, and interpreted.
The model uses multiple encoders and a shared decoder to support heterogeneous data regimes. This is useful when different examples may follow different structural pathways but still need to live in a common latent and generative space.
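As a concrete illustration, the sketch below shows one way such an architecture could look in PyTorch (an assumption here; the project's actual framework, layer sizes, and routing scheme are not specified on this page). The PathwayEncoder name and the per-example pathway index are hypothetical choices made for the example.

```python
# Minimal sketch (PyTorch assumed): several encoder pathways, one shared decoder.
import torch
import torch.nn as nn

class PathwayEncoder(nn.Module):
    """One encoder pathway mapping inputs to Gaussian latent statistics."""
    def __init__(self, in_dim: int, latent_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class MixtureBetaVAE(nn.Module):
    """Multiple encoder pathways sharing a single decoder and latent space."""
    def __init__(self, in_dim: int, latent_dim: int, n_pathways: int):
        super().__init__()
        self.encoders = nn.ModuleList(
            [PathwayEncoder(in_dim, latent_dim) for _ in range(n_pathways)]
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim)
        )

    def forward(self, x, pathway: int):
        # Route the example through its pathway's encoder, then decode from
        # a sample drawn via the reparameterisation trick.
        mu, logvar = self.encoders[pathway](x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)
        return self.decoder(z), mu, logvar
```

Routing examples by a known pathway index is only one option; a learned gating distribution over the encoders could fit the same interface.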
The current website version uses placeholder visuals and descriptions. Later, this page can include training curves, architecture diagrams, ablation summaries, generated outputs, and links to code or papers.
Technical structure
Separate encoder components produce latent statistics for different representation pathways.
A common decoder encourages a unified generative space across encoder components.
A β-weighted rate-distortion objective balances latent regularisation against reconstruction fidelity (sketched after this list).
A classification term aligns latent representations with class labels while preserving generative structure.
An optional discriminator and feature matching can improve local quality in generated outputs.
Latent traversals support counterfactual exploration across clinically or physically meaningful transitions.
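To make the rate-distortion and alignment terms concrete, here is a hedged sketch of a composite training objective, assuming the MixtureBetaVAE sketch above plus a hypothetical classifier_head (for example, a linear layer on the latent mean). The weights beta and lambda_cls are illustrative, and the optional adversarial term is omitted for brevity.

```python
# Illustrative composite loss, assuming the MixtureBetaVAE sketch above.
import torch
import torch.nn.functional as F

def beta_vae_step(model, classifier_head, x, y, pathway, beta=4.0, lambda_cls=1.0):
    """Reconstruction + beta-weighted KL (rate-distortion) + class alignment."""
    recon, mu, logvar = model(x, pathway)

    # Distortion: how well the shared decoder reproduces the input.
    recon_loss = F.mse_loss(recon, x, reduction="mean")

    # Rate: KL term between q(z|x) and a standard normal prior, averaged over
    # dimensions; beta trades reconstruction fidelity against regularisation.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

    # Alignment: encourage the latent mean to be predictive of class labels.
    # classifier_head is a hypothetical module, e.g. nn.Linear(latent_dim, n_classes).
    cls_loss = F.cross_entropy(classifier_head(mu), y)

    return recon_loss + beta * kl + lambda_cls * cls_loss
```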
Placeholder outputs
Placeholder for UMAP, PCA, t-SNE, or learned latent trajectory visualisation.
Placeholder for RNFL/GCIPL input, reconstruction, and traversal comparison (an illustrative traversal sketch appears after these placeholders).
Placeholder for meteorological variable frames or animated tensor outputs.
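The traversal comparison mentioned above could be produced by sweeping a single latent dimension around an encoded example and decoding each point. The sketch below assumes the MixtureBetaVAE sketch from earlier and is purely illustrative; the chosen dimension, span, and step count are arbitrary.

```python
# Illustrative latent traversal, assuming the MixtureBetaVAE sketch above.
import torch

@torch.no_grad()
def traverse(model, x, pathway, dim, span=3.0, steps=7):
    """Decode a sweep along one latent dimension around the encoded input."""
    mu, _ = model.encoders[pathway](x)
    frames = []
    for offset in torch.linspace(-span, span, steps):
        z = mu.clone()
        z[:, dim] = z[:, dim] + offset      # perturb a single latent coordinate
        frames.append(model.decoder(z))
    return torch.stack(frames)              # shape: (steps, batch, in_dim)
```

Whether such a sweep is meaningful depends on the plausibility of the decoded transitions, which is the point made under "What I learned" below.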
What I learned
The practical value of the model depends on whether the learned representation can be used to answer meaningful scientific questions.
Loss design should be treated carefully because improving one objective can weaken another.
Latent traversals only matter if their changes are clinically or physically plausible.