Research model
MixtureBetaVAE
Mixture-of-encoders β-VAE framework for interpretable representation learning across scientific domains.
View project →

Machine Learning Researcher · Scientific Computing · Interpretable AI
I explore how generative representation learning can reveal meaningful structure in complex data — from retinal disease progression to tropical cyclone genesis.
Research snapshot
My research asks how machine learning models can help us reason about complex transitions, instead of simply assigning labels.
Interpretable modelling of retinal structure using OCT-derived RNFL and GCIPL thickness maps.
Open research area →

Spatio-temporal representation learning for tropical cyclone genesis using ERA5 and OWZ data.
Open research area →

Core method
A mixture-of-encoders β-VAE with shared decoder, designed for interpretable latent spaces, counterfactual traversal, and scientific reasoning.
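The core idea can be sketched in a few lines: each data domain gets its own encoder producing a Gaussian posterior, all domains share one decoder, and the KL term is scaled by β to encourage disentangled, interpretable latents. The NumPy sketch below is illustrative only; the dimensions, the linear encoders/decoder, and the function names are assumptions for demonstration, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(input_dim, latent_dim):
    # Each domain gets its own encoder producing (mu, log_var).
    W = rng.normal(0, 0.1, (input_dim, 2 * latent_dim))
    return lambda x: np.split(x @ W, 2, axis=-1)

def mixture_beta_vae_loss(x, domain, encoders, decoder_W, beta=4.0):
    mu, log_var = encoders[domain](x)                       # domain-specific posterior
    z = mu + rng.normal(size=mu.shape) * np.exp(0.5 * log_var)  # reparameterise
    recon = z @ decoder_W                                   # shared decoder for all domains
    recon_loss = np.mean(np.sum((recon - x) ** 2, axis=-1))
    kl = np.mean(-0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1))
    return recon_loss + beta * kl                           # beta > 1 pressures disentanglement

# Two domains (e.g. retinal maps, cyclone tensors) share one decoder.
input_dim, latent_dim = 16, 4
encoders = [make_encoder(input_dim, latent_dim) for _ in range(2)]
decoder_W = rng.normal(0, 0.1, (latent_dim, input_dim))
x = rng.normal(size=(8, input_dim))
loss = mixture_beta_vae_loss(x, domain=0, encoders=encoders, decoder_W=decoder_W)
```

Because the decoder is shared, latents from any encoder decode into the same space, which is what makes cross-domain traversal and comparison possible.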
View method →

Interactive preview
This placeholder demo shows the kind of interface that can later be connected to real model outputs.
The latent point is moving through an intermediate region. Later, this can display real RNFL/GCIPL reconstructions or cyclone tensor frames.
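Moving a latent point through an intermediate region amounts to decoding a straight-line path between two latent codes. A minimal sketch, assuming a generic `decoder` callable; the toy linear decoder and all dimensions here are hypothetical, stand-ins for the real model outputs the demo would eventually show.

```python
import numpy as np

def latent_traversal(z_start, z_end, decoder, steps=5):
    """Decode points along a straight line between two latent codes,
    giving a counterfactual-style sweep through an intermediate region."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [decoder((1 - a) * z_start + a * z_end) for a in alphas]

# Toy decoder: a fixed linear map from a 4-d latent to a 16-d "image".
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (4, 16))
decode = lambda z: z @ W

frames = latent_traversal(rng.normal(size=4), rng.normal(size=4), decode)
```

Each decoded frame is one step of the sweep; with a trained model, these frames would be the RNFL/GCIPL reconstructions or cyclone tensor slices shown in the preview.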
Featured projects
Research model
Mixture-of-encoders β-VAE framework for interpretable representation learning across scientific domains.
View project →

HPC pipeline
OzSTAR-ready workflow for constructing spatio-temporal cyclone seed tensors from OWZ and ERA5 data.
View project →

Medical imaging
Preprocessing and modelling pipeline for RNFL and GCIPL maps used in glaucoma representation learning.
View project →

Path
Early curiosity, computing, and problem solving.
Academic growth and research training.
Representation learning, generative models, and scientific data.
Interpretable AI for ophthalmic imaging and tropical meteorology.
Writing & notes
Let’s connect
I’m always open to thoughtful conversations around machine learning, scientific computing, research workflows, and ideas that connect AI with real-world data.
Get in touch