Research

Interpretable representation learning for complex scientific data.

My research develops generative-discriminative models that support prediction, reconstruction, latent traversal, and scientific reasoning.

Framework

From raw scientific data to interpretable latent spaces.

01

Scientific data

Inputs range from OCT-derived retinal maps to atmospheric tensor sequences built from meteorological data.

02

Representation model

A mixture-of-encoders β-VAE learns latent structure while reconstructing the input.
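As a rough illustration of step 02, here is a minimal PyTorch sketch, assuming a flat vector input. The class name MixtureEncoderBetaVAE, the layer sizes, the softmax gating scheme, and the value of β are all illustrative assumptions, not the exact model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureEncoderBetaVAE(nn.Module):
    """Hypothetical sketch: K encoders pooled by a learned gate, with a
    beta-weighted KL term (the beta-VAE objective)."""

    def __init__(self, x_dim=256, z_dim=16, n_encoders=3, beta=4.0):
        super().__init__()
        self.beta = beta
        # Each encoder maps the input to (mu, log_var) for the latent z.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, 2 * z_dim))
            for _ in range(n_encoders)
        )
        # Gating network assigns a softmax weight to each encoder per input.
        self.gate = nn.Linear(x_dim, n_encoders)
        self.decoder = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))

    def encode(self, x):
        w = torch.softmax(self.gate(x), dim=-1)                # (B, K)
        stats = torch.stack([e(x) for e in self.encoders], 1)  # (B, K, 2*z_dim)
        stats = (w.unsqueeze(-1) * stats).sum(dim=1)           # mixture-weighted pooling
        mu, log_var = stats.chunk(2, dim=-1)
        return mu, log_var

    def forward(self, x):
        mu, log_var = self.encode(x)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        x_hat = self.decoder(z)
        recon = F.mse_loss(x_hat, x, reduction="mean")
        kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
        return x_hat, z, recon + self.beta * kl                # beta-VAE loss
```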

03

Prediction

Supervised heads can align latent representations with clinical or physical labels.
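Continuing the sketch above, a supervised head can be as simple as a classifier on z whose loss is added to the β-VAE objective. The head size, the two-class label set, and the weighting `lam` below are assumptions for illustration only.

```python
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical: a linear head from z_dim=16 latents to 2 label classes
# (e.g. a binary clinical grade).
head = nn.Linear(16, 2)

def supervised_step(model, x, y, lam=1.0):
    """One training step: beta-VAE loss plus a label-alignment term.
    `model` is the MixtureEncoderBetaVAE from the previous sketch."""
    x_hat, z, elbo_loss = model(x)
    logits = head(z)
    sup_loss = F.cross_entropy(logits, y)  # pulls latents toward label structure
    return elbo_loss + lam * sup_loss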

04

Interpretation

Latent traversals and counterfactuals reveal meaningful directions of change.
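A traversal can then be a loop that shifts one coordinate of the posterior mean and decodes the result. The function below is a hypothetical sketch against the model above; the span and step count are arbitrary choices.

```python
import torch

@torch.no_grad()
def latent_traversal(model, x, dim, span=3.0, steps=7):
    """Decode a sweep along one latent axis, holding the others fixed."""
    mu, _ = model.encode(x)                    # start from the posterior mean
    frames = []
    for t in torch.linspace(-span, span, steps):
        z = mu.clone()
        z[:, dim] = z[:, dim] + t              # move along a single latent axis
        frames.append(model.decoder(z))        # decode the shifted code
    return torch.stack(frames)                 # (steps, B, x_dim)
```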

Evaluation philosophy

Accuracy is necessary, but not sufficient.

Scientific models should be evaluated across multiple axes: prediction, fidelity, interpretability, and stability. A code sketch of these checks follows the four criteria below.

AUC

Discrimination

How well does the model separate clinically or physically meaningful states?

RMSE

Reconstruction

Does the model preserve important scientific structure in generated outputs?

z

Latent coherence

Do latent traversals correspond to plausible disease or environmental transitions?

Δ

Counterfactual stability

Are generated changes stable and meaningful under shifts in domain or cohort?
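As a rough sketch, the four axes can be phrased as plain metric functions using NumPy and scikit-learn. AUC and RMSE are standard; the latent-coherence and counterfactual-stability measures below are illustrative stand-ins, since no fixed formula is given for them here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def discrimination(y_true, y_score):
    """AUC: how well scores separate clinically or physically meaningful states."""
    return roc_auc_score(y_true, y_score)

def reconstruction_rmse(x, x_hat):
    """RMSE: fidelity of generated outputs to the original input."""
    return float(np.sqrt(np.mean((x - x_hat) ** 2)))

def latent_coherence(z_path):
    """Mean step size along a traversal path (steps, z_dim); smooth,
    even paths suggest a coherent latent axis. Illustrative measure."""
    return float(np.mean(np.linalg.norm(np.diff(z_path, axis=0), axis=-1)))

def counterfactual_stability(delta_a, delta_b):
    """Cosine agreement of the same counterfactual edit computed in two
    domains or cohorts. Illustrative measure."""
    num = np.sum(delta_a * delta_b)
    den = np.linalg.norm(delta_a) * np.linalg.norm(delta_b) + 1e-12
    return float(num / den)
```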

Projects

See how the research becomes code.

The projects section includes placeholders for research code, HPC pipelines, OCT preprocessing, and future demos.

View projects