Project · Medical imaging

OCT Processing Pipeline

A placeholder project page for preparing optical coherence tomography (OCT) derived retinal nerve fibre layer (RNFL) and ganglion cell–inner plexiform layer (GCIPL) maps for glaucoma-focused representation learning.

Summary

Why preprocessing matters.

OCT-derived retinal maps are not immediately model-ready. They require consistent file discovery, class membership handling, visit pairing, anatomical alignment, normalisation, masking, and quality checks.

This project page is a placeholder for documenting the practical workflow that transforms raw or semi-processed OCT-derived maps into clean input tensors for MixtureBetaVAE-style modelling.

Later, this page can include real examples of RNFL and GCIPL maps, pairing statistics, cohort summaries, preprocessing scripts, and reconstruction examples.

Workflow modules

From file discovery to model input.

01

Subject lists

Load normal and glaucoma subject lists for macular and optic-disc scans.
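A minimal sketch of this step, assuming plain-text lists with one subject ID per line; the file layout and the "normal"/"glaucoma" label names are illustrative assumptions, not the project's actual format.

```python
from pathlib import Path

def load_subject_lists(normal_path: str, glaucoma_path: str) -> dict:
    """Map each subject ID to its class label, flagging IDs that appear
    in both lists (one-ID-per-line layout is an assumption)."""
    labels = {}
    for path, label in [(normal_path, "normal"), (glaucoma_path, "glaucoma")]:
        for line in Path(path).read_text().splitlines():
            sid = line.strip()
            if not sid:
                continue  # skip blank lines
            if sid in labels and labels[sid] != label:
                raise ValueError(f"Subject {sid} listed in both classes")
            labels[sid] = label
    return labels
```

The same loader can be called once per scan location (macular and optic disc) if the lists are kept separately.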

02

File parsing

Recover patient ID, scan type, date, eye, serial number, and map type from filenames.
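Metadata recovery of this kind is typically a single regular expression over the filename. The convention below (field order, separators, token values) is a hypothetical example for illustration, not the dataset's real naming scheme.

```python
import re
from typing import Optional

# Hypothetical convention (assumption):
#   <patientID>_<scantype>_<YYYYMMDD>_<eye>_<serial>_<maptype>.png
#   e.g. "P0123_ONH_20210415_OD_001_RNFL.png"
FILENAME_RE = re.compile(
    r"(?P<patient_id>P\d+)_"
    r"(?P<scan_type>ONH|MAC)_"
    r"(?P<date>\d{8})_"
    r"(?P<eye>OD|OS)_"
    r"(?P<serial>\d+)_"
    r"(?P<map_type>RNFL|GCIPL)\.png$"
)

def parse_filename(name: str) -> Optional[dict]:
    """Return the metadata fields as a dict, or None if the name
    does not match the expected convention."""
    m = FILENAME_RE.match(name)
    return m.groupdict() if m else None
```

Returning None for non-matching names lets the discovery step log and skip unexpected files instead of crashing mid-scan.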

03

Visit pairing

Pair optic nerve head (ONH) RNFL and macular GCIPL maps by exact visit-date matching, falling back to the nearest date within a tolerance window when no exact match exists.
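One way to sketch exact-then-fallback pairing, assuming visits are represented as datetime.date values and a 30-day fallback window; both the window size and the greedy nearest-date strategy are illustrative assumptions.

```python
def pair_visits(onh_dates, mac_dates, max_gap_days=30):
    """Pair ONH and macular visit dates: exact matches first, then the
    nearest remaining macular visit within max_gap_days (assumed default).
    Each macular visit is consumed at most once."""
    pairs = []
    unused = set(mac_dates)
    for d_onh in sorted(onh_dates):
        if d_onh in unused:  # exact visit-date match
            pairs.append((d_onh, d_onh))
            unused.remove(d_onh)
            continue
        # fallback: nearest macular visit inside the tolerance window
        candidates = [d for d in unused if abs((d - d_onh).days) <= max_gap_days]
        if candidates:
            best = min(candidates, key=lambda d: abs((d - d_onh).days))
            pairs.append((d_onh, best))
            unused.remove(best)
    return pairs
```

Keeping exact and fallback matches separately tagged in a real manifest makes it easy to report how much of the cohort depends on the relaxed criterion.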

04

Anatomical alignment

Flip right-eye maps horizontally so anatomy is consistently aligned across eyes.
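A minimal sketch of the flip, using nested lists in place of image tensors. Treating the left eye (OS) as the reference orientation is an assumption; the pipeline could equally flip OS maps to match OD.

```python
def align_map(thickness_map, eye):
    """Horizontally flip right-eye (OD) maps so every map shares the
    left-eye (OS) orientation (OS-as-reference is an assumption).

    thickness_map: 2-D list of rows of thickness values.
    """
    if eye == "OD":
        return [list(reversed(row)) for row in thickness_map]
    return [list(row) for row in thickness_map]
```

With array libraries the same operation is a single horizontal flip along the width axis.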

05

Normalisation

Scale thickness values into a stable model range while preserving clinical meaning.
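One way to preserve clinical meaning is a fixed cohort-wide range rather than per-image min-max scaling, so a given normalised value corresponds to the same physical thickness in every subject. The 0-200 µm range below is an illustrative assumption, not a value from the project.

```python
def normalise_thickness(values, t_min=0.0, t_max=200.0):
    """Map thickness in micrometres to [0, 1] using a fixed range
    (0-200 um is an assumed example), clipping out-of-range values.
    A fixed range keeps values comparable across subjects."""
    span = t_max - t_min
    return [min(max((v - t_min) / span, 0.0), 1.0) for v in values]
```

Per-image scaling, by contrast, would erase absolute thinning, which is exactly the glaucoma signal of interest.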

06

Masking and losses

Use masks or region weights to handle optic-disc areas and clinically important structures.
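The idea can be sketched as a weighted reconstruction error: weight 0 excludes a pixel (e.g. inside the optic disc, where thickness is undefined), and weights above 1 emphasise clinically important regions. Flat lists stand in for image tensors, and the weighting scheme is an illustrative assumption.

```python
def weighted_mse(pred, target, weight):
    """Masked / region-weighted squared error. weight=0 removes a pixel
    from the loss; weight>1 upweights it. Normalised by total weight so
    the scale is comparable across different masks."""
    num = sum(w * (p - t) ** 2 for p, t, w in zip(pred, target, weight))
    den = sum(weight)
    return num / den if den else 0.0
```

The same weight map can double as the validity mask used during quality checks.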

Conceptual workflow

From OCT files to paired maps.

OCT Files
Parse Metadata
Pair Visits
Align + Normalise
Model Input
Placeholder note: Replace this section with real diagrams, manifest statistics, and representative paired examples later.
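Until real diagrams exist, the flow above can be summarised as a single toy pass over one visit. All behaviour here is illustrative: the maps are tiny 2-D lists, and the OS-reference flip and 0-200 µm normalisation range are assumptions carried over from the module sketches, not the project's real code.

```python
def prepare_pair(onh_map, mac_map, eye, t_max=200.0):
    """Toy end-to-end pass for one paired visit: align both maps to a
    common orientation, normalise thickness to [0, 1], and stack them
    as a two-channel model input (all parameters are assumptions)."""
    def align(m):
        return [list(reversed(r)) for r in m] if eye == "OD" else [list(r) for r in m]

    def norm(m):
        return [[min(max(v / t_max, 0.0), 1.0) for v in r] for r in m]

    return [norm(align(onh_map)), norm(align(mac_map))]
```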

Placeholder outputs

Artifacts to add later.

Technical reflection

Dataset quality shapes model quality.

Class balance matters.

Paired multi-modal data can introduce severe imbalance, especially when exact visit-date matching is required.

Alignment is part of the model design.

Anatomical orientation decisions affect whether the model learns meaningful structure or avoidable left-right variation.

Loss weighting should reflect the domain.

Reconstruction objectives can be improved by weighting clinically important regions rather than treating all pixels equally.

Related

Connect this workflow to the ophthalmic AI research page.