
FOCUS: Full-Process OCT Clinical Utility

Updated 10 February 2026
  • FOCUS is a modular, end-to-end OCT analysis system that automates processes from image acquisition and quality assessment to anatomical segmentation and report generation.
  • It leverages foundation model-driven deep learning and advanced segmentation algorithms to accurately extract clinical features and support multi-disease classification.
  • The system integrates standards-compliant data workflows and open-source tools, ensuring reproducible, efficient diagnostics in both ophthalmology and intraoperative neurosurgery.

The Full-process OCT-based Clinical Utility System (FOCUS) is a modular, end-to-end pipeline for automated quantitative analysis and diagnostic decision support in optical coherence tomography (OCT) imaging. FOCUS enables fully automatic processing from raw OCT image acquisition through image quality assessment, anatomical and disease-specific segmentation, feature quantification, and structured report generation. Implementations span ophthalmology and intraoperative neurosurgery, offering validated workflows for both retinal disease screening and research-scale oculomic phenotyping. FOCUS leverages foundation model-driven deep learning, probabilistic or region-based anatomical models, and standards-compliant data integration to address the demand for reproducible, scalable, and clinically actionable OCT image analysis (Zhang et al., 3 Feb 2026, Burke, 10 Feb 2025, Burke et al., 2024).

1. System Architecture and Data Flow

FOCUS is structured into a sequence of automated modules reflecting the clinical workflow from image acquisition to report delivery. Core design principles include modularity, cross-device compatibility, reproducibility, and standards-based interoperability.

The major workflow stages are:

  1. Data Ingestion: Automatic import of raw OCT (and optionally SLO) files in proprietary or DICOM format. Image metadata, pixel-to-micron scaling, and quality indicators are extracted for each scan.
  2. Pre-processing: Includes column-wise shadow compensation, denoising (median filtering), and contrast enhancement (CLAHE). Cropping to anatomical region-of-interest (ROI) is performed using pretrained segmentation models (Burke, 10 Feb 2025, Burke et al., 2024).
  3. Segmentation: Deep learning models (e.g., Choroidalyzer, DeepGPET, or VisionFM-variants) segment choroidal/retinal boundaries, vessels, or pathology, generating probabilistic maps and structure-specific masks (Burke, 10 Feb 2025, Burke et al., 2024, Zhang et al., 3 Feb 2026).
  4. Clinical Feature Extraction: For each defined region (ETDRS grid, peripapillary sectors, etc.), features such as choroidal thickness, choroid vascularity index (CVI), vessel area, and pathological markers are computed using established formulas.
  5. Diagnostic Classification (if applicable): In disease-focused pipelines (retinal disease detection), FOCUS integrates foundation model-driven AI (e.g., VisionFM) with task-specific prompt decoders and applies adaptive aggregation to synthesize slice-level evidence into patient-level predictions (Zhang et al., 3 Feb 2026).
  6. Report Generation: Outputs structured DICOM- and HL7-compliant reports, segmentation overlays, QC metrics, and tabulated results. Integration with clinical information systems is enabled via PACS/EHR interfaces (Burke et al., 2024).
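
The shadow compensation in the pre-processing stage (step 2) can be sketched as a column-wise intensity rescaling. This is a minimal numpy illustration of the idea, not the exact algorithm used by FOCUS:

```python
import numpy as np

def compensate_shadows(bscan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Column-wise shadow compensation for a grayscale B-scan in [0, 1].

    Vessel shadows darken whole A-scan columns, so each column is rescaled
    by the ratio of the global mean intensity to its own column mean.
    """
    col_means = bscan.mean(axis=0, keepdims=True)   # one mean per A-scan column
    gain = bscan.mean() / (col_means + eps)         # brighten shadowed columns
    return np.clip(bscan * gain, 0.0, 1.0)

# Synthetic B-scan with one artificially shadowed column
bscan = np.full((8, 4), 0.5)
bscan[:, 2] *= 0.4
out = compensate_shadows(bscan)
```

After compensation, the column means are approximately equalized; denoising (median filtering) and CLAHE contrast enhancement would follow in the real pipeline.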

A schematic of the pipeline is as follows:

| Stage | Input | Output |
| --- | --- | --- |
| Data Ingestion | Raw OCT/SLO files | Image arrays, metadata |
| Pre-processing | Image arrays | Enhanced B-scans |
| Segmentation | Enhanced B-scans | Probability maps, binary masks |
| Feature Extraction | Segmentation masks | Quantitative metrics (thickness, CVI) |
| Report Generation | Metrics, overlays | DICOM SR, PDFs, HL7/JSON |
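
The staged data flow above can be expressed as a simple function pipeline. The stage names follow the schematic; the function bodies here are toy placeholder stubs, not the actual FOCUS modules:

```python
from typing import Any, Callable, Dict, List

# Each stage maps a shared context dict to an updated context; the real
# modules would perform DICOM parsing, inference, and report writing here.
def ingest(ctx: Dict[str, Any]) -> Dict[str, Any]:
    ctx["image"] = ctx.pop("raw_file")               # stand-in for file parsing
    return ctx

def preprocess(ctx): ctx["enhanced"] = ctx["image"]; return ctx
def segment(ctx): ctx["mask"] = [px > 0 for px in ctx["enhanced"]]; return ctx
def extract_features(ctx): ctx["thickness"] = sum(ctx["mask"]); return ctx
def report(ctx): ctx["report"] = {"thickness_px": ctx["thickness"]}; return ctx

PIPELINE: List[Callable] = [ingest, preprocess, segment, extract_features, report]

def run(raw_file) -> Dict[str, Any]:
    ctx: Dict[str, Any] = {"raw_file": raw_file}
    for stage in PIPELINE:
        ctx = stage(ctx)
    return ctx

result = run([0, 1, 2, 0, 3])   # toy "image": a list of pixel values
```

The modular list-of-stages design mirrors the system's stated principle that stages can be swapped (e.g., a different segmentation model) without changing the surrounding workflow.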

2. Deep Learning Models and Key Algorithms

Ophthalmic FOCUS

  • Image Quality Assessment: EfficientNetV2-S, fine-tuned for poor-vs-usable quality classification; F1=99.01%, AUC=0.9972 (Zhang et al., 3 Feb 2026).
  • Retinal Pathology Detection and Multi-disease Classification: Vision Foundation Model (VisionFM), a large transformer backbone pretrained on multimodal ophthalmic data, supports slice-level abnormality detection (binary: normal vs abnormal) and a 9-class multi-disease softmax (AMD, CNV, DR, ERM, MH, ME, RP, CSC, normal). A prompt decoder enables feature adaptation to each task. Final classification per patient is achieved by the Unified Adaptive Aggregation Classifier (UAAC), which outputs the class probability:

S_{p,c} = \sigma\left(\sum_{i=1}^{N} w_{i,c}\, s_{i,c} + b_c\right), \quad c \in \{1, \ldots, 9\}

where $w_{i,c}$ and $b_c$ are optimized with a patient-level cross-entropy loss (Zhang et al., 3 Feb 2026).
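
The UAAC aggregation can be sketched in numpy as a per-class sigmoid over weighted slice scores. The weights and scores below are illustrative stand-ins for the learned parameters:

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def uaac_aggregate(slice_scores: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Patient-level class probabilities S_{p,c} from per-slice scores.

    slice_scores: (N, C) per-slice class scores for N B-scans, C classes
    w:            (N, C) learned per-slice, per-class weights
    b:            (C,)   per-class bias
    """
    return sigmoid((w * slice_scores).sum(axis=0) + b)

rng = np.random.default_rng(0)
N, C = 5, 9
s = rng.random((N, C))
w = np.ones((N, C)) / N          # illustrative uniform weights
b = np.zeros(C)
S = uaac_aggregate(s, w, b)
predicted_class = int(np.argmax(S))  # patient assigned the class with maximal output
```

In the trained system, $w_{i,c}$ and $b_c$ would come from optimizing the patient-level cross-entropy loss rather than being fixed to uniform values.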

  • Anatomical Segmentation: Choroidalyzer (U-Net style, multitask with heads for region, vessel, fovea; ResNet-34 encoder; Dice+BCE loss). For peripapillary regions, DeepGPET (U-Net, MobileNetV3 encoder) is used (Burke, 10 Feb 2025, Burke et al., 2024).
  • Vessel Segmentation: Multi-scale Median Cut Quantisation (MMCQ) is a non-deep-learning algorithm prioritizing precision in vessel detection (Burke, 10 Feb 2025).

Intraoperative and Systemic Utility

  • FACT-ROCT (Robotic OCT in Neurosurgery): Integrates swept-source OCT with robotic scanning, real-time adaptive focusing, and on-the-fly volumetric reconstruction. Tumor grading uses the standard deviation of the attenuation coefficient ($\sigma_\mu$) and vascular metrics (VND, DM, VTV; see below). Decision-support overlays and real-time measurements are directly available to the surgical team (He et al., 2024).

3. Feature Computation and Quantitative Metrics

FOCUS pipelines compute a comprehensive suite of anatomical and disease-relevant metrics:

  • Choroidal Thickness and Area:

CT(x) = z_{\text{RPE-choroid}}(x) - z_{\text{choroid-sclera}}(x)

For the subfoveal location $x_0$, $SFCT = CT(x_0)$. Mean and regional thicknesses are averaged over A-scan indices.
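
Given the two segmented boundaries as per-A-scan depth arrays, the thickness metrics reduce to elementwise differences. A minimal numpy sketch, with an illustrative fovea index and pixel scaling:

```python
import numpy as np

def choroid_thickness(z_rpe_choroid: np.ndarray, z_choroid_sclera: np.ndarray,
                      microns_per_px: float = 1.0) -> np.ndarray:
    """CT(x): axial distance between the RPE-choroid and choroid-sclera
    boundaries at each A-scan, converted to microns."""
    return np.abs(np.asarray(z_rpe_choroid) - np.asarray(z_choroid_sclera)) * microns_per_px

# Toy boundary depths (pixels) over 5 A-scans
z_rpe = np.array([100.0, 101.0, 102.0, 101.0, 100.0])
z_sclera = np.array([150.0, 152.0, 155.0, 152.0, 150.0])

ct = choroid_thickness(z_rpe, z_sclera, microns_per_px=3.9)
x0 = 2                       # assumed fovea A-scan index
sfct = float(ct[x0])         # subfoveal choroidal thickness
mean_ct = float(ct.mean())   # mean thickness over the region
```

Regional summaries (e.g., per ETDRS subfield) would simply restrict the averaging to the A-scan indices belonging to each subfield.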

  • Choroid Vascularity Index (CVI):

CVI = \frac{A_{\text{vessel}}}{A_{\text{choroid}}}

In volumes, vessel and choroid regions are summed in 3D and averaged per ETDRS region (Burke, 10 Feb 2025).
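
Computed from binary segmentation masks, the CVI is just a ratio of pixel (or voxel) counts; the same function covers the 2D B-scan and summed 3D volume cases:

```python
import numpy as np

def cvi(vessel_mask: np.ndarray, choroid_mask: np.ndarray) -> float:
    """Choroid vascularity index: vessel area over total choroid area.
    Masks are boolean arrays (2D for a B-scan, 3D for a volume)."""
    a_choroid = choroid_mask.sum()
    if a_choroid == 0:
        return float("nan")
    return float(vessel_mask.sum() / a_choroid)

# Toy 4x4 B-scan: 8 choroid pixels, 2 of which are vessel
choroid = np.zeros((4, 4), dtype=bool)
choroid[1:3, :] = True
vessel = np.zeros_like(choroid)
vessel[1, :2] = True
print(cvi(vessel, choroid))   # 0.25
```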

  • Ophthalmic Disease Classification: Patient is assigned the class with maximal UAAC output.
  • Intraoperative Tumor Grading (FACT-ROCT): The standard deviation of the attenuation coefficient ($\sigma_\mu$) serves as a grading marker; the threshold $\sigma_{\mu,\text{th}} = 0.75\ \text{mm}^{-1}$ yields AUC $\approx 0.92$ (He et al., 2024).
  • Vascular Metrics: Vascular Node Density (VND), Tortuosity Index (DM), and Trajectory Variability (VTV).
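
The attenuation-based grading rule reduces to thresholding the standard deviation of the per-voxel attenuation coefficients. The sketch below assumes that higher heterogeneity ($\sigma_\mu$ above the 0.75 mm⁻¹ threshold) indicates higher grade; the attenuation maps are synthetic:

```python
import numpy as np

SIGMA_MU_THRESHOLD = 0.75  # mm^-1, decision threshold from the text

def grade_from_attenuation(mu_map: np.ndarray) -> str:
    """Classify a tissue region as high- or low-grade from the standard
    deviation of its attenuation coefficients (mm^-1)."""
    sigma_mu = float(np.std(mu_map))
    return "high-grade" if sigma_mu > SIGMA_MU_THRESHOLD else "low-grade"

rng = np.random.default_rng(1)
homogeneous = rng.normal(2.0, 0.3, size=1000)    # low attenuation heterogeneity
heterogeneous = rng.normal(2.0, 1.2, size=1000)  # high attenuation heterogeneity
```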

4. Validation, Performance, and Reproducibility

  • Retinal Disease FOCUS (Ophthalmology):
    • Trained on 3,300 patients (40,672 slices), externally validated on 1,345 patients (18,498 slices) across centers and multiple devices.
    • Patient-level F1 (internal): 99.01% (quality), 97.46% (abnormal detection), 94.39% (multi-disease diagnosis).
    • Cross-center F1 for diagnosis: 90.22%–95.24%.
    • Human-machine comparison: FOCUS F1 93.49% vs clinicians 91.35%, runtime <1 s/volume (vs 2–5 min manual) (Zhang et al., 3 Feb 2026).
  • Choroidal Analysis (Choroidalyzer/MMCQ/DeepGPET):
    • Segmentation: Region AUC 0.9998 (Choroidalyzer), Dice 0.9789; vessel Dice 0.8817.
    • Feature MAE for thickness <12 µm, volume CVI MAE ≈0.0271, Pearson r ≥0.97.
    • Measurement noise λ <5–10% (thickness), <15–25% (CVI), well below clinical effect sizes (Burke, 10 Feb 2025, Burke et al., 2024).
  • Intraoperative FACT-ROCT:
    • $\sigma_\mu$-based tumor grading accuracy >90%, with imaging speed enabling ≤2 min for 70×13×10 mm³ volumes (He et al., 2024).
    • No imaging-related adverse events recorded.

5. System Integration, Reporting, and Clinical Workflow

FOCUS supports seamless integration into clinical and research environments:

  • Automated File Watchers: Ingest OCT/SLO data directly from acquisition devices, standardize input, and extract metadata (Burke et al., 2024).
  • Quality Control: Automatic SNR and overlap checks flag low-quality or off-center scans. Segment overlays and traffic-light icons in generated reports highlight outlier changes (>10 µm thickness, >0.05 CVI) (Burke et al., 2024).
  • Report Generation: Outputs include structured DICOM SR, PDF summaries, HL7 ORU for EHR flowsheets, and JSON/CSV tables for research.
  • Interactive Dashboards: Provide technical staff with real-time overlays, thickness plots, and QC lists. Clinician-facing reports embed overlay figures and summary metrics aligned to normative age-matched references (Burke et al., 2024).
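
The outlier-change flagging described above (>10 µm thickness change, >0.05 CVI change) can be sketched as a comparison against the previous visit's metrics. The dictionary layout is illustrative, not the actual report schema:

```python
THICKNESS_DELTA_UM = 10.0   # flag thresholds from the reporting rules above
CVI_DELTA = 0.05

def traffic_light(prev: dict, curr: dict) -> dict:
    """Return a per-metric flag: 'red' when the visit-to-visit change
    exceeds the reporting threshold, 'green' otherwise."""
    flags = {}
    d_thickness = abs(curr["thickness_um"] - prev["thickness_um"])
    d_cvi = abs(curr["cvi"] - prev["cvi"])
    flags["thickness"] = "red" if d_thickness > THICKNESS_DELTA_UM else "green"
    flags["cvi"] = "red" if d_cvi > CVI_DELTA else "green"
    return flags

prev = {"thickness_um": 250.0, "cvi": 0.62}
curr = {"thickness_um": 236.0, "cvi": 0.60}   # 14 µm thinning, small CVI change
flags = traffic_light(prev, curr)              # thickness flagged, CVI not
```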

6. Limitations, Challenges, and Future Directions

  • Domain Generalization: Most validation is confined to retrospective data from Chinese centers or device-specific cohorts, necessitating international and cross-device generalization studies (Zhang et al., 3 Feb 2026).
  • Disease Prevalence and Sample Bias: Higher prevalence of disease in development datasets may not reflect true screening populations; prospective, population-based validation is needed.
  • Technical Constraints: For FACT-ROCT, OCT penetration is limited to 1–2 mm; dynamic focusing and tracking remain areas for future development (He et al., 2024).
  • Scalability: FOCUS runs efficiently on commodity hardware without a GPU, but full multicenter deployment and support for high-throughput environments may benefit from containerization (Docker), web-based GUIs, and enhanced cloud support (Burke et al., 2024).
  • Extensions: Future work includes LLM integration for explainable reporting, privacy-preserving pipelines, machine learning–augmented segmentation/classification in surgery, and prospective studies in tele-ophthalmology and primary care (Zhang et al., 3 Feb 2026, He et al., 2024).

7. Open-Source Implementation and Adoption

FOCUS and associated tools (OCTolyzer, Choroidalyzer, DeepGPET, MMCQ) are implemented in Python using PyTorch, with command-line utilities and optional GUI frontends. The codebase supports Windows and Linux, deployments with minimal hardware, and batch or on-demand workflows. Docker containers facilitate easy deployment. The toolkit is available as open-source software, supporting both reproducible research and clinical translation (Burke, 10 Feb 2025, Burke et al., 2024).

| Tool | Functionality | Link |
| --- | --- | --- |
| OCTolyzer | Full FOCUS pipeline integration | https://github.com/jaburke166/OCTolyzer |
| Choroidalyzer | Region/vessel/fovea segmentation | https://github.com/justinengelmann/choroidalyzer |
| DeepGPET | ROI segmentation | https://github.com/jaburke166/deepgpet |
| MMCQ | Vessel segmentation (precision-optimized) | https://github.com/jaburke166/mmcq |

These resources enable standardized, automated OCT analysis for a variety of research, clinical, and population health applications.

References:

  • Zhang et al., 3 Feb 2026
  • Burke, 10 Feb 2025
  • He et al., 2024
  • Burke et al., 2024
