
MedicoSAM: Adapting SAM for Medical Imaging

Updated 10 February 2026
  • MedicoSAM is a comprehensive framework that adapts the SAM paradigm to medical imaging by leveraging domain-specific fine-tuning and prompt optimization.
  • It employs multiple adaptation regimes—including full fine-tuning, parameter-efficient methods, and specialized prompt propagation—to boost segmentation accuracy and efficiency.
  • The toolkit integrates seamlessly with clinical workflows through interactive platforms, automating prompt generation and significantly reducing annotation time.

MedicoSAM refers collectively to a set of foundational methodologies, model variants, and practical toolkits designed to adapt the Segment Anything Model (SAM) paradigm to the domain of medical image segmentation. The central premise is to leverage large-scale vision transformers, prompt-driven mask generation, and domain-adapted fine-tuning procedures to support diverse modalities, annotation regimes, and clinical workflows within medical imaging. MedicoSAM encompasses both concrete architecture variants specifically titled “MedicoSAM” as well as broader concepts inspiring prompt engineering, fine-tuning protocols, lightweight adaptation, and integration into medical annotation platforms (Archit et al., 20 Jan 2025, Lyu et al., 2024, Carvalho et al., 2024, Liu et al., 2024, Jiang et al., 15 Jan 2026).

1. Evolution and Rationale for MedicoSAM

The original SAM was introduced to segment arbitrary objects in natural images without task-specific retraining, using promptable mask proposals (commonly points, boxes, or masks). However, direct application of SAM to medical imaging revealed several shortcomings: domain shift in texture and intensity statistics, annotation scarcity, the need for spatial precision, and the importance of efficient interactive correction and few-shot learning (Archit et al., 20 Jan 2025, Liu et al., 2024). MedicoSAM formulations arose to address these deficits through domain-adapted fine-tuning, specialized prompting, and integration into interactive annotation workflows, as detailed in the following sections.

2. Core Model Architectures and Adaptation Protocols

MedicoSAM models adopt and adapt the canonical encoder–prompt encoder–decoder structure of SAM, introducing three principal adaptation regimes:

  • Full-Parameter Fine-Tuning: Every parameter of the ViT image encoder, prompt encoder, and mask decoder is trained on large medical datasets (e.g., SA-Med2D-20M, 33 multiorgan datasets), yielding universal backbones with robust interactive and semantic segmentation properties across diverse domains (Archit et al., 20 Jan 2025, Jiang et al., 15 Jan 2026).
  • Parameter-Efficient Adaptation: Partial fine-tuning is performed by unfreezing only the mask decoder or by injecting lightweight LoRA modules and/or domain-specific adapters, dramatically reducing computational burden while preserving initialization benefits (Carvalho et al., 2024, Wei et al., 2023).
  • Prompt Specialization and Propagation: Emphasis is placed on sophisticated prompt engineering—automatically extracting representative support examples, propagating masks via registration, and generating prompts (points/boxes/masks) for rapid adaptation in low-data or one-shot settings (Xu et al., 2024, Zhong et al., 23 Mar 2025, Wang et al., 29 Apr 2025).
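The parameter-efficient regime above can be illustrated with the core LoRA computation: the pretrained weight matrix stays frozen while a low-rank residual is learned. A minimal numpy sketch (illustrative only, not tied to any specific MedicoSAM codebase; the shapes and `alpha` scaling follow the standard LoRA formulation):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Forward pass through a linear layer with a LoRA residual.

    W : (out, in)  frozen pretrained weight
    A : (in, r)    trainable down-projection
    B : (out, r)   trainable up-projection (initialized to zeros)

    Computes y = x @ (W + (alpha / r) * B @ A.T).T, i.e. the frozen
    output plus a scaled low-rank correction; only A and B are trained.
    """
    r = A.shape[1]          # low-rank dimension
    scale = alpha / r
    return x @ W.T + scale * (x @ A) @ B.T
```

Because `B` starts at zero, the adapted model exactly reproduces the frozen backbone at initialization, which is what preserves the initialization benefits noted above.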

Additionally, models incorporate either the original prompt encoder or introduce content-modality prompt streams, CLIP-derived semantic embeddings, or fine-grained prior aligners to fuse textual and spatial priors (Lyu et al., 2024, Zhong et al., 23 Mar 2025).

3. Fine-Tuning Schemes, Losses, and Training Protocols

The typical MedicoSAM training protocol consists of:

  • Curating a large-scale, multi-modal dataset of medical images paired with pixel-accurate masks and/or clinical concept text prompts (Archit et al., 20 Jan 2025, Jiang et al., 15 Jan 2026).
  • Synthesizing prompt types during training (boxes, points, mask regions), mimicking realistic annotation flows (including multi-step corrective prompts) (Archit et al., 20 Jan 2025).
  • Adopting domain-customized loss functions, most commonly composite Dice and cross-entropy losses:

L_{Dice} = 1 - \frac{2 \sum_i p_i g_i}{\sum_i p_i + \sum_i g_i}, \qquad L_{seg} = -\sum_{i} m_i \log \hat{m}_i,

where p_i and g_i are the predicted and ground-truth foreground values at pixel i, and m_i, \hat{m}_i are the mask labels and predicted mask probabilities.

Additional regularizers may include IoU score regression, modality-classification, and contrastive objectives (Archit et al., 20 Jan 2025, Lyu et al., 2024).

  • For parameter-efficient fine-tuning, only mask decoder heads are updated, with the ViT and prompt encoders frozen (Carvalho et al., 2024).
  • In one-shot or few-shot settings, techniques such as k-centroid clustering for support set discovery, mask propagation via B-spline registration, and iterative auto-prompting are used to maximize performance from limited labeled data (Xu et al., 2024, Wang et al., 29 Apr 2025).
  • Optimization procedures employ AdamW, with learning rate decay and layer-wise learning rate decay (LLRD) to preserve low-level feature generality in deep ViT stacks (Jiang et al., 15 Jan 2026, Archit et al., 20 Jan 2025).
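The composite Dice plus cross-entropy objective above can be sketched directly from the formulas. A minimal numpy version for binary masks (the exact weighting and smoothing constants vary across the cited works; `eps` here is an illustrative choice):

```python
import numpy as np

def dice_ce_loss(pred, target, eps=1e-6):
    """Composite loss L = L_Dice + L_seg for a binary segmentation mask.

    pred   : foreground probabilities in [0, 1]
    target : binary ground-truth mask (same shape)
    """
    p, g = pred.ravel().astype(float), target.ravel().astype(float)
    # Dice term: 1 - 2|P ∩ G| / (|P| + |G|), smoothed by eps
    dice = 1.0 - (2.0 * (p * g).sum() + eps) / (p.sum() + g.sum() + eps)
    # Binary cross-entropy term, eps-clamped to avoid log(0)
    ce = -np.mean(g * np.log(p + eps) + (1.0 - g) * np.log(1.0 - p + eps))
    return dice + ce
```

A perfect prediction drives both terms to (approximately) zero, while an inverted mask is penalized heavily by the cross-entropy term.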

4. Prompt Engineering, Interaction, and Platform Integration

Prompt engineering is central to MedicoSAM’s flexibility:

  • Interactive Modes: MedicoSAM supports 2D and 3D prompts (points, boxes), prompt propagation across slices, and real-time mask updates, critical for clinical annotation interfaces (Liu et al., 2024).
  • Automated Prompting: Methods automatically generate prompt sets from propagated coarse masks, region centroids, or segmentation boundaries, combined with iterative mask refinement loops (Xu et al., 2024).
  • Semantic and Modality Prompts: Medical LLMs and CLIP-derived embeddings generate text-based prompts with anatomical specificity, further aligned with image features via specialized adapters (Zhong et al., 23 Mar 2025, Lyu et al., 2024).
  • Platform Integration: MedicoSAM models are designed for drop-in compatibility with open-source tools (3D Slicer, Napari), and platforms such as SAMME provide unified APIs for loading arbitrary variants, switching interactive modes, and launching fine-tuning loops via graphical UI or scripts (Liu et al., 2024, Archit et al., 20 Jan 2025).
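The automated prompting strategy described above, deriving point and box prompts from a propagated coarse mask, reduces to simple geometry. A hypothetical helper (the function name and prompt conventions are assumptions for illustration, not an API from the cited tools):

```python
import numpy as np

def prompts_from_mask(mask):
    """Derive SAM-style prompts from a coarse binary mask.

    Returns a point prompt at the foreground centroid (row, col) and a
    tight bounding-box prompt (y0, x0, y1, x1), as used by automated
    prompt-generation loops that seed iterative mask refinement.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None, None                       # empty mask: no prompt
    point = (int(ys.mean()), int(xs.mean()))    # centroid point prompt
    box = (int(ys.min()), int(xs.min()),
           int(ys.max()), int(xs.max()))        # bounding-box prompt
    return point, box
```

In slice-propagation workflows, the mask predicted on slice k would be passed through such a helper to generate the prompts for slice k+1.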

5. Benchmarks, Quantitative Results, and Comparative Analysis

Comprehensive benchmarking of MedicoSAM methods is performed across annotation regimes, datasets, and competitor models. Key findings include:

| Framework / Variant | Key Regime | Dataset(s) | Dice (%) | Notable Results |
|---|---|---|---|---|
| MedicoSAM (Archit et al., 20 Jan 2025) | Full fine-tuning, interactive/semantic | 16+ external datasets | 2D: 0.80–0.90 | +0.05–0.13 Dice over vanilla SAM (interactive) |
| MedicoSAM / SAM-MPA (Xu et al., 2024) | Few-shot, clustering + propagation | Breast US, CXR | US: 74.5; CXR: 94.4 | Outperforms PerSAM and manual prompting |
| MCP-MedSAM (Lyu et al., 2024) | Lightweight, prompt fusion | 11 modalities | 87.5 | Trainable in <24 h on a single A100; SOTA in challenge |
| Fine-tuned SAM (CXR) (Carvalho et al., 2024) | Mask-decoder fine-tuning, point prompts | CXR (Montgomery, Shenzhen) | ≈0.95 | Near U-Net parity with <200 samples |
| SAM3 / MedSAM3 (Jiang et al., 15 Jan 2026) | Text prompts only, full backbone adaptation | 33 datasets, OOD | 77 (internal), 74 (external) | Large OOD gains vs. prompt-only vanilla SAM3 |
| RRL-MedSAM (Wang et al., 29 Apr 2025) | 3D one-shot, knowledge distillation, auto-prompting | OASIS, CT-Lung | 82–94 | Lightweight (5M params) outperforms full-size baselines |

Standard evaluation uses Dice and IoU, augmented in 3D by the Hausdorff distance. MedicoSAM frequently matches or exceeds traditional U-Net/nnU-Net baselines in interactive and low-shot tasks, although for purely semantic segmentation further medical pretraining sometimes yields limited benefit over generic models (Archit et al., 20 Jan 2025).
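The Dice and IoU metrics used throughout these benchmarks have standard definitions for binary masks; a minimal reference implementation (Hausdorff distance, being a surface metric, is omitted here for brevity):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice coefficient and IoU for two binary masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    dice = 2.0 * inter / total if total else 1.0   # empty masks: perfect score
    iou = inter / union if union else 1.0
    return dice, iou
```

Note the exact Dice/IoU relationship, IoU = Dice / (2 - Dice), which is a quick sanity check when comparing numbers reported under different metrics.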

6. Extensions, Limitations, and Future Directions

Challenges persist in universal medical segmentation:

  • Volumetric and Small Structure Generalization: 2D-slice-based models are less effective on thin vessels or in capturing long-range 3D context. Volumetric prompts and memory-tracker modules are under development (Zhong et al., 23 Mar 2025, Jiang et al., 15 Jan 2026).
  • Computational Efficiency: Full fine-tuning at high resolution is costly. Efforts focus on distillation, parameter-efficient transfer, and dataset/optimizer innovations for rapid adaptation (Lyu et al., 2024).
  • Prompt Automation: Advances in automated prompt generation and the integration of medical LLMs for more context-aware semantic guidance are ongoing. Current systems rely on hand-crafted or semi-automatic prompt strategies, especially for multi-organ or rare target tasks (Zhong et al., 23 Mar 2025).
  • Annotation Cost and Semi-Supervised Regimes: Methods like SSL-MedSAM2 leverage SAM2’s pseudo-label generation, bootstrapping fully-supervised U-Net training with high-quality pseudo-annotations, reducing labeling requirements by >80% on clinical segmentation challenges (Gong et al., 12 Dec 2025).
  • Annotation Platform Ecosystem: Continued development of modular toolkits (e.g., SAMME), fine-tune wrappers, prompt generators, and rapid threshold/ensemble utilities is anticipated (Liu et al., 2024, Carvalho et al., 2024).
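The pseudo-label bootstrapping idea above hinges on a quality-control step: only confident pseudo-annotations should seed supervised training. A simplified stand-in for such a filter (the 0.9 threshold and mean-confidence criterion are illustrative assumptions, not the published SSL-MedSAM2 procedure):

```python
import numpy as np

def filter_pseudo_labels(prob_maps, conf_thresh=0.9):
    """Keep pseudo-labels whose mean foreground confidence clears a threshold.

    prob_maps : iterable of per-image foreground probability maps in [0, 1]
    Returns a list of (index, binary_mask) pairs deemed reliable enough
    to serve as training targets for a downstream U-Net.
    """
    kept = []
    for i, prob in enumerate(prob_maps):
        fg = prob[prob >= 0.5]                    # predicted-foreground pixels
        if fg.size and fg.mean() >= conf_thresh:  # confident enough to keep
            kept.append((i, (prob >= 0.5).astype(np.uint8)))
    return kept
```

Low-confidence volumes are simply excluded rather than corrected, trading coverage for label quality.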

MedicoSAM’s sustained impact will depend on expanded cross-center clinical validation, routine integration in annotation pipelines, and the maturation of 3D prompt+tracking architectures for full-volume consistency.

7. Significance and Impact in the Medical Imaging Ecosystem

MedicoSAM establishes a practical and extensible framework for medical image segmentation foundation models. Key contributions are:

  • Universal prompt-driven segmentation across modalities, organs, and annotation regimes, directly supporting clinical annotation, research labeling, and downstream AI training (Archit et al., 20 Jan 2025, Jiang et al., 15 Jan 2026).
  • Significant acceleration in annotation throughput for interactive workflows, with prompt propagation and real-time mask updates reducing structure annotation from minutes to seconds (Liu et al., 2024).
  • Compatibility with established and emerging annotation/registration platforms, facilitating hybrid pipelines for image-guided therapy, mixed reality, robotic navigation, and data augmentation (Liu et al., 2024).
  • A reproducible blueprint for future foundation model adaptation, emphasizing prompt flexibility, multi-modal scaling, and the systematic comparison of lightweight and fully-trained backbones.

MedicoSAM thus signifies both a concrete method—manifested in public codebases and clinical deployments—and an evolving research direction unifying the strengths of large-scale promptable vision models with the stringent requirements of medical image analysis.
