
SAM3-Adapter: Efficient ViT Segmentation

Updated 16 February 2026
  • SAM3-Adapter is a parameter-efficient adaptation module that enhances segmentation performance by injecting lightweight neural adapters into a frozen ViT backbone.
  • It places adapters after self-attention and MLP units across transformer stages to enable robust, task-specific fine-tuning with minimal additional parameters.
  • Empirical results demonstrate significant performance gains in camouflaged, shadow, and medical image segmentation with only a slight increase in latency and computational cost.

SAM3-Adapter refers to a class of parameter-efficient adaptation modules designed to enable the Segment Anything Model 3 (SAM3)—a large-scale Vision Transformer (ViT)-based foundation model—to achieve high-precision segmentation in challenging downstream tasks such as camouflaged object detection, shadow detection, and medical image segmentation. The SAM3-Adapter framework introduces lightweight neural adapters into the frozen backbone of SAM3, facilitating strong task-specific adaptation, improved accuracy, and enhanced efficiency over previous approaches leveraging SAM and SAM2 (Chen et al., 24 Nov 2025).

1. Architectural Principles and Adapter Integration

SAM3 builds upon a multi-stage, hierarchical ViT-style visual encoder, employing four primary transformer stages (stages 1–4). The fundamental motivation for adapters arises from the observation that vanilla fine-tuning of large vision encoders yields poor efficiency and overfitting in data-scarce or low-level segmentation scenarios like camouflage and medical tasks.

SAM3-Adapter modules are injected after the self-attention and MLP units within each transformer stage. The adapter at stage $i$ operates as:

$$x^{(i)}_{\text{out}} = x^{(i)}_{\text{in}} + \Delta^{(i)}\!\left(x^{(i)}_{\text{in}}, F_i\right)$$

where $x^{(i)}_{\text{in}} \in \mathbb{R}^{d}$ is the feature vector at the respective stage, $\Delta^{(i)}$ is a residual adapter function, and $F_i$ denotes optional task-specific auxiliary inputs. In its canonical form, the adapter implements a two-layer bottleneck MLP with activation (typically GELU):

$$\Delta(x) = W_2\,\sigma(W_1 x), \qquad x' = x + W_2\,\sigma(W_1 x)$$

where $W_1 \in \mathbb{R}^{k \times d}$ and $W_2 \in \mathbb{R}^{d \times k}$ with $k \ll d$ (e.g., $k = d/8$ or $d/4$). The adapters support conditioning on additional features $F_i$ via MLP transformation and fusion, functioning as prompts that modulate the encoder's activations.

The overall encoder remains frozen, thereby preserving pre-trained representations and drastically reducing trainable parameter count and GPU memory consumption (Chen et al., 24 Nov 2025, Xiong et al., 1 Dec 2025).
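The canonical adapter above can be sketched in a few lines of numpy. This is an illustrative stand-in, not the official implementation: the dimensions are toy values, and the zero-initialized up-projection (a common practice for residual adapters, not stated in the paper) makes the module start as an identity map so training begins from the frozen backbone's behavior.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def adapter_forward(x, W1, W2, F=None, W_f=None):
    """Residual bottleneck adapter: x' = x + W2 @ gelu(W1 @ x [+ W_f @ F]).

    Optional auxiliary features F are fused into the bottleneck, acting as a
    prompt that modulates the encoder activations (the fusion weight W_f is a
    hypothetical name for this sketch).
    """
    h = W1 @ x
    if F is not None:
        h = h + W_f @ F  # fuse task-specific conditioning features
    return x + W2 @ gelu(h)

# toy sizes (illustrative only): d = 16, bottleneck k = d/4 = 4
rng = np.random.default_rng(0)
d, k = 16, 4
W1 = rng.normal(scale=0.02, size=(k, d))  # down-projection
W2 = np.zeros((d, k))                     # zero-initialized up-projection
x = rng.normal(size=d)

# with a zero-initialized up-projection the adapter starts as an identity map
assert np.allclose(adapter_forward(x, W1, W2), x)
```

Because the residual path carries the frozen features unchanged, only $W_1$, $W_2$, and any fusion weights need gradients, which is what keeps the trainable parameter count small.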

2. Training Objectives and Optimization Pipeline

The SAM3-Adapter framework deploys task-specific loss functions and optimization pipelines tailored to segmentation objectives:

  • Camouflaged Object and Polyp Segmentation utilizes a composite objective:

$$\mathcal{L}_{\text{COD}} = \mathcal{L}_{\text{BCE}} + \lambda_{\text{IoU}}\,\mathcal{L}_{\text{IoU}}$$

where $\mathcal{L}_{\text{BCE}}$ is the pixel-wise binary cross-entropy loss and

$$\mathcal{L}_{\text{IoU}} = 1 - \frac{\sum_{p} \hat{y}_p\,y_p}{\sum_{p} \left(\hat{y}_p + y_p - \hat{y}_p\,y_p\right)}$$

with default $\lambda_{\text{IoU}} = 1$.

  • Shadow Detection employs a balanced BCE loss to compensate for class imbalance:

$$\mathcal{L}_{\text{shadow}} = \mathcal{L}_{\text{BCE}}^{\text{balanced}}$$

  • Cell Segmentation and other medical segmentation tasks adopt similar mixed objectives with varying Dice and IoU weights.
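The composite camouflage/polyp objective can be sketched directly from its definition. The functions below are a minimal numpy illustration operating on flattened probability maps, not the paper's training code:

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy over predicted probabilities."""
    p = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def soft_iou_loss(pred, target, eps=1e-7):
    """1 - soft IoU, matching the L_IoU formula above."""
    inter = np.sum(pred * target)
    union = np.sum(pred + target - pred * target)
    return 1.0 - inter / (union + eps)

def cod_loss(pred, target, lambda_iou=1.0):
    """Composite objective: L_BCE + lambda_IoU * L_IoU (default lambda = 1)."""
    return bce_loss(pred, target) + lambda_iou * soft_iou_loss(pred, target)

# sanity check on a toy 4-pixel mask: a good prediction scores lower
pred = np.array([0.9, 0.1, 0.8, 0.2])
target = np.array([1.0, 0.0, 1.0, 0.0])
assert cod_loss(pred, target) < cod_loss(1 - pred, target)
```

The soft-IoU term is computed on probabilities rather than thresholded masks, so it stays differentiable and directly optimizes region overlap alongside the pixel-wise BCE term.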

Optimization is conducted with AdamW, initial learning rate $2 \times 10^{-4}$, cosine decay, and weight decay $10^{-2}$. Only the adapters and the SAM3 mask decoder are fine-tuned; the encoder is strictly frozen (Chen et al., 24 Nov 2025, Xiong et al., 1 Dec 2025).
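The cosine-decayed learning rate can be written out explicitly. This is a generic cosine schedule matching the stated initial rate, not the paper's exact scheduler; in a PyTorch pipeline, freezing would typically be done by setting `requires_grad=False` on encoder parameters and passing only adapter and decoder parameters to AdamW.

```python
import math

def cosine_lr(step, total_steps, base_lr=2e-4, min_lr=0.0):
    """Cosine decay from base_lr at step 0 down to min_lr at total_steps."""
    progress = step / max(1, total_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# starts at the stated initial rate and decays smoothly to zero
assert abs(cosine_lr(0, 100) - 2e-4) < 1e-12
assert cosine_lr(100, 100) < 1e-12
```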

3. Empirical Performance and Efficiency

SAM3-Adapter establishes new state-of-the-art across several benchmarks:

| Task / Metric | Previous Best | SAM3-Adapter | Relative Gain |
|---|---|---|---|
| COD10K $S_\alpha$ | 0.899 (SAM2-A) | 0.927 | +3.5% |
| COD10K $E_\phi$ | 0.950 (SAM2-A) | 0.965 | +1.5% |
| COD10K $F^{\omega}_{\beta}$ | 0.850 (SAM2-A) | 0.882 | +3.2% |
| COD10K MAE (↓) | 0.018 (SAM2-A) | 0.015 | -16.7% |
| ISTD BER, shadow (↓) | 1.43 (SAM2-A) | 1.14 | -20% |
| Kvasir-SEG mDice, polyp | 0.873 (SAM2-A) | 0.906 | +3.3% |
| CellSeg F1 | 0.6036 (xLSTM-UNet) | 0.7525 | +24.9% |

Each adapter stage adds $\sim$1.2M parameters; the total adapter parameter count (<5M) is $\ll$1% of SAM3's $\sim$600M. Forward-pass FLOPs increase by only 0.02 TFLOPs ($\sim$1% overhead) and inference latency rises by just 3 ms on A800 GPUs ($\sim$2% overhead). Robustness ablations show <0.5% performance degradation when halving the training set for COD, indicating strong few-shot adaptation (Chen et al., 24 Nov 2025).

4. Adapter Variants and Alternative Designs

Recent work on simplified adapters for SAM3, exemplified by SAM3-UNet (Xiong et al., 1 Dec 2025), generalizes the bottleneck residual adapter paradigm with several key distinctions:

  • Adapters are placed before each transformer block in the ViT backbone and parameterized as low-rank ($r = 32$) bottleneck MLPs: $W_{\text{down}} \in \mathbb{R}^{r \times D}$, $W_{\text{up}} \in \mathbb{R}^{D \times r}$ ($D = 1024$ for SAM3-UNet).
  • The output is post-activated and added in a residual fashion, finalized with LayerNorm:

$$x_{\text{out}} = \mathrm{LayerNorm}\!\left(x + W_{\text{up}}\,\sigma(W_{\text{down}}\,x)\right)$$

  • The adapted encoder is coupled with a lightweight U-Net-style decoder for tasks such as mirror and salient object detection.
  • Only a few million new parameters are introduced, allowing training with batch size 12 under 6 GB of VRAM for $336 \times 336$ inputs.
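The pre-block adapter above can be sketched as follows. This is a numpy illustration with random stand-in weights; the activation $\sigma$ is not specified for SAM3-UNet, so ReLU is assumed here.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize a feature vector to zero mean and unit variance."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def relu(x):
    return np.maximum(0.0, x)

def sam3_unet_adapter(x, W_down, W_up):
    """LayerNorm(x + W_up @ sigma(W_down @ x)) pre-block adapter."""
    return layer_norm(x + W_up @ relu(W_down @ x))

# dimensions quoted for SAM3-UNet: D = 1024, rank r = 32
D, r = 1024, 32
rng = np.random.default_rng(1)
W_down = rng.normal(scale=0.02, size=(r, D))  # low-rank down-projection
W_up = rng.normal(scale=0.02, size=(D, r))    # up-projection back to D
x = rng.normal(size=D)

out = sam3_unet_adapter(x, W_down, W_up)
assert out.shape == (D,)
assert abs(out.mean()) < 1e-6  # LayerNorm output is zero-mean
```

The main departures from the SAM3-Adapter form are the placement (before, rather than inside, each block) and the final LayerNorm, which keeps the adapted features at the scale the frozen block expects.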

Empirical results show that in mirror and salient object detection, SAM3-UNet outperforms SAM2-UNet and domain baselines, achieving higher IoU/S-structure scores and enabling efficient single-GPU training (Xiong et al., 1 Dec 2025).

5. Comparison with SAM-Adapter and SAM2-Adapter

SAM3-Adapter extends the modular design first established in SAM-Adapter for SAM and SAM2 by:

  • Allowing broader task conditioning and guidance via auxiliary features in each adapter.
  • Supporting composable integration of adapters at all backbone stages, yielding maximal downstream performance (+3.3% mDice when using all four stages, compared to +1.7% for last two stages only).
  • Keeping the pre-trained backbone fully frozen, thereby avoiding catastrophic forgetting of pre-trained knowledge and enhancing out-of-domain generalization.

Adapters for foundation models in medical imaging, such as 3DSAM-adapter (Gong et al., 2023), generalize these principles to non-2D domains by deploying 3D spatial adapters and modifying patch embedding/projection layers, further validating the scalability and parameter-efficiency of the adapter-based adaptation paradigm across domains.

6. Implementation, Reproducibility, and Open Resources

The official SAM3-Adapter codebase is organized as follows (Chen et al., 24 Nov 2025):

  • Frozen encoder: /models/sam3_encoder.py
  • Adapter definition: /models/sam3_adapter.py
  • Task-specific fine-tuning scripts: train_cod.py, train_shadow.py, etc.
  • Config files manage hyperparameters, dataset paths, loss weights, and adapter placements.
  • Standard preprocessing (resize, flip, color jitter) and ImageNet normalization are applied.
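The standard preprocessing steps can be sketched in numpy. This is an illustrative fragment, not the repository's data pipeline; the ImageNet channel statistics below are the widely used standard values.

```python
import numpy as np

# standard ImageNet channel statistics (RGB, images scaled to [0, 1])
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize(img):
    """Normalize an HxWx3 float image in [0, 1] with ImageNet statistics."""
    return (img - IMAGENET_MEAN) / IMAGENET_STD

def horizontal_flip(img):
    """Flip-augmentation building block: mirror along the width axis."""
    return img[:, ::-1, :]

img = np.full((4, 4, 3), 0.5)
out = normalize(img)
assert out.shape == (4, 4, 3)
# per-channel normalization commutes with spatial flips
assert np.allclose(normalize(horizontal_flip(img)), horizontal_flip(normalize(img)))
```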

Reproducibility is ensured via deterministic code, available pre-trained weights, and public data pipelines. Full code, models, and experiment instructions are available online.

7. Significance and Future Research Directions

SAM3-Adapter demonstrates that adapter-based fine-tuning strategies can unlock the segmentation capacity of large ViT foundation models for fine-grained and low-level segmentation tasks with minimal computational overhead. The consistent improvement over SAM, SAM2, and specialized baselines across camouflage, shadow, and medical image domains signals the adapter paradigm’s applicability to new segmentation challenges. Public availability of code and pretrained models is likely to accelerate further research into domain transfer, compositional adaptation, and hybrid-adapter architectures (Chen et al., 24 Nov 2025, Xiong et al., 1 Dec 2025, Gong et al., 2023).
