
Residual Adapter Block (RAB)

Updated 31 January 2026
  • Residual Adapter Block (RAB) is a lightweight two-layer bottleneck adapter that integrates frozen pre-trained features with task-specific updates using a residual blending mechanism.
  • It employs dual branches for visual and text outputs via a structured MLP formulation, optimized for few-shot generalization while maintaining low parameter overhead.
  • Empirical evidence shows that optimal residual blending yields superior performance by balancing pre-trained knowledge retention with new feature adaptation.

A Residual Adapter Block (RAB) is a lightweight, two-layer bottleneck module designed to add task-specific adaptation to large-scale vision-language models, most notably in the CLIP-Adapter framework. RABs are appended to the frozen pre-trained CLIP backbone in both the visual and text branches. Their core mechanism is a residual-style blend between pre-trained and newly adapted features, controlled by a tunable hyperparameter. This design enhances few-shot generalization by reprojecting features through a low-dimensional bottleneck while explicitly preserving pre-trained knowledge.

1. Architectural Role and Placement

RABs operate as post-encoder modules that interface directly with the frozen outputs of both image and text encoders in CLIP. In the visual branch, after the global-pooled image feature $f \in \mathbb{R}^D$ is extracted by the frozen image encoder (e.g., ResNet-50), a two-layer bottleneck adapter $A_v$ is applied. Analogously, in the text branch, a similar bottleneck adapter $A_t$ is appended after the text encoder produces the classifier weight matrix $\mathbf{W} \in \mathbb{R}^{D \times K}$. The output of each adapter is blended with the original feature via a residual-style combination controlled by a scalar coefficient (either learned or fixed), before classification proceeds with updated or frozen text weights (Gao et al., 2021).

Input Image I
    ↓
Frozen CLIP Visual Encoder
    ↓
f ∈ ℝ^D            ← original CLIP feature
    ↓
┌────────────────────────────┐
│   Bottleneck Adapter A_v   │
│      ℝ^D → ℝ^D via ℝ^d     │
└────────────────────────────┘
    ↓
A_v(f) ∈ ℝ^D       ← new feature
    ↓
f* = (1-α)·f + α·A_v(f)        ← residual blending
    ↓
Classification
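The visual-branch pipeline in the diagram above can be sketched in a few lines of NumPy. This is a minimal illustration, not the reference implementation: the helper names (`bottleneck_adapter`, `residual_blend`) and the toy dimensions are our own, while the shapes and the blending formula follow the text.

```python
import numpy as np

def bottleneck_adapter(f, W1, W2):
    # Two-layer bottleneck MLP: R^D -> R^d -> R^D, with a ReLU in between
    return np.maximum(f @ W1, 0.0) @ W2

def residual_blend(f, adapted, alpha):
    # f* = (1 - alpha) * f + alpha * A_v(f)
    return (1.0 - alpha) * f + alpha * adapted

# Toy dimensions for illustration; the ResNet-50 setting uses D=1024, d=256
D, d = 8, 2
rng = np.random.default_rng(0)
f = rng.normal(size=D)          # frozen CLIP feature
W1 = rng.normal(size=(D, d))    # down-projection
W2 = rng.normal(size=(d, D))    # up-projection

f_star = residual_blend(f, bottleneck_adapter(f, W1, W2), alpha=0.2)
print(f_star.shape)  # (8,)
```

Note that with `alpha=0.0` the blend returns the original frozen feature unchanged, which is why that setting recovers zero-shot behavior.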

2. Mathematical Formulation

The RAB consists of a two-layer bottleneck MLP. For the visual adapter:
$$A_v(f) = \mathrm{ReLU}\bigl(f^\top W_1^v\bigr)\, W_2^v \in \mathbb{R}^D$$
and for the text adapter:
$$A_t(\mathbf{W}) = \mathrm{ReLU}\bigl(\mathbf{W}^\top W_1^t\bigr)\, W_2^t \in \mathbb{R}^{D \times K}$$
with $W_1^v, W_1^t \in \mathbb{R}^{D \times d}$ (bottleneck) and $W_2^v, W_2^t \in \mathbb{R}^{d \times D}$. The blended residual outputs are:
$$f^\star = (1-\alpha)f + \alpha A_v(f), \qquad \mathbf{W}^\star = (1-\beta)\mathbf{W} + \beta A_t(\mathbf{W})$$
These are used in the CLIP-style softmax classifier:
$$p_i = \frac{\exp\bigl({\mathbf{W}^\star_i}^\top f^\star/\tau\bigr)}{\sum_{j=1}^K \exp\bigl({\mathbf{W}^\star_j}^\top f^\star/\tau\bigr)}$$
where $\tau$ is the softmax temperature.
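The classifier step can be sketched directly from the softmax formula. The function name and toy dimensions below are illustrative, and the temperature value is an assumption chosen only to exercise the formula:

```python
import numpy as np

def clip_softmax(f_star, W_star, tau):
    # logits_i = W*_i^T f* / tau, then softmax over the K classes
    logits = (W_star.T @ f_star) / tau
    logits -= logits.max()  # subtract the max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

D, K = 6, 4
rng = np.random.default_rng(1)
f_star = rng.normal(size=D)           # blended image feature
W_star = rng.normal(size=(D, K))      # blended classifier weights, one column per class
p = clip_softmax(f_star, W_star, tau=0.01)
assert p.shape == (K,) and abs(p.sum() - 1.0) < 1e-9  # valid distribution over K classes
```

Subtracting the maximum logit before exponentiating leaves the probabilities unchanged but avoids overflow, which matters because the small temperature $\tau$ produces large logits.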

3. Dimensions and Hyperparameters

CLIP-Adapter typically employs $D = 1024$ (ResNet-50) or $D = 512$ (ViT). The bottleneck reduction is set at $d = D/4$ (e.g., $d = 256$ for ResNet-50). Consequently, adapter layer shapes are $W_1^v, W_1^t \in \mathbb{R}^{1024 \times 256}$ and $W_2^v, W_2^t \in \mathbb{R}^{256 \times 1024}$ for the ResNet configuration. The residual coefficients $\alpha$ (visual) and $\beta$ (text) are dataset-specific hyperparameters: typical optimal values are $\alpha \approx 0.2$ for generic datasets (ImageNet) and $\alpha \approx 0.6$ for fine-grained datasets (DTD, EuroSAT). Selection is made via a small discrete search (Gao et al., 2021).
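The parameter budget implied by these shapes is a short back-of-the-envelope calculation (biases omitted for simplicity):

```python
# Parameter count for one adapter in the ResNet-50 configuration
D = 1024          # CLIP feature dimension for ResNet-50
d = D // 4        # bottleneck width, d = D/4 = 256
params_per_adapter = D * d + d * D   # W1 (D x d) plus W2 (d x D)
print(params_per_adapter)            # 524288, i.e. ~0.5M per adapter
```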

4. Training Protocol in Few-Shot Regimes

CLIP-Adapter with RABs is trained in a few-shot setting with the backbone fully frozen; only the adapter weights $\{W_1^v, W_2^v, W_1^t, W_2^t\}$ and optionally the residual coefficients $\alpha, \beta$ are optimized. The AdamW optimizer is used, with a learning rate of $1 \times 10^{-5}$ and batch size 32. The loss function is standard cross-entropy:
$$\mathcal{L} = -\frac{1}{N} \sum_{n=1}^N \sum_{i=1}^K y_i^{(n)} \log p_i^{(n)}$$
No explicit regularization (such as dropout or additional weight decay) is applied beyond that inherent in AdamW.
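The cross-entropy objective can be checked with a tiny worked example. This is a plain-Python sketch with hypothetical one-hot labels; with uniform predictions over $K$ classes the loss reduces to $\log K$, a useful sanity value at the start of training:

```python
import math

def cross_entropy(P, Y):
    # L = -(1/N) * sum_n sum_i y_i^(n) * log p_i^(n), with one-hot labels Y
    N = len(P)
    total = 0.0
    for p_row, y_row in zip(P, Y):
        total += sum(y * math.log(p) for p, y in zip(p_row, y_row) if y > 0)
    return -total / N

# Two samples, K=4 classes; uniform predictions give loss = log(4)
P = [[0.25, 0.25, 0.25, 0.25], [0.25, 0.25, 0.25, 0.25]]
Y = [[1, 0, 0, 0], [0, 0, 1, 0]]
print(round(cross_entropy(P, Y), 4))  # 1.3863 (= ln 4)
```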

5. Empirical Performance: Ablation and Generalization

Ablation experiments demonstrate the critical role of the residual blend. On DTD (fine-grained, 16-shot), accuracy is 40.72% with $\alpha = 0$ (zero-shot), 63.79% with $\alpha = 1$ (adapter only), and 66.06% with the optimal residual blend ($\alpha = 0.6$). On ImageNet (generic, 16-shot), the corresponding results are 60.46% ($\alpha = 0$), 59.05% ($\alpha = 1$), and 61.33% ($\alpha = 0.2$) (Gao et al., 2021). Pure adapter adaptation ($\alpha = 1$) can overfit on broader datasets, while blending consistently yields superior few-shot generalization. This suggests that residual blending balances new knowledge injection against preservation of the pre-trained manifold structure.
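The discrete search over $\alpha$ described in Section 3 amounts to evaluating each candidate on held-out data and keeping the best. A minimal sketch, using the reported 16-shot DTD accuracies above as stand-in validation scores (a real search would evaluate a trained adapter at each candidate value):

```python
# Candidate alphas mapped to validation accuracy (%); values are the
# three DTD ablation points reported in the text, used here as an example
dtd_accuracy = {0.0: 40.72, 0.6: 66.06, 1.0: 63.79}
best_alpha = max(dtd_accuracy, key=dtd_accuracy.get)
print(best_alpha)  # 0.6
```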

6. Comparative Assessment: Advantages and Limitations

RAB-equipped CLIP-Adapter is highly parameter-efficient, requiring approximately $2 \times D \times d$ parameters per adapter (about 0.5M for ResNet-50), in contrast to the millions updated by full fine-tuning. The frozen backbone avoids catastrophic forgetting, ensuring the original zero-shot capability is partly retained. The method outperforms prompt tuning (CoOp) on nearly all few-shot benchmarks, while requiring no prompt-specific continuous tokens and presenting a simpler design. RABs can adapt the visual branch, the text branch, or both, with independent or shared blending ratios.

Limitations include the introduction of one or two new hyperparameters ($\alpha, \beta$) that require dataset-level tuning, though the grid search is computationally light. There is also a small additional computational footprint for the adapter forward pass, which remains negligible compared to the full backbone pass. RABs do not replace prompt tuning in scenarios dominated by prompt engineering or complex language structure; they are complementary within the overall adaptation toolkit.

7. Conceptual Summary and Significance

The Residual Adapter Block constitutes a minimal, two-layer MLP “bottleneck” attached to the output of a frozen pre-trained model. Through low-rank projection and a learnable residual blend, RABs integrate task-specific adaptation with preservation of prior knowledge. This design enables robust few-shot generalization with minimal parameter overhead and mitigates overfitting, and is empirically superior to both prompt-tuning and full fine-tuning baselines in the CLIP-Adapter context (Gao et al., 2021).

References

Gao, P., Geng, S., Zhang, R., Ma, T., Fang, R., Zhang, Y., Li, H., & Qiao, Y. (2021). CLIP-Adapter: Better Vision-Language Models with Feature Adapters. arXiv:2110.04544.