
Relative (Anchor-Based) Representations

Updated 28 January 2026
  • Relative (anchor-based) representations are a method for encoding data by measuring similarity to fixed anchor points, ensuring invariance to latent transformations.
  • They reduce alignment overhead and enable robust cross-domain and multimodal integration, facilitating tasks like zero-shot model stitching and interpretable embeddings.
  • Anchor selection strategies, including random sampling and farthest-point sampling, balance fidelity and computational efficiency to enhance interpretability and downstream performance.

Relative (anchor-based) representations constitute a general principle for encoding data, model states, or features not in some absolute coordinate system, but with respect to their similarity, transformation, or distance to a fixed, often small, set of reference points known as anchors. This approach has become foundational across zero-shot model stitching, cross-domain transfer, interpretable embeddings, multimodal fusion, semantic communication, and advanced geometric and topological alignment in neural networks. By grounding representations in relations to anchors, one can achieve invariance to latent space misalignments, structure preservation across domains, and extremely compact or interpretable model instantiations.

1. Core Definition and Mathematical Framework

Given a domain $\mathcal{X}$, let $f:\mathcal{X}\to\mathbb{R}^d$ be an encoder. For a fixed set of $k$ anchors $\mathcal{A}=\{a_1,\dots,a_k\}$, the relative (anchor-based) representation of $x\in\mathcal{X}$ is the vector

$$r(x) = \big[\mathrm{sim}(f(x),f(a_1)),\ \ldots,\ \mathrm{sim}(f(x),f(a_k))\big]^\top \in \mathbb{R}^k$$

where $\mathrm{sim}(\cdot,\cdot)$ is typically cosine similarity or a metric-derived function; distances and kernel similarities are also used depending on context (Moschella et al., 2022, García-Castellanos et al., 2024, Shen et al., 25 Mar 2025, Kwak et al., 10 Dec 2025, Chen et al., 2023, Yu et al., 2 Jun 2025).
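The definition above can be sketched in a few lines of NumPy. This is a minimal illustration with cosine similarity and toy random data; the function name and shapes are ours, not from the cited papers:

```python
import numpy as np

def relative_representation(z, anchors):
    """Encode latent vectors as cosine similarities to anchor latents.

    z:       (n, d) array of encoded samples f(x).
    anchors: (k, d) array of encoded anchors f(a_1), ..., f(a_k).
    Returns: (n, k) relative representation, one row r(x) per sample.
    """
    z_n = z / np.linalg.norm(z, axis=1, keepdims=True)
    a_n = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return z_n @ a_n.T  # entry (i, j) = cos(f(x_i), f(a_j))

rng = np.random.default_rng(0)
latents = rng.normal(size=(100, 64))  # toy stand-in for encoder outputs f(x)
anchors = latents[:8]                 # k = 8 anchors drawn from the data
r = relative_representation(latents, anchors)
print(r.shape)  # (100, 8)
```

Each sample is now described by $k$ numbers, one per anchor, regardless of the encoder's latent dimension $d$.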

A key property is that, with cosine similarity, these representations are invariant to rotations, reflections, and uniform rescalings of the underlying latent space: for any orthogonal matrix $R$ and scalar $s>0$, applying

$$T(z) = sRz$$

to the latents of both samples and anchors leaves $r(x)$ unchanged (Moschella et al., 2022, García-Castellanos et al., 2024).
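The invariance can be verified numerically. In this sketch (toy data, variable names ours) a random orthogonal matrix and a positive scale are applied jointly to sample and anchor latents, and the relative representation is unchanged up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n = 32, 10, 50
Z = rng.normal(size=(n, d))   # latents f(x)
A = rng.normal(size=(k, d))   # anchor latents f(a_j)

# Random orthogonal R (via QR of a Gaussian matrix) and positive scale s.
R, _ = np.linalg.qr(rng.normal(size=(d, d)))
s = 3.7

def rel(Z, A):
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    return Zn @ An.T

# Apply T(z) = s R z to samples and anchors alike.
r_before = rel(Z, A)
r_after = rel(Z @ (s * R).T, A @ (s * R).T)
print(np.allclose(r_before, r_after))  # True
```

This is exactly why independently trained encoders whose latent spaces differ by such a transform produce matching relative representations.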

Extensions include geodesic variants (distances to anchors measured along nonlinear manifolds or latent charts) (Yu et al., 2 Jun 2025), and robust versions that are additionally invariant to coordinate permutations and coordinate-wise scalings via batch-normalization-style preprocessing (García-Castellanos et al., 2024).

2. Principles and Invariance Properties

Relative representations serve as a bridge between independently parameterized or trained latent spaces, offering a redundancy-free and symmetry-respecting summary that enables:

  • Invariance to latent isometries: By construction, relative representations remain unchanged under any shared orthogonal or scaling transform, offering natural compatibility for comparing or composing incompatible models (Moschella et al., 2022, García-Castellanos et al., 2024).
  • Reduction of alignment overhead: Unlike canonical correlation analysis or Procrustes alignment, anchor-based relative representations need only a small set of correspondences (anchors), and, in recent work, can even recover aligned anchors from a minimal seed via optimization and optimal-transport objectives (Cannistraci et al., 2023).
  • Symmetry and topology: Robustified forms (with normalization) account for the full intertwiner group of common neural activations, and topological densification regularizes class topology (García-Castellanos et al., 2024).
  • Interpretability and traceability: When anchors are interpretable entities—texts, concepts, or semantic prototypes—the resulting coordinates admit direct semantic inspection and manipulation (Wang et al., 15 May 2025, Fraser et al., 13 Dec 2025).

3. Constructing and Selecting Anchors

Anchor quality and selection are crucial for expressiveness, alignment, and interpretability. Common strategies include:

  • Random sampling: Uniformly drawing anchors from the data distribution, effective for general invariance but sometimes suboptimal for fine-grained structure (Moschella et al., 2022).
  • Farthest-point sampling (FPS): Greedily adding data points that maximize the minimum distance to any existing anchor, ensuring diverse coverage of the embedding manifold (Wang et al., 15 May 2025).
  • K-means or clustering prototypes: Anchors are cluster centroids representing main axes of semantic variability, beneficial for communication or compression tasks (Hüttebräucker et al., 2024).
  • Parallel anchors across domains: In cross-domain settings (e.g., multilingual, cross-modal), explicit anchor correspondences are constructed and, if necessary, bootstrapped via optimization from a minimal seed (Cannistraci et al., 2023).
  • Task-adaptive optimization: Learning or refining anchor positions jointly with downstream objectives or regularizers in order to maximize information flow or discriminability (Liang et al., 2020, Zhou et al., 12 Oct 2025).

Anchor selection balances the trade-off between expressivity (more anchors yield higher fidelity in downstream tasks) and memory or communication efficiency (compact anchor sets compress semantics) (Hüttebräucker et al., 2024, Wang et al., 15 May 2025).
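Farthest-point sampling, mentioned above, is straightforward to implement greedily. The following is a minimal sketch (our own implementation, not code from the cited work): each step adds the candidate point farthest from the current anchor set.

```python
import numpy as np

def farthest_point_sampling(X, k, seed=0):
    """Greedy FPS: pick k anchors that maximize the minimum distance
    to any previously chosen anchor, covering the embedding diversely.

    X: (n, d) candidate embeddings. Returns indices of chosen anchors.
    """
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X)))]            # arbitrary first anchor
    d_min = np.linalg.norm(X - X[idx[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d_min))              # farthest from current set
        idx.append(nxt)
        # Each point keeps its distance to the nearest chosen anchor.
        d_min = np.minimum(d_min, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(idx)

X = np.random.default_rng(2).normal(size=(200, 16))
anchor_ids = farthest_point_sampling(X, k=12)
print(len(set(anchor_ids.tolist())))  # 12 distinct anchors
```

Compared with uniform random sampling, the greedy rule spreads anchors toward the boundary of the data, which is what gives FPS its diverse manifold coverage.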

4. Methodologies and Algorithmic Implementations

Relative representations are deployed in various architectures and modalities:

  • Feature/Descriptor Pooling: Dense visual descriptors are expressed relative to a bank of anchor filters (learned to be discriminative and geometrically stable) for semantic matching and correspondence (Novotny et al., 2017).
  • Motion Modeling: Structure-aware motion transfer leverages hierarchical anchor graphs—motion anchors, root anchors, and their affine relationships—to regularize and capture structure without ground-truth supervision (Tao et al., 2022).
  • Embedding Compression and Sparse Combination: Large discrete vocabularies are embedded by sparse mixtures over anchor prototypes, with both anchors and transformation matrices learned end-to-end (ANT/nbANT) (Liang et al., 2020).
  • Zero-Shot Model Stitching: Representations are projected into relative anchor-space, enabling post hoc composition of neural models without retraining or label supervision (Moschella et al., 2022, Cannistraci et al., 2023, García-Castellanos et al., 2024).
  • Manifold Geometric Alignment: Relative-geodesic representations encode anchorwise Riemannian distances, supporting invariance to chart reparametrization and robust alignment across highly nonlinear latent spaces (Yu et al., 2 Jun 2025).
  • Semantic Communication and Equalization: Encoded latents are recast as coordinates in the anchor basis, enabling agent–agent semantic protocol alignment across heterogeneous architectures without retraining (Hüttebräucker et al., 2024).
  • Interpretable and Controllable Representations: Sparse concept anchoring positions specific concepts along chosen axes, supporting both reversible behavioral steering at inference and permanent deletion via targeted weight ablation (Fraser et al., 13 Dec 2025).
  • Domain-Incremental and Multimodal Learning: Anchors defined in language- or modality-space serve as geometric or cross-modal pivots for guiding visual alignment and semantic fusion, as in LAVA and A-MESS (Geng et al., 18 Nov 2025, Shen et al., 25 Mar 2025).
  • Similarity Testing: Anchor-based maximum discrepancy defines the "distance" of two distributions to an anchor in kernel/RKHS space, enabling powerful adaptive two-sample and relative similarity tests (Zhou et al., 12 Oct 2025).

Relative representations are typically either integrated as non-trainable layers after an encoder, or serve as the core compositional mechanism in model training or inference. The mappings from original space to anchor-space are explicit, differentiable, and, in many cases, invertible in the sense of reconstructing or translating between domains (Hüttebräucker et al., 2024).
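The zero-shot stitching idea can be demonstrated end to end with toy data. In this hedged sketch (our own simulation, not the cited papers' code), "encoder B" differs from "encoder A" by a rotation and rescaling; a linear readout fit once on A's relative coordinates applies unchanged to B's, with no retraining:

```python
import numpy as np

rng = np.random.default_rng(3)
d, k, n = 16, 12, 200

# Two "independently trained" encoders whose latents differ by an isometry.
Z_a = rng.normal(size=(n, d))
R, _ = np.linalg.qr(rng.normal(size=(d, d)))
Z_b = Z_a @ R.T * 2.0                  # encoder B = rotated, rescaled A
anchor_ids = rng.choice(n, size=k, replace=False)

def rel(Z, ids):
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return Zn @ Zn[ids].T              # cosine similarity to in-batch anchors

# Fit a linear readout on encoder A's relative coordinates...
y = rng.normal(size=(n, 3))            # toy regression targets
w, *_ = np.linalg.lstsq(rel(Z_a, anchor_ids), y, rcond=None)

# ...and apply it directly to encoder B: the coordinates agree exactly.
print(np.allclose(rel(Z_a, anchor_ids), rel(Z_b, anchor_ids)))  # True
pred_b = rel(Z_b, anchor_ids) @ w
```

The anchor-space projection here is a frozen, non-trainable layer, exactly the integration pattern described above.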

5. Empirical Results and Application Scope

Empirical studies across domains demonstrate the efficacy and versatility of relative representations:

  • Zero-shot model stitching: Near-perfect information transfer across independently trained encoders and decoders in image, text, and graph tasks (Moschella et al., 2022, García-Castellanos et al., 2024, Cannistraci et al., 2023).
  • Cross-lingual and cross-modal alignment: High performance with only a handful of seed anchor correspondences, and tolerant to task and data modality mismatch (Cannistraci et al., 2023, Chen et al., 2023).
  • Semantic embedding and retrieval: LDIR achieves STS and retrieval metrics comparable to or better than dense black-box models, while offering interpretable, human-traceable axes (Wang et al., 15 May 2025).
  • Compression: Anchor-based embedding schemes achieve 5x–80x parameter reduction with negligible accuracy loss, and sometimes slight gains, in text, language-modeling, and matrix-factorization tasks (Liang et al., 2020).
  • Interpretability and control: Sparse Concept Anchoring allows specific semantic axes to be suppressed or ablated at inference, with MSE increases tightly tracking concept removal (Fraser et al., 13 Dec 2025).
  • Motion modeling and 4D rendering: Hierarchical or locally-canonical anchor systems enable memory-efficient, flicker-free, temporally coherent Gaussian splatting in dynamic scenes (Tao et al., 2022, Kwak et al., 10 Dec 2025).
  • Statistical relative similarity: Anchor-based testing delivers two-phase, consistent and statistically powerful discrimination of distributional proximity, outperforming traditional fixed-kernel methods (Zhou et al., 12 Oct 2025).
  • Domain-incremental and continual learning: Alignment via semantic (language-based) anchors preserves both knowledge and class geometry across domain shifts, outperforming direct feature alignment or isolation (Geng et al., 18 Nov 2025).

6. Limitations, Regularization, and Advanced Developments

Notable limitations and ongoing developments include:

  • Anchor selection bottlenecks: While small sets are often effective, insufficient anchor coverage can reduce fidelity, and unsupervised anchor discovery in cross-domain tasks remains underexplored (Cannistraci et al., 2023).
  • Computational and memory cost: Kernel-based or geodesic relative representations may introduce additional computation, though most variants remain tractable for practical anchor set sizes (Yu et al., 2 Jun 2025).
  • Robustness to latent permutations and non-isotropic scaling: Base methods are not invariant to coordinate-wise scaling and permutation; robust batch-normalization preprocessing addresses this (García-Castellanos et al., 2024).
  • Topological densification: Persistent-homology-based regularizers encourage tight class clusters and stabilize anchor-based fine-tuning (García-Castellanos et al., 2024).
  • Semantic drift and class non-alignment: In domain incremental learning, semantic geometry preservation via frozen language anchors assumes class correspondences hold across domains, which may not apply to unstructured or heavily drifted domains (Geng et al., 18 Nov 2025).
  • Sparse annotation: Some methods require sparse or seed-level supervision for concept anchoring or parallel anchor bootstrapping, though advances enable substantial reduction in labeled data (Fraser et al., 13 Dec 2025, Cannistraci et al., 2023).
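The robustness fix for coordinate-wise scaling mentioned above can be illustrated concretely. In this sketch (assumed toy data; the per-coordinate standardization is our simplified stand-in for the batch-normalization-style preprocessing of García-Castellanos et al., 2024), a non-isotropic affine distortion of the latent coordinates breaks plain cosine-based relative representations, while standardizing each coordinate first restores agreement:

```python
import numpy as np

def standardize(Z, eps=1e-8):
    """Per-coordinate standardization (batch-norm style, no learned affine)."""
    return (Z - Z.mean(axis=0)) / (Z.std(axis=0) + eps)

def rel(Z, ids):
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return Zn @ Zn[ids].T              # cosine similarity to in-batch anchors

rng = np.random.default_rng(4)
Z = rng.normal(size=(100, 8))
ids = np.arange(5)
scale = rng.uniform(0.5, 5.0, size=8)  # non-isotropic coordinate rescaling
shift = rng.normal(size=8)
Z2 = Z * scale + shift                 # per-coordinate affine distortion

print(np.allclose(rel(Z, ids), rel(Z2, ids)))                            # False
print(np.allclose(rel(standardize(Z), ids), rel(standardize(Z2), ids)))  # True
```

Because standardization removes each coordinate's mean and scale, any per-coordinate affine transform is absorbed before similarities are computed.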

7. Outlook and Research Frontiers

The anchor-based relative representation paradigm has catalyzed significant advances in model interoperability, efficient pre-training, interpretability, and semantic communication.

Relative representations, anchored in well-founded geometric and topological theory, are establishing themselves as a core abstraction for model geometrization, alignment, explanation, and efficient learning across the spectrum of contemporary deep learning research.

