
Representational Dissimilarity Matrix (RDM)

Updated 8 February 2026
  • RDM is a technique that encodes how distinct stimulus-evoked responses are using dissimilarity metrics like Euclidean or correlation distances.
  • Methodological extensions such as time-resolved, topological, and stochastic RDMs enhance analysis of temporal dynamics and noise-adjusted comparisons.
  • RDMs facilitate cross-modal and model–brain benchmarking by aligning representations from neural, behavioral, and computational data with standardized metrics.

A Representational Dissimilarity Matrix (RDM) is a fundamental construct in representational similarity analysis (RSA), used to characterize the geometry of neural, behavioral, or model response patterns by summarizing the pairwise dissimilarities between all condition- or stimulus-evoked activity vectors. Each entry in an RDM encodes how distinct the responses to two stimuli are according to a chosen metric. This abstraction provides a common framework for comparing representations across diverse systems—brain areas, time points, individuals, and computational models—without the need for explicit unitwise correspondence.

1. Formal Definition and Construction of RDMs

Given a set of N experimental conditions or stimuli, let \mathbf{r}_i denote the M-dimensional response vector (e.g., neuron firing rates, voxel activations, DNN features) evoked by stimulus i. The RDM is an N \times N symmetric matrix D, with entries

D_{ij} = d(\mathbf{r}_i, \mathbf{r}_j)

where d(\cdot, \cdot) is a dissimilarity or distance metric. Common metrics include:

  • Euclidean distance: D_{ij} = \|\mathbf{r}_i - \mathbf{r}_j\|_2 = \left[\sum_{k=1}^{M} (r_{i,k} - r_{j,k})^2\right]^{1/2},
  • Correlation distance: D_{ij} = 1 - \mathrm{corr}(\mathbf{r}_i, \mathbf{r}_j) = 1 - \frac{\tilde{\mathbf{r}}_i \cdot \tilde{\mathbf{r}}_j}{\|\tilde{\mathbf{r}}_i\| \|\tilde{\mathbf{r}}_j\|}, where \tilde{\mathbf{r}} denotes the mean-centered response vector (without centering, this expression is the cosine distance),
  • Mahalanobis distance and specialized metrics for stochastic representations or behavioral data (Lin et al., 2019, Janik, 2019, Diedrichsen et al., 2020, Duong et al., 2022, Lin et al., 2023, Zimmermann, 2022).
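As a concrete illustration of the definitions above, the following self-contained sketch (pure Python; the toy response vectors are an assumption of this example) computes both a Euclidean and a correlation-distance RDM:

```python
import math

def euclidean_rdm(responses):
    """Symmetric N x N matrix of pairwise Euclidean distances."""
    n = len(responses)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.sqrt(sum((a - b) ** 2
                              for a, b in zip(responses[i], responses[j])))
            D[i][j] = D[j][i] = d
    return D

def correlation_rdm(responses):
    """Symmetric N x N matrix of 1 - Pearson correlation (mean-centered)."""
    def centered(v):
        m = sum(v) / len(v)
        return [x - m for x in v]
    n = len(responses)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            a, b = centered(responses[i]), centered(responses[j])
            num = sum(x * y for x, y in zip(a, b))
            den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
            D[i][j] = D[j][i] = 1.0 - num / den
    return D

# Toy "stimulus-evoked" responses: 3 conditions, M = 4 units.
R = [[1.0, 0.0, 0.0, 1.0],   # condition 1
     [0.9, 0.1, 0.1, 0.9],   # condition 2: similar to condition 1
     [0.0, 1.0, 1.0, 0.0]]   # condition 3: anti-correlated with condition 1
D_euc = euclidean_rdm(R)     # D_euc[0][2] = 2.0
D_cor = correlation_rdm(R)   # D_cor[0][2] = 2.0 (perfect anti-correlation)
```

Note that the two metrics can disagree in rank order: the correlation distance ignores overall response magnitude, while the Euclidean distance does not.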

Because RDMs are agnostic to the individual labelings or ordering of neurons/units, they enable direct representational comparison across architectures, individuals, and measurement modalities.

2. Methodological Extensions: Temporal, Topological, and Stochastic RDMs

While a "static" RDM summarizes the geometry for a set of fixed responses, several methodological extensions generalize the notion along temporal, topological, and probabilistic axes:

  • Time-Resolved RDMs (RDM movies): By extracting response vectors in sliding temporal windows, a time series \{D(t)\} is constructed, revealing the dynamics of representational geometry over time. Visualization of these trajectories, especially via Procrustes-aligned multidimensional scaling (pMDS), elucidates category-specific encoding dynamics and temporal hierarchy in cortical regions (Lin et al., 2019).
  • Topological Extensions (tRSA): Standard RDMs capture metric geometry but are blind to topological invariants (e.g., holes, loops). The geo-topological transform GT_{l,u}(d) compresses small and large distances to amplify neighborhood relations, interpolating between metric and pure topological (adjacency) summaries. This increases robustness to noise and interindividual variability, and provides a continuum for calibrating sensitivity to geometry vs. topology (Lin et al., 2023).
  • Stochastic RDMs: For stochastic neural networks or neural data with trial-to-trial variability, RDMs can be constructed using metrics that compare distributions (e.g., 2-Wasserstein, energy distance), integrating both mean and covariance. This enables rigorous analysis in settings where noise structure is informative and captures attributes missed by deterministic metrics (Duong et al., 2022).
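To make the time-resolved construction concrete, here is a minimal sketch (pure Python; the toy data and the `data[condition][time][unit]` layout are assumptions of this example) that averages each condition's responses within a sliding window and emits one RDM per window position:

```python
import math

def euclidean_rdm(vectors):
    """Symmetric matrix of pairwise Euclidean distances."""
    n = len(vectors)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.sqrt(sum((a - b) ** 2
                              for a, b in zip(vectors[i], vectors[j])))
            D[i][j] = D[j][i] = d
    return D

def rdm_movie(data, window, step):
    """Build the time series {D(t)} of RDMs.

    data[c][t][u]: response of unit u at time t for condition c.
    Each condition's responses are averaged within a sliding window,
    then one RDM is computed per window position.
    """
    n_time = len(data[0])
    movie = []
    for start in range(0, n_time - window + 1, step):
        vectors = []
        for cond in data:
            win = cond[start:start + window]
            # mean response per unit over the window
            vectors.append([sum(t[u] for t in win) / window
                            for u in range(len(win[0]))])
        movie.append(euclidean_rdm(vectors))
    return movie

# Toy data: 2 conditions, 4 timepoints, 2 units; the conditions only
# become discriminable in the second half of the epoch.
data = [[[0, 0], [0, 0], [1, 1], [1, 1]],
        [[0, 0], [0, 0], [0, 0], [0, 0]]]
movie = rdm_movie(data, window=2, step=2)
```

In this toy example the first RDM is all zeros and the second shows a nonzero distance, i.e., representational geometry emerging over time.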

3. RDMs in Model–Brain and Across-Modality Comparisons

A central use of RDMs is to bridge representations between models (typically deep neural networks) and neural or behavioral data. Using identical dissimilarity metrics, one computes parallel RDMs for

  • different brain areas (fMRI, MEG/EEG, neural recordings),
  • each layer of a computational model,
  • behavioral similarity judgments, and so forth.

Model–data correspondence is quantified using matrix correlation measures (e.g., Spearman's ρ, Kendall's τ) or, preferably, using statistical criteria that account for noise covariance in the RDM entries (see Section 4). This approach, intrinsic to RSA, enables "representational benchmarking": identifying which model layers or architectures best recapitulate neural representational geometry (Janik, 2019, Diedrichsen et al., 2020, McClure et al., 2015, Lin et al., 2023).
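For illustration, the rank-correlation comparison of two RDMs reduces to correlating their vectorized upper triangles; a minimal pure-Python sketch (a stand-in for library routines, not a substitute for the noise-aware criteria of Section 4):

```python
def upper_triangle(D):
    """Vectorize the strictly upper-triangular entries of an RDM."""
    n = len(D)
    return [D[i][j] for i in range(n) for j in range(i + 1, n)]

def _ranks(values):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda k: values[k])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rdm_similarity(D1, D2):
    """Spearman correlation between two RDMs' upper-triangle entries."""
    x, y = _ranks(upper_triangle(D1)), _ranks(upper_triangle(D2))
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den
```

Because only ranks enter the comparison, this measure is invariant to any monotonic rescaling of either RDM's dissimilarities, which is why Spearman-type criteria are popular for model–brain benchmarking.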

RDMs also generalize across modalities and species: for instance, comparing fMRI-based RDMs in humans and single-unit-based RDMs in macaques, or comparing task-evoked RDMs in artificial and biological systems.

4. Statistical Considerations and Optimal Comparisons

The estimation and comparison of RDMs must account for biases and dependencies induced by noise:

  • Unbiased Estimation: Classic estimators (e.g., squared Euclidean) are biased by measurement noise; cross-validated estimators eliminate mean bias at the cost of increased variance, especially when partition number is small (Diedrichsen et al., 2020).
  • Dependency Structure: Pairwise distances in an RDM are statistically dependent (distances sharing a condition covary). Neglecting these correlations makes inferential tests and model ranking statistically suboptimal.
  • Whitened Unbiased Cosine (WUC) Similarity: Whitening the cross-validated RDM distance vector with respect to its null (noise) covariance achieves noise-variance equalization and de-correlation, yielding the WUC as a near-likelihood-optimal criterion for comparing model and measured RDMs (Diedrichsen et al., 2020).

In practice, the recommended pipeline involves cross-validated dissimilarities, explicit estimation of covariance structure, whitening, and subsequent similarity computation and inference at the group level.
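A minimal sketch of the cross-validated squared Euclidean distance described above: the difference pattern is estimated independently in two data partitions and the inner product of the two estimates is taken (the partition labels A/B and the toy vectors are assumptions of this example):

```python
def crossval_sq_dist(ri_a, rj_a, ri_b, rj_b):
    """Cross-validated squared Euclidean distance between conditions i and j.

    The difference pattern (r_i - r_j) is estimated separately in two
    independent data partitions (A and B), and the inner product of the two
    estimates is taken. Noise that is independent across partitions cancels
    in expectation, so the estimator is unbiased -- at the cost of higher
    variance, and it may legitimately come out negative.
    """
    return sum((xa - ya) * (xb - yb)
               for xa, ya, xb, yb in zip(ri_a, rj_a, ri_b, rj_b))

# Noiseless toy patterns: the estimate equals the true squared distance.
d_hat = crossval_sq_dist([1.0, 0.0], [0.0, 1.0],
                         [1.0, 0.0], [0.0, 1.0])  # 2.0
```

In real analyses one averages such estimates over all partition pairs (e.g., runs or sessions) before whitening and comparison.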

5. Practical Issues: Partial RDMs and Computational Imputation

Empirical constraints in behavioral or neuroimaging studies often preclude exhaustive pairwise measurement, resulting in partial (sparse) RDMs. Computational imputation is required to reconstruct the full matrix:

  • Geometric Reconstruction Algorithm: For Euclidean RDMs, missing D_{ij} entries are imputed via Pythagorean estimates from known triplets, aggregating sum-of-squares and absolute-difference formulas across reference points k. The estimate is taken as the median across all valid references, yielding robustness and rapid percolation through the matrix (Moerel et al., 31 May 2025).
  • Comparison to Alternatives: Compared to deep neural network imputation, graph-shortest-path methods, and partial-MDS, the geometric algorithm is parameter-free, transparent, computationally efficient for moderate n (O(n^3)), and empirically superior in accuracy and variance across broad missing-data regimes.

Table: Approaches to Partial RDM Imputation (Moerel et al., 31 May 2025)

Method                    | Key Feature                        | Pitfall
--------------------------|------------------------------------|--------------------------------------
Geometric inference       | Median of Pythagorean estimates    | Assumes Euclidean geometry
Graph-based shortest path | Enforces triangle upper bounds     | Underestimates; ignores lower bounds
Deep NN                   | Flexible, data-driven              | Training/time-intensive, opaque
Partial-MDS               | Stress minimization over embedding | Dimension choice, slow convergence
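To illustrate the flavor of the geometric approach (this is one plausible reading of the Pythagorean estimates described above, not the published implementation), each valid reference point k can contribute a right-angle estimate sqrt(d_ik^2 + d_jk^2) and a collinear lower estimate |d_ik - d_jk|, with the median over all candidates taken as the imputed value:

```python
import math

def impute_entry(D, i, j):
    """Impute a missing D[i][j] (None marks missing entries) from reference
    points k for which both D[i][k] and D[j][k] are known.

    Illustrative sketch: each reference yields a right-angle (Pythagorean)
    estimate sqrt(d_ik^2 + d_jk^2) and a collinear lower estimate
    |d_ik - d_jk|; the imputed value is the median over all candidates.
    """
    candidates = []
    for k in range(len(D)):
        if k == i or k == j:
            continue
        dik, djk = D[i][k], D[j][k]
        if dik is None or djk is None:
            continue
        candidates.append(math.sqrt(dik ** 2 + djk ** 2))  # right-angle case
        candidates.append(abs(dik - djk))                  # collinear case
    if not candidates:
        return None  # no valid reference yet; retry after other entries fill in
    candidates.sort()
    m = len(candidates)
    if m % 2:
        return candidates[m // 2]
    return 0.5 * (candidates[m // 2 - 1] + candidates[m // 2])
```

Iterating this entrywise, newly imputed values become references for the remaining gaps, which is the "percolation" behavior noted above.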

6. Visualization, Multidimensional Scaling, and Analysis

RDMs serve as the basis for manifold visualization and exploratory analysis:

  • Multidimensional Scaling (MDS): Classical (Torgerson), nonmetric, and stress-minimizing MDS project the RDM-prescribed geometry into low-dimensional Euclidean space. Procrustes alignment enables consistent "RDM movies" over time (Lin et al., 2019, Zimmermann, 2022).
  • Cluster and Block Matrix Methods: Partitioning of RDMs reveals fine-grained cluster structure among represented conditions.
  • Topological and Geodesic Analysis: tRSA workflows apply monotonic transforms and shortest-path (geodesic) embeddings to probe structure beyond metric geometry (Lin et al., 2023).

Empirically, stress-minimizing MDS yields more faithful low-dimensional embeddings than eigendecomposition-based (classical) MDS for non-Euclidean or high-dimensional RDMs, especially in behavioral shape spaces (Zimmermann, 2022).
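As an illustration of the classical (Torgerson) route from an RDM to a low-dimensional embedding, the sketch below double-centers the squared RDM and extracts leading eigenvectors by power iteration (a minimal pure-Python stand-in for a proper eigensolver; real analyses would use an MDS routine from a numerical library):

```python
import math

def _power_iter(B, iters=2000):
    """Leading eigenpair of a symmetric matrix by power iteration."""
    n = len(B)
    v = [math.sin(i + 1.0) for i in range(n)]  # generic start vector
    for _ in range(iters):
        w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        if norm < 1e-12:
            break
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(B[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: double-center the squared RDM and embed
    conditions along the leading eigenvectors of the resulting matrix."""
    n = len(D)
    sq = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in sq]
    col = [sum(sq[i][j] for i in range(n)) / n for j in range(n)]
    grand = sum(row) / n
    B = [[-0.5 * (sq[i][j] - row[i] - col[j] + grand)
          for j in range(n)] for i in range(n)]
    X = [[0.0] * dim for _ in range(n)]
    for d in range(dim):
        lam, v = _power_iter(B)
        if lam <= 1e-12:
            break  # remaining structure is negligible (or non-Euclidean)
        scale = math.sqrt(lam)
        for i in range(n):
            X[i][d] = scale * v[i]
        for i in range(n):          # deflate the recovered component
            for j in range(n):
                B[i][j] -= lam * v[i] * v[j]
    return X
```

For an exactly Euclidean RDM, the recovered coordinates reproduce the original pairwise distances up to rotation and reflection; for non-Euclidean RDMs, negative eigenvalues signal the distortion that motivates stress-minimizing alternatives.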

7. Applications and Theoretical Significance

RDMs underpin contemporary cross-disciplinary advances in neuroscience, cognitive science, and machine learning:

  • Hierarchical and Dynamic Representational Analysis: Temporal RDM movies resolve hierarchical category encoding and dynamic reorganization of the population code in, e.g., macaque IT cortex (Lin et al., 2019).
  • Transfer and Representational Distance Learning: By optimizing a student's RDM to approximate that of a teacher (biological or artificial), RDM-based loss functions enable cross-architecture, cross-domain transfer without explicit unit alignment (McClure et al., 2015).
  • Benchmarking and Model Selection: RDM-driven benchmarks guide the development and comparison of representational models (e.g., DNNs), providing a quantitative basis for evaluating correspondence with human and animal neural data (Janik, 2019, Lin et al., 2023).
  • Limitations: Standard RDMs neglect higher-order statistics and are strictly geometric; topological and probabilistic generalizations partially address these limitations, yet estimation reliability and scalability remain subjects of active research.
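The representational-distance-learning idea above can be sketched as a loss that penalizes mismatch between corresponding student and teacher RDM entries (a minimal sketch of the objective's form, not the exact published loss; the function name is this example's assumption):

```python
def rdl_loss(student_rdm, teacher_rdm):
    """Mean squared error between corresponding upper-triangle RDM entries.

    Because the loss is defined on pairwise dissimilarities rather than on
    raw activations, it transfers representational geometry without
    requiring any unit-to-unit correspondence between student and teacher.
    """
    n = len(student_rdm)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum((student_rdm[i][j] - teacher_rdm[i][j]) ** 2
               for i, j in pairs) / len(pairs)
```

In training, the student RDM would be recomputed from its activations on each batch and this term added to the task loss.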

In sum, the Representational Dissimilarity Matrix formalism provides a mathematically rigorous, modality-invariant, and computationally tractable approach for distilling and comparing the geometry of high-dimensional response patterns. Its centrality in RSA, adaptability to methodological innovations, and broad utility across disciplines underscore its foundational status in contemporary representational analysis (Lin et al., 2019, Janik, 2019, Diedrichsen et al., 2020, McClure et al., 2015, Moerel et al., 31 May 2025, Lin et al., 2023, Duong et al., 2022, Zimmermann, 2022).
