Brain-Score Metric

Updated 18 February 2026
  • Brain-Score is a quantitative measure that evaluates the similarity between artificial neural network activations and human fMRI responses.
  • It uses statistical methods, Spearman rank correlation on representational dissimilarity matrices and Pearson-scored, regression-based prediction of fMRI time series, to compare models with neural data.
  • The metric aids in model selection and early stopping by linking computational performance with biological plausibility across vision and language domains.

The Brain-Score metric is a quantitative measure designed to assess the alignment between artificial neural network activations and neurological responses recorded from the human brain, most commonly via fMRI. It provides a framework to evaluate how “brain-like” a computational model’s internal representations are, with growing importance in both vision and LLM research. Metrics of this kind underpin efforts to bridge neuroscience and deep learning by making functional similarity between artificial and biological neural systems a directly optimizable and interpretable quantity (Blanchard et al., 2018).

1. Formal Definitions and Conceptual Foundations

Brain-Score, as introduced and adapted in multiple studies, quantifies the similarity between neural network activations and human brain measurements for the same external stimuli. The metric is instantiated differently depending on the domain (vision or language), but its core mathematics comprises either rank correlation between representational dissimilarity matrices (RDMs) (Blanchard et al., 2018) or normalized regression-based prediction of brain time series (Li, 2024).

For LLMs, the formal definition follows the framework of Schrimpf et al. (2018), as re-implemented by Caucheteux et al. (2023):

$$\mathrm{BrainScore}_{M,\,\mathrm{ROI},h,\ell} \;=\; \frac{\mathrm{corr}\bigl(\widehat{Y}_{M,h,\ell},\,Y_{h}\bigr)}{\mathrm{noise\_ceiling}_{h}}$$

where:

  • $Y_{h} \in \mathbb{R}^T$ is the cross-subject averaged, temporally aligned fMRI response time series for region of interest (ROI) $h$,
  • $\widehat{Y}_{M,h,\ell} \in \mathbb{R}^T$ is a linear regression prediction of brain activity generated from model $M$'s activations at layer $\ell$,
  • “corr” is the Pearson correlation coefficient,
  • $\mathrm{noise\_ceiling}_h$ is the split-half reliability upper bound for ROI $h$ (normalizing for measurement noise).
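
The normalized score reduces to a Pearson correlation divided by the noise ceiling. A minimal Python sketch (the function name and synthetic data are illustrative assumptions, not the reference implementation):

```python
import numpy as np

def brain_score(pred_ts, fmri_ts, noise_ceiling):
    """Normalized Brain-Score for one model/ROI/layer combination.

    pred_ts: predicted BOLD time series from a linear readout of model
             activations (length T).
    fmri_ts: cross-subject averaged fMRI time series for the ROI (length T).
    noise_ceiling: split-half reliability upper bound for the ROI.
    """
    r = np.corrcoef(pred_ts, fmri_ts)[0, 1]  # Pearson correlation
    return r / noise_ceiling

# Illustrative use with synthetic data:
rng = np.random.default_rng(0)
y = rng.standard_normal(200)                # "measured" BOLD
y_hat = y + 0.5 * rng.standard_normal(200)  # noisy "prediction"
score = brain_score(y_hat, y, noise_ceiling=0.8)
```

Dividing by the noise ceiling means a model that predicts as well as the data's own reliability allows can score near 1.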

For the original Human–Model Similarity (HMS) metric in vision:

$$\mathrm{HMS} = \rho\left(\hat R^{\text{human}}, \hat R^{\text{model}}\right)$$

where $\rho$ denotes Spearman’s rank-order correlation between the flattened vectors of RDM entries computed for human fMRI and network activations respectively.
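
The HMS computation can be sketched directly from this definition, with a self-check on identical RDMs (names are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

def hms(rdm_human, rdm_model):
    """Spearman rank correlation between the flattened upper-triangular
    entries of two RDMs (assumed square and symmetric)."""
    iu = np.triu_indices_from(rdm_human, k=1)
    rho, _ = spearmanr(rdm_human[iu], rdm_model[iu])
    return rho

# Identical RDMs should yield rho = 1:
rng = np.random.default_rng(1)
a = rng.random((6, 6))
rdm_a = (a + a.T) / 2        # symmetrize
np.fill_diagonal(rdm_a, 0)   # zero self-dissimilarity
score = hms(rdm_a, rdm_a)
```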

2. Construction of Neural and Model Features

a. Vision (RDM Approach)

Feature vectors $\vec v_i$ are extracted for each stimulus $s_i$: $\vec v_i = [f_1(s_i), f_2(s_i), \ldots, f_n(s_i)]^T$, where $f_j$ are either fMRI voxels (for humans) or neural units (for the model) (Blanchard et al., 2018).

Pairwise dissimilarity uses the centered Pearson correlation:

$$\psi(\vec v_i, \vec v_j) = 1 - \frac{(\vec v_i - \bar v_i) \cdot (\vec v_j - \bar v_j)}{\|\vec v_i - \bar v_i\|_2 \,\|\vec v_j - \bar v_j\|_2}$$

The RDM $R$ is populated as $R_{ij} = \psi(\vec v_i, \vec v_j)$ for $i < j$, with $R_{ii} = 0$ and symmetry enforced for $i > j$. The upper-triangular entries are flattened for the final metric computation.
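
Because the centered Pearson dissimilarity is exactly one minus the correlation between feature vectors, the whole RDM follows from one NumPy call over a stimuli-by-units feature matrix (a sketch; names are illustrative):

```python
import numpy as np

def rdm(features):
    """RDM from a (stimuli x units) feature matrix: R_ij = 1 - Pearson
    correlation between the feature vectors of stimuli i and j."""
    # np.corrcoef treats rows as variables, so rows = stimuli here;
    # the diagonal comes out as exactly 0 and symmetry is automatic.
    return 1.0 - np.corrcoef(features)

rng = np.random.default_rng(0)
feats = rng.random((4, 10))  # 4 stimuli, 10 voxels/units
R = rdm(feats)
```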

b. Language (Regression Approach)

For each model–ROI–layer combination, an encoding model linearly projects LLM layer activations to predicted brain fMRI time series, tested on held-out examples for correlation estimation. Models are scored per ROI and hemisphere, using group-averaged BOLD data and careful temporal alignment (Li, 2024).
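
This encoding pipeline can be sketched with ordinary least squares and a simple held-out split (the studies' exact regression, regularization, and cross-validation details may differ; names here are assumptions):

```python
import numpy as np

def encoding_score(layer_acts, bold, train_frac=0.8):
    """Fit a linear map from layer activations to a BOLD time series on a
    training split, then score the held-out split with Pearson correlation."""
    n = len(bold)
    k = int(n * train_frac)
    X_tr = np.column_stack([layer_acts[:k], np.ones(k)])  # add intercept
    w, *_ = np.linalg.lstsq(X_tr, bold[:k], rcond=None)
    X_te = np.column_stack([layer_acts[k:], np.ones(n - k)])
    pred = X_te @ w
    return np.corrcoef(pred, bold[k:])[0, 1]

# Synthetic check: BOLD that truly is a noisy linear readout of activations
rng = np.random.default_rng(0)
acts = rng.standard_normal((300, 8))
bold = acts @ rng.standard_normal(8) + 0.1 * rng.standard_normal(300)
r_heldout = encoding_score(acts, bold)
```

The held-out correlation from this step is what the noise-ceiling normalization in Section 1 is then applied to.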

3. Data Collection and Preprocessing

a. Vision Domain

  • Stimuli: 92 object images selected for category diversity, e.g., animate vs. inanimate (Blanchard et al., 2018).
  • fMRI: Data acquired from four participants, two sessions each, for a total of eight RDMs; voxels from bilateral inferotemporal cortex, no spatial smoothing or averaging.
  • RDMs: Publicly available tools deliver precomputed 92×92 RDMs; these are averaged across all sessions and subjects.

b. Language Domain

  • fMRI: 190 human subjects' BOLD data, processed by averaging across subjects and reducing by ROI or hemisphere.
  • LLM activations: 39 LLMs and their untrained counterparts; layerwise token embeddings extracted for direct mapping onto brain responses.

4. Topological and Statistical Analyses

For LLMs, the interpretability of Brain-Score is augmented by constructing topological features using persistent homology:

  • Time-delay embedding of 1-D fMRI or model-activation time series into $\mathbb{R}^3$, followed by Vietoris–Rips persistent homology in dimensions $k = 0, 1, 2$.
  • Wasserstein distances $W_q\bigl(D^k(P_Y), D^k(P_X)\bigr)$, for $q = 1, \ldots, 300$ and $q = \infty$, between persistence diagrams from each data source yield a set of 903 features per data pair (Li, 2024).
  • Ordinary least squares regressions are then fitted to explain Brain-Score variation in terms of these topological features, with model selection guided by cross-validated $R^2$ and Bonferroni-corrected $p$-values.
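
The first step, time-delay embedding, is a short NumPy operation; the subsequent Vietoris–Rips persistent homology and Wasserstein distances would come from a TDA library (e.g. ripser or giotto-tda) and are omitted here. The delay parameter `tau` is an assumption:

```python
import numpy as np

def delay_embed(ts, dim=3, tau=1):
    """Embed a 1-D time series into R^dim via time delays: point i is
    (ts[i], ts[i + tau], ..., ts[i + (dim-1)*tau])."""
    n = len(ts) - (dim - 1) * tau
    return np.column_stack([ts[i * tau : i * tau + n] for i in range(dim)])

cloud = delay_embed(np.arange(10.0))  # 8 points in R^3
```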

5. Empirical Findings and Quantitative Properties

a. Performance Correlation

For vision models (PredNet variants):

  • HMS correlates strongly with performance on both next-frame video prediction (Spearman’s $\rho = -0.646$, negative since lower MSE is better) and object-matching accuracy ($\rho = +0.575$), both $p < 0.001$ (Blanchard et al., 2018).

| Metric | Mean ± SD (All) | Mean ± SD (Top-10 HMS) |
|---|---|---|
| Next-frame MSE | 0.092 ± 0.148 | 0.009 ± 0.003 |
| Object-matching accuracy | 0.367 ± 0.134 | 0.459 ± 0.049 |
| HMS | 0.106 ± 0.055 | 0.178 ± 0.011 |

b. Early Stopping Strategy

  • HMS stabilizes within a model (SD $\leq 0.01$ over 25 epochs) after $\sim 33$ epochs, preceding the stabilization of standard task metrics.
  • Applying HMS-based early stopping would reduce training GPU time by $\simeq 67\%$ without adverse effect on downstream performance (Blanchard et al., 2018).
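
The stopping rule described above amounts to monitoring the spread of recent HMS values. A sketch using the reported numbers (window of 25 epochs, SD threshold 0.01; the exact protocol is an assumption):

```python
import numpy as np

def hms_early_stop(hms_history, window=25, tol=0.01):
    """Return True once the SD of HMS over the last `window` epochs is <= tol."""
    if len(hms_history) < window:
        return False  # not enough epochs to judge stability yet
    return float(np.std(hms_history[-window:])) <= tol

stable = hms_early_stop([0.10] * 40)        # flat history -> stop
unstable = hms_early_stop([0.1, 0.5] * 20)  # oscillating -> keep training
```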

c. Model Size and Brain-Score

  • Brain-Score increases with model size ($\log_{10}$ of parameter count): 83% of trained LLMs outperform their untrained counterparts in posterior cingulate cortex and other ROIs (Li, 2024).

d. Topological Feature Variability

  • Each brain ROI and hemisphere is best explained by a characteristic subset of Wasserstein/persistence features.
  • Increased topological dissimilarity between LLMs and fMRI typically reduces Brain-Score, as indicated by negative regression coefficients.

6. Interpretation, Domain-Specific Differences, and Implications

While the HMS variant in vision focuses on the geometric correspondence of category structure via RDM rank consistency, the LLM variant emphasizes normalized time series predictability using a regression baseline and normalization for biological measurement reliability. This suggests that the brain-score concept is adaptable across domains, with modality-appropriate implementations.

A plausible implication is that Brain-Score provides a basis for model selection and early stopping that is explicitly linked to neural data, supporting both mechanistic interpretability and practical efficiency in neural architecture search. Layer–ROI correspondence heatmaps reveal potentially specialized alignments between deep neural model layers and specific cortical loci (e.g., posterior temporal lobe), supporting the hypothesis that different brain regions or processing stages are best approximated by specific computational stages in artificial networks.

7. Limitations and Future Directions

Brain-Score and its variants are constrained by the quality, granularity, and task alignment of available neural data (fMRI, MEG). The normalization by noise ceiling controls for reliability but does not address all inter-subject or inter-task variability. Expanding Brain-Score-based evaluation to other modalities (e.g., electrophysiology, behavioral quantification), tasks, and model types is ongoing. The topological feature analysis demonstrates the potential for refined descriptive frameworks that link specific classes of dissimilarity to functional or anatomical variation (Li, 2024). Future work may further integrate non-linear encoding models, broader datasets, and dynamic adaptation of feature construction to maximize neuroscientific interpretability.
