Directional Conductance Divergence (DCD)
- DCD is a term used for two distinct notions: a metric quantifying asymmetric functional coverage in vision–language models, and anisotropic conductance divergence in strained graphene nanostructures.
- In VLMs, DCD compares layerwise conductance profiles through an entropy-regularized softmax over block importances, yielding model-specific transferability estimates from block importance deviations.
- In strained graphene, DCD captures the divergence of ballistic conductance along specific crystallographic directions, signaling emergent directional superconductivity near critical strain.
Directional Conductance Divergence (DCD) denotes two distinct concepts in contemporary research literature: (i) an asymmetric, task- and model-specific metric for assessing functional similarity and transferability between visual tasks in vision–LLMs (VLMs), and (ii) the divergence of ballistic conductance along specific crystallographic directions in strained graphene at a critical deformation. The former is central to few-shot model selection in large VLM zoos; the latter describes the emergence of “directional superconductivity” in graphene nanostructures. Both share the feature that conductance (or functional coverage) becomes highly anisotropic and, in a rigorous sense, diverges or saturates along preferred axes as determined by system-specific criteria (Yang et al., 1 Feb 2026, Soodchomshom, 2011).
1. Asymmetric Task Similarity in Vision–LLMs
DCD in the context of VLMs encapsulates the need for an asymmetric, entropy-regularized divergence to measure how a pretrained source representation covers the blocks critical to a target task. For a given model $m$, the visual encoder is partitioned into $B$ coarse-grained blocks. Each image $x$ from task $t$ yields a layerwise conductance vector $c^m(x) = (c^m_1(x), \dots, c^m_B(x)) \in \mathbb{R}^B$, where

$$c^m_b(x) = \mathrm{Cond}_b\big(f^m(x)\big),$$

with $\mathrm{Cond}_b$ the conductance attributed to block $b$ for output embedding $f^m(x)$. The mean profile over task images, $\bar{c}^{\,m}_t = \tfrac{1}{|D_t|}\sum_{x \in D_t} c^m(x)$, allows model-level summaries on both source ($\bar{c}^{\,m}_S$) and target ($\bar{c}^{\,m}_T$) tasks.
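If the paper's layerwise conductance follows the standard layer-conductance attribution (Dhamdhere et al., 2019), the extraction step can be sketched with Captum's `LayerConductance`. The scalar readout (squared norm of the output embedding), the block granularity, and the per-block aggregation below are illustrative assumptions, not the paper's verbatim procedure:

```python
import torch
from captum.attr import LayerConductance  # standard layer-conductance attribution

def block_conductance(encoder, blocks, images, n_steps=32):
    """Per-image conductance vector c(x) in R^B (one entry per encoder block).

    `encoder` maps pixel tensors to output embeddings; `blocks` is a list of
    the encoder's coarse-grained nn.Module blocks. The squared-norm readout
    below is an illustrative stand-in for the paper's scalar target.
    """
    def embedding_energy(pixel_values):
        z = encoder(pixel_values)          # output embedding f^m(x)
        return (z ** 2).sum(dim=-1)        # one scalar per image

    rows = []
    for x in images:                       # x: tensor of shape (1, C, H, W)
        vals = []
        for blk in blocks:
            lc = LayerConductance(embedding_energy, blk)
            attr = lc.attribute(x, n_steps=n_steps)
            vals.append(attr.abs().sum().item())   # aggregate block attribution
        rows.append(vals)
    return torch.tensor(rows)              # shape: (num_images, B)
```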
The normalized activation is obtained as

$$\hat{c}^{\,m}_{t,b} = \frac{\bar{c}^{\,m}_{t,b}}{\sum_{b'=1}^{B} \bar{c}^{\,m}_{t,b'} + \epsilon},$$

and the importance distribution arises via the entropy-regularized softmax

$$p^{T}_{b} = \frac{\exp\big(\beta\,\hat{c}^{\,m}_{T,b}\big)}{\sum_{b'=1}^{B} \exp\big(\beta\,\hat{c}^{\,m}_{T,b'}\big)},$$

which quantifies the target-specific saliency of encoder blocks.
The directional deviation between source and target is the per-block relative deviation

$$\delta_b = \frac{\big|\hat{c}^{\,m}_{S,b} - \hat{c}^{\,m}_{T,b}\big|}{\hat{c}^{\,m}_{T,b} + \epsilon},$$

weighted by target saliency, inducing the metric

$$\mathrm{DCD}(S \to T) = \sum_{b=1}^{B} p^{T}_{b}\,\delta_b,$$

which is generally non-symmetric due to the directional weighting $p^T_b$. The degree to which $S$ covers the blocks salient for $T$ determines the inferred model transferability (Yang et al., 1 Feb 2026).
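A minimal NumPy sketch of the metric as reconstructed above; the sum-normalization, the relative-deviation form, and the temperature $\beta$ are assumptions consistent with the definitions here, not a verbatim implementation:

```python
import numpy as np

def dcd(c_source, c_target, beta=10.0, eps=1e-8):
    """DCD(S -> T) from mean per-block conductance profiles of shape (B,).

    Mirrors the formulas above: normalize profiles, build the target-saliency
    softmax p^T, and sum saliency-weighted relative deviations.
    """
    c_s = np.asarray(c_source, dtype=float)
    c_t = np.asarray(c_target, dtype=float)
    hat_s = c_s / (c_s.sum() + eps)                  # normalized source profile
    hat_t = c_t / (c_t.sum() + eps)                  # normalized target profile

    logits = beta * hat_t
    p_t = np.exp(logits - logits.max())              # stable entropy-reg. softmax
    p_t /= p_t.sum()

    delta = np.abs(hat_s - hat_t) / (hat_t + eps)    # per-block relative deviation
    return float((p_t * delta).sum())
```

Note the asymmetry: `dcd(a, b) != dcd(b, a)` in general, since the saliency weights always come from the second (target) argument.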
2. Entropy-Regularized Alignment and Importance Weighting
Entropy-regularized alignment ensures that the block importance distribution is both sharp where $T$ displays significant conductance and spread out enough to preserve statistical stability. Formally, this amounts to maximizing

$$\max_{p \in \Delta^{B-1}} \; \sum_{b=1}^{B} p_b\,\hat{c}^{\,m}_{T,b} \;+\; \frac{1}{\beta}\,H(p)$$

for $p$ in the $(B-1)$-simplex, with Shannon entropy $H(p) = -\sum_b p_b \log p_b$. The resulting softmax has a tunable "attention intensity" $\beta$, interpolating between uniform weighting for $\beta \to 0$ and sharp focus on the maximal block(s) as $\beta \to \infty$. This framework underlines the model- and target-specific directionality intrinsic to DCD, precluding symmetric elementary divergences such as cosine or Jensen–Shannon, which do not sufficiently account for functional asymmetry between tasks. Ablations confirm a deficit in NDCG@5 for symmetric proxies (Yang et al., 1 Feb 2026).
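As a standard check (notation as above), stationarity of this objective on the simplex recovers the softmax:

```latex
% Lagrangian for the entropy-regularized alignment, with multiplier lambda
% enforcing the constraint sum_b p_b = 1:
%   L(p, lambda) = sum_b p_b c_{T,b} + (1/beta) H(p) + lambda (sum_b p_b - 1)
\[
\frac{\partial L}{\partial p_b}
  = \hat{c}^{\,m}_{T,b} - \frac{1}{\beta}\left(\log p_b + 1\right) + \lambda = 0
  \quad\Longrightarrow\quad
  p_b \;\propto\; \exp\!\big(\beta\,\hat{c}^{\,m}_{T,b}\big),
\]
% and normalizing over b yields the entropy-regularized softmax used for p^T.
```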
3. DCD in Ballistic Transport of Strained Graphene
In the context of uniaxially zigzag-strained graphene, DCD refers to the physical divergence of conductance along the armchair direction as critical strain is approached. The system is governed by the modified tight-binding Hamiltonian

$$H = -\sum_{\langle i,j \rangle} t_{ij}\, a^{\dagger}_i b_j + \mathrm{h.c.},$$

with anisotropic hopping parameters ($t$ on the bonds unaffected by the strain, $t'$ on the strained bond). Below a critical strain $\varepsilon_c$, the band structure remains gapless; at $\varepsilon_c$ (where $t' = 2t$), the Dirac points merge. The energy spectrum is of anisotropic Weyl form,

$$E(\mathbf{k}) = \pm\hbar\sqrt{v_x^2 k_x^2 + v_y^2 k_y^2},$$

with $v_y$, the velocity transverse to armchair propagation, vanishing as $\varepsilon \to \varepsilon_c$. The Landauer conductance for carrier propagation at angle $\theta$ is determined by the number of transverse modes $N(E)$, producing (Soodchomshom, 2011)

$$G_{AC}(E) = G_0\, N_{AC}(E), \qquad N_{AC}(E) \simeq \frac{W\,|E|}{\pi\hbar v_y} \;\to\; \infty \quad \text{as } \varepsilon \to \varepsilon_c,$$

where $G_0$ is the per-mode conductance quantum (including degeneracies) and $W$ the sample width, while the zigzag-direction conductance $G_{ZZ} \propto W|E|/(\pi\hbar v_x)$ remains finite. This physical "divergence" exemplifies directional (anisotropic) electronic transport and provides an analogy to superconductivity along selected directions.
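A short numeric sketch of the mode-count argument above. The constants ($W$, $E$, and the velocity values) are illustrative, not fits to Soodchomshom (2011); the point is only that $N \propto 1/v_\perp$ blows up as the transverse velocity collapses:

```python
import numpy as np

HBAR = 1.055e-34          # J*s
E = 0.1 * 1.602e-19       # carrier energy: 0.1 eV in joules
W = 1e-6                  # sample width: 1 micron
V_X = 1e6                 # armchair-axis velocity (m/s), stays finite

def transverse_modes(v_perp):
    """N(E) ~ W|E| / (pi * hbar * v_perp) for a given transverse velocity."""
    return W * E / (np.pi * HBAR * v_perp)

# As strain approaches epsilon_c, v_y -> 0 and the armchair mode count blows up,
# while the zigzag-direction count (transverse velocity v_x) is unchanged.
for v_y in [1e6, 1e5, 1e4, 1e3]:
    print(f"v_y = {v_y:.0e} m/s  ->  N_AC ~ {transverse_modes(v_y):.3e},"
          f"  N_ZZ ~ {transverse_modes(V_X):.3e}")
```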
4. Computational Workflow and Algorithm
For model selection in VLMs, the complete DCD computation is:
- Layerwise Conductance Extraction: For each block $b$ and sample $x$ in source and target, compute $c^m_b(x)$.
- Profile Averaging: Average conductance over all samples to obtain $\bar{c}^{\,m}_S$ and $\bar{c}^{\,m}_T$.
- Normalization: Obtain $\hat{c}^{\,m}_S$ and $\hat{c}^{\,m}_T$ via $\epsilon$-regularized normalization.
- Block Importance: Compute $p^T_b$ via the softmax over $\beta\,\hat{c}^{\,m}_T$.
- Relative Deviations: Evaluate per-block deviations $\delta_b = |\hat{c}^{\,m}_{S,b} - \hat{c}^{\,m}_{T,b}|\,/\,(\hat{c}^{\,m}_{T,b} + \epsilon)$.
- Metric Aggregation: Sum to yield $\mathrm{DCD}(S \to T) = \sum_b p^T_b\,\delta_b$ (a composed end-to-end sketch follows this list).
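Reusing the `dcd` sketch from Section 1, a hypothetical end-to-end pass for one model, with random stand-ins for the mean conductance profiles the extraction step would produce:

```python
import numpy as np

rng = np.random.default_rng(0)
B = 12                                      # number of coarse-grained blocks
profile_S = rng.random(B)                   # mean source-task conductance (stand-in)
profile_T = rng.random(B)                   # mean target-task conductance (stand-in)
print("DCD(S->T) =", dcd(profile_S, profile_T, beta=10.0))
print("DCD(T->S) =", dcd(profile_T, profile_S, beta=10.0))  # generally differs
```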
In large-scale evaluation, rankings for held-out target tasks are predicted by aggregating known source task ranks, weighted by exponentiated negative DCD differences. The principal metrics are NDCG@5 and Kendall's $\tau@5$ (Yang et al., 1 Feb 2026).
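A sketch of this aggregation step and of NDCG@5 scoring. The softmax-style weighting, the score matrix, and the linear-gain NDCG variant are illustrative assumptions; the paper's exact aggregation rule may differ:

```python
import numpy as np

def predict_target_ranking(source_scores, dcd_to_target):
    """source_scores: (num_sources, num_models) known per-task model scores.
    dcd_to_target: (num_sources,) values of DCD(S_i -> T) for each source task.
    Returns model indices sorted from best to worst predicted for task T."""
    w = np.exp(-np.asarray(dcd_to_target))          # closer source tasks weigh more
    w = w / w.sum()
    predicted = w @ np.asarray(source_scores)       # weighted score aggregate
    return np.argsort(-predicted)

def ndcg_at_k(true_scores, predicted_order, k=5):
    """NDCG@k with the true per-model scores as graded relevance (linear gains)."""
    true_scores = np.asarray(true_scores, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))  # 1 / log2(rank + 1)
    dcg = (true_scores[predicted_order[:k]] * discounts).sum()
    ideal = (np.sort(true_scores)[::-1][:k] * discounts).sum()
    return dcg / ideal if ideal > 0 else 0.0
```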
For strained graphene, the analytical calculation derives from evaluating Landauer-mode integrals as $\varepsilon \to \varepsilon_c$, yielding formally divergent results for the armchair conductance while the zigzag conductance remains regular (Soodchomshom, 2011).
5. Experimental Evidence and Comparative Benchmarks
On 48 open-source VLMs and 21 image benchmarks (classification, OCR, satellite, medical, etc.), DCD-based selection achieves a 14.7% NDCG@5 improvement over symmetric and data-expensive baselines, with performance saturating at 25 source images per task. The approach demonstrates consistent gains in few-shot settings (one target image, 25 source images): DCD yields NDCG@5 = 0.707 versus SWAB’s 0.616, and $\tau@5 = 0.365$ vs. 0.318. Ablations confirm the necessity of both directionality and entropy-regularized block alignment (Yang et al., 1 Feb 2026).
For graphene physics, the analytical divergence is predicted under idealized conditions (zero temperature, ballistic regime, tight-binding model), with distinct physical signatures such as a diverging density of states, vanishing resistance along the armchair axis, and a synthetic superconducting analogy as the bandgap opens for $\varepsilon > \varepsilon_c$ (Soodchomshom, 2011).
6. Interpretation, Importance, and Scope
DCD, as formalized in VLM selection, is distinctively asymmetric, model-aware, and target-driven. It rectifies the limitations of previous proxies by quantifying transferability in a manner rooted in internal model dynamics rather than in textual or distributional similarity, enabling data- and compute-efficient model selection without running direct inference on the target task. The link between coverage of salient functional blocks and predicted transferability is a plausible mechanism underpinning effective transfer in modern multimodal architectures.
In condensed matter, DCD manifests physically as a divergence in conductance, arising from strong anisotropy and band-structure engineering. The analogy to superconductivity is justified by the vanishing resistance along the armchair axis and the opening of an excitation gap, albeit in a ballistic non-interacting regime.
Both utilizations of DCD reflect a broader recognition of directionality, asymmetry, and anisotropy as fundamental features—whether in information transfer across neural architectures, or in charge transport under symmetry-breaking structural perturbations. Future work may extend DCD metrics to broader classes of models or complex materials, contingent on analogous notions of saliency or mode-count divergence.
Key References:
- “Model Specific Task Similarity for Vision LLM Selection via Layer Conductance” (Yang et al., 1 Feb 2026)
- “Possible strain-induced directional superconductivity in graphene” (Soodchomshom, 2011)