DeepSeek-VL2 and TRILLsson Combination

Updated 18 January 2026
  • The paper presents a novel DIVINE architecture that fuses DeepSeek-VL2 and TRILLsson embeddings through hierarchical VAEs and sparse gated fusion for joint neuro-facial disorder prediction.
  • It employs a two-level VAE approach to disentangle local and utterance-level latent representations, yielding interpretable shared and modality-specific features.
  • Empirical results show significant gains in classification accuracy and F1 scores, demonstrating robust performance under both full and partial modality inputs.

The combination of DeepSeek-VL2 (visual) and TRILLsson (audio) embeddings represents a state-of-the-art approach for multimodal neuro-facial disorder assessment within the DIVINE framework. This architecture leverages hierarchical disentanglement, adaptive fusion with sparse gating, and learnable clinical symptom representations. The approach performs joint prediction of disorder class and severity, handling synchronized speech and facial video inputs to achieve superior results, generalization to single-modality scenarios, and interpretable clinical representations (Akhtar et al., 11 Jan 2026).

1. Foundation Model Embedding Extraction

DIVINE utilizes frozen, pretrained foundation models for both visual and audio modalities to extract robust latent representations of neuro-facial data.

  • DeepSeek-VL2: The “base” DeepSeek-VL2 vision encoder (with weights frozen during DIVINE training) yields per-clip embeddings $X_v = \mathrm{DS}(v) \in \mathbb{R}^{T_v \times d_v}$, taken from the final block of the vision transformer (prior to MoE gating), with $d_v = 768$.
  • TRILLsson: The distilled TRILLsson encoder (also frozen) produces audio embeddings $X_a = \mathrm{TR}(a) \in \mathbb{R}^{T_a \times d_a}$, sourced from the last hidden states, with $d_a = 512$.

These embeddings serve as the basis for subsequent disentanglement and fusion operations.

| Model | Output Dimension | Extraction Layer |
|---|---|---|
| DeepSeek-VL2 | $d_v = 768$ | Final ViT block (pre–MoE gating) |
| TRILLsson | $d_a = 512$ | Last encoder hidden state |
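As a shape-level sketch of this extraction step, the snippet below substitutes random-feature stand-ins for the real frozen encoders; the function names and input shapes are illustrative assumptions, and only the output dimensions $d_v = 768$ and $d_a = 512$ come from the text above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the frozen encoders: DeepSeek-VL2 maps a clip of T_v frames
# to a (T_v, 768) embedding; TRILLsson maps T_a audio frames to (T_a, 512).
def deepseek_vl2_encode(video_frames: np.ndarray) -> np.ndarray:
    T_v = video_frames.shape[0]
    return rng.standard_normal((T_v, 768))   # placeholder for real ViT features

def trillsson_encode(audio_frames: np.ndarray) -> np.ndarray:
    T_a = audio_frames.shape[0]
    return rng.standard_normal((T_a, 512))   # placeholder for real hidden states

X_v = deepseek_vl2_encode(np.zeros((30, 224, 224, 3)))   # X_v in R^{T_v x d_v}
X_a = trillsson_encode(np.zeros((100, 40)))              # X_a in R^{T_a x d_a}
print(X_v.shape, X_a.shape)
```

Only the per-frame embedding shapes matter downstream; the disentanglement stages consume these two matrices regardless of how the frozen backbones produced them.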

2. Hierarchical Disentanglement via Two-Level VAEs

DIVINE introduces a dual-stage VAE disentanglement procedure, operating at both local and global levels for each modality.

  • Local-window VAE: Each input sequence is windowed, and for window $t$ and modality $m \in \{v, a\}$:
    • Posterior parameters: $(\mu_w^m(t), \log \sigma_w^m(t)) = f^w_{\mathrm{enc}}(X'_m[t])$.
    • Latent sample: $z^m_w(t) = \mu_w^m(t) + \exp\!\left(\tfrac{1}{2} \log \sigma_w^m(t)\right) \odot \epsilon$, with $\epsilon \sim \mathcal{N}(0, I)$.
    • Reconstruction: $\hat X'_m[t] = f^w_{\mathrm{dec}}(z^m_w(t))$.
    • Loss: $\mathcal{L}^m_w = \frac{1}{T''} \sum_t \| X'_m[t] - \hat X'_m[t] \|^2 + \mathrm{KL}\!\left[\mathcal{N}(\mu_w^m(t), (\sigma_w^m(t))^2) \,\|\, \mathcal{N}(0, I)\right]$.
    • Temporal pooling yields the global vector $\bar z^m = \frac{1}{T''} \sum_t z^m_w(t)$.
  • Utterance-level VAE: For each $\bar z^m$, two parallel encoders decompose the latent into:
    • Shared latent $z^m_s$, with parameters tied across modalities.
    • Private latent $z^m_p$, modality-specific.
    • The full loss combines reconstruction with a $\beta_s$-weighted KL term for the shared latent and a $\beta_p$-weighted KL term for the private latent:

    $$\mathcal{L}^m_u = \| \bar z^m - f_{\mathrm{dec}}(z^m_s, z^m_p) \|^2 + \beta_s\,\mathrm{KL}\!\left[\mathcal{N}(\mu_s^m, (\sigma_s^m)^2) \,\|\, \mathcal{N}(0, I)\right] + \beta_p\,\mathrm{KL}\!\left[\mathcal{N}(\mu_p^m, (\sigma_p^m)^2) \,\|\, \mathcal{N}(0, I)\right]$$

    where $\beta_s$ and $\beta_p$ are chosen by validation.

This structure disentangles shared and modality-specific sources at multiple temporal scales, enhancing interpretability and generalization for clinical assessments.
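A minimal numeric sketch of the local-window step (reparameterization, reconstruction loss, and temporal pooling), assuming simple linear maps and toy dimensions in place of the paper's $f^w_{\mathrm{enc}}$ / $f^w_{\mathrm{dec}}$ networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_sigma):
    # z = mu + exp(0.5 * log_sigma) * eps, with eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_sigma) * eps

def kl_to_std_normal(mu, log_sigma):
    # KL[N(mu, sigma^2) || N(0, I)], summed over latent dimensions
    return 0.5 * np.sum(np.exp(log_sigma) + mu**2 - 1.0 - log_sigma)

# Toy windowed sequence: T'' windows, each a d-dim feature (hypothetical sizes).
T, d, d_z = 8, 32, 16
X = rng.standard_normal((T, d))

# Hypothetical linear encoder/decoder standing in for f_enc^w / f_dec^w.
W_mu = rng.standard_normal((d, d_z)) * 0.1
W_ls = rng.standard_normal((d, d_z)) * 0.1
W_dec = rng.standard_normal((d_z, d)) * 0.1

mu, log_sigma = X @ W_mu, X @ W_ls
z_w = reparameterize(mu, log_sigma)            # per-window latents z_w^m(t)
X_hat = z_w @ W_dec                            # reconstructions
loss_w = np.mean(np.sum((X - X_hat) ** 2, axis=1)) + kl_to_std_normal(mu, log_sigma) / T
z_bar = z_w.mean(axis=0)                       # temporal pooling -> global vector
print(z_bar.shape)
```

The pooled vector `z_bar` is what the utterance-level VAE then splits into shared and private latents.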

3. Sparse Gated Fusion and Clinical Token Injection

Following disentanglement, the system adaptively fuses latent spaces and integrates clinical priors.

  • Sparse gated fusion: For private encodings $z^v_p$, $z^a_p$:
    • Gates: $g_v = \sigma(W_v z^v_p + b_v)$, $g_a = \sigma(W_a z^a_p + b_a)$ (elementwise sigmoid).
    • Fused representation: $h_{\mathrm{fused}} = g_v \odot z^v_s + g_a \odot z^a_s$.
    • Sparsity is regularized with $\mathcal{L}_{\mathrm{sparse}} = \| g_v \|_1 + \| g_a \|_1$.
  • Learnable symptom tokens: The fusion vector is prepended with $K$ learned “symptom tokens” $\{T_1, \ldots, T_K\}$:
    • Sequence: $S = [T_1, \dots, T_K, h_{\mathrm{fused}}]$.
    • A transformer-like dense block produces $H_{\mathrm{out}}$, with a token-specialization penalty $\mathcal{L}_{\mathrm{token}}$.

This layered fusion enables interpretability (by relating features to clinical symptom axes) and provides robustness to missing modalities.
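The gating and token-injection steps above can be sketched numerically; the latent dimension, token count, and random weights below are toy assumptions, not the paper's actual sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

d_s, K = 16, 4
# Shared / private utterance-level latents per modality (toy values).
z_s_v, z_p_v = rng.standard_normal(d_s), rng.standard_normal(d_s)
z_s_a, z_p_a = rng.standard_normal(d_s), rng.standard_normal(d_s)

# Gates are computed from the *private* latents, applied to the *shared* latents.
W_v, b_v = rng.standard_normal((d_s, d_s)) * 0.1, np.zeros(d_s)
W_a, b_a = rng.standard_normal((d_s, d_s)) * 0.1, np.zeros(d_s)
g_v, g_a = sigmoid(z_p_v @ W_v + b_v), sigmoid(z_p_a @ W_a + b_a)

h_fused = g_v * z_s_v + g_a * z_s_a                  # elementwise gated fusion
loss_sparse = np.abs(g_v).sum() + np.abs(g_a).sum()  # L1 sparsity penalty

# Prepend K learnable symptom tokens to form the sequence S.
tokens = rng.standard_normal((K, d_s))
S = np.vstack([tokens, h_fused[None, :]])
print(S.shape)
```

Because each gate lies in (0, 1) per dimension, a modality can be softly suppressed dimension by dimension, which is what underlies the missing-modality robustness claimed above.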

4. Multitask Prediction and Aggregate Loss

The architecture supports joint diagnosis and severity scoring through multitask output heads.

  • Heads: Classification and severity, with softmax outputs:

$$\hat y_{\mathrm{cls}} = \mathrm{softmax}(W_{\mathrm{cls}} \mathbf{h} + b_{\mathrm{cls}})$$

$$\hat y_{\mathrm{sev}} = \mathrm{softmax}(W_{\mathrm{sev}} \mathbf{h} + b_{\mathrm{sev}})$$

where $\mathbf{h}$ is the fused representation after the dense block.

  • Losses:
    • Cross-entropy for classification ($\mathcal{L}_{\mathrm{cls}}$) and severity ($\mathcal{L}_{\mathrm{sev}}$).
    • Cycle-consistency ($\mathcal{L}_{\mathrm{cycle}}$) aligns the shared latents across modalities.
    • Sparse gating and token-specialization penalties as above.
    • Full VAE reconstruction and KL objectives.

The total loss is:

$$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{cls}} + \alpha\,\mathcal{L}_{\mathrm{sev}} + \epsilon \left( \mathcal{L}_{\mathrm{cycle}} + \mathcal{L}_{\mathrm{sparse}} + \lambda\,\mathcal{L}_{\mathrm{token}} \right) + \sum_{m \in \{v, a\}} \left( \mathcal{L}^m_w + \mathcal{L}^m_u \right)$$

with fixed hyperparameters $\alpha = 2$, $\epsilon = 0.1$, $\lambda = 0.4$.
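The output heads and loss composition can be sketched numerically; the label indices, latent dimension, and placeholder values for the auxiliary losses are toy assumptions, while the weights $\alpha$, $\epsilon$, $\lambda$ follow the reported values:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d_h, n_cls, n_sev = 16, 3, 4
h = rng.standard_normal(d_h)          # stand-in for the post-dense-block representation

W_cls, b_cls = rng.standard_normal((d_h, n_cls)) * 0.1, np.zeros(n_cls)
W_sev, b_sev = rng.standard_normal((d_h, n_sev)) * 0.1, np.zeros(n_sev)
y_cls = softmax(h @ W_cls + b_cls)    # disorder-class probabilities
y_sev = softmax(h @ W_sev + b_sev)    # severity-level probabilities

# Cross-entropy against toy ground-truth labels (indices are arbitrary).
L_cls, L_sev = -np.log(y_cls[1]), -np.log(y_sev[2])

# Reported weights alpha=2, eps=0.1, lambda=0.4; auxiliary losses are toy values.
alpha, eps, lam = 2.0, 0.1, 0.4
L_cycle = L_sparse = L_token = 0.5
L_vae = 1.0                           # stand-in for the summed L_w^m + L_u^m terms
L_total = L_cls + alpha * L_sev + eps * (L_cycle + L_sparse + lam * L_token) + L_vae
print(f"L_total = {L_total:.3f}")
```

Note the structural point the weighting encodes: severity prediction is weighted twice as heavily as classification, while the alignment and sparsity regularizers contribute at one-tenth scale.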

5. Training Protocol and Hyperparameterization

The model is trained and evaluated on the Toronto NeuroFace dataset using subject-wise five-fold cross-validation.

  • Optimization: Adam optimizer, learning rate $1 \times 10^{-3}$, batch size 32, up to 50 epochs with early stopping.
  • Model size: Fusion models contain 3.5–6.5M trainable parameters; unimodal variants contain roughly 1M.
  • CNN refinement: Each backbone's embeddings are refined by two 1D convolutional blocks followed by fully connected layers.
  • Regularization: Dropout and $L_2$ regularization are applied to the output heads as required.

These choices are calibrated for both convergence and robust generalization under full or partial modality input regimes.
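A configuration sketch of the reported protocol; the patience-based early-stopping criterion and its patience value are assumptions, since the text only states "up to 50 epochs with early stopping":

```python
# Training configuration matching the reported protocol.
config = {
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "batch_size": 32,
    "max_epochs": 50,
    "cross_validation": {"folds": 5, "split": "subject-wise"},
    "regularization": {"dropout": True, "l2_on_heads": True},
}

def should_stop(val_losses, patience=5):
    # Simple patience-based early stopping (patience value is hypothetical):
    # stop when no improvement over the best earlier loss for `patience` epochs.
    if len(val_losses) <= patience:
        return False
    best = min(val_losses[:-patience])
    return all(v >= best for v in val_losses[-patience:])

print(should_stop([1.0, 0.9, 0.8, 0.81, 0.82, 0.83, 0.84, 0.85]))  # prints True
```

Subject-wise folds matter here: splitting by subject rather than by clip prevents identity leakage between train and validation sets.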

6. Empirical Performance and Ablative Analyses

The DIVINE framework using DeepSeek-VL2 and TRILLsson embeddings achieves strong performance and demonstrates the impact of each architectural component through extensive ablation studies.

  • Unimodal CNN accuracy (multitask):
    • DeepSeek-VL2: 88.94%
    • TRILLsson: 90.51%
  • Naive DS+TR concatenation: 94.65% accuracy, 93.87% F1
  • Full DIVINE (DS+TR): 98.26% accuracy, 97.51% F1
  • Modality-constrained regimes (accuracy / F1):
    • Audio only: 89.27% / 88.23%
    • Video only: 84.34% / 83.20%
  • Regularization ablation (DS+TR, accuracy / F1):
    • w/o cycle-consistency: 96.14% / 94.95%
    • w/o sparse gate: 95.83% / 94.21%
    • w/o token loss: 95.62% / 93.89%
  • Bottleneck ablation (accuracy / F1):
    • Flat fusion: 93.87% / 92.10%
    • Single-level VAE: 95.22% / 93.80%
    • Two-level (full) DIVINE: 98.26% / 97.51%

This empirical evidence quantifies the contribution of hierarchical disentanglement, sparse gating, and symptom tokenization relative to baseline encoders and naive fusion.

7. Context, Implications, and Outlook

The DIVINE framework, as the first approach to integrate cross-modal disentanglement, adaptive sparse gating, and multitask predictive heads for oro-facial neurological assessment, establishes a new empirical standard for multimodal fusion using DeepSeek-VL2 and TRILLsson representations (Akhtar et al., 11 Jan 2026).

Its design enables:

  • Clinical interpretability through explicit shared/private latent decomposition.
  • Robustness to missing modalities via sparse gating and multitask heads.
  • Superior accuracy and F1 compared to unimodal approaches and simple fusion baselines, particularly in challenging cross-modality clinical settings.

A plausible implication is that this paradigm of multimodal representation disentanglement, dense fusion informed by symptom priors, and multitask learning may extend beyond neuro-facial disorder diagnostics to other clinical, behavioral, or affective computing domains requiring joint modeling of speech and facial data.
