
StutterFuse: Retrieval-Augmented Disfluency Detection

Updated 22 December 2025
  • StutterFuse is a retrieval-augmented classifier that leverages a non-parametric memory bank to detect complex, overlapping stuttering events with enhanced precision and recall.
  • It employs SetCon, a Jaccard-weighted metric learning loss that structures the embedding space based on label overlap, effectively mitigating modality collapse.
  • A gated fusion mechanism dynamically combines audio and retrieval expert outputs, achieving state-of-the-art multi-label detection and robust cross-domain, cross-lingual generalization.

StutterFuse is a retrieval-augmented classification framework for multi-label stuttering and disfluency detection in speech, integrating memory-augmented deep learning, Jaccard-weighted metric learning, and dynamic mixture-of-experts fusion. In contrast to conventional parametric approaches, StutterFuse incorporates a non-parametric memory bank of clinical audio exemplars, enabling classification by reference and mitigating the challenge of detecting overlapping and complex disfluencies. The architecture addresses "modality collapse"—where naive reliance on retrieval increases recall but erodes precision—by introducing a Jaccard-weighted metric loss (SetCon) and a gated expert fusion mechanism. StutterFuse achieves state-of-the-art multi-label detection and exhibits strong zero-shot cross-dataset and cross-lingual generalization (Singh et al., 15 Dec 2025).

1. Architectural Framework and Retrieval-Augmented Pipeline

StutterFuse comprises a three-stage inference and learning pipeline:

  1. Wav2Vec 2.0 Feature Extraction: Each 3 s audio segment is processed by a frozen Wav2Vec2-large-960h model, extracting $T=150$ frames of $D=1024$-dimensional transformer hidden states, producing $\mathbf{F}_X \in \mathbb{R}^{150 \times 1024}$.
  2. SetCon Embedder and Memory Bank: The features $\mathbf{F}_X$ are mapped to a normalized 1024-dimensional embedding $z$ via a BiGRU (256 units per direction) with attention and ReLU projection. These embeddings form a Faiss IndexFlatIP memory bank (unit-normalized, so inner-product search equates to cosine similarity), holding $\sim$58k clinical and augmented example vectors.
  3. Retrieval-Augmented Classifier (RAC) and Gated Fusion:
    • At inference, the query embedding $z_q$ retrieves its $k=5$ nearest neighbors $\{z_{n_i}\}$, along with similarity scores $s_i$ and ground-truth labels $y_{n_i}$.
    • Two fusion paradigms are implemented:
      • Mid-Fusion (Cross-Attention): The query and neighbor representations are fused via Conformer-based cross-attention and MLP.
      • Late-Fusion ("StutterFuse" configuration): Independent "audio" and "retrieval" experts are fused using a gating network, $g=\sigma(W_g[z_a;z_r]+b_g)$, with the fused vector $[z_a; g \cdot z_r]$ input to the final classifier.

A schematic overview is shown below:

| Stage | Input | Output |
|-------|-------|--------|
| Wav2Vec2 | 3 s audio (16 kHz) | $\mathbf{F}_X$ ($150\times1024$) |
| SetCon + Faiss | $\mathbf{F}_X$ | $z\in\mathbb{R}^{1024}$ (memory bank) |
| Retrieval + Fusion | $z_q$ and $\{z_{n_i},y_{n_i},s_i\}$ | Multi-label stutter probabilities |
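Since the bank vectors are unit-normalized, inner-product search coincides with cosine-similarity search. A minimal NumPy sketch of the retrieval stage follows; random vectors stand in for real SetCon embeddings, and the actual system uses a Faiss IndexFlatIP index over $\sim$58k exemplars rather than a brute-force matrix product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in memory bank: unit-normalized 1024-d embeddings with multi-hot
# labels over the five disfluency classes. The real bank stores ~58k
# clinical and augmented exemplars in a Faiss IndexFlatIP index; for unit
# vectors, inner-product search equals cosine-similarity search.
bank = rng.standard_normal((1000, 1024)).astype(np.float32)
bank /= np.linalg.norm(bank, axis=1, keepdims=True)
bank_labels = rng.integers(0, 2, size=(1000, 5))

def retrieve(z_q, k=5):
    """Return top-k neighbor ids, similarities s_i, and labels y_{n_i}."""
    z_q = z_q / np.linalg.norm(z_q)
    sims = bank @ z_q                    # inner product == cosine here
    ids = np.argsort(-sims)[:k]
    return ids, sims[ids], bank_labels[ids]

ids, sims, labels = retrieve(rng.standard_normal(1024).astype(np.float32))
```

The neighbor labels $y_{n_i}$ and similarity scores $s_i$ are exactly what the retrieval expert consumes downstream.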

2. SetCon: Jaccard-Weighted Metric Learning

StutterFuse employs SetCon, a set-similarity contrastive loss that leverages continuous Jaccard overlap between multi-label targets to structure the embedding space. For anchor $i$ with embedding $z_i$ and positive set $P(i)$,

$$J(y_i, y_j) = \frac{|y_i \cap y_j|}{|y_i \cup y_j|}, \qquad w_{ip} = J(y_i, y_p)$$
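For multi-hot label vectors, the weight above is a one-liner. A small sketch (the five-class label layout is illustrative):

```python
import numpy as np

def jaccard(y_i, y_j):
    """Continuous Jaccard overlap J between two multi-hot label vectors."""
    y_i, y_j = np.asarray(y_i, dtype=float), np.asarray(y_j, dtype=float)
    union = np.maximum(y_i, y_j).sum()
    return np.minimum(y_i, y_j).sum() / union if union > 0 else 0.0

# e.g. {Block, SoundRep} vs {Block, Interjection}: one shared label, union of three
w_ip = jaccard([1, 1, 0, 0, 0], [1, 0, 0, 0, 1])  # -> 1/3
```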

The SetCon loss is defined as

$$\mathcal{L}_{\text{SetCon}} = \sum_{i=1}^{N} \frac{-1}{|P(i)|} \sum_{p\in P(i)} w_{ip}\log\frac{\exp(z_i\cdot z_p/\tau)}{\sum_{a\in A(i)}\exp(z_i\cdot z_a/\tau)}$$

where $A(i)=P(i)\cup N(i)$ and $\tau$ is the temperature. This structures the embedding space so that samples with larger label-set overlap cluster more closely, improving retrieval for complex, overlapping stuttering events.
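A minimal NumPy rendering of this loss, treating $P(i)$ as all other batch samples sharing at least one label with the anchor (which is what the Jaccard weighting implies); this is a sketch under those assumptions, not the paper's implementation:

```python
import numpy as np

def setcon_loss(z, y, tau=0.1):
    """SetCon loss for a batch: z (N, d) embeddings, y (N, C) multi-hot labels.
    Positives P(i) are other samples sharing at least one label with anchor i;
    each positive is weighted by the continuous Jaccard overlap w_ip."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    inter = np.minimum(y[:, None, :], y[None, :, :]).sum(-1).astype(float)
    union = np.maximum(y[:, None, :], y[None, :, :]).sum(-1).astype(float)
    w = np.divide(inter, union, out=np.zeros_like(inter), where=union > 0)

    sim = (z @ z.T) / tau
    np.fill_diagonal(sim, -np.inf)               # A(i) excludes the anchor itself
    log_denom = np.log(np.exp(sim).sum(axis=1))  # log of the softmax denominator

    total, n = 0.0, len(z)
    for i in range(n):
        pos = [p for p in range(n) if p != i and inter[i, p] > 0]
        if pos:
            total -= sum(w[i, p] * (sim[i, p] - log_denom[i]) for p in pos) / len(pos)
    return total / n
```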

3. Gated Mixture-of-Experts Fusion

The StutterFuse late-fusion classifier integrates two specialized experts:

  • Audio Expert: Processes the query audio via a 2-block Conformer backbone to produce $z_a\in\mathbb{R}^{256}$.
  • Retrieval Expert: Processes the retrieved neighbor embeddings $(z_{n_1}, \ldots, z_{n_5})$ via MLP → GlobalAvgPool → Dense, outputting $z_r\in\mathbb{R}^{128}$.
  • Gating Network: Computes $g = \sigma(W_g[z_a;z_r] + b_g)$.

The fused representation $[z_a; g\cdot z_r]$ allows the model to dynamically arbitrate the contributions of acoustic evidence vs. retrieval context, mitigating error propagation from over-reliance on non-parametric neighbors ("echo chamber" or modality collapse).
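A sketch of the gate and fusion arithmetic, with random stand-ins for the trained expert outputs and gate parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random stand-ins for trained quantities: audio expert output z_a (256-d),
# retrieval expert output z_r (128-d), and gate parameters W_g, b_g.
z_a = rng.standard_normal(256)
z_r = rng.standard_normal(128)
W_g = rng.standard_normal(256 + 128) * 0.01
b_g = 0.0

g = sigmoid(W_g @ np.concatenate([z_a, z_r]) + b_g)  # scalar gate in (0, 1)
fused = np.concatenate([z_a, g * z_r])               # [z_a ; g * z_r], 384-d
```

A gate near 0 effectively silences the retrieval expert, which is the mechanism that lets the model fall back on acoustic evidence when neighbors are misleading.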

4. Training Regimen and Hyperparameterization

The training pipeline proceeds in two distinct phases:

  • Phase 1 (SetCon Embedder): Optimized with Adam (learning rate $1\times10^{-4}$), batch size 4096, $\tau=0.1$, for 20 epochs with early stopping on Recall@5 (Jaccard $\le 0.5$). Recall@5 improves from 0.32 (mean-pooled Wav2Vec2) to 0.47 with SetCon.
  • Phase 2 (Classifier): Both mid- and late-fusion classifiers use AdamW (learning rate $2\times10^{-5}$, weight decay $5\times10^{-4}$), batch size 128, and binary cross-entropy loss with 0.1 label smoothing. Conformer details: 2 blocks, feed-forward dimension 512, dropout 0.3 (Conformer) and 0.5 (MLP).
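The Phase 2 objective can be sketched as below; the smoothing convention (binary targets pulled toward 0.5 by $\epsilon=0.1$) is a common choice assumed here, since the exact form is not spelled out in this summary:

```python
import numpy as np

def bce_label_smoothing(p, y, eps=0.1):
    """Binary cross-entropy with label smoothing, over predicted
    probabilities p and multi-hot targets y (arrays of values in [0, 1])."""
    y_s = y * (1.0 - eps) + 0.5 * eps         # smooth targets toward 0.5
    p = np.clip(p, 1e-7, 1.0 - 1e-7)          # numerical safety before log
    return float(-(y_s * np.log(p) + (1.0 - y_s) * np.log(1.0 - p)).mean())
```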

The Faiss memory bank is instance-balanced across the $\approx$28k original and $\approx$30k augmented examples.

5. Empirical Performance and Cross-Domain Robustness

StutterFuse was evaluated on multi-label disfluency detection with several configurations:

  • SEP-28k (Speaker-Independent)
    • Audio-Only Conformer baseline: weighted F1 = 0.60 (precision 0.66, recall 0.56)
    • Mid-Fusion RAC: weighted F1 = 0.64 (precision 0.52, recall 0.82)
    • Late-Fusion StutterFuse: weighted F1 = 0.65 (precision 0.60, recall 0.72)
    • StutterFuse per-class F1: Prolongation 0.61, Block 0.66, SoundRep 0.54, WordRep 0.55, Interjection 0.77
  • Zero-Shot Cross-Dataset: FluencyBank
    • Weighted F1 = 0.55, identical for StutterFuse and the mid-fusion RAC.
    • Relative gain over Audio-Only: SoundRep +7.5%, WordRep +6.6%.
  • Zero-Shot Cross-Lingual: KSoF (German)
    • English-to-German direct baseline: Block F1 = 0.10.
    • German-trained supervised topline: Block F1 = 0.60.
    • RAC (mid-fusion): Block F1 = 0.68, weighted F1 = 0.58.
    • StutterFuse: Block F1 = 0.60, weighted F1 = 0.57.

In ablation studies, removing retrieval degraded F1 from 0.65 to 0.60. Disabling SetCon or neighbor metadata similarly reduced performance, establishing the necessity of each component.

6. Modality Collapse: Definition and Remediation

"Modality collapse" or the "echo chamber" effect arises when retrieval-enhanced classifiers overfit to the label structure of nearest-neighbor samples, boosting recall (from 0.56 to 0.82) but sacrificing precision (from 0.66 to 0.52). This occurs because retrieved neighbors, being stutter-rich, bias the model toward overpredicting disfluencies shared among them, even when the query differs.

Mitigations within StutterFuse include:

  • SetCon: Constructs an embedding space that reflects partial set-overlap, supporting retrieval diversity; Recall@5 increases from 0.32 to 0.47.
  • Gated Fusion: Enables dynamic attenuation of retrieval in high-certainty acoustic conditions, recovering precision to 0.60 (F1 improves to 0.65).

Qualitative diagnostics illustrate that StutterFuse recovers complex overlapping labels when retrieval context is label-diverse, but can propagate false positives if all neighbors share the same unrepresentative label.

7. Relation to Prior Disfluency Detection and Fusion Pipelines

StutterFuse builds on prior work such as FluentNet (Kourkounakis et al., 2020), which applies a Squeeze-and-Excitation ResNet $\rightarrow$ BLSTM $\rightarrow$ Attention architecture for frame-level disfluency classification using STFT spectrograms. FluentNet leverages SE blocks for channel-wise spectral weighting, BLSTM for temporal structure, and attention for segment-level focus, achieving a mean accuracy of $91.75\%$ and a miss rate of $9.35\%$ on UCLASS.

Despite achieving state-of-the-art results in single-label tasks, FluentNet and similar purely parametric approaches exhibit limitations in handling high-order label co-occurrence, label-imbalance, and infrequent complex overlaps. StutterFuse extends these capabilities by:

  • Utilizing retrieval-augmented fused representations to reason about rare and complex stutter combinations.
  • Structuring the latent space with SetCon for multi-label compatibility.
  • Introducing dynamic expert gating to avoid modality-specific bias.

Recommendations derived from FluentNet, such as multi-modal fusion and streaming-friendly modifications, are compatible extensions for future StutterFuse designs.


StutterFuse defines a new class of Retrieval-Augmented Classifiers for multi-label stuttering detection, demonstrating that explicit retrieval, label-aware metric learning, and dynamic fusion jointly resolve critical limitations of previous methods and enable robust, cross-domain disfluency identification (Singh et al., 15 Dec 2025).
