
Diarization-Guided Silence Suppression

Updated 27 January 2026
  • Diarization-guided silence suppression is a decoding strategy that uses frame-level silence estimates to block spurious timestamp tokens in ASR models.
  • It applies a threshold and guard band to accurately identify silence segments, reducing over-segmentation and improving utterance boundary placement.
  • Empirical results demonstrate improved mtWER, AER, and DER, highlighting its effectiveness in enhancing joint ASR-diarization performance.

Diarization-guided silence suppression is an inference-time decoding strategy for joint end-to-end automatic speech recognition (ASR) and speaker diarization systems, specifically serialized output training (SOT) frameworks built on Whisper-style encoder-decoder architectures. The method leverages frame-level silence/activity estimates from a diarization head to mask out spurious timestamp emissions in silence regions, thereby improving utterance boundary placement and temporal segmentation accuracy and reducing over-segmentation, without altering the core ASR training objectives or loss landscape (Xu et al., 25 Jan 2026).

1. Motivation and Problem Statement

The placement of timestamp tokens is critical in serialized output ASR+diarization models, where the decoder emits explicit tokens (e.g., "<|t_start|>", "<|t_end|>") denoting spoken segment boundaries. In Whisper-style SOT architectures, erroneous insertion of timestamps into silence intervals often leads to timestamp drift, over-segmentation, and degraded temporal accuracy. Silence intervals are void of lexical content or speaker turns and are, therefore, inappropriate points for utterance boundaries or diarization changes.

To address this, diarization-guided silence suppression constrains the decoder: when the frame-level diarization module predicts a high probability of silence, the system explicitly blocks emission of timestamp tokens in those regions. The suppression is guided solely by the silence posterior, even though the diarization head outputs probabilities for all speaker-role classes ("child," "adult," "silence") (Xu et al., 25 Jan 2026).

2. Algorithmic Formulation

The diarization-guided silence suppression procedure operates as follows. Let $N$ denote the total number of encoder frames and, for each frame $n$, let the diarization head produce a posterior vector $\hat{\mathbf{s}}_n = [\hat{s}_n^{(\mathrm{child})},\, \hat{s}_n^{(\mathrm{adult})},\, \hat{s}_n^{(\mathrm{sil})}]$. A frame is deemed "silence" if $\hat{s}_n^{(\mathrm{sil})} > \tau_{\mathrm{sil}}$, where $\tau_{\mathrm{sil}} = 0.7$. Contiguous spans $[n_i^{\mathrm{start}},\, n_i^{\mathrm{end}}]$ of successive silence frames are located and mapped to time using the 20 ms/frame hop: $t_i^{\mathrm{start}} = 0.02\, n_i^{\mathrm{start}}$ and $t_i^{\mathrm{end}} = 0.02\, n_i^{\mathrm{end}}$. Each silence window is then shrunk to $[t_i^{\mathrm{start}}+\delta,\, t_i^{\mathrm{end}}-\delta]$ with $\delta = 0.2\,\mathrm{s}$, creating a guard band around speech transitions that reduces the risk of suppressing genuine boundary tokens.
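
As an illustration of this window-extraction step, the following sketch derives guard-banded silence windows from per-frame silence posteriors; the function name and data layout are illustrative assumptions rather than the authors' implementation.

def silence_windows(sil_posterior, tau_sil=0.7, hop_s=0.02, delta=0.2):
    """Return guard-banded (start_s, end_s) silence windows from per-frame
    silence posteriors; the constants follow the paper, the code itself is
    an illustrative sketch."""
    is_sil = [p > tau_sil for p in sil_posterior]  # frame-level silence decision
    windows, n, N = [], 0, len(is_sil)
    while n < N:
        if is_sil[n]:
            start = n
            while n < N and is_sil[n]:
                n += 1
            end = n - 1                        # last frame of the contiguous silence span
            t_start = hop_s * start + delta    # shrink by the guard band on each side
            t_end = hop_s * end - delta
            if t_start < t_end:                # spans shorter than 2*delta are dropped
                windows.append((t_start, t_end))
        else:
            n += 1
    return windows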

During beam search decoding, whenever a candidate token is a timestamp <|t_x|> whose numerical value $t_x$ falls inside any current shrunken silence window, its probability is set to zero:

for each candidate token tok in beam:
    if tok is a timestamp t_x and ∃i: t_i^start + δ ≤ t_x ≤ t_i^end − δ:
        P(tok) ← 0
This ensures that utterance segmentation tokens cannot be emitted within high-confidence silence intervals, enforcing stricter correspondence between segment boundaries and actual acoustic speech (Xu et al., 25 Jan 2026).
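
A minimal sketch of this masking step, assuming the decoder exposes a per-token probability array and a mapping from timestamp token ids to their time values in seconds (both hypothetical conveniences, not interfaces described in the source):

def suppress_silence_timestamps(token_probs, timestamp_seconds, windows):
    """Zero the probability of any timestamp token whose value t_x falls
    inside a guard-banded silence window. timestamp_seconds maps token id
    -> seconds for timestamp tokens only; illustrative sketch only."""
    for tok_id, t_x in timestamp_seconds.items():
        if any(t_start <= t_x <= t_end for t_start, t_end in windows):
            token_probs[tok_id] = 0.0          # block emission inside silence
    return token_probs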

3. Model Integration and Decoding Dynamics

Silence suppression is invoked exclusively at inference, directly after the frame-level diarization head (which is attached to the final encoder layer) computes the silence posteriors. As the SOT decoder proposes tokens, the suppression mask is reapplied at each beam search step to candidate timestamp tokens.

The method is fully compatible with a state-machine-based forced decoding framework. Specifically, within the constrained state space (S₂ and S₅ in the finite-state automaton), timestamp tokens are enumerated, but suppression operates orthogonally by zeroing out the tokens that are temporally misaligned with the silence mask, minimizing spurious state transitions and non-structural outputs (Xu et al., 25 Jan 2026).
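
Because the mask must be reapplied at every beam-search step, one natural realization is a logits-processor hook. The sketch below assumes a HuggingFace-style `LogitsProcessor` interface and a Whisper-like layout in which timestamp token ids start at `timestamp_begin_id` with a 0.02 s step; neither detail is confirmed by the source.

from transformers import LogitsProcessor

class SilenceTimestampSuppressor(LogitsProcessor):
    """Sets the scores of timestamp tokens falling inside guard-banded silence
    windows to -inf at each decoding step (equivalent to zero probability).
    Token-id layout and time resolution are assumptions for illustration."""

    def __init__(self, windows, timestamp_begin_id, time_per_token=0.02):
        self.windows = windows                     # list of (start_s, end_s) windows
        self.timestamp_begin_id = timestamp_begin_id
        self.time_per_token = time_per_token

    def __call__(self, input_ids, scores):
        for tok_id in range(self.timestamp_begin_id, scores.shape[-1]):
            t_x = (tok_id - self.timestamp_begin_id) * self.time_per_token
            if any(s <= t_x <= e for s, e in self.windows):
                scores[:, tok_id] = float("-inf")  # suppress this timestamp candidate
        return scores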

4. Practical Implementation and Hyperparameters

Key implementation details include:

  • Silence posterior threshold: $\tau_{\mathrm{sil}} = 0.7$ (tuned on a development set).
  • Guard band: $\delta = 0.2$ s, applied at both ends of every silence segment.
  • Frame rate: 20 ms per frame.
  • The diarization head is pretrained for up to 10 epochs (Adam optimizer, learning rate $2\times 10^{-4}$, weight decay $0.01$) on frame-level labels, followed by joint fine-tuning (learning rate $5\times 10^{-6}$ for Whisper-small, diarization loss weight $\lambda_{\mathrm{diar}} = 1$).
  • No extra training loss is introduced; the approach is strictly a decoding-time heuristic. $\tau_{\mathrm{sil}}$ and $\delta$ are tuned to suppress false alarms in silence without over-suppressing true speech boundaries or legitimate transitions (Xu et al., 25 Jan 2026). These settings are collected in the configuration sketch after this list.
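
A small configuration object collecting these decoding-time hyperparameters might look as follows (an illustrative convenience, not part of the published recipe):

from dataclasses import dataclass

@dataclass
class SilenceSuppressionConfig:
    tau_sil: float = 0.7    # silence posterior threshold, tuned on a dev set
    delta_s: float = 0.2    # guard band at each end of a silence span, in seconds
    hop_s: float = 0.02     # encoder frame hop, 20 ms per frame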

5. Experimental Assessment and Effects

Ablation results for diarization-guided silence suppression on the Playlogue and ADOS datasets (Whisper-small) show the following impact on key metrics:

Playlogue (Whisper-small)

Method                  mtWER    WER      AER     DER
Pretrained only         37.8 %   35.8 %   2.0 %   41.4 %
+ silence suppression   37.4 %   35.5 %   1.9 %   40.6 %

ADOS (Whisper-small)

Method                  mtWER    WER      AER     DER
Pretrained only         29.3 %   28.3 %   1.1 %   23.6 %
+ silence suppression   28.8 %   27.8 %   1.0 %   21.8 %

Key empirical observations:

  • Multi-talker word error rate (mtWER) decreases by 0.4–0.5 percentage points.
  • Attributed error rate (AER) decreases by 0.1 percentage points.
  • Diarization error rate (DER) improves by 0.8–1.8 percentage points, representing a substantial reduction in false-alarm errors in silence regions, and improved boundary alignment (Xu et al., 25 Jan 2026).

6. Broader Significance and Outlook

Diarization-guided silence suppression demonstrates a robust approach for leveraging model-internal acoustic structure (i.e., frame-level silence probability) to guide emission constraints during sequence decoding. This approach delivers measurable improvements in segmentation precision and multi-talker performance without necessitating modifications to the underlying ASR loss or model architecture. A plausible implication is that similar strategies could enhance other sequence labeling architectures where non-lexical states (e.g., silence, noise) must be rigorously decoupled from output tokenization. The method reinforces the practical viability of unified ASR-diarization models for scalable, speaker-attributed transcript generation in multi-party spoken interaction analysis (Xu et al., 25 Jan 2026).
