
Feedforward Sequential Memory Networks

Updated 6 February 2026
  • Feedforward Sequential Memory Networks are feed-forward architectures equipped with learnable tapped-delay FIR filters for modeling long-term dependencies in sequential data.
  • They incorporate memory blocks and skip connections to efficiently capture context over hundreds to thousands of frames without recurrent feedback.
  • FSMNs deliver competitive performance in speech recognition and language modeling while reducing training and inference times compared to RNNs and LSTMs.

Feedforward Sequential Memory Networks (FSMNs) constitute a class of memory-augmented feed-forward architectures designed to address long-term temporal dependency modeling in sequential data such as speech and language. The defining characteristic of FSMNs is the inclusion of learnable tapped-delay memory blocks, i.e., finite impulse response (FIR) structures, within standard multi-layer perceptrons, eliminating all recurrent feedback. These FIR-style memory blocks enable FSMNs and their deep variants (DFSMN) to efficiently model context windows spanning hundreds or thousands of frames, matching and in many cases outperforming recurrent neural network (RNN) and Long Short-Term Memory (LSTM) models in large-scale speech recognition and synthesis tasks, all while yielding faster, more stable training and significantly lower inference cost (Zhang et al., 2015, Zhang et al., 2015, Zhang et al., 2018, Bi et al., 2018, Yang et al., 2018, You et al., 2019).

1. FSMN Architectural Principles

Traditional feed-forward networks process each input in isolation, lacking the temporal memory required for sequence modeling. FSMNs augment this paradigm by integrating a memory block, essentially a learnable tapped-delay (FIR) filter, into one or more hidden layers. For a given hidden sequence $\{h_t^{(\ell)}\}$ in layer $\ell$, the memory block computes at each step:

  • Unidirectional (look-back only):

$$\tilde h_t^{(\ell)} = \sum_{i=0}^{N} a_i^{(\ell)} \odot h_{t-i}^{(\ell)}$$

  • Bidirectional (look-back and look-ahead):

$$\tilde h_t^{(\ell)} = \sum_{i=0}^{N_1} a_i^{(\ell)} \odot h_{t-i}^{(\ell)} + \sum_{j=1}^{N_2} c_j^{(\ell)} \odot h_{t+j}^{(\ell)}$$

where $\odot$ denotes element-wise multiplication, $a_i^{(\ell)}$ and $c_j^{(\ell)}$ are learned tap coefficients, and $N_1$, $N_2$ set the number of look-back and look-ahead frames.

The output $\tilde h_t^{(\ell)}$ is then concatenated with, or projected together with, the instantaneous activation $h_t^{(\ell)}$ to form the input to the next layer. This explicit memory aggregation enables purely feed-forward processing of sequential context, circumventing the need for recurrent feedback and back-propagation through time (BPTT) (Zhang et al., 2015, Zhang et al., 2015).
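
The bidirectional memory block above can be sketched directly in NumPy. This is a minimal illustration assuming vectorized taps and zero-padding at the sequence boundaries; the function name and array shapes are chosen for exposition, not taken from any reference implementation.

```python
import numpy as np

def fsmn_memory_block(h, a, c):
    """Bidirectional FSMN memory block with vectorized taps.

    h : (T, d) hidden activations for one sequence.
    a : (N1 + 1, d) look-back tap coefficients (a[0] weights the current frame).
    c : (N2, d) look-ahead tap coefficients (c[j-1] weights frame t + j).
    Returns h_tilde : (T, d), the FIR-filtered memory output.
    """
    T, _ = h.shape
    N1 = a.shape[0] - 1
    N2 = c.shape[0]
    h_tilde = np.zeros_like(h)
    for t in range(T):
        # look-back taps: sum_{i=0}^{N1} a_i ⊙ h_{t-i}; frames before t = 0 are zero
        for i in range(min(N1, t) + 1):
            h_tilde[t] += a[i] * h[t - i]
        # look-ahead taps: sum_{j=1}^{N2} c_j ⊙ h_{t+j}; frames past T - 1 are zero
        for j in range(1, min(N2, T - 1 - t) + 1):
            h_tilde[t] += c[j - 1] * h[t + j]
    return h_tilde
```

Because each $\tilde h_t^{(\ell)}$ depends only on a fixed window of frames, the whole loop parallelizes trivially across $t$ in a batched implementation.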

2. Deep FSMN (DFSMN) and Skip Connections

Scaling FSMNs to task-relevant depths introduces gradient-flow and optimization challenges. The DFSMN architecture addresses these by structurally separating each hidden layer into a non-linear transformation followed by a low-dimensional linear projection, and then applying the memory block to the projection. Crucially, DFSMN introduces identity skip connections (residual links) between the memory blocks of adjacent layers. The memory update at layer $l$ follows:

$$\tilde p_t^{(l)} = \mathcal{H}\big(\tilde p_t^{(l-1)}\big) + p_t^{(l)} + \sum_{i=0}^{N_1} a_i^{(l)} \odot p_{t - s_1 i}^{(l)} + \sum_{j=1}^{N_2} c_j^{(l)} \odot p_{t + s_2 j}^{(l)}$$

where $p_t^{(l)}$ is the projection, $\mathcal{H}$ the skip identity mapping, and $(s_1, s_2)$ are stride parameters controlling tap sparsity (Zhang et al., 2018, Bi et al., 2018).

These skip connections mitigate vanishing gradient issues and enable stable end-to-end training of networks with 10–12 memory blocks or more, while preserving computational tractability.
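
A minimal sketch of this update, assuming an identity skip mapping $\mathcal{H}$ and zero-padding outside the sequence; the function name and shapes are illustrative, not from a reference implementation.

```python
import numpy as np

def dfsmn_memory_update(p, prev_mem, a, c, s1=1, s2=1):
    """One DFSMN memory-block update with an identity skip mapping H.

    p        : (T, d) linear projection of the current layer.
    prev_mem : (T, d) memory output of the previous layer (skip input),
               or None for the first memory layer.
    a        : (N1 + 1, d) look-back taps; c : (N2, d) look-ahead taps.
    s1, s2   : look-back / look-ahead strides (tap sparsity).
    """
    T, _ = p.shape
    # H(p~^{(l-1)}) + p^{(l)}: identity skip connection plus current projection
    out = p.copy() if prev_mem is None else prev_mem + p
    for t in range(T):
        for i in range(a.shape[0]):          # sum_i a_i ⊙ p_{t - s1*i}
            if t - s1 * i >= 0:
                out[t] += a[i] * p[t - s1 * i]
        for j in range(1, c.shape[0] + 1):   # sum_j c_j ⊙ p_{t + s2*j}
            if t + s2 * j < T:
                out[t] += c[j - 1] * p[t + s2 * j]
    return out
```

Strides greater than 1 let the taps reach further into the past or future without adding parameters, mirroring dilated convolutions.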

3. Training Methodologies and Computational Advantages

FSMNs (including DFSMN variants) are trained using standard mini-batch back-propagation, with no BPTT, providing major advantages in training efficiency and stability. A typical training loop leverages large batch sizes (e.g., 4096 frames), high learning rates, and, for some variants, momentum and $\ell_2$ regularization. In DFSMN-based acoustic or text models, the training loss typically targets frame-level cross-entropy or mean squared error (e.g., for speech synthesis tasks, the MSE is computed over spectral features, F0, aperiodicity, and voiced/unvoiced predictions) (Zhang et al., 2015, Bi et al., 2018, Zhang et al., 2018).
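
The parameter update implied by this recipe (momentum SGD with $\ell_2$ regularization, no unrolling through time) can be sketched as follows; the hyperparameter values are illustrative defaults, not the papers' exact settings.

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=1e-3, momentum=0.9, l2=1e-4):
    """One mini-batch update: momentum SGD with l2 weight decay.

    Because the gradient at each frame depends only on a fixed window of
    neighbouring frames (no recurrent state), frames can be shuffled into
    large mini-batches exactly as in a plain feed-forward network.
    """
    velocity = momentum * velocity - lr * (grad + l2 * w)
    return w + velocity, velocity
```
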

Computationally, FSMN/DFSMN structures enable significant savings in both training and inference. For instance, in Mandarin TTS, a DFSMN with 6 memory layers and two FC layers (2048 units/layer, 512-dim projection) uses only 87 MB and 5.35 GFLOPs per second of synthesized speech, compared with a BLSTM baseline's 295 MB and 21 GFLOPs, yielding a nearly four-fold speed-up (Bi et al., 2018). In large-vocabulary speech recognition, DFSMNs train roughly $3\times$ faster per epoch and offer lower inference latency, which can be further tuned via the look-ahead order and stride (Zhang et al., 2018, Zhang et al., 2015).

4. Extensions: Pyramidal, Hybrid, and Memory-Augmented Architectures

FSMN variants have evolved to further optimize context modeling and efficiency. Pyramidal FSMNs vary the memory-block order across layers, increasing the context window at higher layers to match the hierarchical nature of phonetic-semantic abstraction, thereby reducing redundancy and parameter count (Yang et al., 2018). A canonical schedule assigns short-span context to lower blocks (e.g., $(N_1, N_2) = (4, 4)$), increasing steadily to $(20, 20)$ at higher layers.
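
Such a schedule, together with its induced look-ahead latency, can be written down directly. Only the endpoints $(4, 4)$ and $(20, 20)$ come from the text; the intermediate orders and the uniform stride are assumptions for illustration.

```python
# Illustrative pyramidal context schedule: short-span memory in low layers,
# growing to (20, 20) at the top. Intermediate orders are assumed, not
# taken from the paper.
schedule = [(4, 4), (8, 8), (12, 12), (16, 16), (20, 20)]

# Total look-ahead latency in frames: tau = sum over layers of N2 * s2,
# assuming a uniform look-ahead stride s2 = 1 at every layer.
s2 = 1
latency_frames = sum(n2 * s2 for _, n2 in schedule)
```
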

Hybrid architectures combine DFSMN modules with self-attention (SAN) blocks (DFSMN-SAN), leveraging FIR-style memory for efficient local aggregation and self-attention for dynamic, utterance-wide re-weighting. Alternating stacks of 10 DFSMN layers and 1 SAN block, repeated multiple times, have been shown to outperform pure DFSMN or pure SAN models of similar depth (You et al., 2019).
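
The alternating layout amounts to a simple layer schedule; the helper below is purely illustrative.

```python
def dfsmn_san_stack(num_repeats=3, dfsmn_per_block=10):
    """Layer schedule for a DFSMN-SAN hybrid: blocks of DFSMN layers,
    each followed by a single self-attention (SAN) layer."""
    layers = []
    for _ in range(num_repeats):
        layers += ["DFSMN"] * dfsmn_per_block
        layers.append("SAN")
    return layers
```
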

Further, integration of persistent memory—global key-value or input-embedding vectors accumulated over the training set—into the SAN layers allows the model to encode speaker, channel, or phonotactic context unavailable in a single utterance. Inclusion of persistent memory yields an additional 5–11% relative CER reduction across multiple Mandarin test sets (You et al., 2019).

5. Empirical Results in Language and Speech

FSMNs and DFSMN architectures have demonstrated consistent empirical gains:

  • Language Modeling: FSMN-based models (2×400, $N = 20$) achieve a test-set perplexity of 102 on Penn Treebank (10K vocabulary), outperforming RNNLM and LSTM baselines. On enwik9 (LTCB, 80K vocabulary), the best vFSMN-LM with two memory blocks reaches perplexity 90, versus 112 for RNNLM (Zhang et al., 2015, Zhang et al., 2015).
  • Speech Recognition: On Switchboard (SWBD) 300 h, vFSMN (three bidirectional blocks, look-back/look-ahead $= 50$) reports WER $13.2\%$ (best under matched conditions), compared to BLSTM at $13.5\%$. On Fisher English (2000 h), a 12-layer DFSMN attains $9.4\%$ WER, improving $1.5\%$ absolute over BLSTM (Zhang et al., 2015, Zhang et al., 2018).
  • Speech Synthesis: In Mandarin TTS, a $6+2$ layer DFSMN with memory order-stride ($10,10,2,2$) matches BLSTM on both objective (MSE, F0 RMSE, band-aperiodicity error, U/V error) and subjective (MOS of $4.23$) measures, with four times faster inference and less than a third the model size (Bi et al., 2018).
  • Hybrid and Pyramidal Models: Pyramidal-FSMN with a residual CNN front-end achieves WERs of $3.62\%$ (LibriSpeech test-clean) and $10.89\%$ (SWBD-300), reduced further to $2.97\%$ and $10.03\%$ with LSTM RNNLM rescoring. Joint LF-MMI and CE training with these architectures yields consistently better generalization (Yang et al., 2018).
| Configuration | Task | Key Metric / Value | Reference |
|---|---|---|---|
| 2×400 FSMN, $N = 20$ | Penn Treebank LM | Perplexity 102 | (Zhang et al., 2015) |
| vFSMN, 3 bidirectional blocks $(50, 50)$ | SWBD-300 ASR | WER $13.2\%$ | (Zhang et al., 2015) |
| 12-layer DFSMN | Fisher English ASR | WER $9.4\%$ | (Zhang et al., 2018) |
| 6+2 DFSMN | Mandarin TTS | MOS 4.23, 87 MB, 5.35 GFLOPs | (Bi et al., 2018) |
| pFSMN+Res-CNN, LF-MMI+CE | LibriSpeech/SWBD ASR | WER $3.62\%$ / $10.89\%$ | (Yang et al., 2018) |

6. Implementation Recommendations and Limitations

FSMN/DFSMN deployment involves several key choices:

  • Tap order ($N$): controls the context horizon; use higher $N$ for longer dependencies. For speech, bidirectional blocks ($N_1$, $N_2$) are often needed.
  • Activation: ReLU activation yields faster convergence compared to sigmoid.
  • Block placement: Empirically, one or two memory blocks in early layers suffice for language modeling, while speech tasks benefit from deeper stacks with skip connections.
  • Tap parameterization: Scalar taps are memory and compute efficient and often adequate for text; vectorized taps afford more power in acoustic tasks.
  • Latency and stride: for low-latency operation, reduce $N_2$ and the look-ahead stride. Latency is tunable as $\tau = \sum_\ell N_2^{(\ell)} s_2^{(\ell)}$.
  • Efficient GPU batching: Memory operations can be implemented as large matrix multiplications (e.g., block-diagonal banded Toeplitz matrices), leveraging GEMM efficiency.
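
The batching point above can be illustrated by expressing a unidirectional memory block as multiplication by a banded lower-triangular Toeplitz matrix. This sketch assumes scalar taps and zero-padding; in practice the matrix would be applied batched across utterances.

```python
import numpy as np

def memory_as_toeplitz(h, taps):
    """Unidirectional, scalar-tap memory block as one matrix multiply.

    h    : (T, d) hidden activations.
    taps : (N + 1,) scalar coefficients a_0 .. a_N.
    Builds the (T, T) banded lower-triangular Toeplitz matrix M with
    M[t, t - i] = a_i, so that M @ h equals the tapped-delay sum.
    """
    T = h.shape[0]
    M = np.zeros((T, T))
    for i, a in enumerate(taps):
        if i < T:
            M += a * np.eye(T, k=-i)  # place a_i on the i-th subdiagonal
    return M @ h  # a single GEMM instead of a per-frame loop
```
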

Limitations include the fixed context window (set by $N$ or $N_1$/$N_2$), the lack of adaptive forgetting as in LSTMs, and linearly increasing parameter cost for very large context horizons. FSMNs do not natively support infinite history; thus, for truly non-local phenomena, further architectural innovations or hybridizations (e.g., attention mechanisms) are necessary (Zhang et al., 2015, You et al., 2019).

FSMN and DFSMN architectures have shifted the paradigm for sequential modeling in large-scale speech and language systems by offering a viable, often superior alternative to recurrent networks without the associated training and inference overhead. Innovations such as pyramidal stacking, skip/residual connections, and integration with self-attention and persistent memory have yielded state-of-the-art results across diverse large-scale ASR and TTS benchmarks.

Current research directions center on increasingly sophisticated hybridizations: combining FIR-style FSMN blocks with self-attention, convolutional front-ends, and global memory to address the remaining limits of fixed context and to better fuse local and global sequence information, as exemplified by DFSMN-SAN architectures and memory-augmented attention (You et al., 2019, Yang et al., 2018).

The general principle of replacing recurrence with structured, learnable memory filters and skip connections establishes FSMN/DFSMN as a foundational strategy for efficient, scalable sequence modeling in both academic and industrial settings.
