LatentMoE Architecture
- LatentMoE is a Mixture-of-Experts design that uses a low-dimensional latent bottleneck to minimize memory and communication overhead in large-scale language models.
- The architecture employs a shared gating network and latent projections to route tokens efficiently, achieving significant gains in accuracy per FLOP and per parameter.
- It enables scalable expert routing for both latency-critical and high-throughput modes, reinvesting saved resources to expand expert diversity without sacrificing performance.
LatentMoE is a Mixture-of-Experts (MoE) variant developed to address both hardware and model efficiency bottlenecks, particularly in large-scale LLMs such as NVIDIA’s Nemotron 3 family. The LatentMoE architecture employs a low-dimensional latent bottleneck within each sparse expert layer, enabling scaling of expert count and fan-out by leveraging hardware-friendly reductions in routed tensor size. This systematic compression and reinvestment strategy establishes new Pareto frontiers in accuracy per floating-point operation (FLOP) and accuracy per parameter, with empirical validation at scales up to the trillion-parameter regime (NVIDIA et al., 24 Dec 2025, Elango et al., 26 Jan 2026).
1. Architectural Design and Integration
LatentMoE is implemented as a direct replacement for traditional MoE-FFN sublayers, notably in hybrid Mamba–Transformer stacks as used in Nemotron 3 Super and Ultra. Each LatentMoE layer consists of the following sequential stages:
- Input activations are received from a preceding Mamba-2 or attention sublayer.
- A shared gating network computes per-token router logits in the full $d$-dimensional space, yielding a softmax distribution over the $n_E$ experts:
$$p_t = \mathrm{softmax}(W_g x_t), \qquad W_g \in \mathbb{R}^{n_E \times d}.$$
- For each token $t$, the Top-$k$ routing mask is derived by zeroing all but the $k$ largest entries in row $p_t$, producing a masked gating matrix $G$ with entries $g_{t,i} = p_{t,i}$ if $i \in \mathrm{TopK}(p_t, k)$ and $g_{t,i} = 0$ otherwise.
- The $d$-dimensional inputs are projected into a latent space of dimension $\ell < d$:
$$z_t = W_{\text{down}}\, x_t,$$
with $W_{\text{down}} \in \mathbb{R}^{\ell \times d}$.
- Each latent vector is dispatched to its Top-$k$ experts, which compute per-expert FFN transforms entirely in the latent space:
$$e_{t,i} = W_2^{(i)}\, \sigma\!\left(W_1^{(i)} z_t\right),$$
with $W_1^{(i)} \in \mathbb{R}^{m \times \ell}$ and $W_2^{(i)} \in \mathbb{R}^{\ell \times m}$.
- Expert outputs are aggregated via the masked gating weights:
$$\bar{z}_t = \sum_{i=1}^{n_E} g_{t,i}\, e_{t,i}.$$
- The aggregated latent output is projected back to the full hidden dimension and residual-connected:
$$y_t = x_t + W_{\text{up}}\, \bar{z}_t,$$
where $W_{\text{up}} \in \mathbb{R}^{d \times \ell}$, and $y_t$ is passed to the next layer (NVIDIA et al., 24 Dec 2025, Elango et al., 26 Jan 2026).
This design confines all expert computation and communication to a low-rank latent subspace, while non-expert operations (gating, normalization, residual paths) remain in the full $d$-dimensional space.
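The sequential stages above can be sketched end-to-end in pure Python. This is a toy-scale illustration under the notation of this section, not the Nemotron implementation: weights are random, the expert FFN uses ReLU, and all function names are mine.

```python
import math
import random

random.seed(0)

def softmax(v):
    mx = max(v)
    e = [math.exp(x - mx) for x in v]
    s = sum(e)
    return [x / s for x in e]

def matvec(W, x):
    # W is a list of rows; returns W @ x
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def rand_mat(rows, cols, scale=0.1):
    return [[random.uniform(-scale, scale) for _ in range(cols)] for _ in range(rows)]

# toy sizes; real models use e.g. d=4096, ell=1024, m=16384
d, ell, m, n_experts, k = 8, 2, 4, 4, 2

W_g = rand_mat(n_experts, d)     # shared router, operates in full d-dim space
W_down = rand_mat(ell, d)        # shared down-projection d -> ell
W_up = rand_mat(d, ell)          # shared up-projection ell -> d
experts = [(rand_mat(m, ell), rand_mat(ell, m)) for _ in range(n_experts)]

def latent_moe(x):
    # 1) gate in full space, 2) take Top-k, 3) project to latent space,
    # 4) run expert FFNs in the latent space, 5) aggregate with gate weights,
    # 6) project back and add the residual
    p = softmax(matvec(W_g, x))
    topk = sorted(range(n_experts), key=lambda i: -p[i])[:k]
    z = matvec(W_down, x)
    agg = [0.0] * ell
    for i in topk:
        W1, W2 = experts[i]
        h = [max(0.0, v) for v in matvec(W1, z)]   # ReLU hidden layer, width m
        e = matvec(W2, h)                          # back to ell dims
        agg = [a + p[i] * v for a, v in zip(agg, e)]
    y = matvec(W_up, agg)
    return [xi + yi for xi, yi in zip(x, y)]

out = latent_moe([random.uniform(-1, 1) for _ in range(d)])
```

Note that the per-expert matmuls and the dispatched vector `z` involve only `ell`, never `d`; only the shared router and the two shared projections touch the full hidden dimension.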
2. Mathematical Formulation of Gating, Routing, and Losses
The LatentMoE routing mechanism adheres to canonical MoE principles but adapts them for the latent setup:
- Gating: The softmax router operates in the full $d$-dimensional space, outputting per-token probabilities
$$p_t = \mathrm{softmax}(W_g x_t) \in \mathbb{R}^{n_E}.$$
- Top-$k$ Routing: The routing mask is defined as
$$g_{t,i} = \begin{cases} p_{t,i}, & i \in \mathrm{TopK}(p_t, k) \\ 0, & \text{otherwise.} \end{cases}$$
- Tokens are mapped into the $\ell$-dimensional latent space before being sent to experts, decoupling the communication cost from $d$.
- Load balancing: To ensure equitable expert utilization, the average load and importance of each expert are monitored over a batch. The standard MoE load-balancing loss remains:
$$\mathcal{L}_{\text{balance}} = \alpha\, n_E \sum_{i=1}^{n_E} f_i\, P_i,$$
where $f_i$ is the average load of expert $i$ (the fraction of routed token slots assigned to it) and $P_i$ is its mean router probability over the batch (NVIDIA et al., 24 Dec 2025).
Total loss is the sum of the cross-entropy task loss and the auxiliary load-balancing loss (Elango et al., 26 Jan 2026).
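The auxiliary loss can be sketched as follows. This is a minimal pure-Python illustration assuming a Switch-Transformer-style formulation with coefficient alpha; the function and variable names are mine, not from the cited papers.

```python
import math

def softmax(v):
    mx = max(v)
    e = [math.exp(x - mx) for x in v]
    s = sum(e)
    return [x / s for x in e]

def load_balancing_loss(router_logits, k, alpha=0.01):
    """Auxiliary loss alpha * n_E * sum_i f_i * P_i, where
    f_i = fraction of routed token slots assigned to expert i, and
    P_i = mean router probability assigned to expert i over the batch."""
    n_tokens = len(router_logits)
    n_experts = len(router_logits[0])
    probs = [softmax(row) for row in router_logits]
    load = [0.0] * n_experts        # f_i
    importance = [0.0] * n_experts  # P_i
    for p in probs:
        topk = sorted(range(n_experts), key=lambda i: -p[i])[:k]
        for i in topk:
            load[i] += 1.0 / (n_tokens * k)
        for i in range(n_experts):
            importance[i] += p[i] / n_tokens
    return alpha * n_experts * sum(f * P for f, P in zip(load, importance))
```

With perfectly uniform routing the loss evaluates to alpha, its minimum under full utilization; skewed routing increases it, penalizing expert collapse.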
3. Hardware-Efficient Scaling and Memory–Bandwidth Analysis
LatentMoE was motivated by analysis revealing two dominant bottlenecks for large-scale MoE models:
- Memory-bandwidth-bound kernel execution: In low-batch regimes, the limiting factor is the volume of weight reads (on the order of $2dm$ parameters per expert in standard MoE). LatentMoE projects routed tensors to dimension $\ell$, reducing per-expert memory and bandwidth demands by a factor of $d/\ell$.
- All-to-all communication bottleneck: In high-throughput (large-batch) settings, distributed MoE layers require synchronization of expert inputs. Reducing the routed tensor size from $d$ to $\ell$ linearly decreases the communication payload per token.
By projecting tokens to dimension $\ell$ before dispatch and projecting outputs back after expert aggregation, the system achieves substantial bandwidth and communication-volume reductions. The freed memory and compute resources are reinvested to scale the total expert pool ($n_E$) and optionally the active fan-out ($k$), while holding the total FLOP and communication cost constant (NVIDIA et al., 24 Dec 2025, Elango et al., 26 Jan 2026).
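The two reductions above can be made concrete with a back-of-the-envelope cost model. This is a rough sketch under the assumptions of this section (two-matrix FFN experts of width m, one element dispatched per hidden dimension), not a profiler result.

```python
def latent_moe_savings(d, ell, m):
    """Per-token cost model: a standard MoE expert reads ~2*d*m parameters
    and ships d-dim activations over all-to-all; a LatentMoE expert reads
    ~2*ell*m parameters and ships ell-dim activations."""
    return {
        "weight_read_reduction": (2 * d * m) / (2 * ell * m),  # = d / ell
        "payload_reduction": d / ell,
    }

# Nemotron-3-like sizes from the comparison table in Section 5
savings = latent_moe_savings(d=4096, ell=1024, m=16384)
```

Both ratios collapse to $d/\ell$, which is why a single choice of latent width simultaneously relieves the low-batch (weight-read) and high-batch (all-to-all) bottlenecks.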
4. Theoretical Properties: Nonlinear Capacity and Expressivity
LatentMoE’s scaling strategy is supported by function approximation theory:
- Nonlinear capacity: The per-token nonlinear capacity remains $O(k \cdot m)$, where $k$ is the fan-out and $m$ is the expert FFN width. Keeping $k$ and $m$ constant during dimension reduction preserves approximation power; scaling them further increases expressivity (Elango et al., 26 Jan 2026).
- Combinatorial sparsity: The number of possible expert combinations per token grows as $\binom{n_E}{k}$; scaling both $n_E$ and $k$ by a factor $r$ yields $\binom{r n_E}{r k}$, producing exponential increases in specialization and local diversity.
The up/down latent projections cost $O(d\ell)$ FLOPs per token each. For typical choices (e.g., $\ell = d/4$), this constitutes only a small overhead (on the order of 1%) relative to the baseline expert computation.
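Both claims can be checked numerically for the configurations reported in the next section. This is an illustrative calculation under the stated cost model (2 FLOPs per multiply-accumulate, two-matmul latent FFN experts), not a figure from the papers.

```python
from math import comb

# sizes from the comparison table in Section 5
d, ell, m, k = 4096, 1024, 16384, 24

# combinatorial routing diversity C(n_E, k): 4x scaling of both n_E and k
diversity_growth = comb(512, 24) / comb(128, 6)

# per-token FLOPs at 2 FLOPs per multiply-accumulate
proj_flops = 2 * (2 * d * ell)          # shared down- plus up-projection
expert_flops = k * 2 * (2 * ell * m)    # k active two-matmul latent FFNs
overhead = proj_flops / expert_flops    # projections vs. expert compute
```

The projection overhead works out to roughly one percent of the active expert compute, while the routing-combination count grows by tens of orders of magnitude.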
5. Empirical Performance and Hyperparameter Choices
LatentMoE has been validated at scales from 16B to 95B total parameters (8B active):
| Model Type | $d$ | $m$ | $n_E$ | $k$ | $\ell$ | Active Params | MMLU-Pro | MMLU | Code | Math | Commonsense |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Standard MoE | 4096 | 16384 | 128 | 6 | – | 8.09B | 48.30 | 70.10 | 51.95 | 78.32 | 81.73 |
| LatentMoE (acc) | 4096 | 16384 | 512 | 24 | 1024 | 8.02B | 52.87 | 72.11 | 55.14 | 80.19 | 82.10 |
Nemotron 3 Super and Ultra use $\ell = d/4$ (i.e., $\ell = 1024$ for $d = 4096$) with expert count and fan-out expanded by $4\times$, yielding per-token routing to up to 22–24 experts. All improvements in accuracy occur with no measured degradation in inference throughput or per-token latency. Across five downstream evaluation categories, LatentMoE achieves 2–4 points absolute gain in accuracy relative to standard MoE (NVIDIA et al., 24 Dec 2025, Elango et al., 26 Jan 2026).
6. Comparative Analysis and Related Approaches
LatentMoE is distinct from earlier factorized MoE variants such as Mixture of Latent Experts (MoLAE, also “MoLE” (Liu et al., 29 Mar 2025)), though both exploit bottlenecked expert computation. MoLAE performs factorization of expert parameters via a latent intermediate and allows conversion of pretrained MoE weights through SVD-based schemes, targeting parameter efficiency. LatentMoE targets bandwidth and communication cost, using the saved resources not for parameter minimization but for scaling up the total expert count and fan-out, thereby expanding network expressivity at fixed hardware budget.
While other approaches (e.g., MoLAE) may reduce parameter count via factorization, LatentMoE adopts a hardware-aware focus: dimension reduction is used as an enabling lever for scaling expert diversity and the resultant accuracy per byte, rather than for model shrinkage (Liu et al., 29 Mar 2025, Elango et al., 26 Jan 2026).
7. Deployment Regimes, Limitations, and Future Directions
LatentMoE supports two principal operational regimes:
- Online, latency-critical inference (small batch): Memory bandwidth is dominant; reducing the routed tensor dimension from $d$ to $\ell$ directly improves step time.
- Offline, high-throughput serving (large batch): All-to-all communication volume dominates; sending $\ell$-dimensional embeddings minimizes the network bottleneck.
Latency, memory, and compute costs are minimized without sacrificing task accuracy. The architecture integrates transparently into hybrid blocks with Mamba-2 and attention, requiring no departure from standard MoE training recipes (e.g., NoisyTopK routing, load balancing, familiar optimizer and numerics).
Potential limitations include dependence on the selection of $\ell$ (too small an $\ell$ may bottleneck expressivity given the intrinsic feature rank) and on hardware-specific communication patterns. Open research directions involve dynamic or input-adaptive latent-bottleneck selection, further extension of latent routing to non-FFN subnetworks (attention, normalization), and investigation of nonlinear bottleneck projections (Elango et al., 26 Jan 2026, Liu et al., 29 Mar 2025).
References
- NVIDIA et al., "NVIDIA Nemotron 3: Efficient and Open Intelligence" (NVIDIA et al., 24 Dec 2025)
- "LatentMoE: Toward Optimal Accuracy per FLOP and Parameter in Mixture of Experts" (Elango et al., 26 Jan 2026)
- "MoLAE: Mixture of Latent Experts for Parameter-Efficient LLMs" (Liu et al., 29 Mar 2025)