Learned Multi-Layer VAMP (LMLVAMP)
- LMLVAMP is a hybrid estimator that combines model-based inference with data-driven neural denoisers to recover signals from complex, nonlinear, and quantized measurements.
- It integrates spectral priors with neural network–based denoising in a message-passing framework, leveraging fast Fourier transforms for efficient processing.
- The approach achieves significant NMSE reductions and approaches oracle performance in FR3 6G scenarios, effectively mitigating interference, saturation, and quantization effects.
Learned Multi-Layer Vector Approximate Message Passing (LMLVAMP) is a hybrid model-based/data-driven estimator designed for recovering a desired signal from nonlinear, quantized receiver observations in the presence of strong out-of-band (OOB) interference, front-end saturation nonlinearities, additive receiver noise, and finite-resolution quantization. LMLVAMP integrates spectral priors with neural network–based denoising within a principled message-passing algorithmic structure. Applications include upper mid-band (FR3, 7–24 GHz) wideband radio receivers for 6G, where spectrally separated interference and hardware-induced nonlinear distortions degrade performance beyond the capacity of conventional linear methods (Joy et al., 30 Jan 2026).
1. System and Observation Model
The system considers $N$-length time-domain samples comprising a superposition of frequency-domain sources:

$$\mathbf{z} = \mathbf{F}^{H} \sum_{k} \mathbf{x}_k,$$

where $\mathbf{F}^{H}$ is the unitary inverse discrete Fourier transform (IDFT), and each $\mathbf{x}_k$ is nonzero only within its designated frequency band $\mathcal{B}_k$, with $\mathcal{B}_j \cap \mathcal{B}_k = \emptyset$ for $j \neq k$.
The frequency-domain coefficients are assigned Gaussian priors:

$$x_{k,n} \sim \mathcal{CN}(0, \nu_k), \quad n \in \mathcal{B}_k.$$

In typical use, $k = 0$ denotes the desired user, and $k = 1$ indicates a spectrally separated interferer.
The receiver front-end applies a smooth, memoryless amplitude-compressing nonlinearity $g(\cdot)$, followed by additive white Gaussian noise and, optionally, scalar quantization:

$$\mathbf{r} = \mathcal{Q}\big(g(\mathbf{z} + \mathbf{w}_{\mathrm{pre}}) + \mathbf{w}_{\mathrm{post}}\big),$$

with $\mathbf{w}_{\mathrm{pre}} \sim \mathcal{CN}(0, \sigma_{\mathrm{pre}}^2 \mathbf{I})$ (pre-nonlinearity noise), $\mathbf{w}_{\mathrm{post}} \sim \mathcal{CN}(0, \sigma_{\mathrm{post}}^2 \mathbf{I})$ (post-nonlinearity noise), and $\tau$ the saturation threshold. For finite-resolution analog-to-digital conversion (ADC), a uniform scalar quantizer $\mathcal{Q}(\cdot)$ is applied elementwise.
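The observation chain can be sketched in a few lines of Python. This is a minimal illustration: the tanh saturation curve, the noise levels, the band layout, and the quantizer full scale are all assumptions for the sketch, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1024          # number of time-domain samples (illustrative)
tau = 1.0         # saturation threshold (assumption)

def saturate(u, tau=1.0):
    """Smooth, memoryless amplitude-compressing nonlinearity.

    Compresses the magnitude of each complex sample and preserves its
    phase; tanh stands in for the paper's unspecified saturation curve.
    """
    mag = np.abs(u)
    return tau * np.tanh(mag / tau) * np.exp(1j * np.angle(u))

def uniform_quantize(u, n_bits=10, full_scale=1.0):
    """Uniform scalar quantizer applied to real and imaginary parts."""
    step = 2 * full_scale / 2 ** n_bits
    q = lambda v: np.clip(np.round(v / step) * step, -full_scale, full_scale)
    return q(u.real) + 1j * q(u.imag)

# Two frequency-domain sources confined to disjoint bands
x = np.zeros((2, N), dtype=complex)
x[0, :N // 4] = rng.standard_normal(N // 4) + 1j * rng.standard_normal(N // 4)            # desired user
x[1, N // 2:3 * N // 4] = rng.standard_normal(N // 4) + 1j * rng.standard_normal(N // 4)  # interferer

z = np.fft.ifft(x.sum(axis=0), norm="ortho")   # unitary IDFT of the superposition
w_pre = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
w_post = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r = uniform_quantize(saturate(z + w_pre, tau) + w_post, n_bits=10)
```

The quantizer acts per real dimension, matching the scalar-ADC model; the saturation acts on magnitudes only, matching the amplitude-compression assumption.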
2. Multi-Layer VAMP Algorithmic Structure
Signal recovery is formulated as inference in a two-layer network:
- Layer 0 (Spectral): the Gaussian spectral prior on the frequency-domain coefficients $\mathbf{x}$, coupled to the time domain through the unitary IDFT.
- Layer 1 (Nonlinear): the measurement channel $\mathbf{r} = \mathcal{Q}\big(g(\mathbf{z} + \mathbf{w}_{\mathrm{pre}}) + \mathbf{w}_{\mathrm{post}}\big)$.
The classical Multi-Layer Vector Approximate Message Passing (ML-VAMP) algorithm alternates two denoising steps per iteration, transitioning between frequency and time domains via orthogonal transforms. The denoisers are augmented by Onsager-like corrections for improved convergence. In the Bayesian ML-VAMP update sequence, $g_0$ is the spectral (linear-Gaussian) denoiser, yielding, for each frequency bin $n$,

$$\hat{x}_n = \frac{\nu_n}{\nu_n + 1/\gamma_0}\, r_{0,n},$$

with average divergence $\alpha_0 = \frac{1}{N} \sum_n \frac{\nu_n}{\nu_n + 1/\gamma_0}$. The nonlinear denoiser $g_1$ is the componentwise conditional mean estimator $\hat{z}_n = \mathbb{E}[z_n \mid r_n]$, which lacks a closed form in the presence of saturation and quantization.
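The spectral denoiser's posterior mean and divergence have a simple closed form under the Gaussian prior; a minimal sketch (the function name and argument names are illustrative):

```python
import numpy as np

def spectral_denoiser(r, prior_var, msg_var):
    """Linear-Gaussian (Wiener) denoiser for one frequency bin.

    Posterior mean of x ~ CN(0, prior_var) given a pseudo-observation
    r = x + noise of variance msg_var. Bins outside a source's band have
    prior_var = 0 and therefore shrink to zero. Returns the estimate and
    the scalar gain, whose average over bins is the divergence used in
    the Onsager correction.
    """
    gain = prior_var / (prior_var + msg_var)
    return gain * r, gain

est, gain = spectral_denoiser(2.0, prior_var=1.0, msg_var=1.0)  # est 1.0, gain 0.5
```

Because the gain depends only on variances, the divergence is free to compute here, unlike for the nonlinear layer.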
3. Learned Neural Network Denoisers
LMLVAMP generalizes ML-VAMP by replacing the analytic denoisers $g_0$ and $g_1$ with small, trainable neural network denoisers $\tilde{g}_0$ and $\tilde{g}_1$. These networks learn to emulate minimum mean-squared error (MMSE) properties and estimate Onsager-correcting divergences.
- Spectral Message Updater ($\tilde{g}_0$): for each frequency bin $n$ at iteration $t$, the network produces an updated spectral estimate from per-bin features of the incoming message and its precision. Iteration-wide coefficients, shared across all bins within an iteration, supply the gain and divergence estimates.
- Nonlinear Denoiser ($\tilde{g}_1$): for each sample $n$, the network maps per-sample features of the observation and the incoming message to a denoised estimate. Both $\tilde{g}_0$ and $\tilde{g}_1$ are two-layer networks (64 sigmoid units with linear outputs).
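The denoiser shape described above, a 64-unit sigmoid hidden layer followed by a linear output, can be sketched as a plain forward pass. The feature vector and the random weights here are illustrative; in LMLVAMP the weights are trained end-to-end.

```python
import numpy as np

rng = np.random.default_rng(1)

class TwoLayerDenoiser:
    """Two-layer MLP matching the stated denoiser shape:
    64 sigmoid hidden units followed by a linear output layer.
    """
    def __init__(self, in_dim, out_dim, hidden=64):
        self.W1 = 0.1 * rng.standard_normal((hidden, in_dim))
        self.b1 = np.zeros(hidden)
        self.W2 = 0.1 * rng.standard_normal((out_dim, hidden))
        self.b2 = np.zeros(out_dim)

    def __call__(self, feats):
        h = 1.0 / (1.0 + np.exp(-(self.W1 @ feats + self.b1)))  # sigmoid layer
        return self.W2 @ h + self.b2                             # linear output

# Hypothetical per-sample features: observation, incoming message, precision.
# Output: a denoised estimate plus a divergence-related term (assumption).
g1 = TwoLayerDenoiser(in_dim=3, out_dim=2)
out = g1(np.array([0.4, -0.2, 1.5]))
```

Keeping the networks this small is what allows the divergence estimates to be learned without destabilizing the message-passing recursion.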
4. Algorithmic Workflow and Pseudocode
The LMLVAMP inference procedure unrolls for $T$ iterations as follows:
- Initialization: $\mathbf{r}_1^{(0)} = \mathbf{r}$, with initial precision $\gamma_1^{(0)}$.
- For $t = 0, \dots, T-1$:
- Nonlinear denoising: For all $n$, $\hat{z}_n^{(t)} = \tilde{g}_1\big(r_n, r_{1,n}^{(t)}, \gamma_1^{(t)}\big)$, with average divergence $\alpha_1^{(t)}$.
- Spectral transformation: $\mathbf{r}_0^{(t)} = \mathbf{F}\, \dfrac{\hat{\mathbf{z}}^{(t)} - \alpha_1^{(t)} \mathbf{r}_1^{(t)}}{1 - \alpha_1^{(t)}}$, $\gamma_0^{(t)} = \gamma_1^{(t)} \dfrac{1 - \alpha_1^{(t)}}{\alpha_1^{(t)}}$.
- Spectral denoising: $\hat{\mathbf{x}}^{(t)} = \tilde{g}_0\big(\mathbf{r}_0^{(t)}, \gamma_0^{(t)}\big)$, with average divergence $\alpha_0^{(t)}$.
- Message update: For all $n$, the Onsager-corrected residual $\big(\hat{x}_n^{(t)} - \alpha_0^{(t)} r_{0,n}^{(t)}\big) / \big(1 - \alpha_0^{(t)}\big)$ is formed.
- Next iterate: $\mathbf{r}_1^{(t+1)} = \mathbf{F}^{H}$ applied to the corrected residual, with $\gamma_1^{(t+1)} = \gamma_0^{(t)} \dfrac{1 - \alpha_0^{(t)}}{\alpha_0^{(t)}}$.
- User-band selection: retain $\hat{x}_n^{(T-1)}$ for $n \in \mathcal{B}_0$ and zero the remaining bins.
Forward and inverse FFT operations are leveraged for computational efficiency ($\mathcal{O}(N \log N)$ per iteration).
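A toy end-to-end unrolling with analytic stand-ins for the learned denoisers illustrates the message-passing bookkeeping. Everything here is an assumption made for the sketch: the tanh front end, the fixed-divergence nonlinear denoiser, the single desired band, and all constants.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 256, 3                                        # signal length, unrolled iterations
prior_var = np.zeros(N)
prior_var[:N // 4] = 1.0                             # one desired band, no interferer (toy)

# Ground truth and a saturated, noisy time-domain observation
x_true = np.sqrt(prior_var / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
z_true = np.fft.ifft(x_true, norm="ortho")
r_obs = (np.tanh(np.abs(z_true)) * np.exp(1j * np.angle(z_true))
         + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

r1, gam1 = r_obs.copy(), 1.0                         # initial time-domain message
for t in range(T):
    # Nonlinear denoising (toy stand-in: average of message and observation)
    z_hat = 0.5 * (r1 + r_obs)
    alpha1 = 0.5                                     # divergence of the toy denoiser
    # Onsager-corrected message, transformed to the frequency domain
    r0 = np.fft.fft((z_hat - alpha1 * r1) / (1 - alpha1), norm="ortho")
    gam0 = gam1 * (1 - alpha1) / alpha1
    # Spectral (linear-Gaussian) denoising per bin
    gain = prior_var / (prior_var + 1.0 / gam0)
    x_hat = gain * r0
    alpha0 = gain.mean()
    # Onsager-corrected message back to the time domain
    r1 = np.fft.ifft((x_hat - alpha0 * r0) / (1 - alpha0), norm="ortho")
    gam1 = gam0 * (1 - alpha0) / alpha0

x_user = x_hat * (prior_var > 0)                     # user-band selection
```

Replacing the two analytic denoisers with the trained networks, while keeping the transform and precision bookkeeping unchanged, recovers the LMLVAMP structure.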
5. Training Objectives and Optimization
Trainable parameters of $\tilde{g}_0$ and $\tilde{g}_1$ are optimized end-to-end via backpropagation through the $T$-step unrolled LMLVAMP. The total loss function is a convex combination of:
- Early-iteration loss: $\mathcal{L}_{\mathrm{early}} = \frac{1}{T-1} \sum_{t < T-1} \big\| \hat{\mathbf{x}}^{(t)} - \mathbf{x} \big\|^2$
- Final-iteration loss: $\mathcal{L}_{\mathrm{final}} = \big\| \hat{\mathbf{x}}^{(T-1)} - \mathbf{x} \big\|^2$
- Total loss: $\mathcal{L} = \lambda \mathcal{L}_{\mathrm{early}} + (1 - \lambda) \mathcal{L}_{\mathrm{final}}$ with $\lambda \in [0, 1]$
Optimization employs Adam with exponential learning-rate decay. Regularization is enforced by restricting network size and introducing weighted intermediate losses for training stability.
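The convex combination of intermediate and final losses can be sketched as follows; only the convex-combination structure comes from the text, while the uniform early-iteration weighting and the function name are assumptions.

```python
import numpy as np

def lmlvamp_loss(x_hats, x_true, lam=0.5):
    """Convex combination of early-iteration and final-iteration MSE.

    x_hats: list of per-iteration estimates from the unrolled network,
    ordered from first to last iteration. lam in [0, 1] trades off the
    (uniformly averaged) early losses against the final-iteration loss.
    """
    mse = lambda a, b: np.mean(np.abs(a - b) ** 2)
    early = np.mean([mse(x, x_true) for x in x_hats[:-1]])
    final = mse(x_hats[-1], x_true)
    return lam * early + (1.0 - lam) * final
```

Exposing the intermediate estimates to the loss is what makes the weighted early-iteration terms act as the training-stability regularizer described above.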
6. Performance and Evaluation in FR3 Coexistence Scenarios
Simulated configurations spanned a range of signal lengths and band allocations, pre- and post-nonlinearity noise levels, saturation SNRs, signal SNRs, interference-to-noise ratios (INR), and 10-bit quantization with 12 dB backoff. Competing estimators were LMLVAMP-K/U (with/without known interferer band), linear Wiener baselines, and an "oracle" ideal nonlinearity-inversion bound.
Metrics:
- Achievable rate lower bound, computed from the correlation coefficient $\rho$ between the estimate and the ground truth
- Normalized MSE (NMSE), $\big\| \hat{\mathbf{x}} - \mathbf{x} \big\|^2 / \| \mathbf{x} \|^2$
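Both metrics can be computed directly from an estimate and the ground truth. The rate expression below is a standard correlation-based Gaussian bound and may differ from the paper's exact formula; the function names are illustrative.

```python
import numpy as np

def nmse_db(x_hat, x_true):
    """Normalized MSE in dB: ||x_hat - x_true||^2 / ||x_true||^2."""
    return 10 * np.log10(np.sum(np.abs(x_hat - x_true) ** 2)
                         / np.sum(np.abs(x_true) ** 2))

def rate_lower_bound(x_hat, x_true):
    """Achievable-rate lower bound from the empirical correlation
    coefficient rho between estimate and truth, log2(1 / (1 - rho^2))
    bits per complex symbol (a standard Gaussian-signaling bound)."""
    rho = np.abs(np.vdot(x_true, x_hat)) / (
        np.linalg.norm(x_true) * np.linalg.norm(x_hat))
    return np.log2(1.0 / (1.0 - rho ** 2))
```

A perfect estimate drives the NMSE to negative infinity and the rate bound to infinity, so both metrics are reported over noisy, finite-accuracy runs.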
Key results:
- LMLVAMP-K approached oracle rates within two iterations at the evaluated INR levels.
- LMLVAMP-U achieved a 20 dB NMSE reduction compared to linear methods in saturation-dominated scenarios.
- With 10-bit quantization, LMLVAMP retained a 10 dB advantage over linear approaches.
- Additional algorithmic unfolding (more unrolled iterations) yielded incremental performance improvements.
- Ablations that fixed selected learned quantities across iterations incurred negligible performance loss, indicating stable convergence.
LMLVAMP's $\mathcal{O}(N \log N)$ per-iteration complexity and compact parameterization (small neural nets) demonstrate scalability to large systems and robust gains in realistic 6G coexistence conditions (Joy et al., 30 Jan 2026).
7. Significance and Implications
LMLVAMP exemplifies the hybridization of model-based inference (structured priors, orthogonal transforms) and machine-learned components (data-driven denoisers) within message-passing frameworks. By incorporating spectral priors and leveraging neural denoisers to bypass intractable conditional expectations due to nonlinearities and quantization, LMLVAMP successfully bridges practical front-end hardware constraints and advanced signal recovery objectives. This architecture is particularly suitable for wideband communications in spectrally dense environments, such as future FR3 6G scenarios, where classical linear estimators fail to address nonlinear spectral leakage and quantization artifacts.
A plausible implication is the extensibility of LMLVAMP principles to other nonlinear or quantized inference tasks in communications and signal processing, given its stability, computational efficiency, and end-to-end trainability.