
VMC Framework: Autoregressive Row-Wise Sampling

Updated 29 January 2026
  • The Variational Monte Carlo (VMC) framework is a stochastic technique for simulating quantum many-body systems, approximating ground-state properties through variational wavefunctions.
  • It employs autoregressive row-wise sampling to enable rejection-free configuration sampling, improving convergence and reducing autocorrelation relative to local updates.
  • Recent advances extend VMC through tensor network states such as PEPS and streaming methods for time series, supporting scalable and statistically robust inference.

The Variational Monte Carlo (VMC) framework constitutes a class of stochastic optimization and sampling techniques broadly utilized for the simulation and analysis of quantum many-body systems and statistical models. VMC approximates ground state properties through expectation value estimation over variational wavefunction classes, employing scalable sampling algorithms to address the exponential complexity inherent in high-dimensional configurations. In recent advances, VMC has expanded to leverage tensor network states such as Projected Entangled Pair States (PEPS) and to adopt autoregressive, row-wise sampling paradigms, which markedly enhance mixing rates and mitigate critical slowing-down in frustrated or near-critical regimes (Chen et al., 28 Jan 2026). The VMC methodological core underpins a diverse array of contemporary data reduction, streaming, and statistical inference techniques, spanning quantum systems, time series analysis, and randomized numerical linear algebra.

1. Mathematical Formulation and Configuration Sampling

In the VMC framework, target observables such as energy or correlation functions are estimated over a distribution $P(\sigma)$ proportional to the squared modulus of a variational wavefunction, $|\Psi(\sigma)|^2$. For lattice systems structured as $R \times L$ grids, spin configurations $\sigma = \{s_{x,y}\}$ are sampled from this distribution. Recent innovations propose a factorization:

$$P(\sigma) = \prod_{r=1}^{R} P(\sigma_r \mid \sigma_{<r}),$$

where $\sigma_r$ represents the spins of row $r$, and $\sigma_{<r}$ those of preceding rows. Conditional row probabilities are computed by contracting the PEPS network with fixed upper boundary (MPS_upper), sampling within the current row, and compressing lower rows (MPS_lower) (Chen et al., 28 Jan 2026).

This row-wise, autoregressive structure admits a further within-row decomposition:

$$P(\sigma_{r} \mid \sigma_{<r}) = \prod_{i=1}^{L} P(s_{r,i} \mid \sigma_{<r}, s_{r,1 \ldots i-1}),$$

where each $P(s_{r,i} \mid \ldots)$ is computed with normalized single-layer tensor contractions. The result is an exact, rejection-free sampler for the prescribed distribution, sidestepping the Metropolis accept-reject procedure.
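The chain-rule factorization above can be illustrated end to end on a toy distribution whose joint is stored as a dense array; this is a minimal sketch under that assumption (in real VMC the conditionals come from PEPS contractions, not an explicit joint), with all function names ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint distribution over 4 binary spins (a 2x2 grid, flattened).
# In actual VMC this role is played by |Psi(sigma)|^2 from a PEPS contraction.
joint = rng.random((2, 2, 2, 2))
joint /= joint.sum()

def sample_autoregressive(joint, rng):
    """Draw one exact, rejection-free sample via the chain rule:
    P(s) = prod_i P(s_i | s_{<i})."""
    n = joint.ndim
    config = []
    for i in range(n):
        sub = joint[tuple(config)]                       # slice on sampled prefix
        marg = sub.sum(axis=tuple(range(1, sub.ndim)))   # marginal over future sites
        p = marg / marg.sum()                            # conditional P(s_i | prefix)
        config.append(rng.choice(2, p=p))
    return tuple(config)

# Empirical frequencies match the target joint: every draw is accepted.
counts = np.zeros((2, 2, 2, 2))
for _ in range(50_000):
    counts[sample_autoregressive(joint, rng)] += 1
print(np.abs(counts / counts.sum() - joint).max())  # small for an exact sampler
```

Because each conditional is normalized before sampling, no accept-reject step is ever needed; the loop produces one valid configuration per pass, exactly as in the row-wise scheme.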

2. Algorithmic Paradigms: Autoregressive Row-Wise Update

The paradigm shift from sequential local spin-flip Metropolis updates to autoregressive row-wise sampling fundamentally improves the efficiency and effectiveness of VMC techniques. In the row-wise scheme:

  1. Precompute upper and lower boundary environments as MPS of bond dimension $\chi$.
  2. Sequentially sample each spin in a row using its exact conditional, updating the environment at each step.
  3. Iterate over all rows to produce a full configuration in a single pass.

This process yields rejection-free proposals: each sampled configuration is drawn exactly from the target distribution without an explicit acceptance step. The computational scaling is $O(R D^6 \chi^2 + N D^4 \chi^2)$ per full configuration, with $D$ the PEPS tensor bond dimension and $N = R L$ the total site count (Chen et al., 28 Jan 2026).
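To make this scaling concrete, a small illustrative helper (ours, with constants dropped) shows how the two terms depend on $D$ and $\chi$:

```python
def rowwise_cost(R, L, D, chi):
    """Leading-order operation count O(R*D^6*chi^2 + N*D^4*chi^2)
    for one full configuration, with N = R*L sites (constants dropped)."""
    N = R * L
    return R * D**6 * chi**2 + N * D**4 * chi**2

# Doubling the PEPS bond dimension D multiplies the boundary term by 2^6 = 64,
# so D dominates the budget long before the lattice size does.
base = rowwise_cost(R=10, L=10, D=4, chi=16)
print(rowwise_cost(R=10, L=10, D=8, chi=16) / base)
```

The $D^6$ boundary term is why practical runs keep $D$ modest and invest instead in $\chi$, which enters only quadratically.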

3. Comparative Analysis: Local Metropolis Versus Row-Wise Sampling

Local Metropolis updates propose single-site flips, recalculating the ratio $|\Psi(\sigma')|^2 / |\Psi(\sigma)|^2$ via tensor contractions. This methodology suffers from long autocorrelation times, especially near criticality or in glass-like energy landscapes, leading to slow mixing and inefficient sampling.

In contrast, autoregressive row-wise updates exhibit the following properties:

  • Near critical points (e.g., the 2D transverse-field Ising model), row-wise sampling achieves autocorrelation time $\tau_\text{row} \approx 1$ for all $L$, while local Metropolis exhibits $\tau_\text{local} \sim L^{2.1}$ (Chen et al., 28 Jan 2026).
  • In spin glass landscapes, row-wise updates suppress the exponential scaling of $\tau$ observed in local approaches, enabling faster equilibration and more stable variational optimization.
  • Hybrid strategies combining row-wise and local sweeps further reduce $\tau$ and improve optimization convergence, yielding lower ground-state energies and narrower distributions of observables.
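The autocorrelation contrast can be checked empirically. The sketch below estimates the integrated autocorrelation time of a scalar chain, comparing independent draws (the rejection-free regime) against a slowly mixing AR(1) surrogate standing in for local Metropolis; the surrogate process and the lag window are illustrative assumptions, not from the paper:

```python
import numpy as np

def integrated_autocorr_time(x, window=200):
    """tau = 1 + 2 * sum_k rho(k), with the sum truncated at `window` lags."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var, n = x.var(), len(x)
    rho = [np.dot(x[:n - k], x[k:]) / ((n - k) * var) for k in range(1, window)]
    return 1.0 + 2.0 * sum(rho)

rng = np.random.default_rng(1)
n = 100_000

# Rejection-free autoregressive sampling yields independent draws: tau ~ 1.
iid_chain = rng.normal(size=n)

# A slowly mixing AR(1) chain (coefficient 0.99) mimics local updates,
# whose exact tau would be (1 + 0.99) / (1 - 0.99) = 199.
slow = np.empty(n)
slow[0] = 0.0
for t in range(1, n):
    slow[t] = 0.99 * slow[t - 1] + rng.normal()

print(integrated_autocorr_time(iid_chain))  # close to 1
print(integrated_autocorr_time(slow))       # much larger, order 10^2 here
```

The gap between the two estimates is the practical payoff of rejection-free sampling: each configuration carries roughly one effective sample, instead of one per hundreds of correlated sweeps.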

4. Extensions and Connections to Streaming and Time Series Models

The VMC row-wise framework conceptually aligns with recent advances in online leverage-score-based sampling and streaming data reduction for time series analysis. The Sequential Leveraging Sampling (SLS) method for streaming autoregressive (AR) models (Xie et al., 25 Sep 2025) employs blocked autoregressive sampling: a block start is selected at random according to leverage scores, and the block is expanded via a sequential stopping rule until sufficient information has accumulated. The sampled block admits efficient least-squares or M-estimator inference, with guaranteed asymptotic normality of the parameter estimators. The extension to nonlinear AR models substitutes score vectors and information thresholds, maintaining the same statistical guarantees.
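As a minimal sketch of the leverage scores that drive such block selection (the SLS stopping rule itself is not reproduced, and the helper names are ours), one can compute exact leverage scores for an AR($p$) lagged design matrix:

```python
import numpy as np

def ar_design(x, p):
    """Lagged design matrix for AR(p): row t holds (x[t-1], ..., x[t-p])."""
    n = len(x)
    return np.column_stack([x[p - j - 1:n - j - 1] for j in range(p)])

def leverage_scores(A):
    """Diagonal of the hat matrix A (A^T A)^{-1} A^T, computed via a thin QR."""
    Q, _ = np.linalg.qr(A)
    return np.einsum('ij,ij->i', Q, Q)

rng = np.random.default_rng(2)
n, p = 5000, 3
x = np.empty(n)
x[:p] = rng.normal(size=p)
for t in range(p, n):  # simulate a stable AR(3) process
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + 0.1 * x[t - 3] + rng.normal()

A = ar_design(x, p)
scores = leverage_scores(A)
# Leverage scores sum to the column rank p; the rows with the largest scores
# are the informative ones a leverage-based sampler favours as block starts.
print(scores.sum())  # = p up to floating-point error
```

In a streaming setting these exact scores would be replaced by online approximations, but the sum-to-rank property and the role of high-leverage rows carry over unchanged.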

Analogous approaches in online row sampling (Cohen et al., 2016) and efficient RandNLA algorithms for Big Time Series Data (Eshragh et al., 2019) utilize row-wise autoregressive probability computations, leverage score approximations, and adaptive sampling probabilities to construct reduced sketches yielding spectral or statistical approximations with provable bounds.

5. Benchmark Results and Performance

Empirical benchmarks highlight the performance advantages of the VMC row-wise framework:

  • In the 2D transverse-field Ising model, row-wise and hybrid samplers converge in $O(1)$ steps independent of system size, while Metropolis diverges for large $L$.
  • In quantum spin glasses, row-wise and hybrid updates achieve lower ground-state energies and reduced variance compared to local Metropolis, indicating improved exploration of configuration space and avoidance of metastable traps (Chen et al., 28 Jan 2026).
  • In streaming AR models, SLS precisely identifies seismic events and temporal dependence structures in macroseismic and microseismic datasets, confirming efficiency and statistical robustness (Xie et al., 25 Sep 2025).
  • RandNLA and LSAR demonstrate scalability for fitting high-order AR models ($p = 20$–$200$) on datasets with $n = 10^6$–$10^7$, recovering models with near-optimal error and substantial runtime reduction (Eshragh et al., 2019).

6. Practical Implementation and Parameter Tuning

Key considerations for practical deployment include:

  • Selection of the PEPS bond dimension $D$ and environment bond dimension $\chi$, with $\chi \approx 3$–$5D$ sufficient for accurate conditional sampling (Chen et al., 28 Jan 2026).
  • Learning rate $\eta$ typically in $[0.05, 0.2]$, with decay schedules to stabilize convergence.
  • Batch sizes between $500$ and $2000$ samples are used per gradient update to control noise.
  • In streaming AR models, the SLS block size is controlled by the information threshold $c$, which determines estimation accuracy, while the initial pilot sample size $n_0$ sets the precision matrix computation cost (Xie et al., 25 Sep 2025).
  • In LSAR, the error $\varepsilon$ and failure probability $\delta$ are tuned to trade off runtime and statistical accuracy, the sample size $s$ is chosen as $O(p \log p / \varepsilon^2)$, and per-iteration guarantees are maintained with union bounds (Eshragh et al., 2019).
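A small illustrative helper makes the $\varepsilon$/$\delta$/$s$ trade-off concrete; the leading constant and the exact $\delta$-dependence below are placeholders, not the theorem-level constants:

```python
import math

def sample_size(p, eps, delta, c=1.0):
    """Illustrative sample size s = c * p * log(p / delta) / eps^2,
    following the O(p log p / eps^2) scaling (constant c is a placeholder)."""
    return math.ceil(c * p * math.log(p / delta) / eps**2)

# Halving eps roughly quadruples the required sample size,
# which is the dominant cost lever in practice.
print(sample_size(p=100, eps=0.1, delta=0.01))
print(sample_size(p=100, eps=0.05, delta=0.01))
```

The quadratic dependence on $1/\varepsilon$ means accuracy targets should be set as loosely as the downstream inference tolerates.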

7. Theoretical Guarantees and Statistical Properties

The row-wise VMC and its associated methods inherit strong theoretical guarantees:

  • Rejection-free row-wise sampling produces exact draws from the target distribution, bypassing the Markov chain mixing limitations of local moves.
  • Central limit convergence and asymptotic normality of parameter estimators are achieved in the block sampling framework for both linear and nonlinear AR models (Xie et al., 25 Sep 2025).
  • Online sampling algorithms yield spectral approximations $(1 \pm \varepsilon) A^T A \pm \delta I$ with provably optimal sample and memory complexity (Cohen et al., 2016).
  • LSAR formally bounds leverage score approximation errors and guarantees recovery of the AR model order and parameters within $(1 + O(\sqrt{\varepsilon}))$ accuracy, exploiting block-Hankel matrix structure (Eshragh et al., 2019).
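The spectral-approximation property can be verified numerically on a toy matrix: sample rows with probabilities proportional to exact leverage scores, rescale them, and compare $A_s^T A_s$ with $A^T A$. This batch sketch is ours, not the streaming algorithm of Cohen et al.:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 20_000, 10
A = rng.normal(size=(n, d))

# Exact leverage scores via a thin QR (the online algorithm approximates these).
Q, _ = np.linalg.qr(A)
lev = np.einsum('ij,ij->i', Q, Q)
probs = lev / lev.sum()

# Sample s rows i.i.d. with probability p_i and rescale by 1/sqrt(s * p_i),
# which makes As^T As an unbiased estimator of A^T A.
s = 2000
idx = rng.choice(n, size=s, p=probs)
As = A[idx] / np.sqrt(s * probs[idx])[:, None]

# Relative spectral error ||As^T As - A^T A||_2 / ||A^T A||_2.
G, Gs = A.T @ A, As.T @ As
err = np.linalg.norm(Gs - G, 2) / np.linalg.norm(G, 2)
print(err)  # small relative error despite keeping only 10% of the rows
```

The $1/\sqrt{s p_i}$ rescaling is what turns the subsample into a spectral sketch; without it the estimator would be biased toward the high-leverage rows.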

Taken together, the Variational Monte Carlo framework’s progression toward row-wise, autoregressive, and efficiently sketched sampling architectures establishes a foundational methodology for scalable, statistically principled optimization and inference in quantum physics, time series analysis, and large-scale data reduction.
