
Expectation-Free Hamiltonian Learning

Updated 19 February 2026
  • Expectation-free Hamiltonian learning is a method for quantum Hamiltonian learning that uses only single-shot projective measurements, avoiding expectation-value estimation.
  • It employs local randomization and scheduled short-time evolutions to reconstruct Hamiltonians in a local Pauli basis, surpassing the standard quantum limit.
  • The approach achieves transient Heisenberg-limited scaling for parallel multi-parameter estimation without relying on entanglement or coherent joint measurements.

Expectation-free Hamiltonian learning refers to an approach for quantum Hamiltonian learning that eliminates the need for expectation-value estimation, relying exclusively on single-shot projective measurement data. In the context of recent research, specifically "Resource-Free Quantum Hamiltonian Learning Below the Standard Quantum Limit" (Baran et al., 28 Jul 2025), this term characterizes protocols that reconstruct an unknown Hamiltonian, decomposed in a local Pauli basis, using only short-time product-state trajectories, one-local randomized pre-processing, projective measurements, and a single-shot maximum-likelihood estimator (MLE). These methods require no entanglement, coherent measurements, or dynamical control, yet can surpass the standard quantum limit (SQL) and achieve a transient Heisenberg-limited regime for parameter estimation.

1. Trajectory-Based Expectation-Free Protocol

The protocol begins by fixing a reference product state $\ket{\psi_0} = \ket{0}^{\otimes n}$. Each experimental run generates a "spread" state via local random unitaries:

$$\ket{\psi_0^{(r)}} = U_{\rm spread}^{(r)}\ket{\psi_0},\quad U_{\rm spread}^{(r)} = \bigotimes_{j=1}^n R_z(\xi_j)\,R_y(\chi_j)\,R_z(\phi_j),$$

where $\{\xi_j, \chi_j, \phi_j\}$ are sampled independently from the single-qubit Haar measure. This local randomization delocalizes the spectral support and ensures sensitivity to each Pauli term at short times.

A schedule of evolution times $\{t_k\}$ is chosen according to $t_k = \Delta t\,k^\alpha$ ($k \in \{1,\dots,m_t\}$, $\alpha > -1$), with the total evolution time $T_{\rm tot} = \sum_k t_k$ acting as the resource budget. For each spread state, each evolution time, and each of a fixed set of $K$ random product-Pauli measurement bases $p_\ell \in \{X,Y,Z\}^n$, the state is evolved under the target Hamiltonian and measured once in basis $p_\ell$, yielding single-shot outcomes $b_{r,k,\ell}$. No expectation values over measurement outcomes are formed at any stage.
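The data-collection loop above can be sketched numerically for a small system. This is a minimal simulation sketch, not the authors' code: the helper names (`haar_spread_unitary`, `single_shot`, `TO_Z`) are mine, and numpy/scipy are assumed.

```python
import numpy as np
from scipy.linalg import expm

def rz(a):
    return np.diag([np.exp(-0.5j * a), np.exp(0.5j * a)])

def ry(a):
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def haar_spread_unitary(n, rng):
    """Tensor product of Haar-random single-qubit rotations R_z(xi) R_y(chi) R_z(phi)."""
    U = np.array([[1.0 + 0j]])
    for _ in range(n):
        xi, phi = rng.uniform(0, 2 * np.pi, size=2)
        chi = np.arccos(rng.uniform(-1, 1))  # cos(chi) uniform gives the Haar measure
        U = np.kron(U, rz(xi) @ ry(chi) @ rz(phi))
    return U

# Single-qubit rotations mapping each Pauli eigenbasis onto the computational basis.
H1 = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1.0 + 0j, 1j])
TO_Z = {"X": H1, "Y": H1 @ S.conj().T, "Z": np.eye(2, dtype=complex)}

def single_shot(H, n, t, basis, rng):
    """One run: spread |0...0>, evolve for time t, measure once in a product-Pauli basis."""
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = 1.0
    psi = haar_spread_unitary(n, rng) @ psi   # local randomization
    psi = expm(-1j * H * t) @ psi             # short-time evolution
    M = np.array([[1.0 + 0j]])
    for p in basis:
        M = np.kron(M, TO_Z[p])
    probs = np.abs(M @ psi) ** 2
    return int(rng.choice(2**n, p=probs / probs.sum()))  # one projective outcome

# Toy example: n = 2, H = X1 X2, schedule t_k = dt * k**alpha with alpha = 1.
rng = np.random.default_rng(0)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.kron(X, X)
dt, alpha = 0.01, 1.0
outcomes = [single_shot(H, 2, dt * k**alpha, ("X", "Z"), rng) for k in range(1, 9)]
```

Note that only the raw outcome indices are retained, mirroring the protocol's refusal to form empirical averages.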

2. Maximum-Likelihood Estimation without Expectation Values

Given a dataset of single-shot outcomes $D = \{b_{r,k,\ell}\}$, the trial Hamiltonian $\hat{H}(\theta)$ is parametrized as a Hermitian matrix via a complex lower-triangular embedding $A(\theta)$:

$$\hat{H}_{ij}(\theta) = \begin{cases} A_{ij}(\theta), & i \ge j, \\ \overline{A_{ji}(\theta)}, & i < j, \end{cases}$$

ensuring Hermiticity for all $\theta$. For each datum, the likelihood is

$$P(b_{r,k,\ell} \mid \theta) = \left| \bra{b_{r,k,\ell}} e^{-i \hat{H}(\theta) t_k} \ket{\psi_0^{(r)}} \right|^2.$$

The total (negative) log-likelihood,

$$\mathcal{L}_D(\theta) = -\frac{1}{|D|} \sum_{r,k,\ell} \ln \left|\bra{b_{r,k,\ell}} e^{-i \hat{H}(\theta) t_k} \ket{\psi_0^{(r)}}\right|^2,$$

is minimized using gradient-based optimization (with backpropagation through the "extended-parameter embedding" neural network mapping). Importantly, all inference is performed directly from single-shot data—there is no construction of empirical expectation values at any point.
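The Hermitian embedding and the single-shot negative log-likelihood can be written down directly. This is a minimal sketch under stated assumptions (plain numpy/scipy instead of the paper's neural-network mapping; function names are mine); a real implementation would use automatic differentiation for the gradient.

```python
import numpy as np
from scipy.linalg import expm

def hermitian_from_params(theta, dim):
    """Embed a real parameter vector into a Hermitian matrix via a complex
    lower-triangular matrix A: H_ij = A_ij for i >= j, conj(A_ji) for i < j.
    Requires dim**2 real parameters: dim diagonal + dim*(dim-1) off-diagonal."""
    H = np.zeros((dim, dim), dtype=complex)
    idx = 0
    for i in range(dim):
        for j in range(i + 1):
            if i == j:
                H[i, j] = theta[idx]                       # real diagonal
                idx += 1
            else:
                H[i, j] = theta[idx] + 1j * theta[idx + 1]  # complex lower triangle
                H[j, i] = np.conj(H[i, j])                  # Hermitian mirror
                idx += 2
    return H

def neg_log_likelihood(theta, data, dim):
    """Average negative log-likelihood over single-shot records.

    data: list of (outcome_index, t_k, spread_state) tuples; no empirical
    expectation values are ever formed, only per-shot probabilities."""
    nll = 0.0
    for b, t, psi in data:
        amp = (expm(-1j * hermitian_from_params(theta, dim) * t) @ psi)[b]
        nll -= np.log(np.abs(amp) ** 2 + 1e-12)  # guard against log(0)
    return nll / len(data)
```

Minimizing this objective over `theta` (e.g. with `scipy.optimize.minimize`) recovers the Hamiltonian estimate directly from shot records.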

3. Transient Heisenberg-Limited Scaling and Error Analysis

The protocol exhibits error scaling in total evolution time that surpasses the SQL for short-time probes. For an unknown Pauli-decomposed Hamiltonian,

$$H(\theta) = \sum_{j=1}^d \theta_j P_j,\quad P_j \in \mathcal{P}_n,$$

the single-parameter Fisher information at probe time tt is given by

$$\mathcal{I}_j(t) = \sum_{b} \frac{\left(\partial_{\theta_j} p_b(t)\right)^2}{p_b(t)},\quad p_b(t) = \left|\langle b \,|\, e^{-i H(\theta) t} \,|\, \psi_{\rm spread} \rangle\right|^2.$$

Averaging over randomizations, $\mathbb{E}[\mathcal{I}_j(t)] = \Theta(t^2)$ in the short-time regime ($t = o(1)$), indicating Heisenberg-limited scaling: from the Cramér–Rao bound, $\Delta \theta_j = O(t^{-1})$.
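The $\Theta(t^2)$ growth is easy to check on a one-qubit toy model. In the sketch below (names `probs` and `fisher` are mine, not from the paper), $H = \theta X$ acts on $\ket{0}$ measured in $Z$, for which $p_1(t) = \sin^2(\theta t)$ and the Fisher information works out to exactly $4t^2$:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)

def probs(theta, t):
    """Z-basis outcome probabilities for |0> evolved under H = theta * X."""
    psi = expm(-1j * theta * t * X) @ np.array([1.0, 0.0], dtype=complex)
    return np.abs(psi) ** 2

def fisher(theta, t, eps=1e-6):
    """Single-parameter classical Fisher information via central finite differences."""
    dp = (probs(theta + eps, t) - probs(theta - eps, t)) / (2 * eps)
    return float(np.sum(dp**2 / probs(theta, t)))

# p_1(t) = sin^2(theta * t) gives I(t) = 4 t^2 exactly, so the ratio I(t)/t^2
# is constant, illustrating the Theta(t^2) short-time Fisher growth.
for t in (0.05, 0.1, 0.2):
    print(t, fisher(0.7, t) / t**2)   # ratio stays close to 4
```

The general protocol replaces this analytic toy with Haar-randomized spread states, but the quadratic-in-$t$ information growth is the same mechanism.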

For a protocol summing over mtm_t times tkt_k, the total Fisher information satisfies

$$I_{\rm tot} = \sum_{k=1}^{m_t} \mathcal{I}(t_k) = \Theta\!\left(T_{\rm tot}^{\,p}\right), \quad p = \frac{\alpha \gamma_0 + 1}{\alpha + 1} + O(1/m_t), \quad \gamma_0 = 2.$$

Consequently,

$$\Delta \theta_j = O\!\left(I_{\rm tot}^{-1/2}\right) = O\!\left(T_{\rm tot}^{-p/2}\right),$$

enabling continuous interpolation from SQL error scaling ($T^{-1/2}$) to the Heisenberg limit ($T^{-1}$) by tuning $\alpha$.
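The interpolation exponent $\beta(\alpha) = p/2$ can be tabulated directly; a minimal sketch (the function name `beta` is mine):

```python
def beta(alpha, gamma0=2.0):
    """Error-scaling exponent beta = p/2 with p = (alpha*gamma0 + 1)/(alpha + 1),
    from the evolution-time schedule t_k = dt * k**alpha."""
    return 0.5 * (alpha * gamma0 + 1.0) / (alpha + 1.0)

# alpha -> 0 recovers the SQL (beta = 1/2); alpha -> infinity approaches the
# Heisenberg limit (beta -> 1); alpha = 1 gives the intermediate value 3/4.
print(beta(0.0), beta(1.0), beta(100.0))
```

The value $\beta(1) = 0.75$ matches the single-parameter bound quoted in the empirical results below.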

4. Parallel Multi-Parameter Estimation without Structural Priors

Each short-time probability is generically sensitive to all Hamiltonian parameters. For Pauli string $P_a$ and measurement outcome $\ket{b}$,

$$p(b,t) \approx |\langle b \,|\, \psi_{\rm spread} \rangle|^2 + 2t\, \Im\!\left[\sum_a \theta_a\, \overline{\langle b \,|\, \psi_{\rm spread}\rangle}\, \langle b \,|\, P_a \,|\, \psi_{\rm spread}\rangle\right].$$

Owing to full support under Haar randomness, all overlaps $\langle b | P_a | \psi_{\rm spread} \rangle$ are nonzero with probability one, yielding generic sensitivity.

Averaging the multi-parameter Fisher matrix over many spread states,

$$\lim_{R \to \infty} \frac{1}{R} \sum_{r=1}^R \mathcal{I}_r(\theta) = \mathrm{diag}(c_1, \dots, c_d), \quad c_j > 0,$$

so the Fisher matrix is statistically diagonalized: all parameters become uncorrelated and are recoverable in parallel at the Heisenberg rate. No term isolation or structural assumptions (e.g., sparsity or commutativity) are required.
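The averaging can be illustrated numerically for a single qubit with $H = \theta_1 X + \theta_2 Y + \theta_3 Z$: averaging finite-difference Fisher matrices over Haar-random spread states pushes the off-diagonal entries toward zero. This is a toy sketch under stated assumptions (fixed $Z$-basis measurement, my own helper names, Monte Carlo averaging), not the paper's computation:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
PAULIS = [X, Y, Z]

def probs(theta, t, psi):
    """Z-basis outcome probabilities after evolving psi under H = sum_j theta_j P_j."""
    Ham = sum(th * P for th, P in zip(theta, PAULIS))
    return np.abs(expm(-1j * Ham * t) @ psi) ** 2

def fisher_matrix(theta, t, psi, eps=1e-6):
    """3x3 classical Fisher matrix via central finite differences."""
    grads = []
    for j in range(3):
        e = np.zeros(3)
        e[j] = eps
        grads.append((probs(theta + e, t, psi) - probs(theta - e, t, psi)) / (2 * eps))
    G = np.array(grads)                                   # rows: d p_b / d theta_j
    p = np.clip(probs(theta, t, psi), 1e-12, None)
    return (G / p) @ G.T                                  # I_jk = sum_b dp_j dp_k / p_b

def random_spread_state(rng):
    """Haar-random single-qubit pure state."""
    chi = np.arccos(rng.uniform(-1, 1))
    phi = rng.uniform(0, 2 * np.pi)
    return np.array([np.cos(chi / 2), np.exp(1j * phi) * np.sin(chi / 2)])

rng = np.random.default_rng(2)
theta, t = np.array([0.3, -0.5, 0.8]), 0.05
avg = sum(fisher_matrix(theta, t, random_spread_state(rng)) for _ in range(2000)) / 2000
print(np.round(avg, 6))   # off-diagonal entries average toward zero
```

With more samples the off-diagonal residue shrinks as $O(1/\sqrt{R})$, the finite-$R$ analogue of the diagonal limit above.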

5. Empirical Validation on One-Dimensional Spin Chains

The method was benchmarked on disordered, anisotropic Heisenberg models for 1D spin-$1/2$ chains, including:

  • XYZ nearest-neighbor: $H = \sum_{i=1}^{N-1}\left[J^x_i X_i X_{i+1} + J^y_i Y_i Y_{i+1} + J^z_i Z_i Z_{i+1}\right] + \sum_i h_i X_i$.
  • XYZ2: incorporates arbitrary local fields and next-nearest-neighbor couplings $K_i^\nu\, \nu_i \nu_{i+2}$ for $\nu \in \{X, Y, Z\}$.
  • XYZ3: adds three-body couplings $\nu_i \nu_{i+1} \nu_{i+2}$.
  • Gapless XXZ: $H = \sum_i (X_i X_{i+1} + Y_i Y_{i+1} + \Delta Z_i Z_{i+1})$, $|\Delta| \le 1$.

The probe ensemble used $R = 32$ spread states, $m_t \leq 8$ evolution times $t_k = 0.01\,k^\alpha$, $K = 25$ random Pauli bases, and a single shot per basis. Fitting the reconstruction error as $\varepsilon \propto T_{\rm tot}^{-\beta}$ yields $\beta \approx 0.66$ at $\alpha = 1.0$, exceeding the SQL value ($\beta = 0.5$). Varying $\alpha$ produces error scaling $\beta(\alpha) = \frac{1}{2}\,\frac{\alpha \gamma_0 + 1}{\alpha + 1}$, matching theoretical predictions. Increasing $R$ pushes $\beta$ toward the single-parameter bound ($\beta = 0.75$ for $\alpha = 1$), confirming statistical independence of the parameter estimates.

Crucially, only one shot per measurement is sufficient to achieve these super-SQL scalings in practice.

6. Experimental Considerations and Implementation

The expectation-free Hamiltonian learning protocol does not require entanglement, coherent joint measurements, or dynamical multi-qubit control. Random local pre-rotations are sufficient, implemented independently on each qubit. The method’s minimal experimental requirements—short-time product-state trajectories, local randomization, scheduled single-shot Pauli measurements, and single-shot MLE—make it suitable for near-term quantum hardware. The associated codebase is online and open-source (Baran et al., 28 Jul 2025).

7. Significance and Conceptual Distinction

By avoiding any post-processing to form expectation values, the expectation-free Hamiltonian learning paradigm provides a streamlined, statistically efficient route to quantum Hamiltonian identification. Its analytic and numerical validation of transient Heisenberg scaling without quantum resources such as entanglement or dynamical control distinguishes it from prior resource-intensive schemes. The ability to learn all Hamiltonian parameters in parallel without structural priors further underscores the generality and potential impact of this class of protocols for characterizing complex quantum systems (Baran et al., 28 Jul 2025).
