Holographic Random Circuit Sampling

Updated 14 November 2025
  • Holographic random circuit sampling is a protocol that uses mid-circuit measurements, register re-use, and circuit depth to exponentially expand the effective Hilbert space.
  • It employs a two-register system and 2-design circuits to achieve rapid anticoncentration, providing both theoretical rigor and experimental validation.
  • This approach enables scalable quantum advantage by allowing a fixed qubit device to sample from an exponentially large distribution.

The holographic random circuit sampling algorithm is a protocol that leverages repeated mid-circuit measurements, register re-use, and circuit depth to exponentially scale the effective dimension of a quantum sampling task far beyond the native physical qubit count. Recent work establishes its theoretical foundations, rigorous anticoncentration properties, and experimental viability for demonstrating quantum advantage on pre-fault-tolerant devices (Zhang et al., 7 Nov 2025).

1. Algorithmic Structure and Protocol

The algorithm partitions the quantum processor into two registers:

  • System register A of N_A physical qubits
  • Bath register B of N_B physical qubits

At each of t sequential steps:

  1. A random circuit U_k, typically an 8-layer hardware-efficient ansatz (approximate 2-design), acts jointly on A ∪ B.
  2. All qubits in B are measured in the computational basis, yielding outcome z_k; optionally, B is reset to |0⟩^{⊗N_B}.
  3. After the final (t-th) step, A is measured, producing outcome x(t).

The joint output is the “spatio-temporal” bitstring (z_1, ..., z_t; x(t)), living on N_eff = N_A + t N_B bits. While the physical device comprises only N_A + N_B qubits, repeated use and measurement of B at each step causes the effective Hilbert space dimension to scale as

D_eff = 2^{N_A + t N_B}.

This constitutes the “holographic expansion,” wherein circuit depth t plays the role of additional logical qubits.
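The protocol loop above can be sketched as a toy statevector simulation. This is an illustrative reconstruction for tiny registers, assuming Haar-random step unitaries in place of the paper's hardware-efficient ansatz; all sizes and names here are hypothetical, not from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
N_A, N_B, t = 2, 2, 3                  # toy sizes for illustration
d_A, d_B = 2 ** N_A, 2 ** N_B

def haar_unitary(d):
    """Haar-random unitary via QR of a complex Gaussian matrix."""
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

# Joint state on A (x) B; flat index = i_A * d_B + i_B.
psi = np.zeros(d_A * d_B, dtype=complex)
psi[0] = 1.0

record = []                            # bath outcomes z_1, ..., z_t
for _ in range(t):
    psi = haar_unitary(d_A * d_B) @ psi            # step circuit U_k on A and B
    p_B = (np.abs(psi.reshape(d_A, d_B)) ** 2).sum(axis=0)
    z_k = int(rng.choice(d_B, p=p_B))              # measure bath B
    psi_A = psi.reshape(d_A, d_B)[:, z_k]          # project onto outcome z_k
    psi_A /= np.linalg.norm(psi_A)
    record.append(z_k)
    reset = np.zeros((d_A, d_B), dtype=complex)    # reset B to |0...0>
    reset[:, 0] = psi_A
    psi = reset.reshape(-1)

p_A = (np.abs(psi.reshape(d_A, d_B)) ** 2).sum(axis=1)
x_final = int(rng.choice(d_A, p=p_A))              # final measurement of A
print(record, x_final)                 # spatio-temporal bitstring (z_1..z_t; x)
```

Even in this toy run the output lives on N_A + t N_B = 8 bits, while the simulated register holds only 4 qubits at any moment.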

2. Theoretical Underpinnings: Collision Probability and Anticoncentration

Let Z = Σ_x p(x)^2 denote the collision probability of the outcome distribution p(·). Anticoncentration—essential for quantum advantage arguments—corresponds to Z ≈ 2^{-#bits}.

The collision probability under ensemble averaging over the step circuits U_k (2-designs), after t rounds, is rigorously computed as

Z_HRCS(t) = 2 (d_A + 1)^{t-1} / (d_A d_B + 1)^t

where d_A = 2^{N_A}, d_B = 2^{N_B}. Asymptotically (d_A ≫ 1),

Z_HRCS(t) ≃ Z_Haar(N_eff) · exp[ (t(1 - 1/d_B) + d_B^{-t} - 1) / d_A + O(1/d_A^2) ],

with Z_Haar(N) = 2/(2^N + 1).
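As a numerical sanity check, the collision probability can be evaluated against the Haar value and the leading-order correction. This sketch assumes the closed form Z_HRCS(t) = 2(d_A + 1)^{t-1}/(d_A d_B + 1)^t and uses the paper's N_A = N_B = 10, t = 19:

```python
import math

def z_hrcs(N_A, N_B, t):
    """Ensemble-averaged collision probability after t steps."""
    d_A, d_B = 2 ** N_A, 2 ** N_B
    return 2 * (d_A + 1) ** (t - 1) / (d_A * d_B + 1) ** t

def z_haar(N):
    """Collision probability of a Haar-random state on N qubits."""
    return 2 / (2 ** N + 1)

N_A, N_B, t = 10, 10, 19
N_eff = N_A + t * N_B                       # 200 effective bits
d_A, d_B = 2 ** N_A, 2 ** N_B

ratio = z_hrcs(N_A, N_B, t) / z_haar(N_eff)
# Leading-order prediction from the asymptotic expansion
pred = math.exp((t * (1 - 1 / d_B) + d_B ** (-t) - 1) / d_A)
print(N_eff, ratio, pred)
```

The exact-to-Haar ratio and the exponential prediction agree to several digits, and both stay within about 2% of unity at this depth, illustrating the anticoncentration claim.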

This demonstrates that even for moderate circuit depths t ≲ O(2^{N_A}), the output distribution closely approximates that of Haar-random circuits on N_eff qubits. This implies that the sampling task remains exponentially anticoncentrated with respect to N_eff.

3. Sampling Complexity and Scaling Law

By construction,

D_eff(t) = 2^{N_A + t N_B}

so log D_eff scales linearly with both register size and circuit depth. In the regime t ~ O(2^{N_A}), the effective sampling complexity grows exponentially in N_A t.

This scaling law allows physical devices with a fixed qubit count to reach far beyond previous hardware limits. For instance, with N_A = N_B = 10 and t = 19, a device with only 20 physical qubits samples from a distribution over an effectively 200-qubit Hilbert space.

4. Cross-Entropy Benchmarking and Noise Modeling

Fidelity between the experimental sampler P̃ and the ideal Haar-random distribution P is quantified by linear cross-entropy benchmarking (XEB):

F_XEB = 2^{N_eff} ⟨P(z_1 ... z_t, x)⟩_exp - 1

For ideal Porter–Thomas output over N_eff qubits, F_XEB → 1; for uniformly random output, F_XEB ≈ 0.

Under channel noise modeled as local depolarizing maps per step, the XEB decays approximately as

F_XEB(t) ≃ γ^{2t} [1 + (1 - γ) t / (γ d_B)] + γ / ((1 - γ) d_A)

where γ is the depolarizing parameter. This formula captures both the per-step decay and the partial plateau due to repeated measurement and reset.
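The decay profile is simple to tabulate. This sketch assumes the model reads F_XEB(t) ≈ γ^{2t}[1 + (1 - γ)t/(γ d_B)] + γ/((1 - γ)d_A), with γ = 0.98 chosen purely for illustration:

```python
def f_xeb_model(gamma, t, N_A, N_B):
    """Depolarizing-noise XEB model: geometric per-step decay plus a plateau."""
    d_A, d_B = 2 ** N_A, 2 ** N_B
    decay = gamma ** (2 * t) * (1 + (1 - gamma) * t / (gamma * d_B))
    plateau = gamma / ((1 - gamma) * d_A)
    return decay + plateau

gamma, N_A, N_B = 0.98, 10, 10            # assumed values, not the paper's
xeb = {t: f_xeb_model(gamma, t, N_A, N_B) for t in (1, 10, 19)}
print(xeb)                                # monotone decay toward the plateau
```

The γ^{2t} factor dominates at these parameters, so the XEB falls roughly geometrically per step before flattening at the plateau term.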

5. Experimental Realization and Empirical Results

On IBMQ Torino (27-qubit device), the protocol was implemented with

  • Step circuit: 8-layer 1D hardware-efficient ansatz (single-qubit rotations + CZ gates in a brick-wall pattern).
  • Mid-circuit measurement and optional reset of B at each step.
  • Sampling: 10^6 shots per task, averaged over 10 random circuit instances per t.

Key benchmarks:

  • For N_A = N_B = 5, t = 16 (N_eff = 85): F_XEB ≈ 0.044, a 3-fold improvement over previous 83-qubit RCS.
  • For N_A = N_B = 10, two-patch protocol, t = 19 (N_eff = 200): F_XEB = 0.0593.

This constitutes experimental sampling from a 2^200-dimensional space using only 20 physical qubits.

Each HRCS instance involved per-step resource counts of roughly 300 SX, 200 RZ, and 60 CZ gates. For t = 19 steps, the total gate count exceeds 20,000.

6. Rigorous Bounds, Limitations, and Open Questions

All collision-probability and k-th-moment results derive from sequential Haar-twirling identities on a “doubled” Hilbert space, assuming the step circuits implement at least approximate 2-designs. The total variation distance to true Haar sampling over N_eff qubits is O(exp[-(t - 1)/(2 d_A)]).
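The quoted bound can be traced numerically (ignoring the hidden constant). Note that it only decays appreciably once t becomes comparable to 2^{N_A}, consistent with the depth regime t ≲ O(2^{N_A}) discussed in Section 2. Illustrative values for N_A = 5:

```python
import math

def tv_bound(N_A, t):
    """Decay profile of the stated bound exp[-(t - 1)/(2 d_A)] (constant ignored)."""
    d_A = 2 ** N_A
    return math.exp(-(t - 1) / (2 * d_A))

# d_A = 32, so the exponent reaches 1 only around t = 2 d_A + 1 = 65.
vals = {t: tv_bound(5, t) for t in (2, 16, 65, 321)}
print(vals)
```

At t = 2 the bound is still near 1 (≈ 0.98), while t = 321 drives it down to roughly e^{-5} ≈ 0.007, showing how deep the protocol must run before the bound itself becomes tight.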

Algorithmic and runtime limitations include:

  • 2-design assumption: Hardware-efficient ansätze approximate but do not formally guarantee 2-design behavior.
  • Complexity-theoretic hardness: While hardness evidence parallels random circuit sampling (RCS), full #P-hardness or rigorous average-case complexity results for HRCS remain open.
  • Error accumulation: No error correction is employed; resilience derives only from circuit anticoncentration and mid-circuit measurements. Larger t eventually accumulates noise beyond error-mitigated or statistical-averaging capabilities.
  • Reset fidelity and mid-circuit measurement quality are critical to XEB performance.
  • Scalability to larger t would necessitate QRAM-style error correction.

7. Relation to Other Holographic Protocols

HRCS shares conceptual foundations with teleportation-inspired algorithms for classical simulation of low-depth circuits (Chen et al., 2019) and measurement-driven state-generation paradigms (Zhang et al., 2024). All exploit a trade-off between spatial quantum resources (N) and temporal or circuit-depth resources (t or sequential rounds T), often referred to as a holographic space-time tradeoff.

Unlike classical holographic simulation, which allows memory-efficient contraction for wide, low-depth circuits, HRCS achieves quantum advantage by physically sampling exponentially long joint bitstrings (N_eff bits) via repeated circuit application and measurement.

In holographic deep thermalization (Zhang et al., 2024), similar sequential measure–reset protocols enable Haar-random state generation with only O(1) ancilla qubits, with rigorous decoupling guarantees and empirical frame-potential and XEB benchmarks.

8. Implications and Significance

The HRCS algorithm demonstrates that circuit depth, when exploited in the presence of mid-circuit measurement and register re-use, functions holographically as additional qubits. This enables an exponential scaling of sampling complexity relative to physical qubit resources. Verified both by exact theoretical formulas for collision probability and empirically via cross-entropy benchmarking, HRCS establishes a new route to scalable quantum advantage on near-term devices with fixed qubit count.

A plausible implication is the possibility of extending quantum supremacy demonstrations to much larger effective Hilbert spaces without hardware scaling, subject to the caveats of noise management and formal hardness proofs. The protocol synthesizes space-time resource trade-offs into a practical sampling benchmark, expanding the frontier for experimental quantum advantage.
