Holographic Random Circuit Sampling
- Holographic random circuit sampling is a protocol that uses mid-circuit measurements, register re-use, and circuit depth to exponentially expand the effective Hilbert space.
- It employs a two-register system and 2-design circuits to achieve rapid anticoncentration, providing both theoretical rigor and experimental validation.
- This approach enables scalable quantum advantage by allowing a fixed qubit device to sample from an exponentially large distribution.
The holographic random circuit sampling algorithm is a protocol that leverages repeated mid-circuit measurements, register re-use, and circuit depth to exponentially scale the effective dimension of a quantum sampling task far beyond the native physical qubit count. Recent work establishes its theoretical foundations, rigorous anticoncentration properties, and experimental viability for demonstrating quantum advantage on pre-fault-tolerant devices (Zhang et al., 7 Nov 2025).
1. Algorithmic Structure and Protocol
The algorithm partitions the quantum processor into two registers:
- A system register $S$ of $n_S$ physical qubits
- A bath register $B$ of $n_B$ physical qubits
At each of $T$ sequential steps:
- A random circuit $U_t$, typically an 8-layer hardware-efficient ansatz (approximate 2-design), acts jointly on $S$ and $B$.
- All qubits in $B$ are measured in the computational basis, yielding outcome $m_t$; optionally, $B$ is reset to $|0\rangle^{\otimes n_B}$.
- After the final ($T$-th) step, $S$ is measured, producing outcome $m_{T+1}$.
The joint output is the “spatio-temporal” bitstring $x = (m_1, \dots, m_T, m_{T+1})$, living on $N = n_S + T\,n_B$ bits. While the physical device comprises only $n_S + n_B$ qubits, repeated use and measurement of $B$ at each step causes the effective Hilbert space dimension to scale as

$$d_{\mathrm{eff}} = 2^{N} = 2^{\,n_S + T\,n_B}.$$

This constitutes the “holographic expansion,” wherein circuit depth functions as additional logical qubits.
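As a concrete illustration, the protocol loop above can be sketched as a small statevector simulation. The register sizes, the use of dense Haar-random step unitaries in place of the 8-layer hardware ansatz, and the function names below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim, rng):
    """Haar-random unitary via QR decomposition of a Ginibre matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def hrcs_sample(n_s, n_b, T, rng):
    d_s, d_b = 2 ** n_s, 2 ** n_b
    # Joint state of system (S) and bath (B), shape (d_s, d_b).
    psi = np.zeros((d_s, d_b), dtype=complex)
    psi[0, 0] = 1.0
    bits = []
    for _ in range(T):
        U = haar_unitary(d_s * d_b, rng)           # step circuit on S+B
        psi = (U @ psi.reshape(-1)).reshape(d_s, d_b)
        probs = np.sum(np.abs(psi) ** 2, axis=0)   # marginal on B
        m = rng.choice(d_b, p=probs / probs.sum())
        bits.append(m)                             # record bath outcome m_t
        psi_s = psi[:, m]                          # post-measurement S state
        psi = np.zeros((d_s, d_b), dtype=complex)
        psi[:, 0] = psi_s / np.linalg.norm(psi_s)  # reset B to |0...0>
    p_s = np.abs(psi[:, 0]) ** 2                   # final measurement of S
    bits.append(rng.choice(d_s, p=p_s / p_s.sum()))
    return bits

outcomes = hrcs_sample(n_s=2, n_b=2, T=5, rng=rng)
print(len(outcomes))  # 6 = T + 1 measurement records
```

Each call returns $T$ bath outcomes plus one final system outcome, i.e. one spatio-temporal bitstring of $N = n_S + T\,n_B$ bits (each integer outcome encodes one register's bits).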
2. Theoretical Underpinnings: Collision Probability and Anticoncentration
Let $Z = \sum_x p(x)^2$ denote the collision probability of the outcome distribution $p$ over spatio-temporal bitstrings $x$. Anticoncentration, essential for quantum advantage arguments, corresponds to $Z = O(2^{-N})$.
The collision probability under ensemble averaging over the step circuits (2-designs), after $T$ rounds, is rigorously computed in closed form. Writing $d_S = 2^{n_S}$ and $d_B = 2^{n_B}$, the averaged collision probability converges exponentially fast in $T$ to the Porter–Thomas value,

$$\overline{Z}_T = \frac{2}{2^N + 1}\left[1 + O(\lambda^T)\right],$$

with a contraction rate $\lambda < 1$ set by the register dimensions $d_S$ and $d_B$.
This demonstrates that even for moderate circuit depths $T$, the output distribution closely approximates that of Haar-random circuits on $N$ qubits, so the sampling task remains exponentially anticoncentrated with respect to $N$.
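Because $Z = \sum_x p(x)^2 = \mathbb{E}_{x \sim p}[p(x)]$, the ensemble-averaged collision probability can be estimated by averaging the probabilities of sampled trajectories. A minimal numpy sketch, again substituting Haar-random step unitaries (an assumption) for the 2-design ansatz:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(dim, rng):
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def trajectory_probability(n_s, n_b, T, rng):
    """Sample one spatio-temporal bitstring and return its probability p(x).
    Averaging p(x) over fresh circuits estimates the mean collision probability."""
    d_s, d_b = 2 ** n_s, 2 ** n_b
    psi = np.zeros((d_s, d_b), dtype=complex); psi[0, 0] = 1.0
    p_x = 1.0
    for _ in range(T):
        U = haar_unitary(d_s * d_b, rng)
        psi = (U @ psi.reshape(-1)).reshape(d_s, d_b)
        probs = np.sum(np.abs(psi) ** 2, axis=0)
        m = rng.choice(d_b, p=probs / probs.sum())
        p_x *= probs[m]                               # branch probability
        psi_s = psi[:, m] / np.sqrt(probs[m])         # collapse and renormalize
        psi = np.zeros((d_s, d_b), dtype=complex); psi[:, 0] = psi_s
    p_final = np.abs(psi[:, 0]) ** 2
    m = rng.choice(d_s, p=p_final / p_final.sum())
    return p_x * p_final[m]

n_s, n_b, T = 2, 1, 4
N = n_s + T * n_b                     # effective qubit number
Z_est = np.mean([trajectory_probability(n_s, n_b, T, rng) for _ in range(2000)])
print(Z_est, 2 / (2 ** N + 1))        # estimate vs. Porter-Thomas value
```

For these small parameters the estimate should sit near the Porter–Thomas value $2/(2^N + 1)$, illustrating the anticoncentration claim.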
3. Sampling Complexity and Scaling Law
By construction,

$$\log_2 d_{\mathrm{eff}} = N = n_S + T\,n_B,$$

so the log-dimension scales linearly with both register size and circuit depth. In the deep-circuit regime $T \gg 1$, the effective sampling complexity grows exponentially in $T$.
This scaling law lets physical devices with a fixed qubit count reach effective problem sizes far beyond previous hardware limits. For instance, with a suitable register split and sufficient depth $T$, a device with only 20 physical qubits samples from a 200-qubit distribution in Hilbert space.
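For orientation, the 20-physical-qubit / 200-effective-qubit figure can be reproduced for one hypothetical register split; the values $n_S = n_B = 10$ and $T = 19$ below are illustrative assumptions, not parameters reported in the paper:

```python
# Hypothetical split of a 20-qubit device; values are illustrative only.
n_s, n_b = 10, 10        # system and bath register sizes (assumed)
T = 19                   # number of sequential steps (assumed)
N_eff = n_s + T * n_b    # effective qubit number of the sampled distribution
print(N_eff)             # 200, i.e. Hilbert-space dimension 2**200
```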
4. Cross-Entropy Benchmarking and Noise Modeling
Fidelity between the experimental sampler and the ideal Haar-random distribution is quantified by linear cross-entropy benchmarking (XEB):

$$F_{\mathrm{XEB}} = 2^{N}\,\big\langle p_{\mathrm{ideal}}(x)\big\rangle_{x \sim p_{\mathrm{exp}}} - 1.$$

For ideal Porter–Thomas output over $N$ qubits, $F_{\mathrm{XEB}} \approx 1$; for uniformly random output, $F_{\mathrm{XEB}} = 0$.
Under channel noise modeled as local depolarizing maps applied at each step, the XEB decays approximately as

$$F_{\mathrm{XEB}}(T) \approx (1 - \epsilon)^{c\,T},$$

where $\epsilon$ is the depolarizing parameter and $c$ counts the noisy locations per step. This form captures the per-step decay; the full noise model also predicts a partial plateau due to repeated measurement and reset.
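The XEB estimator itself is easy to check numerically. This sketch scores samples drawn from a synthetic Porter–Thomas-like distribution and from the uniform distribution; the helper name `linear_xeb` and the sample sizes are assumptions for illustration:

```python
import numpy as np

def linear_xeb(ideal_probs, samples, N):
    """Linear cross-entropy benchmark: F = 2^N * <p_ideal(x)>_samples - 1."""
    return 2 ** N * np.mean(ideal_probs[samples]) - 1

rng = np.random.default_rng(2)
N = 10
d = 2 ** N
p = rng.exponential(scale=1.0 / d, size=d)      # approx Porter-Thomas weights
p /= p.sum()

ideal_samples = rng.choice(d, size=20000, p=p)  # sampler matching the ideal p
uniform_samples = rng.integers(d, size=20000)   # fully mixed / uniform sampler

f_ideal = linear_xeb(p, ideal_samples, N)
f_uniform = linear_xeb(p, uniform_samples, N)
print(f_ideal, f_uniform)   # approximately 1 and approximately 0
```

A noisy sampler interpolates between these two limits, which is what the depolarizing decay formula describes.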
5. Experimental Realization and Empirical Results
On the IBM Torino processor, the protocol was implemented with:
- Step circuit: 8-layer 1D hardware-efficient ansatz (single-qubit rotations + CZ gates in a brick-wall pattern).
- Each step utilized mid-circuit measurement and optional reset of $B$.
- Sampling: a fixed number of shots per task, averaged over 10 random circuit instances per configuration.
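The step circuit's layer structure can be sketched as a dense-matrix construction for small qubit numbers. The rotation parameterization and layer ordering below are plausible assumptions rather than the exact calibrated gate set used on hardware:

```python
import numpy as np

rng = np.random.default_rng(3)

def rot(theta, phi, lam):
    """General single-qubit rotation U(theta, phi, lam)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -np.exp(1j * lam) * s],
                     [np.exp(1j * phi) * s, np.exp(1j * (phi + lam)) * c]])

CZ = np.diag([1, 1, 1, -1]).astype(complex)

def layer_1q(n, rng):
    """Tensor product of n random single-qubit rotations."""
    U = np.array([[1.0 + 0j]])
    for _ in range(n):
        U = np.kron(U, rot(*rng.uniform(0, 2 * np.pi, size=3)))
    return U

def layer_cz(n, offset):
    """Brick-wall CZ layer starting at qubit `offset` (0 or 1)."""
    U = np.eye(1, dtype=complex)
    q = 0
    if offset == 1:
        U = np.kron(U, np.eye(2)); q = 1
    while q + 1 < n:
        U = np.kron(U, CZ); q += 2
    if q < n:
        U = np.kron(U, np.eye(2))
    return U

def step_circuit(n, layers, rng):
    """Hardware-efficient ansatz: alternating 1q-rotation and CZ brick-wall layers."""
    U = np.eye(2 ** n, dtype=complex)
    for l in range(layers):
        U = layer_cz(n, l % 2) @ layer_1q(n, rng) @ U
    return U

U = step_circuit(n=4, layers=8, rng=rng)
print(np.allclose(U @ U.conj().T, np.eye(16)))  # unitarity check -> True
```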
Key benchmarks:
- The largest single-register configuration achieved a statistically significant XEB, a roughly 3-fold improvement over previous 83-qubit RCS demonstrations.
- A two-patch variant of the protocol extended the effective qubit number further at larger depth $T$.

This constitutes experimental sampling from a $2^N$-dimensional space, with $N$ far exceeding the 20 physical qubits used.
Each HRCS instance involved per-step resource counts of 300 SX, 200 RZ, and 60 CZ gates. Summed over all $T$ steps, the total gate count in the deepest runs exceeded 20,000 two-qubit gates.
6. Rigorous Bounds, Limitations, and Open Questions
All collision probability and $k$-th-moment results derive from sequential Haar-twirling identities on a "doubled" Hilbert space, assuming the step circuits implement at least approximate 2-designs. The total variation distance to true Haar sampling over $N$ qubits is correspondingly controlled, vanishing as the circuit depth $T$ grows.
Algorithmic and runtime limitations include:
- 2-design assumption: Hardware-efficient ansätze approximate but do not formally guarantee 2-design behavior.
- Complexity-theoretic hardness: While hardness evidence parallels random circuit sampling (RCS), full #P-hardness or rigorous average-case complexity results for HRCS remain open.
- Error accumulation: No error correction is employed; resilience derives only from circuit anticoncentration and mid-circuit measurements. Larger depths $T$ eventually accumulate noise beyond the reach of error mitigation or statistical averaging.
- Reset fidelity and mid-circuit measurement quality are critical to XEB performance.
- Scalability to substantially larger effective qubit numbers $N$ would necessitate QRAM-style error correction.
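The 2-design caveat above can be probed numerically through the second frame potential $F^{(2)} = \mathbb{E}_{U,V}\,|\mathrm{Tr}(U V^\dagger)|^4$, which equals 2 for an exact 2-design (and for the Haar measure) and exceeds 2 otherwise. A sketch using Haar-random unitaries as the reference ensemble:

```python
import numpy as np

rng = np.random.default_rng(4)

def haar_unitary(dim, rng):
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def frame_potential_2(unitaries):
    """Second frame potential: average of |Tr(U V^dagger)|^4 over distinct pairs.
    Equals 2 for an exact 2-design (and Haar); larger values mean less randomness."""
    vals = []
    for i, U in enumerate(unitaries):
        for V in unitaries[:i]:
            vals.append(np.abs(np.trace(U @ V.conj().T)) ** 4)
    return float(np.mean(vals))

us = [haar_unitary(8, rng) for _ in range(200)]
fp = frame_potential_2(us)
print(fp)   # should be close to 2 for Haar-random unitaries
```

Substituting samples of a candidate hardware-efficient ansatz for `haar_unitary` would quantify how far that ansatz deviates from 2-design behavior.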
7. Comparison to Related "Holographic" Algorithms
HRCS shares conceptual foundations with teleportation-inspired algorithms for classical simulation of low-depth circuits (Chen et al., 2019) and measurement-driven state generation paradigms (Zhang et al., 2024). All exploit a trade-off between spatial quantum resources (physical qubit count) and temporal or circuit-depth resources (sequential rounds $T$), often referred to as a holographic space-time trade-off.
Unlike classical holographic simulation, which allows memory-efficient contraction for wide, low-depth circuits, HRCS achieves quantum advantage by physically sampling exponentially long joint bitstrings via repeated circuit application and measurement.
In holographic deep thermalization (Zhang et al., 2024), similar sequential measure-and-reset protocols enable Haar-random state generation with only a small ancilla register, with rigorous decoupling guarantees and empirical frame-potential and XEB benchmarks.
8. Implications and Significance
The HRCS algorithm demonstrates that circuit depth, when exploited in the presence of mid-circuit measurement and register re-use, functions holographically as additional qubits. This enables an exponential scaling of sampling complexity relative to physical qubit resources. Verified both by exact theoretical formulas for collision probability and empirically via cross-entropy benchmarking, HRCS establishes a new route to scalable quantum advantage on near-term devices with fixed qubit count.
A plausible implication is the possibility of extending quantum supremacy demonstrations to much larger effective Hilbert spaces without hardware scaling, subject to the caveats of noise management and formal hardness proofs. The protocol synthesizes space-time resource trade-offs into a practical sampling benchmark, expanding the frontier for experimental quantum advantage.