Quantum-Classical Separation in Bounded-Resource Tasks Arising from Measurement Contextuality
Abstract: The prevailing view is that quantum phenomena can be harnessed to tackle certain problems beyond the reach of classical approaches. Quantifying this capability as a quantum-classical separation and demonstrating it on current quantum processors has remained elusive. Using a superconducting qubit processor, we show that quantum contextuality enables certain tasks to be performed with success probabilities beyond classical limits. With a few qubits, we illustrate quantum contextuality with the magic square game, as well as quantify it through a Kochen--Specker--Bell inequality violation. To examine many-body contextuality, we implement the N-player GHZ game and separately solve a 2D hidden linear function problem, exceeding classical success rate in both. Our work proposes novel ways to benchmark quantum processors using contextuality-based algorithms.
Explain it Like I'm 14
Quantum-Classical Separation in Bounded-Resource Tasks Arising from Measurement Contextuality — explained simply
1. What is this paper about?
This paper shows that today’s quantum computers can do certain tasks better than any classical (ordinary) computer when both are limited in specific ways, like how many steps they’re allowed to take. The key idea behind the advantage is called measurement contextuality — a very “quantum” behavior where the outcome of a measurement can depend on which other measurements you choose to do at the same time.
In plain words: the team uses a special quantum computer (with superconducting qubits) to play and win certain logic games and solve puzzles more reliably than classical methods can, and they prove this advantage using experiments.
2. What questions did the researchers ask?
They focused on five simple questions:
- Can we directly see, on real hardware, that quantum strategies beat all classical ones under fair limits (like no communication between players or a fixed number of steps)?
- Is contextuality (not just entanglement) the key resource giving quantum computers their power?
- Can we measure “how quantum” our device is, using a number that should be impossible for classical physics to reach?
- Does this advantage still show up in larger, many-particle systems (not just a few qubits)?
- Can we solve a specific kind of puzzle (called the hidden linear function problem) with very shallow quantum circuits, while classical circuits would need to be deeper?
3. How did they study it? (Methods with simple analogies)
They ran four kinds of experiments on a 2D grid of superconducting qubits (tiny quantum bits on a chip that’s cooled near absolute zero). Think of each experiment as a game or puzzle with strict rules that limit what strategies are allowed.
- Magic Square Game (2 players):
- Imagine two players who are separated so they can't talk. One is assigned a row and the other a column of a 3×3 grid; each fills their three cells with +1 or −1 so that the row's entries multiply to +1 and the column's to −1, and they win if they agree on the value of the shared cell. Classically, it's impossible to always win; the best you can do is win 8 out of 9 question pairs on average. With entangled qubits and a quantum measurement plan, quantum players can win every time in theory.
- This game uses contextuality: the “right” value for a box depends on which other boxes you measured — like an answer that changes depending on which other questions you ask together.
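To make the 8/9 bound concrete, here is a small brute-force check (a sketch, not taken from the paper) assuming the standard Mermin–Peres rules just described: it enumerates every deterministic classical strategy pair and confirms that none wins more than 8 of the 9 questions.

```python
from itertools import product

# Valid answer triples: Alice's row entries multiply to +1,
# Bob's column entries multiply to -1.
alice_rows = [t for t in product([1, -1], repeat=3) if t[0] * t[1] * t[2] == 1]
bob_cols = [t for t in product([1, -1], repeat=3) if t[0] * t[1] * t[2] == -1]

def best_classical_win_count():
    """Maximum number of the 9 (row, column) questions a deterministic
    classical strategy pair can win (they win when the shared cell agrees)."""
    best = 0
    # A strategy assigns one valid triple to each of the 3 possible questions.
    for a in product(alice_rows, repeat=3):    # a[r][c]: Alice's entry for row r
        for b in product(bob_cols, repeat=3):  # b[c][r]: Bob's entry for column c
            wins = sum(a[r][c] == b[c][r] for r in range(3) for c in range(3))
            best = max(best, wins)
    return best
```

Running `best_classical_win_count()` returns 8, i.e., the classical average win rate tops out at 8/9 when questions are asked uniformly at random.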
- Kochen–Specker–Bell (KSB) contextuality test:
- This is like a scoreboard that totals up certain measurement results. Classical physics can’t push the score above a limit (4). Quantum mechanics predicts up to 6. Measuring above 4 proves contextuality is present in the device.
- They used a method called QND (quantum non-demolition) measurement: think of it as reading a value in a gentle way that doesn’t “break” the system for the next read.
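Assuming the KSB score takes the standard Peres–Mermin form (a sum of six context correlators, five added and one subtracted — an assumption, since the paper's exact expression is not reproduced here), both the classical cap of 4 and the state-independent quantum value of 6 can be checked numerically:

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

# Peres-Mermin square of two-qubit observables (each row/column commutes).
square = [[kron(X, I2), kron(I2, X), kron(X, X)],
          [kron(I2, Y), kron(Y, I2), kron(Y, Y)],
          [kron(X, Y),  kron(Y, X),  kron(Z, Z)]]

rows = [square[r][0] @ square[r][1] @ square[r][2] for r in range(3)]
cols = [square[0][c] @ square[1][c] @ square[2][c] for c in range(3)]
# Row products are +I; column products are +I, +I, -I, so the score operator
# <R1>+<R2>+<R3>+<C1>+<C2>-<C3> equals 6*I: every quantum state scores 6.
score_op = rows[0] + rows[1] + rows[2] + cols[0] + cols[1] - cols[2]

def classical_max():
    """Best score when every cell is preassigned a fixed value in {-1,+1}."""
    best = -9
    for v in product([1, -1], repeat=9):
        m = [v[0:3], v[3:6], v[6:9]]
        score = (sum(m[r][0] * m[r][1] * m[r][2] for r in range(3))
                 + m[0][0] * m[1][0] * m[2][0]
                 + m[0][1] * m[1][1] * m[2][1]
                 - m[0][2] * m[1][2] * m[2][2])
        best = max(best, score)
    return best
```

`classical_max()` returns 4: no assignment of pre-set ±1 values can satisfy all six product constraints, which is exactly the contextuality the scoreboard detects.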
- GHZ parity game (many players):
- Now many players each get a yes/no question (a bit), promised to have an even total. Without quantum resources, the best strategy is only slightly better than guessing. With a special entangled state (a GHZ state), the team can win perfectly in theory by choosing measurements based on their question bit.
- This tests quantum advantage as the number of players grows.
- Hidden Linear Function (HLF) problem (shallow-circuit computing):
- This is a puzzle built from a grid of bits and connections. A shallow (few-layer) quantum circuit can find the hidden answer exactly, but classical circuits need more layers as the problem grows.
- Hardware view: They apply simple layers of gates (like “flip,” “phase,” and “link-two-qubits” operations), then measure all qubits and check whether the output bitstrings are correct. They repeat many times to estimate the success rate.
Technical terms in everyday language:
- Qubit: a quantum version of a bit that can be 0, 1, or a mix (superposition).
- Entanglement: a strong link between qubits so measuring one tells you about the others.
- Contextuality: outcomes can depend on what else you choose to measure at the same time, not just on some pre-set hidden value.
- Shallow circuit: a program with only a few steps (layers), like finishing a recipe in 4 moves instead of 40.
4. What did they find, and why is it important?
Here are the main results and why they matter:
- Magic Square Game:
- Result: They achieved about a 98.3% average win rate, beating the best possible classical average of 8/9 ≈ 88.9%.
- Why it matters: This is an experimental, on-chip demonstration that quantum contextuality helps win a game beyond classical limits.
- KSB contextuality inequality:
- Result: They measured a contextuality score of about 5.62, clearly above the classical cap of 4, and close to the quantum maximum of 6.
- Why it matters: It directly confirms strong contextuality on modern hardware and quantifies “how quantum” the device behaves.
- GHZ parity game with many players:
- Result: They played with up to 71 qubits (players) and consistently beat the best classical winning probability, although the margin shrinks as more qubits are added (because more qubits mean more chances for tiny errors).
- Why it matters: This shows many-body (large-scale) contextuality at work — not just in tiny systems.
- Hidden Linear Function (HLF) with shallow circuits:
- Result: On up to 105 qubits, their shallow quantum circuit solved the problem with an effective depth of around 4–6 layers. By comparison, the classical circuit depth needed should grow with problem size (they plot a reasonable classical lower bound that sits above their quantum results for these sizes).
- Why it matters: It’s strong, practical evidence that shallow quantum circuits can outperform classical ones on structured tasks — a concrete step toward reliable quantum advantage.
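As a sanity check on the shallow-circuit structure described above, this statevector sketch (assuming the standard Bravyi–Gosset–König layout: a Hadamard layer, then diagonal CZ gates on the edges of A and S gates where b = 1, then a final Hadamard layer) simulates a tiny instance; every outcome with nonzero probability is a valid HLF answer. It is exponential in n and purely illustrative.

```python
import numpy as np

def hlf_circuit_outcomes(A, b, n):
    """Return the set of bitstrings the constant-depth HLF circuit can output."""
    dim = 2 ** n
    psi = np.full(dim, dim ** -0.5, dtype=complex)  # H layer on |0...0>
    for idx in range(dim):
        x = [(idx >> (n - 1 - k)) & 1 for k in range(n)]  # bits of idx
        phase = sum(b[i] * x[i] for i in range(n))         # S^{b_i}: i^{b_i x_i}
        phase += 2 * sum(A[i][j] * x[i] * x[j]             # CZ: (-1)^{x_i x_j}
                         for i in range(n) for j in range(i + 1, n))
        psi[idx] *= 1j ** phase
    # Final Hadamard layer via the n-fold Walsh-Hadamard transform.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    out = Hn @ psi
    return {tuple((idx >> (n - 1 - k)) & 1 for k in range(n))
            for idx in range(dim) if abs(out[idx]) > 1e-9}
```

For A with the single edge (0, 1) and b = (1, 1), the circuit's only possible outcomes are (0,0) and (1,1) — exactly the solutions of that HLF instance.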
Big picture: Across four tasks, the quantum processor consistently exceeded classical limits or benchmarks, especially when the rules restrict resources (no communication, fixed depth). The experiments also let them pinpoint and reduce different error sources in the hardware.
5. What could this mean for the future?
- New benchmarks for quantum hardware: Contextuality-based games and puzzles provide clear, easy-to-interpret tests that say, “This device is doing something no classical device can do under the same limits.”
- Better understanding of quantum power: The results support the idea that contextuality (not just entanglement) is a key resource for quantum computation.
- Scaling up: Showing advantage with up to 71–105 qubits is encouraging for larger, more useful quantum systems.
- Practical algorithms: Shallow quantum circuits that already beat classical depth bounds point the way to near-term applications, especially for problems that can be laid out on 2D grids.
In short, this work demonstrates, with real experiments, that quantum contextuality can boost performance on well-defined, limited-resource tasks — and it offers practical tools to measure and improve the “quantumness” of future processors.
Knowledge Gaps
Unresolved gaps and open questions
Below is a concise list of the main knowledge gaps, limitations, and open questions left unresolved by the paper. Each point is formulated to be actionable for future research.
- Loophole closure in contextuality tests: The experiments do not close key loopholes (compatibility, measurement independence, detection, locality). Design and execute contextuality and pseudo-telepathy tests with space-like separation, strict timing constraints, and verified measurement compatibility to enable loophole-free claims.
- Quantitative link between contextuality and computational advantage: The paper does not establish a resource-theoretic, quantitative relation between measured contextuality (e.g., the XKSB score) and task performance (magic square, GHZ parity, HLF). Develop contextuality monotones for many-body states/circuits and correlate them with success probabilities and scaling behavior.
- Compatibility and measurement disturbance: Sequential QND measurements via an ancilla can induce back-action and context disturbance. Rigorously characterize and bound measurement-induced disturbance and crosstalk; provide per-context “agreement” statistics and models linking disturbance to deviations in XKSB.
- State-independent many-body contextuality: GHZ parity games demonstrate advantage but do not certify contextuality via scalable, state-independent witnesses for large N. Construct and implement many-body, state-independent contextuality inequalities or tests that scale and are robust to realistic noise.
- Non-communication enforcement: All “players” are qubits on one chip without physical separation. Implement non-communication games with physically separated devices and enforce no-signaling via delays, shielding, and clock synchronization; quantify any residual information leakage.
- Noise thresholds and finite-size scaling: There is no systematic derivation of minimum fidelity/noise thresholds required to beat classical bounds (magic square, BKS, GHZ, HLF). Perform finite-size scaling and threshold analyses under measured error channels (including coherent, correlated, SPAM, leakage) to predict the crossover where quantum advantage disappears.
- Error budget completeness: Error models primarily assume depolarizing channels and symmetric readout; other dominant processes (coherent errors, leakage, correlated/readout crosstalk) are not fully incorporated. Build comprehensive, validated error models and assess their impact on contextuality witnesses and winning probabilities.
- Targeted error mitigation and correction: The work does not explore task-specific error mitigation (e.g., symmetry verification, stabilizer filtering, readout mitigation) nor lightweight error correction for shallow circuits. Evaluate mitigation strategies tailored to contextuality tasks and quantify gains in XKSB and game success.
- GHZ scaling limits: The GHZ advantage narrows with N without a predictive model or optimization strategy. Develop models to forecast the maximal N for robust advantage given device parameters; explore alternative GHZ growth patterns, dynamical decoupling, and entangler choices to extend scaling.
- Classical baselines for HLF at practical sizes: The comparison relies on a conjectured classical lower bound (depth growing logarithmically with problem size) without proof or empirical baselines for moderate sizes; fan-in, parallelism, connectivity, and pre/post-processing are not systematically accounted for. Derive tighter classical lower bounds under realistic constraints and implement optimized classical circuits to provide empirical baselines.
- Depth metric justification: The “effective number of layers” defined via the time-to-solution metric lacks theoretical grounding. Develop principled time-to-solution metrics that integrate circuit depth, error rates, sampling complexity, and verification costs; compare quantum vs. classical time-to-solution under matched assumptions.
- Instance structure and robustness in HLF: Only random sparse instances were studied. Systematically vary topology, sparsity, and disorder; identify instance families that maximize robustness to noise and advantage; relate to phase transitions (e.g., Nishimori) in constant-depth circuits.
- Contextuality benchmarking standardization: The proposal to use many-body contextuality as a benchmark is not yet standardized. Define benchmark suites (task sets, sizes, metrics, thresholds), quantify sensitivity to calibration/drift, and assess cross-platform reproducibility.
- Device-independent framing: The experiments are not device-independent; clarify claims to avoid overinterpretation. Where possible, design DI-compatible tests (e.g., loophole-free Bell) that connect to contextuality-inspired tasks.
- Resource accounting in “bounded-resource” tasks: The bounded-resource framing is qualitative. Precisely quantify resource usage (depth, connectivity, ancilla count, resets, measurement rounds) per task and tie these to the theoretical separations claimed.
- Correlating XKSB with task outcomes: XKSB was measured on a two-qubit subsystem and not connected to performance in the GHZ or HLF tasks. Measure contextuality on the same multiqubit states/circuits used for GHZ/HLF and test whether higher contextuality predicts improved winning probability or robustness.
- Alternative contextuality games and witnesses: Beyond magic square and GHZ parity, evaluate other pseudo-telepathy/multipartite contextuality structures (e.g., KCBS, Mermin star, Cabello-type constructions for larger systems) to probe different operator algebras and hardware pathways.
- Readout asymmetry and SPAM impacts: Asymmetric SPAM and correlated readout errors are not fully modeled. Quantify their contributions to losses in each task, deploy targeted readout calibration/mitigation, and report improvements.
- Verification and certification costs: For HLF, correctness is verified via classical simulation (Clifford). In realistic scenarios without oracle-like verification, define certification methods and their resource costs; analyze how verifier noise affects reported advantage.
- Cross-architecture generalization: Results are shown on specific superconducting devices. Replicate across platforms (ion traps, photonics, neutral atoms) and multi-chip networks to assess how architecture-dependent noise/connectivity affects contextuality-based advantage.
Practical Applications
Immediate Applications
Below are specific, deployable use cases that leverage the paper’s contextuality-driven findings and methods. Each item notes sectors, potential tools/workflows, and key assumptions or dependencies.
- Contextuality-based benchmarking for quantum processors
- Sectors: hardware, cloud quantum services, software
- What to do: Adopt the Mermin–Peres magic square game, Kochen–Specker–Bell (KSB) inequality scoring (XKSB), GHZ parity games, and the 2D Hidden Linear Function (HLF) circuits as standardized benchmarks to quantify quantum-classical separation under bounded resources.
- Tools/workflows: Cirq-based benchmark suite, Stim-based verification of bitstrings, dashboards reporting Pw (win probability), XKSB scores, and time-to-solution (effective depth).
- Assumptions/dependencies: Reliable QND readout (with ancilla reset), stable CZ/H/S gate calibrations, sufficient shots Ns to reduce statistical error, 2D grid connectivity, and error-budget modeling consistent with depolarizing/Pauli error approximations.
- Processor calibration and diagnostics using many-body contextual games
- Sectors: hardware (superconducting qubits), software (control/compilers)
- What to do: Use GHZ parity games and KSB sequences to reveal subtle coherent errors that may be missed by random circuits; break down loss contributions via an error budget (readout, single-qubit, two-qubit).
- Tools/workflows: Continuous QND measurement pipelines, automated error-budget analyzers, dynamic decoupling validation, gate compilation checks, per-mechanism noisy simulations.
- Assumptions/dependencies: Accurate per-qubit measurement error rates (e0/e1), gate-specific Pauli error rates (1Q RB, 2Q XEB), sufficient Ns for confidence intervals, compatible/incompatible context design to test commutation relations.
- Cloud QC quality assurance and procurement criteria
- Sectors: cloud quantum platforms, government/enterprise procurement
- What to do: Include contextuality thresholds (e.g., XKSB > 4; Pw above classical bounds in pseudo-telepathy and GHZ parity games) in SLAs and device certification. Require shallow-circuit HLF time-to-solution metrics for acceptance tests.
- Tools/workflows: Standardized test batteries run pre-deployment and periodically; reports of classical baseline comparisons; reproducibility checks across qubit subsets.
- Assumptions/dependencies: Transparent reporting of classical baselines, uniform random assignment of game contexts, consistent device behavior across time and qubit layouts.
- Compiler and workflow optimization for constant-depth circuits
- Sectors: software (compilers, SDKs), hardware control
- What to do: Exploit fixed-depth quantum circuits (HLF-style with local CZ/S/H layers) in pipelines to reduce latency, prioritize workloads where shallow circuits are known to excel, and use effective-depth (time-to-solution) as a scheduling metric.
- Tools/workflows: Connectivity-aware circuit synthesis in Cirq, shallow-depth scheduling policies, Stim-backed unit testing.
- Assumptions/dependencies: 2D nearest-neighbor topology, gate set supporting H/S/CZ, stable compilation and calibration routines.
- Education, training, and outreach using pseudo-telepathy games
- Sectors: education (higher ed, STEM outreach), media/communication
- What to do: Deploy small-qubit demonstrations of the magic square and GHZ games to teach non-classical correlations and measurement contextuality; build lab modules and interactive visualizations.
- Tools/workflows: Classroom-ready circuits, guided labs, visualization of commuting contexts and product constraints.
- Assumptions/dependencies: Access to small quantum devices or high-quality simulators; clear guidance on error sources to avoid confusion with classical strategies.
- Research methodology for probing contextuality in many-body settings
- Sectors: academia (quantum information, condensed matter)
- What to do: Use GHZ parity games and KSB sequences to study contextuality features in engineered many-body states on current devices; compare experimental XKSB against theory and prior state-of-the-art.
- Tools/workflows: Randomized context streams, compatible/incompatible context analysis, stabilizer-based fidelity estimates.
- Assumptions/dependencies: Ability to prepare target states reliably; shot counts sufficient to distinguish from classical probabilities; careful handling of context-ordering and commutation.
Long-Term Applications
These opportunities require further research, scaling, improved fidelity, or broader ecosystem development before deployment.
- Device-independent cryptography and certified randomness from contextuality
- Sectors: cybersecurity, telecommunications
- What could emerge: Protocols leveraging contextuality/nonlocal games to produce device-independent randomness and secure key distribution, without trusting internal device models.
- Tools/products: Contextuality-based DIQKD, randomness beacons certified via loophole-free tests.
- Assumptions/dependencies: Space-like separation, loophole closure (detection/locality), high-fidelity multi-party entanglement distribution across quantum networks.
- Verification of quantum phases and materials via nonlocal games
- Sectors: materials science, condensed matter physics
- What could emerge: Nonlocal game-based certification of contextuality in ground states of physical systems; phase diagnostics beyond entanglement witnesses.
- Tools/products: Game-derived witnesses integrated into quantum simulation platforms; automated certification workflows.
- Assumptions/dependencies: Ability to prepare and probe many-body ground states; mapping game observables to Hamiltonian stabilizers; scalable measurement with low noise.
- Practical shallow-circuit advantages for real-world computational tasks
- Sectors: optimization (logistics, energy), finance, healthcare analytics, ML
- What could emerge: Constant-depth quantum subroutines (HLF-like) embedded in hybrid pipelines for graph analytics, error-correcting code inference, feature extraction, or fast parity/constraint checks.
- Tools/products: Hybrid solvers that offload structured subproblems to shallow circuits; orchestration that exploits time-to-solution metrics.
- Assumptions/dependencies: Robust encodings from domain problems to 2D local circuits; error mitigation or QEC to ensure correctness at scale; proven classical lower bounds relevant at practical sizes (current provable separations often kick in at very large n and rely on conjectures).
- Standardization of contextuality-based benchmarks and certifications
- Sectors: standards bodies, policy/regulation
- What could emerge: ISO-like standards defining tests (XKSB thresholds, GHZ parity margins vs classical baselines, pseudo-telepathy Pw metrics) for claims of “quantum advantage” in bounded-resource regimes.
- Tools/products: Public benchmark suites, certification labels for devices/services.
- Assumptions/dependencies: Community consensus on test specifications, reproducibility across hardware types, transparent reporting of classical baselines and error models.
- Autonomous hardware autotuning guided by contextuality metrics
- Sectors: hardware, software (control systems)
- What could emerge: Closed-loop calibration systems that optimize gates/readout to maximize contextuality scores (e.g., push XKSB toward 6, increase Pw margin) and minimize coherent error signatures detected by GHZ/KSB protocols.
- Tools/products: ML-driven calibrators, contextuality-aware control firmware.
- Assumptions/dependencies: Stable telemetry linking contextual metrics to physical error parameters; continuous monitoring and safe update mechanisms.
- Quantum network coordination via pseudo-telepathy-style protocols
- Sectors: distributed computing, telecom
- What could emerge: Multi-party coordination primitives where shared entanglement plus local measurements enforce global parity/constraint satisfaction without classical communication at runtime.
- Tools/products: Entanglement-backed coordination APIs for distributed systems; protocols for synchronized responses and consensus checks.
- Assumptions/dependencies: Entanglement distribution across nodes with low loss; synchronized control; robust error handling.
- Secure multi-party computation (MPC) primitives using GHZ parity constraints
- Sectors: finance, healthcare, public sector
- What could emerge: MPC subroutines where GHZ-like parity checks ensure correctness or detect tampering; entanglement-assisted verifiable computation layers.
- Tools/products: Verifiable aggregation and parity enforcement modules; audit trails via contextual games.
- Assumptions/dependencies: Reliable entanglement resources among parties; practical interoperability with classical MPC frameworks; compliance and governance.
- Improved error correction and noise characterization informed by contextuality
- Sectors: hardware, software (QEC)
- What could emerge: QEC strategies tuned by contextuality-sensitive diagnostics to target coherent error modes; thresholds and transitions (e.g., Nishimori-type) identified via constant-depth contextual circuits.
- Tools/products: Contextuality-aware decoders; calibration routines aligned with QEC thresholds.
- Assumptions/dependencies: High-quality syndrome extraction; scalable stabilizer measurements; consistent linkage between contextual metrics and decoder performance.
- Expanded education and workforce development around contextuality
- Sectors: education, workforce training
- What could emerge: Standard curricula and certifications focused on contextuality as a computational resource, preparing talent for hardware, theory, and applications.
- Tools/products: Courseware, lab kits, training programs integrated with cloud quantum access.
- Assumptions/dependencies: Broad availability of educational devices/simulators; alignment with industry benchmarks.
Notes on key assumptions across applications:
- Proven separations for shallow circuits often rely on asymptotic bounds or conjectured classical limits (e.g., log2(s) depth assumptions); near-term “advantage” claims must be framed with these caveats.
- Experimental demonstrations in this paper are not device-independent; they require trusting the hardware and error models.
- Many-body contextuality benefits diminish with size under current noise; success margins can be recovered statistically (large Ns), but practical deployment demands higher fidelities and/or error mitigation/QEC.
Glossary
- Ancilla qubit: A helper qubit used to facilitate operations or measurements on other qubits without collapsing their state. "we achieve by using an ancilla qubit (An)."
- Bell pairs: Maximally entangled two-qubit states used as shared resources in quantum protocols. "Starting with a pair of Bell pairs,"
- Bell-Kochen-Specker (BKS) theorem: A no-go theorem showing that non-contextual hidden-variable models cannot reproduce all quantum predictions. "The Bell-Kochen-Specker (BKS) theorem states that no non-contextual hidden-variables (NCHV) theory can reproduce the predictions of quantum mechanics"
- Bernstein–Vazirani problem: A foundational quantum query problem used to study algorithmic speedups; here, a non-oracular variant is considered. "non-oracular variant of the well-known Bernstein-Vazirani problem"
- Binomial distribution: A discrete probability distribution modeling the number of successes in repeated independent trials. "the error bar plotted for each game is 100 times the statistical uncertainty derived from the binomial distribution."
- Clifford simulations: Classical simulations restricted to stabilizer (Clifford) circuits used to validate quantum experiments. "Comparing measured bitstrings with classical Clifford simulations using Stim [61],"
- Controlled-NOT (C-NOT) gate: A two-qubit gate that flips the target qubit conditional on the control qubit being in state |1>. "C-NOT gate"
- Controlled-Z (CZ) gate: A native two-qubit entangling operation that applies a phase of -1 when both qubits are |1>. "followed by four layers of CZ gates,"
- Depolarizing noise: A common noise model where a qubit randomly depolarizes to the maximally mixed state after a gate. "by either adding depolarizing noise following the single-qubit (SQ) or two-qubit (2Q) gates,"
- Dynamical decoupling: Sequences of pulses used to mitigate decoherence during idle times. "we omit the dynamical decoupling sequences applied to qubits during idle periods."
- Ensemble average: The average of a quantity over many identically prepared experimental runs. "the average is the ensemble average of the products of the outcomes of the observables listed in the first row,"
- GHZ parity game: A nonlocal multi-player game where players sharing a GHZ state can win with certainty. "In the N-qubit GHZ parity game,"
- GHZ state: A multipartite entangled state of the form (|00…0⟩ + |11…1⟩)/√2. "the GHZ state, |GHZ⟩ = (|000…⟩ + |111…⟩)/√2,"
- Hadamard gate: A single-qubit gate that creates superposition by mapping computational basis states to equal-weight superpositions. "Then all players apply the Hadamard gate and measure their qubit in the Z basis,"
- Hidden linear function (HLF) problem: A computational task where one must recover a linear function hidden within a quadratic form modulo 4. "We consider the following hidden linear function (HLF) problem:"
- Kochen-Specker-Bell inequality: A contextuality inequality bounding correlations under non-contextual models, which quantum experiments can violate. "Kochen-Specker-Bell inequality violation."
- Logarithmic-depth circuits: Classical circuit families whose depth grows as O(log n), used here as a lower-bound benchmark. "requires logarithmic-depth circuits."
- Magic square game: The Mermin–Peres pseudo-telepathy game demonstrating quantum contextuality with perfect quantum strategies. "we implement the magic square game [40-44] on our superconducting qubit processor."
- Many-body contextuality: Contextuality manifesting in systems with many entangled qubits, relevant for scalable tasks and benchmarking. "many-body quantum contextuality can enable bounded-resource tasks more efficiently than classical counterparts."
- Measurement context: The set of jointly measured observables whose compatibility affects outcomes in contextual scenarios. "Each row and column represents a distinct measurement context,"
- Measurement contextuality: The phenomenon that measurement outcomes depend on the set of compatible measurements performed, not on preassigned values. "Quantum measurement contextuality arises from the observation that in certain quantum systems the outcomes of observables depend on the measurement context and are not predetermined"
- Non-commutativity (of observables): The property that certain quantum operators do not commute, leading to order-dependent measurement outcomes. "rooted in the non-commutativity of quantum observables."
- Non-contextual hidden-variables (NCHV) theory: A classical model assuming measurement outcomes are predetermined and independent of context. "no non-contextual hidden-variables (NCHV) theory"
- Pauli operators: The fundamental single-qubit operators X, Y, Z used to define measurements and stabilizers. "3-qubit Pauli operators A = X1Y2Y3, B = Y1X2Y3, and C = Y1Y2X3,"
- Phase gate (S gate): A single-qubit gate that adds a phase i to |1>, used to switch measurement bases. "each applies an S gate to their qubit if xj = 1."
- Quantum advantage: Demonstrated superiority of quantum algorithms over classical ones under specified resource constraints. "gave the first unconditional quantum advantage result for a restricted class of circuits"
- Quantum non-demolition (QND) measurement: A measurement that preserves the observable’s value, allowing repeated readout without collapse. "quantum non-demolition (QND) measurements"
- Quantum pseudo-telepathy: Games where shared entanglement enables perfect coordination without communication, surpassing classical limits. "In these so-called quantum pseudo-telepathy games, players achieve higher winning probabilities"
- Randomized benchmarking (RB): A method to characterize average gate error rates via random sequences of gates. "single-qubit randomized benchmarking (1Q RB) errors,"
- Readout error: The probability of misreporting a qubit’s measured state due to imperfections in measurement. "The readout errors e0 and e1 correspond to the probability of reporting |1⟩ when the qubit is prepared in |0⟩ and of reporting |0⟩ when prepared in |1⟩, respectively."
- Shallow circuits (fixed constant depth): Quantum circuits with a constant number of gate layers, central to certain advantage proofs. "a quantum circuit of fixed constant depth,"
- Stabilizer: An operator that leaves a quantum state unchanged, used to characterize entangled states like GHZ. "is stabilized by a tensor product of Pauli X and Y with an even number of Y,"
- Superconducting qubit processor: A quantum computing platform using superconducting circuits to implement qubits and gates. "Using a superconducting qubit processor, we show that quantum contextuality enables certain tasks to be performed with success probabilities beyond classical limits."
- Tensor product: The operation combining multiple subsystem operators or states into a composite system. "is stabilized by a tensor product of Pauli X and Y with an even number of Y,"
- Time-to-solution: A performance metric indicating how many runs are needed to obtain a correct result, inversely related to success fraction. "This metric, sometimes called time-to-solution, quantifies the extra runs needed to find the correct bitstring"
- Two-qubit cross-entropy benchmarking (XEB): A protocol to estimate two-qubit gate errors by comparing measured output distributions to ideal ones. "two-qubit cross-entropy benchmarking (2Q XEB) Pauli errors for CZ gates."