
Certified Randomness Generation

Updated 9 February 2026
  • Certified randomness generation is a framework based on fundamental physical axioms, such as no-signaling, to produce unpredictable random bits.
  • It utilizes protocols like Bell tests and robust randomness extractors to quantify and certify the min-entropy of the output against adversarial quantum side information.
  • These techniques play a vital role in both foundational quantum nonlocality experiments and practical cryptographic applications, ensuring high security and reliability.

Certified randomness generation refers to protocols that produce random bits whose unpredictability is provable under explicit, minimal physical or mathematical assumptions—even in the presence of adversarial or unknown devices. Unlike conventional random number generation, which relies on presumed device models, certified (sometimes “device-independent”) randomness requires only the statistics of the observed outcomes and a few fundamental physical axioms, such as the impossibility of faster-than-light signaling or a bound on physical resources. This approach enables rigorous statements about the min-entropy of the generated output, quantifies imperfections, and is central both for foundational tests of quantum nonlocality and for secure cryptographic applications.

1. Foundational Models and Physical Assumptions

Certified randomness protocols derive their security from physical principles rather than detailed device modeling. The canonical scenario involves two or more spatially separated quantum devices (or subsystems) that do not communicate during the protocol. Key model features include:

  • No-signaling: The joint conditional output distribution p(a,b|x,y) must satisfy ∑_a p(a,b|x,y) = ∑_a p(a,b|x′,y) for all x, x′, y (and symmetrically, ∑_b p(a,b|x,y) = ∑_b p(a,b|x,y′) for Alice's marginal), ensuring that measurement choices on one device cannot influence the outcome statistics of the other in the same round. This assumption precludes classical hidden-variable strategies mimicking quantum randomness (Vazirani et al., 2011).
  • Adversarial devices: The devices may be arbitrary black boxes, pre-programmed or even built and entangled by an adversary; only spatial separation and the no-signaling condition (or its analogs in temporal, semi-device-independent, or prepare-and-measure scenarios) constrain their behavior.
  • Quantum adversary variant: In the strongest model, an adversary may hold arbitrary quantum side information E correlated with the devices' initial state, modeled as a general ρ_ABE, but may not interact with the devices during the protocol (Vazirani et al., 2011).
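
The no-signaling condition above is simple to verify numerically for any candidate behavior p(a,b|x,y). The sketch below (illustrative; the function names are ours) checks it for the Popescu–Rohrlich box, a behavior that is maximally nonlocal yet still no-signaling:

```python
import itertools

def pr_box(a, b, x, y):
    # Popescu-Rohrlich box: outputs satisfy a XOR b = x AND y, each with prob 1/2
    return 0.5 if (a ^ b) == (x & y) else 0.0

def is_no_signaling(p, tol=1e-12):
    """Check that each party's marginal is independent of the other's input."""
    # Bob's marginal sum_a p(a,b|x,y) must not depend on Alice's input x
    for b, y in itertools.product(range(2), repeat=2):
        marg = [sum(p(a, b, x, y) for a in range(2)) for x in range(2)]
        if abs(marg[0] - marg[1]) > tol:
            return False
    # Alice's marginal sum_b p(a,b|x,y) must not depend on Bob's input y
    for a, x in itertools.product(range(2), repeat=2):
        marg = [sum(p(a, b, x, y) for b in range(2)) for y in range(2)]
        if abs(marg[0] - marg[1]) > tol:
            return False
    return True

print(is_no_signaling(pr_box))  # True: the PR box respects no-signaling
```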

Alternative physical regimes include one-sided device independence (steering scenario), semi-device-independent models leveraging dimension witnesses or physical resource bounds, contextuality-based (Kochen–Specker) certification, and relativistic or temporal protocols (e.g., using the Leggett–Garg inequality).

2. Protocol Structures for Certified Randomness

Protocols for certified randomness generation fall into several broad classes, often differentiated by their physical assumption set:

  • Device-independent (DI) random generation: Protocols such as the Vazirani–Vidick construction (Vazirani et al., 2011) use two spatially separated, untrusted quantum devices and statistical tests based on Bell inequality (e.g., CHSH) violation to certify randomness. The process typically involves: generating a sparse random sampling of inputs, grouping rounds into blocks, performing CHSH or generalized nonlocal games, post-selecting on statistical-test success, and finally applying a classical or quantum-proof randomness extractor seeded with fresh uniform bits.
  • Randomness expansion and extraction: The “expansion” task grows a small (e.g., O(log n)) uniform random seed into n certifiable random bits. A final cryptographic extractor (e.g., Trevisan’s) is then used to amplify the raw output's min-entropy into nearly uniform bits, even in the presence of quantum side information (Vazirani et al., 2011, Acín et al., 2017).
  • Steering-based protocols: One-sided device-independent (1SDI) settings relax requirements by trusting one party's devices (e.g., using quantum steering inequalities and reconstructing the assemblage) while treating the other as adversarial. These enable robust randomness certification with lower detector efficiencies and without space-like separation (Coyle et al., 2020, Joch et al., 2021).
  • Contextuality or temporal protocols: These certify randomness through violations of contextuality (e.g., Kochen–Specker theorems (Kulikov et al., 2017)) or temporal inequalities (Leggett–Garg inequalities), often requiring only single quantum systems and no spatial separation (Nath et al., 2024, Nath et al., 5 Feb 2025).
  • Semi-device-independent protocols: Dimension witnesses or resource constraints (e.g., on the Hilbert-space dimension or energy uncertainty) yield certified randomness with minimal assumptions (Chen et al., 2020, Jones et al., 17 Jun 2025).

3. Quantification, Extractors, and Security Measures

The central information-theoretic metric is the min-entropy of the output conditioned on all side information (including a potentially quantum adversary's knowledge):

H_∞^ε(B|E) = sup_{ρ′ : ‖ρ′ − ρ‖₁ ≤ ε} H_∞(B|E)_{ρ′},

the ε-smooth conditional min-entropy: the largest min-entropy attainable over states ρ′ within trace distance ε of the actual state ρ.
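
In the special case of no side information, this reduces to the plain min-entropy H_∞(X) = −log₂ max_x p(x), the negative log of an adversary's best single-guess probability. A quick illustrative sketch:

```python
import math

def min_entropy(probs):
    """Unconditional min-entropy: -log2 of the best guessing probability."""
    return -math.log2(max(probs))

print(min_entropy([0.5, 0.5]))  # 1.0: a fair coin carries one full bit
print(min_entropy([0.9, 0.1]))  # ~0.152: a biased bit certifies far less
```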

A protocol typically provides a security dichotomy:

  • Either the output register has min-entropy at least n bits,
  • Or the protocol aborts/fails with probability at most ε (Vazirani et al., 2011).

Randomness extraction uses quantum-secure strong extractors, such as Trevisan’s, instantiated with a (classical) uniform seed, to convert strings with high min-entropy into nearly uniform bits independent of all side information (Vazirani et al., 2011, Acín et al., 2017, Bierhorst et al., 2017). The extractor seed size is O(log³ n) or similar, and the output uniformity can be made arbitrarily close to ideal, up to a total variation distance error parameter.
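
Trevisan's construction is intricate to implement; as an illustrative stand-in (not the extractor used in the cited papers), the sketch below uses the simpler Toeplitz-hashing extractor, a two-universal hash that the leftover hash lemma also makes secure against quantum side information. All names and sizes here are ours:

```python
import numpy as np

def toeplitz_extract(raw_bits, seed_bits, out_len):
    """Extract out_len nearly uniform bits from raw_bits with high min-entropy.
    The seed defines a random 0/1 Toeplitz (constant-diagonal) matrix and must
    contain len(raw_bits) + out_len - 1 uniform bits."""
    n = len(raw_bits)
    assert len(seed_bits) == n + out_len - 1
    # Row i is a sliding window into the seed; T[i][j] depends only on j - i,
    # so the diagonals are constant (Toeplitz structure).
    T = np.array([seed_bits[out_len - 1 - i : out_len - 1 - i + n]
                  for i in range(out_len)], dtype=np.uint8)
    return (T @ np.asarray(raw_bits, dtype=np.uint8)) % 2

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, size=64)            # raw device output (high min-entropy)
seed = rng.integers(0, 2, size=64 + 32 - 1)  # fresh uniform seed
out = toeplitz_extract(raw, seed, 32)        # 32 nearly uniform output bits
```

Toeplitz hashing needs a seed linear in the input length, which is why seed-efficient constructions like Trevisan's (polylogarithmic seed) matter for expansion protocols.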

Protocols often report both the authenticated amount of smooth min-entropy and the total error probability (soundness). Performance trade-offs—such as seed length, total rounds, device repetitions, and noise tolerance—are quantitatively analyzed (Vazirani et al., 2011).

4. Key Proof Techniques and Protocol Efficiency

Security proofs harness several core arguments:

  • Guessing-game reduction: If the output has too little min-entropy, the device would enable signaling (contradicting no-signaling). For example, in the Vazirani–Vidick protocol, a block with low entropy from Bob enables Alice to guess Bob's output with probability greater than 1/2, which is impossible under the no-signaling hypothesis (Vazirani et al., 2011).
  • Entropy accumulation: Min-entropy is shown to be generated at the per-block level and then accumulated to yield globally high entropy (Vazirani et al., 2011).
  • Robust block design: Grouping of rounds and checks (e.g., fixed input rounds interleaved with random input “test” rounds) forces the device to supply genuine quantum randomness, not just average-case violations.
  • Quantum-proof extraction and side-information analysis: Quantum-proof extractors and list-decoding arguments guarantee security even against adversaries holding quantum advice; measurements on such advice cannot guess the output string except with exponentially small probability (Vazirani et al., 2011).

Efficiency is addressed through careful seed selection, block grouping, and spot-checking, allowing for exponential randomness expansion using only O(log n) or O(log³ n) fresh seed bits (Vazirani et al., 2011).

Noise tolerance presently remains a limitation: the Vazirani–Vidick protocol, for instance, tolerates errors scaling as O(1/ln² n), leaving robust performance under realistic levels of device imperfection open (Vazirani et al., 2011).
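
To get a feel for what O(1/ln² n) scaling means (constants suppressed; purely illustrative, not a figure from the paper):

```python
import math

def vv_noise_scale(n):
    # Illustrative O(1/ln^2 n) noise-tolerance scaling with the constant set to 1
    return 1 / math.log(n) ** 2

# Tolerated error shrinks slowly but steadily as the output length n grows
for n in (10**3, 10**6, 10**9):
    print(f"n = {n:>10}: tolerated error ~ {vv_noise_scale(n):.2e}")
```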

5. Extensions, Generalizations, and Open Challenges

Extensions beyond the original Bell-based, fully DI model are significant:

  • Witness- and input-based certification: All entangled states, including those not violating standard Bell inequalities with classical inputs, can be used for certified randomness by tailored Bell-like witnesses, quantum inputs, and appropriately designed statistical tests (Chen, 2017).
  • Quantum steering and 1SDI schemes: By certifying randomness through steering inequalities or sequential measurements with partially trusted devices, one achieves unbounded—in principle, arbitrary—randomness expansion with relaxed requirements (Coyle et al., 2020, Joch et al., 2021).
  • Temporal and resource-bounded protocols: Leggett–Garg tests (temporal correlations under no-signaling-in-time) or energy-spread–bounded prepare-and-measure scenarios allow certified randomness in single system protocols without spatial separation (Nath et al., 5 Feb 2025, Nath et al., 2024, Jones et al., 17 Jun 2025).
  • Cryptographic and computational hardness-based schemes: Newer protocols leverage computational assumptions (e.g., learning with errors, random circuit sampling) to certify statistical randomness generation in constant rounds with a single quantum device or over the cloud (Mahadev et al., 2022, Liu et al., 26 Mar 2025, Aaronson et al., 2023).

Major open questions include: boosting noise and loss tolerance to practical or imperfect-device regimes, minimizing round-complexity to O(n), parallelizing (fully) the certification tests, achieving efficient randomness extraction with minimal seed expansion, leveraging non-classical resources beyond CHSH-type Bell tests (e.g., contextuality or multipartite games), and integrating certified randomness into cryptographic primitives without significant seed or communication overheads (Vazirani et al., 2011).

6. Comparison of Certified Randomness Protocols

| Protocol Family | Physical Assumptions | Output Security | Seed Use | Noise Tolerance |
|---|---|---|---|---|
| Fully device-independent (DI), Bell | No-signaling, space-like separation | Against quantum E | O(log n) | O(1/ln² n) (Vazirani–Vidick) |
| 1SDI (steering) | Trust in one device | Against quantum E | Small (tomographic) | High (e.g., detector efficiency > 0.5) |
| Semi-DI (dimension, energy witness) | Bound on dimension/energy spread | Against classical/quantum | Small | Depends on witness |
| Contextuality-based (Kochen–Specker) | Quantum context-independence | Against non-contextual models | None | Robust, high rates |
| Temporal (Leggett–Garg) | No-signaling-in-time | Against classical MR/NIM | None | High (no spatial separation) |
| Computational (LWE/random circuits) | Lattice/complexity hardness | Statistical/computational assumptions | Arbitrary | Device-specific |

This spectrum illustrates the trade-offs between physical rigor, experimental feasibility, randomness rate, and the degree of device trust.

7. Impact and Practical Considerations

Certified randomness generation has enabled:

  • Experimental demonstrations with device-independent security at the bit-level, e.g., photonic Bell tests closing both detection and locality loopholes (Bierhorst et al., 2017, Bierhorst et al., 2018), contextuality-based QRNGs with high throughput (Kulikov et al., 2017), or computational cloud-based expansion at scale (Liu et al., 26 Mar 2025).
  • Randomness expansion and amplification: expanding short seeds into arbitrarily long random bitstrings under minimal, quantifiable assumptions (Vazirani et al., 2011, Acín et al., 2017).
  • Deep links to quantum foundations, such as the relationship between Bell nonlocality, contextuality, and inherent unpredictability.
  • Foundational implications for the completeness of quantum theory: device-independent random number generation provides experimental evidence against deterministic or hidden variable extensions of quantum mechanics (Acín et al., 2017).

Ongoing advances focus on improving protocol robustness, practical throughput, and bias/imperfection tolerance, as well as on integrating certified randomness into cryptographic applications with minimal practical overheads. Despite experimental challenges, protocols such as Vazirani–Vidick's establish the feasibility of exponential randomness expansion, and recent hardware improvements and cryptographic tests achieve generation rates and statistical randomness suitable for demanding security environments (Vazirani et al., 2011).
