Constant Overhead Entanglement Distillation via Scrambling

Published 13 Feb 2025 in quant-ph (arXiv:2502.09483v1)

Abstract: High-fidelity quantum entanglement enables key quantum networking capabilities such as secure communication and distributed quantum computing, but distributing entangled states through optical fibers is limited by noise and loss. Entanglement distillation protocols address this problem by extracting high-fidelity Bell pairs from multiple noisy ones. The primary objective is minimizing the resource overhead: the number of noisy input pairs needed to distill each high-fidelity output pair. While protocols achieving optimal overhead are known in theory, they often require complex decoding operations that make practical implementation challenging. We circumvent this challenge by introducing protocols that use quantum scrambling, the spreading of quantum information under chaotic dynamics, realized through random Clifford operations. Based on this scrambling mechanism, we design a distillation protocol that maintains asymptotically constant overhead, independent of the desired output error rate $\bar{\varepsilon}$, and can be implemented with shallow quantum circuits of depth $O(\operatorname{poly} \log \log \bar{\varepsilon}^{-1})$ and memory $O(\operatorname{poly} \log \bar{\varepsilon}^{-1})$. We show this protocol remains effective even with noisy quantum gates, making it suitable for near-term devices. Furthermore, by incorporating partial error correction, our protocol achieves state-of-the-art performance: starting with pairs of 10% initial infidelity, we require only 7 noisy inputs per output pair to distill a single Bell pair with infidelity $\bar{\varepsilon}=10^{-12}$, substantially outperforming existing schemes. Finally, we demonstrate the utility of our protocols through applications to quantum repeater networks.
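
To put the 7-inputs-per-output figure in context, the sketch below iterates the textbook BBPSSW recurrence protocol (Bennett et al., 1996) on Werner states, starting from the same 10% initial infidelity and targeting the same $\bar{\varepsilon}=10^{-12}$. This is not the paper's scrambling protocol, only a minimal baseline for comparison: the recurrence map and success probability are the standard Werner-state formulas, and the overhead accounting is an expected-value estimate that ignores success/failure correlations between rounds.

```python
# Baseline comparison sketch (not the paper's protocol): the textbook
# BBPSSW recurrence on Werner states, iterated from 10% infidelity down
# to the 1e-12 target quoted in the abstract.  Each round consumes two
# pairs of fidelity F and, with success probability p, yields one pair
# of fidelity F'.

def bbpssw_round(fidelity: float) -> tuple[float, float]:
    """One BBPSSW purification round on two identical Werner states.

    Returns (output fidelity, success probability), using the standard
    Werner-state formulas from Bennett et al. (1996).
    """
    f = fidelity
    x = (1.0 - f) / 3.0  # weight of each of the three non-target Bell states
    p_succ = f**2 + 2.0 * f * x + 5.0 * x**2
    f_out = (f**2 + x**2) / p_succ
    return f_out, p_succ

fidelity = 0.90      # 10% initial infidelity, as in the abstract
target = 1e-12       # target output infidelity
overhead = 1.0       # expected noisy input pairs per output pair
rounds = 0

while 1.0 - fidelity > target:
    fidelity, p_succ = bbpssw_round(fidelity)
    overhead *= 2.0 / p_succ  # two inputs per attempt, success prob. p_succ
    rounds += 1

print(f"rounds = {rounds}, expected overhead = {overhead:.3g} inputs per output")
```

Because the BBPSSW error shrinks only by a roughly constant factor per round near perfect fidelity, this baseline needs dozens of rounds and an expected overhead astronomically larger than 7; that gap is what the constant-overhead scrambling construction is claimed to close.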
