
Turing-equivalent automata using a fixed-size quantum memory

Published 24 May 2012 in cs.CC and quant-ph (arXiv:1205.5395v1)

Abstract: We introduce a new public quantum interactive proof system and the first quantum alternating Turing machine: the qAM proof system and the qATM, respectively. Both are obtained from their classical counterparts (the Arthur-Merlin proof system and the alternating Turing machine, respectively) by augmenting them with a fixed-size quantum register. We focus on space-bounded computation and obtain the following surprising results: with constant space, both models are Turing-equivalent. More specifically, we show that for any Turing-recognizable language there exists a constant-space weak-qAM system (one in which nonmembers need not be rejected with high probability), and that any Turing-recognizable language can be recognized by a constant-space qATM, even with a one-way input head. For strong proof systems, where nonmembers must be rejected with high probability, we show that the known space-bounded classical private protocols can be simulated by our public qAM system within the same space bound. In addition, we introduce a strong version of the qATM, which must halt on every computation path, and show that strong qATMs (like private ATMs) can simulate deterministic space with exponentially less space. This shifts the deterministic space hierarchy by exactly one level. The main results rest on a new public protocol that makes clever use of its fixed-size quantum register. Interestingly, in some cases the quantum part of this public protocol cannot be simulated by any space-bounded classical protocol.
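To make the "fixed-size quantum register" concrete, here is a minimal illustrative sketch (not the paper's actual protocol): the smallest such register is a single qubit, a unit vector in C^2, evolved by one fixed unitary rotation. Because a rotation by an angle that is an irrational multiple of pi never exactly revisits a previous state, even a constant-size register can carry information about an unbounded number of steps — the general kind of effect that constant-space quantum models exploit. The angle choice below is an arbitrary assumption for demonstration.

```python
import numpy as np

# A single-qubit register: unit vector in C^2, starting in |0>.
state = np.array([1.0, 0.0])

# Fixed rotation by sqrt(2) radians. Since pi is transcendental,
# sqrt(2)/pi is irrational, so R^n is never exactly the identity
# for n >= 1: the register never returns precisely to |0>.
theta = np.sqrt(2.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Apply the same fixed unitary for 50 "steps" of a computation.
min_p1 = 1.0  # smallest probability of observing |1> along the way
for _ in range(50):
    state = R @ state
    p0, p1 = state[0] ** 2, state[1] ** 2
    # Unitarity preserves the norm: measurement probabilities sum to 1.
    assert abs(p0 + p1 - 1.0) < 1e-12
    min_p1 = min(min_p1, p1)
```

The point of the sketch is only that the register's state after n steps depends on n without any growth in memory; turning that dependence into verification of an unbounded computation is what the paper's protocol contributes.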
