
Solving boolean satisfiability problems with the quantum approximate optimization algorithm

Published 14 Aug 2022 in quant-ph (arXiv:2208.06909v1)

Abstract: The quantum approximate optimization algorithm (QAOA) is one of the most prominent proposed applications for near-term quantum computing. Here we study the ability of QAOA to solve hard constraint satisfaction problems, as opposed to optimization problems. We focus on the fundamental boolean satisfiability problem, in the form of random $k$-SAT. We develop analytic bounds on the average success probability of QAOA over random boolean formulae at the satisfiability threshold, as the number of variables $n$ goes to infinity. The bounds hold for fixed parameters and when $k$ is a power of 2. We complement these theoretical results with numerical results on the performance of QAOA for small $n$, showing that these match the limiting theoretical bounds closely. We then use these results to compare QAOA with leading classical solvers. In the case of random 8-SAT, we find that for around 14 ansatz layers, QAOA matches the scaling performance of the highest-performance classical solver we tested, WalkSATlm. For larger numbers of layers, QAOA outperforms WalkSATlm, with an ultimate level of advantage that is still to be determined. Our methods provide a framework for analysing the performance of QAOA for hard constraint satisfaction problems and finding further speedups over classical algorithms.
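The abstract describes the QAOA pipeline (a phase-separation layer generated by the clause-counting cost Hamiltonian, alternated with a transverse-field mixer) but gives no code. Below is a minimal illustrative sketch, not the authors' implementation: it exactly simulates fixed-parameter QAOA with NumPy statevectors on a toy random 3-SAT instance and reports the probability of measuring a satisfying assignment. The helper names (`random_ksat`, `qaoa_success_probability`), the instance size, and the parameter schedules are all assumptions chosen for illustration; the paper works with random 8-SAT at the satisfiability threshold and at much larger depths.

```python
# Illustrative sketch only: fixed-parameter QAOA for a toy random k-SAT
# instance, simulated exactly with NumPy statevectors. All names and
# parameter values here are assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def random_ksat(n, m, k):
    """Sample m random k-clauses over n variables.
    Each clause is a list of (variable index, negated?) pairs."""
    clauses = []
    for _ in range(m):
        vars_ = rng.choice(n, size=k, replace=False)
        negs = rng.integers(0, 2, size=k).astype(bool)
        clauses.append(list(zip(vars_, negs)))
    return clauses

def cost_diagonal(n, clauses):
    """Diagonal of the cost Hamiltonian C: the number of unsatisfied
    clauses for each of the 2^n computational basis states."""
    cost = np.zeros(2 ** n)
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - v)) & 1 for v in range(n)]
        for clause in clauses:
            # Literal (v, neg) is true when bits[v] == 1 XOR neg;
            # the clause is unsatisfied when every literal is false.
            if not any((bits[v] == 1) != neg for v, neg in clause):
                cost[idx] += 1
    return cost

def qaoa_success_probability(n, clauses, gammas, betas):
    """Run p-layer QAOA with fixed parameters and return the
    probability of measuring a satisfying assignment (cost == 0)."""
    dim = 2 ** n
    cost = cost_diagonal(n, clauses)
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # |+>^n

    def apply_mixer(psi, beta):
        # exp(-i*beta*sum_j X_j): one single-qubit X rotation per qubit.
        psi = psi.reshape([2] * n)
        rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                       [-1j * np.sin(beta), np.cos(beta)]])
        for q in range(n):
            psi = np.moveaxis(np.tensordot(rx, psi, axes=([1], [q])), 0, q)
        return psi.reshape(dim)

    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * cost) * state  # phase-separation layer
        state = apply_mixer(state, beta)            # transverse-field mixer
    probs = np.abs(state) ** 2
    return probs[cost == 0].sum()

# Toy instance near the random 3-SAT threshold density m/n ~ 4.27.
n, k = 10, 3
clauses = random_ksat(n, m=int(4.27 * n), k=k)
p = 3  # ansatz depth; the paper studies far deeper circuits for 8-SAT
gammas = np.linspace(0.1, 0.4, p)  # fixed (not optimized) schedules,
betas = np.linspace(0.4, 0.1, p)   # chosen arbitrarily for illustration
print("P(satisfying assignment) =",
      qaoa_success_probability(n, clauses, gammas, betas))
```

Note that a sampled instance at the threshold density may be unsatisfiable, in which case the success probability is exactly zero; the paper's quantity of interest is the average success probability over the random formula ensemble as $n \to \infty$.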

Citations (42)
