A Depth-Independent Linear Chain Ansatz for Large-Scale Quantum Approximate Optimization

Published 22 Sep 2025 in quant-ph (arXiv:2509.17296v1)

Abstract: Combinatorial optimization lies at the heart of numerous real-world applications. For a broad class of optimization problems, quantum computing is expected to offer a speed-up over classical computing. Among quantum algorithms, the Quantum Approximate Optimization Algorithm (QAOA), a variational quantum algorithm, shows promise for demonstrating quantum advantage on noisy intermediate-scale quantum (NISQ) hardware. However, as the problem size grows, the circuit depth demanded by the original QAOA scales rapidly and quickly exceeds the threshold at which meaningful results can be obtained. To address this challenge, we propose a variant of QAOA (termed linear-chain QAOA) and demonstrate its advantages over the original QAOA on paradigmatic MaxCut problems. In the original QAOA, each graph edge is encoded with one entangling gate. In our ansatz, we locate a linear chain in the original MaxCut graph and place entangling gates sequentially along this chain. The linear-chain ansatz features shallow quantum circuits and an execution time that scales independently of the problem size. Leveraging this ansatz, we demonstrate an approximation ratio of 0.78 (without post-processing) on non-hardware-native random regular MaxCut instances with 100 vertices, run on a digital quantum processor using 100 qubits. Our findings offer new insights into the design of hardware-efficient ansätze and point toward a promising route for tackling large-scale combinatorial optimization problems on NISQ devices.
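The core idea, placing entangling gates only along a linear chain of edges rather than on every edge of the problem graph, can be sketched classically on a toy graph. The greedy chain extraction below is a hypothetical stand-in (the abstract does not specify how the chain is located), and `maxcut_value` is the standard MaxCut objective used to score candidate bitstrings:

```python
def find_linear_chain(n, edges):
    """Greedily extract a simple path (linear chain) from the graph.

    In the linear-chain ansatz, entangling gates would be placed
    sequentially along this path instead of on every graph edge.
    Greedy extraction here is an illustrative assumption, not the
    paper's actual procedure.
    """
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = max(adj, key=lambda v: len(adj[v]))  # start at a max-degree vertex
    chain, visited, cur = [start], {start}, start
    while True:
        unvisited = [w for w in adj[cur] if w not in visited]
        if not unvisited:
            break
        cur = unvisited[0]
        visited.add(cur)
        chain.append(cur)
    return chain


def maxcut_value(edges, assignment):
    """Number of edges cut by a +/-1 spin assignment (MaxCut objective)."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])
```

For a 4-cycle, `find_linear_chain` returns a path visiting all four vertices, and the alternating assignment cuts all four edges; the approximation ratio reported in the abstract is the measured cut value divided by the optimal cut, averaged over sampled bitstrings.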
