
Polynomially efficient quantum enabled variational Monte Carlo for training neural-network quantum states for physico-chemical applications

Published 16 Dec 2024 in quant-ph, cond-mat.str-el, and physics.chem-ph | (2412.12398v1)

Abstract: Neural-network quantum states (NQS) offer a versatile and expressive alternative to traditional variational ansätze for simulating physical systems. Energy-based frameworks, like Hopfield networks and Restricted Boltzmann Machines, leverage statistical physics to map quantum states onto an energy landscape, functioning as memory descriptors. Here, we show that such models can be efficiently trained using Monte Carlo techniques enhanced by quantum devices. Our algorithm scales linearly with circuit width and depth, requires a constant number of measurements, avoids mid-circuit measurements, and is polynomial in storage, ensuring optimal efficiency. It applies to both phase and amplitude fields, significantly expanding the trial space compared to prior methods. Quantum-assisted sampling accelerates Markov Chain convergence and improves sample fidelity, offering advantages over classical approaches. We validate our method by accurately learning ground states of local spin models and non-local electronic structure Hamiltonians, even in distorted molecular geometries with strong multi-reference correlations. Benchmark comparisons show robust agreement with traditional methods. This work highlights the potential of combining machine learning protocols with near-term quantum devices for quantum state learning, with promising applications in theoretical chemistry and condensed matter physics.
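To make the setting concrete, the following is a minimal sketch of the classical baseline the abstract refers to: a Restricted Boltzmann Machine ansatz for a spin wavefunction, sampled with single-spin-flip Metropolis Monte Carlo. All sizes, parameter initializations, and function names here are illustrative assumptions (the sketch uses real amplitudes only; the paper's contribution is to extend such sampling to phase and amplitude fields and to accelerate the Markov chain with a quantum device):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: 4 visible spins, 8 hidden units.
n_vis, n_hid = 4, 8
a = 0.01 * rng.standard_normal(n_vis)           # visible biases
b = 0.01 * rng.standard_normal(n_hid)           # hidden biases
W = 0.01 * rng.standard_normal((n_vis, n_hid))  # visible-hidden couplings

def log_psi(s):
    """Log-amplitude of the RBM ansatz with hidden units traced out:
    psi(s) = exp(a.s) * prod_j 2 cosh(b_j + sum_i s_i W_ij)."""
    theta = b + s @ W
    return a @ s + np.sum(np.log(2.0 * np.cosh(theta)))

def metropolis_chain(n_steps, s0):
    """Classical single-spin-flip Metropolis sampling of |psi(s)|^2.
    This is the Markov-chain step the paper proposes to accelerate
    with quantum-assisted sampling."""
    s = s0.copy()
    samples = []
    for _ in range(n_steps):
        i = rng.integers(n_vis)
        s_new = s.copy()
        s_new[i] *= -1.0
        # Acceptance probability min(1, |psi(s')/psi(s)|^2)
        if rng.random() < np.exp(2.0 * (log_psi(s_new) - log_psi(s))):
            s = s_new
        samples.append(s.copy())
    return np.array(samples)

s0 = rng.choice([-1.0, 1.0], size=n_vis)
samples = metropolis_chain(200, s0)
print(samples.shape)
```

In a full variational Monte Carlo loop, these samples would be used to estimate the energy and its parameter gradients, which then update `a`, `b`, and `W`.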


Authors (3)
