
Quantum Natural Gradient with Efficient Backtracking Line Search

Published 1 Nov 2022 in quant-ph (arXiv:2211.00615v1)

Abstract: We consider the Quantum Natural Gradient Descent (QNGD) scheme recently proposed for training variational quantum algorithms. QNGD is Steepest Gradient Descent (SGD) operating on the complex projective space equipped with the Fubini-Study metric. Here we present an adaptive implementation of QNGD based on Armijo's rule, an efficient backtracking line search with proven convergence. The proposed algorithm is tested using noisy simulators on three different models with various initializations. Our results show that Adaptive QNGD dynamically adapts the step size and consistently outperforms the original QNGD, which requires knowledge of the optimal step size to perform competitively. In addition, we show that the additional cost of performing the line search in Adaptive QNGD is minimal, ensuring that the gains provided by the proposed adaptive strategy dominate any increase in complexity. Our benchmarking also demonstrates that a simple SGD algorithm (implemented in Euclidean space), equipped with the same adaptive scheme, can yield performance similar to the QNGD scheme with optimal step size. Our results are yet another confirmation of the importance of differential geometry in variational quantum computations. Indeed, we foresee advanced mathematics playing a prominent role in the NISQ era, guiding the design of faster and more efficient algorithms.
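The core ingredient of the adaptive scheme is Armijo's rule: starting from an initial step size, the step is repeatedly shrunk until a sufficient-decrease condition holds. A minimal sketch of a generic backtracking line search for gradient descent is below; the function name, default constants, and the toy quadratic objective are illustrative choices, not taken from the paper (which applies the rule to the quantum natural gradient direction rather than the plain Euclidean gradient).

```python
import numpy as np

def armijo_backtracking(f, grad, x, t0=1.0, beta=0.5, c=1e-4, max_iter=50):
    """Backtracking line search via Armijo's rule (illustrative sketch).

    Shrinks the step size t until the sufficient-decrease condition
        f(x - t * g) <= f(x) - c * t * ||g||^2
    holds, where g = grad(x). Defaults t0, beta, c are common textbook
    choices, not the paper's settings.
    """
    g = grad(x)
    fx = f(x)
    t = t0
    for _ in range(max_iter):
        if f(x - t * g) <= fx - c * t * np.dot(g, g):
            break
        t *= beta  # shrink the step and try again
    return t

# Toy usage: one adaptive step on f(x) = ||x||^2 from x = [2, 2]
f = lambda x: np.dot(x, x)
grad = lambda x: 2.0 * x
x = np.array([2.0, 2.0])
t = armijo_backtracking(f, grad, x)
x_new = x - t * grad(x)
```

In the adaptive QNGD setting described in the abstract, the descent direction would instead be the Fubini-Study-metric-preconditioned gradient; the backtracking loop itself is unchanged, which is why its overhead is small.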
