Evaluating the Convergence Limit of Quantum Neural Tangent Kernel
Published 5 Dec 2023 in quant-ph (arXiv:2312.02451v1)
Abstract: Quantum variational algorithms are among the major applications of quantum computing on current quantum devices, and there have been recent attempts to establish a theoretical foundation for them. One possible approach is to characterize their training dynamics with a quantum neural tangent kernel. In this work, we construct the kernel for two models, Quantum Ensemble and Quantum Neural Network, and show the convergence of these models in the limit of infinitely many qubits. We also show applications of the kernel limit in regression tasks.
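As a rough illustration of the regression application mentioned in the abstract: once a quantum neural tangent kernel converges to a fixed kernel in the infinite-qubit limit, prediction reduces to standard kernel ridge regression with that limit kernel. The sketch below uses a generic RBF kernel as a stand-in (the paper's actual limit kernel is not reproduced here).

```python
# Minimal kernel ridge regression sketch. The rbf_kernel is a placeholder;
# in the setting of the paper, the limit of the quantum neural tangent
# kernel would be substituted in its place.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Squared Euclidean distances between rows of A and rows of B.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def kernel_ridge_predict(X_train, y_train, X_test, reg=1e-6):
    # Solve (K + reg*I) alpha = y, then predict with the cross-kernel.
    K = rbf_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + reg * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 1))
y = np.sin(np.pi * X[:, 0])
pred = kernel_ridge_predict(X, y, X)  # near-interpolation at training points
```

With a positive-definite kernel and a small ridge term, predictions at the training points nearly interpolate the labels; the interesting behavior of the limit kernel shows up on held-out test points.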