
On the Interpretability of Quantum Neural Networks

Published 22 Aug 2023 in quant-ph and cs.LG (arXiv:2308.11098v2)

Abstract: Interpretability of AI methods, particularly deep neural networks, is of great interest. This heightened focus stems from the widespread use of AI-backed systems. These systems, often relying on intricate neural architectures, can exhibit behavior that is challenging to explain and comprehend. The interpretability of such models is a crucial component of building trusted systems. Many methods exist to approach this problem, but they do not apply straightforwardly to the quantum setting. Here, we explore the interpretability of quantum neural networks using local model-agnostic interpretability measures commonly utilized for classical neural networks. Following this analysis, we generalize a classical technique called LIME, introducing Q-LIME, which produces explanations of quantum neural networks. A feature of our explanations is the delineation of the region in which data samples have been given a random label, likely the result of inherently random quantum measurements. We view this as a step toward understanding how to build responsible and accountable quantum AI models.
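The local model-agnostic approach the abstract refers to can be illustrated with a minimal LIME-style surrogate: perturb an input, query the (black-box) classifier on the perturbations, and fit a proximity-weighted linear model whose coefficients serve as the explanation. The sketch below is illustrative only and is not the paper's Q-LIME implementation; `model_proba` is a hypothetical stand-in for a quantum classifier's class-1 probability, and all parameter names and values are assumptions.

```python
import numpy as np

def model_proba(X):
    # Hypothetical black-box classifier standing in for a QNN:
    # class-1 probability depends only on feature 0 (toy decision rule).
    return 1.0 / (1.0 + np.exp(-4.0 * X[:, 0]))

def lime_explain(x, model, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around sample x;
    the per-feature coefficients are the local explanation."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance of interest.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Query the black-box model on the perturbations.
    y = model(Z)
    # 3. Weight each perturbation by its proximity to x (RBF kernel).
    d2 = np.sum((Z - x) ** 2, axis=1)
    w = np.exp(-d2 / kernel_width ** 2)
    # 4. Weighted least squares: scale rows by sqrt(w) and solve.
    A = np.hstack([Z, np.ones((n_samples, 1))])  # append intercept column
    s = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(s[:, None] * A, s * y, rcond=None)
    return coef[:-1]  # drop intercept; return per-feature importances

x0 = np.array([0.2, -0.1])
weights = lime_explain(x0, model_proba)
print(weights)  # feature 0 should dominate the local explanation
```

Q-LIME's distinguishing feature, per the abstract, is the extra step of flagging the region where labels are effectively random due to quantum measurement noise; in a sketch like this, that would correspond to down-weighting or marking perturbations whose predicted probabilities sit near 0.5.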
