Evaluating the Explainable AI Method Grad-CAM for Breath Classification on Newborn Time Series Data

Published 13 May 2024 in cs.AI, cs.CY, and cs.LG | arXiv:2405.07590v1

Abstract: With the digitalization of health care systems, artificial intelligence is becoming more present in medicine. Machine learning in particular shows great potential for complex tasks such as time series classification, usually at the cost of transparency and comprehensibility. This lack of transparency undermines human trust and thus hinders active use. Explainable artificial intelligence (XAI) aims to close this gap by providing insight into the decision-making process, but the actual usefulness of its various methods remains unclear. This paper proposes a user-study-based evaluation of the explanation method Grad-CAM, applied to a neural network that classifies breaths in time series neonatal ventilation data. We report how useful different stakeholders perceived the explainability method to be, exposing the difficulty of achieving actual transparency and the wish of many participants for more in-depth explanations.
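To make the setting concrete: Grad-CAM weights each feature map of a convolutional layer by the gradient of the class score with respect to that map, then sums and rectifies the result to get a per-position relevance signal. The sketch below is not the authors' implementation; it assumes a hypothetical 1D CNN head of the common "global average pooling + linear classifier" form, for which the gradient of the class score with respect to feature map k reduces to the classifier weight `W[cls, k] / T`, so Grad-CAM can be computed in closed form on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (synthetic values): feature maps A from the last
# 1D-conv layer of a breath-classification CNN, shaped (C channels, T time
# steps), followed by global average pooling and a 2-class linear head W.
C, T = 8, 100
A = rng.standard_normal((C, T))   # conv feature maps
W = rng.standard_normal((2, C))   # linear classifier weights

def grad_cam_1d(A, W, cls):
    """Grad-CAM for a GAP + linear head.

    The class score is y = sum_k W[cls, k] * mean_t(A[k, t]), so the
    gradient dy/dA[k, t] = W[cls, k] / T, and the channel weights alpha_k
    (gradients averaged over time) are simply W[cls, k] / T.
    """
    T = A.shape[1]
    alpha = W[cls] / T                # channel importance weights
    cam = np.maximum(alpha @ A, 0.0)  # ReLU over the weighted channel sum
    return cam

cam = grad_cam_1d(A, W, cls=1)
print(cam.shape)  # (100,) -- one relevance value per time step
```

For a real network, the feature maps and gradients would be captured with framework hooks rather than computed in closed form, but the weighting-and-ReLU step producing the time-aligned heatmap shown to study participants is the same.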

