Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models

Published 29 Jan 2024 in cs.LG and cs.AI (arXiv:2401.16521v1)

Abstract: This work evaluates interpretability methods for time-series deep learning. Sensitivity analysis assesses how changes to the input affect the output, making it a key component of interpretation. Among post-hoc interpretation methods such as back-propagation, perturbation, and approximation, my work investigates perturbation-based sensitivity analysis methods on modern Transformer models to benchmark their performance. Specifically, my work answers three research questions: 1) Do different sensitivity analysis (SA) methods yield comparable outputs and attribute importance rankings? 2) Using the same SA method, do different deep learning (DL) models impact the output of the sensitivity analysis? 3) How well do the results from SA methods align with the ground truth?
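The core idea the abstract describes can be sketched in a few lines: perturb one input feature at a time and record how much the model's output moves. This is a minimal illustration, not the paper's actual implementation; the toy model, the perturbation size `eps`, and the additive perturbation scheme are all assumptions made for the example.

```python
import numpy as np

def perturbation_sensitivity(predict, x, eps=0.05):
    """Feature-wise perturbation sensitivity for a time-series input.

    predict: callable mapping an array of shape (timesteps, features)
             to a scalar forecast (stands in for any trained DL model).
    x:       input window, shape (timesteps, features).
    Returns one score per feature: the absolute change in the output
    when that feature's values are shifted by eps standard deviations.
    """
    base = predict(x)
    scores = []
    for f in range(x.shape[1]):
        x_pert = x.copy()
        x_pert[:, f] += eps * x[:, f].std()  # perturb one feature only
        scores.append(abs(predict(x_pert) - base))
    return np.array(scores)

# Toy "model": the output depends strongly on feature 0, weakly on feature 1,
# so the sensitivity ranking should place feature 0 first.
rng = np.random.default_rng(0)
x = rng.normal(size=(24, 2))
model = lambda w: 10 * w[:, 0].mean() + 0.1 * w[:, 1].mean()
scores = perturbation_sensitivity(model, x)
```

Ranking features by `scores` gives the attribute importance ordering that the paper's first research question compares across SA methods.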

