Systematic Evaluation of Deep Learning Models for Log-based Failure Prediction

Published 13 Mar 2023 in cs.SE (arXiv:2303.07230v4)

Abstract: With the increasing complexity and scope of software systems, their dependability is crucial. The analysis of log data recorded during system execution can enable engineers to predict failures automatically at run time. Several Machine Learning (ML) techniques, including traditional ML and Deep Learning (DL), have been proposed to automate such tasks. However, current empirical studies are limited in terms of covering all main DL types -- Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), and transformer -- and in examining them on a wide range of diverse datasets. In this paper, we aim to address these issues by systematically investigating combinations of log-data embedding strategies and DL types for failure prediction. To that end, we propose a modular architecture that accommodates various configurations of embedding strategies and DL-based encoders. To further investigate how dataset characteristics such as dataset size and failure percentage affect model accuracy, we synthesised 360 datasets with varying characteristics for three distinct system behavioral models, based on a systematic and automated generation approach. Using the F1-score metric, our results show that the best overall performing configuration is a CNN-based encoder with Logkey2vec. We also identify specific dataset conditions, namely a dataset size above 350 or a failure percentage above 7.5%, under which this configuration achieves high failure-prediction accuracy.
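The best-performing configuration pairs a Logkey2vec embedding of log keys with a CNN-based encoder. As an illustrative sketch only, and not the authors' implementation, the NumPy code below shows the general shape of such a pipeline: a log-key embedding lookup table, 1D convolutions with max-over-time pooling in the style of Kim's sentence-classification CNN, a linear head for the binary failure decision, and the F1 score used for evaluation. All sizes, window widths, and the random (untrained) weights are hypothetical.

```python
# Hedged sketch of a Logkey2vec-style embedding + CNN encoder for
# log-based failure prediction. Hyperparameters are hypothetical and
# all weights are random (untrained), to illustrate structure only.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 50        # number of distinct log keys (hypothetical)
EMB = 16          # embedding dimension (hypothetical)
SEQ = 20          # log-sequence length (hypothetical)
FILTERS = 8       # convolution filters per window size
WINDOWS = (2, 3)  # convolution window sizes over the key sequence

# Logkey2vec-style lookup table: one vector per log key; randomly
# initialised here, learned jointly with the encoder in training.
embedding = rng.normal(size=(VOCAB, EMB))
kernels = {w: rng.normal(size=(FILTERS, w, EMB)) for w in WINDOWS}
w_out = rng.normal(size=FILTERS * len(WINDOWS))  # linear head

def encode(keys):
    """Embed a sequence of log-key ids, then apply 1D convolutions
    with ReLU and max-over-time pooling to get a fixed-size vector."""
    x = embedding[keys]                               # (SEQ, EMB)
    feats = []
    for w, kernel in kernels.items():
        conv = np.stack([
            np.tensordot(x[t:t + w], kernel, axes=([0, 1], [1, 2]))
            for t in range(SEQ - w + 1)
        ])                                            # (SEQ-w+1, FILTERS)
        feats.append(np.maximum(conv, 0).max(axis=0))  # ReLU + pooling
    return np.concatenate(feats)                      # (FILTERS * len(WINDOWS),)

def predict(keys):
    """Binary failure decision: 1 = failure predicted."""
    return int(encode(keys) @ w_out > 0)

def f1(y_true, y_pred):
    """F1 score = harmonic mean of precision and recall."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0
```

In a real configuration the embedding table, convolution kernels, and output weights would be trained end-to-end on labelled log sequences; the sketch only fixes the data flow that the paper's modular architecture varies (embedding strategy × DL-based encoder).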
