
Inroads into Autonomous Network Defence using Explained Reinforcement Learning

Published 15 Jun 2023 in cs.CR and cs.LG (arXiv:2306.09318v1)

Abstract: Computer network defence is a complicated task that has necessitated a high degree of human involvement. However, with recent advancements in machine learning, fully autonomous network defence is becoming increasingly plausible. This paper introduces an end-to-end methodology for studying attack strategies, designing defence agents and explaining their operation. First, using state diagrams, we visualise adversarial behaviour to gain insight about potential points of intervention and inform the design of our defensive models. We opt to use a set of deep reinforcement learning agents trained on different parts of the task and organised in a shallow hierarchy. Our evaluation shows that the resulting design achieves a substantial performance improvement compared to prior work. Finally, to better investigate the decision-making process of our agents, we complete our analysis with a feature ablation and importance study.
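The "shallow hierarchy" of specialised agents described in the abstract can be sketched at a very high level: a top-level controller routes each observation to one of several sub-policies, each trained on a different part of the defence task. The sketch below is a hypothetical illustration only; the paper's agents are learned deep RL policies, whereas here the specialists, the router, and all feature/action names (`scan_detected`, `restore_host`, etc.) are invented stand-ins.

```python
class SpecialistAgent:
    """Stand-in for a trained deep RL sub-policy (hypothetical; the paper's
    agents are learned networks, not lookup tables)."""

    def __init__(self, name, playbook):
        self.name = name
        self.playbook = playbook  # observed feature -> defensive action

    def act(self, observation):
        # Respond to the first salient feature present in the observation.
        for feature, action in self.playbook.items():
            if observation.get(feature):
                return action
        return "monitor"  # default when nothing suspicious is observed


class ShallowHierarchyDefender:
    """Top-level controller that dispatches each observation to exactly one
    specialist -- a minimal rendering of a 'shallow hierarchy' design."""

    def __init__(self, specialists, router):
        self.specialists = specialists
        self.router = router  # observation -> specialist name

    def act(self, observation):
        return self.specialists[self.router(observation)].act(observation)


# Hypothetical sub-tasks: one specialist per attacker behaviour class.
stealthy = SpecialistAgent("anti-stealthy", {"scan_detected": "deploy_decoy"})
aggressive = SpecialistAgent("anti-aggressive", {"exploit_detected": "restore_host"})


def router(obs):
    # A trivial hand-written selector; in the paper this role would be
    # filled by a trained component, not a rule.
    return "anti-aggressive" if obs.get("exploit_detected") else "anti-stealthy"


defender = ShallowHierarchyDefender(
    {"anti-stealthy": stealthy, "anti-aggressive": aggressive}, router
)

print(defender.act({"exploit_detected": True}))  # restore_host
print(defender.act({"scan_detected": True}))     # deploy_decoy
```

Splitting the task this way lets each sub-policy be trained and ablated independently, which is also what makes the per-agent feature importance analysis mentioned in the abstract tractable.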
