
HYDRA: Pruning Adversarially Robust Neural Networks

Published 24 Feb 2020 in cs.CV, cs.LG, and stat.ML | arXiv:2002.10509v3

Abstract: In safety-critical but computationally resource-constrained applications, deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size (often millions of parameters). While the research community has extensively explored the use of robust training and network pruning independently to address one of these challenges, only a few recent works have studied them jointly. However, these works inherit a heuristic pruning strategy that was developed for benign training, which performs poorly when integrated with robust training techniques, including adversarial training and verifiable robust training. To overcome this challenge, we propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune. We realize this insight by formulating the pruning objective as an empirical risk minimization problem which is solved efficiently using SGD. We demonstrate that our approach, titled HYDRA, achieves compressed networks with state-of-the-art benign and robust accuracy, simultaneously. We demonstrate the success of our approach across CIFAR-10, SVHN, and ImageNet dataset with four robust training techniques: iterative adversarial training, randomized smoothing, MixTrain, and CROWN-IBP. We also demonstrate the existence of highly robust sub-networks within non-robust networks. Our code and compressed networks are publicly available at \url{https://github.com/inspire-group/compactness-robustness}.

Citations (25)

Summary

  • The paper introduces the HYDRA approach, which integrates robust training objectives into the pruning process to enhance adversarial resilience.
  • Empirical results on CIFAR-10, SVHN, and ImageNet show that HYDRA achieves competitive benign and robust accuracy at high compression rates.
  • The method redefines connection importance in adversarial contexts, enabling efficient deployment of secure neural networks in resource-constrained environments.

An Expert Overview of HYDRA: Pruning Adversarially Robust Neural Networks

The research paper "HYDRA: Pruning Adversarially Robust Neural Networks" addresses a dual challenge in deep learning: achieving adversarial robustness while reducing the large size of neural networks. The tension is most acute in safety-critical applications with limited computational resources, where models must remain robust to adversarial attacks yet stay small enough to deploy.

Key Contributions and Methodology

This paper introduces HYDRA, an approach that integrates network pruning with robust training, a combination that has so far been insufficiently explored. Traditional pruning heuristics, developed for benign training, fall short in adversarial contexts. HYDRA tackles this by making the pruning step aware of the robust training objective, letting that objective guide the search for which connections to prune. Concretely, the pruning objective is formulated as an empirical risk minimization problem and solved efficiently with stochastic gradient descent (SGD).
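A minimal PyTorch-style sketch of this formulation follows, under the assumption that pruning is parameterized by per-connection importance scores optimized against the robust loss while the pretrained weights stay frozen; the names GetSubnetMask and MaskedLinear, the keep_ratio parameter, and the simplified score initialization are illustrative rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GetSubnetMask(torch.autograd.Function):
    """Binarize importance scores by keeping the top-scoring fraction of
    connections; gradients pass straight through to the scores."""
    @staticmethod
    def forward(ctx, scores, keep_ratio):
        flat = scores.flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        threshold = torch.topk(flat, k).values[-1]
        return (scores >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the mask gradient to the scores.
        return grad_output, None

class MaskedLinear(nn.Linear):
    """Linear layer whose pretrained weights are frozen; only the per-weight
    importance scores are trainable during the pruning phase."""
    def __init__(self, in_features, out_features, keep_ratio=0.1):
        super().__init__(in_features, out_features)
        self.keep_ratio = keep_ratio
        # Simplified initialization: scores start at the weight magnitudes.
        self.scores = nn.Parameter(self.weight.detach().abs().clone())
        self.weight.requires_grad_(False)

    def forward(self, x):
        mask = GetSubnetMask.apply(self.scores, self.keep_ratio)
        return F.linear(x, self.weight * mask, self.bias)
```

In a pipeline built around such a layer, the scores would be optimized with SGD against the robust training loss after robust pre-training, the resulting mask would then be fixed, and the surviving weights fine-tuned.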

Notably, HYDRA is empirically validated on CIFAR-10, SVHN, and ImageNet with four robust training techniques: iterative adversarial training, randomized smoothing, MixTrain, and CROWN-IBP. Across these settings, the compressed networks achieve state-of-the-art benign and robust accuracy simultaneously.
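To make the integration with iterative adversarial training concrete, the hedged sketch below shows a pruning-phase loop in which only the importance scores are updated and the loss is computed on adversarial examples; pgd_attack and prune_with_robust_objective are illustrative helpers, and the loop assumes score-parameterized layers such as the MaskedLinear sketch above.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD, used so the pruning loss reflects the robust objective."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def prune_with_robust_objective(model, loader, epochs=20, lr=0.1):
    """Pruning phase: weights stay frozen, only importance scores are trained,
    so the learned mask is shaped by the adversarial (robust) loss."""
    scores = [p for n, p in model.named_parameters() if n.endswith("scores")]
    opt = torch.optim.SGD(scores, lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = pgd_attack(model, x, y)
            loss = F.cross_entropy(model(x_adv), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Swapping the PGD-based cross-entropy loss for a smoothing or certified-training loss would correspond, loosely, to the other robust training techniques named above.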

Experimental Results and Findings

The empirical results demonstrate the efficacy of HYDRA in uncovering highly robust sub-networks within non-robust networks. A noteworthy result is its ability to maintain competitive performance at high compression rates. For instance, across various architectures and datasets, HYDRA shows that it is feasible to achieve robustness comparable to fully-trained large networks even after substantial pruning. This is a critical finding for deploying efficient and secure machine learning models in resource-constrained environments.

Implications and Future Directions

The findings carry both theoretical and practical implications. Theoretically, HYDRA prompts a re-evaluation of how connection importance should be determined under robust learning, arguing that pruning criteria should be coupled to the robust training objective rather than inherited from benign training. Practically, the approach facilitates deploying secure, efficient, and scalable neural networks in environments where adversarial safety and resource efficiency are both paramount.

Looking forward, HYDRA invites further investigation into the interplay between pruning and robustness, particularly its extension to other architectures and adversarial defense mechanisms. Additionally, combining HYDRA with other model optimization techniques, such as quantization and neural architecture search, could yield even more compact robust models.

In conclusion, the HYDRA framework represents a significant step toward unifying robust training with network pruning, showing that compressed models need not trade safety for efficiency. The publicly released code and compressed networks further enable the research community to build on these results and advance adversarially robust machine learning.
