Self-Supervised Contrastive Learning with Adversarial Perturbations for Defending Word Substitution-based Attacks

Published 15 Jul 2021 in cs.CL (arXiv:2107.07610v3)

Abstract: In this paper, we present an approach to improving the robustness of BERT language models against word substitution-based adversarial attacks by leveraging adversarial perturbations for self-supervised contrastive learning. We create a word-level adversarial attack that generates hard positives on-the-fly as adversarial examples during contrastive learning. In contrast to previous works, our method improves model robustness without using any labeled data. Experimental results show that our method improves the robustness of BERT against four different word substitution-based adversarial attacks, and that combining our method with adversarial training yields higher robustness than adversarial training alone. As our method improves the robustness of BERT purely with unlabeled data, it opens up the possibility of using large text datasets to train robust language models against word substitution-based adversarial attacks.
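The core idea in the abstract — treating an adversarially perturbed input as the positive view of its clean counterpart in a contrastive objective — can be sketched with an InfoNCE-style loss over a batch of embeddings. This is a minimal illustration, not the paper's implementation; the function name, the temperature value, and the use of NumPy on fixed embedding vectors are all assumptions made for clarity.

```python
import numpy as np

def contrastive_loss(clean, adv, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch).

    Each clean embedding's positive is its adversarially perturbed
    counterpart at the same batch index; every other adversarial
    embedding in the batch serves as a negative.
    """
    # L2-normalize so similarities are cosine similarities.
    clean = clean / np.linalg.norm(clean, axis=1, keepdims=True)
    adv = adv / np.linalg.norm(adv, axis=1, keepdims=True)
    sim = clean @ adv.T / temperature  # (N, N) similarity matrix
    # Log-softmax over each row; diagonal entries are the positive pairs.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy batch: "adversarial" views are small perturbations of clean ones,
# mimicking word substitutions that barely move the representation.
rng = np.random.default_rng(0)
clean = rng.normal(size=(4, 8))
adv = clean + 0.05 * rng.normal(size=(4, 8))
loss = contrastive_loss(clean, adv)
```

Minimizing this loss pulls each clean representation toward its adversarial view while pushing it away from other examples, which is the mechanism by which contrastive learning with adversarial positives encourages robustness without labels.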

Citations (9)
