
Minority Reports Defense: Defending Against Adversarial Patches

Published 28 Apr 2020 in cs.LG, cs.CR, cs.CV, and stat.ML | arXiv:2004.13799v1

Abstract: Deep learning image classification is vulnerable to adversarial attack, even if the attacker changes just a small patch of the image. We propose a defense against patch attacks based on partially occluding the image around each candidate patch location, so that a few occlusions each completely hide the patch. We demonstrate on CIFAR-10, Fashion MNIST, and MNIST that our defense provides certified security against patch attacks of a certain size.
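The occlusion idea in the abstract can be sketched in a few lines: slide an occlusion window across the image so that any patch of the stated size is fully hidden by at least one occlusion, classify each occluded copy, and only return a label when the views agree. This is a minimal illustrative sketch, not the paper's implementation; the classifier, the zero-fill masking, the abstain-on-disagreement rule, and the helper names (`occlusion_predictions`, `defended_predict`) are assumptions chosen for clarity.

```python
import numpy as np

def occlusion_predictions(image, classify, occ_size, stride):
    """Classify copies of `image`, each with one square region occluded.

    `classify` is any function mapping an image array to a class label.
    Choosing `stride <= occ_size - patch_size + 1` ensures every possible
    patch location is completely covered by at least one occlusion.
    (Hypothetical helper; a sketch of the defense, not the paper's code.)
    """
    h, w = image.shape[:2]
    preds = []
    for y in range(0, h - occ_size + 1, stride):
        for x in range(0, w - occ_size + 1, stride):
            occluded = image.copy()
            occluded[y:y + occ_size, x:x + occ_size] = 0.0  # zero-fill mask
            preds.append(classify(occluded))
    return preds

def defended_predict(image, classify, occ_size, stride):
    """Return the unanimous label if every occluded view agrees, else None
    (abstain). Intuition: some occlusion fully hides the patch, so a patch
    attack can sway only a minority of the votes, breaking unanimity."""
    preds = occlusion_predictions(image, classify, occ_size, stride)
    return preds[0] if len(set(preds)) == 1 else None
```

For example, a classifier that ignores any single occluded region yields a unanimous vote and a returned label, while one whose decision hinges on a region that some occlusion hides produces disagreement and an abstention.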

Citations (54)
