Out-of-distribution Detection with Boundary Aware Learning

Published 22 Dec 2021 in cs.CV (arXiv:2112.11648v3)

Abstract: There is an increasing need to determine whether inputs are out-of-distribution (OOD) in order to safely deploy machine learning models in open-world scenarios. Typical neural classifiers rest on the closed-world assumption, where the training data and the test data are drawn i.i.d. from the same distribution, and as a result give over-confident predictions even when faced with OOD inputs. To tackle this problem, previous studies either use real outliers for training or generate synthetic OOD data under strong assumptions, which are respectively costly and hard to generalize. In this paper, we propose boundary aware learning (BAL), a novel framework that learns the distribution of OOD features adaptively. The key idea of BAL is to generate OOD features progressively, from trivial to hard, with a generator, while a discriminator is trained to distinguish these synthetic OOD features from in-distribution (ID) features. Benefiting from the adversarial training scheme, the discriminator separates ID and OOD features well, allowing more robust OOD detection. The proposed BAL achieves state-of-the-art performance on classification benchmarks, reducing FPR95 by up to 13.9% compared with previous methods.
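The trivial-to-hard idea in the abstract can be illustrated with a minimal toy sketch. This is an assumed, simplified setup, not the paper's method: BAL trains a neural generator adversarially against a discriminator on deep classifier features, whereas here the "generator" is just a Gaussian sampler whose cluster moves progressively closer to the ID features, and the "discriminator" is a 2-D logistic regression retrained at each stage.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sample_cluster(cx, cy, n):
    # Gaussian cluster in a 2-D feature space (stand-in for deep features).
    return [(random.gauss(cx, 1.0), random.gauss(cy, 1.0)) for _ in range(n)]

def train(id_feats, ood_feats, epochs=300, lr=0.1):
    # Logistic-regression discriminator: label 1 = ID, label 0 = synthetic OOD.
    data = [(p, 1.0) for p in id_feats] + [(p, 0.0) for p in ood_feats]
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        g1 = g2 = gb = 0.0
        for (x1, x2), y in data:
            err = sigmoid(w1 * x1 + w2 * x2 + b) - y
            g1 += err * x1
            g2 += err * x2
            gb += err
        n = len(data)
        w1 -= lr * g1 / n
        w2 -= lr * g2 / n
        b -= lr * gb / n
    return w1, w2, b

# ID features: a tight cluster around the origin.
id_feats = sample_cluster(0.0, 0.0, 300)

# "Trivial to hard" curriculum: each stage samples synthetic OOD features
# closer to the ID cluster, tightening the learned boundary. (BAL does this
# with an adversarially trained generator; the curve of difficulty here is
# hand-picked for illustration.)
for d in (8.0, 5.0, 3.0):
    ood_feats = sample_cluster(d, 0.0, 300)
    w1, w2, b = train(id_feats, ood_feats)

def id_score(x1, x2):
    # High score means the discriminator treats the feature as in-distribution.
    return sigmoid(w1 * x1 + w2 * x2 + b)
```

After the final (hardest) stage, `id_score` acts as the OOD detector: ID features near the origin score close to 1, while features from a held-out OOD cluster score close to 0.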

Citations (6)


Authors (4)
