Mitigating the Impact of Labeling Errors on Training via Rockafellian Relaxation

Published 30 May 2024 in cs.LG (arXiv:2405.20531v2)

Abstract: Labeling errors in datasets are common, arising in a variety of contexts such as human labeling, noisy labeling, and weak labeling (e.g., image classification). Although neural networks (NNs) can tolerate modest amounts of these errors, their performance degrades substantially once error levels exceed a certain threshold. We propose a new loss-reweighting, architecture-independent methodology, the Rockafellian Relaxation Method (RRM), for neural network training. Experiments indicate that RRM can enhance neural network methods to achieve robust performance across classification tasks in computer vision and natural language processing (sentiment analysis). We find that RRM can mitigate the effects of dataset contamination stemming from (heavy) labeling error and/or adversarial perturbation, demonstrating effectiveness across a variety of data domains and machine learning tasks.
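The abstract does not spell out RRM's formulation, but it describes the general family it belongs to: loss reweighting, where per-sample losses are weighted during training so that suspected mislabeled examples contribute less to the gradient. As an illustration only, and not the paper's Rockafellian formulation, a minimal sketch of one loss-reweighted training step for softmax regression might look like:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def reweighted_step(W, X, y, lr=0.1, temp=1.0):
    """One gradient step of loss-reweighted softmax regression.

    Samples with unusually high loss (suspected label errors) are
    downweighted via exp(-loss / temp) -- a generic, hypothetical
    reweighting rule, NOT the Rockafellian relaxation from the paper.
    """
    n = X.shape[0]
    probs = softmax(X @ W)                             # (n, k) class probabilities
    losses = -np.log(probs[np.arange(n), y] + 1e-12)   # per-sample cross-entropy
    weights = np.exp(-losses / temp)                   # downweight high-loss samples
    weights /= weights.sum()                           # normalize to sum to 1
    grad_logits = probs.copy()
    grad_logits[np.arange(n), y] -= 1.0                # d(CE)/d(logits) per sample
    grad_W = X.T @ (weights[:, None] * grad_logits)    # weighted gradient
    return W - lr * grad_W, weights
```

For example, with a classifier that already separates two clusters and one deliberately flipped label, the flipped sample incurs a high loss and therefore receives a smaller weight than the clean samples, shrinking its influence on the update.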
