On the Consistency of Top-k Surrogate Losses

Published 30 Jan 2019 in cs.LG and stat.ML (arXiv:1901.11141v2)

Abstract: The top-$k$ error is often used to evaluate performance on challenging classification tasks in computer vision, as it is designed to compensate for ambiguity in ground-truth labels. This practical success motivates our theoretical analysis of consistent top-$k$ classification. Surprisingly, it is not rigorously understood when taking the $k$-argmax of one vector is guaranteed to return the $k$-argmax of another, even though this is crucial for describing Bayes optimality; we address both questions. We then define top-$k$ calibration and show that it is necessary and sufficient for consistency. Building on this calibration analysis, we propose a class of top-$k$ calibrated Bregman divergence surrogates. We further show that previously proposed hinge-like top-$k$ surrogate losses are not top-$k$ calibrated, and our analysis suggests that no convex hinge loss is top-$k$ calibrated. In contrast, we propose a new hinge loss that is consistent. We also show that our hinge loss remains consistent under a restriction to linear functions, while cross entropy does not. Finally, we exhibit a differentiable, convex loss function that is top-$k$ calibrated for specific values of $k$.
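
For concreteness, here is a minimal sketch (not from the paper) of how the top-$k$ error described in the abstract is commonly computed. The function name `top_k_error` and the arrays `scores` and `labels` are illustrative assumptions, not the authors' code:

```python
import numpy as np

def top_k_error(scores, labels, k):
    """Fraction of examples whose true label is NOT among the k
    highest-scoring classes (the standard top-k error)."""
    # Indices of the k largest scores in each row; the order within
    # the top k does not matter for the error.
    topk = np.argpartition(scores, -k, axis=1)[:, -k:]
    hits = (topk == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

# Example: the top-1 prediction is wrong, but the true class is in the top 2,
# illustrating how top-k evaluation compensates for label ambiguity.
scores = np.array([[0.1, 0.5, 0.4]])
labels = np.array([2])
print(top_k_error(scores, labels, k=1))  # 1.0
print(top_k_error(scores, labels, k=2))  # 0.0
```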

Citations (42)

Authors (2)
