
Deep Bregman Divergence for Contrastive Learning of Visual Representations

Published 15 Sep 2021 in cs.CV, cs.AI, and cs.LG | arXiv:2109.07455v2

Abstract: Deep Bregman divergence measures the divergence of data points using neural networks, going beyond Euclidean distance and capturing divergence over distributions. In this paper, we propose deep Bregman divergences for contrastive learning of visual representations, where we aim to enhance the contrastive loss used in self-supervised learning by training additional networks based on functional Bregman divergence. In contrast to conventional contrastive learning methods, which are solely based on divergences between single points, our framework can capture the divergence between distributions, which improves the quality of the learned representation. We show that the combination of the conventional contrastive loss and our proposed divergence loss outperforms the baseline and most previous methods for self-supervised and semi-supervised learning on multiple classification and object detection tasks and datasets. Moreover, the learned representations generalize well when transferred to other datasets and tasks. The source code and our models are available in the supplementary material and will be released with the paper.
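The abstract describes pairing a standard contrastive objective with an additional loss derived from a Bregman divergence whose generating function is learned by a neural network. Below is a minimal sketch of that idea, assuming PyTorch; it combines an NT-Xent contrastive term with a pointwise Bregman divergence. All names (PhiNet, bregman_divergence, nt_xent, combined_loss, lambda_breg) are illustrative assumptions, and the paper's functional (distribution-level) Bregman divergence is simplified here to a pointwise divergence between embeddings, so this is not the authors' implementation.

```python
# Hedged sketch (assumed, not the authors' code): combine an NT-Xent
# contrastive loss with a pointwise Bregman divergence whose generating
# function phi is approximated by a small neural network.
import torch
import torch.nn.functional as F


class PhiNet(torch.nn.Module):
    """Scalar-valued network standing in for the convex generator phi."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden),
            torch.nn.Softplus(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)


def bregman_divergence(phi: PhiNet, p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    # D_phi(p, q) = phi(p) - phi(q) - <grad phi(q), p - q>
    q = q.detach().requires_grad_(True)
    phi_q = phi(q)
    grad_q = torch.autograd.grad(phi_q.sum(), q, create_graph=True)[0]
    return phi(p) - phi_q - ((p - q) * grad_q).sum(dim=-1)


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # Standard normalized-temperature cross-entropy over positive pairs.
    z = F.normalize(torch.cat([z1, z2]), dim=-1)
    sim = z @ z.t() / tau
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, -1e9)  # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(sim.device)
    return F.cross_entropy(sim, targets)


def combined_loss(phi: PhiNet, z1: torch.Tensor, z2: torch.Tensor,
                  lambda_breg: float = 0.1) -> torch.Tensor:
    # Contrastive term plus a (hypothetical) weight lambda_breg on the
    # Bregman term pulling embeddings of the two views together.
    return nt_xent(z1, z2) + lambda_breg * bregman_divergence(phi, z1, z2).mean()
```

Here z1 and z2 would be projected embeddings of two augmented views of the same images. Note that a plain Softplus MLP is not guaranteed convex in its input (an input-convex network would be), which is another reason this is only a sketch of the deep Bregman idea rather than a faithful reproduction.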

Citations (15)
