
Self-Supervised Learning by Estimating Twin Class Distributions

Published 14 Oct 2021 in cs.CV and cs.LG | (2110.07402v4)

Abstract: We present TWIST, a simple and theoretically explainable self-supervised representation learning method that classifies large-scale unlabeled datasets in an end-to-end way. We employ a siamese network terminated by a softmax operation to produce twin class distributions for two augmented views of an image. Without supervision, we enforce the class distributions of the different augmentations to be consistent. However, simply minimizing the divergence between augmentations causes collapsed solutions, i.e., the network outputs the same class probability distribution for all images, retaining no information about the input. To solve this problem, we propose to maximize the mutual information between the input and the class predictions. Specifically, we minimize the entropy of the distribution for each sample to make the class prediction for each sample assertive, and maximize the entropy of the mean distribution to make the predictions of different samples diverse. In this way, TWIST naturally avoids collapsed solutions without specific designs such as an asymmetric network, a stop-gradient operation, or a momentum encoder. As a result, TWIST outperforms state-of-the-art methods on a wide range of tasks. In particular, TWIST performs surprisingly well on semi-supervised learning, achieving 61.2% top-1 accuracy with 1% of ImageNet labels using a ResNet-50 backbone, surpassing the previous best result by an absolute 6.2%. Code and pre-trained models are available at: https://github.com/bytedance/TWIST
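The abstract describes three interacting terms: a consistency term between the twin distributions, a sharpness term (minimize each sample's entropy), and a diversity term (maximize the entropy of the mean distribution). A minimal NumPy sketch of such a loss is below; the function names, the symmetric cross-entropy form of the consistency term, and the equal weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy(p, axis=-1):
    # Shannon entropy in nats, with a small epsilon for stability
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

def twist_loss(p1, p2):
    """TWIST-style loss sketch for twin class distributions p1, p2
    of shape (batch, num_classes). Term weights are illustrative."""
    # consistency: symmetric cross-entropy between the twin distributions
    consistency = 0.5 * (-(p1 * np.log(p2 + 1e-12)).sum(-1)
                         - (p2 * np.log(p1 + 1e-12)).sum(-1)).mean()
    # sharpness: mean per-sample entropy (minimized -> assertive predictions)
    sharpness = 0.5 * (entropy(p1).mean() + entropy(p2).mean())
    # diversity: entropy of the batch-mean distribution
    # (maximized, hence subtracted -> diverse predictions across samples)
    diversity = 0.5 * (entropy(p1.mean(axis=0)) + entropy(p2.mean(axis=0)))
    return consistency + sharpness - diversity
```

A quick sanity check of the collapse-avoidance argument: a batch of assertive, diverse predictions (roughly one-hot, different classes per sample) scores lower than a collapsed batch where every sample gets the same distribution, because the diversity term only rewards the former.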

Citations (17)
