
Big Transfer (BiT): General Visual Representation Learning

Published 24 Dec 2019 in cs.CV and cs.LG | (1912.11370v3)

Abstract: Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.

Citations (1,124)

Summary

  • The paper demonstrates that large-scale supervised pre-training with BiT significantly enhances performance on visual tasks even in low-data regimes.
  • It introduces the BiT-HyperRule fine-tuning strategy, which streamlines hyperparameter tuning while adapting models across over 20 diverse tasks.
  • Empirical results show state-of-the-art accuracy, including 87.5% top-1 on ILSVRC-2012 and impressive few-shot performance on benchmarks like CIFAR-10.


The paper "Big Transfer (BiT): General Visual Representation Learning" presents an insightful study into the efficacy of pre-trained visual representations across a diverse array of downstream tasks. Conducted by a team at Google Research, the work revisits the paradigm of pre-training on large supervised datasets followed by fine-tuning on target tasks, focusing on scalability and generalizability.

Core Contributions

The paper identifies key contributions in both methodologies and empirical results:

  1. Scalability of Pre-Training:
  • BiT models are pre-trained on three datasets of increasing scale: ILSVRC-2012 (1.28M images), ImageNet-21k (14M images), and JFT-300M (300M images).
    • The largest model, BiT-L, pre-trained on JFT-300M, achieves robust performance across a comprehensive set of 20+ downstream tasks, even in low-data regimes.
  2. Performance Evaluation:
  • BiT achieves strong numerical results: 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19-task VTAB benchmark.
    • Particularly noteworthy is BiT's performance in few-shot scenarios: 76.8% accuracy on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class.
  3. Transfer Protocol:
    • The paper proposes a simple fine-tuning heuristic, BiT-HyperRule, which effectively generalizes across tasks without the need for extensive hyperparameter tuning.
    • This approach significantly reduces the computational cost for practitioners, making the pre-trained models more accessible for various applications.

Detailed Methodology

The authors stress the importance of a scalable pre-training approach. They explore how the combination of larger datasets and larger architectures leads to better transfer learning outcomes, elucidating the relationship between computational budget, dataset size, and model architecture.

Upstream Pre-Training

The pre-training phase employs ResNet architectures with Group Normalization (GN) and Weight Standardization (WS) as substitutes for the widely used Batch Normalization (BN). The rationale is that GN/WS perform well when training with large batches distributed across many devices and transfer better across diverse tasks. This choice sidesteps common pitfalls of BN, notably the need for inter-device synchronization and the train/test mismatch introduced by running batch statistics, both of which can be detrimental to transfer learning.
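As a rough illustration (not the authors' code), the two operations can be sketched in NumPy. `standardize_weights` and `group_norm` are our own names; in a real network these would be applied inside every convolutional layer of the ResNet, with GN replacing the BN layers:

```python
import numpy as np

def standardize_weights(w, eps=1e-5):
    """Weight Standardization: zero-mean, unit-variance per output filter.

    w has shape (out_channels, in_channels, kH, kW); statistics are taken
    over every axis except the output-channel axis, so each filter is
    standardized independently before the convolution is applied.
    """
    mean = w.mean(axis=(1, 2, 3), keepdims=True)
    var = w.var(axis=(1, 2, 3), keepdims=True)
    return (w - mean) / np.sqrt(var + eps)

def group_norm(x, num_groups, eps=1e-5):
    """Group Normalization over an NCHW activation tensor.

    Channels are split into `num_groups` groups and each group is
    normalized with its own mean/variance, independently per example.
    No batch statistics are involved, so behavior is identical at
    train and test time -- the property BiT relies on for transfer.
    """
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)
```

Note that neither operation depends on the batch dimension's statistics, which is precisely why the combination avoids BN's cross-device synchronization.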

Downstream Fine-Tuning

BiT-HyperRule simplifies downstream fine-tuning by leveraging three key hyperparameters adjusted per task: training schedule length, resolution, and MixUp regularization. This heuristic enables efficient and effective adaptation of pre-trained models to downstream tasks of varying sizes and complexities.
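A minimal sketch of how such a rule might look (the function name and return format are our own; the thresholds follow the paper's description of small/medium/large regimes and should be treated as approximate, not as the exact published schedule):

```python
def bit_hyperrule(num_examples, shorter_side_px):
    """Pick fine-tuning settings from dataset size and image resolution.

    A sketch of BiT-HyperRule's task-dependent choices: schedule length
    and MixUp grow with dataset size, while the train resolution depends
    on how large the input images are.
    """
    # Schedule length and MixUp regularization scale with dataset size.
    if num_examples < 20_000:          # "small" regime
        steps, use_mixup = 500, False
    elif num_examples < 500_000:       # "medium" regime
        steps, use_mixup = 10_000, True
    else:                              # "large" regime
        steps, use_mixup = 20_000, True
    # Resolution: small images are upscaled less aggressively.
    if shorter_side_px <= 96:
        resize, crop = 160, 128
    else:
        resize, crop = 448, 384
    return {"steps": steps, "mixup": use_mixup,
            "resize": resize, "crop": crop}
```

The point of the heuristic is that these three decisions replace a per-task hyperparameter search, so a practitioner can fine-tune BiT on a new dataset with a single training run.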

Empirical Analysis

The rigorous experimental setup provides a comprehensive evaluation across various benchmarks:

  • Standard Vision Benchmarks: BiT-L attains state-of-the-art results on ILSVRC-2012, CIFAR-10/100, Oxford-IIIT Pet, and Oxford Flowers.
  • Few-Shot Learning: BiT-L performs remarkably well with extremely limited labeled data, outperforming existing semi-supervised approaches.
  • VTAB-1k Benchmark: BiT-L excels in specialized tasks involving natural, structured, and specialized imaging.

Additionally, robustness tests on ObjectNet and out-of-context images show that BiT's representations remain accurate under the distribution shifts typical of real-world scenarios.

Implications and Future Directions

The implications of BiT models are multifold:

  • Practical Applications: BiT's robust pre-trained models necessitate minimal tuning, facilitating their application in diverse visual tasks without substantial computational overhead.
  • Versatility: The ability of BiT to generalize across datasets with varying data regimes underscores its versatility, rendering it suitable for both high- and low-resource settings.
  • Theoretical Insights: The study underscores the value of scale—both in terms of datasets and model architectures—in achieving superior transfer learning performance.

Future work could investigate:

  1. Further Scaling: Exploring larger datasets and more sophisticated architectures could unlock additional performance gains.
  2. Fine-Tuning Heuristics: Refining and potentially automating optimal fine-tuning strategies per task could streamline the transfer learning process.
  3. Broader Applicability: Extending BiT's methodologies to other domains beyond visual representation, such as language and multimodal tasks, could yield similarly promising results.

In summary, the Big Transfer (BiT) approach offers a methodologically sound and empirically validated strategy for harnessing large-scale pre-training to achieve exceptional performance across a wide spectrum of visual tasks, presenting a valuable asset for the research community and practical applications alike.
