
Towards Data-Efficient Detection Transformers

Published 17 Mar 2022 in cs.CV and cs.AI | arXiv:2203.09507v3

Abstract: Detection Transformers have achieved competitive performance on the sample-rich COCO dataset. However, we show most of them suffer from significant performance drops on small-size datasets, like Cityscapes. In other words, the detection transformers are generally data-hungry. To tackle this problem, we empirically analyze the factors that affect data efficiency, through a step-by-step transition from a data-efficient RCNN variant to the representative DETR. The empirical results suggest that sparse feature sampling from local image areas holds the key. Based on this observation, we alleviate the data-hungry issue of existing detection transformers by simply alternating how key and value sequences are constructed in the cross-attention layer, with minimum modifications to the original models. Besides, we introduce a simple yet effective label augmentation method to provide richer supervision and improve data efficiency. Experiments show that our method can be readily applied to different detection transformers and improve their performance on both small-size and sample-rich datasets. Code will be made publicly available at \url{https://github.com/encounter1997/DE-DETRs}.

Citations (53)

Summary

This paper addresses the critical issue of data efficiency in detection transformers, which have been lauded for their effectiveness on large datasets like COCO but suffer significant performance drops on smaller datasets such as Cityscapes. The authors empirically identify the key factors behind this data inefficiency through a step-by-step transition from Sparse R-CNN, known for its data efficiency, to the representative DETR model. Their findings suggest that sparse feature sampling from local image areas is crucial to mitigating the data-hungry nature of detection transformers.

The authors propose a straightforward yet impactful modification to existing detection transformers by altering how key and value sequences are constructed within the cross-attention layer of the transformer decoder. This is achieved with minor changes to the original models, alongside the introduction of a novel label augmentation method designed to provide richer supervision and thus improve data efficiency.
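The key/value modification described above can be illustrated with a minimal NumPy sketch. This is an assumption for illustration only, not the authors' implementation: the idea shown is that instead of flattening the entire feature map into the cross-attention key/value sequence (as in DETR), only the features inside a query's predicted box are kept. The function name `box_to_tokens` and the toy shapes are hypothetical.

```python
import numpy as np

def box_to_tokens(feat, box):
    """Crop a box-local window from the feature map and flatten it into a
    (num_tokens, channels) key/value sequence for cross-attention.
    feat: (C, H, W) feature map; box: (x1, y1, x2, y2) in feature-map coords."""
    x1, y1, x2, y2 = box
    local = feat[:, y1:y2, x1:x2]              # sparse, box-local features
    return local.reshape(feat.shape[0], -1).T  # (num_tokens, C)

# Toy example: a 256-channel 32x32 feature map and one predicted box.
feat = np.random.randn(256, 32, 32)
dense_tokens = feat.reshape(256, -1).T               # DETR-style: all 1024 locations
sparse_tokens = box_to_tokens(feat, (4, 4, 12, 12))  # only the 64 box-local locations
print(dense_tokens.shape, sparse_tokens.shape)       # (1024, 256) (64, 256)
```

The contrast in sequence length (1024 vs. 64 tokens here) is what makes the locality prior explicit: the decoder no longer has to learn, from limited data, to ignore most of the image.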

Key Findings and Contributions

  1. Data Efficiency Problem Identification: The authors document a stark performance contrast between detection transformers and CNN-based object detectors like Faster R-CNN on small datasets. They highlight that existing detection transformers are generally data-hungry, a serious drawback given the resources required to curate large datasets.

  2. Empirical Analysis through Model Transition: By incrementally transforming Sparse R-CNN into DETR, the study isolates the factors that affect data efficiency:

    • Sparse feature sampling from local image regions.
    • The utilization of multi-scale features made feasible through sparse sampling.
    • Making predictions relative to initial spatial priors, which avoids having to learn locality entirely from data.
  3. Proposed Solutions:

    • Sparse Feature Sampling: The paper presents a method that samples features based on predicted bounding boxes and integrates these within the decoder, allowing for enriched data context while maintaining minimal model alterations.
    • Multi-scale Feature Incorporation: By sampling from multi-scale features, the modified detection transformers leverage additional context without excessive computational cost.
    • Label Augmentation Strategy: The authors enhance supervision by repeating positive labels, thereby enriching the training signal for detection transformers.
  4. Experimental Validation: Extensive experiments demonstrate that the proposed modifications significantly enhance the performance of detection transformers on small datasets and maintain, if not improve, performance on larger datasets like COCO. Specifically, the proposed methods achieve substantial performance gains on the Cityscapes dataset while also showing improved efficiency on sub-sampled COCO datasets.
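The label augmentation strategy — repeating positive (ground-truth) labels so that bipartite matching can assign several queries to the same object — can be sketched as follows. The helper name `repeat_positive_labels` and the `repeats` hyperparameter are illustrative assumptions, not the paper's exact formulation.

```python
def repeat_positive_labels(gt_labels, gt_boxes, repeats=2):
    """Duplicate each ground-truth annotation `repeats` times so that the
    one-to-one bipartite matching can assign multiple queries to the same
    object, enriching the supervision signal per image."""
    aug_labels = [lab for lab in gt_labels for _ in range(repeats)]
    aug_boxes = [box for box in gt_boxes for _ in range(repeats)]
    return aug_labels, aug_boxes

# Two objects (classes 3 and 7) become four positive targets.
labels, boxes = repeat_positive_labels(
    [3, 7],
    [(0.1, 0.2, 0.4, 0.5), (0.5, 0.5, 0.9, 0.9)],
)
print(labels)      # [3, 3, 7, 7]
print(len(boxes))  # 4
```

Because detection transformers supervise only the matched queries, small datasets yield few positive training signals per image; repeating the targets is a cheap way to densify that supervision without changing the model.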

Implications and Future Directions

The implications of this research are manifold. Practically, the ability to reduce the data demands of detection transformers expands their usability in real-world applications where data is scarce or costly to annotate. Theoretically, this work furthers our understanding of transformer architectures in the vision domain, providing insights into the integration of inductive biases commonly used in CNNs into transformer-based models.

Looking forward, the exploration of data-efficient architectures holds promise for transforming diverse applications across AI. Future studies could explore the extension of these principles to other vision tasks, such as segmentation or 3D object detection, potentially leading to a broader paradigm shift in how transformers are designed and trained. Additionally, these findings may spark interest in devising new architecture designs that inherently account for data efficiency without relying on extensive pre-training.
