
Boosting Few-Shot Learning via Attentive Feature Regularization

Published 23 Mar 2024 in cs.CV (arXiv:2403.17025v1)

Abstract: Few-shot learning (FSL) based on manifold regularization aims to improve the recognition of novel objects from limited training samples by mixing two samples from different categories with a blending factor. However, this mixing operation weakens the feature representation: the interpolation is purely linear, and the importance of specific channels is overlooked. To address these issues, this paper proposes attentive feature regularization (AFR), which improves feature representativeness and discriminability. In our approach, we first compute the relations between the semantic labels of different categories to select the related features used for regularization. We then design two attention-based calculations, one at the instance level and one at the channel level. These calculations let the regularization procedure focus on two crucial aspects: feature complementarity, via adaptive interpolation across related categories, and the emphasis of specific feature channels. Finally, we combine these regularization strategies to significantly improve classifier performance. Empirical studies on several popular FSL benchmarks demonstrate the effectiveness of AFR, which improves the recognition accuracy of novel categories without retraining any feature extractor, especially in the 1-shot setting. Furthermore, AFR can be seamlessly integrated into other FSL methods to improve their classification performance.
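The two attention-based calculations the abstract describes can be sketched as follows. This is a hypothetical reconstruction, not the authors' released code: the function name `attentive_feature_regularization`, the cosine-similarity instance weights, and the element-wise channel weighting are all illustrative assumptions standing in for the paper's actual formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_feature_regularization(query, related, temperature=0.1):
    """Hypothetical sketch of AFR-style regularization.

    query:   (d,)   feature of a novel-category sample
    related: (n, d) features drawn from semantically related categories
    Returns a regularized feature of shape (d,).
    """
    # Instance-level attention: adaptive (non-uniform) interpolation
    # weights from cosine similarity between query and related features,
    # replacing a fixed mixup blending factor.
    q = query / np.linalg.norm(query)
    r = related / np.linalg.norm(related, axis=1, keepdims=True)
    inst_w = softmax(r @ q / temperature)       # (n,) attention over instances
    mixed = inst_w @ related                    # (d,) adaptive blend

    # Channel-level attention: emphasize channels where the query and the
    # blended related features agree (squeeze-and-excitation flavour).
    chan_w = softmax(query * mixed)             # (d,) attention over channels
    chan_w = chan_w * chan_w.size               # rescale so mean weight is ~1
    return 0.5 * (query + mixed) * chan_w

rng = np.random.default_rng(0)
out = attentive_feature_regularization(rng.normal(size=64),
                                       rng.normal(size=(5, 64)))
print(out.shape)
```

Because the regularization acts only on extracted features, a sketch like this slots in front of any frozen backbone's classifier head, which is consistent with the abstract's claim that no feature extractor needs retraining.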
