2023 Low-Power Computer Vision Challenge (LPCVC) Summary

Published 11 Mar 2024 in cs.CV (arXiv:2403.07153v1)

Abstract: This article describes the 2023 IEEE Low-Power Computer Vision Challenge (LPCVC). Since 2015, LPCVC has been an international competition devoted to tackling the challenge of computer vision (CV) on edge devices. Most CV researchers focus on improving accuracy at the expense of ever-growing model sizes. LPCVC balances accuracy with resource requirements: winners must achieve high accuracy with short execution time when their CV solutions run on an embedded device, such as a Raspberry Pi or NVIDIA Jetson Nano. The vision problem for the 2023 LPCVC is segmentation of images acquired by Unmanned Aerial Vehicles (UAVs, also called drones) after disasters. The 2023 LPCVC attracted 60 international teams that submitted 676 solutions during the one-month submission window. This article explains the setup of the competition and highlights the winners' methods that improve accuracy and shorten execution time.
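The competition jointly rewards accuracy and speed rather than accuracy alone. The exact 2023 scoring formula is not stated in this abstract; a minimal sketch of such a metric, assuming mean intersection-over-union (mIoU) as the segmentation accuracy measure and a score that divides accuracy by on-device latency, might look like this (`mean_iou` and `lpcvc_style_score` are illustrative names, not the official implementation):

```python
# Illustrative accuracy-vs-latency scoring in the spirit of LPCVC;
# the official 2023 metric is not given in this abstract.
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union over classes present in either mask."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both prediction and ground truth
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

def lpcvc_style_score(accuracy: float, latency_s: float) -> float:
    """Higher accuracy and lower measured latency both raise the score."""
    return accuracy / latency_s
```

Under a metric of this shape, a team can win either by raising segmentation accuracy or by cutting inference time on the embedded device, which is why the winning methods described in the article pursue both.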
