Enhancing Visual Place Recognition via Fast and Slow Adaptive Biasing in Event Cameras

Published 25 Mar 2024 in cs.RO and cs.CV | arXiv:2403.16425v2

Abstract: Event cameras are increasingly popular in robotics due to beneficial features such as low latency, energy efficiency, and high dynamic range. Nevertheless, their downstream task performance is greatly influenced by the optimization of bias parameters. These parameters, for instance, regulate the necessary change in light intensity to trigger an event, which in turn depends on factors such as the environment lighting and camera motion. This paper introduces feedback control algorithms that automatically tune the bias parameters through two interacting methods: 1) an immediate, on-the-fly "fast" adaptation of the refractory period, which sets the minimum interval between consecutive events, and 2) if the event rate exceeds the specified bounds even after changing the refractory period repeatedly, the controller adapts the pixel bandwidth and event thresholds, which stabilizes after a short period of noise events across all pixels ("slow" adaptation). Our evaluation focuses on the visual place recognition task, where incoming query images are compared to a given reference database. We conducted comprehensive evaluations of our algorithms' adaptive feedback control in real time. To do so, we collected the QCR-Fast-and-Slow dataset, which contains DAVIS346 event camera streams from 366 repeated traversals of a Scout Mini robot navigating through a 100-meter-long indoor lab setting (totaling over 35 km distance traveled) in varying brightness conditions with ground-truth location information. Our proposed feedback controllers result in superior performance when compared to the standard bias settings and prior feedback control methods. Our findings also detail the impact of bias adjustments on task performance and feature ablation studies on the fast and slow adaptation mechanisms.

Summary

  • The paper introduces a dual-timescale adaptive biasing method that uses interacting fast and slow feedback controllers to adjust event camera bias parameters for improved visual place recognition (VPR).
  • The fast controller modulates the refractory period in real time to keep the event rate within target bounds, capturing informative features without flooding downstream processing.
  • Experimental validation on a mobile robot under variable lighting shows significant Recall@1 (R@1) improvements over static bias settings and prior adaptive control methods.

Adaptive Bias Control in Event Cameras for Visual Place Recognition

The paper "Enhancing Visual Place Recognition via Fast and Slow Adaptive Biasing in Event Cameras" presents a novel feedback control methodology for dynamically adjusting the bias parameters in event cameras. Event cameras are gaining traction in robotics due to their unique properties, such as low latency, energy efficiency, and a high dynamic range. However, their performance in tasks like Visual Place Recognition (VPR) largely depends on the optimization of their bias parameters. These parameters govern event generation, making their adaptive control crucial for reliable performance across varying environmental conditions.

Methodological Overview

The authors propose a control strategy that involves two interconnected feedback mechanisms: a "fast" adaptation that modifies the refractory period and a "slow" adaptation that adjusts pixel bandwidth and event thresholds.

  1. Fast Adaptation: The fast controller keeps the event rate within a predefined target range by adjusting the refractory period in real time, i.e., the minimum interval between successive events at the same pixel. Keeping the rate within these bounds preserves informative event-based features without overwhelming the system with excessive data.
  2. Slow Adaptation: If the event rate remains outside the target range even after the refractory period has been driven to its limit, the slow controller adjusts the pixel bandwidth and the event thresholds. These adjustments are made conservatively because changing them triggers a brief burst of noise events across all pixels.

Together, these feedback mechanisms allow the event camera to adapt efficiently to changes in lighting and motion, keeping event generation tuned so the data stream stays clear and relevant for VPR.
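
The paper describes these controllers at the system level rather than as code; the following is a minimal Python sketch of the two-timescale loop under stated assumptions. The `camera` interface, rate bounds, step factors, and `PATIENCE` counter are all illustrative placeholders, not the authors' implementation.

```python
def control_step(camera, event_rate, state):
    """One feedback iteration on the measured event rate (events/s).

    Sketch only: `camera` is a hypothetical wrapper exposing bias
    setters; all constants are assumed placeholder values.
    """
    RATE_LOW, RATE_HIGH = 1e5, 5e6     # target event-rate band, assumed
    REFR_MIN, REFR_MAX = 10e-6, 10e-3  # refractory-period limits (s), assumed
    PATIENCE = 5                       # fast-control failures before slow control

    if RATE_LOW <= event_rate <= RATE_HIGH:
        state["out_of_bounds"] = 0
        return
    # Fast adaptation: lengthen the refractory period to suppress events
    # when the rate is too high; shorten it when the rate is too low.
    factor = 2.0 if event_rate > RATE_HIGH else 0.5
    refr = min(max(camera.refractory_period * factor, REFR_MIN), REFR_MAX)
    camera.set_refractory_period(refr)
    # Slow adaptation: if the refractory period is pinned at a limit and the
    # rate stays out of bounds for several consecutive steps, also adjust the
    # pixel bandwidth and event thresholds. This is done sparingly because
    # changing these biases causes a brief burst of noise events.
    if refr in (REFR_MIN, REFR_MAX):
        state["out_of_bounds"] += 1
        if state["out_of_bounds"] >= PATIENCE:
            direction = 1 if event_rate > RATE_HIGH else -1
            camera.step_event_thresholds(direction)   # higher thresholds -> fewer events
            camera.step_pixel_bandwidth(-direction)   # lower bandwidth when rate too high
            state["out_of_bounds"] = 0
```

The `state` dictionary would start as `{"out_of_bounds": 0}`; its counter mimics the paper's condition that the slow controller only intervenes after repeated refractory-period changes fail to bring the event rate into bounds.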

Experimental Validation

The effectiveness of the proposed method was evaluated on a custom dataset recorded with a DAVIS346 event camera mounted on a Scout Mini mobile robot as it navigated a roughly 100-meter indoor lab route under varying lighting conditions. The resulting QCR-Fast-and-Slow dataset comprises 366 repeated traversals, over 35 km of travel in total, with ground-truth location information, making it well suited to evaluating event-based VPR systems.

The paper benchmarks the proposed controllers against baselines that include static bias settings and adaptive techniques from prior work. The fast-and-slow method yields a noticeable improvement in Recall@1 (R@1) across challenging test scenarios, particularly under substantial appearance changes such as high-to-low brightness transitions.
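
For context, R@1 in VPR is the fraction of queries whose single best-matching reference image lies within a ground-truth tolerance of the query's true location. Below is a minimal sketch, assuming precomputed image descriptors and one-dimensional ground-truth positions along the route; the 5 m tolerance is a placeholder, not the paper's value.

```python
import numpy as np

def recall_at_1(query_desc, ref_desc, query_pos, ref_pos, tol_m=5.0):
    """R@1: fraction of queries whose nearest reference descriptor
    corresponds to a place within `tol_m` meters of the true location.

    query_desc: (Q, D) array, ref_desc: (R, D) array,
    query_pos: (Q,) and ref_pos: (R,) positions along the route.
    """
    # Pairwise Euclidean distances between query and reference descriptors.
    dists = np.linalg.norm(query_desc[:, None, :] - ref_desc[None, :, :], axis=2)
    best = dists.argmin(axis=1)                           # top-1 match per query
    correct = np.abs(ref_pos[best] - query_pos) <= tol_m  # within tolerance
    return float(correct.mean())
```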

Implications and Future Directions

This research has significant implications for autonomous systems that rely on event cameras. By automating bias tuning, such cameras can maintain strong task performance without manual intervention, a crucial capability for systems operating in dynamic environments.

Future work could explore integrating this adaptive control mechanism with Spiking Neural Networks (SNNs) to leverage the natural alignment between event data streams and neuromorphic computing paradigms. Additionally, incorporating inertial measurements could further refine event frame processing, complementing optical data with motion cues.

Overall, this paper makes a strong case for the necessity and utility of adaptive bias control in event cameras, offering a clear pathway for enhancing their efficacy in robotic and neuromorphic applications. The approach sets a foundation for further explorations into more intelligent and context-aware calibration methods, potentially incorporating scene understanding for optimal bias adjustments.
