Enhancing Visual Place Recognition via Fast and Slow Adaptive Biasing in Event Cameras
Abstract: Event cameras are increasingly popular in robotics thanks to beneficial features such as low latency, energy efficiency, and high dynamic range. Nevertheless, their downstream task performance is greatly influenced by the optimization of bias parameters. These parameters, for instance, regulate the change in light intensity required to trigger an event, which in turn depends on factors such as the environment lighting and camera motion. This paper introduces feedback control algorithms that automatically tune the bias parameters through two interacting methods: 1) an immediate, on-the-fly fast adaptation of the refractory period, which sets the minimum interval between consecutive events; and 2) if the event rate exceeds the specified bounds even after repeated refractory-period adjustments, a slow adaptation of the pixel bandwidth and event thresholds, which stabilizes after a short burst of noise events across all pixels. Our evaluation focuses on the visual place recognition task, in which incoming query images are compared against a reference database. We conducted comprehensive real-time evaluations of our algorithms' adaptive feedback control. To do so, we collected the QCR-Fast-and-Slow dataset, which contains DAVIS346 event camera streams from 366 repeated traversals of a Scout Mini robot navigating a 100-meter-long indoor lab route (totaling over 35 km traveled) under varying brightness conditions, with ground-truth location information. Our proposed feedback controllers outperform the standard bias settings and prior feedback control methods. Our findings also detail the impact of bias adjustments on task performance, and we present ablation studies of the fast and slow adaptation mechanisms.
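The two-tier control logic described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the class name, parameter names (event-rate bounds, step counts, multiplicative step sizes), and the generic bias dictionary are all assumptions made for the sake of the example.

```python
class FastSlowBiasController:
    """Illustrative sketch of a fast/slow bias feedback controller.

    The controller keeps the measured event rate inside a target band.
    Out-of-band rates first trigger fast adaptation (refractory period);
    if repeated fast steps do not suffice, slow adaptation adjusts the
    pixel bandwidth and event thresholds. All step sizes are arbitrary.
    """

    def __init__(self, rate_low, rate_high, max_fast_steps=5):
        self.rate_low = rate_low            # lower event-rate bound (events/s)
        self.rate_high = rate_high          # upper event-rate bound (events/s)
        self.max_fast_steps = max_fast_steps
        self.fast_steps = 0                 # consecutive fast adjustments so far

    def update(self, event_rate, biases):
        """Return the bias dict, adjusted if the rate is out of bounds."""
        if self.rate_low <= event_rate <= self.rate_high:
            self.fast_steps = 0             # in band: nothing to do
            return biases
        direction = 1 if event_rate > self.rate_high else -1
        if self.fast_steps < self.max_fast_steps:
            # Fast adaptation: lengthen (or shorten) the refractory period,
            # i.e. the minimum interval between consecutive events per pixel.
            biases["refractory_period_us"] *= 2.0 ** direction
            self.fast_steps += 1
        else:
            # Slow adaptation: reduce (or raise) the pixel bandwidth and
            # raise (or lower) the event thresholds; the sensor emits a
            # short burst of noise events while these biases settle.
            biases["pixel_bandwidth_hz"] /= 2.0 ** direction
            biases["event_threshold"] *= 1.5 ** direction
            self.fast_steps = 0
        return biases
```

In this sketch the fast path is exhausted before the slow path is invoked, mirroring the paper's description that bandwidth and thresholds are only touched after repeated refractory-period changes fail to bring the event rate back within bounds.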