Event-based Camera Pose Tracking using a Generative Event Model

Published 7 Oct 2015 in cs.CV and cs.RO | arXiv:1510.01972v1

Abstract: Event-based vision sensors mimic the operation of the biological retina and represent a major paradigm shift from traditional cameras. Instead of providing frames of intensity measurements synchronously, at artificially chosen rates, event-based cameras provide information on brightness changes asynchronously, when they occur. Such non-redundant pieces of information are called "events". These sensors overcome some of the limitations of traditional cameras (response time, bandwidth and dynamic range) but require new methods to deal with the data they output. We tackle the problem of event-based camera localization in a known environment, without additional sensing, using a probabilistic generative event model in a Bayesian filtering framework. Our main contribution is the design of the likelihood function used in the filter to process the observed events. Based on the physical characteristics of the sensor and on empirical evidence of the Gaussian-like distribution of spiked events with respect to the brightness change, we propose to use the contrast residual as a measure of how well the estimated pose of the event-based camera and the environment explain the observed events. The filter allows for localization in the general case of six degrees-of-freedom motions.

Citations (55)

Summary

  • The paper presents a Bayesian EKF framework built on a probabilistic generative event model for DVS pose tracking.
  • It demonstrates effectiveness on both synthetic and real datasets, achieving small relative errors in position and velocity estimation.
  • The approach leverages sensor-specific contrast residuals to simplify event-based SLAM and improve robot localization.


The paper "Event-based Camera Pose Tracking using a Generative Event Model" marks an important step in pose tracking with event-based cameras, specifically the Dynamic Vision Sensor (DVS). It leverages a generative event model to localize an event-based camera in a known environment in real time, without additional sensors. The methodology employs a Bayesian filtering framework, specifically an Extended Kalman Filter (EKF), to process asynchronously generated brightness-change information, or "events."

Technical Contributions

The core contribution of this research lies in the formulation and application of a probabilistic generative event model within a Bayesian filtering context. This model is used to derive the likelihood function for the correction step of the EKF, allowing efficient and accurate processing of the events generated by the DVS. Based on empirical evidence from sensor data, the authors assume a Gaussian-like distribution of spiked events with respect to brightness change. This assumption suits the EKF, which relies on Gaussian noise models in its estimate updates.
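To illustrate how a scalar, Gaussian-noise measurement drives the correction step of an EKF, here is a minimal, self-contained update routine. This is a sketch of the standard EKF correction under the stated Gaussian assumption, not the paper's implementation; the state layout, Jacobian `H`, and noise variance `r_var` are placeholders.

```python
import numpy as np

def ekf_correct(x, P, residual, H, r_var):
    """One EKF correction step with a scalar measurement.

    x        -- state mean (N,)
    P        -- state covariance (N, N)
    residual -- innovation: observed value minus predicted value (scalar)
    H        -- measurement Jacobian (1, N)
    r_var    -- measurement noise variance (scalar)
    """
    S = H @ P @ H.T + r_var            # innovation covariance (1, 1)
    K = P @ H.T / S                    # Kalman gain (N, 1)
    x = x + (K * residual).ravel()     # corrected state mean
    P = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
    return x, P
```

Because each event contributes a single scalar measurement, the innovation covariance is a scalar and the update avoids any matrix inversion, which matters when processing events at high rates.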

The paper introduces a straightforward generative model of event generation, characterized by the contrast residual: a measure of how well the estimated pose of the event-based camera and the environment explain the observed events. The model integrates the physical properties of the DVS with assumptions about the brightness behavior of the observed environment. Using this contrast residual as the measurement in the EKF is a significant departure from traditional explicit measurement models.
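The idea behind the contrast residual can be sketched as follows. The helpers `log_map` (log-intensity lookup in the known map) and `project` (pose-dependent back-projection of a pixel into the map) and the threshold value are hypothetical stand-ins for illustration, not the paper's actual code:

```python
C_THRESH = 0.22  # assumed DVS contrast threshold (sensor-specific, hypothetical value)

def contrast_residual(log_map, project, pose_now, pose_prev, event_px):
    """Difference between the brightness change predicted by the pose
    hypothesis and the sensor's firing threshold. A residual near zero
    means the estimated poses and the known map explain the event well."""
    # Look up the log intensity of the map point seen at the event pixel
    # under the current and the previous pose hypotheses.
    L_now = log_map(project(pose_now, event_px))
    L_prev = log_map(project(pose_prev, event_px))
    predicted_contrast = L_now - L_prev   # modeled brightness change
    return predicted_contrast - C_THRESH  # measurement residual for the filter
```

A residual like this, treated as a zero-mean Gaussian measurement, is what allows each individual event to be fed to the filter without explicitly matching it to a map feature.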

Experimental Outcomes

Experiments on both synthetic and real datasets support the approach. Synthetic data generated through computer-graphics simulation validated the measurement function against known trajectories, yielding small relative errors in position and velocity estimation. In real-world experiments, the method accurately tracked the pose and velocity of a DVS mounted on a moving platform while processing large quantities of event data efficiently.

Implications and Future Work

The method's practical implications include high-speed maneuvering applications using event-based sensors, particularly in scenarios where traditional cameras falter due to limited temporal response or dynamic range. Moreover, by processing a dense map directly through events, without the data-association problem typically encountered in localization tasks, the approach streamlines event-based SLAM and paves the way for further research in robot localization.

The theoretical implications suggest a paradigm shift in thinking about pose tracking problems, advocating for generative models that closely align with sensor-specific dynamics. Future developments could extend this method toward simultaneous localization and mapping tasks without ancillary sensing, enhancing robustness and adaptability in robot navigation and sensor fusion applications.

In summary, the paper delivers solid quantitative insights and lays the groundwork for future explorations in neuromorphic sensing, enriching the toolset available to researchers in computer vision and robotics. The generative event model offers a promising lens through which the potential of event-based camera systems can be realized and extended.
