
Vision-based Multi-MAV Localization with Anonymous Relative Measurements Using Coupled Probabilistic Data Association Filter

Published 18 Sep 2019 in cs.RO (arXiv:1909.08200v2)

Abstract: We address the localization of robots in a multi-MAV system where external infrastructure like GPS or motion capture systems may not be available. Our approach lends itself to implementation on platforms with several constraints on size, weight, and power (SWaP). Particularly, our framework fuses the onboard VIO with anonymous, vision-based robot-to-robot detection to estimate all robot poses in one common frame, addressing three main challenges: 1) the initial configuration of the robot team is unknown, 2) the data association between each vision-based detection and robot targets is unknown, and 3) the vision-based detection yields false negatives and false positives, and provides inaccurate, noisy bearing and distance measurements of other robots. Our approach extends the Coupled Probabilistic Data Association Filter (CPDAF) [1] to cope with nonlinear measurements. We demonstrate the superior performance of our approach over a simple VIO-based method in a simulation with the measurement models statistically modeled using real experimental data. We also show how onboard sensing, estimation, and control can be used for formation flight.

Citations (33)

Summary

  • The paper introduces a vision and IMU-based framework that integrates anonymous relative measurements with a coupled probabilistic data association filter.
  • It extends CPDAF to manage nonlinear measurement models and address ambiguities in dynamic multi-agent formations.
  • Simulation results validate the method’s robust performance in reducing state errors and enhancing computational efficiency for multi-robot coordination.


The paper "Vision-based Multi-MAV Localization with Anonymous Relative Measurements Using Coupled Probabilistic Data Association Filter" presents a methodology for localizing multi-robot aerial systems, particularly small-scale Micro Aerial Vehicles (MAVs), in environments where traditional localization methods such as GPS or motion capture systems are unavailable or unreliable. The proposed vision- and IMU-based system is tailored for platforms with stringent restrictions on size, weight, and power (SWaP), capitalizing on visual data and inertial measurements to derive relative distance and bearing information between MAVs.

Key Contributions

  1. Localization Framework: The framework integrates visual-inertial odometry with anonymous relative visual measurements to resolve the collective poses of MAVs within a common reference frame. This addresses several notable challenges: unknown initial configurations, ambiguous data associations, and erroneous vision-based measurements which are prone to outliers and noise.
  2. Extension of CPDAF: The study extends the Coupled Probabilistic Data Association Filter (CPDAF) to accommodate nonlinear measurement models inherent in vision-based systems. This specifically caters to the complexities associated with bearing and distance estimations in dynamic multi-agent environments.
  3. Measurement Models from Real Data: Performance validation is conducted through simulation based on realistic measurement models derived from empirical data. The simulated MAVs are modeled on the real-world Falcon 250 platform, featuring dual stereo cameras and an inertial measurement unit for odometry and detection.
  4. On-Board Sensing and Formation Flight: The work demonstrates practical applications for formation flight, showcasing the ability to employ on-board sensing for coordinated control and estimation tasks necessary for stable and adaptable multi-robot formations.
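The nonlinear measurement model at the heart of contribution 2 can be sketched as follows. This is a generic planar range/bearing model with its Jacobian (the paper operates in 3D and its exact noise parameterization differs; the noise standard deviations here are illustrative placeholders, not values from the paper):

```python
import numpy as np

def relative_measurement(p_self, p_other, noise_std=(0.1, 0.02)):
    """Nonlinear range/bearing observation of another robot, expressed in
    the observer's frame. Planar sketch of the kind of vision-based
    measurement the framework fuses; noise_std values are illustrative."""
    dx, dy = p_other[0] - p_self[0], p_other[1] - p_self[1]
    rng = np.hypot(dx, dy) + np.random.normal(0.0, noise_std[0])
    brg = np.arctan2(dy, dx) + np.random.normal(0.0, noise_std[1])
    return np.array([rng, brg])

def measurement_jacobian(p_self, p_other):
    """Jacobian of [range, bearing] w.r.t. the target position, as needed
    to linearize the model inside an EKF-style CPDAF update."""
    dx, dy = p_other[0] - p_self[0], p_other[1] - p_self[1]
    r2 = dx * dx + dy * dy
    r = np.sqrt(r2)
    return np.array([[dx / r, dy / r],
                     [-dy / r2, dx / r2]])
```

The Jacobian is what lets the linear-Gaussian CPDAF machinery be applied to a bearing/distance sensor: the filter relinearizes the measurement function around the current state estimate at each update.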

Implications and Observations

This research has several important implications for the field of multi-robot systems, particularly those deployed in GPS-denied environments. The development of accurate, vision-based localization techniques enhances operational robustness, especially in dynamic and unpredictable settings. The integration of the CPDAF allows the inherently probabilistic nature of anonymous measurements to be handled explicitly, providing a more reliable approach for multi-agent navigation and coordination.
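To make the probabilistic treatment of anonymous measurements concrete, here is a simplified, single-target PDAF-style weight computation. The paper's CPDAF couples all targets jointly, so this is only a minimal illustration of the underlying idea; `clutter_density` is an assumed constant modeling the false-positive rate, not a value from the paper:

```python
import numpy as np

def association_weights(innovations, S, clutter_density=1e-3):
    """Association probabilities for one anonymous measurement against a
    set of candidate targets: each candidate's weight is its Gaussian
    innovation likelihood, normalized together with a constant clutter
    (false-positive) term. The final entry is the probability that the
    detection matches none of the targets."""
    Sinv = np.linalg.inv(S)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(S)))  # 2-D Gaussian
    likes = np.array([norm * np.exp(-0.5 * v @ Sinv @ v) for v in innovations])
    weights = np.append(likes, clutter_density)
    return weights / weights.sum()
```

The filter then updates each target with a weighted combination of the candidate innovations rather than committing to a single hard assignment, which is what makes anonymous detections tractable.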

Moreover, the framework supports flexibility in sensor choices, being potentially adaptable to any bearing-and-distance sensing modality, although it is tested here with vision sensors. The ability to maintain accurate localization with noisy and incomplete data provides resilience that is crucial for practical deployment scenarios.

Numerical Results and Discussion

Simulation results illustrate the framework's efficacy over a naive visual-inertial-odometry baseline, notably in maintaining lower relative state errors across various dynamic configurations. The inclusion of gating and hypothesis evaluation steps drastically reduces computational burden, showing significant improvements in processing times. This efficiency is particularly valuable when scaling to greater numbers of agents or when dealing with higher-dimensional state spaces.
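The gating step mentioned above can be sketched as a Mahalanobis-distance test. This is a standard illustration rather than the paper's exact implementation; the threshold 9.21 is the 99% chi-square quantile for 2 degrees of freedom (range and bearing):

```python
import numpy as np

def in_gate(innovation, S, threshold=9.21):
    """Mahalanobis gating: keep a measurement-target pairing only if its
    normalized innovation squared falls below a chi-square threshold
    (9.21 ~ 99% quantile, 2 DoF). Pairings outside the gate are pruned
    before hypothesis enumeration, which is what cuts the combinatorial
    cost of evaluating anonymous data associations."""
    d2 = innovation @ np.linalg.solve(S, innovation)
    return bool(d2 < threshold)
```

In a multi-target setting, this test is applied to every measurement-target pair; only the surviving pairs participate in the joint hypothesis evaluation, so the number of hypotheses grows with the (much smaller) gated set rather than with all combinations.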

Future Directions

The work invites several avenues for future research, including the refinement of closed-loop control strategies for enhanced real-time adaptive formation management. Additionally, improvements to the hypothesis evaluation step could further optimize computational performance. Integration with other sensing modalities, such as LiDAR or ultra-wideband, could provide broader applicability and improved robustness.

Overall, this paper contributes a crucial step in bridging the gap between theoretical localization frameworks and their deployment in complex, real-world scenarios, thereby advancing the capabilities of MAV systems in real-time collaborative contexts.
