Fusion of Visual-Inertial Odometry with LiDAR Relative Localization for Cooperative Guidance of a Micro-Scale Aerial Vehicle

Published 30 Jun 2023 in cs.RO (arXiv:2306.17544v2)

Abstract: This paper proposes a novel relative localization approach for the guidance of a micro-scale UAV by a well-equipped aerial robot, fusing VIO with LiDAR. LiDAR-based localization is accurate and robust to challenging environmental conditions, but 3D LiDARs are relatively heavy and require large UAV platforms, in contrast to lightweight cameras. Visual self-localization methods, however, exhibit lower accuracy and can suffer from significant drift with respect to the global reference frame. To benefit from both sensory modalities, we focus on cooperative navigation in a heterogeneous team of a primary LiDAR-equipped UAV and a secondary micro-scale camera-equipped UAV. We propose a novel cooperative approach that combines LiDAR relative localization data with VIO output on board the primary UAV to obtain an accurate pose of the secondary UAV. This pose estimate is used to precisely and reliably guide the secondary UAV along trajectories defined in the primary UAV's reference frame. The experimental evaluation showed that our method is more accurate than the raw VIO output and demonstrated its capability to guide the secondary UAV along desired trajectories while mitigating VIO drift. Such a heterogeneous system can thus explore large areas with LiDAR precision, as well as visit locations inaccessible to large LiDAR-carrying UAV platforms, as was showcased in a real-world cooperative mapping scenario.
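The core idea of fusing LiDAR relative localization with VIO can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a hypothetical, translation-only illustration that estimates the offset between the secondary UAV's drifting VIO frame and the primary UAV's LiDAR reference frame, then uses that offset to correct subsequent raw VIO positions. All function names are assumptions, and a full implementation would also estimate rotation and account for the primary UAV's attitude.

```python
import numpy as np

def drift_correction(vio_pos, lidar_rel_pos, primary_pos):
    """Estimate the translational offset between the secondary UAV's
    drifting VIO frame and the primary UAV's LiDAR reference frame.

    vio_pos       : secondary UAV position reported by its VIO (VIO frame)
    lidar_rel_pos : secondary UAV position as measured by the primary
                    UAV's LiDAR, relative to the primary UAV
    primary_pos   : primary UAV position in its own (LiDAR) reference frame
    All arguments are 3-vectors (np.ndarray).
    """
    # Secondary UAV position in the primary's reference frame, as observed
    # by LiDAR. (This sketch assumes the primary's attitude is identity;
    # in practice lidar_rel_pos would first be rotated by the primary's
    # orientation.)
    secondary_in_primary = primary_pos + lidar_rel_pos
    # Offset that maps VIO-frame positions into the primary's frame.
    return secondary_in_primary - vio_pos

def fuse(vio_pos, offset):
    """Correct a raw VIO position with the latest estimated offset."""
    return vio_pos + offset
```

In this simplified view, each LiDAR detection of the secondary UAV refreshes the offset, so the correction tracks the VIO drift over time while the high-rate VIO stream provides pose estimates between LiDAR updates.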
