Generalization of ForestLPR to dense jungle environments

Determine whether ForestLPR, a LiDAR place recognition method that extracts a global descriptor from multiple bird's-eye-view (BEV) density images of horizontal point-cloud slices and fuses them with a multi-BEV interaction module, generalizes to dense jungle environments beyond the forest-like datasets used in its experiments.

Background

ForestLPR is introduced as a LiDAR-based place recognition framework tailored to natural forest environments. It generates multiple bird's-eye-view (BEV) density images from horizontal slices of the point cloud between 1 and 6 meters and uses a multi-BEV interaction module to adaptively attend to the most discriminative heights.
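The slicing-and-projection step described above can be sketched as follows. This is an illustrative reconstruction, not ForestLPR's actual implementation: the function name, slice boundaries, grid size, and range parameters are all assumptions chosen for the example.

```python
# Hypothetical sketch: split a LiDAR point cloud into horizontal slabs
# between 1 m and 6 m and rasterize each slab into a BEV density image.
# All parameter values here are illustrative, not from the paper.
import numpy as np

def bev_density_images(points, z_edges=(1.0, 2.0, 3.5, 6.0),
                       xy_range=20.0, grid=64):
    """points: (N, 3) array of x, y, z; returns (num_slices, grid, grid)."""
    bins = np.linspace(-xy_range, xy_range, grid + 1)
    images = []
    for z_lo, z_hi in zip(z_edges[:-1], z_edges[1:]):
        # Keep only points whose height falls inside this slab.
        slab = points[(points[:, 2] >= z_lo) & (points[:, 2] < z_hi)]
        # Count points per XY cell to form a density image.
        img, _, _ = np.histogram2d(slab[:, 0], slab[:, 1], bins=(bins, bins))
        images.append(img)
    return np.stack(images)

# Example: a random cloud yields one density image per height slice.
cloud = np.random.default_rng(0).uniform(
    low=[-20, -20, 0], high=[20, 20, 8], size=(10000, 3))
imgs = bev_density_images(cloud)
```

In this sketch, each slice emphasizes structure at a different canopy height (trunks low, foliage higher), which is the property the multi-BEV interaction module is described as exploiting when it attends across slices.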

The method is evaluated on forest-like datasets (Wild-Places, ANYmal, Botanic) and shows strong performance and generalization across these settings. However, the paper explicitly notes uncertainty about how the approach would perform in dense jungles, which may present greater occlusion, vegetation complexity, and seasonal variability than the tested environments.

References

Since our method is proposed for forests and has only been tested on forest-like datasets, its generalization to dense jungles is still uncertain; there, it is likely that all methods would have difficulties.

ForestLPR: LiDAR Place Recognition in Forests Attentioning Multiple BEV Density Images  (2503.04475 - Shen et al., 6 Mar 2025) in Discussion and Conclusion (final paragraph)