POLAR-Sim: Augmenting NASA's POLAR Dataset for Data-Driven Lunar Perception and Rover Simulation
Abstract: NASA's POLAR dataset contains approximately 2,600 pairs of high dynamic range stereo images captured across 13 varied terrain scenarios, including areas with sparse or dense rock distributions, craters, and rocks of different sizes. These images are meant to spur development in robotics, AI-based perception, and autonomous navigation. Because images from around the lunar poles are scarce, NASA Ames produced the dataset on Earth, under controlled conditions that emulate rover operating conditions in these regions of the Moon. We report on the outcomes of an effort aimed at accomplishing two tasks. In Task 1, we provided bounding boxes and semantic segmentation information for all the images in NASA's POLAR dataset. This effort resulted in 23,000 labels and semantic segmentation annotations pertaining to rocks, shadows, and craters. In Task 2, we generated digital twins of the 13 scenarios used to produce all the images in the POLAR dataset. Specifically, for each scenario, we produced individual meshes, texture information, and material properties associated with the ground and the rocks. Consequently, anyone with a camera model can synthesize images of any of the 13 POLAR scenarios, generating as many semantically labeled synthetic images as desired: at different locations and exposure values in the scene, for different positions of the sun, and with or without active illumination. The benefit of this work is twofold. With the outcomes of Task 1, one can train and test perception algorithms that deal with Moon images. With the outcomes of Task 2, one can produce as much data as desired to train and test AI algorithms anticipated to work in lunar conditions. All outcomes of this work are available in a public repository for unfettered use and distribution.
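The Task 1 annotations can be consumed with standard tooling. Below is a minimal sketch, assuming the bounding boxes are distributed in Pascal VOC XML (the format emitted by common labeling tools) and assuming a hypothetical file layout; the actual release may use a different format or directory structure.

```python
import xml.etree.ElementTree as ET

def load_boxes(xml_path):
    """Return (class_name, xmin, ymin, xmax, ymax) tuples from one VOC annotation file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.find("name").text  # e.g. "rock", "shadow", or "crater"
        bb = obj.find("bndbox")
        coords = [int(float(bb.find(k).text)) for k in ("xmin", "ymin", "xmax", "ymax")]
        boxes.append((name, *coords))
    return boxes

# Hypothetical path; the directory layout of the released labels may differ.
for box in load_boxes("labels/scenario_01/left_0001.xml"):
    print(box)
```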
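For Task 2, any renderer that accepts a triangle mesh and a camera model can synthesize views of a digital twin. The sketch below uses the off-the-shelf trimesh/pyrender stack rather than the authors' rendering pipeline; the asset file name is hypothetical, and a single low-elevation directional light stands in for the polar sun.

```python
import numpy as np
import trimesh
import pyrender  # needs an OpenGL context; set PYOPENGL_PLATFORM=egl for headless use

# Hypothetical asset name -- the file layout of the released digital twins may differ.
terrain = trimesh.load("scenario_01/terrain.obj", force="mesh")
scene = pyrender.Scene(ambient_light=np.zeros(3))  # no ambient term: space-like lighting
scene.add(pyrender.Mesh.from_trimesh(terrain))

# Pinhole camera 2 m above the terrain, looking straight down (-z).
cam = pyrender.PerspectiveCamera(yfov=np.pi / 3.0)
cam_pose = np.eye(4)
cam_pose[2, 3] = 2.0
scene.add(cam, pose=cam_pose)

# Directional light tilted to ~10 deg elevation, mimicking grazing polar illumination.
sun_pose = trimesh.transformations.rotation_matrix(np.radians(80.0), [1, 0, 0])
scene.add(pyrender.DirectionalLight(color=np.ones(3), intensity=5.0), pose=sun_pose)

r = pyrender.OffscreenRenderer(viewport_width=1024, viewport_height=1024)
color, depth = r.render(scene)  # color: HxWx3 uint8, depth: HxW float32
r.delete()
```

Varying `cam_pose`, the light rotation, and the light intensity sweeps the camera location, sun position, and effective exposure; a physically based camera model would replace the fixed pinhole camera used here.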