
Foundation Models for Generalist Geospatial Artificial Intelligence

Published 28 Oct 2023 in cs.CV and cs.LG | (2310.18660v2)

Abstract: Significant progress in the development of highly adaptable and reusable AI models is expected to have a significant impact on Earth science and remote sensing. Foundation models are pre-trained on large unlabeled datasets through self-supervision, and then fine-tuned for various downstream tasks with small labeled datasets. This paper introduces a first-of-a-kind framework for the efficient pre-training and fine-tuning of foundational models on extensive geospatial data. We have utilized this framework to create Prithvi, a transformer-based geospatial foundational model pre-trained on more than 1TB of multispectral satellite imagery from the Harmonized Landsat-Sentinel 2 (HLS) dataset. Our study demonstrates the efficacy of our framework in successfully fine-tuning Prithvi to a range of Earth observation tasks that have not been tackled by previous work on foundation models involving multi-temporal cloud gap imputation, flood mapping, wildfire scar segmentation, and multi-temporal crop segmentation. Our experiments show that the pre-trained model accelerates the fine-tuning process compared to leveraging randomly initialized weights. In addition, pre-trained Prithvi compares well against the state-of-the-art, e.g., outperforming a conditional GAN model in multi-temporal cloud imputation by up to 5pp (or 5.7%) in the structural similarity index. Finally, due to the limited availability of labeled data in the field of Earth observation, we gradually reduce the quantity of available labeled data for refining the model to evaluate data efficiency and demonstrate that data can be decreased significantly without affecting the model's accuracy. The pre-trained 100 million parameter model and corresponding fine-tuning workflows have been released publicly as open source contributions to the global Earth sciences community through Hugging Face.

Citations (55)

Summary

  • The paper presents Prithvi, a foundation model pre-trained on over 1TB of multispectral satellite imagery using a masked autoencoder architecture.
  • The paper details a novel data sampling and preprocessing pipeline for harmonized Landsat-Sentinel imagery, ensuring broad geospatial representation.
  • The paper demonstrates Prithvi's effectiveness on cloud gap imputation, flood mapping, and wildfire scar segmentation, outperforming traditional methods.


This essay provides an expert overview of the paper "Foundation Models for Generalist Geospatial Artificial Intelligence" (2310.18660). The paper presents a significant advancement in the application of foundation models to geospatial data, focusing on the Prithvi model—a large-scale, transformer-based model trained on substantial volumes of multispectral satellite imagery.

Introduction to Geospatial Foundation Models

Geospatial AI has traditionally relied on task-specific models trained on labeled data, which is labor-intensive to acquire and annotate. Foundation models, which employ self-supervised learning on large unlabeled datasets before fine-tuning on smaller labeled datasets, represent a paradigm shift. Prithvi leverages this approach, being pre-trained on over 1TB of data from the Harmonized Landsat-Sentinel 2 (HLS) dataset and fine-tuned for tasks such as cloud gap imputation and wildfire scar segmentation (Figure 1).

Figure 1: We propose a first-of-its-kind framework for the development of geospatial foundation models from raw satellite imagery, which we leverage to generate the Prithvi-100M model.

Data Preprocessing and Training

Harmonized Landsat Sentinel-2 Dataset

Prithvi utilizes the HLS dataset, which offers harmonized data from multiple satellite sources at a resolution of 30 meters. The dataset combines observations from Landsat and Sentinel satellites to provide frequent and comprehensive imagery suitable for pretraining large models.

Efficient Data Sampling and Preprocessing

The paper introduces a novel pipeline to efficiently sample and preprocess satellite imagery. A stratified sampling approach based on geospatial statistics ensures broad representation without redundancy, followed by preprocessing that excludes data with significant cloud cover, using the Fmask quality layer for these determinations.
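The cloud-filtering step can be illustrated with a minimal sketch. This is not the paper's actual pipeline: the threshold value and the simplified binary mask (1 = cloud, 0 = clear) are assumptions for illustration; the real HLS Fmask layer encodes several QA conditions per pixel.

```python
import numpy as np

# Illustrative cloud-cover threshold; the paper's exact cutoff is not assumed here.
CLOUD_FRACTION_THRESHOLD = 0.05

def cloud_fraction(fmask: np.ndarray) -> float:
    """Fraction of pixels flagged as cloud in a tile's (simplified) Fmask layer."""
    return float(np.mean(fmask == 1))

def keep_tile(fmask: np.ndarray, threshold: float = CLOUD_FRACTION_THRESHOLD) -> bool:
    """Retain a tile for pre-training only if its cloud cover is below the threshold."""
    return cloud_fraction(fmask) < threshold

# Example: a 4x4 tile with one cloudy pixel (6.25% cover) is rejected at a 5% cutoff.
tile = np.zeros((4, 4), dtype=np.uint8)
tile[0, 0] = 1
print(keep_tile(tile))  # False
```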

Model Architecture

Prithvi uses a Masked Autoencoder (MAE) architecture with a Vision Transformer (ViT) backbone for self-supervised learning. The model processes spatiotemporal data through 3D positional and patch embeddings tailored for satellite imagery (Figure 2).

Figure 2: The masked autoencoder (MAE) structure for pre-training Prithvi on large-scale multi-temporal and multi-spectral satellite images.
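The 3D patching step that turns a multi-temporal, multi-spectral cube into transformer tokens can be sketched with plain array reshaping. The shapes below (3 time steps, 6 bands, 224×224 pixels, 16×16 spatial patches, a temporal patch size of 1) are illustrative assumptions, not necessarily the paper's exact configuration.

```python
import numpy as np

# Input cube: (time, channels, height, width).
T, C, H, W = 3, 6, 224, 224
t_patch, patch = 1, 16  # assumed tubelet sizes for illustration

x = np.random.rand(T, C, H, W).astype(np.float32)

# Rearrange into (num_tokens, token_dim): each token flattens
# t_patch * C * patch * patch raw values, which a linear layer would then embed.
tokens = (
    x.reshape(T // t_patch, t_patch, C, H // patch, patch, W // patch, patch)
     .transpose(0, 3, 5, 1, 2, 4, 6)
     .reshape(-1, t_patch * C * patch * patch)
)
print(tokens.shape)  # (588, 1536): 3 * 14 * 14 tokens, each 1*6*16*16 values
```

In the full model, each flattened token is projected to the embedding dimension and summed with a 3D (time + height + width) positional embedding before entering the ViT encoder.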

Pretraining Details

Pretraining follows the MAE recipe, reconstructing masked tokens from latent representations. The model is optimized with AdamW, and large-scale data handling is streamlined by storing inputs as Zarr files for efficient loading.
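The core of the MAE recipe is random token masking: the encoder sees only a small visible subset, and the reconstruction loss is computed on the hidden tokens. A minimal sketch of that masking step, with an assumed (though MAE-typical) 75% mask ratio:

```python
import numpy as np

# Illustrative MAE-style random masking; the 75% ratio is a common MAE default,
# assumed here rather than taken from the paper.
rng = np.random.default_rng(0)
num_tokens, mask_ratio = 588, 0.75

keep = int(num_tokens * (1 - mask_ratio))
perm = rng.permutation(num_tokens)
visible_idx, masked_idx = perm[:keep], perm[keep:]

# The encoder processes only the visible tokens; the decoder reconstructs the
# masked ones, and the loss (e.g. MSE on pixel values) is taken on masked_idx only.
print(len(visible_idx), len(masked_idx))  # 147 441
```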

Application on Downstream Tasks

Prithvi's architecture allows it to be effectively adapted for various Earth observation tasks, including:

Cloud Gap Imputation

Prithvi outperforms traditional CGAN-based architectures at reconstructing pixel values in cloud-covered areas, showing strong data efficiency and rapid convergence on metrics such as the Structural Similarity Index (SSIM) (Figure 3).

Figure 3: Prithvi can infer pixel values without access to the acquisition date of any of the time steps.
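For reference, SSIM compares luminance, contrast, and structure between a reconstruction and its target. The sketch below computes a single global SSIM value; standard evaluations (and presumably the paper's) use a sliding Gaussian window and average over windows, so this is a simplified illustration of the formula only.

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Single-window SSIM; real evaluations slide a Gaussian window over the image."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2  # standard constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx**2 + my**2 + c1) * (vx + vy + c2)))

img = np.random.default_rng(1).random((32, 32))
print(ssim_global(img, img))  # 1.0 for identical images
```

A 5 percentage-point SSIM gain, as reported against the CGAN baseline, therefore reflects reconstructions that are structurally closer to the true cloud-free imagery.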

Flood and Wildfire Scar Mapping

Evaluations on flood mapping with Sentinel data at a 10-meter resolution demonstrate Prithvi's capability to generalize across resolutions and global datasets. Similarly, wildfire scar segmentation is effectively performed, highlighting Prithvi's versatility and efficiency with limited labeling resources.

Discussion

The framework underlying Prithvi is crucial for domains where labeled data is sparse. By leveraging self-supervised pretraining, Prithvi adapts robustly to diverse geospatial tasks, setting a precedent for future improvements such as incorporating global data and multi-scale features during pretraining.

Conclusion

The Prithvi model represents a pivotal advancement in geospatial AI by proving the efficacy of foundation models in Earth sciences. Through the open-source release of Prithvi's architecture and weights, this paper makes a significant contribution to the field, potentially enhancing AI applications in remote sensing and climate science. The work exemplifies the vital role of self-supervision and efficient data handling in developing generalist AI models capable of addressing varied geospatial challenges.
