
Falcon: A Remote Sensing Vision-Language Foundation Model

Published 14 Mar 2025 in cs.CV (arXiv:2503.11070v1)

Abstract: This paper introduces a holistic vision-language foundation model tailored for remote sensing, named Falcon. Falcon offers a unified, prompt-based paradigm that effectively executes comprehensive and complex remote sensing tasks. Falcon demonstrates powerful understanding and reasoning abilities at the image, region, and pixel levels. Specifically, given simple natural language instructions and remote sensing images, Falcon can produce impressive results in text form across 14 distinct tasks, e.g., image classification, object detection, segmentation, and image captioning. To facilitate Falcon's training and empower its representation capacity to encode rich spatial and semantic information, we developed Falcon_SFT, a large-scale, multi-task, instruction-tuning dataset in the field of remote sensing. The Falcon_SFT dataset consists of approximately 78 million high-quality data samples, covering 5.6 million multi-spatial-resolution and multi-view remote sensing images with diverse instructions. It features hierarchical annotations and undergoes manual sampling verification to ensure high data quality and reliability. Extensive comparative experiments verify that Falcon achieves remarkable performance over 67 datasets and 14 tasks, despite having only 0.7B parameters. We release the complete dataset, code, and model weights at https://github.com/TianHuiLab/Falcon, hoping to help further develop the open-source community.

Summary

  • The paper introduces Falcon, a 0.7B parameter vision-language foundation model designed for remote sensing, utilizing a unified prompt-based approach for various tasks.
  • A key innovation is Falcon_SFT, a large-scale (78M samples, 5.6M images), multi-task instruction-tuning dataset that enables Falcon's robust training and versatility.
  • Falcon achieves state-of-the-art performance across 14 tasks and 67 datasets, demonstrating strong generalization and potential for practical applications in remote sensing.

This paper introduces Falcon, a vision-language foundation model explicitly tailored for remote sensing applications. Falcon integrates vision and language in a holistic paradigm, leveraging a unified, prompt-based approach to perform a multitude of complex remote sensing tasks. The model demonstrates strong understanding and reasoning abilities at the image, region, and pixel levels, corroborated by robust evaluations across 14 distinct tasks such as image classification, object detection, segmentation, and image captioning.
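To make the unified, prompt-based paradigm concrete, the sketch below shows how such an interface might look from a user's perspective. The prompt wordings and the `run_falcon` helper are illustrative assumptions, not the released API; see the official repository for the actual usage.

```python
# Illustrative sketch of a unified, prompt-based multi-task interface.
# Every task is expressed as a natural language instruction, and every
# result comes back as text. The prompt strings and run_falcon() helper
# are assumptions for demonstration, not the project's real API.

TASK_PROMPTS = {
    "classification": "What is the category of this remote sensing image?",
    "detection": "Locate all airplanes in the image.",
    "segmentation": "Segment the water regions in the image.",
    "captioning": "Describe this remote sensing image in one sentence.",
}

def run_falcon(image_path: str, task: str) -> str:
    """Placeholder: a real call would feed the image and prompt to the
    model and return its textual output (labels, boxes, masks, or
    captions, all encoded as text)."""
    prompt = TASK_PROMPTS[task]
    return f"[model output for prompt: {prompt!r} on {image_path}]"

output = run_falcon("scene_001.tif", "captioning")
```

The key point is that switching tasks changes only the instruction string, not the model or the output modality.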

A crucial innovation underpinning Falcon's capabilities is the development of Falcon_SFT, a large-scale, multi-task instruction-tuning dataset. This dataset is composed of approximately 78 million high-quality samples encompassing 5.6 million remote sensing images with diverse spatial resolutions and viewpoints. Falcon_SFT is characterized by its comprehensive annotation hierarchy and rigorous sample-quality verification.
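A single Falcon_SFT sample pairs an image with an instruction and a textual response, annotated at one of the hierarchy's levels. The record layout below is a hypothetical sketch; the field names and the location-token format in the example response are assumptions, not the dataset's published schema.

```python
from dataclasses import dataclass

# Hypothetical record layout for one Falcon_SFT instruction-tuning
# sample. The paper describes samples as (image, instruction, text
# response) triples with hierarchical image-, region-, and pixel-level
# annotations; the exact field names and response encoding here are
# illustrative assumptions.

@dataclass
class InstructionSample:
    image_id: str     # one of ~5.6M multi-resolution, multi-view images
    task: str         # one of the 14 supported tasks
    level: str        # "image", "region", or "pixel"
    instruction: str  # natural language prompt
    response: str     # target answer, always expressed as text

sample = InstructionSample(
    image_id="img_000001",
    task="object_detection",
    level="region",
    instruction="Locate all storage tanks in the image.",
    response="storage tank <box_placeholder>",  # assumed text encoding
)
```

Encoding region- and pixel-level targets as text is what lets one encoder-decoder serve all 14 tasks with a single output format.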

The study highlights Falcon's capacity to achieve remarkable performance across various benchmarks, surpassing existing state-of-the-art models despite its relatively modest architecture of 0.7 billion parameters. Extensive evaluations confirm Falcon's efficacy over 67 datasets and across all 14 defined tasks, setting a new standard in the domain of remote sensing.

One of the significant conceptual advancements presented is tackling the domain and knowledge gap that has traditionally existed between natural images and remote sensing data. Previous works have primarily concentrated on task-specific models for remote sensing, which has constrained their scalability and adaptability. Falcon addresses these limitations by serving as a more versatile and comprehensive foundation model capable of executing reasoning tasks at different granularity levels.

Additionally, the paper emphasizes Falcon's success in data-driven training facilitated by Falcon_SFT. This dataset leverages a broad array of remote sensing images and includes innovative data annotation techniques, ensuring the model learns robust and generalizable representations. The dataset extends beyond typical remote sensing annotations by incorporating hierarchical annotations, enhancing the model's versatility and application range.

The architecture of Falcon integrates an image encoder and a multi-modality encoder-decoder, transforming single or paired image inputs into a universal textual output. Furthermore, Falcon adopts a dynamic prompt training strategy, designed to process diverse instruction formats and thereby strengthen its comprehension of varied task prompts.
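One plausible reading of the dynamic prompt training strategy is that each training sample draws its instruction from a pool of paraphrased templates rather than a single fixed wording, so the model learns to follow diverse phrasings of the same task. The sketch below illustrates that idea; the template texts are assumptions, not the paper's actual prompt pool.

```python
import random

# Sketch of dynamic prompt sampling during training: instead of one
# fixed instruction per task, each sample draws its prompt from a pool
# of paraphrased templates. The template wordings are illustrative
# assumptions about how such a pool might look.

CAPTION_TEMPLATES = [
    "Describe this remote sensing image.",
    "What does this satellite image show?",
    "Provide a brief caption for the image.",
]

def build_training_prompt(templates: list[str], rng: random.Random) -> str:
    """Randomly select one paraphrase of the task instruction."""
    return rng.choice(templates)

rng = random.Random(0)
prompts = {build_training_prompt(CAPTION_TEMPLATES, rng) for _ in range(20)}
```

Exposing the model to many wordings of the same instruction is a common way to keep a prompt-driven model from overfitting to one canonical phrasing.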

Falcon's utility extends beyond immediate task performance; it holds potential for significant practical applications in the remote sensing domain, such as land cover classification, urban planning, and environmental monitoring. The model's adaptability to diverse tasks underscores its potential as a baseline platform for further advancements in remote sensing vision-language models.

The paper concludes with a commitment to open-source the complete dataset, source code, and model weights, which is expected to catalyze further exploration and development in the community. Such transparency and collaboration are pivotal for stimulating innovation and advancing the state-of-the-art in remote sensing AI.

Future directions for this research include refining the model's performance on more nuanced and complex remote sensing tasks, exploring the integration with additional types of non-image data, and further reducing computational demands without sacrificing accuracy. Considering Falcon's promising results, it has the capacity to significantly shape the future of AI applications in remote sensing.
