🔥[2024-01-22]: Initial release of our benchmark!
Large Vision-Language Models (VLMs) have demonstrated impressive performance on complex tasks involving visual input with natural language instructions. However, it remains unclear to what extent capabilities on natural images transfer to Earth observation (EO) data, which are predominantly satellite and aerial images less common in VLM training data. In this work, we propose a comprehensive benchmark to gauge the progress of VLMs toward being useful tools for EO data by assessing their abilities on scene understanding, localization and counting, and change detection tasks. Motivated by real-world applications, our benchmark includes scenarios like urban monitoring, disaster relief, land use, and conservation. We discover that, although state-of-the-art VLMs like GPT-4V possess extensive world knowledge that leads to strong performance on open-ended tasks like location understanding and image captioning, their poor spatial reasoning limits their usefulness on object localization and counting tasks.
In this paper, we provide an application-focused evaluation of instruction-following VLMs like GPT-4V for different capabilities in EO, including location understanding, zero-shot remote sensing scene understanding, world knowledge, text-grounded object localization and counting, and change detection. These capabilities provide the EO community with pathways for impact in real-world application areas, including urban monitoring, disaster relief, land use, and conservation.
Desired Capabilities for EO Data. To build an EO benchmark for VLMs, we focus on three broad categories of capabilities in our initial release: scene understanding, localization and counting, and change detection. Within each category, we construct evaluations based on applications ranging from animal conservation to urban monitoring. Our goals are to (1) evaluate the performance of existing VLMs, (2) provide insights into prompting techniques suitable for repurposing existing VLMs to EO tasks (see the prompt sketch after the note below), and (3) implement a data and model interface that supports flexible benchmark updates and evaluation of future VLMs. Our categories and tasks are:
We note that a number of capabilities desired for EO data remain unattainable by current-generation VLMs due to their inability to ingest multi-spectral, non-optical, or multi-temporal images. This is unlikely to be addressed by the vision community while its focus remains on natural images. Furthermore, available VLMs do not yet perform image segmentation, although we expect this to change in the near future.
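To make goal (2) concrete, below is a minimal sketch of how a zero-shot scene-classification query could be sent to GPT-4V through the OpenAI chat completions API. The label set, prompt wording, image path, and model name are illustrative assumptions, not the prompts or configuration used in the benchmark.

```python
import base64
from openai import OpenAI  # assumes the openai>=1.0 Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def encode_image(path: str) -> str:
    """Return the image at `path` as a base64 string for the data-URL payload."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# Illustrative label set and image path -- not the benchmark's actual data.
labels = ["residential", "forest", "farmland", "harbor", "airport"]
image_b64 = encode_image("example_scene.png")

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # GPT-4V model name at the time of the initial release
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": ("You are analyzing a satellite image. "
                      f"Classify the scene as exactly one of: {', '.join(labels)}. "
                      "Answer with the label only.")},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
    max_tokens=10,
)
print(response.choices[0].message.content)
```

Restricting the answer to a closed label set makes the free-form response straightforward to score against ground-truth scene labels.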
GPT-4V has scene understanding abilities but cannot accurately count or localize objects. Only part of the user prompt and model response is shown for illustration.
Below, we summarize insights from our evaluations, focusing on GPT-4V, as it is generally the best-performing VLM across Earth observation tasks. We elaborate on these results in the Scene Understanding, Localization & Counting, and Change Detection sections.
@article{zhang2024vleobench,
title = {Good at captioning, bad at counting: Benchmarking GPT-4V on Earth observation data},
author = {Chenhui Zhang and Sherrie Wang},
year = {2024},
journal = {arXiv preprint arXiv:2401.17600}
}