Remote sensing visual question answering (RSVQA) involves interpreting complex geospatial information captured in satellite imagery to answer natural language questions, making it a vital tool for observing and analyzing Earth's surface without direct contact. Although numerous studies have addressed RSVQA, most have focused primarily on answer accuracy, often overlooking the underlying reasoning capabilities required to interpret spatial and contextual cues in satellite imagery. To address this gap, this study presents a comprehensive evaluation of four large multimodal models (LMMs): GPT-4o, Grok 3, Gemini 2.5 Pro, and Claude 3.7 Sonnet. We used a curated subset of the EarthVQA dataset consisting of 100 rural images with 29 question–answer pairs each and 100 urban images with 42 pairs each. We developed three task-specific frameworks: (1) Zero-GeoVision, which employs zero-shot prompting with problem-specific prompts that elicit direct answers from each model's pretrained knowledge without fine-tuning; (2) CoT-GeoReason, which adds chain-of-thought prompting, guiding the model through explicit steps of feature detection, spatial analysis, and answer synthesis; and (3) Self-GeoSense, which extends this approach by stochastically decoding five independent reasoning chains for each remote sensing question. Rather than merging these chains, it counts their final answers, selects the majority choice, and returns a single complete reasoning chain whose conclusion aligns with that majority (see the sketch below).
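The abstract itself contains no code; the following is only an illustrative reconstruction of the CoT-GeoReason prompt and the Self-GeoSense majority vote. The query_lmm wrapper, the prompt wording, and the "Answer:" output convention are all assumptions, not details from the paper; Zero-GeoVision would be the same call with chain_of_thought=False.

```python
from collections import Counter

# Hypothetical stand-in for a multimodal API call to GPT-4o, Grok 3,
# Gemini 2.5 Pro, or Claude 3.7 Sonnet; the signature is an assumption.
def query_lmm(image, prompt, temperature=0.7):
    raise NotImplementedError("wire this to a real multimodal model API")

# CoT-GeoReason's explicit reasoning steps, paraphrased from the abstract;
# the exact prompt wording is not given in the source.
COT_INSTRUCTIONS = (
    "Step 1: detect the relevant features in the image. "
    "Step 2: analyze their spatial relations. "
    "Step 3: synthesize the final answer. "
    "Finish with a line of the form 'Answer: <answer>'."
)

def build_prompt(question, chain_of_thought=True):
    # Zero-GeoVision asks the question directly; CoT-GeoReason appends
    # the step-by-step reasoning instructions.
    return f"{question}\n{COT_INSTRUCTIONS}" if chain_of_thought else question

def extract_answer(chain):
    # The 'Answer:' marker is an assumed output convention.
    return chain.rsplit("Answer:", 1)[-1].strip().lower()

def self_geosense(image, question, n_chains=5):
    """Stochastically decode five independent reasoning chains, count the
    final answers, and return the majority answer plus one chain that
    supports it (chains are never merged)."""
    prompt = build_prompt(question, chain_of_thought=True)
    chains = [query_lmm(image, prompt, temperature=0.7) for _ in range(n_chains)]
    votes = Counter(extract_answer(c) for c in chains)
    majority_answer, _ = votes.most_common(1)[0]
    supporting_chain = next(c for c in chains if extract_answer(c) == majority_answer)
    return majority_answer, supporting_chain
```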
Additionally, we designed the Geo-Judge framework, which employs a two-stage evaluation process. In Stage 1, a GPT-4o-mini-based LMM judge assesses reasoning coherence and answer correctness using the input image, task type, reasoning steps, generated model answer, and ground truth. In Stage 2, blinded human experts independently review the LMM's reasoning and answer, providing unbiased validation through careful reassessment.
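Stage 1 of Geo-Judge could be sketched as follows, assuming the same hypothetical query_lmm stub as above; the rubric wording is illustrative, not the paper's actual judge prompt.

```python
JUDGE_TEMPLATE = """You are judging a remote sensing VQA response.
Task type: {task_type}
Question: {question}
Model reasoning: {reasoning}
Model answer: {answer}
Ground truth: {ground_truth}

Assess (a) the coherence of the reasoning and (b) the correctness of
the answer against the ground truth, with a brief justification."""

def query_lmm(image, prompt, temperature=0.0):
    # Hypothetical GPT-4o-mini call, as in the previous sketch.
    raise NotImplementedError("wire this to a real multimodal model API")

def geo_judge_stage1(image, task_type, question, reasoning, answer, ground_truth):
    """Stage 1 of Geo-Judge: a GPT-4o-mini judge receives the input image,
    task type, reasoning steps, generated answer, and ground truth.
    Stage 2 (blinded human expert review) happens outside this function."""
    prompt = JUDGE_TEMPLATE.format(
        task_type=task_type, question=question, reasoning=reasoning,
        answer=answer, ground_truth=ground_truth,
    )
    return query_lmm(image, prompt, temperature=0.0)  # deterministic judging
```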
Among these configurations, Self-GeoSense with Grok 3 achieves the strongest performance, with 94.69% accuracy in Basic Judging, 93.18% in Basic Counting, 89.42% in Reasoning-Based Judging, 83.29% in Reasoning-Based Counting, 77.64% in Object Situation Analysis, and 65.29% in Comprehensive Analysis, alongside RMSE values of 0.9102 in Basic Counting and 1.0551 in Reasoning-Based Counting.
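The abstract does not define RMSE; presumably it is the standard root-mean-square error over the N counting questions, with predicted count \hat{y}_i and ground-truth count y_i:

\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2}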