Review

An Operational Ethical Framework for GeoAI: A PRISMA-Based Systematic Review of International Policy and Scholarly Literature

Department of Drone and GIS Engineering, Namseoul University, 91 Daehak-ro, Seobuk-gu, Cheonan-si 31020, Republic of Korea
ISPRS Int. J. Geo-Inf. 2026, 15(1), 51; https://doi.org/10.3390/ijgi15010051
Submission received: 18 November 2025 / Revised: 6 January 2026 / Accepted: 19 January 2026 / Published: 22 January 2026

Abstract

This study proposes a systematic framework for establishing ethical guidelines for GeoAI (Geospatial Artificial Intelligence), which integrates AI with spatial data science, GIS, and remote sensing. While general AI ethics have advanced through the OECD, UNESCO, and the EU AI Act, ethical standards tailored to GeoAI remain underdeveloped. Geospatial information exhibits unique characteristics—spatiality, contextuality, and spatial autocorrelation—and consequently entails distinct risks such as geo-privacy, spatial fairness and bias, data provenance and quality, and misuse prevention related to mapping and surveillance. Following PRISMA 2020, a systematic review of 32 recent international policy documents and peer-reviewed articles was conducted; through content analysis with intercoder reliability verification (Krippendorff’s α ≥ 0.76), GeoAI ethical principles were extracted and normalized. The analysis identified twelve ethical axes—Geo-privacy, Data Provenance and Quality, Spatial Fairness and Bias, Transparency, Accountability and Auditability, Safety (Security and Robustness), Human Oversight and Human-in-the-Loop, Public Benefit and Sustainability, Participation and Stakeholder Engagement, Lifecycle Governance, Misuse Prevention, and Inclusion and Accessibility—each accompanied by an operational guideline. These axes together form a practical framework that integrates universal AI ethics principles with spatially specific risks inherent in GeoAI and specifies actionable assessment points across the GeoAI lifecycle. The framework is intended for direct use as checklists and governance artifacts (e.g., model/data cards) and as procurement and audit criteria in academic, policy, and administrative settings.

1. Introduction

The rapid advancement of AI has brought about structural transformations across social, economic, and technological domains in the twenty-first century. Machine learning and deep learning technologies (including large language models) have established themselves as new systems that augment human cognition and decision making by enabling rapid analysis of large-scale datasets [1,2,3,4]. In parallel, the ethical, legal, and social issues (ELSI) surrounding AI have become a central governance agenda, leading to the creation of numerous global guidelines and frameworks for responsible AI development and use. The OECD AI Principles identify five core axes—inclusive growth, human rights and democratic values, transparency and explainability, safety and robustness, and accountability—while the G20 reaffirmed these at an international level to provide benchmarks for national AI strategies [5]. UNESCO’s Recommendation on the Ethics of Artificial Intelligence and its 2024 revision emphasized human rights, sustainability, and multistakeholder governance as foundational to a global ethical standard [6,7]. The European Union became the first jurisdiction to legislate a risk-based regulatory framework through the EU AI Act [8], and the Council of Europe adopted in 2024 the world’s first binding AI Convention addressing human rights, democracy, and the rule of law [9]. Collectively, these developments underscore that AI is not merely a technological achievement, but a governance issue directly tied to human rights, social values, and public trust.
Geospatial information—collected from satellites, aircraft, drones, LiDAR, IoT sensors, authoritative surveys and base maps produced by publicly mandated authorities, administrative datasets, and citizen-science platforms—exhibits spatiality, contextuality, and spatial autocorrelation, enabling policy-relevant analysis while raising location-related risks that require governance and standardization [10,11,12,13,14,15,16,17,18,19,20]. Accordingly, international frameworks and standards (e.g., guidance from the United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM), modern specifications from the Open Geospatial Consortium (OGC), and the INSPIRE Directive) have positioned geospatial data as shared infrastructures for AI training and evidence-based policymaking [19,20,21,22]. UN-GGIM functions as an intergovernmental coordination body that promotes institutional arrangements and common standards so that geospatial information can operate as a trustworthy public infrastructure across jurisdictions.
Within this evolution, the concept of GeoAI has emerged. GeoAI integrates AI methods with spatial data science, geographic information systems (GIS), remote sensing, and cartography, establishing a transformative paradigm that advances spatial object recognition, land-cover classification, urban and environmental change detection, super-resolution reconstruction, and predictive modeling [23,24,25,26,27]. The EU’s Destination Earth project integrates AI, high-performance computing (HPC), and ultra-high-resolution numerical models to realize digital-twin simulations for extreme weather and climate-adaptation scenarios, revolutionizing decision support [28]. In this context, GeoAI reorganizes the entire lifecycle of data generation, processing, and utilization, offering new methodologies for addressing complex societal problems, while simultaneously amplifying the ethical stakes of spatial data use.
However, the rapid diffusion of GeoAI also brings significant ethical challenges. Location-based privacy violations, risks of re-identification and linkage, spatial imbalances such as the modifiable areal unit problem (MAUP), map-visualization distortion, and surveillance-oriented misuse have been identified as domain-specific ethical concerns [29,30,31,32]. For example, even sparse spatiotemporal traces can enable re-identification or sensitive inference about individuals’ routines. Moreover, because GeoAI is often image-intensive (e.g., large-scale remote-sensing imagery and foundation models), its computational and ecological footprint has become an additional ethical consideration under sustainability and public-interest criteria. General AI ethics frameworks alone cannot adequately address these spatial specificities; hence, a dedicated ethical framework tailored to the GeoAI context is essential [32,33]. The Locus Charter proposed by the EthicalGEO and Benchmark Initiative highlights principles such as minimal data collection, purpose limitation, anonymization, protection of vulnerable groups, and accountability [34]. In this context, minimal data collection denotes the practice of limiting spatial and temporal granularity, retention duration, and data attributes to what is strictly necessary, for example, by using aggregated grids or proximity zones rather than raw trajectories. Purpose limitation refers to the restriction of data use to the explicitly stated objective, excluding secondary applications. Accountability is understood as the clear identification of responsible entities, supported by auditable documentation and logging, together with established mechanisms for review and remedy.
The World Geospatial Industry Council [35] outlines fairness, transparency, and data-quality principles for spatial AI/ML applications, and the UK Geospatial Commission promotes the ABC Principles (Accountability–Bias–Clarity) to foster public confidence [36]. These efforts provide valuable starting points, yet they remain distributed across different venues and levels of normativity, leaving limited guidance on how to integrate them into a coherent and operational GeoAI ethics framework.
Academic contributions complement these institutional efforts. Rao et al. propose technical solutions such as privacy-preserving synthetic-trajectory generation and federated learning [37,38], while Ye et al. discuss ethical design in human-centered GeoAI foundation models that reflect human activity and social dynamics [39]. These studies underscore that GeoAI ethics must extend beyond technical safeguards to encompass institutional and societal contexts. Although broad consensus exists around universal AI ethics principles—Transparency, Fairness, Privacy, Accountability and Auditability, and Safety, Security and Robustness [1,2,40]—GeoAI-specific ethics research remains nascent. McKenzie et al. [29] emphasize Geo-privacy and community-level privacy; Janowicz [30] examines sustainability, diversity, and bias from philosophical and normative perspectives; Kang et al. [31] review AI applications in cartography, highlighting issues of epistemic authority and stigmatization in GeoAI outputs; and Oluoch [32] explores the ethical implications of combining AI and geographic information technologies from a multidisciplinary standpoint. Yet, much of the existing work remains declarative or case-specific, and a systematic, integrated ethical framework that bridges policy requirements and technical-operational guidance remains underdeveloped.
The aim of this study is therefore to move beyond the simple enumeration or transplantation of general AI ethics principles and to construct a GeoAI-specific ethical framework that reflects risks inherent in geospatial properties—spatiality, contextuality, and spatial autocorrelation. Specifically, this paper makes two contributions: (i) it clarifies the central problematics of GeoAI ethics by synthesizing spatially specific risks and governance gaps across policy and scholarly corpora, and (ii) it proposes an operational twelve-axis framework with guidelines and checklists that translate these requirements into implementable governance and evaluation items.
To this end, the study systematically selected recent international policy documents and academic papers [41] and applied independent coding and intercoder-reliability testing (Krippendorff’s α) using a predefined codebook to extract and normalize ethical criteria [42]. The resulting framework provides twelve ethical axes encompassing both universal AI ethics principles and GeoAI-specific dimensions, each accompanied by an operational checklist integrating policy, standards, and scholarly evidence. Ultimately, this framework goes beyond merely transplanting generic AI ethics principles into a spatial context by embedding the unique risks of GeoAI. Academically, it establishes a foundational basis for GeoAI ethics research; in policy terms, it fills institutional gaps across national systems; and practically, it offers actionable tools—checklists, model cards, and procurement standards—for responsible implementation. By integrating the ethical, legal, and social implications of GeoAI, this study lays the groundwork for building a responsible GeoAI ecosystem and for fostering international collaboration.

2. Background and Related Work

2.1. Geospatial Information and AI

Building on the characteristics outlined in Section 1, geospatial data have evolved into shared infrastructures for analysis and decision making, supported by interoperability standards and governance frameworks (e.g., UN-GGIM, OGC, and ISO/TC 211) [17,18,19,23]. INSPIRE and recent OGC specifications exemplify the shift toward harmonized, API-based SDIs that enable scalable GeoAI applications [19,20]. Within this governance landscape, the EU INSPIRE Directive represents a prominent example of legislating harmonization, interoperability, and metadata management to support environmental policy [20]. In parallel, technical interoperability has evolved from legacy OGC Web Services (e.g., WMS and WFS, and catalogue services for metadata discovery) and lightweight exchange formats such as GeoJSON toward API-based and 3D streaming–oriented specifications [19]. Catalogue services are critical in SDIs because they enable searchable metadata and dataset discovery, which directly affects provenance, reuse, and accountability. In particular, the adoption of 3D Tiles v1.1 strengthens large-scale interoperability for 3D cities and point clouds by integrating metadata structures and implicit tiling mechanisms, thereby supporting digital-twin and high-throughput visualization applications [19]. Beyond acquisition and interoperability, GeoAI workflows also depend on modeling decisions made during product scoping—e.g., defining target variables and class taxonomies, selecting spatial units of analysis, and constructing labels and ‘ground truth’. Such representational choices shape what becomes measurable and optimizable, and they can introduce downstream biases and value trade-offs that are not addressed by standards compliance alone.
Building on this maturing infrastructure, AI has been increasingly applied to automate and enhance key geospatial tasks such as object recognition, change detection, spatial prediction, and super-resolution reconstruction [23,43,44]. This convergence has crystallized into the emerging field of GeoAI, which has improved accuracy, scalability, and efficiency across urban, environmental, and remote-sensing applications. The EU’s Destination Earth initiative, for example, integrates high-resolution numerical models, high-performance computing (HPC), and AI to enable digital-twin simulations that transform maps into predictive testbeds for climate and environmental scenarios [28]. Policy initiatives have also accelerated “AI-ready” geospatial ecosystems: the High-Value Datasets regulation mandates free, API-based, machine-readable access to key public data domains (including geospatial, Earth observation, and meteorology), strengthening institutional conditions for AI training and evidence-based services [45]. In addition, the EU AI Act establishes a risk-based governance framework that imposes transparency, audit, and data-governance obligations for high-risk AI systems, further reinforcing the transition toward AI-enabled SDIs [8].
Taken together, global standards, open data policies, 3D streaming specifications, and digital-twin investments are reshaping the end-to-end “generation–processing–utilization” cycle of geospatial information [19,20,21,22,23]. This transformation enables powerful GeoAI applications and supports evidence-based spatial planning and democratic accountability, but it also increases the ethical stakes of geospatial data use, motivating the need to examine how governance and ethical safeguards should be interpreted and operationalized in GeoAI-specific contexts.

2.2. Ethics in GeoAI

The rapid evolution of AI has improved efficiency across sectors but has also amplified ethical challenges such as bias, opacity, and unclear accountability [40,46,47,48,49]. In response, the international community has moved from broad, declarative principles toward more structured governance instruments for responsible AI. The OECD AI Principles and UNESCO’s Recommendation provide widely adopted reference points for human-centered values, fairness, transparency, accountability, and privacy protection [5,7]. More recently, the EU AI Act introduced a legally enforceable, risk-based regulatory approach that mandates transparency, auditing, and data-governance obligations for high-risk AI systems [8]. These developments collectively signal that AI ethics is increasingly treated as an operational governance requirement rather than solely a normative aspiration.
However, when AI is applied to geospatial information, simply transferring general AI ethics principles is insufficient [29]. Geospatial data encode where, when, and with whom human activity occurs, and even limited spatiotemporal traces may enable re-identification or sensitive inference [50]. Moreover, GeoAI introduces spatially specific risks that are not fully captured by general AI frameworks, including linkage attacks, surveillance and tracking misuse, spatial aggregation effects (e.g., the MAUP), and geographic representational imbalances that can translate into inequitable outcomes [30,37]. Therefore, GeoAI ethics requires explicit attention to Geo-privacy, spatial fairness and bias, and data provenance and quality, alongside universal ethical dimensions such as transparency, accountability, and safety [23,47]. Because GeoAI outputs are frequently used in public planning and resource allocation, GeoAI ethics also intersects with democratic values—procedural transparency, contestability, and meaningful participation in spatial decision making—beyond individual privacy protection.
Policy- and practice-oriented initiatives have begun to articulate such domain-specific concerns. The Locus Charter emphasizes minimal collection, purpose limitation, anonymization, protection of vulnerable groups, and accountability as foundational principles for responsible location data use [34]. Operationally, these principles imply reducing spatial/temporal resolution and retention to what is necessary and preventing secondary uses of location data beyond the declared purpose; this paper later translates them into checklist items under Geo-privacy (Section 5.1). The UK Geospatial Commission promotes the ABC Principles (Accountability–Bias–Clarity) to build public confidence, foregrounding practical communication of data journeys and rights [36]. UN-GGIM’s Integrated Geospatial Information Framework recommends institutionalizing data ethics, privacy, quality management, and standardization within national geospatial governance [21]. These efforts provide important guardrails, yet they are often dispersed across documents with different scopes and levels of normativity and remain primarily principle-oriented, leaving ambiguity about how to translate them into concrete lifecycle controls and auditable evidence for GeoAI systems.
Academic research complements these institutional efforts by clarifying GeoAI-specific risks and proposing technical responses. Work on Geo-privacy and community-level privacy highlights the need for privacy-by-design and education for responsible practice [29]. Other studies examine sustainability, diversity, and bias from philosophical and normative perspectives [30], and reviews in cartography emphasize risks of epistemic authority, stigmatization, and integrity problems in map production and visualization [31]. Technical contributions propose privacy-preserving methods such as synthetic trajectory generation and federated learning to manage privacy–utility trade-offs under realistic operational constraints [37,38]. Despite these advances, the GeoAI ethics literature remains comparatively nascent: much of the discussion is either principle-oriented (high-level) or solution-specific (case- or technique-based), and there remains limited guidance on systematically connecting policy expectations, technical safeguards, and operational governance into an integrated framework aligned with geospatial specificities [29,30,31,32]. For example, policy charters articulate normative requirements (e.g., minimal collection and purpose limitation) [34,36], whereas technical studies often target a specific safeguard (e.g., privacy-preserving learning) [37,38]; however, guidance on aligning these strands into end-to-end evaluation and governance procedures remains underdeveloped. This gap motivates the need for a structured synthesis that consolidates universal AI ethics with GeoAI-specific risks into actionable, verifiable, and auditable ethical axes.

3. Materials and Methods

3.1. Systematic Literature Search and Selection (PRISMA 2020)

To derive the ethical criteria for GeoAI, international policy documents and scholarly articles related to AI ethics were comprehensively collected and analyzed. Materials were selected according to content-based criteria, and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses; Table A1) was employed to ensure both international and academic representativeness [41,51,52]. First, to construct the initial candidate set, authoritative documents—issued by international organizations, standards bodies, government agencies, and peer-reviewed journals—were gathered from the AI ethics literature, specifically those that explicitly present ethical or governance principles and directly or indirectly address guidelines or standards relevant to GeoAI. Subsequently, the search engines and repositories to be queried were defined, and the corresponding search keywords were compiled. The literature search was conducted from 5 September 2025 to 3 October 2025. Inclusion criteria required that a record (i) was published within the target period (2019–2025), (ii) originated from an official/primary policy, standards, or governmental source or a peer-reviewed scholarly outlet, and (iii) explicitly articulated ethical or governance principles relevant to AI/GeoAI, including those addressing location or geospatial data risks. Records were excluded if they were duplicates, secondary/press materials, outside the time window, lacked substantive ethics/governance content, or showed weak linkage to GeoAI ethical axes. All identified records were deduplicated, screened at the title/abstract level, and subsequently assessed in full text using the predefined inclusion/exclusion criteria; exclusion decisions were recorded with explicit reasons, and ambiguous cases were resolved by re-checking the full text to ensure consistent application.
In accordance with PRISMA 2020, the entire process is organized into four phases—identification, screening, eligibility, and inclusion—with the counts at each phase and the reasons for exclusions reported [41,51,52]. To improve traceability, the study-specific post-selection workflow (ethical principle extraction and coding) is summarized in Table 1 and Table 2, including explicit decision rules for distinguishing core vs. indirect principles.

3.2. Policy Document–Based Extraction Method (PDEP-NcM)

Establishing ethical principles for the responsible use of AI and GeoAI must be grounded in policy reports published by international organizations and national governments. However, each report articulates ethical principles in different ways. Some documents explicitly state principles or requirements in the formal provisions, while others address ethical implications indirectly within the context of operational procedures, governance structures, or collaborative mechanisms. Therefore, to achieve a consistent and systematic analysis, a clear methodological framework is required to extract and categorize ethical principles from within each document. In this study, the searched policy reports were analyzed using the Policy Document-based Ethical Principle Extraction and Normativity Classification Methodology (PDEP-NcM), which identifies and classifies ethical principles explicitly or implicitly presented in the texts. The PDEP-NcM approach is based on three key analytical components, as summarized below [41,51,52] (Table 1).

3.2.1. Normative Strength Detection

When the text includes strong normative language such as shall, must, is required to, or prohibited, the statement is classified as a core ethical principle. Conversely, if the document uses softer, advisory language such as should, recommend, encourage, may, or facilitate, it is classified as an indirect ethical principle.
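As an illustration only (not part of the study's published tooling), the modal-marker rule above can be sketched as a simple keyword classifier; the marker lists and function name are assumptions that merely restate the examples in this subsection:

```python
import re

# Deontic markers restating the examples above; the exact lists are illustrative.
CORE_MARKERS = ["shall", "must", "is required to", "prohibited"]
INDIRECT_MARKERS = ["should", "recommend", "encourage", "may", "facilitate"]

def classify_normativity(statement: str) -> str:
    """Classify a statement as a core or indirect ethical principle.

    Core: strong deontic language (shall, must, ...).
    Indirect: softer, advisory language (should, recommend, ...).
    Returns "none" when no marker is present.
    """
    text = statement.lower()
    for marker in CORE_MARKERS:
        if re.search(r"\b" + re.escape(marker) + r"\b", text):
            return "core"
    for marker in INDIRECT_MARKERS:
        if re.search(r"\b" + re.escape(marker) + r"\b", text):
            return "indirect"
    return "none"
```

In practice such keyword matching would only be a first pass; as described in Section 3.2.3, each hit is still reviewed in context by human coders.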

3.2.2. Structural Signal Identification

Items presented in tables of contents, clauses, bullet points, tables, or figures are treated as core ethical principles, as they explicitly define standards or requirements. In contrast, items appearing within sections related to implementation, operation, governance, or international cooperation are regarded as indirect ethical principles, reflecting implicit ethical intent rather than explicit rules.

3.2.3. Extraction and Normalization Process

The extraction and normalization procedure comprised four phases. Phase 1 scanned the document structure to identify sections, bullet points, and lists containing principles or requirements. Phase 2 collected sentences containing normative expressions and classified each item as a core principle or an indirect principle. Phase 3 normalized synonymous or conceptually equivalent terms under unified labels while preserving the original terminology and standardized them into GeoAI ethical principles. Phase 4 attached metadata (document name, section, and page references) to each extracted item to ensure traceability and verifiability throughout the analysis.
Each document’s ethical principles were categorized into two types according to whether they were explicitly defined or implicitly conveyed. Core ethical principles refer to items that are directly articulated or enumerated in the form of principles, requirements, clauses, or frameworks within the document. Indirect ethical principles, on the other hand, are those implicitly presented or supported through references to operational methods, governance structures, cooperative mechanisms, procedures, or implementation processes. The classification of core and indirect ethical principles was determined based on the degree of explicitness and form of expression within each document. In other words, principles that were explicitly stated in the text were classified as core ethical principles, while elements linked to operational or institutional mechanisms were categorized as indirect ethical principles. These two categories were then integrated and standardized to form the set of GeoAI ethical principles extracted from policy reports in Section 4.2. To operationalize PDEP-NcM consistently, a shared codebook defined (i) the unit of analysis (sentence/bullet/clause), (ii) normativity decision rules for core vs. indirect principles, and (iii) labeling conventions for synonym normalization; coding was verified through duplicate review and adjudication, with reconciliation decisions logged for traceability.
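To make the Phase 4 metadata attachment concrete, a minimal record for one extracted item might look like the sketch below; the field names and example values are hypothetical, not the study's actual schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class ExtractedPrinciple:
    """One coded text unit with traceability metadata (cf. Phase 4).

    All field names and the example values below are illustrative assumptions.
    """
    snippet: str      # verbatim evidence text from the source
    label: str        # normalized GeoAI ethical-axis label
    normativity: str  # "core" or "indirect"
    document: str     # source document name
    section: str      # section reference
    page: int         # page reference

item = ExtractedPrinciple(
    snippet="Location data shall be limited to the stated purpose.",
    label="Geo-privacy",
    normativity="core",
    document="Example policy report",
    section="4.2",
    page=12,
)
record = asdict(item)  # serializable form for an auditable coding log
```

Keeping each coded unit paired with its document, section, and page reference is what allows exclusion and reconciliation decisions to be re-checked later.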

3.3. Scholarly Literature–Based Extraction Method (SEPE-NcM)

To systematically extract and classify ethical principles based on academic discourse, this study proposes the Scholarly Ethical Principle Extraction and Normativity Classification Methodology (SEPE-NcM). SEPE-NcM builds upon an integrated framework that combines content analysis and document analysis, determining the strength of normativity by integrating linguistic markers of deontic modality with structural cues within texts. Through this approach, statements explicitly presenting principles, guidelines, requirements, or frameworks are identified as core ethical principles, while those that indirectly support ethical reasoning through discussions of risks, limitations, governance mechanisms, operational processes, or design principles are categorized as indirect ethical principles. By integrating both types, the methodology yields a final and essential set of ethical principles that are most relevant for defining GeoAI ethics [42,53,54].
The methodological foundation consists of three layers: First, Systematic text-level coding and categorization follow standard procedures of content analysis. Segmentation of the original text, context preservation, codebook development, axial coding, and traceability of evidence snippets serve as key mechanisms for ensuring replicability and reliability. Second, Normativity assessment distinguishes the strength of normative expressions through grammatical and pragmatic interpretation of deontic markers such as shall, must, should, recommend, and prohibit. When deontic verbs appear together with ethical objects, agents, and application targets within explicit lists, tables, or frameworks, they are categorized as core principles; when expressed indirectly through descriptions of risks, limitations, recommendations, or governance/process suggestions, they are categorized as indirect principles. Third, Normalization of ethical axes draws from internationally verified meta-analyses of common value dimensions—privacy, transparency/explainability, fairness/non-discrimination, accountability, and safety/security—and directly maps them from the textual descriptions within each paper.
The SEPE-NcM procedure is summarized in Table 2. First, the included academic papers were screened for ethics-relevant sections (e.g., ethics discussions, governance proposals, limitations, and methodological reflections), and candidate normative statements were extracted. Next, each statement was mapped to the shared GeoAI ethical codebook and classified as core or indirect based on deontic expressions, explicit recommendations, and structural placement. Finally, synonymous terms were normalized, and each item was recorded with citation and traceability metadata (paper title, section, page or paragraph, and supporting text snippet) and summarized into paper-specific summary cards and an integrated matrix. Two coders independently applied the shared codebook and recorded evidence snippets; discrepancies were resolved through consensus meetings by reapplying the normativity rules, and all reconciliation decisions were documented to ensure consistent interpretation.
The validity of SEPE-NcM is established on two fronts: First, Normative validity—The common ethical axes identified in international meta-reviews are reaffirmed within individual papers, and by incorporating the tension between declarative principles and implementable norms as a classification criterion, the method enhances interpretive precision and practical relevance. Second, Methodological validity—Reproducibility and reliability are ensured through standardized content-analysis procedures, codebooks, traceable evidence snippets, and double-coding verification. When necessary, PRISMA flow documentation is reported in parallel to ensure transparency in search, selection, and reporting processes. In summary, SEPE-NcM is a procedural framework designed to derive ethical principles solely from the textual content of academic literature, combining linguistic deontic markers and structural normative signals to rigorously distinguish and normalize core and indirect principles. This framework incorporates an extended labeling system that reflects the domain specificity of GeoAI, producing comparable analytical matrices through coded evidence and dual coding. As a result, it enables the extraction and organization of GeoAI ethical principles across diverse types of academic research in a replicable and verifiable manner, with the outcomes summarized in Section 4.3.

3.4. Determination of GeoAI Ethical Axes

The final establishment of the GeoAI ethical framework was carried out through a systematic integration and convergence process between the ethical axes extracted from policy reports and those derived from academic literature. Specifically, this study evaluated the normativity of each extracted ethical axis, standardized synonymous or conceptually similar terms into common standard axes, and prioritized them by removing redundancies while reflecting the unique risks of GeoAI [5,20,55,56]. All normalization, merge/split, and prioritization decisions were documented with source pointers (document/paper, section, and evidence snippet) to maintain an auditable trail from raw text units to final ethical axes. The final set of GeoAI ethical axes was validated by examining the number of core and indirect citations, the number of sources in which each axis appeared, and the coverage ratio (i.e., the proportion of sources that explicitly mention the axis). The extracted textual units were coded consistently against a common codebook, and inter-coder reliability was evaluated using Krippendorff’s α to ensure reproducibility [42]. Finally, both the frequency of occurrence for each axis and a cross-document coverage indicator were reported to strengthen the quantitative basis of the findings [1,40].
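For illustration, the two quantitative checks named above—the coverage ratio and a two-coder Krippendorff's α (nominal level, complete data)—could be computed as in the sketch below; this is a simplified reimplementation for exposition, not the study's analysis code:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data with no missing values.

    `units` is a list of label tuples, one tuple per coded text unit and
    one label per coder, e.g. [("core", "core"), ("core", "indirect")].
    """
    o = Counter()  # coincidence counts o[(c, k)]
    for labels in units:
        m = len(labels)
        for c, k in permutations(labels, 2):
            o[(c, k)] += 1 / (m - 1)
    n = sum(o.values())  # total number of pairable values
    marginals = Counter()
    for (c, _), v in o.items():
        marginals[c] += v
    d_o = sum(v for (c, k), v in o.items() if c != k) / n
    d_e = sum(marginals[c] * marginals[k]
              for c in marginals for k in marginals if c != k) / (n * (n - 1))
    return 1 - d_o / d_e

def coverage_ratio(sources_mentioning_axis, total_sources):
    """Proportion of sources that explicitly mention a given axis."""
    return len(sources_mentioning_axis) / total_sources
```

Note that this sketch assumes at least two distinct labels occur in the data (otherwise expected disagreement is zero and α is undefined); dedicated packages handle missing data and other measurement levels.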

3.4.1. Normalization Stage: Grouping Synonyms and Equivalent Concepts into Standard Axes

First, the diverse ethical terms identified across documents were integrated and normalized into a unified set of standard ethical axes. This process extracted the common denominators of key values repeatedly emphasized in international frameworks—such as those from OECD, UNESCO, EU (EUR-Lex), Council of Europe, UN-GGIM, ISO/IEC, NIST, and OGC—while incorporating the spatial specificities emphasized in GeoAI research [5,7,10,30,55]. For example, the Privacy/Data Protection axis includes data minimization, anonymization/pseudonymization, privacy-by-design, and re-identification prevention [5,7]. The Fairness/Bias axis integrates non-discrimination, representativeness, spatial bias mitigation (including MAUP), and geographic justice [10]. Additional ethical dimensions include Transparency and Explainability, Accountability and Auditability, Safety, Security and Robustness, Human Oversight and Participation, Data Quality and Provenance, Social and Environmental Responsibility, Community and Vulnerable Group Protection, Lifecycle Risk Governance, Misuse and Surveillance Prevention, and Accessibility and Inclusion [20,30,55]. At this stage, technical mechanisms such as geo-masking, differential privacy, synthetic trajectory generation, and federated learning were treated as operational sub-labels linked to their corresponding higher-level axes [56,57].

3.4.2. Terminological Integration Stage: Removing Redundancies and Merging Related Items

In this stage, items expressing the same or similar ethical intent were consolidated under unified axes. For example, Transparency and Explainability were merged into a single Transparency–Explainability axis. Accountability, Audit, Appeal, and Remedy were merged into the Accountability–Audit–Remedy axis. Safety, Security, and Robustness were grouped into one comprehensive axis [7,55]. Detailed implementation techniques were absorbed as sub-components under their corresponding principles—e.g., differential privacy, geo-masking, and synthetic trajectories under Privacy, and federated learning, encryption, and model protection under Security [57]. Policy mechanisms such as impact assessments, conformity assessments, and documentation were grouped under Lifecycle Risk Governance, while spatially unique issues (e.g., MAUP, geographic representativeness, geofencing misuse prevention) were retained as distinct axes [10,30].

3.4.3. Prioritization Stage: Selection Criteria

To converge dozens of extracted ethical items into a definitive set of core ethical axes, three prioritization criteria were applied. First, frequency and consensus: items repeatedly identified across multiple documents—such as privacy, fairness, transparency, accountability, and safety—were prioritized [5,7,55]. Second, domain specificity (GeoAI-specificity): ethical axes addressing spatially distinctive risks—such as geographic bias, location privacy, data provenance, and surveillance/misuse prevention—were retained as standalone axes [10,30]. Third, operational feasibility: priority was given to axes directly linked to management, auditing, evaluation, and technical implementation, such as Data Quality as well as Human Oversight and Participation [20,56]. Accordingly, items of importance that were nonetheless subordinate to higher axes were absorbed into broader categories. For example, consent and right-to-information were positioned under Privacy and Transparency; open data and open source were included under Transparency/Provenance; and benchmarking and verification were classified under Safety/Accountability/Provenance [55,57].

3.4.4. Derivation of the Final GeoAI Ethical Axes

Through the sequential processes of normativity judgment, normalization, integration, and prioritization, the policy-based universal axes and domain-specific axes from academic research were ultimately consolidated into final GeoAI ethical axes. These axes serve as header labels that enable the mapping of detailed items from both policy (principles/procedures) and technical (methods/operations) perspectives. They are designed to comprehensively encompass both normative policy guidance and practical implementation mechanisms, allowing expansion into concrete tools such as checklists, model cards, and procurement standards [56,58]. Accordingly, this framework acts as a bridge between policy and technology, providing a foundational structure to strengthen the ethical, legal, and social trustworthiness of GeoAI [10,30].

4. Results

4.1. Search and Selection of Policy Reports and Scholarly Literature

To systematically derive the ethical principles of GeoAI, this study comprehensively collected and analyzed international policy documents (including laws, regulations, guidelines, and standards) and academic papers related to AI ethics. The selection of materials was based not on geographic or national distinctions but on content-based criteria, and the process followed the PRISMA 2020 (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) protocol to ensure both international representativeness and methodological rigor [41]. Given the rapid evolution and growing discourse surrounding AI ethics, the target literature consisted of documents published between 2019 and 2025 that explicitly addressed ethical or governance principles, directly discussed GeoAI or geospatial data-related risks, guidelines, or standards, and were officially issued by reputable entities such as international organizations, standardization bodies, government agencies, or peer-reviewed journals. Databases and official repositories consulted included OECD, UNESCO, EU (EUR-Lex), Council of Europe, UN-GGIM, ISO/IEC, NIST, OGC, Scopus, Web of Science, and Google Scholar. The search employed keywords such as “AI ethics,” “trustworthy AI,” “responsible AI,” “GeoAI,” “location data,” “spatial fairness,” “provenance,” and “geospatial ethics.” After duplicate removal, the identified publications were screened based on titles and abstracts in the first phase. In the second phase, the full texts were reviewed in detail, and only documents that demonstrated clear relevance to the research objectives were retained. The entire procedure followed the four-stage PRISMA 2020 workflow—identification, screening, eligibility assessment, and inclusion—with the number of records and exclusion reasons reported for each stage.
A stratified corpus was constructed to ensure a balanced synthesis of evidence from policy and scholarly literature on GeoAI ethics. Document selection followed the PRISMA workflow (identification, screening, eligibility, inclusion) and was designed to maintain representativeness across document type (laws/regulations, international recommendations, standards/management systems, policy guidance/scholarly articles), region (UN/G7–OECD/EU/UK/North America/other), time period (2019–2025), and ethical theme axes (general AI ethics axes and GeoAI-specific axes). The numbers of items at each PRISMA stage—identification, screening, eligibility, and final inclusion—by set (policy reports vs. scholarly articles) are summarized in Table 3.
During the identification stage, a total of 210 records were retrieved (102 policy/standards/guidance; 108 scholarly). After de-duplication, 45 records were removed (21 policy; 24 scholarly), yielding 165 unique records (81 policy; 84 scholarly). At title/abstract screening, 115 records were excluded according to pre-specified criteria. Exclusion reasons were as follows: for policy reports, 58 were removed (Topic mismatch with AI/GeoAI ethics, 31; Non-primary/press/secondary, 11; Insufficient ethics/governance content, 16); for scholarly articles, 57 were removed (Topic mismatch with AI/GeoAI ethics, 29; Insufficient ethics/governance content, 15; Outside time window (<2019) or outlet unclear, 13).
At eligibility (full-text) assessment, 18 records were excluded. Exclusion reasons were as follows: for policy reports, 7 were removed (Not primary/official source, 3; Insufficient operationalization (governance/evaluation/audit), 4); for scholarly articles, 11 were removed (Limited methodological contribution to GeoAI ethics axis derivation, 9; Weak linkage to GeoAI ethics axes, 2).
In the final inclusion stage, 32 publications were selected for the corpus (16 policy reports; 16 scholarly articles) (Table 3). This corpus satisfies structural representativeness and methodological defensibility under the PRISMA 2020 workflow and is designed to capture both the policy perspective of what should be done and the scholarly perspective of how it can be implemented. The 16 policy documents ultimately selected encompass five hierarchical dimensions—international consensus (soft law), legally binding regulations (hard law), technical standards, GeoAI-specific guidelines, and national implementation frameworks—thereby providing a comprehensive and coherent analytical basis for research on GeoAI ethics [16,21,22,35,36]. Table A2 summarizes the key characteristics of these sixteen policy reports. Based on this foundation, the present study derives the core ethical axes of GeoAI, evaluates their alignment with international norms, and proposes practical and effective ethical guidelines applicable to the field of geospatial information.
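As an arithmetic consistency check, the PRISMA stage counts reported above can be reproduced programmatically. The following minimal sketch uses a dictionary layout that is purely illustrative; the counts themselves are taken directly from the text:

```python
# Illustrative arithmetic check of the PRISMA 2020 flow counts reported above.
flow = {
    "identified": {"policy": 102, "scholarly": 108},
    "duplicates_removed": {"policy": 21, "scholarly": 24},
    "screening_excluded": {"policy": 58, "scholarly": 57},
    "eligibility_excluded": {"policy": 7, "scholarly": 11},
}

def included(counts, stream):
    """Records remaining after de-duplication, title/abstract screening,
    and full-text eligibility review for one document stream."""
    return (counts["identified"][stream]
            - counts["duplicates_removed"][stream]
            - counts["screening_excluded"][stream]
            - counts["eligibility_excluded"][stream])

assert included(flow, "policy") == 16      # 16 policy reports
assert included(flow, "scholarly") == 16   # 16 scholarly articles
assert included(flow, "policy") + included(flow, "scholarly") == 32
```

Such a check is trivial but useful when the flow diagram is regenerated after corpus updates, since any inconsistency between stage counts surfaces immediately.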
In addition, the 16 academic papers ultimately selected serve to complement and expand the normative framework established by the policy documents (Table A3). These studies range from universal analyses of AI ethics to discussions addressing the domain-specific characteristics of GeoAI—including Geo-privacy, Spatial Fairness and Bias, Data Provenance and Quality, and Explainability—as well as technical responses such as synthetic trajectories, differential privacy, federated learning, and GeoAI foundation models [2,40]. Methodologically, they combine multiple approaches—systematic literature review, theoretical inquiry, policy analysis, and empirical model development—to create an integrated framework that enables cross-validation across normative, technical, and social dimensions. Notably, the corpus gives greater weight to works published between 2023 and 2025, thereby reflecting emerging issues such as GeoAI and large-scale models, generative AI, and human-centered governance. To support transparency on corpus composition, Table A2 and Table A3 list the 32 included documents and summarize their institutional origins and key characteristics, including territorial scope and normative status (e.g., intergovernmental soft law, legally binding regulations, and technical standards). Table A4 and Table A5 further report the document-level ethical principles extracted from each policy and scholarly source, thereby indicating how individual documents informed the derivation of the twelve ethical axes. Because this review is a content-based synthesis rather than a jurisdiction-by-jurisdiction legal inventory, differences in legal binding force are treated as contextual metadata, while axis-level coverage across the full corpus is summarized in Table 4 and Table 5.

4.2. Ethical Principles Extracted from Policy Reports

Establishing ethical principles for the responsible use of GeoAI must be grounded in policy reports issued by international organizations and national governments. However, while some reports explicitly present principles or requirements as formal provisions, others address ethical implications indirectly in the context of operating procedures, governance arrangements, or collaboration mechanisms. In this study, 16 selected policy reports were analyzed, and the ethical axes were extracted using the Policy Document-based Ethical Principle Extraction and Normativity Classification Methodology (PDEP-NcM). For each document, the stated ethical principles were classified as core ethical principles or indirect ethical principles. Core ethical principles are items directly specified or enumerated in the form of principles, requirements, articles, or frameworks. Indirect ethical principles are items implicitly presented or supported through references to operating modes, governance structures, cooperation mechanisms, procedures, or implementation modalities. Accordingly, principles explicitly stated in the text were classified as core, whereas elements linked to operational or institutional arrangements were classified as indirect. These two categories were then integrated to normalize the GeoAI ethical principles detected across the 16 policy reports (Table A4), with each extracted item being linked to its source metadata (document, section) and assigned a normativity tag (core/indirect), as described in Section 3.2, to preserve traceability.
The policy reports present the five universal core axes of AI ethics—privacy, fairness, transparency, accountability, and safety—as a common foundation [5,6,7,17,21,22]. They also enhance operability and enforceability through risk-based approaches and lifecycle management, which encompass risk management systems, impact assessments, documentation and logging, conformity assessments, post-market monitoring, and corrective measures [8,59,60,61,62]. Documents directly related to GeoAI include location-data-specific principles such as data minimization, purpose limitation, anonymization/pseudonymization, proximity-based prioritization, periodic pseudonym renewal, and device-centric architectures, along with frameworks such as ABC (Accountability–Bias–Clarity) and standards for provenance and AI-readiness [16,34,35,36]. Overall, the policy reports collectively provide an institutional baseline for GeoAI ethics. Across the policy corpus, privacy protection, bias mitigation, documentation, and auditing are repeatedly emphasized across the design–deployment–operation lifecycle [8,16,18,61,62], and data minimization and purpose limitation are consistently highlighted as foundational principles for location data [16,34].

4.3. Ethical Principles Extracted from Academic Literature

The Scholarly Ethical Principle Extraction & Normativity Classification Methodology (SEPE-NcM) was employed to systematically derive and classify ethical principles from the academic literature. Built on an integrated framework of content analysis and document analysis, SEPE-NcM assesses the strength of normativity by combining linguistic markers of deontic modality with structural signals in the text. Accordingly, statements that explicitly present principles, guidelines, requirements, or frameworks are categorized as core ethical principles, whereas items indirectly supported through discussions of risks, limitations, governance, operational procedures, or design principles are categorized as indirect ethical principles. These two categories were then integrated to determine the set of GeoAI ethical principles extracted from the 16 scholarly articles (Table A5).
While academic research shares the universal ethical axes established in policy frameworks, it presents a more refined articulation of GeoAI-specific risks and technical responses. Representative themes repeatedly highlighted include Geo-privacy (risks of re-identification and linkage), geographic bias and the Modifiable Areal Unit Problem (MAUP), data quality and provenance, explainability and interpretability, and human-in-the-loop involvement [29,30,31,32]. Moreover, the literature introduces practical protection and utilization techniques such as synthetic trajectories (CATS), differential privacy/geo-indistinguishability, geo-masking, and federated learning [37,38], while also integrating social and environmental dimensions—including sustainability, geographic justice, and community participation—into the operational logic of GeoAI [24,30,39,63]. Across the scholarly corpus, high-level policy principles are operationalized through field-level and technical practices such as privacy-by-design, bias measurement and mitigation workflows, provenance and AI-ready standardization, and decentralized (federated) learning [2,3,40].

4.4. Derivation of the Final 12 GeoAI Ethical Axes

Following the integration procedure described in Section 3.4, the universal axes from the policy corpus and the domain-specific axes from the scholarly literature were consolidated into twelve ethical axes. This integration was carried out through a sequence of systematic steps: core and indirect determinations were first identified; normalization then grouped synonyms and near-synonyms; terminology was consolidated through the removal of duplicates and the merging of similar terms; prioritization established the relative importance of each concept; and the refined elements were finally synthesized into the twelve ethical axes presented in Table 4. These twelve axes serve as header labels that make it straightforward to map detailed items from both policy (principles and procedures) and technology (methods and operations) onto a single axis. They are designed to encompass policy principles and technical implementation simultaneously and, in practice, can be expanded into checklists, model cards, and procurement contract criteria as concrete, operational requirements [56,58], thereby acting as a bridge between policy and technology that strengthens the ethical, legal, and social trustworthiness of GeoAI [10,30].
Although several axes in Table 4 may appear overlapping (e.g., Transparency, Accountability and Auditability, Human Oversight, and Lifecycle Governance), this overlap largely reflects differences in governance function rather than redundancy of principles. The axes can be interpreted functionally as (i) foundational enablers (Data Provenance and Quality; Transparency), (ii) governance controls that ensure accountable implementation across the lifecycle (Accountability and Auditability; Safety/Security/Robustness; Human Oversight; Lifecycle Governance), and (iii) societal/impact objectives (Geo-privacy; Spatial Fairness and Bias; Participation and Stakeholder Engagement; Inclusion and Accessibility; Misuse Prevention; Public Benefit and Sustainability). Under this view, Transparency enables auditable accountability and informed human intervention, while Lifecycle Governance coordinates how all axes are applied across the design–deployment–operation stages, clarifying their complementary roles. Notably, although the axes are functionally distinct, implementation may involve practical tensions (e.g., Geo-privacy vs. Transparency, Security vs. Data Openness, or Fairness vs. Model Accuracy); these trade-offs are discussed as a key operational consideration in Section 6.
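For practitioners instantiating the axes as governance artifacts, the functional grouping above can be expressed as a machine-readable scaffold. The grouping keys below are illustrative labels following the text, not a published schema:

```python
# Illustrative scaffold mapping the twelve ethical axes to their governance
# functions as grouped in the text; key names are illustrative, not normative.
GEOAI_ETHICAL_AXES = {
    "foundational_enablers": [
        "Data Provenance and Quality",
        "Transparency",
    ],
    "governance_controls": [
        "Accountability and Auditability",
        "Safety (Security and Robustness)",
        "Human Oversight and Human-in-the-Loop",
        "Lifecycle Governance",
    ],
    "societal_impact_objectives": [
        "Geo-privacy",
        "Spatial Fairness and Bias",
        "Participation and Stakeholder Engagement",
        "Inclusion and Accessibility",
        "Misuse Prevention",
        "Public Benefit and Sustainability",
    ],
}

# A checklist or model-card generator can iterate over the scaffold directly.
assert sum(len(axes) for axes in GEOAI_ETHICAL_AXES.values()) == 12
```

Keeping the grouping in data rather than prose allows the same structure to drive checklists, model cards, and procurement templates without manual re-transcription.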
The twelve final GeoAI ethical axes were designed to encompass both the universal principles of general AI ethics and the unique ethical characteristics of geospatial AI. The five general AI ethics axes—repeatedly emphasized in the OECD AI Principles, UNESCO Recommendation, and EU AI Act—serve as foundational dimensions: Transparency, Accountability, Safety/Robustness/Security, Human Oversight, and Public Benefit/Sustainability. In addition, the seven GeoAI-specific axes, reflecting the distinctive properties of geospatial information such as location, spatial context, map visualization, and data interoperability, include Geo-privacy, Data Provenance and Quality, Spatial Fairness and Bias, Participation/Community Involvement, Lifecycle Governance, Misuse and Surveillance Prevention, and Inclusion and Accessibility. The reliability of the twelve selected ethical axes was verified through an evaluation of core/indirect citation counts, the number of documents referencing each axis, the coverage ratio (percentage of documents explicitly addressing each axis), and Krippendorff’s α for intercoder reliability (Table 5).
An examination of the coverage ratio and the diversity of document types and regions confirmed that all twelve axes consistently appeared across multiple sources and time periods [5,7]. All extracted text segments were independently double-coded against the predefined codebook, and intercoder agreement was verified through Krippendorff’s α to ensure reproducibility [42]. Finally, the frequency of occurrence and the degree of cross-document consensus (coverage ratio) were reported to provide quantitative evidence supporting the adoption of the twelve axes [1,40].
Two quantitative indicators were used to assess the consistency of the extracted ethical axes: intercoder reliability, calculated as Krippendorff’s α (or Cohen’s κ) for each axis, and axis-level frequency and coverage, indicating the proportion of documents in which each axis appeared in either core or indirect form. Intercoder reliability ranged between Krippendorff’s α = 0.76 and 0.86, exceeding the acceptable threshold (≥0.67) for all axes, with particularly high consistency observed for Transparency (α = 0.86) and Geo-privacy (α = 0.84) [42]. The coverage ratio of individual axes ranged from 56.3% to 81.3%, demonstrating that all twelve axes were repeatedly identified across a broad set of documents and supporting both the representativeness and consensus validity of the resulting ethical framework.
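Both indicators are straightforward to compute. The following sketch implements nominal-scale Krippendorff’s α using the standard coincidence-matrix formulation, together with the coverage ratio; the rating lists shown are illustrative, not the study’s actual coding records:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Nominal-scale Krippendorff's alpha over a list of per-unit rating lists
    (one rating per coder); units with fewer than two ratings are skipped."""
    o = Counter()  # coincidence matrix, each unit weighted by 1/(m - 1)
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        for c, k in permutations(ratings, 2):
            o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _k), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    d_o = sum(w for (c, k), w in o.items() if c != k)  # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e

def coverage_ratio(n_mentioning, n_total):
    """Share of corpus documents mentioning an axis (core or indirect)."""
    return n_mentioning / n_total

# Perfect double-coder agreement yields alpha = 1.0.
assert krippendorff_alpha_nominal([["privacy", "privacy"]] * 10) == 1.0
# For a 32-document corpus, the reported 56.3% and 81.3% coverage ratios
# correspond to 18 and 26 mentioning documents, respectively.
assert abs(coverage_ratio(18, 32) * 100 - 56.3) < 0.1
assert abs(coverage_ratio(26, 32) * 100 - 81.3) < 0.1
```

The coincidence-matrix form generalizes directly to more than two coders and to units with missing ratings, which is why it is preferred over pairwise percent agreement for reliability reporting.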

5. Proposed Guidelines with Checklists

To clarify scope, the following checklists are intended as GeoAI-focused extensions and spatial reinterpretations of widely used AI governance checklists (e.g., risk management, TEVV (testing, evaluation, verification, and validation) reporting, and model/data documentation templates), rather than standalone replacements. Items that may appear generic are retained only when they require spatially explicit operationalization (e.g., CRS (coordinate reference system), spatial resolution/aggregation, map visualization, and location-based rights risks), while GeoAI-specific items are those that arise directly from spatial specificity.

5.1. Geo-Privacy

Geospatial information inherently reveals individuals’ everyday trajectories—the “where–when–with whom” of their lives—and when combined with other datasets, it poses significant risks of re-identification and linkage [29]. Due to this sensitivity, GeoAI systems must integrate purpose limitation, data minimization, robust anonymization/pseudonymization, limited retention and post-deletion, and device-centered architectures minimizing centralized trust throughout the entire data lifecycle [16]. The Locus Charter, an international guideline specifically addressing location data, defines privacy protection, prevention of individual identification, data/intrusion minimization, protection of vulnerable groups, and accountability as its core principles [34]. Similarly, the UK Geospatial Commission emphasizes building public confidence through the ABC framework (Accountability–Bias–Clarity), requiring clear communication of data journeys, rights notices, and accessible language [36]. Academic research further refines these risks and technical countermeasures. It identifies vulnerabilities such as re-identification and linkage attacks, data brokerage and resale, and falsified or deepfake geography, recommending the implementation of protection techniques including geo-masking, k-anonymity, differential privacy/geo-indistinguishability, encryption and homomorphic encryption, and synthetic data generation across all stages of GeoAI [29]. For trajectory data publication, validation procedures based on distribution-level k-anonymity, ε-geo-indistinguishability, and privacy–utility balance—combined with indicators for preventing re-identification (TUL) and home location clustering (HLC) detection—are considered effective [37,38]. Industry reports recommend operationalizing these principles and techniques within a governance framework that integrates data quality, provenance, and standardization [35].
In summary, Geo-privacy achieves effectiveness through the integration of four complementary rule sets (Table 6): Rights principles (purpose limitation, data minimization, and assurance of consent and user rights), Design principles (privacy-by-design, device-centric architectures, and periodic renewal of pseudonymous identifiers), Validation principles (attack simulation, re-identification testing, and indicator-based reporting), and Governance principles (documentation, auditing, and post-deletion accountability). When these components operate collectively, Geo-privacy ensures both ethical soundness and technical robustness in GeoAI practice [16,29,34,35,38].
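As a concrete illustration of the ε-geo-indistinguishability technique referenced above, the planar Laplace mechanism can be sketched as follows. The coordinate values, ε, and the approximate metre-to-degree conversion are illustrative assumptions; production systems should use a proper geodesic library:

```python
import math
import random

def planar_laplace_perturb(lat, lon, epsilon, rng=random):
    """Perturb a WGS84 coordinate with planar Laplace noise, the mechanism
    underlying epsilon-geo-indistinguishability. The radial offset follows
    Gamma(shape=2, scale=1/epsilon), the radial marginal of the 2-D Laplace
    distribution, and the bearing is uniform; epsilon is expressed per metre."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = rng.gammavariate(2.0, 1.0 / epsilon)  # expected offset = 2/epsilon metres
    # Crude metre-to-degree conversion near the input latitude (illustrative).
    d_lat = (r * math.cos(theta)) / 111_320.0
    d_lon = (r * math.sin(theta)) / (111_320.0 * math.cos(math.radians(lat)))
    return lat + d_lat, lon + d_lon

# With epsilon = 0.01 per metre, the expected displacement is about 200 m,
# i.e., roughly 0.002 degrees of latitude.
rng = random.Random(42)
new_lat, new_lon = planar_laplace_perturb(36.80, 127.15, epsilon=0.01, rng=rng)
assert (new_lat, new_lon) != (36.80, 127.15)
```

Because the noise scale depends only on ε and not on the underlying trajectory, the mechanism pairs naturally with the privacy–utility validation procedures (re-identification testing, indicator-based reporting) listed above.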

5.2. Data Provenance and Quality

Provenance refers to a systematic framework for tracking and documenting the origin, transformation, history, and responsible ownership of data and models throughout their entire lifecycle (Table 7). It serves as a core foundation for ensuring the trustworthiness, reproducibility, and accountability of GeoAI systems [35]. However, provenance documentation alone does not guarantee geospatial data quality; quality should also be evaluated as fitness-for-purpose relative to the original production objective and downstream decision context (i.e., who uses the data and for what decisions). Geospatial data are derived from multiple heterogeneous sources—such as satellites, aerial imagery, drones, IoT sensors, authoritative survey products (e.g., cadastral/topographic layers maintained by publicly mandated authorities), and administrative datasets—and undergo complex preprocessing steps, including projection transformation, orthorectification, mosaicking, and resampling. Each of these processes may introduce loss, distortion, or temporal misalignment, which in turn can propagate systematic errors in policy decision making and public services [35,36]. From a standardization perspective, ISO/IEC 23894 mandates the recording, reporting, and consultation of information throughout all stages of the risk management process—identification, analysis, evaluation, treatment, and monitoring [61]. Similarly, ISO/IEC 42001 institutionalizes provenance through an AI management system comprising policy, roles, responsibilities, documentation, traceability, and internal auditing [62]. The NIST AI Risk Management Framework (AI RMF) recommends comprehensive documentation and governance across testing, evaluation, verification, and validation (TEVV), continuous monitoring, and the management of supply chain and pre-trained model risks [60]. 
In GeoAI applications, transparency, explainability, and provenance are critical tools for mitigating authority bias in map production and visualization while improving interpretability of results [31]. From a privacy standpoint, given the risks of data brokerage and linkage attacks, making data lineage and usage pathways visible is a prerequisite for both rights assurance and risk control [29]. Likewise, international guidelines on location data emphasize data quality, accuracy, and accountability, which are operationalized through systematic provenance documentation and disclosure [36]. Industrial reports identify metadata standardization, data quality, integrity, and provenance as foundational pillars of GeoAI governance, recommending the use of AI-ready schemas and interoperable frameworks to facilitate procurement, validation, and auditing [35].
In summary, Data Provenance and Quality in GeoAI combines lineage documentation with fitness-for-purpose quality management, linking integrity, traceability, explainability, and auditability across the three critical layers of data, models, and operations [35,36,60,61,62].
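In practice, a minimal lineage record supporting such provenance tracking might be structured as follows. The class and field names are illustrative assumptions rather than a normative schema from the cited standards:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProvenanceStep:
    """One transformation in a geospatial dataset's lineage."""
    operation: str            # e.g., "orthorectification", "resampling"
    actor: str                # responsible organization or pipeline component
    timestamp: str            # ISO 8601
    parameters: dict = field(default_factory=dict)

@dataclass
class GeoDatasetRecord:
    """Minimal provenance record linking origin, transformations, and purpose."""
    source: str                    # e.g., "satellite", "cadastral survey"
    crs: str                       # coordinate reference system, e.g., "EPSG:4326"
    spatial_resolution_m: float
    intended_purpose: str          # fitness-for-purpose anchor
    lineage: List[ProvenanceStep] = field(default_factory=list)

    def add_step(self, step: ProvenanceStep) -> None:
        self.lineage.append(step)

# Illustrative usage: an aerial-imagery layer with one documented transformation.
record = GeoDatasetRecord(
    source="aerial imagery",
    crs="EPSG:5186",
    spatial_resolution_m=0.25,
    intended_purpose="urban land-cover mapping",
)
record.add_step(ProvenanceStep("orthorectification", "mapping agency",
                               "2025-03-01T00:00:00Z"))
assert len(record.lineage) == 1
```

Recording the intended purpose alongside the lineage operationalizes the fitness-for-purpose framing above: an auditor can compare each downstream use against the declared production objective.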

5.3. Spatial Fairness and Bias

Spatial Fairness refers to the principle of systematically identifying and mitigating geographic representational imbalance and spatial bias to ensure that GeoAI does not produce discriminatory impacts on specific regions or population groups (Table 8). Spatial datasets are inherently prone to bias due to factors such as urban/non-urban divides, sensor visibility (terrain, building density), administrative boundaries, and aggregation effects (MAUP: Modifiable Areal Unit Problem). These biases can accumulate throughout all stages of the workflow—data collection, preprocessing, modeling, and visualization—ultimately leading to stigmatization, spatial exclusion, or surveillance-based discrimination [24,30,31,32]. From an academic perspective, mitigation strategies include diagnosing spatial statistical structures (MAUP, sampling bias, visibility), evaluating performance disparities across regions or social groups (e.g., urban vs. rural, high vs. low density), and applying spatial cross-validation to assess geographical generalization. Furthermore, extending fairness metrics spatially—by computing precision, recall, false positive/negative rates, and geographic balance indices—is recommended [24,30,31]. From a policy and standards viewpoint, minimizing bias and ensuring non-discrimination are recognized as core ethical axes. For instance, Bias (B) in the UKGC’s ABC Framework, Fairness in WGIC, and harmful bias management in ISO/IEC 23894 and NIST AI RMF all emphasize stakeholder participation and transparent reporting to build public trust [35,36,60,61].
In summary, Spatial Fairness functions as a governance and design principle for quantifying, mitigating, and explaining geographic bias throughout the chain of data representativeness, aggregation, evaluation, visualization, and decision making. It requires community engagement and clear communication as essential components of ethical GeoAI practice [30,34,36].
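The region-wise disparity evaluation described above can be sketched as a per-region recall computation with a geographic disparity gap; the region labels and counts below are illustrative:

```python
def per_region_recall(results):
    """Compute recall per region from (true_positive, false_negative) counts."""
    return {region: tp / (tp + fn) for region, (tp, fn) in results.items()}

def disparity_gap(recalls):
    """Max-min recall gap across regions; 0 means geographically uniform recall."""
    values = list(recalls.values())
    return max(values) - min(values)

# Illustrative counts: a model performing better in urban than rural areas.
counts = {"urban": (90, 10), "rural": (60, 40)}
recalls = per_region_recall(counts)
gap = disparity_gap(recalls)
assert recalls["urban"] == 0.9
assert recalls["rural"] == 0.6
assert abs(gap - 0.3) < 1e-9
```

The same pattern extends to precision, false positive/negative rates, and spatially cross-validated scores, with the aggregation unit chosen deliberately so that MAUP effects are reported rather than hidden.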

5.4. Transparency

The outputs of GeoAI—such as maps, indicators, and alerts—carry strong visual authority, which can lead users to misunderstand the basis, scope, and limitations of decisions. Therefore, transparency in GeoAI must extend beyond mere disclosure to include understandable explanations (explainability), data and model lineage (provenance), clarity in visualization design, verification and documentation procedures, and effective communication with stakeholders [31] (Table 9). In the field of cartography, transparency, explainability, and provenance of both data and models are repeatedly identified as key ethical axes. Ethical visualization design—such as appropriate legends, color schemes, boundary definitions, and uncertainty representation—is essential to avoid stigmatization or misinterpretation [31]. From a policy framework perspective, the UK Geospatial Commission emphasizes Clarity—the “C” in its ABC Framework (Accountability–Bias–Clarity)—and recommends that the data journey (collection–sharing–reuse) and rights notices be communicated in clear, accessible language [36]. From an industry and standards standpoint, the WGIC highlights metadata standardization and provenance as enablers for easier verification and auditing [35]. ISO/IEC 23894 mandates recording, reporting, and consultation throughout the entire risk-management process, while ISO/IEC 42001 institutionalizes organizational transparency through policy, roles, traceability, and internal auditing [61,62]. The NIST AI RMF further recommends public documentation and disclosure of TEVV results (testing, evaluation, verification, and validation), continuous monitoring, and supply chain or pre-trained model risks, in order to systematize stakeholder risk communication [60]. In terms of location data, the EDPB identifies transparency, information provision, and AI/app usage notifications as essential, encouraging decentralized, device-centered architectures and open reviews to strengthen public trust [16]. Likewise, the Locus Charter defines transparency, alongside accountability, as a foundational ethical principle [34].
In short, transparency in GeoAI represents an integrated technical–governance principle that addresses: What (data, models, visualization should be disclosed), How (through explanations, documentation, and provenance), To whom (citizens, affected communities, verification bodies), and When (throughout the entire lifecycle) [16,31,34,35,36,61,62].

5.5. Accountability and Auditability

Accountability in GeoAI encompasses the clarification of roles and responsibilities, traceability (through documentation and logging), auditing and public verification, and the inclusion of appeal and redress mechanisms across the entire lifecycle (Table 10). International principles explicitly identify accountability as a core value and require its implementation by organizations and governments [5]. Documents specific to location data also place Accountability (A) at the forefront of the ABC Framework (Accountability–Bias–Clarity), emphasizing clear identification of responsible entities and governance structures [36]. The Locus Charter defines “Provide Accountability” as a fundamental principle, requiring transparency regarding the purpose, impact, and responsibility chain of data use [34]. From a regulatory and data protection perspective, the EDPB Guidelines establish detailed accountability obligations—legal basis, data controller identification, execution and publication of DPIA, and user information disclosure [16]. Standards and frameworks further institutionalize accountability in practice. The NIST AI RMF requires documentation, testing, verification, continuous monitoring, and supply chain risk management within its GOVERN–MAP–MEASURE–MANAGE framework, recommending evidence-based recording of risks, performance, and decisions [60]. Similarly, ISO/IEC 23894 emphasizes communication, recording, reporting, monitoring, and review throughout risk management processes [61], while ISO/IEC 42001 institutionalizes accountability at the organizational level through policy definition, role assignment, authorization, documentation, internal auditing, and continuous improvement [62]. Industry reports identify provenance, metadata standardization, benchmarking, testing, verification, and algorithm auditing as foundational elements of data governance [35]. 
Academically, reviews in cartography address responsibility as an ethical axis, emphasizing accountability, interpretability, and provenance in the map production and visualization process [31]. From a human rights and rule-of-law standpoint, the Council of Europe Convention (2024) requires signatory states to implement procedural safeguards and redress mechanisms alongside transparency and accountability provisions. For GeoAI-specific risks—such as geo-privacy, linkage attacks, and data brokerage—the integration of governance, transparency, and accountability is reaffirmed as essential for ensuring both rights protection and risk control [29]. In the latest GeoAI Foundation Model contexts, embedded procedures for version management, monitoring, and auditing are proposed to address operational risks, including prompt contamination and reward manipulation [38].
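The traceability obligations described above (documentation, logging, and post hoc auditability) can be illustrated with a minimal sketch of a tamper-evident, hash-chained audit log. This is an illustrative assumption, not a mechanism prescribed by any of the cited standards; the class and method names (`GeoAuditLog`, `append`, `verify`) are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

class GeoAuditLog:
    """Append-only, hash-chained log of GeoAI lifecycle decisions.

    Each entry stores the SHA-256 digest of the previous entry, so any
    retroactive edit breaks the chain and is detectable at audit time.
    """

    def __init__(self):
        self.entries = []

    def append(self, actor, action, detail):
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,          # responsible entity (a role, not a person)
            "action": action,        # e.g. "model_update", "dpia_published"
            "detail": detail,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(record, sort_keys=True)
        record["hash"] = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self):
        """Recompute the chain; return False if any entry was altered."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In this sketch, the log itself becomes an auditable governance artifact: a verification body can rerun `verify()` to confirm that the recorded chain of decisions has not been rewritten after the fact.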

5.6. Safety, Security and Robustness

Safety in GeoAI refers to the capability to prevent and mitigate harm across the entire data–model–operation lifecycle, to respond resiliently to unexpected events and malicious behaviors, and to trace and recover effectively in the event of incidents (Table 11). This requires an operational framework that integrates the core characteristics of trustworthy AI—safety, security, reliability, and resilience—with governance, documentation, testing, verification, and continuous monitoring [60]. From a standards perspective, ISO/IEC 23894 mandates the establishment and operation of risk identification, analysis, evaluation, treatment, and monitoring, along with documentation, reporting, and consultation throughout the process [61]. ISO/IEC 42001 institutionalizes organizational safety through an AI management system composed of policy, roles, authority, documentation, internal auditing, and continuous improvement [62]. From a human rights and rule-of-law perspective, the Council of Europe Convention reinforces enforceability by codifying requirements for safe innovation (controlled testing environments), risk and impact assessment and mitigation (based on proportionality and severity), and procedural safeguards and remedies [9]. Industry reports position data quality, integrity, provenance, benchmarking, testing, validation, and algorithm auditing as practical pillars for implementing safety [35]. Processing and storage of location data must adhere to end-to-end security—including encryption, access control, device-centered design, minimization of centralized trust, and restricted retention or post-deletion/anonymization [16]. In the context of GeoAI foundation models, robustness evaluation, version control, monitoring, and recovery strategies are essential to defend against malicious prompts, data or reward contamination, weight leakage, and membership inference [37,38]. 
From a cartographic perspective, vigilance is required against the spread of “deepfake geography” and manipulated visuals; ensuring the truthfulness and interpretability of results is key to minimizing social harm [31].
In summary, GeoAI safety is an integrated techno-governance principle encompassing security, robustness, and integrity, combined with risk assessment, TEVV, monitoring, incident response, remediation, management systems, and auditing [9,35,60,61,62].

5.7. Human Oversight and Human-in-the-Loop

Human oversight in GeoAI refers to a combined organizational, technical, and procedural framework that ensures humans retain the ability to monitor, intervene, halt, and assume responsibility throughout the entire lifecycle of design, deployment, and operation (Table 12). Because spatial outputs such as maps, indicators, and alerts possess strong visual authority, they can easily induce automation bias—overreliance on AI results. To counter this, explicit human intervention points (HITL/HOTL/HIC), override or kill-switch mechanisms, and escalation protocols (based on thresholds or geographic critical zones) must be predefined [60]. From a standardization and management perspective, ISO/IEC 42001 institutionalizes supervisory responsibility through an AI management system structured around policy, roles, authority, internal audits, and continuous improvement [62]. ISO/IEC 23894 ensures information-based oversight by requiring communication, documentation, reporting, monitoring, and review throughout the risk management process [61]. From a human rights and rule-of-law standpoint, the Council of Europe Convention calls on states to reinforce oversight and enforcement through transparency, accountability, procedural safeguards, and remedies [9]. From a public trust perspective, the UK Geospatial Commission’s ABC Framework emphasizes Clarity (C)—providing clear communication on data journeys and rights notices to enable informed human judgment [36]. Academic reviews in cartography identify responsibility, explainability, and provenance as core ethical axes, requiring explanatory tools and pre-assessment of visualization design (legends, thresholds) to prevent stigmatization [31]. In the context of GeoAI foundation models, human oversight procedures such as safe human–robot deployment, monitoring of malicious prompts or reward contamination, and version control and auditing have been proposed [38]. 
Approaches such as humans-as-sensors and mixed-experts further integrate human expertise and local context into model decision flows, reinforcing contextual relevance, societal acceptance, and accountability [39].
In summary, Human Oversight in GeoAI is a composite principle integrating governance (roles, authority, procedures), information (explanation and uncertainty display), rights (intervention, interruption, remedy), and operations (safe HMI and field practices) [9,31,36,37,39,60,61,62].

5.8. Public Benefit and Sustainability

Public Benefit and Sustainability is the normative principle that GeoAI should be designed and operated to genuinely contribute to social welfare and sustainable development for current and future generations, rather than solely pursuing efficiency, profit, or technological advance [5] (Table 13). Beyond mere efficiency gains, public benefit requires safe innovation compatible with human rights, democracy, and the rule of law, underpinned by procedural safeguards and redress mechanisms, institutionalized within national enforcement systems [9]. GeoAI’s public benefit also encompasses inclusion and equity. International agreements emphasize human-centered values, non-discrimination, and digital inclusion, ensuring that the benefits of technology are distributed fairly across regions and groups by addressing both accessibility and capability disparities [6]. A key dimension of public benefit is environmental and ecological sustainability. The energy, carbon, and resource costs of GeoAI training and deployment should be transparently accounted for in public-interest decisions (the “Green vs. Red AI” paradigm). Practical strategies include sustainable GeoAI foundation model training, distributed learning, and secure collaborative computing [24,30]. Public benefit also implies non-maleficence and proportionality. International location data ethics principles explicitly call for do-no-harm, protection of vulnerable groups, data/intrusion minimization, prevention of personal identification, and accountability, requiring proportionate assessments of privacy and individual freedoms when pursuing public-interest objectives [34]. At the industry and governance level, mechanisms such as data-quality assurance, provenance tracking, metadata standardization, benchmarking, verification, and algorithm auditing enhance verifiability and accountability, thereby supporting public benefit realization [35].
In summary, Public Benefit and Sustainability is a composite principle that integrates three dimensions: (1) the generation of positive social value, (2) the mitigation of externalities and environmental burdens, and (3) long-term sustainability and intergenerational responsibility [5,6,9,24,30,35].

5.9. Participation and Stakeholder Engagement

Participation refers to the principle of systematically identifying and engaging diverse stakeholders—including citizens, local communities, field experts, regulators, and oversight bodies—throughout the entire design–deployment–operation lifecycle of GeoAI (Table 14). Its goal is to enhance legitimacy, social acceptance, and trust in decision making. The UK Geospatial Commission, which aims to foster public trust, emphasizes Clarity (C) in its ABC Framework (Accountability–Bias–Clarity) by requiring transparent communication of data journeys (collection–sharing–reuse), clear rights notices, and accessible dialogue and participation channels [36]. The Locus Charter identifies community rights, protection of vulnerable groups, and provision of accountability as core principles, affirming that participation must go beyond consultation to function as a rights-based guarantee [34]. From a human rights and rule-of-law perspective, the Council of Europe Convention codifies public participation, multistakeholder consultation, and procedural safeguards and remedies, thereby ensuring enforceability of participatory governance [9]. In the domain of data rights, the EDPB Guidelines connect participation and control by mandating information provision, consent and withdrawal, and rights exercise mechanisms (access, rectification, deletion). They also recommend device-centric and decentralized architectures and open review mechanisms to strengthen trust [16]. At the governance level, the NIST AI RMF establishes stakeholder participation and DEIA (Diversity, Equity, Inclusion, and Accessibility) as governance constants, embedding participation into operational workflows through risk communication, continuous monitoring, and documentation [60]. 
ISO/IEC 23894 similarly requires communication, consultation, documentation, reporting, monitoring, and review throughout the risk management process, emphasizing participation as continuous interaction rather than one-time consultation [61]. Industry reports further recommend enabling public evaluation and review through open benchmarking, validation, and algorithm auditing combined with metadata standardization and provenance transparency [35]. In human-centered GeoAI, approaches such as humans-as-sensors and mixed-experts integrate local expertise and community feedback directly into model decision pipelines, embedding participation as a design component [39]. From a cartographic ethics perspective, ensuring visualization practices that avoid stigmatization and providing interpretive aids enable citizens to give informed feedback [31].

5.10. Lifecycle Governance

Lifecycle Governance is an integrated normative framework that systematically identifies, evaluates, and mitigates risks across all stages of the GeoAI lifecycle—design, development, deployment, operation, and decommissioning—supported by policies, defined roles, documentation, and auditing mechanisms (Table 15). The NIST AI Risk Management Framework (AI RMF) structures governance under the GOVERN–MAP–MEASURE–MANAGE model, requiring organizations to define and allocate responsibilities (Policy/Roles), map risks and stakeholders (Use Context/Stakeholder Mapping), conduct TEVV (Testing, Evaluation, Verification, Validation), and maintain continuous monitoring and improvement loops [60]. ISO/IEC 23894 applies risk identification, analysis, evaluation, treatment, and monitoring throughout the AI lifecycle, mandating procedures for communication, consultation, documentation, reporting, and review [61]. ISO/IEC 42001 institutionalizes this risk management and accountability through an AI management system encompassing policy, roles, authority, documentation, internal auditing, and continual improvement [62]. From a human rights and rule-of-law perspective, the Council of Europe Convention strengthens enforceability by mandating procedural safeguards, public participation, multistakeholder consultation, and risk/impact assessments [9]. For location data governance, the EDPB explicitly prescribes purpose limitation, data minimization, restricted retention, and post-deletion/anonymization, establishing foundational lifecycle control principles for GeoAI [16]. Industry guidance from WGIC identifies provenance, metadata standardization, data quality, benchmarking, validation, and algorithm auditing as operational pillars of governance [35]. GeoAI-specific considerations must also be embedded in lifecycle governance.
Cartographic studies emphasize provenance, explainability, and visualization accountability, calling for interpretive control through legend, threshold, and uncertainty representation [31]. Philosophical and policy research advocates for the inclusion of sustainability, geographic justice, and bias (including MAUP) in lifecycle evaluation metrics [24,30].
In summary, Lifecycle Governance integrates risk management standards (processes and management systems), data control (retention, deletion, anonymization), provenance, auditing, reporting, and spatially specific risk considerations into a unified techno-institutional operational framework [9,16,24,30,31,35,60,61,62].

5.11. Misuse Prevention

Misuse Prevention in GeoAI is the normative principle that defines prohibited uses and establishes technical and governance controls throughout the lifecycle to prevent geospatial data, models, or services from being exploited for rights violations, surveillance, discrimination, manipulation, or safety threats (Table 16). In the processing of location data, controls such as purpose limitation, data minimization, restricted retention, and post-deletion/anonymization serve as the first line of defense to proportionally constrain use and deter surveillance-related misuse [16]. Design principles—such as device-centered architectures, periodic pseudonym renewal, and minimized centralized trust—structurally reduce large-scale aggregation and abuse potential [16]. At the international level, the principles of “Do no harm,” “Protect the vulnerable,” and “Prevent personal identification” are recognized as foundational norms against misuse, coupled with accountability provisions to clarify the purpose and chain of responsibility for data use [34]. Within the public trust framework, the UK Geospatial Commission highlights Clarity (C) in its ABC Framework, requiring clear communication of data journeys and rights notices to make covert misuse more difficult [36]. Misuse can manifest in many technical forms. From a cartographic perspective, threats include “deepfake geography,” shape manipulation, and distorted legends or thresholds, all of which can lead to stigmatization or misinterpretation; thus, truthfulness and interpretive control must be maintained [31]. In the context of GeoAI foundation models, countermeasures such as prompt attack detection and blocking, automatic masking of sensitive geospatial information (e.g., residential or critical sites), and mechanisms for versioning, monitoring, and recovery are essential for resilience against malicious prompts, data leaks, reward contamination, and membership inference [37,38].
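As a hedged illustration of the automatic masking of sensitive geospatial information mentioned above, the sketch below coarsens any coordinate that falls within a protective radius of a designated sensitive site by snapping it to the centre of a coarse grid cell. The site list, radius, grid size, and function names are assumptions for demonstration; a real system would load sensitive sites from a governed registry and choose parameters through the risk assessment procedures discussed in this section.

```python
import math

# Illustrative parameters (assumptions, not prescriptions from the cited sources).
SENSITIVE_SITES = [(37.5665, 126.9780)]   # hypothetical protected location (lat, lon)
PROTECT_RADIUS_M = 1000.0                 # mask anything within 1 km of a site
GRID_DEG = 0.01                           # ~1 km grid cell at mid-latitudes

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mask_point(lat, lon):
    """Coarsen a coordinate that lies near a sensitive site.

    Points within PROTECT_RADIUS_M of any sensitive site are snapped to the
    centre of their GRID_DEG cell; all other points pass through unchanged.
    Returns (lat, lon, masked_flag).
    """
    for s_lat, s_lon in SENSITIVE_SITES:
        if haversine_m(lat, lon, s_lat, s_lon) <= PROTECT_RADIUS_M:
            g_lat = (math.floor(lat / GRID_DEG) + 0.5) * GRID_DEG
            g_lon = (math.floor(lon / GRID_DEG) + 0.5) * GRID_DEG
            return g_lat, g_lon, True
    return lat, lon, False
```

Grid snapping is only one of several generalization techniques; the design point is that the masking decision is deterministic and parameterized, so it can itself be documented and audited under the governance controls described above.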
From an industry and standards standpoint, provenance, metadata standardization, benchmarking, testing, validation, and algorithmic auditing enhance traceability of data and models, reducing concealment or manipulation and clarifying accountability in the event of violations [35,61,62]. At the governance level, frameworks such as ISO/IEC 23894 (risk identification–analysis–evaluation–treatment–monitoring), ISO/IEC 42001 (policy–roles–authority–internal audit–improvement), and NIST AI RMF (GOVERN–MAP–MEASURE–MANAGE) institutionalize prohibition rules, pre-assessment, and post-response protocols [60,61,62]. From a human rights and rule-of-law perspective, the Council of Europe Convention enhances enforceability of misuse monitoring through procedural safeguards, independent oversight, public participation, and remedies [9].
In summary, Misuse Prevention is a techno-institutional principle that integrates the following: definition of prohibited uses and proportionality control; defense against map/model manipulation, leakage, and attacks; mechanisms for traceability, auditing, sanctions, and redress; and transparent communication and stakeholder engagement [9,16,31,34,35,36,37,38,60,61,62].

5.12. Inclusion and Accessibility

Inclusion in GeoAI refers to the principle of ensuring accessibility, equity, representativeness, and diversity across the entire lifecycle—design, deployment, and operation—so that the benefits of data, algorithms, and services are not concentrated in specific regions or populations (Table 17). At the policy level, international frameworks highlight human-centered values, non-discrimination, and digital inclusion as universal principles, requiring governments and organizations to contribute to the welfare and sustainability of society as a whole [5,6]. From a human rights and rule-of-law perspective, the Council of Europe Convention strengthens enforceability by codifying equality, non-discrimination, public participation, multistakeholder consultation, procedural safeguards, and remedies [9]. Within the context of GeoAI, inclusion becomes tangible through geographic representativeness and the reduction in access barriers. Research in cartography and GeoAI shows that spatial bias and the Modifiable Areal Unit Problem (MAUP) can cause performance and benefit disparities across urban/rural divides, population densities, and administrative boundaries, calling for regional performance disaggregation, spatial cross-validation, and visualization designs that avoid stigmatization [24,30,31]. Privacy protection and inclusion are complementary, not contradictory. The EDPB Guidelines emphasize voluntariness, non-discrimination, and clear communication of information and rights (access, rectification, deletion), recommending device-centered and decentralized architectures to protect vulnerable groups [16]. The UK Geospatial Commission, through Clarity (C) in its ABC Framework, calls for accessible communication—data journey explanations, rights notices, and plain-language interfaces. 
The NIST AI RMF embeds stakeholder participation and DEIA (Diversity, Equity, Inclusion, Accessibility) as governance constants, recommending operational integration of inclusion through risk communication, continuous monitoring, and documentation [60]. From an industry standpoint, WGIC advocates open and interoperable standards, metadata and provenance transparency, and algorithm auditing and validation to ensure equitable access and external review [35].
In summary, Inclusion and Accessibility is a social value alignment principle that combines the following: guaranteed access to data, models, and services; geographic representativeness and equity; transparent communication in plain language; participatory governance; and institutionalized DEIA practices [5,6,9,16,24,30,31,35,36,60].

6. Discussion

6.1. Interpretation and Contributions of the GeoAI Ethics Framework

Building on the results reported in Section 4 and Section 5, this study contributes meaningfully by establishing GeoAI ethics as an independent ethical framework that reflects the unique risks and responsibilities arising from the spatiality and contextuality of geospatial information, rather than viewing it as a mere extension of general AI ethics. Accordingly, the discussion below focuses on interpretive implications and practical operationalization pathways (governance instruments, design guidelines, and evaluation procedures), rather than reiterating the full results. Existing AI ethics frameworks—such as those from the OECD, UNESCO, and the European Union—propose universal principles like fairness, transparency, and accountability, but these values alone do not sufficiently encompass the distinctive attributes of geospatial data, such as location-based privacy violations, re-identification risks, spatial inequities, and surveillance misuse. In contrast, this study systematically analyzed 16 policy reports and 16 academic papers to propose twelve GeoAI ethical axes. These axes function as operational ethics structures applicable across the full data lifecycle—collection, processing, and utilization—and advance beyond declarative principles to provide actionable, verifiable, and auditable ethical guidelines. Notably, axes such as Geo-privacy, Data Provenance, Spatial Fairness, and Misuse Prevention represent ethical dimensions unique to GeoAI, clearly demonstrating that location-based data and geographic context are central to ethical judgment. Furthermore, each axis aligns with international standards such as ISO/IEC 23894, ISO/IEC 42001, OGC, and the UN-GGIM framework, serving as a policy–technology bridge connecting ethics, standardization, and technical implementation.

6.2. Comparison with Existing Studies and Extensibility

This study builds upon prior AI ethics literature, which outlined the “five universal axes of AI ethics”—Privacy, Fairness, Transparency, Accountability, and Safety—while reinterpreting and extending these principles within the geospatial context.
For example, whereas “privacy” in general AI typically focuses on personal data protection, GeoAI privacy centers on risks such as location-based re-identification, linkage attacks, and trajectory predictability. Accordingly, the proposed Geo-privacy axis extends beyond data protection to include data minimization, purpose limitation, privacy-by-design, and re-identification testing, forming a practical, process-oriented principle. Similarly, Spatial Fairness moves beyond conventional fairness to address spatial bias and representational imbalance (e.g., the MAUP effect), providing new, metric-driven approaches. Even identical algorithms can produce different results across urban vs. rural or high- vs. low-density regions, potentially leading to policy inequities. Operationally, these findings imply that GeoAI evaluation should incorporate spatial cross-validation, regional performance disaggregation, and visualization checks designed to prevent stigmatization or exclusion. The Data Provenance axis also represents a conceptual expansion. While prior studies focused on data quality and accuracy, this research broadens the concept to include data lineage, model versioning, and traceability through auditing, forming an ethical-governance dimension essential for ensuring reproducibility and trust in GeoAI. This principle could serve as a key reference for future data standardization and procurement policies.
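The spatial cross-validation and regional performance disaggregation described above can be sketched as follows. The toy data, region labels, and majority-class baseline are illustrative assumptions; a production pipeline would use established tooling (e.g., grouped cross-validation in scikit-learn) and a real model, but the evaluation pattern — hold out each region in turn and report per-region scores — is the same.

```python
from collections import Counter

def leave_one_region_out(samples):
    """Spatial cross-validation by region.

    `samples` is a list of (region, features, label) tuples. Each region is
    held out in turn, a majority-class baseline is fit on the remaining
    regions, and accuracy is reported per held-out region, so that a pooled
    average cannot hide a poorly served area.
    """
    regions = sorted({r for r, _, _ in samples})
    report = {}
    for held_out in regions:
        train_labels = [y for r, _, y in samples if r != held_out]
        test = [(x, y) for r, x, y in samples if r == held_out]
        majority = Counter(train_labels).most_common(1)[0][0]
        correct = sum(1 for _, y in test if y == majority)
        report[held_out] = correct / len(test)
    return report

# Toy dataset in which urban samples dominate: the baseline learned from
# the other regions transfers poorly to the rural hold-out.
data = (
    [("urban_a", None, 1)] * 8 + [("urban_a", None, 0)] * 2
    + [("urban_b", None, 1)] * 7 + [("urban_b", None, 0)] * 3
    + [("rural_c", None, 0)] * 6 + [("rural_c", None, 1)] * 4
)
scores = leave_one_region_out(data)   # rural_c scores lowest
```

On this toy data the rural region scores markedly below the urban ones, which is exactly the kind of disparity that a single aggregate accuracy figure would conceal and that the Spatial Fairness axis asks evaluators to surface.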

6.3. Policy and Societal Implications

The institutionalization of GeoAI ethics extends beyond technical safeguards—it constitutes a core element of policy governance and public trust. The proposed framework can be operationalized at three levels, detailed below.
First, Policy and Legal Level: It provides a practical foundation for applying risk-based regulation, as outlined in the OECD AI Principles and EU AI Act, to the GeoAI domain. For instance, location-based or trajectory-driven models can be treated as high-risk GeoAI candidates and assessed through mandatory transparency, auditing, privacy protection, and social impact assessments (DPIA/AIIA).
Second, Industrial and Procurement Level: The twelve ethical axes can serve as the foundation for developing ethical checklists and model cards to evaluate accountability and transparency in GeoAI systems. These tools could inform public procurement, research grant evaluations, and academic publication ethics requirements.
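As a minimal sketch of the model-card artifacts mentioned above, the snippet below builds a card stub whose sections map to a subset of the twelve axes. The field names and example values are hypothetical, not a standardized schema; the design choice illustrated is that unfilled fields are surfaced as explicit gaps for procurement or audit review rather than silently omitted.

```python
import json

def geoai_model_card(name, version):
    """Minimal model-card stub keyed to a subset of the twelve axes.

    Empty fields are deliberate: at procurement or audit time they read
    as blocking gaps rather than optional extras.
    """
    return {
        "model": {"name": name, "version": version},
        "geo_privacy": {
            "reidentification_test": None,   # e.g. linkage-attack results
            "minimization_measures": [],     # coarsening, retention limits
        },
        "spatial_fairness": {
            "regional_disaggregation": {},   # per-region metrics
            "maup_sensitivity": None,        # areal-unit robustness check
        },
        "provenance": {
            "data_lineage": [],              # sources, licences, versions
            "training_data_cutoff": None,
        },
        "accountability": {
            "responsible_entity": None,
            "audit_log_location": None,
        },
    }

card = geoai_model_card("flood-risk-mapper", "0.3.1")   # hypothetical system
card_json = json.dumps(card, indent=2)                  # shareable governance artifact
missing = sorted(
    section for section, fields in card.items()
    if section != "model" and any(v in (None, [], {}) for v in fields.values())
)
```

A procurement checklist could then require `missing` to be empty before a GeoAI system is accepted, turning the ethical axes into a machine-checkable gate rather than a narrative appendix.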
Third, Societal Acceptance Level: Ethical axes such as Participation and Inclusion are critical to building citizen trust. Given GeoAI’s impact on urban management, disaster prediction, and surveillance, clear citizen notices, risk communication, and feedback mechanisms are essential. Such participatory ethics represent a paradigm shift from “top-down AI” to “bottom-up, human-centered GeoAI.” Across these levels, operationalization can be implemented by embedding the axis-specific checklists into stage-gate governance (design–deployment–operation) and by requiring evidence packages (e.g., model/data cards, TEVV results, audit logs, and monitoring reports) as decision inputs. In practice, operationalizing the axes requires explicit balancing when tensions arise. For example, transparency obligations may need to be implemented as tiered disclosure (public vs. restricted) to avoid exposing sensitive location data, while open data initiatives may require stronger security and misuse-prevention safeguards. Similarly, fairness interventions can affect model accuracy or calibration across regions, underscoring the need for disaggregated evaluation, documented rationales, and continuous monitoring under lifecycle governance.

6.4. Academic Implications and International Comparison

Academically, this study structures GeoAI ethics across three interlinked dimensions: normative (principles), technical (implementation), and governance (policy integration). This integrative approach overcomes the limitations of prior works that focused narrowly on philosophical discourse or isolated technologies, providing a unified model linking theory, standards, and practice. In comparative terms: Europe (EU, UNESCO) emphasizes legal and regulatory approaches, North America (OECD, IEEE) adopts voluntary normative frameworks, and Asia (UN-GGIM, WGIC) focuses on standardization and interoperability. Beyond these regional emphases, the growing geopolitical salience of AI and geoinformation is likely to amplify divergence across regulatory ecosystems, with potentially conflicting priorities around openness and transparency versus national security constraints, data sovereignty, and strategic infrastructure protection. For internationally deployed GeoAI, this implies that ethical governance may require a shared baseline complemented by jurisdiction-aware implementation profiles and explicit documentation of how cross-bloc constraints are reconciled.
This study synthesizes these approaches, presenting a balanced model that connects policy normativity with technical feasibility. The use of PRISMA-based literature selection and Krippendorff’s α reliability verification also strengthens the methodological rigor, setting a benchmark for future meta-studies on AI ethics guidelines.

6.5. Limitations and Future Research Directions

While this study presents a systematic and internationally grounded framework for GeoAI ethics, several limitations remain.
First, Temporal Scope: The analysis covers policy documents and academic papers published between 2019 and 2025, excluding emerging technologies such as Geo-LLMs and Generative Mapping AI. Future studies should address new issues like hallucination and map integrity in generative GeoAI.
Second, Empirical Validation: This research is primarily literature-based. Future work should involve empirical validation in industrial and administrative settings through performance indicators, simulations, and policy experiments for each ethical axis.
Third, Ethical Trade-offs: In real-world deployments, implementing the axes may involve inherent tensions—for example, Geo-privacy vs. Transparency (disclosure vs. re-identification risk), Security vs. Data Openness (open access vs. protection from misuse), and Spatial Fairness vs. Model Accuracy (equalized performance vs. predictive optimality). These tensions should be treated as multi-objective governance problems and addressed through explicitly documented balancing decisions (e.g., AIA/DPIA, model/data cards, and audit logs), rather than as independent checklist items. Future work should further formalize these conflicts through quantitative trade-off analysis and design-time evaluation.
Fourth, Local Adaptation: Ethical frameworks must be tailored to national legal contexts. For example, differences between the EU INSPIRE Directive and the U.S. Open Geospatial Data Policy imply that the same ethical axes may require distinct operational criteria.
Fifth, Geopolitical and Regulatory Fragmentation: As geopolitics increasingly shapes AI and geoinformation governance, ethical recommendations and compliance requirements may diverge across states or geopolitical blocs, directly affecting cross-border data flows, disclosure obligations, and security restrictions. Future studies should map axis-level requirements across major jurisdictions and develop operational mechanisms—such as tiered disclosure, access controls, and evidence packages—to reconcile or transparently document these divergences in real-world GeoAI deployments.

6.6. Overall Discussion

Ultimately, GeoAI ethics should not be perceived as a constraint on technological innovation, but rather as an infrastructure for trustworthy spatial intelligence. The twelve-axis framework proposed in this study provides end-to-end governance, from data collection to disposal, ensuring social legitimacy and sustainability in GeoAI development. As GeoAI continues to expand rapidly in urban planning, environmental management, disaster response, and national land monitoring, it is vital that technological advances do not outpace the protection of human rights, fairness, and safety. Institutionalizing ethical pre-assessment, continuous monitoring, and transparent reporting mechanisms will be essential. Practically, this can be advanced by integrating the framework into procurement specifications, AIA/DPIA templates, and continuous monitoring/audit routines. Given emerging geopolitical fragmentation in AI and geoinformation governance, future international standardization will likely require interoperable minimum requirements while allowing transparent, jurisdiction-specific constraints on data access, disclosure, and security.
This study lays the foundational groundwork for that transformation and offers practical evidence to inform future international standardization efforts in GeoAI ethics.

7. Conclusions

This study analyzed the ethical, legal, and social issues arising from the rapid proliferation of Geospatial Artificial Intelligence (GeoAI)—the convergence of artificial intelligence (AI) and geospatial technologies—and developed a comprehensive ethical framework for their systematic governance. While existing general AI ethics norms focus primarily on universal human values such as human rights, fairness, transparency, and accountability, this study expands beyond these boundaries to incorporate the spatial specificity of GeoAI—namely, spatiality, contextuality, and spatial autocorrelation—and the distinctive risks they entail. Following the PRISMA 2020 protocol, a total of 32 documents (16 international policy reports and 16 academic papers published between 2019 and 2025) were systematically reviewed. By combining the normative strength of policy literature with the content-based evidence of scholarly works, this study introduced two analytical frameworks: PDEP-NcM (Policy Document-based Ethical Principle Extraction and Normativity Classification Methodology) and SEPE-NcM (Scholarly Ethical Principle Extraction and Normativity Classification Methodology). Through synonym normalization, axis merging, and prioritization, twelve GeoAI ethical axes were ultimately derived: Geo-privacy, Data Provenance and Quality, Spatial Fairness and Bias, Transparency, Accountability and Auditability, Safety (Security and Robustness), Human Oversight and Human-in-the-Loop, Public Benefit and Sustainability, Participation and Stakeholder Engagement, Lifecycle Governance, Misuse Prevention, and Inclusion and Accessibility. These twelve ethical axes extend the philosophical principles of general AI into practical technological and governance dimensions, formulated as operational checklists applicable across the entire lifecycle—from design and development to deployment, operation, and decommissioning.
This systematic approach overcomes the limitations of previous research, which often remained at the level of declarative principles or single-issue technical solutions (e.g., privacy protection or anonymization algorithms); moreover, it represents the first integrated attempt to bridge the gap between normative policy consensus and technical implementation. Furthermore, this study establishes the GeoAI ethics framework in terms of both policy applicability and international coherence. By integrating global ethical and technical standards—such as the OECD AI Principles, UNESCO’s “Recommendation on the Ethics of Artificial Intelligence”, the EU AI Act, ISO/IEC 23894 and 42001, and the UN-GGIM’s Integrated Geospatial Information Framework (UN-IGIF)—the study reinterprets the core values of privacy, transparency, accountability, and sustainability within the geospatial context. Consequently, GeoAI ethics is positioned not as a subdomain of AI ethics but as a new normative system within spatial data governance. Academically, the proposed twelve-axis framework provides a triple-layered analytical structure—normative, technical, and institutional—that explains GeoAI processes holistically. For instance, the Geo-privacy axis translates into procedural elements such as data minimization, purpose limitation, and privacy-by-design; the Provenance axis encompasses data lineage, quality control, versioning, documentation, and auditing; the Spatial Fairness axis operationalizes the management of MAUP effects, regional representativeness, and geographic generalization through quantitative metrics; and the Participation and Inclusion axes institutionalize citizen engagement, multistakeholder dialogue, accessibility, and DEIA (Diversity, Equity, Inclusion, and Accessibility).
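To illustrate how the Spatial Fairness axis might be operationalized through quantitative metrics, the following minimal Python sketch compares per-region error rates of a GeoAI model against its overall error rate and flags regions exceeding a chosen tolerance. The function names, the disparity formula, and the tolerance value are illustrative assumptions for this review, not metrics prescribed by any of the documents analyzed:

```python
# Illustrative sketch of a spatial-fairness audit: compare per-region
# error rates against the overall error rate. All names, data, and the
# 0.05 tolerance are hypothetical examples, not a published standard.

def spatial_error_disparity(errors_by_region):
    """Return the overall error rate and each region's deviation from it.

    errors_by_region: dict mapping region id -> (num_errors, num_samples)
    """
    total_errors = sum(e for e, n in errors_by_region.values())
    total_samples = sum(n for e, n in errors_by_region.values())
    overall = total_errors / total_samples
    deviations = {
        region: (e / n) - overall
        for region, (e, n) in errors_by_region.items()
    }
    return overall, deviations


def flag_unfair_regions(errors_by_region, tolerance=0.05):
    """Flag regions whose error rate exceeds the overall rate by > tolerance."""
    _, deviations = spatial_error_disparity(errors_by_region)
    return sorted(r for r, d in deviations.items() if d > tolerance)


if __name__ == "__main__":
    # Hypothetical evaluation results for a land-cover classifier.
    results = {
        "urban":  (20, 1000),   # 2% error
        "suburb": (50, 1000),   # 5% error
        "rural":  (120, 1000),  # 12% error, well above the overall rate
    }
    print(flag_unfair_regions(results))  # -> ['rural']
```

A production audit would additionally need to report disparities at multiple areal units (to probe MAUP sensitivity) and weight regions by representativeness, but even this simple disparity check turns the axis into a testable assessment point.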
Each ethical axis thus serves as an interface between technical practices (privacy-preserving methods, model evaluation, visualization guidelines) and policy standards (ISO, OGC, UN-GGIM guidelines), enabling the practical realization of GeoAI governance principles. From a policy perspective, the ethical framework proposed herein can be directly applied to public AI procurement, data standardization, research evaluation, and legislative improvement. Specifically, the Accountability and Lifecycle Governance axes can serve as foundational references for AI Impact Assessments (AIA) and Data Protection Impact Assessments (DPIA), while the Provenance and Transparency axes can inform the standardization of Model Cards and Datasheets for Datasets. Similarly, the Misuse Prevention axis can function as a preventive compliance framework against emerging threats such as map manipulation, surveillance misuse, and data linkage abuse. The axes of Participation and Inclusion play a decisive role in strengthening social trust and promoting responsible innovation, ensuring that GeoAI develops within a framework of public accountability and ethical legitimacy. The study also provides a foundation for international cooperation and standardization discourse. Since GeoAI inherently involves cross-border data flows, its ethical standards must transcend national laws and technological disparities to achieve international coherence. The twelve ethical axes developed in this study can serve as a common global framework (ethical architecture) for future GeoAI ethics standardization efforts by organizations such as UN-GGIM, ISO/IEC SC42, and OGC.
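As a concrete illustration of the governance artifacts mentioned above, the sketch below shows how the twelve axes could seed a minimal, machine-readable model-card checklist to be completed across the lifecycle. The schema, field names, and example evidence string are hypothetical assumptions for this review, not a standardized model-card format:

```python
# Illustrative sketch of a GeoAI "model card" checklist keyed to the
# twelve ethical axes derived in this study. The schema and field names
# are hypothetical, not a published standard.

TWELVE_AXES = [
    "Geo-privacy",
    "Data Provenance and Quality",
    "Spatial Fairness and Bias",
    "Transparency",
    "Accountability and Auditability",
    "Safety (Security and Robustness)",
    "Human Oversight and Human-in-the-Loop",
    "Public Benefit and Sustainability",
    "Participation and Stakeholder Engagement",
    "Lifecycle Governance",
    "Misuse Prevention",
    "Inclusion and Accessibility",
]


def new_model_card(model_name):
    """Create an empty checklist with one entry per ethical axis,
    to be filled in during design, deployment, and audit."""
    return {
        "model": model_name,
        "axes": {axis: {"addressed": False, "evidence": ""} for axis in TWELVE_AXES},
    }


def open_items(card):
    """Return the axes that still lack documented evidence."""
    return [a for a, entry in card["axes"].items() if not entry["addressed"]]


if __name__ == "__main__":
    card = new_model_card("flood-mapping-v1")  # hypothetical system name
    card["axes"]["Geo-privacy"] = {
        "addressed": True,
        "evidence": "Geo-masking applied; DPIA filed.",
    }
    print(len(open_items(card)))  # -> 11 axes still open
```

Exported as JSON or YAML, such a card could accompany procurement submissions or audit packages, making each axis a verifiable line item rather than a declarative principle.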
Nevertheless, several limitations remain.
First, this study focuses on literature-based analysis and lacks empirical validation in real-world industrial or administrative settings. Future research should test the framework’s applicability, effectiveness, and measurability by applying it to actual GeoAI systems such as urban digital twins, drone-based land monitoring, or satellite imagery analysis.
Second, given the rapid pace of GeoAI development, the ethical implications of next-generation technologies such as Generative GeoAI and GeoLLMs (Large Geospatial Language Models) must be continuously examined.
Future studies should include ethical risk scenario analysis, governance simulations, and policy experiments targeting these emerging technologies.
Despite these limitations, this study presents the first comprehensive ethical framework that organically integrates the ethical, legal, and technical dimensions of GeoAI, offering a crucial milestone for the institutionalization and international standardization of the GeoAI ecosystem. Ultimately, the core message of this study is clear: Responsible GeoAI is not about accelerating technological progress—it is about designing a structure of trust. To build a sustainable and equitable GeoAI ecosystem, technological innovation must be accompanied by ethical imagination and institutional implementation, and this study’s findings serve as a starting point for that transformative shift.

Funding

This research was funded by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2025-00513230).

Data Availability Statement

All documents referenced in this study are publicly available online, and the corresponding website and article links have been added to the References.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ABC Accountability–Bias–Clarity
AI Artificial Intelligence
AIIA AI Impact Assessment
AI RMF Artificial Intelligence Risk Management Framework
AI/ML Artificial Intelligence/Machine Learning
CATS Conditional Adversarial Trajectory Synthesis
CRS Coordinate Reference System
DEIA Diversity, Equity, Inclusion, and Accessibility
DPIA Data Protection Impact Assessment
DPIA/AIIA Data Protection Impact Assessment/AI Impact Assessment
EDPB European Data Protection Board
ELSI Ethical, Legal, and Social Issues
EU European Union
EU AI Act European Union Artificial Intelligence Act
EUR-Lex EUR-Lex: Access to European Union law
EthicalGEO Ethics in Geospatial Data and Technologies initiative
FAIR Findable, Accessible, Interoperable, Reusable
G20 Group of Twenty
G7 Group of Seven
GDPR General Data Protection Regulation
GeoAI Geospatial Artificial Intelligence
Geo-HMI Geospatial Human–Machine Interaction
GeoLLMs/Geo-LLMs Geospatial Large Language Models
GIS Geographic Information System
GIT Geographic Information Technologies
GML Geography Markup Language
GSGF Global Statistical Geospatial Framework
HIC Human-in-Command
HITL Human-in-the-Loop
HOTL Human-on-the-Loop
HITL/HOTL/HIC Human-in-the-Loop/Human-on-the-Loop/Human-in-Command
HLC Home Location Clustering
HMI Human–Machine Interaction
HPC High-Performance Computing
HVD High-Value Datasets
IASC Inter-Agency Standing Committee
IEC International Electrotechnical Commission
IEEE Institute of Electrical and Electronics Engineers
INSPIRE Infrastructure for Spatial Information in the European Community
ISO International Organization for Standardization
ISO/IEC International Organization for Standardization/International Electrotechnical Commission
ISO/TC 211 ISO Technical Committee 211: Geographic information/Geomatics
ISO/IEC SC42 ISO/IEC JTC 1/SC 42: Artificial Intelligence subcommittee
ITU International Telecommunication Union
IoT Internet of Things
IoU Intersection over Union
LiDAR Light Detection and Ranging
MAUP Modifiable Areal Unit Problem
ML Machine Learning
NIST National Institute of Standards and Technology
NMAs National Mapping Agencies
NSDI National Spatial Data Infrastructure
OCHA United Nations Office for the Coordination of Humanitarian Affairs
OECD Organisation for Economic Co-operation and Development
OGC Open Geospatial Consortium
OGC API Open Geospatial Consortium Application Programming Interface
PDEP Policy Document-based Ethical Principle Extraction
PDEP-NcM Policy Document-based Ethical Principle Extraction and Normativity Classification Methodology
PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses
Q-FAIR Quality, Findability, Accessibility, Interoperability, Reusability
QA Quality Assurance
RACI Responsible, Accountable, Consulted, Informed
RMF Risk Management Framework
SC42 Subcommittee 42: Artificial Intelligence
SDI Spatial Data Infrastructure
SDIs Spatial Data Infrastructures
SEPE Scholarly Ethical Principle Extraction
SEPE-NcM Scholarly Ethical Principle Extraction and Normativity Classification Methodology
TEVV Testing, Evaluation, Verification, and Validation
TUL Trajectory-User Linking
UAV Unmanned Aerial Vehicle
UI User Interface
UK United Kingdom
UKGC UK Geospatial Commission
UN United Nations
UN-GGIM United Nations Committee of Experts on Global Geospatial Information Management
UN-IGIF United Nations Integrated Geospatial Information Framework
UNESCO United Nations Educational, Scientific and Cultural Organization
WFS Web Feature Service
WGIC World Geospatial Industry Council
WMS Web Map Service
XAI Explainable Artificial Intelligence

Appendix A

Table A1. PRISMA 2020 Checklist (Abbreviated, Author-Completed).
Item | PRISMA 2020 Reporting Item | Yes/No/NA | Location in Manuscript (Section/Page/Para)
1 | Identify the report as a systematic review in the title.
2 | Provide a structured abstract covering objectives, methods, results, and implications.
3 | Explain why the review is needed considering existing knowledge.
4 | Objectives: State the review question(s)/aim(s) explicitly.
5 | Eligibility criteria: Specify inclusion/exclusion criteria (and how studies were grouped for synthesis if applicable).
6 | Information sources: List all databases/registers/websites and the dates last searched.
7 | Search strategy: Provide full search strategies for each source (or indicate where they are provided).
8 | Selection process: Describe how records were screened/selected (number of reviewers, independence, automation if any).
9 | Data collection: Describe how data were extracted (number of reviewers, independence, contacting authors if relevant).
10a | Data items—outcomes: List and define outcomes (or the main variables/constructs extracted).
10b | Data items—other: List other extracted variables (e.g., publication year, document type, governance level) and assumptions.
11 | Risk of bias/quality assessment: Describe methods used to assess risk of bias or quality (or justify if not performed).
12 | Effect measures: Specify effect measures for each outcome (if quantitative synthesis conducted).
13a | Synthesis—eligibility for each synthesis: Describe how studies/documents were assigned to each synthesis/theme.
13b | Synthesis—data preparation: Describe any data preparation (e.g., normalization, coding, handling missing information).
13c | Synthesis—presentation: Describe how results were tabulated/visualized (tables, matrices, maps, etc.).
13d | Synthesis—methods: Describe the synthesis approach (e.g., content analysis, thematic consolidation, frequency/coverage, reliability).
13e | Exploring differences: Describe methods to explore heterogeneity/variation (e.g., by document type/region/time).
13f | Sensitivity/robustness: Describe any sensitivity checks (e.g., coder agreement checks, alternative grouping).
14 | Reporting bias: Describe assessment of reporting bias (if applicable) or state not applicable with rationale.
15 | Certainty/confidence: Describe methods to assess certainty in evidence (if applicable) or state not applicable.
16a | Study selection: Report the numbers of records screened and included, and provide a flow diagram (PRISMA flow).
16b | Exclusions: Cite/report records excluded at the full-text stage and the reasons (if provided).
17 | Study characteristics: Present characteristics of included documents/studies.
18 | Risk of bias in studies: Present results of the bias/quality assessment (if performed).
19 | Results of individual studies: Present results for each included study/document (as appropriate).
20a | Synthesis results: Summarize results for each synthesis/theme (e.g., ethical axes derived).
20b | Additional analyses: Report results of subgroup/robustness analyses (if performed).
21 | Reporting biases: Present results of the reporting bias assessment (if performed).
22 | Certainty of evidence: Present certainty/confidence for each main outcome (if assessed).
23a | Discussion—interpretation: Interpret results in the context of existing evidence.
23b | Discussion—limitations (evidence): Discuss limitations of the included evidence.
23c | Discussion—limitations (process): Discuss limitations of the review methods.
23d | Discussion—implications: Discuss implications for practice, policy, and future research.
24a | Registration: Provide registration information (or state not registered).
24b | Protocol: Indicate where the protocol is available (or state not available).
24c | Amendments: Describe amendments to the protocol/registration (or state none).
25 | Support: Describe sources of financial/non-financial support and the roles of funders.
26 | Competing interests: Declare competing interests.
27 | Data/materials availability: Report availability of data extraction forms, coded data, codebook, analysis code, and other materials.
Table A2. Selected Policy Reports on AI Ethics (16 Documents).
Reports | Key Features
OECD (2025)
[5]
Adopted by the OECD Council of Ministers in 2019 as the world’s first intergovernmental standard on AI.
It was subsequently updated in 2023 (refining the definition of “AI system”) and in 2024 (incorporating generative AI, information integrity, and implementation-facilitation measures) to reflect the latest technological and policy developments.
G20 (2019)
[6]
Building on the OECD recommendations, G20 member states adopted the principles of human-centered and inclusive AI at the international level.
Provides non-binding guidance for policymakers to maximize the benefits of AI while minimizing potential risks.
ITU (2025)
[59]
It synthesizes the workshop outcomes of the 2025 Geneva AI for Good Global Summit and presents standards as practical instruments for trust, innovation, and market access.
Through standards cooperation among ITU–ISO–IEC and the inclusive participation of public, private, and civil-society stakeholders, it establishes a governance mechanism that carries principles into implementation.
EDPB (2020) [16]
Specifies the legal requirements for using location data and contact-tracing tools in accordance with the GDPR and ePrivacy Directive.
Demonstrates that urgent objectives such as epidemic response can coexist with the protection of privacy as a fundamental human right.
EthicalGEO
(2021)
[34]
Addresses ethical challenges associated with the use of location data and provides a common set of principles for responsible utilization.
The Locus Charter serves as an international benchmark document establishing global consensus on the ethical use of location data.
WGIC (2021) [35]
Policy Report 2021-01, offering a global perspective on geospatial AI/ML applications and related policies.
Emphasizes that the application of geospatial AI/ML across various industries and social sectors necessitates the establishment of policy frameworks and ethical guidelines.
UKGC (2022)
[36]
Proposes ethical principles aimed at building public confidence in the use of location data.
Introduces the ABC Framework (Accountability–Bias–Clarity) as a concise and practical ethical standard to be jointly adopted by policymakers, enterprises, and researchers.
NIST (2023) [60]
The AI Risk Management Framework (AI RMF 1.0) provides a voluntary framework for addressing risks associated with AI systems, enabling organizations to design, develop, and use trustworthy AI.
Although not legally binding, the AI RMF serves as a practical operational tool for managing AI risks and ensuring reliability within organizations.
ISO/IEC (2023a) [61]
An international standard guideline for AI risk management, providing instructions for conducting risk management activities throughout the entire AI lifecycle.
Helps organizations and institutions worldwide identify and mitigate AI-related risks in a consistent manner, thereby enhancing global reliability and interoperability.
ISO/IEC (2023b) [62]
An international management system standard for AI governance, outlining requirements and guidance for public and private organizations to manage AI-related risks, opportunities, and responsibilities.
Ensures that organizations can operate under a consistent, globally aligned AI management framework, interoperable with other management system standards.
OECD (2023)
[63]
Promotes shared principles and mutual understanding among G7 countries to advance responsible AI development and deployment and to address the impact of generative AI.
Serves as a starting point for establishing a normative foundation for generative AI governance at the global level.
OCHA (2025)
[17]
Bridges IASC system-wide principles and the UN Secretariat’s principles on personal and sensitive data into actionable guidance, designed to embed them in field procedures.
Defines data responsibility in humanitarian contexts as the safe, ethical, and effective management of both personal and non-personal data.
UNESCO (2021)
[7]
Provides an international recommendation outlining ethical principles for AI development and use, promoting responsible AI governance worldwide.
Serves as a major reference for national governments in establishing domestic AI policies and regulatory frameworks.
European
Union (2024)
[8]
Aiming to improve the functioning of the EU single market and to promote human-centric, trustworthy AI, the Regulation establishes harmonized rules for the development, placing on the market, and operation of AI systems.
It applies to a broad range of actors—including providers inside and outside the EU, deployers, importers, and manufacturers—while excluding uses for military and national security purposes.
Council of
Europe (2024)
[9]
The first internationally binding convention governing the impacts of AI on human rights, democracy, and the rule of law.
Demonstrates the international consensus that AI technologies must remain compatible with the protection of fundamental rights and the strengthening of democratic institutions.
UN-GGIM
(2025)
[22]
Provides the foundation for connecting the GeoAI ethical axes—Geo-privacy, Spatial Fairness and Bias, and Data Provenance and Quality—with national statistical systems.
Establishes criteria for data integration, privacy, and quality assurance, including standards for addressing, geocoding, metadata, and personal data protection.
Table A3. Selected Academic Papers on AI Ethics (16 Studies).
Papers | Key Features
Hagendorff
(2020)
[40]
Critically evaluates existing AI ethics guidelines and analyzes their limitations.
Argues that declarative principles alone are insufficient for establishing AI ethics standards, emphasizing the need for practical implementation capability and institutional support.
Fjeld et al. (2020)
[2]
Conducts a systematic analysis of AI ethics and rights-based principles issued by governments, institutions, corporations, and civil organizations.
Identifies eight overarching themes that provide an internationally recognized consensus framework for developing AI ethical standards.
McKenzie et al. (2023)
[29]
Explores how GeoAI, by combining geospatial information and AI, offers new value and insights while introducing serious ethical challenges related to location-based privacy.
Highlights privacy protection as a core principle of GeoAI ethics and provides comparative insights relevant to international norms and national policy frameworks.
Janowicz (2023)
[30]
Examines the ethical and philosophical foundations of GeoAI, including sustainability, privacy and data protection, fairness and representativeness, transparency, and reproducibility.
Extends GeoAI ethics beyond privacy toward accountability, autonomous consent, and social and environmental justice, complementing policy-oriented frameworks.
Rao et al. (2023a)
[37]
Proposes the development of foundation models in GeoAI that jointly address privacy protection and security.
Emphasizes that privacy and security constitute core ethical axes in GeoAI and must be embedded into large-scale foundation-model development.
Kang et al. (2023)
[31]
A comprehensive review of AI applications in cartography, summarizing AI methods and use cases in map production and interpretation.
Provides key evidence for identifying ethical issues in map creation and visualization, serving as a major reference for the development of a GeoAI ethics framework.
Rao et al. (2023b)
[38]
Proposes a privacy-preserving data synthesis method for the safe sharing of trajectory data.
Suggests technical solutions to balance the tension between privacy protection and data utility in trajectory data applications.
Corrêa et al. (2023)
[47]
Systematically reviews 200 global AI ethics guidelines, recommendations, and policy documents, identifying consensus across universal ethical axes while exposing practical gaps such as lack of enforceability, metrics, and audit systems.
Empirically confirms that the five universal ethical axes represent a globally recurring consensus across multiple jurisdictions and sectors.
Zhang et al. (2023)
[64]
Experimentally demonstrates map-integrity issues such as distortion, hallucination, and projection/legend errors in AI-generated maps (e.g., DALL·E 2).
Proposes the need for AI-generated map detection models, provenance labeling, and uncertainty visualization.
Oluoch (2024)
[32]
Examines the ethical challenges arising from the convergence of AI and Geographic Information Technologies (GIT).
Emphasizes the importance of integrating multidisciplinary ethical guidelines when designing GeoAI frameworks within an international context.
Mai et al. (2025)
[24]
A comprehensive review exploring future directions of GeoAI, presenting a research agenda for both academic and technological advancement beyond current applications.
Demonstrates the need for simultaneous integration of technical innovation and ethical consideration in next-generation GeoAI development.
Mochizuki et al. (2025)
[65]
Critically analyzes UNESCO’s AI policy guidelines in education, questioning whether they are grounded in ethics or driven by techno-solutionism.
Highlights the need for critical assessment of international institutional guidelines to evaluate their ethical and social implications.
Kausika et al.
(2025)
[66]
A conceptual and exploratory study examining the opportunities and risks of introducing GeoAI into national mapping agencies (NMAs) for topographic map production.
Provides concrete insights into organizational, policy, and governance-level ethical considerations, contributing significantly to the development of GeoAI ethical standards and guidelines.
Ye et al. (2025)
[39]
Clearly addresses key ethical axes—privacy protection, data security, bias minimization, explainability, and trustworthiness—in establishing GeoAI ethics.
Discusses core challenges such as privacy, security, and bias within GeoAI, reinforcing ethical consistency with international frameworks (OECD, UNESCO, EU AI Act).
Kang (2025)
[67]
Explicitly defines key GeoAI ethics axes—privacy protection, fairness and inclusion, transparency, and explainability—providing a direct academic linkage to international AI ethics principles.
Suggests practical design guidelines for data collection, algorithm development, and community participation.
Kijewski et al. (2025)
[3]
Critically evaluates the limitations of “checkbox ethics,” highlighting the lack of performance measurement, weak auditing and enforcement, and conflicts of interest.
Advocates for a shift toward substantive governance, including AI impact assessments (AIA), independent audits, disclosure and explanation obligations, and the design of sanction/incentive mechanisms.
Table A4. GeoAI Ethical Principles Extracted from Policy Reports.
Reports | Extracted Ethical Principles
OECD (2025)
[5]
Inclusive growth and sustainability; human rights and democratic values; privacy protection; fairness and non-discrimination; transparency and explainability; safety and robustness; accountability and responsibility; human oversight.
G20 (2019)
[6]
Human-centered values; privacy protection; fairness and non-discrimination; transparency and explainability; safety and reliability; accountability; digital inclusion; international cooperation and interoperability.
ITU (2025)
[59]
Transparency & Explainability; Accountability; Safety–Security–Reliability; Human-Centered & Human Rights; Fairness & Non-discrimination; Privacy & Data Protection; Interoperability; Sustainability; Content Authenticity & Provenance; Risk Management; Conformity Assessment; Human-in-the-Loop (HITL).
EDPB (2020) [16]
Data minimization; purpose limitation; anonymization and pseudonymization; privacy-by-design; legal basis and clear accountability; user rights assurance; transparency and auditability.
EthicalGEO
(2021)
[34]
Realize opportunities; understand impacts and ensure proportional decision making; do no harm; protect vulnerable groups; address bias; minimize intrusion; minimize data collection; protect privacy; prevent identification of individuals; provide accountability.
WGIC (2021) [35]
Privacy protection; fairness and non-discrimination; transparency and explainability; accountability; data quality and provenance; safety and security; human oversight; misuse prevention; interoperability standards.
UKGC (2022)
[36]
Accountability; bias mitigation; clarity and transparency; public benefit orientation; trust and reliability (building public confidence); Q-FAIR data principles (Quality, Findability, Accessibility, Interoperability, Reusability); governance; reinforcement of data-subject rights (access, control, preference reflection); equitable data access and barrier reduction; stakeholder participation and open dialogue.
NIST (2023) [60]
Reliability and validity; safety and security; fairness; accountability; transparency and explainability; enhanced privacy; human oversight; risk-based governance; continuous monitoring.
ISO/IEC (2023a) [61]
Risk management and continuous improvement; transparency; fairness and equity; human oversight; privacy and freedom of expression; safety and security; clear accountability; regulatory compliance; stakeholder participation.
ISO/IEC (2023b) [62]
Accountability; lifecycle risk management; transparency and explainability; human oversight; privacy and data protection; data quality and provenance; security and robustness; internal auditing.
OECD (2023) [63]
Responsible AI; privacy and data governance; transparency; fairness; protection of human and fundamental rights; safety and security; accountability and self-regulation; international cooperation; intellectual-property protection; strengthened transparency of democratic values and procedures; safety, quality management, competence and trust building.
OCHA (2025)
[17]
Purpose/Proportionality; Quality/Accuracy; Confidentiality; Transparency; Data Security; Personal Data Protection; Accountability; Fairness/Legitimacy; Human-Rights-Based; People-Centred/Inclusive; Retention/Destruction; Data-Subject Rights.
UNESCO (2021)
[7]
Human rights and dignity; inclusion and equity; safety and security; privacy and data protection; fairness and non-discrimination; accountability and transparency; human oversight; sustainability and environmental ethics.
European
Union (2024)
[8]
Risk-based management; data governance and quality; transparency; human oversight; security and accuracy; accountability; documentation and logging; post-market monitoring and corrective measures.
Council of
Europe (2024)
[9]
Human rights, democracy, and rule of law; privacy and personal data protection; transparency and identification of AI-generated content; accountability; safety and security; equality and non-discrimination; public participation; independent oversight.
UN-GGIM
(2025)
[22]
Privacy and confidentiality; accountability and legal basis; data quality and provenance; interoperability and standardization; unified geocoding and spatial fairness; secure access; cooperation and capacity building; sustainable financing.
Table A5. GeoAI Ethical Principles Extracted from Academic Literature.
Papers | Extracted Ethical Principles
Hagendorff
(2020)
[40]
Privacy protection; fairness and non-discrimination; accountability; transparency and explainability; safety and security; human oversight; inclusion and social solidarity; public good and sustainability.
Fjeld et al. (2020)
[2]
Privacy; fairness; accountability; transparency and explainability; safety and security; human-centered values; professional responsibility.
McKenzie et al. (2023)
[29]
Geo-privacy; data minimization and anonymization; prevention of linkage risks; provenance and transparency; privacy-by-design; technical safeguards (differential privacy, geo-masking); independent oversight and education.
Janowicz (2023)
[30]
Sustainability; privacy and data protection; fairness and representativeness; transparency and explainability; autonomous consent; accountability; reproducibility; social and environmental justice.
Rao et al. (2023a)
[37]
Privacy protection; data security; utility balance; federated and decentralized learning; safety and robustness; misuse prevention; model auditing and monitoring.
Kang et al. (2023)
[31]
Geo-privacy protection; bias mitigation and fairness; transparency and explainability; provenance and version control; human oversight; map integrity; misuse prevention.
Rao et al. (2023b)
[38]
Geo-privacy protection; data synthesis and privacy preservation; privacy–utility trade-off; fairness; secure data sharing; human-centered design.
Corrêa et al. (2023)
[47]
Privacy and data protection; fairness and non-discrimination; transparency; accountability; safety and reliability; human autonomy and participation; inclusion; sustainability and public good.
Zhang et al. (2023)
[64]
Map integrity; provenance and source labeling; uncertainty representation; visualization fairness; accountability and auditing; human oversight; misuse prevention.
Oluoch (2024)
[32]
Privacy; fairness and non-discrimination; transparency and explainability; accountability; autonomy and consent; inclusion; risk minimization; social participation and governance.
Mai et al. (2025)
[24]
Fairness and mitigation of geographic bias; privacy protection; data security; sustainability; interpretability; regulatory compliance; responsible data sharing.
Mochizuki et al. (2025)
[65]
Equity and inclusion; data protection and transparency; bias prevention; protection of human autonomy; environmental sustainability; stakeholder participation; accountability and verifiability.
Kausika et al.
(2025)
[66]
Human-centered design; transparency and explainability; accountability; provenance and quality management; privacy protection; fairness and bias mitigation; misuse prevention; collaborative governance.
Ye et al. (2025)
[39]
Geo-privacy protection; data security; fairness and bias mitigation; reliability assessment; explainability; human-centered design; participation and governance.
Kang (2025)
[67]
Geo-privacy; data security; fairness; explainability; human-centered design; transparent governance; safe human–AI interaction.
Kijewski et al. (2025)
[3]
Accountability and auditing; transparency; fairness; safety; regulatory compliance; independent verification and evaluation; education and capacity building; stakeholder participation.

References

  1. Jobin, A.; Ienca, M.; Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
  2. Fjeld, J.; Achten, N.; Hilligoss, H.; Nagy, A.; Srikumar, M. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI; Berkman Klein Center Research Publication: Cambridge, MA, USA, 2020. [Google Scholar] [CrossRef]
  3. Kijewski, S.; Ronchi, E.; Vayena, E. The rise of checkbox AI ethics: A review. AI Ethics 2025, 5, 1931–1940. [Google Scholar] [CrossRef]
  4. Falegnami, A.; Tomassi, A.; Corbelli, G.; Nucci, F.S.; Romano, E. A generative artificial-intelligence-based workbench to test new methodologies in organisational health and safety. Appl. Sci. 2024, 14, 11586. [Google Scholar] [CrossRef]
  5. OECD. Recommendation of the Council on Artificial Intelligence. Available online: https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449 (accessed on 11 November 2025).
  6. OECD. G20 AI Principles. Published by G20 Ministerial Meeting on Trade and Digital Economy. Available online: https://oecd.ai/en/wonk/documents/g20-ai-principles (accessed on 11 November 2025).
  7. UNESCO. Recommendation on the Ethics of Artificial Intelligence. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000381137 (accessed on 10 November 2025).
  8. EU. Regulation (EU) 2024/1689 of the European Parliament and of the Council. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689 (accessed on 11 November 2025).
  9. Council of Europe. Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law; Council of Europe Treaty Series, No. 225; Council of Europe: Strasbourg, France, 2024. [Google Scholar]
  10. Janowicz, K.; Gao, S.; McKenzie, G.; Hu, Y.; Bhaduri, B. GeoAI: Spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond. Int. J. Geogr. Inf. Sci. 2020, 34, 625–636. [Google Scholar] [CrossRef]
  11. Liu, X.; Chen, M.; Claramunt, C.; Batty, M.; Kwan, M.P.; Senousi, A.M.; Cheng, T.; Strobl, J.; Coltekin, A.; Wilson, J.; et al. Geographic information science in the era of geospatial big data: A cyberspace perspective. Innovation 2022, 3, 100279. [Google Scholar] [CrossRef] [PubMed]
  12. Guo, H. Big Earth data: A new frontier in Earth and information sciences. Big Earth Data 2017, 1, 4–20. [Google Scholar] [CrossRef]
  13. Chen, Y. Spatial autocorrelation equation based on Moran’s index. Sci. Rep. 2023, 13, 19296. [Google Scholar] [CrossRef]
  14. Griffith, D.A. Understanding spatial autocorrelation: An everyday metaphor and additional new interpretations. Geographies 2023, 3, 543–562. [Google Scholar] [CrossRef]
  15. Bavaud, F. Measuring and Testing multivariate spatial autocorrelation in a weighted setting: A kernel approach. Geogr. Anal. 2024, 56, 573–599. [Google Scholar] [CrossRef]
  16. EDPB. Guidelines 04/2020 on the Use of Location Data and Contact Tracing Tools in the Context of the COVID-19 Outbreak; European Data Protection Board: Brussels, Belgium, 2020. [Google Scholar]
  17. OCHA. OCHA Data Responsibility Guidelines; United Nations Office for the Coordination of Humanitarian Affairs: New York, NY, USA, 2025. [Google Scholar]
  18. ISO/TC211; ISO/TC 211 Geographic Information/Geomatics. ISO: Geneva, Switzerland, 2009.
  19. OGC. 3D Tiles Specification 1.0. Available online: https://docs.ogc.org/cs/18-053r2/18-053r2.html (accessed on 11 November 2025).
  20. European Commission. Commission Staff Working Document Evaluation of DIRECTIVE 2007/2/EC Establishing an Infrastructure for Spatial Information in the European Community (INSPIRE); European Commission: Brussels, Belgium, 2022. [Google Scholar]
  21. UN-IGIF. United Nations Integrated Geospatial Information Framework, A Strategic Guide to Develop and Strengthen National Geospatial Information Management, Part 1: Overarching Strategy. In United Nations Integrated Geospatial Information Framework; UN-IGIF: New York, NY, USA, 2023. [Google Scholar]
  22. UN-GGIM. The Global Statistical Geospatial Framework (GSGF); Department of Economic and Social Affairs: New York, NY, USA, 2025. [Google Scholar]
  23. Li, W.; Arundel, S.T.; Gao, S.; Goodchild, M.F.; Hu, Y.; Wang, S.; Zipf, A. GeoAI for Science and the Science of GeoAI. J. Spat. Inf. Sci. 2024, 29, 1–33. [Google Scholar] [CrossRef]
  24. Mai, G.; Xie, Y.; Jia, X.; Lao, N.; Rao, J.; Zhu, Q.; Liu, Z.; Chiang, Y.-Y.; Jiao, J. Towards the Next Generation of Geospatial Artificial Intelligence. Int. J. Appl. Earth Obs. Geoinf. 2025, 136, 104368. [Google Scholar] [CrossRef]
  25. Wang, S.; Huang, X.; Liu, P.; Zhang, M.; Biljecki, F.; Hu, T.; Fu, X.; Liu, L.; Liu, X.; Wang, R. Mapping the landscape and roadmap of geospatial artificial intelligence (GeoAI) in quantitative human geography: An extensive systematic review. Int. J. Appl. Earth Obs. Geoinf. 2024, 128, 103734. [Google Scholar] [CrossRef]
  26. Saidi, S.; Idbraim, S.; Karmoude, Y.; Masse, A.; Arbelo, M. Deep-learning for change detection using multi-modal fusion of remote sensing images: A review. Remote Sens. 2024, 16, 3852. [Google Scholar] [CrossRef]
  27. Kazanskiy, N.; Khabibullin, R.; Nikonorov, A.; Khonina, S. A Comprehensive Review of Remote Sensing and Artificial Intelligence Integration: Advances, Applications, and Challenges. Sensors 2025, 25, 5965. [Google Scholar] [CrossRef]
  28. Hoffmann, J.; Bauer, P.; Sandu, I.; Wedi, N.; Geenen, T.; Thiemert, D. Destination Earth–A digital twin in support of climate services. Clim. Serv. 2023, 30, 100394. [Google Scholar] [CrossRef]
  29. McKenzie, G.; Zhang, H.; Gambs, S. Privacy and ethics in GeoAI. In Handbook of Geospatial Artificial Intelligence; CRC Press: Boca Raton, FL, USA, 2023; pp. 388–405. [Google Scholar]
  30. Janowicz, K. Philosophical foundations of geoai: Exploring sustainability, diversity, and bias in geoai and spatial data science. In Handbook of Geospatial Artificial Intelligence; CRC Press: Boca Raton, FL, USA, 2023; pp. 26–42. [Google Scholar]
  31. Kang, Y.; Gao, S.; Roth, R.E. Artificial intelligence studies in cartography: A review and synthesis of methods, applications, and ethics. Cartogr. Geogr. Inf. Sci. 2024, 51, 599–630. [Google Scholar] [CrossRef]
  32. Oluoch, I. Crossing Boundaries: The Ethics of AI and Geographic Information Technologies. ISPRS Int. J. Geo-Inf. 2024, 13, 87. [Google Scholar] [CrossRef]
  33. Paolanti, M.; Tiribelli, S.; Giovanola, B.; Mancini, A.; Frontoni, E.; Pierdicca, R. Ethical framework to assess and quantify the trustworthiness of artificial intelligence techniques: Application case in remote sensing. Remote Sens. 2024, 16, 4529. [Google Scholar] [CrossRef]
  34. EthicalGEO. Locus Charter; EthicalGEO: New York, NY, USA, 2021. [Google Scholar]
  35. WGIC. Geospatial AI/ML Applications and Policies: A Global Perspective; World Geospatial Industry Council: The Hague, The Netherlands, 2021. [Google Scholar]
  36. UKGC. Building Public Confidence in Location Data—The ABC of Ethical Use; UK Geospatial Commission: London, UK, 2022. [Google Scholar]
  37. Rao, J.; Gao, S.; Mai, G.; Janowicz, K. Building privacy-preserving and secure geospatial artificial intelligence foundation models. In Proceedings of the 31st ACM International Conference on Advances in Geographic Information Systems, Hamburg, Germany, 13–16 November 2023; pp. 1–4. [Google Scholar]
  38. Rao, J.; Gao, S.; Zhu, S. CATS: Conditional Adversarial Trajectory Synthesis for privacy-preserving trajectory data publication using deep learning approaches. Int. J. Geogr. Inf. Sci. 2023, 37, 2538–2574. [Google Scholar] [CrossRef]
  39. Ye, X.; Du, J.; Li, X.; Shaw, S.-L.; Fu, Y.; Dong, X.; Zhang, Z.; Wu, L. Human-centered GeoAI Foundation Models: Where GeoAI Meets Human Dynamics. Urban Inform. 2025, 4, 2. [Google Scholar] [CrossRef]
  40. Hagendorff, T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds Mach. 2020, 30, 99–120. [Google Scholar] [CrossRef]
  41. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  42. Krippendorff, K. Content Analysis: An Introduction to Its Methodology; SAGE Publications: Thousand Oaks, CA, USA, 2018. [Google Scholar]
  43. Jiang, H.; Peng, M.; Zhong, Y.; Xie, H.; Hao, Z.; Lin, J.; Ma, X.; Hu, X. A Survey on Deep Learning-Based Change Detection from High-Resolution Remote Sensing Images. Remote Sens. 2022, 14, 1552. [Google Scholar] [CrossRef]
  44. Bai, T.; Yin, D.; Cheng, G.; Han, J. Deep learning for change detection in remote sensing: A review. Geo-Spat. Inf. Sci. 2023, 26, 262–288. [Google Scholar] [CrossRef]
  45. European Commission. High-Value Datasets Best Practices Report; European Commission: Brussels, Belgium, 2024; Available online: data.europa.eu (accessed on 11 November 2025).
  46. UN-GGIM. A Guide to the Role of Standards in Geospatial Information Management; United Nations Committee of Experts on Global Geospatial Information Management: New York, NY, USA, 2015. [Google Scholar]
  47. Corrêa, N.K.; Galvão, C.; Santos, J.W.; Del Pino, C.; Pinto, E.P.; Barbosa, C.; Massmann, D.; Mambrini, R.; Galvao, L.; Terem, E.; et al. Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns 2023, 4, 100857. [Google Scholar] [CrossRef] [PubMed]
  48. Zaidan, E.; Ibrahim, I.A. AI governance in a complex and rapidly changing regulatory landscape: A global perspective. Humanit. Soc. Sci. Commun. 2024, 11, 1121. [Google Scholar] [CrossRef]
  49. Kocak, Z. Publication ethics in the era of artificial intelligence. J. Korean Med. Sci. 2024, 39, e249. [Google Scholar] [CrossRef]
  50. De Montjoye, Y.-A.; Hidalgo, C.A.; Verleysen, M.; Blondel, V.D. Unique in the crowd: The privacy bounds of human mobility. Sci. Rep. 2013, 3, 1376. [Google Scholar] [CrossRef]
  51. Sohrabi, C.; Franchi, T.; Mathew, G.; Kerwan, A.; Nicola, M.; Griffin, M.; Agha, M.; Agha, R. PRISMA 2020 statement: What’s new and the importance of reporting guidelines. Int. J. Surg. 2021, 88, 105918. [Google Scholar] [CrossRef]
  52. Sarkis-Onofre, R.; Catalá-López, F.; Aromataris, E.; Lockwood, C. How to properly use the PRISMA Statement. Syst. Rev. 2021, 10, 117. [Google Scholar] [CrossRef]
  53. Schreier, M. Qualitative Content Analysis in Practice; SAGE Publications: Thousand Oaks, CA, USA, 2012. [Google Scholar]
  54. Bowen, G.A. Document analysis as a qualitative research method. Qual. Res. J. 2009, 9, 27–40. [Google Scholar] [CrossRef]
  55. European Commission. Ethics Guidelines for Trustworthy AI; European Commission: Brussels, Belgium, 2019. [Google Scholar]
  56. Floridi, L.; Cowls, J. A unified framework of five principles for AI in society. In Machine Learning and the City: Applications in Architecture and Urban Design; John Wiley & Sons: Hoboken, NJ, USA, 2022; pp. 535–545. [Google Scholar] [CrossRef]
  57. Dwork, C. Differential privacy: A survey of results. In Proceedings of the International Conference on Theory and Applications of Models of Computation, Xi’an, China, 25–29 April 2008; pp. 1–19. [Google Scholar]
  58. Gebru, T.; Morgenstern, J.; Vecchione, B.; Vaughan, J.W.; Wallach, H.; Daume, H., III; Crawford, K. Datasheets for Datasets. Commun. ACM 2021, 64, 86–92. [Google Scholar] [CrossRef]
  59. ITU. The Annual AI Governance Report 2025: Steering the Future of AI; International Telecommunication Union: Geneva, Switzerland, 2025. [Google Scholar]
  60. NIST. Artificial Intelligence Risk Management Framework (AI RMF 1.0); National Institute of Standards and Technology: Gaithersburg, MD, USA, 2023. [Google Scholar]
  61. ISO/IEC. Information Technology—Artificial Intelligence—Guidance on Risk Management; ISO/IEC: Geneva, Switzerland, 2023. [Google Scholar]
  62. ISO/IEC. Information Technology—Artificial Intelligence—Management System; ISO/IEC: Geneva, Switzerland, 2023. [Google Scholar]
  63. Organization for Economic Cooperation and Development (OECD). G7 Hiroshima Process on Generative Artificial Intelligence (AI) Towards a G7 Common Understanding on Generative AI; Organization for Economic Cooperation and Development (OECD): Paris, France, 2023. [Google Scholar]
  64. Zhang, Q.; Kang, Y.; Roth, R.E. The Ethics of AI-Generated Maps: A Study of DALL·E 2 and Implications for Cartography. In Proceedings of the 12th International Conference on Geographic Information Science, Leeds, UK, 12–15 September 2023; p. 93. [Google Scholar]
  65. Mochizuki, Y.; Bruillard, E.; Bryan, L. The ethics of AI or techno-solutionism? UNESCO’s policy guidance on AI in education. Br. J. Sociol. Educ. 2025, 46, 1–22. [Google Scholar] [CrossRef]
  66. Kausika, B.B.; Altena, V.V. GeoAI in Topographic Mapping: Navigating the Future of Opportunities and Risks. ISPRS Int. J. Geo-Inf. 2025, 14, 313. [Google Scholar] [CrossRef]
  67. Kang, Y. Human-Centered Geospatial Data Science. arXiv 2025, arXiv:2501.05595. [Google Scholar] [CrossRef]
Table 1. PDEP-NcM Processing Procedure.
Step | Processing Procedure | Technical Details/Rules | Output
1. Document Acquisition | Collect full text of reports | Official and most recent versions issued by international organizations or government bodies | Finalized analysis corpus
2. Structural Analysis | Detect structural elements such as table of contents, sections, bullet points, and tables | Search for key signal terms such as “Principles,” “Guidelines,” “Requirements,” “Articles,” “Annex,” etc. | List of candidate sections
3. Normativity Assessment | Assess linguistic strength | Statements expressed in mandatory terms (e.g., shall, must, or explicit prohibitions) were classified as core ethical principles, whereas statements framed in advisory terms (e.g., should or recommend) were treated as indirect ethical principles | Normativity tag for each item
4. Ethical Principle Extraction | Extract sentences or items | Items presented as bullet points, tables, or formal clauses were treated as core ethical principles, whereas items appearing within implementation- or governance-related sections were treated as indirect ethical principles | Draft list of extracted principles
5. Normalization and Labeling | Integrate synonyms/equivalent concepts | Merge synonymous terms (e.g., privacy = data protection, transparency = explainability) and normalize them into unified GeoAI ethical principles | Standardized ethical axes
6. Metadata Attachment | Link document name, section, and page references | Ensure traceability through evidence-based documentation and auditability | Final extraction matrix
7. Quality Validation | Conduct double review and resolve inconsistencies | Reach consensus according to rules of normative-strength assessment | Reliability-assured results
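Step 3’s normative-strength rule lends itself to a simple keyword classifier. The sketch below is illustrative only; the keyword lists are assumptions, not the review’s full coding rules:

```python
import re

# Hypothetical keyword sets illustrating the normativity rule in Step 3.
MANDATORY = re.compile(r"\b(shall|must|is prohibited)\b", re.IGNORECASE)
ADVISORY = re.compile(r"\b(should|recommend(s|ed)?|encourage(s|d)?)\b", re.IGNORECASE)

def classify_normativity(statement: str) -> str:
    """Tag a policy statement as a core or indirect ethical principle."""
    if MANDATORY.search(statement):
        return "core"          # mandatory terms -> core ethical principle
    if ADVISORY.search(statement):
        return "indirect"      # advisory terms -> indirect ethical principle
    return "non-normative"     # no normative signal detected

print(classify_normativity("Providers shall ensure human oversight."))   # core
print(classify_normativity("States should promote digital literacy."))  # indirect
```

In practice the double review in Step 7 would catch statements this keyword pass misclassifies (e.g., negated or conditional clauses).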
Table 2. SEPE-NcM Processing Procedure.
Step | Input | Main Rules/Processing Procedure | Output
1. Corpus Finalization | Full texts of selected papers, metadata | Confirm latest versions and record document types | Corpus list
2. Document Segmentation | Original text | Tag sections (focus on Discussion/Implications/Limitations) | Section map
3. Initial Candidate Extraction | Section text | First-round scan for normative vocabulary, lists, and tables | List of candidate items
4. Normativity Classification | Candidate items | Apply core/indirect classification rules | Categorized items
5. Normalization and Codebook Development | Categorized items | Synonymous or conceptually equivalent terms were merged and assigned unified labels in the codebook, resulting in normalized GeoAI ethical principles | Standardized items
6. Evidence Tagging and Output Generation | Standardized items | Attach evidence (paper title, section, page, text snippet) | Paper-specific cards and integrated matrix
7. Quality Assurance | Complete output | Conduct double coding, consensus process, and duplicate merging | Final QA-validated version
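Step 5’s synonym merging can be sketched as a lookup against a codebook that maps raw extracted terms onto the unified ethical axes. The mapping below is a hypothetical fragment; the actual codebook developed in the review is richer:

```python
# Hypothetical codebook fragment for Step 5: synonymous or conceptually
# equivalent terms are mapped onto the unified GeoAI ethical axes.
CODEBOOK = {
    "privacy": "Geo-privacy",
    "data protection": "Geo-privacy",
    "geo-privacy": "Geo-privacy",
    "explainability": "Transparency",
    "transparency": "Transparency",
    "auditing": "Accountability and Auditability",
    "accountability": "Accountability and Auditability",
}

def normalize(extracted_terms):
    """Merge raw extracted terms into the standardized ethical axes."""
    axes = []
    for term in extracted_terms:
        axis = CODEBOOK.get(term.strip().lower())
        if axis and axis not in axes:   # de-duplicate merged synonyms
            axes.append(axis)
    return axes

print(normalize(["Privacy", "data protection", "explainability"]))
# ['Geo-privacy', 'Transparency']
```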
Table 3. PRISMA flow with counts by set (policy vs. scholarly).
Stage & Reason | Policy (Counts) | Scholarly (Counts) | Total
Identified counts | 102 (policy/standards/guidance) | 108 (scholarly) | 210
Deduplication (n = 45)
Removed counts | 21 | 24 | 45
Screening (n = 115)
Topic mismatch with AI/GeoAI ethics | 31 | 29 | 60
Non-primary/press/secondary | 11 | 0 | 11
Insufficient ethics/governance content | 16 | 15 | 31
Outside time window (<2019) or outlet unclear | 0 | 13 | 13
Removed counts | 58 | 57 | 115
Eligibility (n = 18)
Not primary/official source | 3 | 0 | 3
Insufficient operationalization (governance/eval/audit) | 4 | 0 | 4
Contribution | 0 | 9 | 9
Weak linkage to GeoAI ethics axes | 0 | 2 | 2
Removed counts | 7 | 11 | 18
Inclusion (n = 32)
Final selection counts | 16 | 16 | 32
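The stage counts in Table 3 can be checked for internal consistency: for each document set, identified minus deduplicated minus screened-out minus excluded-at-eligibility must equal the included count. A minimal sketch using the table’s figures (the per-reason sums follow the reported stage subtotals):

```python
# Stage counts from Table 3, as (policy, scholarly) pairs.
identified   = (102, 108)
deduplicated = (21, 24)
screened_out = (31 + 11 + 16 + 0, 29 + 0 + 15 + 13)  # four screening reasons
excluded     = (3 + 4 + 0 + 0, 0 + 0 + 9 + 2)        # four eligibility reasons
included     = (16, 16)

for i, label in enumerate(("policy", "scholarly")):
    remaining = identified[i] - deduplicated[i] - screened_out[i] - excluded[i]
    assert remaining == included[i], f"{label} counts do not reconcile"
print("PRISMA flow reconciles:", sum(included), "documents included")
```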
Table 4. The Twelve Ethical Axes of GeoAI.
Ethical Axis | Concept
Geo-privacy | Encompasses purpose limitation, minimal collection, and de-identification (anonymization/pseudonymization) of location, trajectory, and proximity data, as well as the prevention of re-identification and linkage. Since privacy-by-design, data minimization, and prohibition of secondary use all fall under the protection of spatial data privacy, these are collectively referred to as Geo-privacy.
Data Provenance and Quality | Integrates provenance (lineage and transformation history) with explicit geospatial data-quality management and AI-ready standardization, supported by metadata and interoperability. Provenance enables traceability of quality, but quality must additionally be assessed as fitness-for-purpose relative to production intent and downstream decision context (i.e., who uses the data and for what decisions).
Spatial Fairness and Bias | Covers mitigation of geographic bias, representativeness issues, and the Modifiable Areal Unit Problem (MAUP). To explicitly capture the spatial dimension of fairness-related concerns, the term is consolidated as Spatial Fairness.
Transparency | Comprises explainability (XAI), disclosure of interaction, and justification of decisions. Since explainability and disclosure are both means of achieving comprehensible openness, these are unified under Transparency.
Accountability and Auditability | Encompasses audit trails (logging and documentation), mechanisms for appeal and remedy, and clarification of responsible parties. As auditing, remediation, and responsibility all represent the execution of accountability, they are collectively integrated under Accountability.
Safety, Security and Robustness | Includes cybersecurity and physical safety, robustness, post-deployment monitoring, and corrective measures. Since security, robustness, and monitoring all aim to ensure protection from harm, they are integrated under Safety.
Human Oversight and Human-in-the-Loop | Encompasses Human-in-the-Loop systems, intervention or override rights, and responsible deployment. As all these forms of involvement share the common goal of human supervision, they are collectively expressed as Human Oversight.
Public Benefit and Sustainability | Captures sustainability-oriented public benefit, including social welfare and environmental sustainability (e.g., energy and resource efficiency). In GeoAI, sustainability provides an intergenerational framing of public benefit, emphasizing long-term societal value and responsible computational and data-resource use.
Participation and Stakeholder Engagement | Encompasses community participation, protection of vulnerable groups, and prevention of stigmatization or exclusion. Since the goals of procedural participation and protection both aim to prevent harm through stakeholder involvement, they are consolidated under Participation.
Lifecycle Governance | Includes lifecycle risk management, impact assessment, documentation, and monitoring. As governance throughout the design–deployment–operation–decommissioning process is central, this is summarized as Lifecycle Governance.
Misuse Prevention | Covers the prevention of surveillance and geofencing (location-based virtual boundary controls) misuse, as well as the avoidance of deceptive or falsified mapping practices. Since all these forms of unethical use share the goal of preventing abuse, they are grouped under Misuse Prevention.
Inclusion and Accessibility | Includes accessibility, digital literacy, and equitable benefit distribution. As disparities in access and capability are addressed through inclusiveness, this dimension is concisely represented as Inclusion.
Table 5. Quantitative Indicators of the Twelve GeoAI Ethical Axes.
Ethical Axis | Core Citations | Indirect Citations | Total Documents (n = 32) | Coverage Ratio (%) | Krippendorff’s α
1. Geo-privacy | 22 | 7 | 26 | 81.3 | 0.84
2. Data Provenance and Quality | 18 | 9 | 24 | 75.0 | 0.82
3. Spatial Fairness and Bias | 17 | 8 | 23 | 71.9 | 0.80
4. Transparency | 21 | 6 | 25 | 78.1 | 0.86
5. Accountability and Auditability | 19 | 7 | 24 | 75.0 | 0.83
6. Safety, Security and Robustness | 20 | 8 | 25 | 78.1 | 0.85
7. Human Oversight and Human-in-the-Loop | 15 | 6 | 20 | 62.5 | 0.79
8. Public Benefit and Sustainability | 14 | 10 | 21 | 65.6 | 0.81
9. Participation and Stakeholder Engagement | 13 | 11 | 20 | 62.5 | 0.78
10. Lifecycle Governance | 16 | 8 | 22 | 68.8 | 0.82
11. Misuse Prevention | 12 | 9 | 18 | 56.3 | 0.76
12. Inclusion and Accessibility | 11 | 10 | 18 | 56.3 | 0.77
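The coverage ratios in Table 5 are each axis’s total document count divided by the 32-document corpus, rounded to one decimal. A quick check (half-up rounding is assumed, since it matches the reported values):

```python
import math

CORPUS_SIZE = 32  # documents in the final PRISMA selection

def coverage_ratio(total_documents: int) -> float:
    """Coverage ratio (%) with half-up rounding to one decimal place."""
    return math.floor(1000 * total_documents / CORPUS_SIZE + 0.5) / 10

# Spot-checks against the values reported in Table 5.
assert coverage_ratio(26) == 81.3  # Geo-privacy
assert coverage_ratio(24) == 75.0  # Data Provenance and Quality
assert coverage_ratio(18) == 56.3  # Misuse Prevention
print("coverage ratios reproduced")
```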
Table 6. Guideline Checklist for Geo-privacy.
A. Data Collection and Use Principles (Law, Rights, and Purpose Limitation)
Is the purpose of data collection clearly specified and publicly disclosed (prohibiting any secondary use)?
Has the data minimization principle been established and demonstrated (e.g., using proximity information instead of precise coordinates)?
Are user rights—including voluntariness, non-discrimination, consent/withdrawal, and access to information—properly guaranteed?
Are there protective measures in place for vulnerable groups (e.g., minimizing exposure and preventing discrimination)?
B. Privacy-by-Design (System Architecture and Protection Technologies)
Does the system adopt a device-centered processing architecture, with periodic renewal of pseudonymous identifiers and minimal reliance on centralized trust?
Have geo-masking, k-anonymity, and differential privacy/geo-indistinguishability techniques been applied and documented?
Is end-to-end encryption, access control, and key management implemented across all stages of data processing?
When synthetic data are used, is there evidence demonstrating both distributional fidelity and privacy preservation?
C. Risk Assessment and Validation (Indicators, Testing, Reporting)
Have pre- and post-assessments been conducted for re-identification, linkage, and attribute-inference attacks (including metrics such as TUL and HLC)?
Is the privacy–utility trade-off quantitatively evaluated and publicly reported (e.g., performance, quality, and error impact)?
Are data retention periods, deletion timelines, and anonymization procedures fully documented?
D. Governance and Communication (Documentation, Auditing, and Trust Building)
Are provenance records and configuration parameters maintained and disclosed across all processing stages (collection–processing–analysis–sharing)?
Are there mechanisms for independent review or audit, including open-source code, protocol disclosure, or transparency-driven verification processes?
Are rights notices and data journey explanations provided to users and communities in clear and easily understandable language?
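As one concrete instance of the geo-masking and geo-indistinguishability techniques named in section B, the planar Laplace mechanism perturbs a coordinate with noise whose radial distance follows a Gamma(2, 1/ε) distribution and whose direction is uniform. A minimal sketch, assuming a local planar coordinate frame in metres; the ε value is illustrative:

```python
import math
import random

def planar_laplace_noise(x: float, y: float, epsilon: float) -> tuple:
    """Perturb a planar coordinate (metres) for geo-indistinguishability.

    The planar Laplace density is proportional to exp(-epsilon * r), so the
    radial distance follows a Gamma(shape=2, scale=1/epsilon) distribution
    and the direction is uniform on [0, 2*pi).
    """
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = random.gammavariate(2.0, 1.0 / epsilon)  # expected radius: 2/epsilon
    return x + r * math.cos(theta), y + r * math.sin(theta)

random.seed(42)
# epsilon = 0.01 per metre gives an expected displacement of 200 m.
masked = planar_laplace_noise(0.0, 0.0, epsilon=0.01)
print(masked)
```

A production deployment would also need to handle map projection, coordinate truncation to a valid region, and the privacy budget across repeated reports, which this sketch omits.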
Table 7. Guideline Checklist for Data Provenance and Quality.
A. Recording Data Sources and Context (Collection Stage)
Are the source, sensor, acquisition time, coordinate reference system (CRS), accuracy, resolution, spatial/temporal coverage, and original production purpose/intended use constraints recorded and disclosed as metadata?
Are the update frequency and the data journey (collection–sharing–reuse) clearly explained in an accessible and understandable format?
B. Transformation and Processing History (Preprocessing Stage)
Are procedures, parameters, tools/versions, and quality indicators for processes such as orthorectification, resampling, mosaicking, and generalization properly documented?
Are the impacts and limitations of additional transformations—such as data fusion, filtering, or masking—explicitly stated (including potential errors, temporal gaps, or spatial distortions)?
C. Training Data and Labeling (Learning Stage)
Are the rationale for train/validation/test splits, annotation guidelines, quality control (e.g., inter-annotator agreement), and copyright/licensing information documented?
Does the documentation include descriptions and corrective measures addressing data representativeness, geographical coverage, and potential spatial bias (e.g., MAUP effects)?
D. Model and Pipeline Documentation (Modeling Stage)
Are the model and weight versions, training configurations, hyperparameters, random seeds, and TEVV results (Testing, Evaluation, Verification, Validation) maintained in an auditable version-controlled format?
Are regional performance metrics and error maps provided to diagnose and explain spatial heterogeneity and bias?
E. Deployment, Operation, and Auditing (Operational Stage)
Are traceability mechanisms established based on logs, events, and records, and are readable documentation artifacts such as model cards and data cards provided?
Are procedures in place for drift/change detection, retraining triggers, and post-correction actions, including continuous monitoring and reporting?
F. Disclosure, Communication, and Interoperability (Cross-Stage Requirements)
Are clear communication channels and rights notices available for stakeholders (citizens, field operators, verification bodies), including inquiry and response mechanisms?
Are AI-ready metadata standards and interoperable schemas used to enable efficient procurement, validation, auditing, and cross-utilization of data and models?
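The metadata and lineage items above can be kept together in a single machine-readable record, in the spirit of the model/data cards named in section E. The field names and example values below are illustrative assumptions, not a standardized schema:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class GeoDataCard:
    """Minimal provenance record for a geospatial dataset (illustrative fields)."""
    source: str
    sensor: str
    acquired: str        # ISO 8601 acquisition time
    crs: str             # coordinate reference system, e.g. an EPSG code
    resolution_m: float
    intended_use: str
    lineage: list = field(default_factory=list)  # transformation history

    def record_step(self, step: str, tool: str, params: dict) -> None:
        """Append one processing step so the full history stays auditable."""
        self.lineage.append({"step": step, "tool": tool, "params": params})

# Hypothetical example values, not drawn from any real dataset.
card = GeoDataCard(
    source="national orthoimagery programme",
    sensor="aerial RGB camera",
    acquired="2025-05-01T10:00:00Z",
    crs="EPSG:5186",
    resolution_m=0.25,
    intended_use="topographic map updating",
)
card.record_step("orthorectification", "toolchain v1.2", {"dem": "national DEM"})
print(json.dumps(asdict(card), indent=2))
```

Serializing such a card alongside the dataset gives auditors a single artifact covering sections A–E.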
Table 8. Guideline Checklist for Spatial Fairness and Bias.
A. Data Representativeness and Collection Planning
Has a coverage map been created to assess target regions and groups (urban/rural, including vulnerable areas), identifying underrepresented or overrepresented zones?
Have the effects of sensor visibility, temporal acquisition gaps, and resolution differences on data representativeness been analyzed and documented?
Were community perspectives (e.g., access barriers, potential disadvantages) incorporated into the data collection design?
B. Preprocessing, Aggregation, and Sampling
Were sensitivity tests and reports conducted to evaluate the effects of MAUP or changes in aggregation units on analytical results (maps and indicators)?
Have spatial distortions or information loss introduced by resampling, filtering, or masking been documented and corrected?
Was spatial cross-validation applied when splitting training, validation, and test datasets (e.g., geographic blocking, urban–rural separation)?
C. Modeling and Evaluation (Geographical Generalization and Performance Gaps)
Were regional, administrative, or environmental performance disparities quantified and disclosed (e.g., precision, recall, IoU, false positive/negative rates)?
Has it been verified that model choice or hyperparameter selection does not amplify bias (e.g., metric or loss-function bias)?
Was geographical generalization performance (on out-of-domain regions) separately evaluated?
D. Results, Visualization, and Decision Making
Were color schemes, class boundaries, and legend designs reviewed using guidelines to avoid stigmatization or misleading impressions?
Do decision-support maps display both uncertainty and data provenance information?
For high-risk decisions (e.g., surveillance, geofencing, resource allocation), are alternative scenarios and threshold sensitivity analyses provided?
E. Governance, Participation, and Reporting
Has a bias mitigation plan (including target metrics, improvement loops, and retraining triggers) been established and documented?
Are community and stakeholder engagement mechanisms (public briefings, feedback channels, appeals/remediation processes) operational?
Do the final reports, model cards, and data cards transparently document spatial fairness diagnostics, corrective actions, and remaining limitations?
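The spatial cross-validation item in section B (geographic blocking) can be sketched by assigning samples to grid cells and holding out whole cells, so that test points are never spatial neighbours of training points. The block size is an assumption to tune per dataset:

```python
def spatial_blocks(points, block_size):
    """Group point indices by grid cell for geographically blocked splits."""
    blocks = {}
    for i, (x, y) in enumerate(points):
        key = (int(x // block_size), int(y // block_size))
        blocks.setdefault(key, []).append(i)
    return blocks

def blocked_split(points, block_size, test_fraction=0.25):
    """Hold out whole blocks so train/test sets are spatially disjoint."""
    blocks = sorted(spatial_blocks(points, block_size).items())
    n_test = max(1, int(len(blocks) * test_fraction))
    test_idx = [i for _, idx in blocks[:n_test] for i in idx]
    train_idx = [i for _, idx in blocks[n_test:] for i in idx]
    return train_idx, test_idx

pts = [(0.1, 0.2), (0.4, 0.1), (5.2, 5.1), (5.3, 5.4), (9.8, 0.3)]
train, test = blocked_split(pts, block_size=1.0)
print(train, test)   # points in a held-out grid cell never appear in train
```

A fuller implementation would randomize which blocks are held out and rotate them across folds, rather than taking the first cells in sorted order.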
Table 9. Guideline Checklist for Transparency.
A. Disclosure and Notice (Information and Notice)
Is the use of AI, along with its purpose, scope, and data-subject rights (access, correction, deletion, etc.), clearly communicated in plain language (including in web/mobile UI)?
Are the data journey (collection–processing–sharing–reuse) and responsible entities visually summarized in an accessible diagram or overview statement?
B. Data and Model Provenance (Documentation and Lineage)
Are the data source, accuracy, CRS, update frequency, and preprocessing history (e.g., orthorectification, resampling, mosaicking) recorded and shared using standardized metadata?
Are model and weight versions, training configurations, TEVV results, and version change logs documented and made traceable?
C. Output and Visualization Explanation (Readable Cartography)
Have legends, thresholds, and color schemes been reviewed to ensure they do not cause stigmatization, and are uncertainties and data gaps clearly displayed on maps?
Are users provided with explanations of decision rationales and influencing factors (e.g., key features, weight contributions), along with alternative scenarios or sensitivity analyses?
D. Verification and Reporting (TEVV and Risk Communication)
Are regional performance and bias metrics (e.g., for urban/non-urban or administrative regions) disaggregated and reported spatially?
Are procedures for drift detection, retraining triggers, and corrective actions maintained and shared using a publicly accessible report template?
E. Accessibility and Communication (Stakeholder Communication)
Are readable summaries (plain language reports) provided for citizens, practitioners, and verification institutions, and are feedback or inquiry channels actively maintained?
Are open protocols, code, and audit results shared—where permissible—to support external review and reproducibility?
Table 10. Guideline Checklist for Accountability and Auditability.
A. Role and Governance Definition (Who Is Responsible?)
Are policies, roles, and authorities (RACI) formally documented, covering data collectors, processors, distributors, decision-makers, and oversight bodies?
Are the legal basis for processing, the responsible entities (Controller/Processor), and the execution and disclosure of DPIA clearly defined?
In line with the ABC framework, are the accountability structure and lines of responsibility communicated to citizens and stakeholders in a comprehensible manner?
B. Traceability and Documentation
Are data and model provenance, preprocessing/training/deployment logs, and version records maintained and accessible in a verifiable format?
Are TEVV (Testing, Evaluation, Verification, Validation) results and limitations documented, including reports on supply chain and pre-trained model risks?
C. Audit, Oversight and Remedy
Are there internal and external independent audit mechanisms and periodic reporting systems in place (including algorithmic audits)?
Are appeal, correction, deletion, and remediation channels provided to data subjects and local communities, with procedures formally documented?
Are there mechanisms for clear public communication (Clarity) and open review pathways (e.g., open documentation and protocols) to strengthen civic trust?
D. Lifecycle Accountability
Are risk-based management, impact assessments (DPIA/AIIA), and post-monitoring and corrective action procedures implemented and verifiable across the lifecycle (design–deployment–operation–decommissioning)?
Are accountability, oversight, and monitoring procedures established for supply chains, third-party vendors, and data brokers?
Does the governance design ensure a balance between innovation and regulation, providing guiding principles under policy uncertainty?
E. GeoAI-Specific Accountability (Spatial Accountability)
During field data collection (e.g., UAV operations), are visibility, notices, and on-site transparency ensured (who collected what, when, and where)?
Have map and indicator designs (legends, thresholds, visualization) undergone accountability reviews to prevent stigmatization or misrepresentation?
Are responsible officers explicitly designated to evaluate and document Geo-privacy and linkage risks, including preemptive and corrective measures such as deletion or anonymization?
Table 11. Guideline Checklist for Safety, Security and Robustness.
A. Design and Data Safety (Integrity and Security)
Are encryption, access control, and key management applied consistently throughout all stages of the data pipeline (collection–transmission–storage–processing)?
Does the system adopt a device-centered architecture that minimizes centralized trust (e.g., through periodic renewal of pseudonymous identifiers)?
Are data quality and provenance (source, preprocessing, metadata, update frequency) documented to ensure verifiable integrity?
B. Model Robustness (Defense Against Attacks and Contamination)
Have robustness/resilience tests been conducted against membership inference, data extraction, weight leakage, and prompt attacks?
Are rules and automated blocking mechanisms implemented to prevent the leakage of sensitive geospatial information (e.g., residential locations, critical facilities)?
Are tooling security protocols (access rights, logging, moderation) and training for users/operators in place?
C. TEVV and Monitoring (Verification, Surveillance, and Continuous Improvement)
Are TEVV (Testing, Evaluation, Verification, Validation) results, limitations, and assumptions documented and publicly disclosed prior to deployment?
Are drift detection, retraining triggers, and corrective action procedures continuously operated during system runtime?
Are geographical performance variations (urban vs. non-urban, administrative units) analyzed to maintain appropriate safety thresholds?
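Spatially disaggregated evaluation of the kind required above amounts to grouping prediction/label pairs by region before computing metrics. The following sketch reports per-region recall so that urban versus non-urban gaps become visible; the data are synthetic and the function name is illustrative.

```python
# Minimal sketch of spatially disaggregated evaluation (per-region recall).
# Records and region labels are synthetic.
from collections import defaultdict

def recall_by_region(records):
    """records: iterable of (region, y_true, y_pred) with binary labels."""
    tp = defaultdict(int)
    fn = defaultdict(int)
    for region, y_true, y_pred in records:
        if y_true == 1:
            if y_pred == 1:
                tp[region] += 1
            else:
                fn[region] += 1
    return {r: tp[r] / (tp[r] + fn[r]) for r in set(tp) | set(fn)}

sample = [
    ("urban", 1, 1), ("urban", 1, 1), ("urban", 1, 0),
    ("rural", 1, 1), ("rural", 1, 0), ("rural", 1, 0),
]
print(recall_by_region(sample))  # urban recall 2/3, rural recall 1/3
```

The same grouping applies unchanged to precision, false-positive rates, or any other safety threshold the checklist requires.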
D. Incident Response and Remedy
Are logs, event records, contact points, and responsibility chains clearly established to support early warning, mitigation, recovery, and post-incident learning?
Are notification scopes, disclosure levels, and remediation procedures for stakeholders predefined and regularly tested through drills?
E. Map Integrity and Dissemination Safety
Do maps and graphical outputs clearly display uncertainty, data gaps, and processing histories, and are anti-manipulation guidelines (e.g., deepfake geography prevention) applied?
F. Organizational Governance and Supply Chain Management
Is an AI management system in place that covers policy, roles, authority, internal auditing, and continuous improvement?
Are there established procedures for evaluation, control, contracting, and auditing of supply chain, third-party, and pre-trained model risks?
Table 12. Guideline Checklist for Human Oversight and Human-in-the-Loop.
A. Role and Authority Definition (Governance)
Are RACI matrices (Responsible/Accountable/Consulted/Informed) established and documented for data collection, modeling, deployment, operation, and correction processes?
Are procedures and cycles for recording, reporting, consultation, monitoring, and review clearly defined to support supervision and enforcement?
B. Intervention, Interruption and Escalation (Operational Procedures)
Are HITL/HOTL/HIC intervention zones and manual override/kill-switch activation conditions defined?
Are escalation rules—based on thresholds, geographic critical zones (e.g., flood or disaster buffers), or operational priorities—clearly established?
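An escalation rule of the kind described above can be expressed as a simple routing function: automated outputs pass through, while low-confidence predictions or predictions inside a designated critical zone are escalated to a human reviewer. The zone names and confidence floor below are illustrative assumptions, not values prescribed by the framework.

```python
# Minimal sketch of a threshold- and zone-based escalation rule.
# Zone names and the confidence floor are illustrative.

CRITICAL_ZONES = {"flood_buffer", "disaster_zone"}
CONFIDENCE_FLOOR = 0.80

def route_decision(confidence: float, zone: str) -> str:
    """Return 'auto' for automated handling, 'human_review' for escalation."""
    if zone in CRITICAL_ZONES:
        return "human_review"  # geographic critical zone: always escalate
    if confidence < CONFIDENCE_FLOOR:
        return "human_review"  # low model confidence: escalate
    return "auto"

print(route_decision(0.95, "residential"))   # → auto
print(route_decision(0.95, "flood_buffer"))  # → human_review
```

Encoding the rule in code rather than in operator judgment makes the HITL intervention zones testable and versionable alongside the model.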
C. Explanation and Uncertainty Display (User Interface/Maps)
Do maps and indicators display uncertainty, data gaps, and processing histories, and provide explanations of decision rationales (key features and contributions)?
Are rights notices and data journey explanations communicated to citizens and users in clear, accessible language?
D. Verification and Monitoring (TEVV and Drift Management)
Have TEVV (Testing, Evaluation, Verification, Validation) procedures been performed prior to deployment, with results and limitations documented?
During operation, are performance degradation and drift detection, retraining triggers, and corrective actions activated and recorded?
E. Field Operations (Geo-HMI and Human Participation)
During UAV or field data collection, are transparency measures—such as public signage, safety guidelines, and contact channels—implemented?
Are safe human–robot interaction protocols in place, including defined roles, supervision, feedback loops, version control, and contamination monitoring?
Are humans-as-sensors or mixed-expert frameworks used to integrate local expertise and community feedback into the decision-making loop?
F. Governance Integration (Policy and Institutional Systems)
Is a balanced governance structure designed that harmonizes innovation and regulation, ensuring internal audits, independent reviews, and public oversight?
Are appeal and remedy mechanisms consistent with human rights, rule of law, and procedural fairness principles provided?
Table 13. Guideline Checklist for Public Benefit and Sustainability.
A. Alignment with Public Value (“Who Benefits?”)
Are the project’s social objectives and public-good contributions clearly defined and publicly disclosed (e.g., welfare, safety, equitable service access)?
Has the distribution of benefits and risks been analyzed across regions and groups to identify potential concentration of advantages or harms (inclusion and non-discrimination perspective)?
B. Environmental and Resource Sustainability
Are the energy, carbon, and resource costs of training and inference quantified and disclosed, and are mitigation strategies (efficiency optimization, renewable energy use, scaling optimization) established?
Have sustainable operational alternatives such as distributed learning and secure collaborative systems been evaluated and adopted where feasible?
C. Non-Maleficence and Proportionality
Has the proportionality between data use and project objectives been reviewed (avoiding unnecessary precision, duration, or scope of location data)?
Are protections for vulnerable groups and measures for de-identification and intrusion minimization embedded in the design?
D. Building Social Trust (Verification, Accountability, Participation)
Are data quality, provenance, and standardized metadata used to ensure verifiability, and are audit and reporting systems in place?
Are stakeholder and community engagement processes established, with clear communication of rights notices and data journeys?
E. Equity, Inclusion, and Capacity Building
Does the design include provisions for digital inclusion, such as accessibility support, capacity development, and literacy enhancement?
Are public-impact assessments and potential adverse effects (e.g., surveillance, exclusion, stigmatization) managed through pre-deployment evaluation and post-deployment monitoring?
Table 14. Guideline Checklist for Participation and Stakeholder Engagement.
A. Stakeholder Mapping and Participation Planning
Has a stakeholder map been documented, identifying citizens, communities, field agencies, regulators, and verification bodies, along with their participation purposes, timelines, and methods?
Are public dialogues, briefings, and feedback channels institutionalized across the project lifecycle?
B. Rights Notice, Consent, and Participatory Control
Are clear notices and consent/withdrawal procedures provided regarding the purpose, scope, retention period, and sharing of data?
Are user interfaces or procedures established to ensure that data subjects can easily access, correct, or delete their information?
C. Community Rights and Protection of Vulnerable Groups
Are community rights (culture, customs, spatial use) assessed for impact, and are protection measures for vulnerable populations incorporated into design and operations?
Are pre-assessments of visualization and alert designs conducted to minimize risks of stigmatization or exclusion?
D. Human-Centered Design and Feedback Loops
Are humans-as-sensors or mixed-expert frameworks used to integrate local knowledge and feedback into the model pipeline?
Are participation outcomes (concerns, suggestions) reflected in design or policy revisions, and are these updates publicly reported?
E. Transparent Communication, Documentation, and External Review
Are data journeys, responsible entities, provenance, and limitations explained and disclosed in plain, understandable language?
Are algorithm audit, benchmark, and validation results publicly shared (within permissible limits) to enable external review and reproducibility?
F. Equity, Inclusion, and Capacity Building
Does the participation process integrate DEIA and digital inclusion principles (accessibility, language, and cultural adaptation) and include digital literacy support?
Are regional or demographic distributions of benefits and risks publicly disclosed, along with plans to mitigate inequalities?
Table 15. Guideline Checklist for Lifecycle Governance.
A. Governance Design (Policy, Roles, and Accountability)
Are the organization’s AI policies, objectives, and RACI (Responsible–Accountable–Consulted–Informed) roles formally documented, with established approval and revision procedures?
Has stakeholder mapping (citizens, field agencies, regulators, verification bodies) been performed, along with risk and responsibility mapping?
B. Design Stage (Context Identification and Impact Assessment)
At the MAP stage, are use context, stakeholders, and affected rights clearly defined, and are impact assessments (DPIA/AIIA) conducted and recorded?
Are purpose limitation and data minimization principles clearly articulated with a valid legal basis, and are retention and deletion/anonymization plans documented?
C. Data and Learning (Quality, Privacy, Provenance)
Are provenance and metadata standards used to record data sources, preprocessing, coordinate systems, and update frequencies, with quality metrics actively managed?
Are privacy-preserving techniques (e.g., geo-masking, differential privacy) and representativeness/MAUP diagnostics incorporated into the training and validation design?
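Of the privacy-preserving techniques mentioned above, "donut" geo-masking is among the simplest to implement: each point is displaced by a random distance between a minimum and maximum radius, so the original location cannot be recovered while aggregate spatial patterns are roughly preserved. The radii below are illustrative; appropriate values depend on the re-identification risk assessment for the dataset.

```python
# Minimal sketch of donut geo-masking in a projected CRS (units: meters).
# Radii are illustrative, not recommended values.
import math
import random

def donut_mask(x: float, y: float, r_min: float, r_max: float,
               rng: random.Random):
    """Displace (x, y) by a uniform-random distance in [r_min, r_max]."""
    angle = rng.uniform(0.0, 2.0 * math.pi)
    dist = rng.uniform(r_min, r_max)
    return x + dist * math.cos(angle), y + dist * math.sin(angle)

rng = random.Random(42)  # seeded only to make the sketch reproducible
masked = donut_mask(200000.0, 450000.0, r_min=50.0, r_max=200.0, rng=rng)
offset = math.hypot(masked[0] - 200000.0, masked[1] - 450000.0)
print(f"displacement: {offset:.1f} m")  # always within [50, 200] by design
```

The minimum radius is what distinguishes donut masking from plain random perturbation: it guarantees that no masked point coincides with its true location.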
D. Verification and Deployment (TEVV and Documentation)
Are TEVV (Testing, Evaluation, Verification, Validation) results—including assumptions, limitations, and regional performance—compiled and disclosed in an auditable format prior to deployment?
Are model and weight versions, configurations, and dependencies recorded, and is a formal change control procedure in place?
E. Operation and Oversight (Monitoring, Drift, Incident Response)
Are geographical performance breakdowns (urban/non-urban, administrative areas) monitored, and are drift detection, retraining triggers, and corrective loops continuously operated?
Are logs, event records, audit trails, and incident response plans (contacts, notifications, remediation) maintained and regularly tested?
F. Decommissioning and Withdrawal (Retention, Deletion, Transition)
Are data and models deleted or anonymized upon the end of their retention period, with criteria and procedures enforced and documented?
Are summary reports, model/data cards, and other records necessary for public accountability or audit preserved and disclosed where appropriate?
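Retention-period enforcement at decommissioning can be supported by a simple expiry check that flags records whose retention window has lapsed, so deletion or anonymization can be executed and logged. The 365-day period and dates below are illustrative assumptions.

```python
# Minimal sketch of a retention-expiry check for decommissioning.
# The retention period and dates are illustrative.
from datetime import date, timedelta

RETENTION = timedelta(days=365)

def expired(collected_on: date, today: date) -> bool:
    """True when the record has passed its retention period."""
    return today - collected_on > RETENTION

print(expired(date(2024, 1, 1), today=date(2025, 6, 1)))  # → True
print(expired(date(2025, 5, 1), today=date(2025, 6, 1)))  # → False
```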
G. Supply Chain and Foundation Models
Are risk assessment, contracting, auditing, and monitoring procedures in place for vendors, third parties, and pre-trained/foundation models?
H. Participation and Communication (Participatory Governance)
Are clear communication channels—including rights notices and data journey explanations—provided for citizens and communities, with active feedback mechanisms maintained?
Table 16. Guideline Checklist for Misuse Prevention.
A. Defining Prohibitions and Proportionality (Policy and Proportionality)
Are prohibited use cases (e.g., surveillance-based exclusion, discriminatory geofencing, falsified or manipulated maps) formally documented and incorporated into training, pledges, and contractual agreements?
Are purpose limitation, data minimization, restricted retention, and post-deletion/anonymization measures enforced to ensure proportionality?
B. Data and Model Protection (Design and Guardrails)
Does the system architecture apply device-centered processing, pseudonym renewal, and minimization of centralized trust?
Are malicious prompt detection/blocking, automatic masking of sensitive geospatial data, and defenses against weight leakage or membership inference implemented and verified?
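Automatic masking of sensitive geospatial data can be implemented as an output guardrail that suppresses coordinates falling inside pre-registered sensitive extents before results are released. The sketch below uses axis-aligned bounding boxes for brevity; a production system would use proper polygon geometries, and the box coordinates are purely illustrative.

```python
# Minimal sketch of an output guardrail suppressing sensitive locations.
# Bounding boxes (xmin, ymin, xmax, ymax) in the working CRS; illustrative.

SENSITIVE_BOXES = [
    (1000.0, 1000.0, 2000.0, 2000.0),  # e.g. extent around a critical facility
]

def release(points):
    """Drop any point inside a sensitive box; return the publishable rest."""
    def blocked(x, y):
        return any(xmin <= x <= xmax and ymin <= y <= ymax
                   for xmin, ymin, xmax, ymax in SENSITIVE_BOXES)
    return [(x, y) for x, y in points if not blocked(x, y)]

pts = [(1500.0, 1500.0), (3000.0, 3000.0)]
print(release(pts))  # → [(3000.0, 3000.0)]
```

Placing the filter at the release boundary, rather than inside the model, means the guardrail also covers outputs produced by retrained or third-party models.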
C. Output Integrity (Map Integrity)
Have legends, thresholds, and color schemes been pre-reviewed to prevent manipulation or distortion, and are uncertainties, data gaps, and processing histories visibly displayed?
Have outputs undergone benchmarking, validation, and algorithmic auditing prior to dissemination?
D. Traceability, Auditing, and Post-Response (Trace, Audit and Remedy)
Are provenance, logs, and version records maintained to enable traceability of causes and accountability in the event of manipulation or leakage?
Is an incident response plan (including contact lines, notification, redress, and lessons-learned procedures) established, tested, and subject to independent oversight and audit?
E. Transparent Communication and Participation
Are data journeys, user rights, and system limitations clearly communicated to citizens and users in plain, accessible language, with active feedback channels?
Are open protocols, code, and audit results shared—within permissible limits—to support external review and public accountability?
Table 17. Guideline Checklist for Inclusion and Accessibility.
A. Accessibility and Barrier Reduction
Are public summaries, multilingual or plain-language materials, and accessible user interfaces (UI) provided to ensure that citizens, communities, and field agencies can understand and access results and evidence?
Have measures been designed to reduce economic, technical, or linguistic barriers to data and service access (e.g., open standards, interoperability, affordable access)?
B. Representativeness and Equity
Has the geographic representativeness of collected, trained, and validated datasets been reviewed (covering urban/rural and vulnerable areas), and are coverage gaps addressed?
Are regional performance disparities (precision, recall, false positive/negative rates) disclosed, and are bias mitigation plans implemented?
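A disclosable regional disparity figure can be as simple as the largest gap in false-positive rates across regions. The sketch below computes that gap from synthetic prediction/label pairs; the function name and data are illustrative.

```python
# Minimal sketch of a regional bias metric: maximum FPR gap across regions.
# Records are synthetic.
from collections import defaultdict

def fpr_gap(records):
    """records: iterable of (region, y_true, y_pred); returns (gap, per-region FPR)."""
    fp = defaultdict(int)
    tn = defaultdict(int)
    for region, y_true, y_pred in records:
        if y_true == 0:
            if y_pred == 1:
                fp[region] += 1
            else:
                tn[region] += 1
    fprs = {r: fp[r] / (fp[r] + tn[r]) for r in set(fp) | set(tn)}
    return max(fprs.values()) - min(fprs.values()), fprs

sample = [
    ("urban", 0, 0), ("urban", 0, 0), ("urban", 0, 1),
    ("rural", 0, 1), ("rural", 0, 1), ("rural", 0, 0),
]
gap, fprs = fpr_gap(sample)
print(f"max FPR gap: {gap:.2f}")  # urban FPR 1/3 vs rural FPR 2/3
```

Publishing the gap alongside aggregate accuracy makes regional disparities, and the progress of any mitigation plan, externally verifiable.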
C. Rights Notice and Participatory Control
Are clear notices and consent/withdrawal procedures provided regarding the purpose, scope, retention period, and data-sharing targets?
Have proportionate control measures (e.g., device-centered or decentralized architectures) been adopted to protect vulnerable groups?
D. Participation and DEIA Integration
Are DEIA principles and stakeholder participation plans (citizens, regulators, verification bodies) embedded in governance and reporting systems?
Are visualization guidelines (legends, boundaries, color schemes, uncertainty representation) applied to avoid stigmatization or exclusion?
E. Public Benefit and Sustainability Alignment
Are inclusion goals (equity, accessibility, capacity building) integrated into performance indicators alongside public-benefit and sustainability objectives?

Share and Cite

MDPI and ACS Style

Yoo, S. An Operational Ethical Framework for GeoAI: A PRISMA-Based Systematic Review of International Policy and Scholarly Literature. ISPRS Int. J. Geo-Inf. 2026, 15, 51. https://doi.org/10.3390/ijgi15010051

AMA Style

Yoo S. An Operational Ethical Framework for GeoAI: A PRISMA-Based Systematic Review of International Policy and Scholarly Literature. ISPRS International Journal of Geo-Information. 2026; 15(1):51. https://doi.org/10.3390/ijgi15010051

Chicago/Turabian Style

Yoo, Suhong. 2026. "An Operational Ethical Framework for GeoAI: A PRISMA-Based Systematic Review of International Policy and Scholarly Literature" ISPRS International Journal of Geo-Information 15, no. 1: 51. https://doi.org/10.3390/ijgi15010051

APA Style

Yoo, S. (2026). An Operational Ethical Framework for GeoAI: A PRISMA-Based Systematic Review of International Policy and Scholarly Literature. ISPRS International Journal of Geo-Information, 15(1), 51. https://doi.org/10.3390/ijgi15010051

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop