An Operational Ethical Framework for GeoAI: A PRISMA-Based Systematic Review of International Policy and Scholarly Literature
Abstract
1. Introduction
2. Background and Related Work
2.1. Geospatial Information and AI
2.2. Ethics in GeoAI
3. Materials and Methods
3.1. Systematic Literature Search and Selection (PRISMA 2020)
3.2. Policy Document–Based Extraction Method (PDEP-NcM)
3.2.1. Normative Strength Detection
3.2.2. Structural Signal Identification
3.2.3. Extraction and Normalization Process
3.3. Scholarly Literature–Based Extraction Method (SEPE-NcM)
3.4. Determination of GeoAI Ethical Axes
3.4.1. Normalization Stage: Grouping Synonyms and Equivalent Concepts into Standard Axes
3.4.2. Terminological Integration Stage: Removing Redundancies and Merging Related Items
3.4.3. Prioritization Stage: Selection Criteria
3.4.4. Derivation of the Final GeoAI Ethical Axes
4. Results
4.1. Search and Selection of Policy Reports and Scholarly Literature
4.2. Ethical Principles Extracted from Policy Reports
4.3. Ethical Principles Extracted from Academic Literature
4.4. Derivation of the Final 12 GeoAI Ethical Axes
5. Proposed Guidelines with Checklists
5.1. Geo-Privacy
5.2. Data Provenance and Quality
5.3. Spatial Fairness and Bias
5.4. Transparency
5.5. Accountability and Auditability
5.6. Safety, Security and Robustness
5.7. Human Oversight and Human-in-the-Loop
5.8. Public Benefit and Sustainability
5.9. Participation and Stakeholder Engagement
5.10. Lifecycle Governance
5.11. Misuse Prevention
5.12. Inclusion and Accessibility
6. Discussion
6.1. Interpretation and Contributions of the GeoAI Ethics Framework
6.2. Comparison with Existing Studies and Extensibility
6.3. Policy and Societal Implications
6.4. Academic Implications and International Comparison
6.5. Limitations and Future Research Directions
6.6. Overall Discussion
7. Conclusions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| ABC | Accountability–Bias–Clarity |
| AI | Artificial Intelligence |
| AIIA | AI Impact Assessment |
| AI RMF | Artificial Intelligence Risk Management Framework |
| AI/ML | Artificial Intelligence/Machine Learning |
| CATS | Conditional Adversarial Trajectory Synthesis |
| CRS | Coordinate Reference System |
| DEIA | Diversity, Equity, Inclusion, and Accessibility |
| DPIA | Data Protection Impact Assessment |
| DPIA/AIIA | Data Protection Impact Assessment/AI Impact Assessment |
| EDPB | European Data Protection Board |
| ELSI | Ethical, Legal, and Social Issues |
| EU | European Union |
| EU AI Act | European Union Artificial Intelligence Act |
| EUR-Lex | EUR-Lex: Access to European Union law |
| EthicalGEO | Ethics in Geospatial Data and Technologies initiative |
| FAIR | Findable, Accessible, Interoperable, Reusable |
| G20 | Group of Twenty |
| G7 | Group of Seven |
| GDPR | General Data Protection Regulation |
| GeoAI | Geospatial Artificial Intelligence |
| Geo-HMI | Geospatial Human–Machine Interaction |
| GeoLLMs/Geo-LLMs | Geospatial Large Language Models |
| GIS | Geographic Information System |
| GIT | Geographic Information Technologies |
| GML | Geography Markup Language |
| GSGF | Global Statistical Geospatial Framework |
| HIC | Human-in-Command |
| HITL | Human-in-the-Loop |
| HOTL | Human-on-the-Loop |
| HITL/HOTL/HIC | Human-in-the-Loop/Human-on-the-Loop/Human-in-Command |
| HLC | Home Location Clustering |
| HMI | Human–Machine Interaction |
| HPC | High-Performance Computing |
| HVD | High-Value Datasets |
| IASC | Inter-Agency Standing Committee |
| IEC | International Electrotechnical Commission |
| IEEE | Institute of Electrical and Electronics Engineers |
| INSPIRE | Infrastructure for Spatial Information in the European Community |
| ISO | International Organization for Standardization |
| ISO/IEC | International Organization for Standardization/International Electrotechnical Commission |
| ISO/TC 211 | ISO Technical Committee 211: Geographic information/Geomatics |
| ISO/IEC SC42 | ISO/IEC JTC 1/SC 42: Artificial Intelligence subcommittee |
| ITU | International Telecommunication Union |
| IoT | Internet of Things |
| IoU | Intersection over Union |
| LiDAR | Light Detection and Ranging |
| MAUP | Modifiable Areal Unit Problem |
| ML | Machine Learning |
| NIST | National Institute of Standards and Technology |
| NMAs | National Mapping Agencies |
| NSDI | National Spatial Data Infrastructure |
| OCHA | United Nations Office for the Coordination of Humanitarian Affairs |
| OECD | Organization for Economic Co-operation and Development |
| OGC | Open Geospatial Consortium |
| OGC API | Open Geospatial Consortium Application Programming Interface |
| PDEP | Policy Document-based Ethical Principle Extraction |
| PDEP-NcM | Policy Document-based Ethical Principle Extraction and Normativity Classification Methodology |
| PRISMA | Preferred Reporting Items for Systematic Reviews and Meta-Analyses |
| Q-FAIR | Quality, Findability, Accessibility, Interoperability, Reusability |
| QA | Quality Assurance |
| RACI | Responsible, Accountable, Consulted, Informed |
| RMF | Risk Management Framework |
| SC42 | Subcommittee 42: Artificial Intelligence |
| SDI | Spatial Data Infrastructure |
| SDIs | Spatial Data Infrastructures |
| SEPE | Scholarly Ethical Principle Extraction |
| SEPE-NcM | Scholarly Ethical Principle Extraction and Normativity Classification Methodology |
| TEVV | Testing, Evaluation, Verification, and Validation |
| TUL | Trajectory-User Linking |
| UAV | Unmanned Aerial Vehicle |
| UI | User Interface |
| UK | United Kingdom |
| UKGC | UK Geospatial Commission |
| UN | United Nations |
| UN-GGIM | United Nations Committee of Experts on Global Geospatial Information Management |
| UN-IGIF | United Nations Integrated Geospatial Information Framework |
| UNESCO | United Nations Educational, Scientific and Cultural Organization |
| WFS | Web Feature Service |
| WGIC | World Geospatial Industry Council |
| WMS | Web Map Service |
| XAI | Explainable Artificial Intelligence |
Appendix A
| Item | PRISMA 2020 Reporting Item | Yes/No/NA | Location in Manuscript (Section/Page/Para) |
|---|---|---|---|
| 1 | Identify the report as a systematic review in the title. | ||
| 2 | Provide a structured abstract covering objectives, methods, results, and implications. | ||
| 3 | Explain why the review is needed considering existing knowledge. | ||
| 4 | Objectives: State the review question(s)/aim(s) explicitly. | ||
| 5 | Eligibility criteria: Specify inclusion/exclusion criteria (and how studies were grouped for synthesis if applicable). | ||
| 6 | Information sources: List all databases/registers/websites and the dates last searched. | ||
| 7 | Search strategy: Provide full search strategies for each source (or indicate where they are provided). | ||
| 8 | Selection process: Describe how records were screened/selected (number of reviewers, independence, automation if any). | ||
| 9 | Data collection: Describe how data were extracted (number of reviewers, independence, contacting authors if relevant). | ||
| 10a | Data items—outcomes: List and define outcomes (or the main variables/constructs extracted). | ||
| 10b | Data items—other: List other extracted variables (e.g., publication year, document type, governance level) and assumptions. | ||
| 11 | Risk of bias/quality assessment: Describe methods used to assess risk of bias or quality (or justify if not performed). | ||
| 12 | Effect measures: Specify effect measures for each outcome (if quantitative synthesis conducted). | ||
| 13a | Synthesis—eligibility for each synthesis: Describe how studies/documents were decided to be included in each synthesis/theme. | ||
| 13b | Synthesis—data preparation: Describe any data preparation (e.g., normalization, coding, handling missing information). | ||
| 13c | Synthesis—presentation: Describe how results were tabulated/visualized (tables, matrices, maps, etc.). | ||
| 13d | Synthesis—methods: Describe synthesis approach (e.g., content analysis, thematic consolidation, frequency/coverage, reliability). | ||
| 13e | Exploring differences: Describe methods to explore heterogeneity/variation (e.g., by document type/region/time). | ||
| 13f | Sensitivity/robustness: Describe any sensitivity checks (e.g., coder agreement checks, alternative grouping). | ||
| 14 | Reporting bias: Describe assessment of reporting bias (if applicable) or state not applicable with rationale. | ||
| 15 | Certainty/confidence: Describe methods to assess certainty in evidence (if applicable) or state not applicable. | ||
| 16a | Study selection: Report the numbers of records screened and included, with a flow diagram (PRISMA flow). ||
| 16b | Exclusions: Cite/report records excluded at the full-text stage and the reasons for exclusion (if provided). ||
| 17 | Study characteristics: Present characteristics of included documents/studies. | ||
| 18 | Risk of bias in studies: Present results of bias/quality assessment (if performed). | ||
| 19 | Results of individual studies: Present results for each included study/document (as appropriate). | ||
| 20a | Synthesis results: Summarize results for each synthesis/theme (e.g., ethical axes derived). | ||
| 20b | Additional analyses: Report results of subgroup/robustness analyses (if performed). ||
| 21 | Reporting biases: Present results of reporting bias assessment (if performed). | ||
| 22 | Certainty of evidence: Present certainty/confidence for each main outcome (if assessed). | ||
| 23a | Discussion—interpretation: Interpret results in context of existing evidence. | ||
| 23b | Discussion—limitations (evidence): Discuss limitations of included evidence. | ||
| 23c | Discussion—limitations (process): Discuss limitations of the review methods. | ||
| 23d | Discussion—implications: Discuss implications for practice, policy, and future research. | ||
| 24a | Registration: Provide registration information (or state not registered). | ||
| 24b | Protocol: Indicate where the protocol is available (or state not available). | ||
| 24c | Amendments: Describe amendments to protocol/registration (or state none). | ||
| 25 | Support: Describe sources of financial/non-financial support and roles of funders. | ||
| 26 | Competing interests: Declare competing interests. | ||
| 27 | Data/materials availability: Report availability of data extraction forms, coded data, codebook, analysis code, and other materials. |
| Reports | Key Features |
|---|---|
| OECD (2025) [5] | |
| G20 (2019) [6] | |
| ITU (2025) [59] | |
| EDPB (2020) [16] | |
| EthicalGEO (2021) [34] | |
| WGIC (2021) [35] | |
| UKGC (2022) [36] | |
| NIST (2023) [60] | |
| ISO/IEC (2023a) [61] | |
| ISO/IEC (2023b) [62] | |
| OECD (2023) [63] | |
| OCHA (2025) [17] | |
| UNESCO (2021) [7] | |
| European Union (2024) [8] | |
| Council of Europe (2024) [9] | |
| UN-GGIM (2025) [22] | |
| Papers | Key Features |
|---|---|
| Hagendorff (2020) [40] | |
| Fjeld et al. (2020) [2] | |
| McKenzie et al. (2023) [29] | |
| Janowicz (2023) [30] | |
| Rao et al. (2023a) [37] | |
| Kang et al. (2023) [31] | |
| Rao et al. (2023b) [38] | |
| Corrêa et al. (2023) [47] | |
| Zhang et al. (2023) [64] | |
| Oluoch (2024) [32] | |
| Mai et al. (2025) [24] | |
| Mochizuki et al. (2025) [65] | |
| Kausika et al. (2025) [66] | |
| Ye et al. (2025) [39] | |
| Kang (2025) [67] | |
| Kijewski et al. (2025) [3] | |
| Reports | Extracted Ethical Principles |
|---|---|
| OECD (2025) [5] | Inclusive growth and sustainability; human rights and democratic values; privacy protection; fairness and non-discrimination; transparency and explainability; safety and robustness; accountability and responsibility; human oversight. |
| G20 (2019) [6] | Human-centered values; privacy protection; fairness and non-discrimination; transparency and explainability; safety and reliability; accountability; digital inclusion; international cooperation and interoperability. |
| ITU (2025) [59] | Transparency & Explainability; Accountability; Safety–Security–Reliability; Human-Centered & Human Rights; Fairness & Non-discrimination; Privacy & Data Protection; Interoperability; Sustainability; Content Authenticity & Provenance; Risk Management; Conformity Assessment; Human-in-the-Loop (HITL). |
| EDPB (2020) [16] | Data minimization; purpose limitation; anonymization and pseudonymization; privacy-by-design; legal basis and clear accountability; user rights assurance; transparency and auditability. |
| EthicalGEO (2021) [34] | Realize opportunities; understand impacts and ensure proportional decision making; do no harm; protect vulnerable groups; address bias; minimize intrusion; minimize data collection; protect privacy; prevent identification of individuals; provide accountability. |
| WGIC (2021) [35] | Privacy protection; fairness and non-discrimination; transparency and explainability; accountability; data quality and provenance; safety and security; human oversight; misuse prevention; interoperability standards. |
| UKGC (2022) [36] | Accountability; bias mitigation; clarity and transparency; public benefit orientation; trust and reliability (building public confidence); Q-FAIR data principles (Quality, Findability, Accessibility, Interoperability, Reusability); governance; reinforcement of data-subject rights (access, control, preference reflection); equitable data access and barrier reduction; stakeholder participation and open dialogue. |
| NIST (2023) [60] | Reliability and validity; safety and security; fairness; accountability; transparency and explainability; enhanced privacy; human oversight; risk-based governance; continuous monitoring. |
| ISO/IEC (2023a) [61] | Risk management and continuous improvement; transparency; fairness and equity; human oversight; privacy and freedom of expression; safety and security; clear accountability; regulatory compliance; stakeholder participation. |
| ISO/IEC (2023b) [62] | Accountability; lifecycle risk management; transparency and explainability; human oversight; privacy and data protection; data quality and provenance; security and robustness; internal auditing. |
| OECD (2023) [63] | Responsible AI; privacy and data governance; transparency; fairness; protection of human and fundamental rights; safety and security; accountability and self-regulation; international cooperation; intellectual-property protection; strengthened transparency of democratic values and procedures; safety, quality management, competence and trust building. |
| OCHA (2025) [17] | Purpose/Proportionality; Quality/Accuracy; Confidentiality; Transparency; Data Security; Personal Data Protection; Accountability; Fairness/Legitimacy; Human-Rights-Based; People-Centred/Inclusive; Retention/Destruction; Data-Subject Rights. |
| UNESCO (2021) [7] | Human rights and dignity; inclusion and equity; safety and security; privacy and data protection; fairness and non-discrimination; accountability and transparency; human oversight; sustainability and environmental ethics. |
| European Union (2024) [8] | Risk-based management; data governance and quality; transparency; human oversight; security and accuracy; accountability; documentation and logging; post-market monitoring and corrective measures. |
| Council of Europe (2024) [9] | Human rights, democracy, and rule of law; privacy and personal data protection; transparency and identification of AI-generated content; accountability; safety and security; equality and non-discrimination; public participation; independent oversight. |
| UN-GGIM (2025) [22] | Privacy and confidentiality; accountability and legal basis; data quality and provenance; interoperability and standardization; unified geocoding and spatial fairness; secure access; cooperation and capacity building; sustainable financing. |
| Papers | Extracted Ethical Principles |
|---|---|
| Hagendorff (2020) [40] | Privacy protection; fairness and non-discrimination; accountability; transparency and explainability; safety and security; human oversight; inclusion and social solidarity; public good and sustainability. |
| Fjeld et al. (2020) [2] | Privacy; fairness; accountability; transparency and explainability; safety and security; human-centered values; professional responsibility. |
| McKenzie et al. (2023) [29] | Geo-privacy; data minimization and anonymization; prevention of linkage risks; provenance and transparency; privacy-by-design; technical safeguards (differential privacy, geo-masking); independent oversight and education. |
| Janowicz (2023) [30] | Sustainability; privacy and data protection; fairness and representativeness; transparency and explainability; autonomous consent; accountability; reproducibility; social and environmental justice. |
| Rao et al. (2023a) [37] | Privacy protection; data security; utility balance; federated and decentralized learning; safety and robustness; misuse prevention; model auditing and monitoring. |
| Kang et al. (2023) [31] | Geo-privacy protection; bias mitigation and fairness; transparency and explainability; provenance and version control; human oversight; map integrity; misuse prevention. |
| Rao et al. (2023b) [38] | Geo-privacy protection; data synthesis and privacy preservation; privacy–utility trade-off; fairness; secure data sharing; human-centered design. |
| Corrêa et al. (2023) [47] | Privacy and data protection; fairness and non-discrimination; transparency; accountability; safety and reliability; human autonomy and participation; inclusion; sustainability and public good. |
| Zhang et al. (2023) [64] | Map integrity; provenance and source labeling; uncertainty representation; visualization fairness; accountability and auditing; human oversight; misuse prevention. |
| Oluoch (2024) [32] | Privacy; fairness and non-discrimination; transparency and explainability; accountability; autonomy and consent; inclusion; risk minimization; social participation and governance. |
| Mai et al. (2025) [24] | Fairness and mitigation of geographic bias; privacy protection; data security; sustainability; interpretability; regulatory compliance; responsible data sharing. |
| Mochizuki et al. (2025) [65] | Equity and inclusion; data protection and transparency; bias prevention; protection of human autonomy; environmental sustainability; stakeholder participation; accountability and verifiability. |
| Kausika et al. (2025) [66] | Human-centered design; transparency and explainability; accountability; provenance and quality management; privacy protection; fairness and bias mitigation; misuse prevention; collaborative governance. |
| Ye et al. (2025) [39] | Geo-privacy protection; data security; fairness and bias mitigation; reliability assessment; explainability; human-centered design; participation and governance. |
| Kang (2025) [67] | Geo-privacy; data security; fairness; explainability; human-centered design; transparent governance; safe human–AI interaction. |
| Kijewski et al. (2025) [3] | Accountability and auditing; transparency; fairness; safety; regulatory compliance; independent verification and evaluation; education and capacity building; stakeholder participation. |
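The prioritization stage (Section 3.4.3) rests on how often each normalized principle recurs across the documents tabulated above. The following sketch illustrates that coverage counting with a toy excerpt; the document list, principle lists, and alias map are illustrative stand-ins, not the study's actual extraction matrix.

```python
from collections import Counter

# Toy excerpts standing in for the full extraction matrix (illustrative only)
extracted = {
    "OECD (2025)": ["privacy", "fairness", "transparency", "accountability"],
    "UNESCO (2021)": ["privacy", "fairness", "human oversight", "sustainability"],
    "Kang et al. (2023)": ["geo-privacy", "fairness", "transparency"],
}

# Normalize domain-specific variants before counting (assumed mapping)
ALIAS = {"geo-privacy": "privacy"}

counts = Counter(
    ALIAS.get(p, p) for principles in extracted.values() for p in principles
)

# Principles covered by the most documents surface first
for principle, n in counts.most_common(3):
    print(f"{principle}: appears in {n} documents")
```

In the real analysis the same aggregation runs over all 16 policy reports and 16 papers, so frequently co-occurring principles such as privacy, fairness, and transparency naturally emerge as candidate axes.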
References
- Jobin, A.; Ienca, M.; Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
- Fjeld, J.; Achten, N.; Hilligoss, H.; Nagy, A.; Srikumar, M. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI; Berkman Klein Center Research Publication: Cambridge, MA, USA, 2020. [Google Scholar] [CrossRef]
- Kijewski, S.; Ronchi, E.; Vayena, E. The rise of checkbox AI ethics: A review. AI Ethics 2025, 5, 1931–1940. [Google Scholar] [CrossRef]
- Falegnami, A.; Tomassi, A.; Corbelli, G.; Nucci, F.S.; Romano, E. A generative artificial-intelligence-based workbench to test new methodologies in organisational health and safety. Appl. Sci. 2024, 14, 11586. [Google Scholar] [CrossRef]
- OECD. Recommendation of the Council on Artificial Intelligence. Available online: https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449 (accessed on 11 November 2025).
- OECD. G20 AI Principles. Published by G20 Ministerial Meeting on Trade and Digital Economy. Available online: https://oecd.ai/en/wonk/documents/g20-ai-principles (accessed on 11 November 2025).
- UNESCO. Recommendation on the Ethics of Artificial Intelligence. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000381137 (accessed on 10 November 2025).
- EU. Regulation (EU) 2024/1689 of the European Parliament and of the Council. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689 (accessed on 11 November 2025).
- Council of Europe. Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law; Council of Europe Treaty Series, No. 225; Council of Europe: Strasbourg, France, 2024. [Google Scholar]
- Janowicz, K.; Gao, S.; McKenzie, G.; Hu, Y.; Bhaduri, B. GeoAI: Spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond. Int. J. Geogr. Inf. Sci. 2020, 34, 625–636. [Google Scholar] [CrossRef]
- Liu, X.; Chen, M.; Claramunt, C.; Batty, M.; Kwan, M.P.; Senousi, A.M.; Cheng, T.; Strobl, J.; Coltekin, A.; Wilson, J.; et al. Geographic information science in the era of geospatial big data: A cyberspace perspective. Innovation 2022, 3, 100279. [Google Scholar] [CrossRef] [PubMed]
- Guo, H. Big Earth data: A new frontier in Earth and information sciences. Big Earth Data 2017, 1, 4–20. [Google Scholar] [CrossRef]
- Chen, Y. Spatial autocorrelation equation based on Moran’s index. Sci. Rep. 2023, 13, 19296. [Google Scholar] [CrossRef]
- Griffith, D.A. Understanding spatial autocorrelation: An everyday metaphor and additional new interpretations. Geographies 2023, 3, 543–562. [Google Scholar] [CrossRef]
- Bavaud, F. Measuring and Testing multivariate spatial autocorrelation in a weighted setting: A kernel approach. Geogr. Anal. 2024, 56, 573–599. [Google Scholar] [CrossRef]
- EDPB. Guidelines 04/2020 on the Use of Location Data and Contact Tracing Tools in the Context of the COVID-19 Outbreak; European Data Protection Board: Brussels, Belgium, 2020. [Google Scholar]
- OCHA. OCHA Data Responsibility Guidelines; United Nations Office for the Coordination of Humanitarian Affairs: New York, NY, USA, 2025. [Google Scholar]
- ISO/TC 211. Geographic Information/Geomatics; ISO: Geneva, Switzerland, 2009.
- OGC. 3D Tiles Specification 1.0. Available online: https://docs.ogc.org/cs/18-053r2/18-053r2.html (accessed on 11 November 2025).
- European Commission. Commission Staff Working Document Evaluation of DIRECTIVE 2007/2/EC Establishing an Infrastructure for Spatial Information in the European Community (INSPIRE); European Commission: Brussels, Belgium, 2022. [Google Scholar]
- UN-IGIF. United Nations Integrated Geospatial Information Framework, A Strategic Guide to Develop and Strengthen National Geospatial Information Management, Part 1: Overarching Strategy. In United Nations Integrated Geospatial Information Framework; UN-IGIF: New York, NY, USA, 2023. [Google Scholar]
- UN-GGIM. The Global Statistical Geospatial Framework (GSGF); Department of Economic and Social Affairs: New York, NY, USA, 2025. [Google Scholar]
- Li, W.; Arundel, S.T.; Gao, S.; Goodchild, M.F.; Hu, Y.; Wang, S.; Zipf, A. GeoAI for Science and the Science of GeoAI. J. Spat. Inf. Sci. 2024, 29, 1–33. [Google Scholar] [CrossRef]
- Mai, G.; Xie, Y.; Jia, X.; Lao, N.; Rao, J.; Zhu, Q.; Liu, Z.; Chiang, Y.-Y.; Jiao, J. Towards the Next Generation of Geospatial Artificial Intelligence. Int. J. Appl. Earth Obs. Geoinf. 2025, 136, 104368. [Google Scholar] [CrossRef]
- Wang, S.; Huang, X.; Liu, P.; Zhang, M.; Biljecki, F.; Hu, T.; Fu, X.; Liu, L.; Liu, X.; Wang, R. Mapping the landscape and roadmap of geospatial artificial intelligence (GeoAI) in quantitative human geography: An extensive systematic review. Int. J. Appl. Earth Obs. Geoinf. 2024, 128, 103734. [Google Scholar] [CrossRef]
- Saidi, S.; Idbraim, S.; Karmoude, Y.; Masse, A.; Arbelo, M. Deep-learning for change detection using multi-modal fusion of remote sensing images: A review. Remote Sens. 2024, 16, 3852. [Google Scholar] [CrossRef]
- Kazanskiy, N.; Khabibullin, R.; Nikonorov, A.; Khonina, S. A Comprehensive Review of Remote Sensing and Artificial Intelligence Integration: Advances, Applications, and Challenges. Sensors 2025, 25, 5965. [Google Scholar] [CrossRef]
- Hoffmann, J.; Bauer, P.; Sandu, I.; Wedi, N.; Geenen, T.; Thiemert, D. Destination Earth–A digital twin in support of climate services. Clim. Serv. 2023, 30, 100394. [Google Scholar] [CrossRef]
- McKenzie, G.; Zhang, H.; Gambs, S. Privacy and ethics in GeoAI. In Handbook of Geospatial Artificial Intelligence; CRC Press: Boca Raton, FL, USA, 2023; pp. 388–405. [Google Scholar]
- Janowicz, K. Philosophical foundations of geoai: Exploring sustainability, diversity, and bias in geoai and spatial data science. In Handbook of Geospatial Artificial Intelligence; CRC Press: Boca Raton, FL, USA, 2023; pp. 26–42. [Google Scholar]
- Kang, Y.; Gao, S.; Roth, R.E. Artificial intelligence studies in cartography: A review and synthesis of methods, applications, and ethics. Cartogr. Geogr. Inf. Sci. 2024, 51, 599–630. [Google Scholar] [CrossRef]
- Oluoch, I. Crossing Boundaries: The Ethics of AI and Geographic Information Technologies. ISPRS Int. J. Geo-Inf. 2024, 13, 87. [Google Scholar] [CrossRef]
- Paolanti, M.; Tiribelli, S.; Giovanola, B.; Mancini, A.; Frontoni, E.; Pierdicca, R. Ethical framework to assess and quantify the trustworthiness of artificial intelligence techniques: Application case in remote sensing. Remote Sens. 2024, 16, 4529. [Google Scholar] [CrossRef]
- EthicalGEO. Locus Charter; EthicalGEO: New York, NY, USA, 2021. [Google Scholar]
- WGIC. Geospatial AI/ML Applications and Policies: A Global Perspective; World Geospatial Industry Council: The Hague, The Netherlands, 2021. [Google Scholar]
- UKGC. Building Public Confidence in Location Data—The ABC of Ethical Use; UK Geospatial Commission: London, UK, 2022. [Google Scholar]
- Rao, J.; Gao, S.; Mai, G.; Janowicz, K. Building privacy-preserving and secure geospatial artificial intelligence foundation models. In Proceedings of the 31st ACM International Conference on Advances in Geographic Information Systems, Hamburg, Germany, 13–16 November 2023; pp. 1–4. [Google Scholar]
- Rao, J.; Gao, S.; Zhu, S. CATS: Conditional Adversarial Trajectory Synthesis for privacy-preserving trajectory data publication using deep learning approaches. Int. J. Geogr. Inf. Sci. 2023, 37, 2538–2574. [Google Scholar] [CrossRef]
- Ye, X.; Du, J.; Li, X.; Shaw, S.-L.; Fu, Y.; Dong, X.; Zhang, Z.; Wu, L. Human-centered GeoAI Foundation Models: Where GeoAI Meets Human Dynamics. Urban Inform. 2025, 4, 2. [Google Scholar] [CrossRef]
- Hagendorff, T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds Mach. 2020, 30, 99–120. [Google Scholar] [CrossRef]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
- Krippendorff, K. Content Analysis: An Introduction to Its Methodology; SAGE publications: Thousand Oaks, CA, USA, 2018. [Google Scholar]
- Jiang, H.; Peng, M.; Zhong, Y.; Xie, H.; Hao, Z.; Lin, J.; Ma, X.; Hu, X. A Survey on Deep Learning-Based Change Detection from High-Resolution Remote Sensing Images. Remote Sens. 2022, 14, 1552. [Google Scholar] [CrossRef]
- Bai, T.; Yin, D.; Cheng, G.; Han, J. Deep learning for change detection in remote sensing: A review. Geo-Spat. Inf. Sci. 2023, 26, 262–288. [Google Scholar] [CrossRef]
- European Commission. High-Value Datasets Best Practices Report; European Commission: Brussels, Belgium, 2024; Available online: data.europa.eu (accessed on 11 November 2025).
- UN-GGIM. A Guide to the Role of Standards in Geospatial Information Management; United Nations Committee of Experts on Global Geospatial Information Management: New York, NY, USA, 2015. [Google Scholar]
- Corrêa, N.K.; Galvão, C.; Santos, J.W.; Del Pino, C.; Pinto, E.P.; Barbosa, C.; Massmann, D.; Mambrini, R.; Galvao, L.; Terem, E.; et al. Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns 2023, 4, 100857. [Google Scholar] [CrossRef] [PubMed]
- Zaidan, E.; Ibrahim, I.A. AI governance in a complex and rapidly changing regulatory landscape: A global perspective. Humanit. Soc. Sci. Commun. 2024, 11, 1121. [Google Scholar] [CrossRef]
- Kocak, Z. Publication ethics in the era of artificial intelligence. J. Korean Med. Sci. 2024, 39, e249. [Google Scholar] [CrossRef]
- De Montjoye, Y.-A.; Hidalgo, C.A.; Verleysen, M.; Blondel, V.D. Unique in the crowd: The privacy bounds of human mobility. Sci. Rep. 2013, 3, 1376. [Google Scholar] [CrossRef]
- Sohrabi, C.; Franchi, T.; Mathew, G.; Kerwan, A.; Nicola, M.; Griffin, M.; Agha, M.; Agha, R. PRISMA 2020 statement: What’s new and the importance of reporting guidelines. Int. J. Surg. 2021, 88, 105918. [Google Scholar] [CrossRef]
- Sarkis-Onofre, R.; Catalá-López, F.; Aromataris, E.; Lockwood, C. How to properly use the PRISMA Statement. Syst. Rev. 2021, 10, 117. [Google Scholar] [CrossRef]
- Schreier, M. Qualitative Content Analysis in Practice; SAGE publications: Thousand Oaks, CA, USA, 2012. [Google Scholar]
- Bowen, G.A. Document analysis as a qualitative research method. Qual. Res. J. 2009, 9, 27–40. [Google Scholar] [CrossRef]
- European Commission. Ethics Guidelines for Trustworthy AI; European Commission: Brussels, Belgium, 2019. [Google Scholar]
- Floridi, L.; Cowls, J. A unified framework of five principles for AI in society. In Machine Learning and the City: Applications in Architecture and Urban Design; John Wiley & Sons: Hoboken, NJ, USA, 2022; pp. 535–545. [Google Scholar] [CrossRef]
- Dwork, C. Differential privacy: A survey of results. In Proceedings of the International Conference on Theory and Applications of Models of Computation, Xi’an, China, 25–29 April 2008; pp. 1–19. [Google Scholar]
- Gebru, T.; Morgenstern, J.; Vecchione, B.; Vaughan, J.W.; Wallach, H.; Daume, H., III; Crawford, K. Datasheets for Datasets. Commun. ACM 2021, 64, 86–92. [Google Scholar] [CrossRef]
- ITU. The Annual AI Governance Report 2025: Steering the Future of AI; International Telecommunication Union: Geneva, Switzerland, 2025. [Google Scholar]
- NIST. Artificial Intelligence Risk Management Framework (AI RMF 1.0); National Institute of Standards and Technology: Gaithersburg, MD, USA, 2023. [Google Scholar]
- ISO/IEC. Information Technology—Artificial Intelligence—Guidance on Risk Management; ISO/IEC: Geneva, Switzerland, 2023. [Google Scholar]
- ISO/IEC. Information Technology—Artificial Intelligence—Management System; ISO/IEC: Geneva, Switzerland, 2023. [Google Scholar]
- Organization for Economic Cooperation and Development (OECD). G7 Hiroshima Process on Generative Artificial Intelligence (AI) Towards a G7 Common Understanding on Generative AI; Organization for Economic Cooperation and Development (OECD): Paris, France, 2023. [Google Scholar]
- Zhang, Q.; Kang, Y.; Roth, R.E. The Ethics of AI-Generated Maps: A Study of DALL·E 2 and Implications for Cartography. In Proceedings of the 12th International Conference on Geographic Information Science, Leeds, UK, 12–15 September 2023; p. 93. [Google Scholar]
- Mochizuki, Y.; Bruillard, E.; Bryan, L. The ethics of AI or techno-solutionism? UNESCO’s policy guidance on AI in education. Br. J. Sociol. Educ. 2025, 46, 1–22. [Google Scholar] [CrossRef]
- Kausika, B.B.; Altena, V.V. GeoAI in Topographic Mapping: Navigating the Future of Opportunities and Risks. ISPRS Int. J. Geo-Inf. 2025, 14, 313. [Google Scholar] [CrossRef]
- Kang, Y. Human-Centered Geospatial Data Science. arXiv 2025, arXiv:2501.05595. [Google Scholar] [CrossRef]
| Step | Processing Procedure | Technical Details/Rules | Output |
|---|---|---|---|
| 1. Document Acquisition | Collect full text of reports | Official and most recent versions issued by international organizations or government bodies | Finalized analysis corpus |
| 2. Structural Analysis | Detect structural elements such as table of contents, sections, bullet points, and tables | Search for key signal terms such as “Principles,” “Guidelines,” “Requirements,” “Articles,” “Annex,” etc. | List of candidate sections |
| 3. Normativity Assessment | Detect linguistic strength | Statements expressed in mandatory terms (e.g., shall, must, or explicit prohibitions) were classified as core ethical principles, whereas statements framed in advisory terms (e.g., should or recommend) were treated as indirect ethical principles | Normativity tag for each item |
| 4. Ethical Principle Extraction | Extract sentences or items | Items presented as bullet points, tables, or formal clauses were treated as core ethical principles, whereas items appearing within implementation- or governance-related sections were treated as indirect ethical principles | Draft list of extracted principles |
| 5. Normalization and Labeling | Integrate synonyms/equivalent concepts | Merge synonymous terms (e.g., privacy = data protection, transparency = explainability) and normalize them into unified GeoAI ethical principles | Standardized ethical axes |
| 6. Metadata Attachment | Link document name, section, and page references | Ensure traceability through evidence-based documentation and auditability | Final extraction matrix |
| 7. Quality Validation | Conduct double review and resolve inconsistencies | Reach consensus according to rules of normative-strength assessment | Reliability-assured results |
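The normativity-assessment rule in Step 3 can be sketched as a simple keyword classifier: mandatory modals mark core principles, advisory modals mark indirect ones. The keyword lists and function name below are illustrative assumptions for exposition, not the actual extraction tooling.

```python
import re

# Mandatory phrasing (shall/must/prohibitions) -> core ethical principle;
# advisory phrasing (should/recommend/encourage) -> indirect principle.
# Keyword lists are illustrative and deliberately minimal.
MANDATORY = re.compile(r"\b(shall|must|is prohibited|are prohibited)\b", re.IGNORECASE)
ADVISORY = re.compile(r"\b(should|recommend(?:s|ed)?|encourage(?:s|d)?)\b", re.IGNORECASE)

def classify_normativity(statement: str) -> str:
    """Return 'core', 'indirect', or 'unclassified' for a policy statement."""
    if MANDATORY.search(statement):
        return "core"
    if ADVISORY.search(statement):
        return "indirect"
    return "unclassified"
```

In practice such a first pass would only pre-tag candidates; the double review in Step 7 resolves borderline wording.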
| Step | Input | Main Rules/Processing Procedure | Output |
|---|---|---|---|
| 1. Corpus Finalization | Full texts of selected papers, metadata | Confirm latest versions and record document types | Corpus list |
| 2. Document Segmentation | Original text | Tag sections (focus on Discussion/Implications/Limitations) | Section map |
| 3. Initial Candidate Extraction | Section text | First-round scan for normative vocabulary, lists, and tables | List of candidate items |
| 4. Normativity Classification | Candidate items | Apply Core/Indirect classification rules | Categorized items |
| 5. Normalization and Codebook Development | Categorized items | Synonymous or conceptually equivalent terms were merged and assigned unified labels in the codebook, resulting in normalized GeoAI ethical principles | Standardized items |
| 6. Evidence Tagging and Output Generation | Standardized items | Attach evidence (paper title, section, page, text snippet) | Paper-specific cards and integrated matrix |
| 7. Quality Assurance | Complete output | Conduct double coding, consensus process, and duplicate merging | Final QA-validated version |
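The normalization rule in Step 5 amounts to a codebook lookup that maps synonymous terms to one unified label. The sketch below uses the example equivalences named in the text (privacy = data protection, transparency = explainability); the dictionary layout and function name are assumptions for illustration.

```python
# Minimal codebook mapping extracted terms to standardized GeoAI ethical
# axes; entries shown are examples, not the full codebook.
CODEBOOK = {
    "privacy": "Geo-privacy",
    "data protection": "Geo-privacy",
    "transparency": "Transparency",
    "explainability": "Transparency",
    "xai": "Transparency",
}

def normalize_label(term: str) -> str:
    """Map an extracted term to its standardized axis; pass through unknowns."""
    return CODEBOOK.get(term.strip().lower(), term)
```

Unknown terms pass through unchanged so they can be flagged for codebook extension during quality assurance.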
| Stage & Reason | Policy (Counts) | Scholarly (Counts) | Total |
|---|---|---|---|
| Identified counts | 102 (policy/standards/guidance) | 108 (scholarly) | 210 |
| Deduplication (n = 45) | | | |
| Removed counts | 21 | 24 | 45 |
| Screening (n = 115) | | | |
| Topic mismatch with AI/GeoAI ethics | 31 | 29 | 60 |
| Non-primary/press/secondary | 11 | 0 | 11 |
| Insufficient ethics/governance content | 16 | 15 | 31 |
| Outside time window (<2019) or outlet unclear | 0 | 13 | 13 |
| Removed counts | 58 | 57 | 115 |
| Eligibility (n = 18) | | | |
| Not primary/official source | 3 | 0 | 3 |
| Insufficient operationalization (governance/eval/audit) | 4 | 0 | 4 |
| Contribution | 0 | 9 | 9 |
| Weak linkage to GeoAI ethics axes | 0 | 2 | 2 |
| Removed counts | 7 | 11 | 18 |
| Inclusion (n = 32) | | | |
| Final selection counts | 16 | 16 | 32 |
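The stage totals in the table reconcile arithmetically from identification to inclusion; the variable names below are illustrative, and the figures come directly from the table.

```python
# PRISMA 2020 flow arithmetic for the selection table above.
identified = 102 + 108               # policy + scholarly records identified
after_dedup = identified - 45        # duplicates removed
after_screening = after_dedup - 115  # screening-stage exclusions
included = after_screening - 18      # eligibility-stage exclusions
```

Each intermediate count (165 after deduplication, 50 after screening) matches the 32 finally included documents once the 18 eligibility exclusions are subtracted.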
| Ethical Axis | Concept |
|---|---|
| Geo-privacy | Encompasses purpose limitation, minimal collection, and de-identification (anonymization/pseudonymization) of location, trajectory, and proximity data, as well as the prevention of re-identification and linkage. Since privacy-by-design, data minimization, and prohibition of secondary use all fall under the protection of spatial data privacy, these are collectively referred to as Geo-privacy. |
| Data Provenance and Quality | Integrates provenance (lineage and transformation history) with explicit geospatial data-quality management and AI-ready standardization, supported by metadata and interoperability. Provenance enables traceability of quality, but quality must additionally be assessed as fitness-for-purpose relative to production intent and downstream decision context (i.e., who uses the data and for what decisions). |
| Spatial Fairness and Bias | Covers mitigation of geographic bias, representativeness issues, and the Modifiable Areal Unit Problem (MAUP). To explicitly capture the spatial dimension of fairness-related concerns, the term is consolidated as Spatial Fairness. |
| Transparency | Comprises explainability (XAI), disclosure of interaction, and justification of decisions. Since explainability and disclosure are both means of achieving comprehensible openness, these are unified under Transparency. |
| Accountability and Auditability | Encompasses audit trails (logging and documentation), mechanisms for appeal and remedy, and clarification of responsible parties. As auditing, remediation, and responsibility all represent the execution of accountability, they are collectively integrated under Accountability. |
| Safety, Security and Robustness | Includes cybersecurity and physical safety, robustness, post-deployment monitoring, and corrective measures. Since security, robustness, and monitoring all aim to ensure protection from harm, they are integrated under Safety. |
| Human Oversight and Human-in-the-Loop | Encompasses Human-in-the-Loop systems, intervention or override rights, and responsible deployment. As all these forms of involvement share the common element of human supervision, they are collectively expressed as Human Oversight. |
| Public Benefit and Sustainability | Captures sustainability-oriented public benefit, including social welfare and environmental sustainability (e.g., energy and resource efficiency). In GeoAI, sustainability provides an intergenerational framing of public benefit, emphasizing long-term societal value and responsible computational and data-resource use. |
| Participation and Stakeholder Engagement | Encompasses community participation, protection of vulnerable groups, and prevention of stigmatization or exclusion. Since the goals of procedural participation and protection both aim to prevent harm through stakeholder involvement, they are consolidated under Participation. |
| Lifecycle Governance | Includes lifecycle risk management, impact assessment, documentation, and monitoring. As governance throughout the design–deployment–operation–decommissioning process is central, this is summarized as Lifecycle Governance. |
| Misuse Prevention | Covers the prevention of surveillance and geofencing (location-based virtual boundary controls) misuse, as well as the avoidance of deceptive or falsified mapping practices. Since all these forms of unethical use share the goal of preventing abuse, they are grouped under Misuse Prevention. |
| Inclusion and Accessibility | Includes accessibility, digital literacy, and equitable benefit distribution. As disparities in access and capability are addressed through inclusiveness, this dimension is concisely represented as Inclusion. |
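As a minimal illustration of the de-identification tactics named under Geo-privacy, point coordinates can be coarsened to a grid cell so that exact positions, and hence linkage and re-identification, are suppressed. The 0.01-degree cell size and function name below are arbitrary assumptions for illustration, not a recommended protection level.

```python
# Snap a coordinate pair to the lower-left corner of its grid cell.
# Any two points inside the same cell become indistinguishable.
def coarsen(lat: float, lon: float, cell: float = 0.01) -> tuple:
    """Coarsen a (lat, lon) pair to a grid of `cell`-degree resolution."""
    return (cell * int(lat // cell), cell * int(lon // cell))
```

Real deployments would pair such coarsening with formal guarantees (e.g., the differential-privacy mechanisms surveyed by Dwork, cited above) rather than rely on grid snapping alone.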
| Ethical Axis | Core Citations | Indirect Citations | Total Documents (n = 32) | Coverage Ratio (%) | Krippendorff’s α |
|---|---|---|---|---|---|
| 1. Geo-privacy | 22 | 7 | 26 | 81.3 | 0.84 |
| 2. Data Provenance and Quality | 18 | 9 | 24 | 75.0 | 0.82 |
| 3. Spatial Fairness and Bias | 17 | 8 | 23 | 71.9 | 0.80 |
| 4. Transparency | 21 | 6 | 25 | 78.1 | 0.86 |
| 5. Accountability and Auditability | 19 | 7 | 24 | 75.0 | 0.83 |
| 6. Safety, Security and Robustness | 20 | 8 | 25 | 78.1 | 0.85 |
| 7. Human Oversight and Human-in-the-Loop | 15 | 6 | 20 | 62.5 | 0.79 |
| 8. Public Benefit and Sustainability | 14 | 10 | 21 | 65.6 | 0.81 |
| 9. Participation and Stakeholder Engagement | 13 | 11 | 20 | 62.5 | 0.78 |
| 10. Lifecycle Governance | 16 | 8 | 22 | 68.8 | 0.82 |
| 11. Misuse Prevention | 12 | 9 | 18 | 56.3 | 0.76 |
| 12. Inclusion and Accessibility | 11 | 10 | 18 | 56.3 | 0.77 |
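The Coverage Ratio column is consistent with dividing each axis’s document count by the corpus size (n = 32) and rounding half-up to one decimal place; the sketch below reproduces that calculation with illustrative names, and the rounding convention is an inference from the reported values.

```python
from decimal import Decimal, ROUND_HALF_UP

N_CORPUS = 32  # total documents in the final inclusion set

def coverage_ratio(total_documents: int, n: int = N_CORPUS) -> float:
    """Percentage of corpus documents addressing an axis, one decimal, half-up."""
    pct = Decimal(100 * total_documents) / Decimal(n)
    return float(pct.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP))
```

For example, Geo-privacy appears in 26 of 32 documents, i.e., 81.3%, matching the first row of the table.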
A. Data Collection and Use Principles (Law, Rights, and Purpose Limitation)
B. Privacy-by-Design (System Architecture and Protection Technologies)
C. Risk Assessment and Validation (Indicators, Testing, Reporting)
D. Governance and Communication (Documentation, Auditing, and Trust Building)

A. Recording Data Sources and Context (Collection Stage)
B. Transformation and Processing History (Preprocessing Stage)
C. Training Data and Labeling (Learning Stage)
D. Model and Pipeline Documentation (Modeling Stage)
E. Deployment, Operation, and Auditing (Operational Stage)
F. Disclosure, Communication, and Interoperability (Cross-Stage Requirements)

A. Data Representativeness and Collection Planning
B. Preprocessing, Aggregation, and Sampling
C. Modeling and Evaluation (Geographical Generalization and Performance Gaps)
D. Results, Visualization, and Decision Making
E. Governance, Participation, and Reporting

A. Disclosure and Notice (Information and Notice)
B. Data and Model Provenance (Documentation and Lineage)
C. Output and Visualization Explanation (Readable Cartography)
D. Verification and Reporting (TEVV and Risk Communication)
E. Accessibility and Communication (Stakeholder Communication)

A. Role and Governance Definition (Who Is Responsible?)
B. Traceability and Documentation
C. Audit, Oversight and Remedy
D. Lifecycle Accountability
E. GeoAI-Specific Accountability (Spatial Accountability)

A. Design and Data Safety (Integrity and Security)
B. Model Robustness (Defense Against Attacks and Contamination)
C. TEVV and Monitoring (Verification, Surveillance, and Continuous Improvement)
D. Incident Response and Remedy
E. Map Integrity and Dissemination Safety
F. Organizational Governance and Supply Chain Management

A. Role and Authority Definition (Governance)
B. Intervention, Interruption and Escalation (Operational Procedures)
C. Explanation and Uncertainty Display (User Interface/Maps)
D. Verification and Monitoring (TEVV and Drift Management)
E. Field Operations (Geo-HMI and Human Participation)
F. Governance Integration (Policy and Institutional Systems)

A. Alignment with Public Value (“Who Benefits?”)
B. Environmental and Resource Sustainability
C. Non-Maleficence and Proportionality
D. Building Social Trust (Verification, Accountability, Participation)
E. Equity, Inclusion, and Capacity Building

A. Stakeholder Mapping and Participation Planning
B. Rights Notice, Consent, and Participatory Control
C. Community Rights and Protection of Vulnerable Groups
D. Human-Centered Design and Feedback Loops
E. Transparent Communication, Documentation, and External Review
F. Equity, Inclusion, and Capacity Building

A. Governance Design (Policy, Roles, and Accountability)
B. Design Stage (Context Identification and Impact Assessment)
C. Data and Learning (Quality, Privacy, Provenance)
D. Verification and Deployment (TEVV and Documentation)
E. Operation and Oversight (Monitoring, Drift, Incident Response)
F. Decommissioning and Withdrawal (Retention, Deletion, Transition)
G. Supply Chain and Foundation Models
H. Participation and Communication (Participatory Governance)

A. Defining Prohibitions and Proportionality (Policy and Proportionality)
B. Data and Model Protection (Design and Guardrails)
C. Output Integrity (Map Integrity)
D. Traceability, Auditing, and Post-Response (Trace, Audit and Remedy)
E. Transparent Communication and Participation

A. Accessibility and Barrier Reduction
B. Representativeness and Equity
C. Rights Notice and Participatory Control
D. Participation and DEIA Integration
E. Public Benefit and Sustainability Alignment
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2026 by the author. Published by MDPI on behalf of the International Society for Photogrammetry and Remote Sensing. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Yoo, S. An Operational Ethical Framework for GeoAI: A PRISMA-Based Systematic Review of International Policy and Scholarly Literature. ISPRS Int. J. Geo-Inf. 2026, 15, 51. https://doi.org/10.3390/ijgi15010051