Editorial

Unlocking the Potential of Explainable Artificial Intelligence in Remote Sensing Big Data

Peng Liu, Lizhe Wang and Jun Li
1 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 School of Computer Science, China University of Geosciences, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(23), 5448; https://doi.org/10.3390/rs15235448
Submission received: 2 November 2023 / Accepted: 17 November 2023 / Published: 22 November 2023
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI) in Remote Sensing Big Data)

1. Introduction

In the ever-evolving landscape of artificial intelligence and big data, the concept of explainable artificial intelligence (XAI) [1] has emerged as a crucial factor in ensuring transparency, accountability, and trustworthiness. In this Special Issue, we delve into the significance of XAI in the realm of remote sensing big data and its implications for various applications, emphasizing its role in bridging the gap between advanced algorithms and human understanding.
As the volume of remote sensing data continues to surge, and as tasks range from initial image processing to high-level understanding and knowledge discovery [2], the need for advanced AI systems to interpret, analyze, and make decisions from these data has become increasingly apparent. However, these AI systems often operate as “black boxes”, making it challenging for humans to comprehend their decision-making processes. At its core, XAI addresses this issue by making AI models or systems more transparent and interpretable. As discussed in several surveys [3,4], it provides insights into why a particular decision was made, enabling humans to trust and act upon AI recommendations. By achieving this, XAI paves the way for a broader acceptance and integration of AI in decision-making processes across various domains.
Since DARPA [1] introduced the concept of XAI, the associated algorithms have made remarkable progress. Typical models and algorithms in XAI research mainly involve logic rules, decision attribution, and the internal structure representation of deep learning models. Logic rules for XAI mainly refer to rule-extraction methods (such as regional tree regularization [5] and DeepRED [6]) and knowledge graph methods (such as knowledge-aware path recurrent networks [7] and knowledge embedding [8]). Decision attribution for XAI assesses the influence of the input features on the output to determine their importance in decision making, and mainly includes methods based on perturbation [9,10], backpropagation [11], and agent models [12]. Internal structure representation [13] aims to understand the role and organization of the data flowing through a network, including interpreting the hidden representations of intermediate layers, the behavior of individual neurons, etc. In the field of remote sensing, much XAI research concerns decision attribution, as in [14,15,16], perhaps because this field is most interested in what drives the quality of information extraction.
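To make the attribution family concrete, the following minimal sketch implements occlusion-based (perturbation) attribution in the spirit of the prediction difference analysis of [9]: a neutral patch is slid across the image, and the resulting drop in the target-class score marks how strongly the decision depends on each region. The `toy_model` stand-in and all parameter values are assumptions made for illustration, not details taken from the cited works.

```python
import numpy as np

def occlusion_attribution(model, image, target_class, patch=8, stride=8):
    """Perturbation-based attribution: slide a neutral patch over the image
    and record how much the target-class score drops at each position."""
    h, w = image.shape[:2]
    base_score = model(image)[target_class]
    ys = range(0, h - patch + 1, stride)
    xs = range(0, w - patch + 1, stride)
    heatmap = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # neutral fill
            heatmap[i, j] = base_score - model(occluded)[target_class]
    return heatmap  # large values mark regions the decision depends on

# Toy stand-in for a trained classifier (a hypothetical for this sketch):
# class 0 responds to the mean brightness of the top-left quadrant.
def toy_model(img):
    return np.array([img[:16, :16].mean(), img[16:, 16:].mean()])

image = np.random.default_rng(0).random((32, 32))
print(occlusion_attribution(toy_model, image, target_class=0).round(3))
```

Backpropagation-based methods such as the gradient saliency maps of [11] answer the same question analytically, through the gradient of the class score with respect to the input, rather than by repeated forward passes.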
This Special Issue, presented by MDPI’s Remote Sensing journal, delves into the intersection of XAI and big data, highlighting the latest research and developments in this critical field. The articles in this issue explore a multitude of topics, including XAI techniques tailored to the analysis of large datasets, applications of XAI in remote sensing, and discussions of the mechanisms through which AI takes effect in big data contexts.

2. An Overview of Published Articles

The articles in this Special Issue mainly explore the performance and interpretability of current deep-learning-based data-processing algorithms across remote sensing (RS) image segmentation, RS image classification, RS image enhancement, and related tasks.
The articles by Hasanpour (contribution 6), Xia (contribution 9), and Fang (contribution 10) all address semantic segmentation. In contribution 6, the effectiveness of attention mechanisms (AMs) for improving building segmentation in remote sensing big data with convolutional neural network (CNN) backbones is explored, showing that AMs significantly improve quantitative metrics and that the visualized attention aligns with those metrics. In contribution 9, building on the idea of feature-driven classification, a landslide extraction model, the fully convolutional spectral–topographic fusion network (FSTF-Net), is proposed; it fuses multi-source data, taking topographic factors (slope and aspect) and the normalized difference vegetation index (NDVI) as inputs for training, and exhibits high reliability and robustness. In contribution 10, the authors introduce conditional co-training (CCT), a novel approach for unsupervised remote sensing image segmentation in coastal areas that addresses the challenge of limited pre-labeled training data; it offers enhanced explainability and transparency compared to traditional supervised methods and demonstrates strong performance and efficiency on real-world coastal remote sensing datasets.
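Contribution 6 compares several attention mechanisms; as a generic illustration of the channel-attention idea (not the authors’ exact modules), a squeeze-and-excitation style block can be sketched in a few lines of PyTorch:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: a generic sketch of
    the kind of attention mechanism studied in contribution 6, not its code."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                      # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)  # excite: per-channel weights
        return x * w                                # reweight the feature maps

feats = torch.randn(2, 64, 32, 32)   # a batch of backbone feature maps
out = ChannelAttention(64)(feats)    # same shape, channels reweighted
```

Because the per-channel weights are explicit, they can be visualized and compared against segmentation metrics, which is the kind of XAI analysis contribution 6 performs.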
The articles by Papadopoulou (contribution 1), Chen (contribution 2), Guo (contribution 4), and Feng (contribution 7) relate to the classification problem. In contribution 1, temporal convolutional neural networks (CNNs) and recurrent and 2D convolutional neural networks (R-CNNs) are compared to the well-established random forest (RF) machine learning algorithm; the comparison assesses the uncertainty of the classification results using an entropy measure as well as the spatial distribution of the classification errors. In contribution 2, the authors introduce an interactive, open-source Python tool that enhances land cover mapping and monitoring using Google Earth Engine, emphasizing the importance of explainable AI (XAI) for improving model performance and enabling globally accessible and free applications of remote sensing technologies. In contribution 4, to improve ship classification accuracy and training efficiency, the authors propose a CNN-based ship classification method built on a new convolutional module, the Inception-Residual Controller (IRC) module; a network constructed from IRC modules extracts image features and establishes a ship classification model that is 3% more accurate than the traditional network model. In contribution 7, the authors introduce a bidirectional flow decision tree (BFDT) module that is combined with CNNs to create a reliable and interpretable framework for remote sensing image scene classification, addressing the lack of explainability and trustworthiness in deep-learning-based approaches while achieving good generalization and providing correctable, trustworthy results.
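Contribution 1 quantifies classification uncertainty with an entropy measure. As a minimal sketch (our own simplification, not the authors’ exact formulation), per-pixel Shannon entropy can be computed from the softmax class probabilities:

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Normalized Shannon entropy of per-pixel class probabilities.
    probs: array of shape (H, W, n_classes), rows summing to 1."""
    h = -(probs * np.log(probs + eps)).sum(axis=-1)
    return h / np.log(probs.shape[-1])  # 1.0 = maximally uncertain pixel

# A confident pixel vs. an ambiguous one (3 classes):
probs = np.array([[[0.98, 0.01, 0.01], [0.40, 0.35, 0.25]]])
print(prediction_entropy(probs).round(2))  # ~[[0.1  0.98]]
```

Mapping this value per pixel highlights where a classifier is unsure, which complements the spatial analysis of classification errors performed in contribution 1.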
The articles by Alizadeh (contribution 3) and Wang (contribution 5) can be interpreted as expanded applications of AI to inverse problems. In contribution 3, the authors present an unsupervised feature extraction method for reducing the dimensionality of hyperspectral images, similar in spirit to [11], which combines endmember extraction with spectral band clustering. In contribution 5, a novel GAN-based image generation model, CFM-GAN, is proposed to improve the image quality of crucial components of high-voltage transmission lines captured by UAVs, achieving high-resolution images with rich semantic details and outperforming existing models.
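The band-clustering half of the pipeline in contribution 3 can be approximated with an off-the-shelf clusterer. The sketch below (a simplified stand-in under our own assumptions, omitting the endmember-extraction step entirely) groups spectrally similar bands with k-means and keeps the band nearest each centroid:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_select_bands(cube, n_bands=8, seed=0):
    """Group similar spectral bands and keep one representative per group.
    cube: hyperspectral image of shape (H, W, B)."""
    h, w, b = cube.shape
    band_vectors = cube.reshape(-1, b).T  # one row per spectral band
    km = KMeans(n_clusters=n_bands, n_init=10, random_state=seed).fit(band_vectors)
    selected = []
    for k in range(n_bands):
        members = np.where(km.labels_ == k)[0]
        # keep the member band closest to its cluster centroid
        dists = np.linalg.norm(band_vectors[members] - km.cluster_centers_[k], axis=1)
        selected.append(int(members[dists.argmin()]))
    return sorted(selected)

cube = np.random.default_rng(0).random((16, 16, 64))  # toy 64-band cube
print(cluster_select_bands(cube))  # indices of 8 representative bands
```

Keeping physical bands rather than abstract projections preserves the interpretability of the reduced feature set, one reason band selection is attractive from an XAI perspective.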
In addition, contribution 8 is an expanded application of AI for disease prediction. It presents a tree-based interpretable machine learning model that adopts the bagging concept for robust COVID-19 case/death predictions.

3. Conclusions

In conclusion, we believe that explainable AI (XAI) and remote sensing big data are highly complementary in the development of global Earth observation technology. XAI provides a means to understand and interpret the decision-making processes of AI models, while remote sensing data provide rich and diverse sources of information about the Earth’s environment and human activities. Although much of the XAI research in remote sensing currently concerns the decision attribution of deep learning, we believe that research on decision logic and on explanations of the intrinsic structure of deep learning models will become more important in the future. Especially in the era of remote sensing big data, important future trends revolve around combining XAI with geoscience knowledge graphs, developing deep learning capable of reasoning, and ultimately realizing the goal of trusted artificial intelligence in remote sensing. We hope this Special Issue helps unlock the potential of XAI in remote sensing big data.

Funding

This research was supported by the National Natural Science Foundation of China (Nos. 41971397 and 42071413) and China RSGS Development Funding.

Acknowledgments

We acknowledge all the reviewers and authors who contributed to this Special Issue.

Conflicts of Interest

The authors declare no conflict of interest.

List of Contributions

  1. Papadopoulou, E.; Mallinis, G.; Siachalou, S.; Koutsias, N.; Thanopoulos, A.C.; Tsaklidis, G. Agricultural Land Cover Mapping through Two Deep Learning Models in the Framework of EU’s CAP Activities Using Sentinel-2 Multitemporal Imagery. Remote Sens. 2023, 15, 4657. https://doi.org/10.3390/rs15194657.
  2. Chen, H.; Yang, L.; Wu, Q. Enhancing Land Cover Mapping and Monitoring: An Interactive and Explainable Machine Learning Approach Using Google Earth Engine. Remote Sens. 2023, 15, 4585. https://doi.org/10.3390/rs15184585.
  3. Alizadeh Moghaddam, S.H.; Gazor, S.; Karami, F.; Amani, M.; Jin, S. An Unsupervised Feature Extraction Using Endmember Extraction and Clustering Algorithms for Dimension Reduction of Hyperspectral Images. Remote Sens. 2023, 15, 3855. https://doi.org/10.3390/rs15153855.
  4. Guo, H.; Ren, L. A Marine Small-Targets Classification Algorithm Based on Improved Convolutional Neural Networks. Remote Sens. 2023, 15, 2917. https://doi.org/10.3390/rs15112917.
  5. Wang, J.; Li, Y.; Chen, W. UAV Aerial Image Generation of Crucial Components of High-Voltage Transmission Lines Based on Multi-Level Generative Adversarial Network. Remote Sens. 2023, 15, 1412. https://doi.org/10.3390/rs15051412.
  6. Hasanpour Zaryabi, E.; Moradi, L.; Kalantar, B.; Ueda, N.; Halin, A.A. Unboxing the Black Box of Attention Mechanisms in Remote Sensing Big Data Using XAI. Remote Sens. 2022, 14, 6254. https://doi.org/10.3390/rs14246254.
  7. Feng, J.; Wang, D.; Gu, Z. Bidirectional Flow Decision Tree for Reliable Remote Sensing Image Scene Classification. Remote Sens. 2022, 14, 3943. https://doi.org/10.3390/rs14163943.
  8. Temenos, A.; Tzortzis, I.N.; Kaselimi, M.; Rallis, I.; Doulamis, A.; Doulamis, N. Novel Insights in Spatial Epidemiology Utilizing Explainable AI (XAI) and Remote Sensing. Remote Sens. 2022, 14, 3074. https://doi.org/10.3390/rs14133074.
  9. Xia, W.; Chen, J.; Liu, J.; Ma, C.; Liu, W. Landslide Extraction from High-Resolution Remote Sensing Imagery Using Fully Convolutional Spectral–Topographic Fusion Network. Remote Sens. 2021, 13, 5116. https://doi.org/10.3390/rs13245116.
  10. Fang, B.; Chen, G.; Chen, J.; Ouyang, G.; Kou, R.; Wang, L. CCT: Conditional Co-Training for Truly Unsupervised Remote Sensing Image Segmentation in Coastal Areas. Remote Sens. 2021, 13, 3521. https://doi.org/10.3390/rs13173521.

References

  1. Gunning, D.; Aha, D. DARPA’s explainable artificial intelligence (XAI) program. AI Mag. 2019, 40, 44–58. [Google Scholar]
  2. Zhang, L.; Zhang, L. Artificial intelligence for remote sensing data analysis: A review of challenges and opportunities. IEEE Geosci. Remote Sens. Mag. 2022, 10, 270–294. [Google Scholar] [CrossRef]
  3. Adadi, A.; Berrada, M. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 2018, 6, 52138–52160. [Google Scholar] [CrossRef]
  4. Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A review of machine learning interpretability methods. Entropy 2020, 23, 18. [Google Scholar] [CrossRef]
  5. Wu, M.; Parbhoo, S.; Hughes, M.; Kindle, R.; Celi, L.; Zazzi, M.; Roth, V.; Doshi-Velez, F. Regional tree regularization for interpretability in deep neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 6413–6421. [Google Scholar]
  6. Zilke, J.R.; Loza Mencía, E.; Janssen, F. DeepRED: Rule extraction from deep neural networks. In Proceedings of the Discovery Science: 19th International Conference, DS 2016, Bari, Italy, 19–21 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 457–473. [Google Scholar]
  7. Wang, X.; Wang, D.; Xu, C.; He, X.; Cao, Y.; Chua, T.S. Explainable reasoning over knowledge graphs for recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 5329–5336. [Google Scholar]
  8. Ai, Q.; Azizi, V.; Chen, X.; Zhang, Y. Learning heterogeneous knowledge base embeddings for explainable recommendation. Algorithms 2018, 11, 137. [Google Scholar] [CrossRef]
  9. Zintgraf, L.M.; Cohen, T.S.; Adel, T.; Welling, M. Visualizing deep neural network decisions: Prediction difference analysis. arXiv 2017, arXiv:1702.04595. [Google Scholar]
  10. Ghorbani, A.; Wexler, J.; Zou, J.Y.; Kim, B. Towards automatic concept-based explanations. Adv. Neural Inf. Process. Syst. 2019, 32, 1–18. [Google Scholar]
  11. Zhao, L.; Zeng, Y.; Liu, P.; Su, X. Band selection with the explanatory gradient saliency maps of convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 2020, 17, 2105–2109. [Google Scholar] [CrossRef]
  12. Kraus, S.; Azaria, A.; Fiosina, J.; Greve, M.; Hazon, N.; Kolbe, L.; Lembcke, T.B.; Muller, J.P.; Schleibaum, S.; Vollrath, M. AI for explaining decisions in multi-agent environments. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 13534–13538. [Google Scholar]
  13. Bau, D.; Zhu, J.Y.; Strobelt, H.; Lapedriza, A.; Zhou, B.; Torralba, A. Understanding the role of individual units in a deep neural network. Proc. Natl. Acad. Sci. USA 2020, 117, 30071–30078. [Google Scholar] [CrossRef] [PubMed]
  14. Huang, X.; Sun, Y.; Feng, S.; Ye, Y.; Li, X. Better Visual Interpretation for Remote Sensing Scene Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  15. Ishikawa, S.-n.; Todo, M.; Taki, M.; Uchiyama, Y.; Matsunaga, K.; Lin, P.; Ogihara, T.; Yasui, M. Example-based explainable AI and its application for remote sensing image classification. Int. J. Appl. Earth Obs. Geoinf. 2023, 118, 103215. [Google Scholar] [CrossRef]
  16. Temenos, A.; Temenos, N.; Kaselimi, M.; Doulamis, A.; Doulamis, N. Interpretable Deep Learning Framework for Land Use and Land Cover Classification in Remote Sensing Using SHAP. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [Google Scholar] [CrossRef]