Artificial Intelligence and Earth Observation in Support of the UN Sustainable Development Goals

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 27233

Special Issue Editors


Dr. Tri Dev Acharya
Guest Editor
Institute of Industrial Technology, Kangwon National University, Chuncheon 24341, Korea
Interests: machine learning; land cover; surface water; landslides; remote sensing; Landsat

Dr. Dong Ha Lee
Guest Editor
Department of Civil Engineering, Kangwon National University, Chuncheon 24341, Republic of Korea
Interests: geodesy; remote sensing; GIS; disaster management; sustainable development goals

Dr. Myeong-Hun Jeong
Guest Editor
Department of Civil Engineering, Chosun University, Gwangju 61452, Korea
Interests: GeoAI; geospatial data science; GIScience; topological data analysis; uncertainty with visualization

Dr. Jaewan Choi
Guest Editor
School of Civil Engineering, Chungbuk National University, Cheongju 28644, Korea
Interests: remote sensing; deep learning; pansharpening; image fusion; change detection

Special Issue Information

Dear Colleagues,

In 2015, the United Nations (UN) adopted “Transforming Our World: The 2030 Agenda for Sustainable Development (Agenda 2030)” at the UN Sustainable Development Summit to address pressing global problems. The Agenda sets out 17 Sustainable Development Goals (SDGs, or Global Goals) to overcome the world’s challenges, including poverty, inequality, and the effects of climate change. Within the SDG framework, data produced by geospatial technologies hold tremendous potential to improve social, economic, and environmental sustainability effectively and efficiently.

With recent advancements in science and technology, devices are becoming more compact while gaining capability. Computing platforms now deliver high performance, and artificial intelligence (AI) has emerged and been applied across many fields. Similarly, Earth observation (EO) has seen frequent launches of satellites delivering images of higher spectral, radiometric, and spatial resolution. Combining EO data with in situ measurements and applying AI produces reliable geospatial information, which is essential for policymaking, programming, and project operations in sustainable development.

Considering these advances, this Special Issue invites manuscripts that present innovative methods and solutions using AI and EO to benefit society through the achievement of the SDGs. There are no constraints on the field of application; however, we particularly welcome contributions that describe methods and ongoing research applying AI and EO to information extraction, monitoring, and implementation strategies, as well as emerging challenges and future directions.

Related References:

  1. Acharya, T.D.; Lee, D.H. Remote Sensing and Geospatial Technologies for Sustainable Development: A Review of Applications. Sens. Mater. 2019, 31, 3931–3945. https://doi.org/10.18494/SAM.2019.2706
  2. Paganini, M.; Petiteville, I.; Ward, S.; Dyke, G.; Steventon, M.; Harry, J.; Kerblat, F. Satellite Earth Observations in Support of the Sustainable Development Goals. In The CEOS Earth Observation Handbook, Special 2018 ed.; CEOS-ESA: Paris, France, 2018. http://eohandbook.com/sdg/
  3. Available online: http://eo4sdg.org/ (accessed on 15 April 2020).

Dr. Tri Dev Acharya
Dr. Dong Ha Lee
Dr. Myeong-Hun Jeong
Dr. Jaewan Choi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Classification
  • Object detection
  • Pattern recognition
  • Artificial intelligence
  • Machine/deep learning
  • GeoAI
  • Geospatial data science
  • Geospatial data analysis
  • Earth observation
  • Monitoring change
  • Time series analysis
  • Visualization
  • Mapping
  • Sustainable development goals
  • SDG targets/indicators
  • 2030 Agenda

Published Papers (7 papers)


Research

19 pages, 5267 KiB  
Article
A Cloud-Based Mapping Approach Using Deep Learning and Very-High Spatial Resolution Earth Observation Data to Facilitate the SDG 11.7.1 Indicator Computation
by Natalia Verde, Petros Patias and Giorgos Mallinis
Remote Sens. 2022, 14(4), 1011; https://doi.org/10.3390/rs14041011 - 18 Feb 2022
Cited by 4 | Viewed by 2838
Abstract
As urbanized areas continue to expand rapidly across all continents, the United Nations adopted in 2015 the Sustainable Development Goal (SDG) 11, aimed at shaping a sustainable future for city dwellers. Earth Observation (EO) satellite data can provide, at a fine scale, essential urban land use information for computing SDG 11 indicators in order to complement or even replace inaccurate or invalid existing spatial datasets. This study proposes an EO-based approach for extracting large-scale information on urban open spaces (UOS) and land allocated to streets (LAS) at the city level for calculating SDG indicator 11.7.1. The research workflow was developed over the Athens metropolitan area in Greece using deep learning classification models to process PlanetScope and Sentinel-1 imagery, employing freely available cloud environments offered by Google. The LAS model exhibited satisfactory results, while the best experiment for mapping UOS, considering both PlanetScope and Sentinel-1 data, yielded high commission errors; however, the cross-validation analysis against the UOS area of OpenStreetMap exhibited a total overlap of 67.38%, suggesting that our workflow is suitable for creating a “potential” UOS layer. The methodology developed herein can serve as a roadmap for the calculation of indicator 11.7.1 by national statistical offices when spatial data are absent or unreliable.
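For readers unfamiliar with the indicator, SDG 11.7.1 is commonly estimated as the share of the built-up area taken up by open public space and land allocated to streets. The sketch below is a minimal illustration of that arithmetic on binary raster masks; the mask names and random data are assumptions for illustration, not the authors' cloud-based workflow.

```python
# Minimal sketch (not the paper's pipeline): estimate SDG indicator 11.7.1 from
# three boolean rasters on the same grid -- built-up area, urban open spaces (UOS),
# and land allocated to streets (LAS).
import numpy as np

def sdg_11_7_1_share(built_up: np.ndarray, uos: np.ndarray, las: np.ndarray) -> float:
    """Return the percentage of the built-up area that is open space or street."""
    built_up_area = built_up.sum()
    if built_up_area == 0:
        raise ValueError("Built-up mask is empty.")
    # Count open-space and street pixels that fall inside the built-up footprint.
    public_area = ((uos | las) & built_up).sum()
    return 100.0 * public_area / built_up_area

# Example with random masks; real masks would come from the classification outputs.
rng = np.random.default_rng(0)
built = rng.random((512, 512)) > 0.3
uos = rng.random((512, 512)) > 0.9
las = rng.random((512, 512)) > 0.85
print(f"SDG 11.7.1 share: {sdg_11_7_1_share(built, uos, las):.1f}%")
```

In practice the UOS and LAS masks would be the deep learning classification results described in the paper, and pixel counts would be converted to areas using the raster's cell size.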

19 pages, 3913 KiB  
Article
Gap-Filling Eddy Covariance Latent Heat Flux: Inter-Comparison of Four Machine Learning Model Predictions and Uncertainties in Forest Ecosystem
by Muhammad Sarfraz Khan, Seung Bae Jeon and Myeong-Hun Jeong
Remote Sens. 2021, 13(24), 4976; https://doi.org/10.3390/rs13244976 - 07 Dec 2021
Cited by 9 | Viewed by 2930
Abstract
Environmental monitoring using satellite remote sensing is challenging because of data gaps in eddy-covariance (EC)-based in situ flux tower observations. In this study, we obtain the latent heat flux (LE) from an EC station and perform gap filling using two deep learning methods (a two-dimensional convolutional neural network (CNN) and a long short-term memory (LSTM) neural network) and two machine learning (ML) models (a support vector machine (SVM) and a random forest (RF)), and we investigate their accuracies and uncertainties. The average model performance based on ~25 input and hysteresis combinations shows that the mean absolute error is in an acceptable range (34.9 to 38.5 W m−2), indicating a marginal difference among the performances of the four models. The model performance is ranked in the following order: SVM > CNN > RF > LSTM. A robust analysis of variance and post hoc tests yielded statistically insignificant results (p-values ranging from 0.28 to 0.76), indicating that the distribution of means is equal within groups and among pairs, thereby implying similar performances among the four models. The time-series analysis and Taylor diagram indicate that the improved two-dimensional CNN captures the temporal trend of LE best, i.e., with a Pearson’s correlation of >0.87 and a normalized standard deviation of ~0.86, similar to those of the in situ datasets, demonstrating its superiority over the other models. The factor elimination analysis reveals that the CNN performs better when specific meteorological factors are removed from the training stage. Additionally, a strong coupling between the hysteresis time factor and the accuracy of the ML models is observed.
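As a minimal sketch of the gap-filling idea, the snippet below fills missing LE values with a random forest, one of the four model families compared in the paper; the predictor names (Rn, Ta, VPD, WS) and the synthetic half-hourly record are assumptions for illustration, not the authors' tower data.

```python
# Illustrative gap filling of a latent heat flux (LE) series with a random forest
# driven by meteorological predictors.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def gap_fill_le(df: pd.DataFrame, predictors: list[str]) -> pd.Series:
    """Fill NaN gaps in df['LE'] using rows where LE was observed."""
    observed = df["LE"].notna()
    model = RandomForestRegressor(n_estimators=300, random_state=42)
    model.fit(df.loc[observed, predictors], df.loc[observed, "LE"])
    filled = df["LE"].copy()
    gaps = ~observed
    filled[gaps] = model.predict(df.loc[gaps, predictors])
    return filled

# Hypothetical record: net radiation, air temperature, vapour pressure deficit, wind speed.
n = 1000
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "Rn": rng.normal(200, 80, n),
    "Ta": rng.normal(15, 5, n),
    "VPD": rng.normal(1.0, 0.4, n),
    "WS": rng.normal(2.0, 0.7, n),
})
df["LE"] = 0.6 * df["Rn"] + 10 * df["VPD"] + rng.normal(0, 20, n)
df.loc[rng.choice(n, 150, replace=False), "LE"] = np.nan  # artificial gaps
df["LE_filled"] = gap_fill_le(df, ["Rn", "Ta", "VPD", "WS"])
```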

11 pages, 3703 KiB  
Article
Comparative Evaluation of Mapping Accuracy between UAV Video versus Photo Mosaic for the Scattered Urban Photovoltaic Panel
by Young-Seok Hwang, Stephan Schlüter, Seong-Il Park and Jung-Sup Um
Remote Sens. 2021, 13(14), 2745; https://doi.org/10.3390/rs13142745 - 13 Jul 2021
Cited by 9 | Viewed by 2291
Abstract
It is common practice for unmanned aerial vehicle (UAV) flight planning to target the entire area surrounding a single rooftop’s photovoltaic panels when investigating solar-powered roofs, which account for only 1% of the urban roof area. It is very hard for a pre-flight autopilot route set for a specific area (rather than a single rooftop) to capture still images of a single rooftop’s photovoltaic panels with high overlap rates. This causes serious, unnecessary data redundancy by including the surrounding area, because the UAV is unable to focus on the photovoltaic panels installed on the single rooftop. The aim of this research was to examine the suitability of a UAV video stream for building 3-D ortho-mosaics focused on a single rooftop and containing the azimuth, aspect, and tilts of photovoltaic panels. The 3-D positional accuracy of the video stream-based ortho-mosaic was shown to be similar to that of the autopilot-based ortho-photo, satisfying the mapping accuracy of the American Society for Photogrammetry and Remote Sensing (ASPRS): 3-D coordinates (0.028 m) at a 1:217 mapping scale. It is anticipated that this research output could serve as a valuable reference for employing video stream-based ortho-mosaics for widely scattered single-rooftop solar panels in urban settings.
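A video-based workflow of this kind implies sampling frames from the UAV video at a fixed interval before feeding them to a structure-from-motion tool. The sketch below shows that single step only; the frame interval, paths, and function name are illustrative assumptions, not the authors' processing chain.

```python
# Pull every n-th frame from a UAV video so the frames can be mosaicked later.
from pathlib import Path
import cv2

def extract_frames(video_path: str, out_dir: str, every_n: int = 15) -> int:
    """Save every n-th frame as a JPEG; return the number of frames written."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    index, written = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:  # the interval controls image overlap
            cv2.imwrite(f"{out_dir}/frame_{written:05d}.jpg", frame)
            written += 1
        index += 1
    cap.release()
    return written

# e.g. extract_frames("rooftop.mp4", "frames", every_n=15)
```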

19 pages, 9148 KiB  
Article
Feasibility Analyses of Real-Time Detection of Wildlife Using UAV-Derived Thermal and RGB Images
by Seunghyeon Lee, Youngkeun Song and Sung-Ho Kil
Remote Sens. 2021, 13(11), 2169; https://doi.org/10.3390/rs13112169 - 01 Jun 2021
Cited by 19 | Viewed by 4263
Abstract
Wildlife monitoring is carried out for diverse reasons, and monitoring methods have gradually advanced through technological development. Direct field investigations have been replaced by remote monitoring methods, and unmanned aerial vehicles (UAVs) have recently become the most important tool for wildlife monitoring. Many previous studies on detecting wild animals have used RGB images acquired from UAVs, with most of the analyses depending on machine learning–deep learning (ML–DL) methods. These methods provide relatively accurate results, and when thermal sensors are used as a supplement, even more accurate detection results can be obtained through complementation with RGB images. However, because most previous analyses were based on ML–DL methods, considerable time was required to generate training data and train detection models. This drawback makes ML–DL methods unsuitable for real-time detection in the field. To compensate for the disadvantages of the previous methods, this paper proposes a real-time animal detection method that generates a total of six applicable input images, depending on the context, and uses them for detection. The proposed method is based on the Sobel edge algorithm, which is simple but can detect edges quickly from local intensity changes. The method can detect animals in a single image without training data. The fastest detection time per image was 0.033 s, and all frames of a thermal video could be analyzed. Furthermore, because of the synchronization of the properties of the thermal and RGB images, the performance of the method was above average in comparison with previous studies. With target images acquired at heights below 100 m, the maximum detection precision and detection recall of the most accurate input image were 0.804 and 0.699, respectively. However, the low resolution of the thermal sensor and its shooting-height limitation were hindrances to wildlife detection. Future research will aim to develop a detection method that addresses these shortcomings.
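A rough sketch of the underlying idea, a training-free Sobel edge pass over a single thermal frame, is shown below; the thresholds, kernel sizes, and morphology step are illustrative assumptions rather than the paper's exact parameters.

```python
# Detect warm animal candidates in one thermal frame using Sobel edge magnitude,
# with no training data involved.
import cv2
import numpy as np

def detect_candidates(thermal: np.ndarray, edge_thresh: float = 60.0,
                      min_area: int = 20) -> list[tuple[int, int, int, int]]:
    """Return bounding boxes (x, y, w, h) of blobs with strong thermal edges."""
    blurred = cv2.GaussianBlur(thermal, (5, 5), 0)
    gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    mask = (magnitude > edge_thresh).astype(np.uint8) * 255
    # Close small gaps so an animal's outline becomes one connected blob.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# e.g. boxes = detect_candidates(cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE))
```

Because this is a per-frame filter with fixed parameters, it runs fast enough for real-time use, which is the property the paper exploits.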

23 pages, 3680 KiB  
Article
Secondary Precipitation Estimate Merging Using Machine Learning: Development and Evaluation over Krishna River Basin, India
by Venkatesh Kolluru, Srinivas Kolluru, Nimisha Wagle and Tri Dev Acharya
Remote Sens. 2020, 12(18), 3013; https://doi.org/10.3390/rs12183013 - 16 Sep 2020
Cited by 22 | Viewed by 5276
Abstract
This study proposes Secondary Precipitation Estimate Merging using Machine Learning (SPEM2L) algorithms for merging multiple global precipitation datasets to improve spatiotemporal rainfall characterization. SPEM2L is applied over the Krishna River Basin (KRB), India, for 34 years spanning 1985 to 2018, using daily measurements from three Secondary Precipitation Products (SPPs). Sixteen Machine Learning Algorithms (MLAs) were applied to the three SPPs in four combinations to integrate them and test the performance of the MLAs in accurately representing rainfall patterns. The individual SPPs and the integrated products were validated against a gauge-based gridded dataset provided by the Indian Meteorological Department. The validation was carried out at different temporal scales and in various climatic zones using continuous and categorical statistics. The Multilayer Perceptron Neural Network with Bayesian Regularization (NBR) algorithm integrating all three SPPs outperformed all other Machine Learning Models (MLMs) and the two-dataset integration combinations. The merged NBR product exhibited improvements in terms of continuous and categorical statistics at all temporal scales as well as in all climatic zones. Our results indicate that the SPEM2L procedure could be successfully used in any other region or basin that has a poor gauging network or where a single precipitation product performs ineffectively.
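The sketch below illustrates the general merging idea: regress a multilayer perceptron on three precipitation products against gauge rainfall. scikit-learn offers no Bayesian-regularization training, so an L2-regularized MLPRegressor stands in here as an approximation, not the authors' NBR model, and the data are synthetic.

```python
# Merge three secondary precipitation products (SPPs) into one estimate by
# training an MLP against gauge-based rainfall.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
# Hypothetical daily samples: each column is one satellite/reanalysis product (mm/day).
spps = rng.gamma(shape=2.0, scale=3.0, size=(n, 3))
gauge = 0.4 * spps[:, 0] + 0.35 * spps[:, 1] + 0.25 * spps[:, 2] + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(spps, gauge, test_size=0.3,
                                                    random_state=7)
model = MLPRegressor(hidden_layer_sizes=(32, 16), alpha=1e-3, max_iter=2000,
                     random_state=7)
model.fit(X_train, y_train)
print("R^2 on held-out days:", round(model.score(X_test, y_test), 3))
```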

20 pages, 9950 KiB  
Article
Object-Based Building Change Detection by Fusing Pixel-Level Change Detection Results Generated from Morphological Building Index
by Aisha Javed, Sejung Jung, Won Hee Lee and Youkyung Han
Remote Sens. 2020, 12(18), 2952; https://doi.org/10.3390/rs12182952 - 11 Sep 2020
Cited by 11 | Viewed by 4293
Abstract
Change detection (CD) is an important tool in remote sensing. CD can be categorized into pixel-based change detection (PBCD) and object-based change detection (OBCD). PBCD is traditionally used because of its simple and straightforward algorithms. However, with increasing interest in very-high-resolution (VHR) imagery and in detecting changes in small and complex objects such as buildings or roads, traditional methods have shown limitations, for example, a large number of false alarms or noise in the results. Thus, researchers have focused on extending PBCD to OBCD. In this study, we proposed a method for detecting newly built-up areas by extending PBCD results into an OBCD result through Dempster–Shafer (D–S) theory. To this end, the morphological building index (MBI) was used to extract built-up areas in multitemporal VHR imagery. Then, three PBCD algorithms, change vector analysis, principal component analysis, and iteratively reweighted multivariate alteration detection, were applied to the MBI images. For the final CD result, the three binary change images were fused with the segmented image using D–S theory. The results obtained from the proposed method were compared with those of PBCD, OBCD, and OBCD results generated by fusing the three binary change images using majority voting. Based on the accuracy assessment, the proposed method produced the highest F1-score and kappa values compared with the other CD results. The proposed method can be used for detecting new buildings in built-up areas as well as changes related to demolished buildings, with a low rate of false alarms and missed detections compared with other existing CD methods.
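For readers unfamiliar with the fusion step, the sketch below implements Dempster's rule of combination over the binary frame {change, no change}; the mass values assigned to the three detectors are illustrative placeholders, not numbers from the paper.

```python
# Dempster's rule of combination for two mass functions over {change, no change},
# with 'U' carrying the mass assigned to the full frame (uncertainty).
def combine(m1: dict, m2: dict) -> dict:
    """Each mass dict has keys 'C' (change), 'N' (no change), 'U' (uncertain)."""
    conflict = m1["C"] * m2["N"] + m1["N"] * m2["C"]  # mass on empty intersections
    k = 1.0 - conflict
    if k == 0:
        raise ValueError("Totally conflicting evidence.")
    return {
        "C": (m1["C"] * m2["C"] + m1["C"] * m2["U"] + m1["U"] * m2["C"]) / k,
        "N": (m1["N"] * m2["N"] + m1["N"] * m2["U"] + m1["U"] * m2["N"]) / k,
        "U": (m1["U"] * m2["U"]) / k,
    }

# Fuse evidence from three change detectors (e.g. CVA, PCA, IR-MAD) for one object.
cva = {"C": 0.6, "N": 0.2, "U": 0.2}
pca = {"C": 0.5, "N": 0.3, "U": 0.2}
irmad = {"C": 0.7, "N": 0.1, "U": 0.2}
fused = combine(combine(cva, pca), irmad)
label = "change" if fused["C"] > fused["N"] else "no change"
print(fused, "->", label)
```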

28 pages, 13829 KiB  
Article
A Double Epipolar Resampling Approach to Reliable Conjugate Point Extraction for Accurate Kompsat-3/3A Stereo Data Processing
by Jaehong Oh and Youkyung Han
Remote Sens. 2020, 12(18), 2940; https://doi.org/10.3390/rs12182940 - 10 Sep 2020
Cited by 14 | Viewed by 2983
Abstract
Kompsat-3/3A provides along-track and across-track stereo data for accurate three-dimensional (3D) topographic mapping. Stereo data preprocessing involves conjugate point extraction, acquisition of ground control points (GCPs), rational polynomial coefficient (RPC) bias compensation, and epipolar image resampling. Applications where absolute positional accuracy is not a top priority do not require GCPs, but they do require precise conjugate points from the stereo images for subsequent RPC bias compensation, i.e., relative orientation. Conjugate points are extracted from the original stereo data using image-matching methods followed by a proper outlier removal process. Inaccurate matching results and potential outliers produce geometric inconsistency in the stereo data; hence, the reliability of conjugate point extraction must be improved. For this purpose, we proposed applying coarse epipolar resampling using the raw RPCs before conjugate point matching. We expect epipolar images, even those based on inaccurate RPCs, to show better stereo similarity than the original images, enabling better conjugate point extraction. To this end, we carried out a quantitative analysis of conjugate point extraction performance by comparing the proposed approach using coarsely resampled epipolar images with the traditional approach using the original stereo images. We tested along-track Kompsat-3 stereo and across-track Kompsat-3A stereo data with four well-known image-matching methods: phase correlation (PC), mutual information (MI), speeded up robust features (SURF), and the Harris detector combined with the fast retina keypoint (FREAK) descriptor (i.e., Harris). These matching methods were applied to the original stereo images and the coarsely resampled epipolar images, and the conjugate point extraction performance was investigated. Experimental results showed that the coarse epipolar image approach was very helpful for accurate conjugate point extraction, realizing highly accurate RPC refinement and sub-pixel y-parallax through fine epipolar image resampling, which was not achievable with the traditional approach. MI and PC provided the most stable results for both along-track and across-track test data with patch sizes larger than 400 pixels.
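As a small illustration of one of the compared matchers, the sketch below estimates the offset between two image patches with phase correlation using scikit-image; the synthetic patches, shift, and parameters are assumptions, not the authors' Kompsat processing.

```python
# Phase correlation (PC) between a reference patch and a shifted patch.
import numpy as np
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(3)
left = rng.random((400, 400)).astype(np.float32)
# Simulate the second epipolar patch as the first one shifted by (dy=2, dx=5).
right = np.roll(np.roll(left, 2, axis=0), 5, axis=1)

shift, error, _ = phase_cross_correlation(left, right, upsample_factor=10)
# Magnitude is ~(2, 5); the sign depends on which patch is treated as the reference.
print("Estimated (row, col) shift:", shift)
```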
