
Deep Learning for Environmental Remote Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (26 July 2023) | Viewed by 12,329

Special Issue Editors


Guest Editor
IRD—National Institute of Research for Sustainable Development—UMR ESPACE-DEV, Montpellier, France
Interests: deep learning; environmental remote sensing; data cleaning and preparation; data fusion

Guest Editor
INRAE—National Research Institute for Agriculture, Food and Environment—UMR TETIS, Montpellier, France
Interests: data science applied to remote sensing data; satellite image time series analysis; multisensor data fusion; machine learning

Guest Editor
INRAE—National Research Institute for Agriculture, Food and Environment—UMR TETIS, Montpellier, France
Interests: signal and image processing; machine learning for remote sensing data; non-smooth optimization; interpretable deep learning

Special Issue Information

Dear Colleagues,

Machine learning (ML) applied to environmental remote sensing can help support many of the United Nations’ Sustainable Development Goals (SDGs), including life on land (SDG #15), life below water (SDG #14), water governance (SDG #6), zero hunger (SDG #2), and climate change mitigation as well as adaptation (SDG #13), to name a few. Combined with Earth observation and environmental science, data science and ML models can serve many different areas, with relevant applications in ecology, agriculture, forestry, climate modeling, and disaster responses. Recent successes in deep learning (DL) are providing effective tools with which to make advances in many of these domains; however, impactful research and deployment must be performed responsibly and with quantifiable impacts as well as actionable interpretation.

This Special Issue aims to gather cutting-edge contributions that use deep learning to analyze remote sensing data for environmental applications. Contributions are welcome in areas including, but not limited to, environmental studies, agroecology, agroforestry, water management, biodiversity assessment and restoration, forest disturbances, natural resource mapping, and disaster management, and may offer methodological contributions such as modeling, deep learning architecture search, remote sensing data engineering for DL, benchmarking, and open-access datasets. Studies reporting deep learning methods applied to (multitemporal) active or passive remote sensing data, LiDAR, airborne platforms, drones, and terrestrial vehicles are welcome, as are papers describing techniques that exploit multiple sources, such as multimodal and multiscale data fusion. Contributions may take various forms, including research papers, review papers, and comparative analyses.

Dr. Laure Berti-Equille
Dr. Dino Ienco
Dr. Cássio Fraga Dantas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning models
  • explainable deep learning models
  • multimodal and multiscale remote sensing data fusion
  • uncertainty quantification of deep learning in environmental and Earth observation applications
  • mapping, monitoring, and characterization of land cover changes with time series
  • robust parameters retrieval for forestry and agricultural applications
  • hybrid deep learning and physical models in environmental applications
  • physical interpretation of deep learning models
  • application of deep learning to environmental science, agroecology, agroforestry, water management, biodiversity assessment and restoration, forest disturbances, natural resource mapping, and disaster management using Earth observation data

Published Papers (6 papers)


Research


21 pages, 5622 KiB  
Article
Siamese Unet Network for Waterline Detection and Barrier Shape Change Analysis from Long-Term and Large Numbers of Satellite Imagery
by Hsien-Kuo Chang, Wei-Wei Chen, Jia-Si Jhang and Jin-Cheng Liou
Sensors 2023, 23(23), 9337; https://doi.org/10.3390/s23239337 - 22 Nov 2023
Viewed by 725
Abstract
Barrier islands are vital dynamic landforms that not only host ecological resources but often protect coastal ecosystems from storm damage. The Waisanding Barrier (WSDB) in Taiwan has suffered from continuous beach erosion in recent decades. In this study, we developed a SiamUnet network and compared it with three basic DeepUnet networks with different image sizes to effectively detect barrier waterlines from 207 high-resolution satellite images. The evolution of the barrier waterline shape is obtained to present two special morphologic changes at the southern end and the evolution of the entire waterline. The time periods of separation of the southern end from the main WSDB are determined and discussed. We also show that the southern L-shaped end has persisted from the end of 2017 until 2021. The length of the L-shaped end gradually decreases during the summer but gradually increases during the winter; it clearly exhibits a seasonal, jagged change. The attenuation rate of the land area is estimated at −0.344 km²/year. We also explore two factors that affect the analysis results: the number of valid images selected and the deviation threshold from the mean sea level.
(This article belongs to the Special Issue Deep Learning for Environmental Remote Sensing)
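The attenuation rate reported above is a trend over time. As a rough illustration (with made-up numbers, not the paper's data), such a rate can be estimated as the least-squares slope of land area against observation year:

```python
def area_trend(years, areas_km2):
    """Least-squares slope of land area over time, in km^2 per year.

    Illustrative sketch only; the input values below are hypothetical,
    not measurements from the WSDB study.
    """
    n = len(years)
    my, ma = sum(years) / n, sum(areas_km2) / n
    num = sum((y - my) * (a - ma) for y, a in zip(years, areas_km2))
    den = sum((y - my) ** 2 for y in years)
    return num / den

rate = area_trend([2015, 2016, 2017, 2018], [10.0, 9.7, 9.3, 9.0])
print(round(rate, 3))  # -0.34
```

A negative slope indicates net land loss, matching the sign convention of the reported −0.344 km²/year.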

22 pages, 9857 KiB  
Article
Evaluation of the Use of the 12 Bands vs. NDVI from Sentinel-2 Images for Crop Identification
by Adolfo Lozano-Tello, Guillermo Siesto, Marcos Fernández-Sellers and Andres Caballero-Mancera
Sensors 2023, 23(16), 7132; https://doi.org/10.3390/s23167132 - 11 Aug 2023
Cited by 1 | Viewed by 976
Abstract
Today, machine learning applied to remote sensing data is used for crop detection. This makes it possible not only to monitor crops but also to detect pests, a lack of irrigation, or other problems. For systems that require high accuracy in crop identification, a large amount of data is required to generate reliable models: the more plots and the more data on crop evolution over time, the more reliable the models. Here, a study has been carried out to analyse neural network models trained with the Sentinel-2 satellite's 12 bands, compared to models that use only the NDVI, in order to choose the most suitable model in terms of storage, calculation time, accuracy, and precision. This study achieved a training time gain of 59.35% for NDVI models compared with 12-band models; however, models based on 12-band values are 1.96% more accurate than those trained with the NDVI alone when making predictions. The findings could be of great interest to administrations, businesses, land managers, and researchers who use satellite image data mining techniques and wish to design an efficient system, particularly one with limited storage capacity and response times.
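For context, the NDVI compared against the full 12-band input is derived from just two Sentinel-2 bands: B8 (near-infrared) and B4 (red). A minimal sketch:

```python
def ndvi(nir, red, eps=1e-10):
    """Normalized Difference Vegetation Index.

    For Sentinel-2, nir is band B8 and red is band B4 (reflectances);
    eps guards against division by zero over very dark pixels.
    """
    return (nir - red) / (nir + red + eps)

# Dense vegetation reflects strongly in NIR and absorbs red light:
print(round(ndvi(0.45, 0.05), 3))  # 0.8
```

Collapsing 12 bands to this single index is what yields the storage and training-time savings the study quantifies, at a small cost in accuracy.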

30 pages, 35272 KiB  
Article
Graph Neural Network-Based Method of Spatiotemporal Land Cover Mapping Using Satellite Imagery
by Domen Kavran, Domen Mongus, Borut Žalik and Niko Lukač
Sensors 2023, 23(14), 6648; https://doi.org/10.3390/s23146648 - 24 Jul 2023
Cited by 4 | Viewed by 2106
Abstract
Multispectral satellite imagery offers a new perspective for spatial modelling, change detection and land cover classification. The increased demand for accurate classification of geographically diverse regions has led to advances in object-based methods. A novel spatiotemporal method is presented for object-based land cover classification of satellite imagery using a Graph Neural Network. This paper introduces an innovative representation of sequential satellite images as a directed graph, formed by connecting segmented land regions through time. The method's modular node classification pipeline uses a Convolutional Neural Network as a multispectral image feature extraction network and a Graph Neural Network as a node classification model. To evaluate the performance of the proposed method, we utilised EfficientNetV2-S for feature extraction and the GraphSAGE algorithm with Long Short-Term Memory aggregation for node classification. Applied to Sentinel-2 L2A imagery, the method produced complete 4-year intermonthly land cover classification maps for two regions: Graz in Austria, and the region of Portorož, Izola and Koper in Slovenia. The regions were classified with CORINE Land Cover classes. In the level 2 classification of the Graz region, the method outperformed the state-of-the-art UNet model, achieving an average F1-score of 0.841 and an accuracy of 0.831, as opposed to UNet's 0.824 and 0.818, respectively. Similarly, the method demonstrated superior performance over UNet in both regions under the level 1 classification, which contains fewer classes. Individual classes were classified with accuracies of up to 99.17%.
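The core representation idea can be sketched as follows. This is a hypothetical illustration, not the paper's code: each segmented region at time step t becomes a node (t, region_id), and a directed edge links a region to its matching region at t+1, producing the directed spatiotemporal graph the GNN classifies:

```python
def build_temporal_graph(regions_per_step):
    """Build a directed graph over segmented regions through time.

    regions_per_step: list of sets of region ids, one set per image in the
    sequence. Matching by shared id is a simplifying assumption; in practice
    matching would use spatial overlap between segmentations.
    """
    nodes, edges = set(), []
    for t, regions in enumerate(regions_per_step):
        for r in regions:
            nodes.add((t, r))
            if t > 0 and r in regions_per_step[t - 1]:
                edges.append(((t - 1, r), (t, r)))  # edge forward in time
    return nodes, edges

nodes, edges = build_temporal_graph([{"a", "b"}, {"a", "b", "c"}, {"a", "c"}])
print(len(nodes), len(edges))  # 7 4
```

Temporal edges let a node's classification draw on the same region's appearance in neighbouring months, which is what the LSTM aggregation in GraphSAGE exploits.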

12 pages, 6841 KiB  
Communication
PCRMLP: A Two-Stage Network for Point Cloud Registration in Urban Scenes
by Jingyang Liu, Yucheng Xu, Lu Zhou and Lei Sun
Sensors 2023, 23(12), 5758; https://doi.org/10.3390/s23125758 - 20 Jun 2023
Cited by 2 | Viewed by 1468
Abstract
Point cloud registration plays a crucial role in 3D mapping and localization. Urban scene point clouds pose significant challenges for registration due to their large data volume, similar scenarios, and dynamic objects. Estimating location from instances (buildings, traffic lights, etc.) in urban scenes is a more human-intuitive approach. In this paper, we propose PCRMLP (point cloud registration MLP), a novel model for urban scene point cloud registration that achieves registration performance comparable to prior learning-based methods. Compared to previous works that focus on extracting features and estimating correspondences, PCRMLP estimates the transformation implicitly from concrete instances. The key innovation lies in the instance-level urban scene representation, which leverages semantic segmentation and density-based spatial clustering of applications with noise (DBSCAN) to generate instance descriptors, enabling robust feature extraction, dynamic object filtering, and logical transformation estimation. A lightweight network consisting of Multilayer Perceptrons (MLPs) is then employed to obtain the transformation in an encoder–decoder manner. Experimental validation on the KITTI dataset demonstrates that PCRMLP achieves satisfactory coarse transformation estimates from instance descriptors within a remarkable time of 0.0028 s. With the incorporation of an ICP refinement module, our proposed method outperforms prior learning-based approaches, yielding a rotation error of 2.01° and a translation error of 1.58 m. The experimental results highlight PCRMLP's potential for coarse registration of urban scene point clouds, paving the way for its application in instance-level semantic mapping and localization.
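The instance-descriptor step can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: assuming semantic labels and DBSCAN cluster labels are already computed for each point, each instance is summarized by a compact descriptor, and DBSCAN's noise label (-1) is dropped, which also helps filter dynamic or unclustered points:

```python
from collections import defaultdict

def instance_descriptors(points, cluster_labels, classes):
    """Summarize each (class, cluster) instance by centroid and point count.

    points: iterable of (x, y, z) tuples; cluster_labels: DBSCAN labels
    (-1 = noise); classes: semantic class per point. All names here are
    illustrative, not from the paper.
    """
    groups = defaultdict(list)
    for p, lab, cls in zip(points, cluster_labels, classes):
        if lab != -1:                       # discard DBSCAN noise points
            groups[(cls, lab)].append(p)
    descriptors = []
    for (cls, lab), pts in groups.items():
        n = len(pts)
        centroid = tuple(sum(coord) / n for coord in zip(*pts))
        descriptors.append({"class": cls, "centroid": centroid, "count": n})
    return descriptors

pts = [(0, 0, 0), (1, 0, 0), (10, 10, 0), (5, 5, 5)]
labs = [0, 0, 1, -1]
cls = ["building", "building", "light", "car"]
print(instance_descriptors(pts, labs, cls))
```

Reducing millions of raw points to a handful of such descriptors is what makes the subsequent MLP-based transformation estimate fast enough for the reported 0.0028 s runtime.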

25 pages, 3638 KiB  
Article
Forest Fire Smoke Detection Based on Deep Learning Approaches and Unmanned Aerial Vehicle Images
by Soon-Young Kim and Azamjon Muminov
Sensors 2023, 23(12), 5702; https://doi.org/10.3390/s23125702 - 19 Jun 2023
Cited by 10 | Viewed by 3838
Abstract
Wildfire poses a significant threat and is considered a severe natural disaster that endangers forest resources, wildlife, and human livelihoods. The number of wildfire incidents has increased in recent times, driven both by human involvement with nature and by the impacts of global warming. The rapid identification of a fire from its early smoke can be crucial in combating this issue, as it allows firefighters to respond quickly and prevent the fire from spreading. As a result, we propose a refined version of the YOLOv7 model for detecting smoke from forest fires. To begin, we compiled a collection of 6500 UAV pictures of smoke from forest fires. To further enhance YOLOv7's feature extraction capabilities, we incorporated the CBAM attention mechanism. Then, we added an SPPF+ layer to the network's backbone to better concentrate on smaller wildfire smoke regions. Finally, decoupled heads were introduced into the YOLOv7 model to extract useful information from an array of data. A BiFPN was used to accelerate multi-scale feature fusion and acquire more specific features, with learnable weights introduced so that the network can prioritize the feature maps that most significantly affect the results. Testing on our forest fire smoke dataset revealed that the proposed approach successfully detected forest fire smoke with an AP50 of 86.4%, 3.9% higher than previous single- and multiple-stage object detectors.
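The learnable-weight fusion mentioned above follows BiFPN's "fast normalized fusion" idea. A minimal sketch, using scalars in place of feature maps for brevity (not the paper's implementation):

```python
def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN-style fast normalized fusion.

    Each input feature gets a learnable weight; weights are clipped to be
    non-negative (ReLU) and normalized, so the network can prioritize the
    inputs that contribute most to the result.
    """
    w = [max(x, 0.0) for x in weights]   # ReLU keeps weights >= 0
    s = sum(w) + eps                     # eps stabilizes the division
    return sum((wi / s) * f for wi, f in zip(w, features))

# With equal weights, two feature values are (almost exactly) averaged:
print(round(fast_normalized_fusion([1.0, 3.0], [1.0, 1.0]), 3))  # 2.0
```

During training, the weights are updated by backpropagation like any other parameter, letting the fusion lean toward whichever scale's features detect smoke best.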

Review


23 pages, 1278 KiB  
Review
Role of Internet of Things and Deep Learning Techniques in Plant Disease Detection and Classification: A Focused Review
by Vijaypal Singh Dhaka, Nidhi Kundu, Geeta Rani, Ester Zumpano and Eugenio Vocaturo
Sensors 2023, 23(18), 7877; https://doi.org/10.3390/s23187877 - 14 Sep 2023
Cited by 7 | Viewed by 2176
Abstract
The automatic detection, visualization, and classification of plant diseases through image datasets are key challenges for precision and smart farming. The technological solutions proposed so far highlight the strength of the Internet of Things in data collection, storage, and communication, and of deep learning models in automatic feature extraction and selection. The integration of these technologies is therefore emerging as a key tool for the monitoring, data capture, prediction, detection, visualization, and classification of plant diseases from crop images. This manuscript presents a rigorous review of the Internet of Things and deep learning models employed for plant disease monitoring and classification. The review encompasses the unique strengths and limitations of different architectures, highlights the research gaps identified in the related literature, and presents a comparison of the performance of different deep learning models on publicly available datasets. The comparison gives insights into selecting the optimum deep learning model according to the size of the dataset, the expected response time, and the resources available for computation and storage. This review is important for developing optimized and hybrid models for plant disease classification.
