Information Retrieval from Remote Sensing Images

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Environmental Remote Sensing".

Deadline for manuscript submissions: closed (31 July 2022) | Viewed by 26899

Special Issue Editors


Guest Editor
Department of Electrical and Computer Engineering, School of Science and Technology, NOVA University of Lisbon, 1649-004 Lisbon, Portugal
Interests: remote sensing; machine learning; image processing; forestry monitoring; land cover land use image classification; satellite image processing; UAV image processing

Guest Editor
Centre of Technology and Systems, UNINOVA, 2829-516 Caparica, Portugal
Interests: image processing; decision support systems and healthcare information systems

Special Issue Information

Dear Colleagues,

The need for more informative decision making has driven the development of advanced applications that are able to summarize large amounts of data into relevant and useful information. This development has also been boosted by the high availability of remote sensing imagery, allowing users to carry out their assessments remotely and over large areas, rather than through field visits. Imagery provided by satellites is nowadays an inexpensive option, with several freely available sources that, if necessary, can be complemented by higher-detail images captured from airplanes or drones.

This Special Issue is expected to include manuscripts presenting new techniques to extract meaningful information from remote sensing imagery: data fusion algorithms for merging multi-source or multi-spectral data; advanced time-series analysis for information extraction from historical data and for the detection or prediction of deviations; new machine learning strategies, whether based on ensemble classifiers or other approaches; and decision support tools that use remote sensing information.

Potential topics for this Special Issue include, but are not limited to, the following:

- Information fusion;
- Multi-source and multi-spectral image fusion;
- Advanced machine learning methods;
- Time-series analysis;
- Knowledge discovery in remote sensing imagery.

Dr. André Damas Mora
Dr. José Manuel Fonseca
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • information extraction
  • machine learning
  • time-series
  • data fusion
  • multi-source
  • multi-spectral
  • decision support
  • remote sensing images
  • satellite image processing

Published Papers (10 papers)


Research

20 pages, 6526 KiB  
Article
Automatic Rural Road Centerline Detection and Extraction from Aerial Images for a Forest Fire Decision Support System
by Miguel Lourenço, Diogo Estima, Henrique Oliveira, Luís Oliveira and André Mora
Remote Sens. 2023, 15(1), 271; https://doi.org/10.3390/rs15010271 - 02 Jan 2023
Cited by 1 | Viewed by 2326
Abstract
To effectively manage the terrestrial firefighting fleet in a forest fire scenario, namely, to optimize its displacement in the field, it is crucial to have a well-structured and accurate mapping of rural roads. The landscape’s complexity, mainly due to severe shadows cast by the wild vegetation and trees, makes it challenging to extract rural roads based on processing aerial or satellite images, leading to heterogeneous results. This article proposes a method to improve the automatic detection of rural roads and the extraction of their centerlines from aerial images. This method has two main stages: (i) the use of a deep learning model (DeepLabV3+) for predicting rural road segments; (ii) an optimization strategy to improve the connections between predicted rural road segments, followed by a morphological approach to extract the rural road centerlines using thinning algorithms, such as those proposed by Zhang–Suen and Guo–Hall. After completing these two stages, the proposed method automatically detected and extracted rural road centerlines from complex rural environments. This is useful for developing real-time mapping applications.
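The centerline extraction in stage (ii) relies on classical morphological thinning. As an illustration of the technique named in the abstract (not the authors' implementation), a minimal pure-NumPy sketch of Zhang–Suen thinning applied to a binary road mask:

```python
import numpy as np

def zhang_suen_thin(mask):
    """Iteratively peel border pixels of a binary mask (1 = road) until
    only a roughly one-pixel-wide skeleton (centerline) remains."""
    img = mask.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):  # the two alternating Zhang-Suen sub-iterations
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    # 8 neighbours P2..P9, clockwise starting from north
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)  # number of foreground neighbours
                    # number of 0->1 transitions around the circular sequence
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img
```

Applied to a thick road segment, the loop removes deletable border pixels in alternating passes until no pixel can be removed without breaking connectivity, leaving the centerline.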
(This article belongs to the Special Issue Information Retrieval from Remote Sensing Images)

25 pages, 10110 KiB  
Article
Dam Extraction from High-Resolution Satellite Images Combined with Location Based on Deep Transfer Learning and Post-Segmentation with an Improved MBI
by Yafei Jing, Yuhuan Ren, Yalan Liu, Dacheng Wang and Linjun Yu
Remote Sens. 2022, 14(16), 4049; https://doi.org/10.3390/rs14164049 - 19 Aug 2022
Cited by 1 | Viewed by 1895
Abstract
Accurate mapping of dams can provide useful information about geographical locations and boundaries and can help improve public dam datasets. However, when applied to disaster emergency management, it is often difficult to completely determine the distribution of dams due to the incompleteness of the available data. Thus, we propose an automatic and intelligent extraction method that combines location with post-segmentation for dam detection. First, we constructed a dataset named RSDams and proposed an object detection model, YOLOv5s-ViT-BiFPN (You Only Look Once version 5s-Vision Transformer-Bi-Directional Feature Pyramid Network), with a training method using deep transfer learning to generate graphical locations for dams. After retraining the model on the RSDams dataset, its precision for dam detection reached 88.2% and showed a 3.4% improvement over learning from scratch. Second, based on the graphical locations, we utilized an improved Morphological Building Index (MBI) algorithm for dam segmentation to derive dam masks. The average overall accuracy and Kappa coefficient of the model applied to 100 images reached 97.4% and 0.7, respectively. Finally, we applied the dam extraction method to two study areas, namely, Yangbi County of Yunnan Province and Changping District of Beijing in China, and the recall rates reached 69.2% and 81.5%, respectively. The results show that our method has high accuracy and good potential to serve as an automatic and intelligent method for the establishment of a public dam dataset on a regional or national scale.

16 pages, 10977 KiB  
Article
Real-Time Integration of Segmentation Techniques for Reduction of False Positive Rates in Fire Plume Detection Systems during Forest Fires
by Leonardo Martins, Federico Guede-Fernández, Rui Valente de Almeida, Hugo Gamboa and Pedro Vieira
Remote Sens. 2022, 14(11), 2701; https://doi.org/10.3390/rs14112701 - 04 Jun 2022
Cited by 8 | Viewed by 2129
Abstract
Governmental offices are still highly concerned with controlling the escalation of forest fires due to their social, environmental and economic consequences. This paper presents new developments to a previously implemented system for the classification of smoke columns with object detection and a deep learning-based approach. The study focuses on identifying and correcting several False Positive cases while only incurring a small reduction in True Positives. Our approach was based on using an instance segmentation algorithm to obtain the shape, color and spectral features of the object. An ensemble of Machine Learning (ML) algorithms was then used to further identify smoke objects, removing around 95% of the False Positives, with a reduction of the detection rate to 88.7% (from 93.0%) on 29 newly acquired daily sequences. This model was also compared with 32 smoke sequences of the public HPWREN dataset and a dataset of 75 sequences, attaining average times of 9.6 and 6.5 min, respectively, between fire ignition and the first smoke detection.

22 pages, 11003 KiB  
Article
Composite Style Pixel and Point Convolution-Based Deep Fusion Neural Network Architecture for the Semantic Segmentation of Hyperspectral and Lidar Data
by Kevin T. Decker and Brett J. Borghetti
Remote Sens. 2022, 14(9), 2113; https://doi.org/10.3390/rs14092113 - 28 Apr 2022
Cited by 5 | Viewed by 2028
Abstract
Multimodal hyperspectral and lidar data sets provide complementary spectral and structural data. Joint processing and exploitation to produce semantically labeled pixel maps through semantic segmentation has proven useful for a variety of decision tasks. In this work, we identify two areas of improvement over previous approaches and present a proof-of-concept network implementing these improvements. First, rather than using a late fusion style architecture as in prior work, our approach implements a composite style fusion architecture to allow for the simultaneous generation of multimodal features and the learning of fused features during encoding. Second, our approach processes the higher information content lidar 3D point cloud data with point-based CNN layers instead of the lower information content lidar 2D DSM used in prior work. Unlike previous approaches, the proof-of-concept network utilizes a combination of point- and pixel-based CNN layers incorporating concatenation-based fusion, necessitating a novel point-to-pixel feature discretization method. We characterize our models against a modified GRSS18 data set. Our fusion model achieved 6.6% higher pixel accuracy compared to the highest-performing unimodal model. Furthermore, it achieved 13.5% higher mean accuracy against the hardest-to-classify samples (14% of total) and equivalent accuracy on the other test set samples.
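The point-to-pixel discretization mentioned in the abstract can be illustrated with a simple scatter-and-average scheme: per-point features learned by point-based layers are binned into the pixel grid so they can be concatenated with pixel-based features. This is a sketch of the general idea only, not the paper's exact method; the function name and grid conventions are assumptions:

```python
import numpy as np

def points_to_pixels(xy, feats, grid_shape, cell_size, origin=(0.0, 0.0)):
    """Scatter per-point feature vectors onto a (rows, cols) pixel grid,
    averaging the features of all points that fall into the same cell.
    Assumes points lie within the grid extent starting at `origin`."""
    h, w = grid_shape
    cols = np.floor((xy[:, 0] - origin[0]) / cell_size).astype(int)
    rows = np.floor((xy[:, 1] - origin[1]) / cell_size).astype(int)
    keep = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    rows, cols, feats = rows[keep], cols[keep], feats[keep]
    grid = np.zeros((h, w, feats.shape[1]))
    count = np.zeros((h, w, 1))
    np.add.at(grid, (rows, cols), feats)   # unbuffered scatter-add of features
    np.add.at(count, (rows, cols, 0), 1)   # points per cell
    return grid / np.maximum(count, 1)     # mean feature per cell; empty cells stay 0
```

Averaging is only one possible pooling choice; max-pooling per cell would be an equally plausible variant.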

20 pages, 30230 KiB  
Article
Manifold-Based Multi-Deep Belief Network for Feature Extraction of Hyperspectral Image
by Zhengying Li, Hong Huang, Zhen Zhang and Guangyao Shi
Remote Sens. 2022, 14(6), 1484; https://doi.org/10.3390/rs14061484 - 19 Mar 2022
Cited by 15 | Viewed by 2441
Abstract
Deep belief networks (DBNs) have been widely applied in hyperspectral imagery (HSI) processing. However, the original DBN model fails to explore the prior knowledge of training samples, which limits the discriminative capability of the extracted features for classification. In this paper, we propose a new deep learning method, termed manifold-based multi-DBN (MMDBN), to obtain deep manifold features of HSI. MMDBN uses a hierarchical initialization method that initializes the network with the local geometric structure hidden in the data. On this basis, a multi-DBN structure is built to learn deep features for each land-cover class, serving as the front-end of the whole model. Then, a discrimination manifold layer is developed to improve the discriminability of the extracted deep features. To discover the manifold structure contained in HSI, an intrinsic graph and a penalty graph are constructed in this layer using the label information of the training samples. After that, the deep manifold features can be obtained for classification. MMDBN not only effectively extracts deep features from each class in HSI, but also maximizes the margins between different manifolds in the low-dimensional embedding space. Experimental results on the Indian Pines, Salinas, and Botswana datasets reach overall accuracies of 78.25%, 90.48%, and 97.35%, respectively, indicating that MMDBN achieves better classification performance than several state-of-the-art methods.

18 pages, 8989 KiB  
Article
Application of ASTER Data for Differentiating Carbonate Minerals and Evaluating MgO Content of Magnesite in the Jiao-Liao-Ji Belt, North China Craton
by Young-Sun Son, Gilljae Lee, Bum Han Lee, Namhoon Kim, Sang-Mo Koh, Kwang-Eun Kim and Seong-Jun Cho
Remote Sens. 2022, 14(1), 181; https://doi.org/10.3390/rs14010181 - 01 Jan 2022
Cited by 5 | Viewed by 2626
Abstract
Numerous reports have successfully detected or differentiated carbonate minerals such as calcite and dolomite by using the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). However, there is a need to determine whether existing methods can differentiate magnesite from other carbonate minerals. This study proposes optimal band ratio combinations and new thresholds to distinguish magnesite, dolomite, and calcite using ASTER shortwave-infrared (SWIR) data. These were determined based on the spectral and chemical analysis of rock samples collected from Liaoning, China and Danchon, North Korea and the reflectance values from ASTER images. The results demonstrated that the simultaneous use of thresholds 2.13 and 2.015 for relative absorption band depths (RBDs) of (6 + 8)/7 and (7 + 9)/8, respectively, was the most effective for magnesite differentiation. The use of RBDs and band ratios to discriminate between dolomite and calcite was sufficiently effective. However, talc, tremolite, clay, and their mixtures with dolomite and calcite, which are commonly found in the study area, hampered the classification. The assessment of the ASTER band ratios for magnesite grade according to magnesium oxide content indicated that a band ratio of 5/6 was the most effective for this purpose. Therefore, this study proved that ASTER SWIR data can be effectively utilized for the identification and grade assessment of magnesite on a regional scale.
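The RBD rule above can be sketched directly. The two thresholds come from the abstract, but the comparison direction and the function name are assumptions here; the paper specifies the exact decision rule:

```python
import numpy as np

def magnesite_mask(b6, b7, b8, b9, t1=2.13, t2=2.015):
    """Flag candidate magnesite pixels from ASTER SWIR band arrays using
    relative absorption band depths (RBDs). Pixels exceeding both
    thresholds simultaneously are flagged (threshold direction assumed)."""
    rbd_7 = (b6 + b8) / b7   # RBD (6+8)/7: absorption depth at band 7
    rbd_8 = (b7 + b9) / b8   # RBD (7+9)/8: absorption depth at band 8
    return (rbd_7 > t1) & (rbd_8 > t2)
```

A deeper absorption feature at the band in the denominator drives the ratio above 2 (the value for a flat spectrum), which is why both thresholds sit slightly above 2.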

23 pages, 18555 KiB  
Article
Superpixel-Based Regional-Scale Grassland Community Classification Using Genetic Programming with Sentinel-1 SAR and Sentinel-2 Multispectral Images
by Zhenjiang Wu, Jiahua Zhang, Fan Deng, Sha Zhang, Da Zhang, Lan Xun, Mengfei Ji and Qian Feng
Remote Sens. 2021, 13(20), 4067; https://doi.org/10.3390/rs13204067 - 12 Oct 2021
Cited by 5 | Viewed by 2091
Abstract
Grasslands are one of the most important terrestrial ecosystems on the planet and have significant economic and ecological value. Accurate and rapid discrimination of grassland communities is critical to the conservation and utilization of grassland resources. Previous studies that explored grassland communities were mainly based on field surveys or airborne hyperspectral and high-resolution imagery. Limited by workload and cost, these methods are typically suitable only for small areas. Spaceborne mid-resolution RS images (e.g., Sentinel, Landsat) have been widely used for large-scale vegetation observations owing to their large swath width. However, it remains challenging to accurately distinguish between different grassland communities using these images, because of the strong spectral similarity of different communities and the suboptimal performance of the models used for classification. To address this issue, this paper proposes a superpixel-based grassland community classification method using a Genetic Programming (GP)-optimized classification model with Sentinel-2 multispectral bands, their derived vegetation indices (VIs) and textural features, and Sentinel-1 Synthetic Aperture Radar (SAR) bands and their derived textural features. The proposed method was evaluated in the Siziwang grassland of China. Our results showed that the addition of VIs and textures, as well as the use of GP-optimized classification models, significantly contributes to distinguishing grassland communities; the proposed approach classified the seven communities in the Siziwang grassland with an overall accuracy of 84.21% and a kappa coefficient of 0.81. We conclude that the proposed classification method is capable of distinguishing grassland communities with high accuracy at a regional scale.

25 pages, 11573 KiB  
Article
Earthquake-Damaged Buildings Detection in Very High-Resolution Remote Sensing Images Based on Object Context and Boundary Enhanced Loss
by Chao Wang, Xing Qiu, Hai Huan, Shuai Wang, Yan Zhang, Xiaohui Chen and Wei He
Remote Sens. 2021, 13(16), 3119; https://doi.org/10.3390/rs13163119 - 06 Aug 2021
Cited by 10 | Viewed by 2151
Abstract
Fully convolutional networks (FCNs) such as UNet and DeepLabv3+ are highly competitive when applied to the detection of earthquake-damaged buildings in very high-resolution (VHR) remote sensing images. However, existing methods show some drawbacks, including incomplete extraction of buildings of different sizes and inaccurate boundary prediction. This is attributed to deficient global context awareness, inaccurate correlation mining in the spatial context, and a failure to consider the relative positional relationship between pixels and boundaries. Hence, a detection method for earthquake-damaged buildings based on object contextual representations (OCR) and a boundary enhanced loss (BE loss) is proposed. First, the OCR module was separately embedded into the high-level feature extraction stages of the two networks DeepLabv3+ and UNet in order to enhance the feature representation; in addition, a novel loss function, BE loss, was designed according to the distance between pixels and boundaries to force the networks to pay more attention to learning boundary pixels. Finally, two improved networks (OB-DeepLabv3+ and OB-UNet) were established according to the two strategies. To verify the performance of the proposed method, two benchmark datasets (YSH and HTI) for detecting earthquake-damaged buildings were constructed from post-earthquake images captured in China and Haiti in 2010, respectively. The experimental results show that both the embedding of the OCR module and the application of BE loss significantly increase the detection accuracy of earthquake-damaged buildings, and that the two proposed networks are feasible and effective.
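The distance-to-boundary weighting idea behind a loss like BE loss can be sketched as follows. This is a hypothetical weighting with a Gaussian falloff, chosen here only to illustrate the concept; the paper's exact loss formulation differs:

```python
import numpy as np

def boundary_weights(mask, sigma=2.0):
    """Per-pixel weights that peak at the foreground/background boundary of a
    binary mask, so a weighted cross-entropy emphasises boundary pixels."""
    h, w = mask.shape
    # boundary = foreground pixels with at least one background 4-neighbour
    pad = np.pad(mask, 1)
    nb_min = np.minimum.reduce([pad[:-2, 1:-1], pad[2:, 1:-1],
                                pad[1:-1, :-2], pad[1:-1, 2:]])
    boundary = (mask == 1) & (nb_min == 0)
    by, bx = np.nonzero(boundary)
    if len(by) == 0:
        return np.ones((h, w))
    yy, xx = np.mgrid[0:h, 0:w]
    # brute-force squared distance to the nearest boundary pixel
    d2 = ((yy[..., None] - by) ** 2 + (xx[..., None] - bx) ** 2).min(axis=-1)
    return 1.0 + np.exp(-d2 / (2 * sigma ** 2))  # in (1, 2], max on the boundary
```

In training, the per-pixel cross-entropy map would simply be multiplied by these weights before averaging, steering gradient updates toward the boundary region.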

11 pages, 2247 KiB  
Communication
Aircraft Detection above Clouds by Sentinel-2 MSI Parallax
by Peder Heiselberg and Henning Heiselberg
Remote Sens. 2021, 13(15), 3016; https://doi.org/10.3390/rs13153016 - 01 Aug 2021
Cited by 7 | Viewed by 3382
Abstract
Detection of aircraft in satellite images is a challenging problem when the background consists of strongly reflective clouds with varying transparency. We develop a fast and effective detection algorithm that can find almost all aircraft above and between clouds in Sentinel-2 multispectral images. It exploits the time delay of a few seconds between the recorded multispectral band images, such that a moving aircraft is observed at different positions due to parallax effects. The aircraft's speed, heading and altitude are also calculated accurately. Analysing images over the English Channel during fall 2020, we obtain a detection accuracy of 80%, with most of the remaining aircraft obscured by clouds. We also analyse images in the 1.38 μm water absorption band, where only 61% of the aircraft are detected.
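Once the same aircraft has been matched across two band images, the speed and heading computation reduces to simple kinematics over the inter-band time delay. A sketch under stated assumptions: the function name and coordinate conventions are hypothetical, and the positions are taken as already co-registered in metres:

```python
import math

def aircraft_speed_heading(p1, p2, dt):
    """Ground speed (m/s) and heading (degrees clockwise from north) from the
    same aircraft's apparent positions p1, p2 (metres, x = east, y = north)
    in two band images recorded dt seconds apart."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    speed = math.hypot(dx, dy) / dt           # displacement over time delay
    heading = math.degrees(math.atan2(dx, dy)) % 360.0  # 0 deg = north
    return speed, heading
```

For example, an apparent displacement of 250 m due east over a 1 s band delay corresponds to 250 m/s on a heading of 90 degrees. Altitude estimation additionally requires the parallax geometry (viewing-angle difference between detectors) and is not sketched here.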

23 pages, 14397 KiB  
Article
Collision Avoidance on Unmanned Aerial Vehicles Using Neural Network Pipelines and Flow Clustering Techniques
by Dário Pedro, João P. Matos-Carvalho, José M. Fonseca and André Mora
Remote Sens. 2021, 13(13), 2643; https://doi.org/10.3390/rs13132643 - 05 Jul 2021
Cited by 22 | Viewed by 4272
Abstract
Unmanned Aerial Vehicles (UAVs), while not a recent invention, have recently acquired a prominent position in many industries: they are increasingly used not only by avid customers but also in high-demand technical use cases, and will have a significant societal effect in the coming years. However, the use of UAVs is fraught with significant safety threats, such as collisions with dynamic obstacles (other UAVs, birds, or randomly thrown objects). This research focuses on a safety problem that is often overlooked due to a lack of technology and solutions to address it: collisions with non-stationary objects. A novel approach is described that employs deep learning techniques to solve the computationally intensive problem of real-time collision avoidance with dynamic objects using off-the-shelf commercial vision sensors. The suggested approach's viability was corroborated by multiple experiments, first in simulation and afterwards in a concrete real-world case consisting of dodging a thrown ball. A novel video dataset was created and made available for this purpose, and transfer learning was also tested, with positive results.
