

Computer Vision-Based Methods and Tools in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Environmental Remote Sensing".

Deadline for manuscript submissions: 30 May 2024 | Viewed by 13812

Special Issue Editor


Prof. Dr. Michalis Savelonas
Guest Editor
Department of Computer Science and Biomedical Informatics, School of Science, Campus of Lamia, University of Thessaly, GR-35131 Lamia, Greece
Interests: pattern recognition; computer vision; expert systems

Special Issue Information

Dear Colleagues,

The progress of remote sensing imaging has been closely associated with computer vision and pattern recognition. Land cover mapping, target detection, change detection and boundary extraction, as well as pattern inference from time-series of imaging data, pose challenges for traditional computer vision and pattern recognition tasks, such as image clustering, classification and segmentation. Engineered feature vectors have been applied for the analysis of optical, SAR, multispectral or hyperspectral images, as well as point clouds. Later, visual dictionaries, the so-called bag-of-visual-words (BoVW) models, incorporated the statistics associated with each problem at hand. In the context of semantic segmentation, numerous approaches, including active contours, Markov random fields (MRF) and superpixels, have been combined with descriptors or BoVWs and applied to remote sensing data.

The rapid evolution of powerful GPUs and the availability of large datasets enabled extraordinary advances in deep-learning-based computer vision, starting from the performance breakthrough of AlexNet in ILSVRC 2012. This new paradigm reinvigorated interest in computational tools for remote sensing, and numerous works are regularly published on the analysis of remote sensing data with deep neural networks. Convolutional neural networks (CNNs) and derivative architectures, such as VGG, ResNet and Inception, play a prominent role in this direction. Another deep learning branch in remote sensing consists of applications of recurrent neural networks (RNNs), including long short-term memory (LSTM) networks, to time-series of images.

This Special Issue aims to explore state-of-the-art computer vision and pattern recognition applications in remote sensing. Research contributions, including surveys, are welcome, in particular novel contributions covering, but not limited to, the following application domains:

  • Land cover mapping;
  • Target detection;
  • Change detection;
  • Boundary extraction;
  • Pattern analysis on time-series of imaging data;
  • Works carried out at all scales and in all environments, including surveys and comparative studies, as well as descriptions of new methodologies, best practices, advantages and limitations of computational tools in remote sensing.

Prof. Dr. Michalis Savelonas
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • land cover mapping
  • target detection
  • change detection
  • point clouds
  • LiDAR
  • SAR imaging
  • multispectral imaging
  • hyperspectral imaging
  • computer vision
  • pattern recognition
  • deep learning
  • machine learning
  • image analysis
  • 3D shape analysis
  • time-series analysis

Published Papers (6 papers)


Research

20 pages, 6258 KiB  
Article
Locating and Grading of Lidar-Observed Aircraft Wake Vortex Based on Convolutional Neural Networks
by Xinyu Zhang, Hongwei Zhang, Qichao Wang, Xiaoying Liu, Shouxin Liu, Rongchuan Zhang, Rongzhong Li and Songhua Wu
Remote Sens. 2024, 16(8), 1463; https://doi.org/10.3390/rs16081463 - 20 Apr 2024
Viewed by 393
Abstract
Aircraft wake vortices are serious threats to aviation safety. The Pulsed Coherent Doppler Lidar (PCDL) has been widely used in the observation of aircraft wake vortices due to its advantages of high spatial-temporal resolution and high precision. However, the post-processing algorithms require significant computing resources, which precludes real-time detection of a wake vortex (WV). This paper presents an improved Convolutional Neural Network (CNN) method for WV locating and grading based on PCDL data that avoids the influence of unstable ambient wind fields on the localization and classification results. Typical WV cases are selected for analysis, and the WV locating and grading models are validated on different test sets. The consistency of the analytical algorithm and the CNN algorithm is verified. The results indicate that the improved CNN method achieves satisfactory recognition accuracy with higher efficiency and better robustness, especially in the case of strong turbulence, where the CNN method recognizes the wake vortex while the analytical method cannot. The improved CNN method is expected to be applied to optimize the current aircraft spacing criteria, which is promising in terms of aviation safety and economic benefit.
(This article belongs to the Special Issue Computer Vision-Based Methods and Tools in Remote Sensing)
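To make the classification step concrete, the sketch below is a minimal, hypothetical CNN classifier in the spirit of the paper, not the authors' architecture: it maps a single-channel lidar-derived velocity map to a handful of wake-vortex strength classes. The input size (64 × 64), the number of classes (4), and the layer widths are assumptions chosen for illustration only.

```python
# Minimal sketch (PyTorch), NOT the paper's architecture: a small CNN that maps
# a single-channel lidar-derived velocity map to wake-vortex strength classes.
# The 64x64 input size and 4 output classes are assumptions for illustration.
import torch
import torch.nn as nn

class WakeVortexCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)             # (B, 64)
        return self.classifier(x)                   # (B, num_classes)

if __name__ == "__main__":
    model = WakeVortexCNN()
    dummy_scan = torch.randn(8, 1, 64, 64)          # batch of synthetic velocity maps
    logits = model(dummy_scan)
    print(logits.shape)                             # torch.Size([8, 4])
```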

12 pages, 3758 KiB  
Article
Fusion of Single and Integral Multispectral Aerial Images
by Mohamed Youssef and Oliver Bimber
Remote Sens. 2024, 16(4), 673; https://doi.org/10.3390/rs16040673 - 14 Feb 2024
Viewed by 818
Abstract
An adequate fusion of the most significant salient information from multiple input channels is essential for many aerial imaging tasks. While multispectral recordings reveal features in various spectral ranges, synthetic aperture sensing makes occluded features visible. We present the first hybrid (model- and learning-based) architecture for fusing the most significant features from conventional aerial images with those from integral aerial images that result from synthetic aperture sensing for removing occlusion. It combines the environment’s spatial references with features of unoccluded targets that would normally be hidden by dense vegetation. Our method outperforms state-of-the-art two-channel and multi-channel fusion approaches visually and quantitatively in common metrics, such as mutual information, visual information fidelity, and peak signal-to-noise ratio. The proposed model does not require manually tuned parameters, can be extended to an arbitrary number and arbitrary combinations of spectral channels, and is reconfigurable for addressing different use cases. We demonstrate examples for search and rescue, wildfire detection, and wildlife observation.
(This article belongs to the Special Issue Computer Vision-Based Methods and Tools in Remote Sensing)
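For readers unfamiliar with the fusion-quality metrics mentioned above, the following sketch shows how two of them, peak signal-to-noise ratio and a histogram-based mutual information estimate, can be computed with NumPy. It illustrates the metrics only, not the authors' evaluation code; the single-band 8-bit test images are synthetic assumptions.

```python
# Illustrative sketch (not the paper's code): PSNR and a histogram-based mutual
# information estimate between a fused image and a reference, using NumPy only.
import numpy as np

def psnr(reference: np.ndarray, fused: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 64) -> float:
    """Mutual information estimated from the joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                       # joint probability
    px = pxy.sum(axis=1, keepdims=True)             # marginal over rows
    py = pxy.sum(axis=0, keepdims=True)             # marginal over columns
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
    fused = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(ref, fused):.2f} dB, MI: {mutual_information(ref, fused):.3f}")
```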

18 pages, 9369 KiB  
Article
Quantifying the Loss of Coral from a Bleaching Event Using Underwater Photogrammetry and AI-Assisted Image Segmentation
by Kai L. Kopecky, Gaia Pavoni, Erica Nocerino, Andrew J. Brooks, Massimiliano Corsini, Fabio Menna, Jordan P. Gallagher, Alessandro Capra, Cristina Castagnetti, Paolo Rossi, Armin Gruen, Fabian Neyer, Alessandro Muntoni, Federico Ponchio, Paolo Cignoni, Matthias Troyer, Sally J. Holbrook and Russell J. Schmitt
Remote Sens. 2023, 15(16), 4077; https://doi.org/10.3390/rs15164077 - 18 Aug 2023
Cited by 4 | Viewed by 7164
Abstract
Detecting the impacts of natural and anthropogenic disturbances that cause declines in organisms or changes in community composition has long been a focus of ecology. However, a tradeoff often exists between the spatial extent over which relevant data can be collected and the resolution of those data. Recent advances in underwater photogrammetry, as well as computer vision and machine learning tools that employ artificial intelligence (AI), offer potential solutions with which to resolve this tradeoff. Here, we coupled a rigorous photogrammetric survey method with novel AI-assisted image segmentation software in order to quantify the impact of a coral bleaching event on a tropical reef, both at an ecologically meaningful spatial scale and with high spatial resolution. In addition to outlining our workflow, we highlight three key results: (1) dramatic changes in the three-dimensional surface areas of live and dead coral, as well as the ratio of live to dead colonies before and after bleaching; (2) a size-dependent pattern of mortality in bleached corals, where the largest corals were disproportionately affected; and (3) a significantly greater decline in the surface area of live coral, as revealed by our approximation of the 3D shape compared to the more standard planar area (2D) approach. Photogrammetry allows us to turn 2D images into approximate 3D models in a flexible and efficient way. Increasing the resolution, accuracy, spatial extent, and efficiency with which we can quantify effects of disturbances will improve our ability to understand the ecological consequences that cascade from small to large scales, as well as allow more informed decisions to be made regarding the mitigation of undesired impacts.
(This article belongs to the Special Issue Computer Vision-Based Methods and Tools in Remote Sensing)
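The contrast between 3D surface area and planar (2D) area that drives result (3) can be illustrated in a few lines of NumPy: for a triangle mesh, the true surface area is the sum of the triangle areas in 3D, while the planar area is the same sum after projecting onto the horizontal plane. The sketch below is a toy illustration of that idea, not the study's photogrammetry pipeline; the tiny tent-shaped mesh is invented for the example.

```python
# Minimal sketch, not the study's pipeline: 3D vs. planar (XY-projected) surface
# area for a triangle mesh given as vertex coordinates and face indices.
import numpy as np

def surface_areas(vertices: np.ndarray, faces: np.ndarray) -> tuple[float, float]:
    """Return (true 3D surface area, planar area projected onto the XY plane)."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)              # per-triangle normal, length = 2*area
    area_3d = 0.5 * np.linalg.norm(cross, axis=1).sum()
    area_2d = 0.5 * np.abs(cross[:, 2]).sum()       # |z component| = twice projected area
    return float(area_3d), float(area_2d)

if __name__ == "__main__":
    # A tiny tent-shaped mesh: two triangles sharing a ridge raised in z.
    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 1], [1, 1, 1]], dtype=float)
    tris = np.array([[0, 1, 2], [1, 3, 2]])
    a3d, a2d = surface_areas(verts, tris)
    print(f"3D area: {a3d:.3f}, planar area: {a2d:.3f}")  # 3D area exceeds planar area
```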

25 pages, 4426 KiB  
Article
Converging Channel Attention Mechanisms with Multilayer Perceptron Parallel Networks for Land Cover Classification
by Xiangsuo Fan, Xuyang Li, Chuan Yan, Jinlong Fan, Lin Chen and Nayi Wang
Remote Sens. 2023, 15(16), 3924; https://doi.org/10.3390/rs15163924 - 8 Aug 2023
Viewed by 958
Abstract
This paper proposes a network structure called CAMP-Net, which addresses the problem that traditional deep learning algorithms are unable to manage the pixel information of different bands, resulting in poor differentiation of the feature representations of different categories and causing classification overfitting. CAMP-Net is a parallel network that first enhances the interaction of local band information by grouping the spectral nesting of the band information and then applies a parallel processing model. One branch inputs the features, together with the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) band information generated by grouped nesting, into a ViT framework and enhances the interaction and information flow between different channels in the feature map by adding a channel attention mechanism, improving the expressive capability of the feature map. The other branch enhances the network’s extraction of different feature channels through a multi-layer perceptron (MLP) network designed around the utilization of the feature channels. Finally, the classification results are obtained by fusing the features obtained by the channel attention mechanism with those obtained by the MLP to achieve pixel-level multispectral image classification. The algorithm was applied to the land cover distribution of South County, Yiyang City, Hunan Province, and experiments were conducted on 10 m Sentinel-2 multispectral RS images. The experimental results show that the overall accuracy of the proposed algorithm is 99.00%, compared with 95.81% for the vision transformer (ViT), a substantial improvement on the Sentinel-2 dataset that demonstrates the value of the approach for developing land cover classification algorithms for remote sensing images.
(This article belongs to the Special Issue Computer Vision-Based Methods and Tools in Remote Sensing)
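As a small illustration of the spectral-index inputs mentioned in the abstract, the sketch below computes NDVI and NDWI from Sentinel-2-style bands with NumPy. The band roles (B4 = red, B3 = green, B8 = near infrared) and the random reflectance data are assumptions for the example; this is not CAMP-Net itself.

```python
# Illustrative sketch of the NDVI/NDWI input channels (not CAMP-Net).
# Assumed Sentinel-2 band roles: B4 = red, B3 = green, B8 = near infrared (NIR).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + eps)

def ndwi(green: np.ndarray, nir: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized difference water index (McFeeters): (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    b3, b4, b8 = (rng.uniform(0.0, 1.0, size=(128, 128)) for _ in range(3))
    extra_channels = np.stack([ndvi(b8, b4), ndwi(b3, b8)], axis=0)  # stacked index maps
    print(extra_channels.shape, extra_channels.min(), extra_channels.max())
```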

23 pages, 28102 KiB  
Article
Multi-SUAV Collaboration and Low-Altitude Remote Sensing Technology-Based Image Registration and Change Detection Network of Garbage Scattered Areas in Nature Reserves
by Kai Yan, Yaxin Dong, Yang Yang and Lin Xing
Remote Sens. 2022, 14(24), 6352; https://doi.org/10.3390/rs14246352 - 15 Dec 2022
Cited by 1 | Viewed by 1408
Abstract
Change detection is an important task in remote sensing image processing and analysis. However, due to position errors and wind interference, bi-temporal low-altitude remote sensing images collected by SUAVs often suffer from different viewing angles. Existing methods need to use an independent registration network for registration before change detection, which greatly reduces the integrity and speed of the task. In this work, we propose an end-to-end network architecture, RegCD-Net, to address change detection problems in bi-temporal SUAV low-altitude remote sensing images. We utilize global and local correlations to generate an optical flow pyramid and realize image registration through layer-by-layer optical flow fields. We then use a nested connection to combine the rich semantic information in the deep layers of the network with the precise location information in the shallow layers, and perform deep supervision through a combined attention module to finally achieve change detection in bi-temporal images. We apply this network to the task of change detection in the garbage-scattered areas of nature reserves and establish a related dataset. Experimental results show that our RegCD-Net outperforms several state-of-the-art CD methods with more precise change edge representation, relatively few parameters, fast speed, and better integration without additional registration networks.
(This article belongs to the Special Issue Computer Vision-Based Methods and Tools in Remote Sensing)
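To show why joint registration matters, the sketch below is a deliberately naive two-stage baseline, not RegCD-Net: it registers the second image to the first with dense Farneback optical flow (OpenCV) and then takes an absolute difference as a crude change map. The image data are synthetic placeholders.

```python
# Naive baseline sketch (NOT RegCD-Net): optical-flow registration followed by a
# simple absolute-difference change map, using OpenCV and NumPy.
import cv2
import numpy as np

def register_and_diff(img_t1: np.ndarray, img_t2: np.ndarray) -> np.ndarray:
    """Warp img_t2 onto img_t1 using Farneback dense optical flow, return |difference|."""
    # Arguments: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(img_t1, img_t2, None, 0.5, 4, 21, 3, 5, 1.1, 0)
    h, w = img_t1.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)   # sample img_t2 along the flow
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    registered = cv2.remap(img_t2, map_x, map_y, cv2.INTER_LINEAR)
    return cv2.absdiff(img_t1, registered)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t1 = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
    t2 = np.roll(t1, shift=3, axis=1)            # simulate a small viewpoint shift
    change_map = register_and_diff(t1, t2)
    print(change_map.mean())
```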

19 pages, 15010 KiB  
Article
DSM Generation from Multi-View High-Resolution Satellite Images Based on the Photometric Mesh Refinement Method
by Benchao Lv, Jianchen Liu, Ping Wang and Muhammad Yasir
Remote Sens. 2022, 14(24), 6259; https://doi.org/10.3390/rs14246259 - 10 Dec 2022
Cited by 1 | Viewed by 2051
Abstract
Automatic reconstruction of DSMs from satellite images is an active research topic in photogrammetry, yet most state-of-the-art pipelines produce 2.5D products. In order to address some shortcomings of traditional algorithms and expand the means of updating digital surface models, a DSM generation method based on variational mesh refinement of satellite stereo image pairs is proposed to recover 3D surfaces from coarse input. Specifically, an initial coarse mesh is constructed first, the geometric features of the generated 3D mesh model are then optimized using the information of the original images, and the 3D mesh subdivision is constrained by combining the images’ texture information and projection information, with subdivision optimization of the mesh model finally achieved. The results of this method are compared qualitatively and quantitatively with those of the commercial software PCI and the SGM method. The experimental results show that the generated 3D digital surface has clearer edge contours, more refined planar textures, and sufficient model accuracy to match the actual conditions of the ground surface well, proving the effectiveness of the method. The method is advantageous for research on true 3D products in complex urban areas and can generate complete DSM products from rough mesh input, indicating promising development prospects.
(This article belongs to the Special Issue Computer Vision-Based Methods and Tools in Remote Sensing)
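One early step described above, building an initial coarse mesh before refinement, can be illustrated with a Delaunay triangulation of sparse 3D points over their planimetric (x, y) positions. The sketch below (NumPy + SciPy) only illustrates that idea with assumed synthetic tie points; it is not the paper's variational refinement pipeline.

```python
# Minimal sketch of constructing an initial coarse 2.5D mesh from scattered 3D
# points by triangulating in the ground plane (illustration only).
import numpy as np
from scipy.spatial import Delaunay

def coarse_mesh(points_xyz: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (vertices, triangle indices) for a 2.5D mesh over scattered points."""
    tri = Delaunay(points_xyz[:, :2])            # triangulate planimetric positions
    return points_xyz, tri.simplices

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    xy = rng.uniform(0, 100, size=(500, 2))                   # synthetic sparse tie points
    z = 10.0 + 0.05 * xy[:, 0] + rng.normal(0, 0.5, 500)      # gently sloping terrain
    vertices, faces = coarse_mesh(np.column_stack([xy, z]))
    print(vertices.shape, faces.shape)           # (500, 3) vertices, (~, 3) triangles
```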
