
Special Issue "Computer Vision and Image Processing in Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 15 December 2023

Special Issue Editors

Dr. Asanka Perera
Autonomous System Laboratory (Building 17, Room 131), School of Engineering and IT, University of New South Wales, Canberra 2610, Australia
Interests: computer vision; robotics; mechatronics; drones; machine learning
Prof. Dr. Javaan Chahl
School of Engineering, University of South Australia, Mawson Lakes, SA 5095, Australia
Interests: robotics; sensing; biomimicry; aerospace
Prof. Dr. Ali Al-Naji
Electrical Engineering Technical College, Middle Technical University, Baghdad 1022, Iraq
Interests: biomedical instrumentation and sensors; healthcare applications; computer vision systems; microcontroller applications
Dr. Huajian Liu
The Plant Accelerator, Australian Plant Phenomics Facility, School of Agriculture, Food and Wine, University of Adelaide, Waite Campus, Building WT 40, Hartley Grove, Adelaide, SA 5064, Australia
Interests: machine vision and machine learning for plant phenotyping and precision agriculture; plant nutrient estimation; plant disease detection; drought and salt stress tolerance; plant growing status estimation; invertebrate pest detection

Special Issue Information

Dear Colleagues,

Digital image processing techniques have evolved at an unprecedented rate over the past decade, and these developments have made computer vision-based applications an inseparable part of our lives. Computer vision research continues to advance and find its way into new applications, and many research avenues remain untapped.

This special issue invites articles ranging from improvements to classical image processing techniques, such as image representation, 2D convolution, and frequency-domain image filtering, to modern approaches such as machine learning and deep learning-based solutions.

We welcome both theoretical and applied contributions. Topics of interest include, but are not limited to:

  • Disasters - search and rescue applications, casualty detection outdoors, disaster site analysis, wildfire detection and analysis, fire detection
  • Aerial - aerial object detection, object tracking from drone videos, aerial mapping, drone videos, drone images, drone vision systems
  • Robotics - image processing for robots, vision-based localization, object detection for robots
  • Medical - health applications, vital signs monitoring, medical image segmentation and classification, biomedical image processing
  • Human motion - human action recognition, human pose recognition, human gesture recognition, face analysis
  • Agriculture - plant phenotyping and precision agriculture, weed and insect detection, plant condition monitoring, thermal imagery, infrared imagery, and hyperspectral imaging
  • Image analysis - edge detection, object detection, object classification, image noise removal, image transform, optical flow, machine learning, deep learning
  • Biology - biologically inspired vision systems, insect vision
  • Datasets - image simulation, simulated image/video datasets, computer vision datasets

Dr. Asanka Perera
Prof. Dr. Javaan Chahl
Prof. Dr. Ali Al-Naji
Dr. Huajian Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • image processing
  • machine learning
  • deep learning
  • search and rescue
  • aerial images
  • automatic phenotyping
  • precision agriculture
  • datasets

Published Papers (8 papers)


Research


Article
Blind Restoration of a Single Real Turbulence-Degraded Image Based on Self-Supervised Learning
Remote Sens. 2023, 15(16), 4076; https://doi.org/10.3390/rs15164076 - 18 Aug 2023
Abstract
Turbulence-degraded image frames are distorted by both turbulent deformations and space-time varying blurs. Restoring atmospheric turbulence-degraded images is of great importance in fields such as remote sensing, surveillance, traffic control, and astronomy. While traditional supervised learning uses large numbers of simulated distorted images for training, it generalizes poorly to real degraded images. To address this problem, a novel blind restoration network is presented that takes only a single turbulence-degraded image as input and is mainly used to reconstruct real atmospheric turbulence-distorted images. The proposed method does not require pre-training: it takes a single real turbulence-degraded image as input and outputs a high-quality result. To improve the self-supervised restoration effect, Regularization by Denoising (RED) is introduced into the network, and the final output is obtained by averaging the predictions of multiple iterations of the trained model. Experiments were carried out on real-world turbulence-degraded data with the proposed method and four reported methods, evaluated with four no-reference indicators, among which Average Gradient, NIQE, and BRISQUE achieved state-of-the-art results compared with the other methods. Our method is thus effective in alleviating distortion and blur, restoring image details, and enhancing visual quality. Furthermore, the proposed approach generalizes to a certain degree and also restores motion-blurred images well.
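
For readers unfamiliar with Regularization by Denoising (RED), the sketch below illustrates the general shape of such a self-supervised, single-image pipeline in PyTorch: a small network is fitted to the one observed frame while a denoiser-based penalty steers its output toward clean images, and predictions from the final iterations are averaged. The network size, the Gaussian stand-in denoiser, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of RED-style self-supervised restoration (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_denoiser(x, k=5, sigma=1.0):
    """Stand-in denoiser D(x): depthwise Gaussian blur (a real system would use a stronger denoiser)."""
    coords = torch.arange(k, dtype=torch.float32) - k // 2
    g = torch.exp(-coords**2 / (2 * sigma**2))
    g = (g / g.sum()).view(1, 1, k, 1)
    kernel = (g @ g.view(1, 1, 1, k)).repeat(x.shape[1], 1, 1, 1)
    return F.conv2d(x, kernel, padding=k // 2, groups=x.shape[1])

net = nn.Sequential(  # tiny restoration network; real models are far deeper
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))

degraded = torch.rand(1, 1, 64, 64)            # the single observed turbulence-degraded frame
opt = torch.optim.Adam(net.parameters(), 1e-3)
outputs = []
for it in range(200):
    restored = net(degraded)
    # Data term; a real pipeline would model the degradation (warping + blur) here.
    fidelity = F.mse_loss(restored, degraded)
    # RED prior rho(x) = 0.5 * x^T (x - D(x)) pulls the output toward the denoiser's fixed points.
    red = 0.5 * (restored * (restored - gaussian_denoiser(restored))).mean()
    loss = fidelity + 0.1 * red
    opt.zero_grad(); loss.backward(); opt.step()
    if it >= 180:                               # average predictions from the final iterations
        outputs.append(restored.detach())
final = torch.stack(outputs).mean(0)
```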

Article
LMDFS: A Lightweight Model for Detecting Forest Fire Smoke in UAV Images Based on YOLOv7
Remote Sens. 2023, 15(15), 3790; https://doi.org/10.3390/rs15153790 - 30 Jul 2023
Abstract
Forest fires pose significant hazards to ecological environments and to the economy and society. Detecting forest fire smoke can provide crucial information for suppressing fires early. Previous deep learning-based detection models have been limited in detecting small smoke plumes and smoke against smoke-like interference. In this paper, we propose a lightweight forest fire smoke detection model suitable for UAVs. First, a smoke dataset is created by combining forest smoke photos obtained through web crawling with photos augmented by synthesizing smoke. Second, the GSELAN and GSSPPFCSPC modules are built on Ghost Shuffle Convolution (GSConv), which efficiently reduces the number of parameters in the model and accelerates its convergence. Next, to address the indistinguishable feature boundaries between clouds and smoke, we integrate coordinate attention (CA) into the YOLO feature extraction network to strengthen the extraction of smoke features and attenuate background information. Additionally, we use Content-Aware ReAssembly of FEatures (CARAFE) upsampling to expand the receptive field in the feature fusion network and fully exploit the semantic information. Finally, we adopt the SCYLLA-Intersection over Union (SIoU) loss as a replacement for the original loss function in the prediction phase, which improves convergence efficiency and speed. The experimental results demonstrate that the proposed LMDFS model achieves a smoke detection accuracy of 80.2%, a 5.9% improvement over the baseline, at a high frame rate of 63.4 frames per second (FPS). The model also reduces the parameter count by 14% and giga floating-point operations (GFLOPs) by 6%. These results suggest that the proposed model achieves high accuracy while requiring fewer computational resources, making it a promising approach for practical deployment in smoke detection applications.
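
For context, Ghost Shuffle Convolution (GSConv), the building block behind the GSELAN and GSSPPFCSPC modules, pairs a dense convolution with a cheap depthwise convolution and shuffles the resulting channels. The minimal PyTorch sketch below is adapted from the general GSConv idea in the slim-neck literature; layer sizes and activation choices are assumptions, not the authors' exact module.

```python
import torch
import torch.nn as nn

class GSConv(nn.Module):
    """Sketch of Ghost Shuffle Convolution: half the output channels come from a dense
    convolution, half from a cheap depthwise convolution, followed by a channel shuffle."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        c_half = c_out // 2
        self.dense = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        self.cheap = nn.Sequential(  # depthwise conv applied to the dense output
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x):
        y1 = self.dense(x)
        y2 = self.cheap(y1)
        y = torch.cat([y1, y2], dim=1)
        b, c, h, w = y.shape          # channel shuffle to mix dense and cheap features
        return y.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)

feat = GSConv(64, 128)(torch.rand(1, 64, 80, 80))  # e.g., a YOLO neck feature map
```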

Article
Depth Information Precise Completion-GAN: A Precisely Guided Method for Completing Ill Regions in Depth Maps
Remote Sens. 2023, 15(14), 3686; https://doi.org/10.3390/rs15143686 - 24 Jul 2023
Abstract
In depth maps obtained through binocular stereo matching, there are many ill regions caused by factors such as lighting or occlusion. These regions cannot be recovered accurately because the information required for matching is missing, and because GAN-based completion models generate random results, they cannot complete the depth map faithfully. It is therefore necessary to complete the depth map accurately according to the actual scene. To address this issue, this paper proposes a depth information precise completion GAN (DIPC-GAN) that uses a guide layer normalization (GuidLN) module to steer the model toward precise completion by exploiting depth edges. GuidLN flexibly adjusts the weights of the guiding conditions based on intermediate results, allowing modules to incorporate the guiding information accurately and effectively. The model employs multiscale discriminators to evaluate results of different resolutions at different generator stages, enhancing the generator's grasp of both overall image structure and detail and improving its robustness; even when the ill regions are large, the model can effectively fill in their missing details. This paper also proposes Attention-ResBlock, which lets all ResBlocks in each task module of a GAN-based multitask model focus on their own task by sharing a mask, effectively focusing the different subnetworks on their respective tasks. The model shows good repair results on artificial, real, and remote sensing image datasets. In the final experiments, the model's REL and RMSE decreased by 9.3% and 9.7%, respectively, compared to RDFGan.
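
Although GuidLN's internals are not given here, guidance-conditioned normalization is commonly realized in the style of the sketch below: features are normalized, then scaled and shifted per pixel by parameters predicted from a guidance map such as depth edges. Module and parameter names are hypothetical, not the paper's design.

```python
import torch
import torch.nn as nn

class GuidedNorm(nn.Module):
    """Sketch of guidance-conditioned normalization: normalize features, then apply
    a per-pixel scale and shift predicted from a guidance map (e.g., depth edges)."""
    def __init__(self, channels, guide_channels=1):
        super().__init__()
        self.norm = nn.GroupNorm(1, channels, affine=False)  # layer-norm-like, no learned affine
        self.to_gamma = nn.Conv2d(guide_channels, channels, 3, padding=1)
        self.to_beta = nn.Conv2d(guide_channels, channels, 3, padding=1)

    def forward(self, x, guide):
        gamma = self.to_gamma(guide)   # per-pixel scale from the guidance (edge) map
        beta = self.to_beta(guide)     # per-pixel shift
        return self.norm(x) * (1 + gamma) + beta

x = torch.rand(2, 64, 32, 32)
edges = torch.rand(2, 1, 32, 32)      # hypothetical depth-edge guidance map
out = GuidedNorm(64)(x, edges)
```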

Communication
Coastal Ship Tracking with Memory-Guided Perceptual Network
Remote Sens. 2023, 15(12), 3150; https://doi.org/10.3390/rs15123150 - 16 Jun 2023
Abstract
Coastal ship tracking is used in many applications, such as autonomous navigation, maritime rescue, and environmental monitoring. Many general deep learning-based object-tracking methods have been explored for ship tracking, but they often fail to track ships accurately in challenging scenarios such as occlusion, scale variation, and motion blur. We propose a memory-guided perception network (MGPN) to address these issues. MGPN contains two main innovations. The dynamic memory mechanism (DMM) stores past features of the tracked target to enhance the model's feature fusion capability in the temporal dimension, while the hierarchical context-aware module (HCAM) enables interaction between different scales and between global and local information, addressing the scale discrepancy of targets and improving feature fusion in the spatial dimension. Together, these innovations enhance tracking robustness and reduce bounding box inaccuracies. We conducted an in-depth ablation study to demonstrate the effectiveness of DMM and HCAM. Owing to these two modules, MGPN achieves state-of-the-art performance on a large offshore ship tracking dataset containing challenging scenarios such as complex backgrounds, ship occlusion, and varying scales.
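
To picture what a dynamic memory mechanism contributes, consider the toy sketch below: a FIFO bank stores features of the target from past frames, and the current frame's features query the bank with dot-product attention. This is a deliberately simplified, assumed design, not MGPN's actual DMM.

```python
import torch
import torch.nn.functional as F
from collections import deque

class FeatureMemory:
    """Sketch of a dynamic memory: store recent target features and read them back
    with dot-product attention against the current frame's features."""
    def __init__(self, capacity=8):
        self.bank = deque(maxlen=capacity)   # oldest entries are evicted automatically

    def write(self, feat):                   # feat: (C, N) flattened target features
        self.bank.append(feat)

    def read(self, query):                   # query: (C, M) current-frame features
        if not self.bank:
            return query
        mem = torch.cat(list(self.bank), dim=1)                          # (C, total)
        attn = F.softmax(query.t() @ mem / query.shape[0] ** 0.5, dim=-1)  # (M, total)
        return query + mem @ attn.t()        # fuse memory back into the query features

memory = FeatureMemory()
memory.write(torch.rand(256, 49))            # features stored from an earlier frame
fused = memory.read(torch.rand(256, 49))     # temporally enhanced current features
```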

Article
An Efficient Cloud Classification Method Based on a Densely Connected Hybrid Convolutional Network for FY-4A
Remote Sens. 2023, 15(10), 2673; https://doi.org/10.3390/rs15102673 - 21 May 2023
Cited by 1
Abstract
Understanding atmospheric motions and projecting climate change depend significantly on cloud types: different cloud types correspond to different atmospheric conditions, so accurate cloud classification helps direct forecasts and meteorology-related studies more effectively. However, accurate cloud classification is challenging and often requires manual involvement due to complex cloud forms and dispersion. To address this challenge, this paper proposes an improved cloud classification method based on a densely connected hybrid convolutional network. A dense connection mechanism is applied to a hybrid architecture of three-dimensional (3D-CNN) and two-dimensional (2D-CNN) convolutional neural networks to fully use the feature information of the spatial and spectral channels of the FY-4A satellite. The proposed network yields cloud categorization with high temporal resolution, extensive coverage, and high accuracy, without any human intervention. Tests show that it can perform real-time classification of seven cloud types plus clear sky over the Chinese region. Against the CloudSat 2B-CLDCLASS product as the test target, the proposed network achieves an overall accuracy of 95.2% and a recall of more than 82.9% for all sample types, outperforming other deep-learning-based techniques.
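
The hybrid 3D/2D idea, 3D convolutions mixing the satellite's spectral channels before cheaper 2D convolutions refine spatial features, can be sketched as follows. The 14-band input, layer widths, and the omission of the dense connections are simplifying assumptions made for illustration.

```python
import torch
import torch.nn as nn

class Hybrid3D2D(nn.Module):
    """Sketch of a hybrid spectral-spatial classifier: 3D convs mix spectral bands,
    then the spectral axis is folded into channels for 2D convs. The dense
    connections (feature concatenation across layers) are omitted for brevity."""
    def __init__(self, bands=14, classes=8):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, (3, 3, 3), padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, (3, 3, 3), padding=1), nn.ReLU())
        self.conv2d = nn.Sequential(
            nn.Conv2d(16 * bands, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(64, classes)

    def forward(self, x):                     # x: (B, bands, H, W) multispectral patch
        y = self.conv3d(x.unsqueeze(1))       # (B, 16, bands, H, W)
        y = y.flatten(1, 2)                   # fold the spectral axis into channels
        return self.head(self.conv2d(y).flatten(1))

logits = Hybrid3D2D()(torch.rand(2, 14, 11, 11))  # 8 classes: 7 cloud types + clear sky
```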

Article
Assessment of Machine and Deep Learning Approaches for Fault Diagnosis in Photovoltaic Systems Using Infrared Thermography
Remote Sens. 2023, 15(6), 1686; https://doi.org/10.3390/rs15061686 - 21 Mar 2023
Abstract
Nowadays, millions of photovoltaic (PV) plants are installed around the world. Given the widespread use of PV supply systems, and in order to keep PV plants safe and avoid power losses, they should be carefully protected, and any faults should be detected, classified, and isolated. In this paper, different machine learning (ML) and deep learning (DL) techniques were assessed for fault detection and diagnosis of PV modules. First, a dataset of infrared thermography (IRT) images of normal and faulty PV modules was collected. Second, two sub-datasets were built from the original one: the first contained normal and faulty IRT images, while the second comprised only faulty IRT images. The first sub-dataset was used to develop fault detection models, referred to as binary classification, in which an image is classified as showing a faulty or a normal PV panel. The second was used to design fault diagnosis models, referred to as multi-classification, in which four classes (Fault1, Fault2, Fault3 and Fault4) were examined: bypass diode failure, shading, a short-circuited PV module, and soil accumulated on the PV module, respectively. To evaluate the investigated models, confusion-matrix-based metrics including precision, recall, F1-score, and accuracy were used. The results showed that the deep learning methods exhibited better accuracy for both binary and multiclass classification when solving the fault detection and diagnosis problem in PV modules/arrays. The DL techniques detected and classified the different kinds of defects with good accuracy (98.71%), and a comparative study confirmed that the DL-based approaches outperformed the ML-based ones.
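
Since the evaluation rests on confusion-matrix-derived metrics, the short sketch below shows how precision, recall, F1-score, and accuracy are typically obtained with scikit-learn; the labels are invented placeholders, not the paper's data.

```python
from sklearn.metrics import confusion_matrix, classification_report

# Placeholder multiclass predictions for the four fault types described above.
classes = ["bypass-diode", "shading", "short-circuit", "soiling"]
y_true = ["bypass-diode", "shading", "shading", "short-circuit", "soiling", "soiling"]
y_pred = ["bypass-diode", "shading", "soiling", "short-circuit", "soiling", "soiling"]

print(confusion_matrix(y_true, y_pred, labels=classes))       # rows: true, cols: predicted
print(classification_report(y_true, y_pred, labels=classes))  # per-class precision/recall/F1 + accuracy
```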

Other


Technical Note
Aircraft Target Detection in Low Signal-to-Noise Ratio Visible Remote Sensing Images
Remote Sens. 2023, 15(8), 1971; https://doi.org/10.3390/rs15081971 - 8 Apr 2023
Cited by 2
Abstract
With the increasing demand for wide-area, refined detection of aircraft targets, remote sensing cameras have adopted ultra-large area-array detectors as a new imaging mode to obtain broad-width remote sensing images (RSIs) with higher resolution. However, this imaging technology introduces new image degradation characteristics, especially weak target energy and a low signal-to-noise ratio (SNR), which seriously affect target detection capability. To address these issues, we propose an aircraft detection method for RSIs with low SNR, termed L-SNR-YOLO. In particular, the backbone blends a Swin Transformer and a convolutional neural network (CNN), which obtains multiscale global and local RSI information to enhance the algorithm's robustness. Moreover, we design an effective feature enhancement (EFE) block integrating the concept of nonlocal means filtering to make aircraft features significant. In addition, we utilize a novel loss function to optimize detection accuracy. The experimental results demonstrate that L-SNR-YOLO achieves better detection performance on RSIs than several existing advanced methods.
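
The nonlocal-means concept behind the EFE block, re-estimating each position's feature as a similarity-weighted aggregate over all positions, is commonly written as a non-local block. The PyTorch sketch below shows that generic form under assumed embedding sizes; it is illustrative, not the paper's exact EFE design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalEnhance(nn.Module):
    """Sketch of nonlocal-means-style feature enhancement: aggregate features from all
    spatial positions, weighted by embedded similarity, and add them back residually."""
    def __init__(self, c):
        super().__init__()
        self.theta = nn.Conv2d(c, c // 2, 1)   # query embedding
        self.phi = nn.Conv2d(c, c // 2, 1)     # key embedding
        self.g = nn.Conv2d(c, c // 2, 1)       # value embedding
        self.out = nn.Conv2d(c // 2, c, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)       # (B, HW, C/2)
        k = self.phi(x).flatten(2)                         # (B, C/2, HW)
        v = self.g(x).flatten(2).transpose(1, 2)           # (B, HW, C/2)
        attn = F.softmax(q @ k / (c // 2) ** 0.5, dim=-1)  # pairwise similarity weights
        y = (attn @ v).transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.out(y)                             # residual enhancement

enhanced = NonLocalEnhance(64)(torch.rand(1, 64, 40, 40))
```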

Technical Note
A Method for Detecting Feature-Sparse Regions and Matching Enhancement
Remote Sens. 2022, 14(24), 6214; https://doi.org/10.3390/rs14246214 - 8 Dec 2022
Abstract
Image matching is a key research issue in the intelligent processing of remote sensing images. Matching unmanned aerial vehicle (UAV) imagery with satellite imagery is very difficult owing to large temporal or appearance differences in ground features between the two image types, as well as the large number of sparsely textured areas. To tackle this problem, a feature-sparse region detection and matching enhancement algorithm (SD-ME) is proposed in this study. First, the SuperGlue algorithm is used to match the two images initially, and feature-sparse regions are detected with the help of the image features and the initial matching results, with the detected regions stored one by one in a linked list. Then, in order of storage, features are re-extracted in each feature-sparse region individually, and an adaptive threshold feature screening algorithm is proposed to filter the re-extracted features, retaining only high-confidence features in the region and improving the reliability of the matching enhancement results. Finally, high-scoring local features re-extracted in the feature-sparse regions are aggregated and input to the SuperGlue network for matching, yielding reliable matching enhancement results. In the experiments, four pairs of UAV and satellite images that were difficult to match were selected, and the SD-ME algorithm was compared with the SIFT, ContextDesc, and SuperGlue algorithms. The results reveal that SD-ME is far superior to the other algorithms in the number of correct matching points, the accuracy of matching points, and the uniformity of their distribution; the number of correctly matched points in each image pair increased by an average of 95.52% compared to SuperGlue. The SD-ME algorithm can effectively improve matching quality between UAV and satellite imagery and has practical value in image registration and change detection.
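
One way to picture the adaptive threshold feature screening step: within each detected feature-sparse region, keep only the re-extracted keypoints whose detector scores clear a statistic of that region's own score distribution. The NumPy sketch below uses an assumed mean-plus-k-standard-deviations rule purely for illustration; the paper's actual criterion may differ.

```python
import numpy as np

def screen_features(keypoints, scores, k=0.5):
    """Sketch of adaptive-threshold screening inside one feature-sparse region:
    keep keypoints whose detector score exceeds mean + k*std of the region's scores."""
    scores = np.asarray(scores)
    threshold = scores.mean() + k * scores.std()   # region-adaptive, not a fixed global value
    keep = scores > threshold
    return [kp for kp, m in zip(keypoints, keep) if m], scores[keep]

# Hypothetical re-extracted keypoints (x, y) and confidence scores in one sparse region.
kps = [(10, 12), (40, 8), (22, 30), (5, 5)]
kept, kept_scores = screen_features(kps, [0.30, 0.85, 0.55, 0.10])
```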
