Remote Sensing Image Processing

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Remote Sensors".

Viewed by 19308

Editor


Dr. Gwanggil Jeon
Collection Editor
Department of Embedded Systems Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Republic of Korea
Interests: remote sensing; deep learning; artificial intelligence; image processing; signal processing

Topical Collection Information

Dear Colleagues,

Remote sensing image processing is a mature research area, and the techniques developed in the field enable many real-life applications of great societal value. For instance, urban monitoring, fire detection, and flood prediction can have a great impact on economic and environmental issues. To attain such objectives, remote sensing has become a multidisciplinary field of science that embraces physics, signal theory, computer science, electronics, and communications.

In recent decades, this area has attracted considerable research interest, and significant progress has been made. Many advances can be seen in image processing techniques for enhancement, analysis, and understanding, from intuitive approaches to machine learning. Nevertheless, many challenges remain in the remote sensing field, encouraging new efforts and developments to better understand remote sensing images via image processing techniques.

The aim of this collection is to gather new research and developments in the field. We invite original contributions so that current research trends can be presented in this collection.

Dr. Gwanggil Jeon
Collection Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (8 papers)

2023


23 pages, 5211 KiB  
Article
Efficient-Lightweight YOLO: Improving Small Object Detection in YOLO for Aerial Images
by Mengzi Hu, Ziyang Li, Jiong Yu, Xueqiang Wan, Haotian Tan and Zeyu Lin
Sensors 2023, 23(14), 6423; https://doi.org/10.3390/s23146423 - 15 Jul 2023
Cited by 11 | Viewed by 6534
Abstract
The most significant technical challenges of current aerial image object-detection tasks are the extremely low accuracy for detecting small objects that are densely distributed within a scene and the lack of semantic information. Moreover, existing detectors with large parameter scales are unsuitable for aerial image object-detection scenarios oriented toward low-end GPUs. To address this technical challenge, we propose efficient-lightweight You Only Look Once (EL-YOLO), an innovative model that overcomes the limitations of existing detectors and low-end GPU orientation. EL-YOLO surpasses the baseline models in three key areas. Firstly, we design and scrutinize three model architectures to intensify the model's focus on small objects and identify the most effective network structure. Secondly, we design efficient spatial pyramid pooling (ESPP) to augment the representation of small-object features in aerial images. Lastly, we introduce the alpha-complete intersection over union (α-CIoU) loss function to tackle the imbalance between positive and negative samples in aerial images. Our proposed EL-YOLO method demonstrates strong generalization and robustness for the small-object detection problem in aerial images. The experimental results show that, with the model parameters maintained below 10 M and the input image size unified at 640 × 640 pixels, the AP_S of EL-YOLOv5 reached 10.8% and 10.7%, an improvement of 1.9% and 2.2% over YOLOv5, on the two challenging aerial image datasets DIOR and VisDrone, respectively.
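The α-CIoU loss mentioned above augments CIoU with an exponent α that re-weights hard examples. A minimal pure-Python sketch for axis-aligned boxes given as (x1, y1, x2, y2), with α = 3 chosen as an illustrative default (the paper's exact formulation and hyperparameters may differ):

```python
import math

def alpha_ciou_loss(box_a, box_b, alpha=3.0):
    """Alpha-CIoU sketch: IoU, center-distance, and aspect-ratio terms,
    each raised to the power alpha, following the alpha-IoU idea."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection and union areas.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)

    # Squared distance between box centers.
    rho2 = ((ax1 + ax2) - (bx1 + bx2)) ** 2 / 4 + ((ay1 + ay2) - (by1 + by2)) ** 2 / 4
    # Squared diagonal of the smallest enclosing box.
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2

    # Aspect-ratio consistency term from CIoU.
    v = (4 / math.pi ** 2) * (
        math.atan((ax2 - ax1) / (ay2 - ay1)) - math.atan((bx2 - bx1) / (by2 - by1))
    ) ** 2
    beta = v / (1 - iou + v + 1e-9)

    return 1 - iou ** alpha + (rho2 / c2) ** alpha + (beta * v) ** alpha
```

Perfectly overlapping boxes give zero loss, and the loss grows as boxes drift apart, which is the behavior a bounding-box regression loss needs.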

16 pages, 5267 KiB  
Article
AGDF-Net: Attention-Gated and Direction-Field-Optimized Building Instance Extraction Network
by Weizhi Liu, Haixin Liu, Chao Liu, Junjie Kong and Can Zhang
Sensors 2023, 23(14), 6349; https://doi.org/10.3390/s23146349 - 12 Jul 2023
Cited by 1 | Viewed by 1059
Abstract
Building extraction from high-resolution remote sensing images has various applications, such as urban planning and population estimation. However, buildings have intraclass heterogeneity and interclass homogeneity in high-resolution remote sensing images with complex backgrounds, which makes the accurate extraction of building instances challenging and regular building boundaries difficult to maintain. In this paper, an attention-gated and direction-field-optimized building instance extraction network (AGDF-Net) is proposed. Two refinements are presented, including an Attention-Gated Feature Pyramid Network (AG-FPN) and a Direction Field Optimization Module (DFOM), which are used to improve information flow and optimize the mask, respectively. The AG-FPN promotes complementary semantic and detail information by measuring information importance to control the addition of low-level and high-level features. The DFOM predicts the pixel-level direction field of each instance and iteratively corrects the direction field based on the initial segmentation. Experimental results show that the proposed method outperforms the six state-of-the-art instance segmentation methods and three semantic segmentation methods. Specifically, AGDF-Net improves the objective-level metric AP and the pixel-level metric IoU by 1.1%~9.4% and 3.55%~5.06%.
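The gated addition that AG-FPN performs can be pictured with a toy sketch: a sigmoid gate, assumed here to come from some learned importance score (in the network it would be produced by a small convolution), controls how much low-level detail is mixed into high-level semantics. This is an illustration of the gating idea, not the paper's exact module:

```python
import math

def attention_gated_add(low, high, gate_logits):
    """Blend per-position low-level and high-level features with a
    sigmoid gate; larger logits favor the low-level (detail) feature."""
    fused = []
    for l, h, g in zip(low, high, gate_logits):
        w = 1.0 / (1.0 + math.exp(-g))   # gate in (0, 1)
        fused.append(w * l + (1.0 - w) * h)
    return fused
```

With a neutral gate (logit 0) the two features are averaged; a strongly positive gate passes the low-level feature almost unchanged.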

16 pages, 6577 KiB  
Article
LPDNet: A Lightweight Network for SAR Ship Detection Based on Multi-Level Laplacian Denoising
by Congxia Zhao, Xiongjun Fu, Jian Dong, Cheng Feng and Hao Chang
Sensors 2023, 23(13), 6084; https://doi.org/10.3390/s23136084 - 1 Jul 2023
Cited by 2 | Viewed by 1532
Abstract
Intelligent ship detection based on synthetic aperture radar (SAR) is vital in maritime situational awareness. Deep learning methods have great advantages in SAR ship detection. However, the methods do not strike a balance between lightweight design and accuracy. In this article, we propose an end-to-end lightweight SAR target detection algorithm, the multi-level Laplacian pyramid denoising network (LPDNet). Firstly, an intelligent denoising method based on the multi-level Laplacian transform is proposed. Through Convolutional Neural Network (CNN)-based threshold suppression, the denoising becomes adaptive to every SAR image via back-propagation, making the denoising processing supervised. Secondly, channel modeling is proposed to combine spatial-domain and frequency-domain information. Multi-dimensional information enhances the detection effect. Thirdly, the Convolutional Block Attention Module (CBAM) is introduced into the feature fusion module of the basic framework (Yolox-tiny) so that different weights are given to each pixel of the feature map to highlight the effective features. Experiments on SSDD and AIR SARShip-1.0 demonstrate that the proposed method achieves 97.14% AP at 24.68 FPS and 92.19% AP at 23.42 FPS, respectively, with only 5.1 M parameters, which verifies the accuracy, efficiency, and lightweight design of the proposed method.
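The Laplacian-pyramid-with-thresholding idea above can be sketched in one dimension: build detail layers, soft-threshold them, and reconstruct. In the paper the threshold is learned per image by a CNN; here it is a fixed scalar, and pair-averaging stands in for the pyramid filters, so this is only a structural sketch (signal length must be divisible by 2**levels):

```python
def laplacian_denoise(signal, threshold, levels=2):
    """1-D multi-level Laplacian denoising sketch: decompose into
    coarse base + detail layers, soft-threshold details, reconstruct."""
    def down(s):  # average adjacent pairs (coarse approximation)
        return [(s[i] + s[i + 1]) / 2 for i in range(0, len(s), 2)]

    def up(s):    # nearest-neighbour upsample back to the finer level
        out = []
        for v in s:
            out += [v, v]
        return out

    def soft(x, t):  # soft-thresholding shrinks small (noisy) details to 0
        return 0.0 if abs(x) <= t else (x - t if x > 0 else x + t)

    details, base = [], list(signal)
    for _ in range(levels):
        coarse = down(base)
        details.append([b - e for b, e in zip(base, up(coarse))])
        base = coarse

    # Reconstruct, re-adding the thresholded detail layers.
    for detail in reversed(details):
        base = [e + soft(d, threshold) for e, d in zip(up(base), detail)]
    return base
```

A zero threshold reproduces the input exactly (the pyramid is perfectly invertible), while a large threshold suppresses all detail and leaves only the coarse base.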

11 pages, 908 KiB  
Communication
CBIR-SAR System Using Stochastic Distance
by Alcilene Dalília Sousa, Pedro Henrique dos Santos Silva, Romuere Rodrigues Veloso Silva, Francisco Alixandre Àvila Rodrigues and Fatima Nelsizeuma Sombra Medeiros
Sensors 2023, 23(13), 6080; https://doi.org/10.3390/s23136080 - 1 Jul 2023
Viewed by 888
Abstract
This article proposes a system for Content-Based Image Retrieval (CBIR) using a stochastic distance for Synthetic Aperture Radar (SAR) images. The methodology consists of three essential steps for image retrieval. First, it estimates the roughness (α̂) and scale (γ̂) parameters of the G_I^0 distribution that models SAR data in intensity. The parameters of the model were estimated using Maximum Likelihood Estimation and the fast approach of the Log-Cumulants method. Second, using the triangular distance, CBIR-SAR evaluates the similarity between a query image and images in the database. The stochastic distance can identify the most similar regions according to the image features, which are the estimated parameters of the data model. Third, the performance of our proposal was evaluated by applying the Mean Average Precision (MAP) measure and considering clippings from three radar sensors, i.e., UAVSAR, OrbiSaR-2, and ALOS PALSAR. The CBIR-SAR results for synthetic images achieved the highest MAP value, retrieving extremely heterogeneous regions. Regarding the real SAR images, CBIR-SAR achieved MAP values above 0.833 for all polarization channels for image samples of forest (UAVSAR) and urban areas (OrbiSaR-2). Our results confirmed that the proposed method is sensitive to the degree of texture, and hence it relies on good estimates, which are the inputs to the stochastic distance for effective image retrieval.
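The retrieval step above can be sketched directly: the triangular distance between two non-negative feature vectors (here, estimated model parameters stand in for the image features) ranks database entries against a query. The feature vectors and ranking helper below are illustrative, not the paper's pipeline:

```python
def triangular_distance(p, q, eps=1e-12):
    """Triangular discrimination: sum of (p_i - q_i)^2 / (p_i + q_i)
    over non-negative feature vectors; zero iff the vectors coincide."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))

def retrieve(query, database, k=2):
    """Return indices of the k database entries closest to the query."""
    ranked = sorted(range(len(database)),
                    key=lambda i: triangular_distance(query, database[i]))
    return ranked[:k]
```

For a query parameter vector (1.0, 2.0), entries with nearly identical parameters rank ahead of a clearly dissimilar one.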

2022


25 pages, 7792 KiB  
Article
Image Enhancement of Maritime Infrared Targets Based on Scene Discrimination
by Yingqi Jiang, Lili Dong and Junke Liang
Sensors 2022, 22(15), 5873; https://doi.org/10.3390/s22155873 - 5 Aug 2022
Cited by 3 | Viewed by 1714
Abstract
Infrared image enhancement technology can effectively improve image quality and enhance the saliency of the target, and it is a critical component of marine target search and tracking systems. However, the imaging quality of maritime infrared images is easily affected by weather and sea conditions and suffers from low contrast and weak target contour information. At the same time, the target is disturbed by sea clutter of different intensities, so the characteristics of the target also differ and cannot be processed by a single algorithm. Aiming at these problems, the relationship between the directional texture features of the target and the roughness of the sea surface is analyzed in depth. According to the texture roughness of the waves, the image scene is adaptively divided into calm and rough sea surfaces. Through a Gabor filter at a specific frequency and the gradient-based target feature extraction operator proposed in this paper, clutter suppression and feature fusion strategies are set, and multi-scale fused target feature images for the two scene types are obtained and used as guide images for guided filtering. The original image is decomposed into a target layer and a background layer to extract the target features and avoid image distortion. The blurred background around the target contour is extracted by Gaussian filtering based on the potential target region, and the edge blur caused by the heat conduction of the target is eliminated. Finally, an enhanced image is obtained by fusing the target and background layers with appropriate weights. The experimental results show that, compared with current image enhancement methods, the method proposed in this paper can improve the clarity and contrast of images, enhance the detectability of targets in distress, remove sea surface clutter while retaining the natural environment features in the background, and provide more information for target detection and continuous tracking in maritime search and rescue.
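The decompose-then-refuse idea can be illustrated on a single scan line: split the signal into a smooth background layer and a residual target layer, then recombine them with chosen weights. A moving average stands in for guided filtering here, and the gain and weights are illustrative defaults, not the paper's values:

```python
def enhance(image, radius=1, gain=2.0, w_target=1.0, w_background=0.8):
    """1-D layer-decomposition sketch: background = local mean,
    target = residual; fuse the layers with weights, boosting the target."""
    n = len(image)
    background = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        background.append(sum(image[lo:hi]) / (hi - lo))   # smooth layer
    target = [v - b for v, b in zip(image, background)]    # residual layer
    return [w_background * b + w_target * gain * t
            for b, t in zip(background, target)]
```

With gain and both weights set to 1 the decomposition is perfectly invertible, while the defaults amplify a bright point target relative to its surroundings.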

21 pages, 6384 KiB  
Article
Lightweight Single Image Super-Resolution with Selective Channel Processing Network
by Hongyu Zhu, Hao Tang, Yaocong Hu, Huanjie Tao and Chao Xie
Sensors 2022, 22(15), 5586; https://doi.org/10.3390/s22155586 - 26 Jul 2022
Cited by 8 | Viewed by 1945
Abstract
With the development of deep learning, considerable progress has been made in image restoration. Notably, many state-of-the-art single image super-resolution (SR) methods have been proposed. However, most of them contain many parameters, which leads to a significant amount of computation in the inference phase. To make current SR networks more lightweight and resource-friendly, we present a convolutional neural network with the proposed selective channel processing strategy (SCPN). Specifically, the selective channel processing module (SCPM) is first designed to dynamically learn the significance of each channel in the feature map using a channel selection matrix in the training phase. Correspondingly, in the inference phase, only the essential channels indicated by the channel selection matrices need to be further processed. By doing so, we can significantly reduce the parameters and the computation required. Moreover, the differential channel attention (DCA) block is proposed, which takes into consideration the data distribution of the channels in feature maps to restore more high-frequency information. Extensive experiments are performed on the natural image super-resolution benchmarks (i.e., Set5, Set14, B100, Urban100, Manga109) and remote-sensing benchmarks (i.e., UCTest and RESISCTest), and our method achieves superior results to other state-of-the-art methods. Furthermore, our method keeps a slim size with fewer than 1 M parameters, which proves its superiority. Owing to the proposed SCPM and DCA, our SCPN model achieves a better trade-off between computation cost and performance in both general and remote-sensing SR applications, and our proposed method can be extended to other computer vision tasks for further research.
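The inference-time saving comes from processing only the channels that the learned selection matrix marks as essential. A toy sketch of that pruning step, with feature channels as plain lists and an assumed scalar selection score per channel (the real module learns a matrix during training):

```python
def select_channels(feature_map, selection_scores, threshold=0.5):
    """Keep only channels whose learned selection score clears the
    threshold; the rest would bypass further convolution at inference.
    `feature_map` is a list of channels, each a list of values."""
    keep = [i for i, s in enumerate(selection_scores) if s >= threshold]
    pruned = [feature_map[i] for i in keep]
    return keep, pruned
```

The fraction of channels kept directly controls the convolution cost of the subsequent layer, which is where the parameter and computation savings come from.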

18 pages, 6178 KiB  
Article
LAG: Layered Objects to Generate Better Anchors for Object Detection in Aerial Images
by Xueqiang Wan, Jiong Yu, Haotian Tan and Junjie Wang
Sensors 2022, 22(10), 3891; https://doi.org/10.3390/s22103891 - 20 May 2022
Cited by 6 | Viewed by 1844
Abstract
You Only Look Once (YOLO) series detectors are suitable for aerial image object detection because of their excellent real-time ability and performance. Their high performance depends heavily on the anchors generated by clustering the training set. However, the effectiveness of the general anchor generation algorithm is limited by the unique data distribution of aerial image datasets. Divergence in the distribution of the number of objects of different sizes can cause anchors to overfit some objects or be assigned to suboptimal layers, because the anchors of each layer are generated uniformly and are affected by the overall data distribution. In this paper, inspired by experiments under different anchor settings, we propose the Layered Anchor Generation (LAG) algorithm. In LAG, objects are layered by their diagonals, and the anchors of each layer are then generated by analyzing the diagonals and aspect ratios of the objects of the corresponding layer. In this way, the anchors of each layer can better match the detection range of that layer. Experimental results show that our algorithm generalizes well, significantly raising the performance of You Only Look Once version 3 (YOLOv3), You Only Look Once version 5 (YOLOv5), You Only Learn One Representation (YOLOR), and Cascade Regions with CNN features (Cascade R-CNN) on the Vision Meets Drone (VisDrone) dataset and the object DetectIon in Optical Remote sensing images (DIOR) dataset, and these improvements are cost-free.
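The layering step can be sketched as follows: ground-truth boxes are bucketed into detection layers by diagonal length, and each layer derives its own anchor from only its bucket's boxes. The diagonal boundaries are assumed inputs, and a per-layer mean stands in for the per-layer clustering the paper performs:

```python
import math

def layered_anchors(boxes, boundaries):
    """Bucket (w, h) boxes into len(boundaries)+1 layers by diagonal,
    then return one mean-(w, h) anchor per layer (None if a layer is
    empty). A sketch of the Layered Anchor Generation idea."""
    layers = [[] for _ in range(len(boundaries) + 1)]
    for w, h in boxes:
        diag = math.hypot(w, h)
        idx = sum(diag > b for b in boundaries)   # how many boundaries the diagonal exceeds
        layers[idx].append((w, h))
    anchors = []
    for layer in layers:
        if layer:
            anchors.append((sum(w for w, _ in layer) / len(layer),
                            sum(h for _, h in layer) / len(layer)))
        else:
            anchors.append(None)   # no objects fell into this layer
    return anchors
```

Because each anchor only sees the boxes whose diagonals fall in its layer's range, small dense objects no longer drag the statistics of the large-object layers, which is the imbalance the abstract describes.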

18 pages, 4044 KiB  
Article
On-Orbit Absolute Radiometric Calibration and Validation of ZY3-02 Satellite Multispectral Sensor
by Hongzhao Tang, Junfeng Xie, Xinming Tang, Wei Chen and Qi Li
Sensors 2022, 22(5), 2066; https://doi.org/10.3390/s22052066 - 7 Mar 2022
Cited by 10 | Viewed by 2458
Abstract
This study describes the on-orbit vicarious radiometric calibration of the multispectral imager (MUX) on the Chinese civilian high-resolution stereo mapping satellite ZY3-02. The calibration was based on gray-scale permanent artificial targets and multiple radiometric calibration tarpaulins (tarps), using a reflectance-based approach, between July and September 2016 at the Baotou calibration site in China. The calibration results reveal a good linear relationship between the DN and TOA radiances of ZY3-02 MUX. The uncertainty of this radiometric calibration was 4.33%, indicating that the radiometric coefficients of ZY3-02 MUX are reliable. A detailed discussion of the validation analysis comparing the different radiometric calibration coefficients is presented in this paper. To further validate the reliability of the three coefficients, the calibrated ZY3-02 MUX was compared with the Landsat-8 Operational Land Imager (OLI). The results also indicate that the radiometric characteristics of ZY3-02 MUX imagery are reliable and sufficiently accurate for quantitative applications.
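The "good linear relationship between DN and TOA radiances" amounts to fitting L = gain · DN + offset from the calibration targets: the DN values come from the imagery over each target, the TOA radiances from radiative transfer given the measured surface reflectances. A minimal least-squares sketch (the actual calibration workflow and data are, of course, far more involved):

```python
def fit_calibration(dn, radiance):
    """Ordinary least-squares fit of the linear radiometric model
    L = gain * DN + offset from paired (DN, TOA radiance) samples."""
    n = len(dn)
    mx = sum(dn) / n
    my = sum(radiance) / n
    gain = sum((x - mx) * (y - my) for x, y in zip(dn, radiance)) / \
           sum((x - mx) ** 2 for x in dn)
    offset = my - gain * mx
    return gain, offset
```

On perfectly linear synthetic data the fit recovers the generating gain and offset exactly, which is the sanity check one would run before applying the coefficients to imagery.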
