Artificial Intelligence Algorithm for Remote Sensing Imagery Processing II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 May 2023) | Viewed by 34335

Special Issue Editors


Guest Editor
The National Subsea Centre, Robert Gordon University, Aberdeen AB21 0BH, UK
Interests: remote sensing; image processing; hyperspectral image processing; pattern recognition

Guest Editor
College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
Interests: deep learning; remote sensing image processing and analysis

Special Issue Information

Dear Colleagues,

Remote sensing is an important technical means by which we perceive the world, and multimodal remote sensing is at the forefront of current research. With the rapid development of artificial intelligence, many new remote sensing image processing methods and algorithms have been proposed. These rapid advances have also promoted the application of such algorithms and techniques to problems in many related fields, such as classification, segmentation and clustering, and target detection. This Special Issue aims to report the latest advances and trends in artificial intelligence algorithms for remote sensing imagery processing. Papers on both theoretical methods and applied techniques, as well as contributions presenting new, advanced methodologies for relevant remote sensing scenarios, are welcome. We look forward to receiving your contributions.

Prof. Dr. Chunhui Zhao
Prof. Dr. Jinchang Ren
Prof. Dr. Leyuan Fang
Prof. Dr. Weiwei Sun
Dr. Shou Feng
Dr. Nan Su
Dr. Yiming Yan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • machine learning and deep learning for remote sensing
  • optical/multispectral/hyperspectral image processing
  • LiDAR
  • SAR
  • target detection, anomaly detection, and change detection
  • semantic segmentation and classification
  • object re-identification using cross-domain/cross-dimensional images
  • object 3D modeling and mesh optimization
  • applications in remote sensing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)


Research


20 pages, 9204 KiB  
Article
A Gated Content-Oriented Residual Dense Network for Hyperspectral Image Super-Resolution
by Jing Hu, Tingting Li, Minghua Zhao, Fei Wang and Jiawei Ning
Remote Sens. 2023, 15(13), 3378; https://doi.org/10.3390/rs15133378 - 2 Jul 2023
Cited by 1 | Viewed by 1305
Abstract
Limited by the existing imagery sensors, a hyperspectral image (HSI) is characterized by its high spectral resolution but low spatial resolution. HSI super-resolution (SR) aims to enhance the spatial resolution of the HSIs without modifying the equipment and has become a hot issue for HSI processing. In this paper, inspired by two important observations, a gated content-oriented residual dense network (GCoRDN) is designed for the HSI SR. To be specific, based on the observation that the structure and texture exhibit different sensitivities to the spatial degradation, a content-oriented network with two branches is designed. Meanwhile, a weight-sharing strategy is merged in the network to preserve the consistency in the structure and the texture. In addition, based on the observation of the super-resolved results, a gating mechanism is applied as a form of post-processing to further enhance the SR performance. Experimental results and data analysis on both ground-based HSIs and airborne HSIs have demonstrated the effectiveness of the proposed method. Full article
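As a rough illustration of the gated, two-branch idea described in this abstract, the sketch below builds structure- and texture-oriented branches on a shared stem and fuses them with a learned gate before upsampling. All layer sizes and names are assumptions for demonstration, not the authors' GCoRDN implementation.

```python
# Minimal sketch: two content-oriented branches sharing a stem, fused by a learned gate.
import torch
import torch.nn as nn

class TwoBranchGatedSR(nn.Module):
    def __init__(self, bands=31, feats=64, scale=2):
        super().__init__()
        self.shared = nn.Conv2d(bands, feats, 3, padding=1)   # weight-sharing stem
        self.structure = nn.Sequential(nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU())
        self.texture = nn.Sequential(nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU())
        self.gate = nn.Sequential(nn.Conv2d(2 * feats, feats, 1), nn.Sigmoid())
        self.up = nn.Sequential(nn.Conv2d(feats, bands * scale * scale, 3, padding=1),
                                nn.PixelShuffle(scale))

    def forward(self, lr_hsi):
        f = self.shared(lr_hsi)                     # shared features feed both branches
        s, t = self.structure(f), self.texture(f)   # structure- and texture-oriented paths
        g = self.gate(torch.cat([s, t], dim=1))     # gate decides per-pixel mixing
        return self.up(g * s + (1 - g) * t)         # gated fusion, then upsampling

x = torch.randn(1, 31, 32, 32)
print(TwoBranchGatedSR()(x).shape)  # torch.Size([1, 31, 64, 64])
```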

24 pages, 8002 KiB  
Article
Automatic ISAR Ship Detection Using Triangle-Points Affine Transform Reconstruction Algorithm
by Xinfei Jin, Fulin Su, Hongxu Li, Zihan Xu and Jie Deng
Remote Sens. 2023, 15(10), 2507; https://doi.org/10.3390/rs15102507 - 10 May 2023
Cited by 4 | Viewed by 2065
Abstract
With the capability of capturing a target’s two-dimensional information, Inverse Synthetic Aperture Radar (ISAR) imaging is widely used in Radar Automatic Target Recognition. However, changes in the ship target’s attitude can lead to the scatterers’ rotation, occlusion, and angle glint, reducing the accuracy of ISAR image recognition. To solve this problem, we propose a Triangle Preserving level-set-assisted Triangle-Points Affine Transform Reconstruction (TP-TATR) method for ISAR ship target recognition. Firstly, three geometric points are extracted as initial information from the preprocessed ISAR images based on the ship features. Combined with these points, the Triangle Preserving level-set (TP) method robustly extracts the fitting triangle of each target based on the intrinsic structure of the ship. Based on the extracted triangle, the TP-TATR adjusts all the ship targets from the training and test data to the same attitude, thereby alleviating the attitude sensitivity. Finally, we create templates by averaging the adjusted training data and match the test data with the templates for recognition. Experiments on simulated and measured data indicate that the accuracies of the TP-TATR method are 87.70% and 90.03%, respectively, which are higher than those of the comparison algorithms, with a statistically significant difference. These results demonstrate the effectiveness and robustness of the proposed TP-TATR method. Full article
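The attitude-normalization step relies on an affine transform defined by three point correspondences. A minimal NumPy sketch of that geometric step (with made-up coordinates, not the paper's TP-TATR pipeline) is:

```python
# Sketch: align a ship's fitted triangle to a canonical triangle with an affine transform
# estimated from the three vertex pairs (illustrative only).
import numpy as np

def affine_from_triangles(src_pts, dst_pts):
    """Solve A (2x3) such that dst ~= A @ [x, y, 1]^T for the three vertex pairs."""
    src = np.hstack([np.asarray(src_pts, float), np.ones((3, 1))])  # 3x3
    dst = np.asarray(dst_pts, float)                                # 3x2
    return np.linalg.solve(src, dst).T                              # 2x3 affine matrix

def warp_points(A, pts):
    pts = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    return pts @ A.T

# Triangle extracted from an ISAR image vs. a canonical template attitude (made-up numbers).
src = [(10.0, 40.0), (60.0, 35.0), (30.0, 5.0)]
dst = [(0.0, 0.0), (50.0, 0.0), (25.0, -30.0)]
A = affine_from_triangles(src, dst)
print(np.round(warp_points(A, src), 3))  # matches dst up to floating-point error
```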

18 pages, 4759 KiB  
Article
Multi-Scale Ship Detection Algorithm Based on YOLOv7 for Complex Scene SAR Images
by Zhuo Chen, Chang Liu, V. F. Filaretov and D. A. Yukhimets
Remote Sens. 2023, 15(8), 2071; https://doi.org/10.3390/rs15082071 - 14 Apr 2023
Cited by 59 | Viewed by 6876
Abstract
Recently, deep learning techniques have been extensively used to detect ships in synthetic aperture radar (SAR) images. The majority of modern algorithms can achieve successful ship detection outcomes when working with multiple-scale ships on a large sea surface. However, there are still issues, such as missed detections and incorrect identifications, when performing multi-scale ship object detection in SAR images of complex scenes. To solve these problems, this paper proposes a multi-scale ship detection model for complex scenes based on YOLOv7, called CSD-YOLO. First, this paper proposes an SAS-FPN module that combines atrous spatial pyramid pooling and shuffle attention, allowing the model to focus on important information and ignore irrelevant information, reduce the feature loss of small ships, and simultaneously fuse the feature maps of ship targets on various SAR image scales, thereby improving detection accuracy and the model’s capacity to detect objects at several scales. The model’s optimization is then improved with the aid of the SIoU loss function. Finally, thorough tests on the HRSID and SSDD datasets are presented to support our methodology. CSD-YOLO achieves better detection performance than the baseline YOLOv7, with a 98.01% detection accuracy, a 96.18% recall, and a mean average precision (mAP) of 98.60% on SSDD. In addition, in comparative experiments with other deep learning-based methods, CSD-YOLO still performs better in terms of overall performance. Full article
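The SAS-FPN module combines atrous spatial pyramid pooling with shuffle attention. The toy PyTorch block below shows one plausible arrangement of multi-rate atrous branches followed by a channel shuffle; the dilation rates and channel counts are assumptions, not the paper's configuration.

```python
# Minimal ASPP-style block with a channel-shuffle step (a sketch, not the SAS-FPN code).
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class ASPPShuffle(nn.Module):
    def __init__(self, in_ch=256, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch // len(rates), 3, padding=r, dilation=r) for r in rates])
        self.fuse = nn.Conv2d(out_ch, out_ch, 1)

    def forward(self, x):
        y = torch.cat([b(x) for b in self.branches], dim=1)  # multi-rate atrous branches
        y = channel_shuffle(y, groups=len(self.branches))    # mix information across branches
        return self.fuse(y)

print(ASPPShuffle()(torch.randn(1, 256, 40, 40)).shape)  # torch.Size([1, 256, 40, 40])
```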

27 pages, 42556 KiB  
Article
Space Target Material Identification Based on Graph Convolutional Neural Network
by Na Li, Chengeng Gong, Huijie Zhao and Yun Ma
Remote Sens. 2023, 15(7), 1937; https://doi.org/10.3390/rs15071937 - 4 Apr 2023
Cited by 4 | Viewed by 2728
Abstract
Under complex illumination conditions, the spectral data distributions of a given material appear inconsistent in the hyperspectral images of the space target, making it difficult to achieve accurate material identification using only spectral features and local spatial features. Aiming at this problem, a material identification method based on an improved graph convolutional neural network is proposed. Superpixel segmentation is conducted on the hyperspectral images to build the multiscale joint topological graph of the space target global structure. Based on this, topological graphs containing the global spatial features and spectral features of each pixel are generated, and the pixel neighborhoods containing the local spatial features and spectral features are collected to form material identification datasets that include both of these. Then, the graph convolutional neural network (GCN) and the three-dimensional convolutional neural network (3-D CNN) are combined into one model using strategies of addition, element-wise multiplication, or concatenation, and the model is trained by the datasets to fuse and learn the three features. For the simulated data and the measured data, the overall accuracy of the proposed method can be kept at 85–90%, and their kappa coefficients remain around 0.8. This proves that the proposed method can improve the material identification performance under complex illumination conditions with high accuracy and strong robustness. Full article
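The three fusion strategies mentioned above (addition, element-wise multiplication, concatenation) can be captured in a few lines. The sketch below uses placeholder feature dimensions and is only illustrative of how a graph-based stream and a 3-D CNN stream might be combined before classification.

```python
# Sketch of fusing two feature streams (e.g., a GCN branch and a 3-D CNN branch).
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    def __init__(self, dim=128, n_classes=8, mode="concat"):
        super().__init__()
        self.mode = mode
        in_dim = 2 * dim if mode == "concat" else dim
        self.classifier = nn.Linear(in_dim, n_classes)

    def forward(self, f_graph, f_cnn):
        if self.mode == "add":
            fused = f_graph + f_cnn          # additive fusion
        elif self.mode == "mul":
            fused = f_graph * f_cnn          # element-wise multiplicative fusion
        else:
            fused = torch.cat([f_graph, f_cnn], dim=-1)  # concatenation
        return self.classifier(fused)

f_g, f_c = torch.randn(16, 128), torch.randn(16, 128)
for m in ("add", "mul", "concat"):
    print(m, FeatureFusion(mode=m)(f_g, f_c).shape)
```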

28 pages, 5366 KiB  
Article
Adaptive-Attention Completing Network for Remote Sensing Image
by Wenli Huang, Ye Deng, Siqi Hui and Jinjun Wang
Remote Sens. 2023, 15(5), 1321; https://doi.org/10.3390/rs15051321 - 27 Feb 2023
Cited by 6 | Viewed by 2547
Abstract
The reconstruction of missing pixels is essential for remote sensing images, as they often suffer from problems such as covering, dead pixels, and scan line corrector (SLC)-off. Image inpainting techniques can solve these problems, as they can generate realistic content for the unknown regions of an image based on the known regions. Recently, convolutional neural network (CNN)-based inpainting methods have integrated the attention mechanism to improve inpainting performance, as they can capture long-range dependencies and adapt to inputs in a flexible manner. However, to obtain the attention map for each feature, they compute the similarities between the feature and the entire feature map, which may introduce noise from irrelevant features. To address this problem, we propose a novel adaptive attention (Ada-attention) that uses an offset position subnet to adaptively select the most relevant keys and values based on self-attention. This enables the attention to be focused on essential features and model more informative dependencies on the global range. Ada-attention first employs an offset subnet to predict offset position maps on the query feature map; then, it samples the most relevant features from the input feature map based on the offset position; next, it computes key and value maps for self-attention using the sampled features; finally, using the query, key and value maps, the self-attention outputs the reconstructed feature map. Based on Ada-attention, we customized a u-shaped adaptive-attention completing network (AACNet) to reconstruct missing regions. Experimental results on several digital remote sensing and natural image datasets, using two image inpainting models and two remote sensing image reconstruction approaches, demonstrate that the proposed AACNet achieves a good quantitative performance and good visual restoration results with regard to object integrity, texture/edge detail, and structural consistency. Ablation studies indicate that Ada-attention outperforms self-attention in terms of PSNR by 0.66%, SSIM by 0.74%, and MAE by 3.9%, and can focus on valuable global features using the adaptive offset subnet. Additionally, our approach has also been successfully applied to remove real clouds in remote sensing images, generating credible content for cloudy regions. Full article
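A simplified stand-in for the offset-based sampling idea is sketched below: an offset subnet predicts per-query sampling positions, keys and values are gathered at those positions with bilinear sampling, and standard attention follows. The shapes, the number of sampled positions, and all names are assumptions, not the AACNet implementation.

```python
# Rough sketch of offset-based key/value sampling followed by attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OffsetAttention(nn.Module):
    def __init__(self, dim=64, n_samples=16):
        super().__init__()
        self.q = nn.Conv2d(dim, dim, 1)
        self.kv = nn.Conv2d(dim, 2 * dim, 1)
        self.offset = nn.Conv2d(dim, 2 * n_samples, 1)  # per-query 2-D offsets
        self.n_samples = n_samples
        self.scale = dim ** -0.5

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)                 # (b, hw, c)
        offsets = self.offset(x).view(b, self.n_samples, 2, h, w)
        # Base sampling grid in [-1, 1], shifted by the predicted offsets.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        base = torch.stack([xs, ys], dim=0).to(x)                # (2, h, w)
        grid = (base + offsets).permute(0, 1, 3, 4, 2)           # (b, n, h, w, 2)
        kv = self.kv(x)                                          # (b, 2c, h, w)
        samples = torch.stack(
            [F.grid_sample(kv, grid[:, i], align_corners=True) for i in range(self.n_samples)],
            dim=2)                                               # (b, 2c, n, h, w)
        k, v = samples.split(c, dim=1)
        k = k.flatten(3).permute(0, 3, 2, 1)                     # (b, hw, n, c)
        v = v.flatten(3).permute(0, 3, 2, 1)
        attn = torch.softmax((q.unsqueeze(2) * k).sum(-1) * self.scale, dim=-1)  # (b, hw, n)
        out = (attn.unsqueeze(-1) * v).sum(2)                    # (b, hw, c)
        return out.transpose(1, 2).view(b, c, h, w)

print(OffsetAttention()(torch.randn(1, 64, 16, 16)).shape)  # torch.Size([1, 64, 16, 16])
```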

28 pages, 7900 KiB  
Article
Two-Branch Convolutional Neural Network with Polarized Full Attention for Hyperspectral Image Classification
by Haimiao Ge, Liguo Wang, Moqi Liu, Yuexia Zhu, Xiaoyu Zhao, Haizhu Pan and Yanzhong Liu
Remote Sens. 2023, 15(3), 848; https://doi.org/10.3390/rs15030848 - 2 Feb 2023
Cited by 17 | Viewed by 2974
Abstract
In recent years, convolutional neural networks (CNNs) have been introduced for pixel-wise hyperspectral image (HSI) classification tasks. However, some problems with CNNs are still insufficiently addressed, such as the receptive field problem, the small sample problem, and the feature fusion problem. To tackle these problems, we propose a two-branch convolutional neural network with a polarized full attention mechanism for HSI classification. In the proposed network, two-branch CNNs are implemented to efficiently extract the spectral and spatial features, respectively. The kernel sizes of the convolutional layers are simplified to reduce the complexity of the network, which makes the network easier to train and better suited to small-sample conditions. The one-shot connection technique is applied to improve the efficiency of feature extraction. An improved full attention block, named polarized full attention, is exploited to fuse the feature maps and provide global contextual information. Experimental results on several public HSI datasets confirm the effectiveness of the proposed network. Full article
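For readers unfamiliar with the two-branch layout, the minimal sketch below extracts a spectral feature from the centre spectrum and a spatial feature from the patch, then concatenates them for classification; it omits the polarized full attention block and uses placeholder sizes.

```python
# Illustrative two-branch layout: spectral branch (1-D convs over bands) + spatial branch
# (2-D convs over a patch). Sizes are placeholders, not the paper's configuration.
import torch
import torch.nn as nn

class TwoBranchHSI(nn.Module):
    def __init__(self, bands=103, n_classes=9):
        super().__init__()
        self.spectral = nn.Sequential(nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(),
                                      nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.spatial = nn.Sequential(nn.Conv2d(bands, 32, 3, padding=1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16 + 32, n_classes)

    def forward(self, patch_cube):
        b, c, h, w = patch_cube.shape
        center = patch_cube[:, :, h // 2, w // 2].unsqueeze(1)   # (b, 1, bands) centre spectrum
        return self.head(torch.cat([self.spectral(center), self.spatial(patch_cube)], dim=1))

print(TwoBranchHSI()(torch.randn(4, 103, 9, 9)).shape)  # torch.Size([4, 9])
```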

24 pages, 4897 KiB  
Article
Image Inpainting with Bilateral Convolution
by Wenli Huang, Ye Deng, Siqi Hui and Jinjun Wang
Remote Sens. 2022, 14(23), 6140; https://doi.org/10.3390/rs14236140 - 3 Dec 2022
Cited by 3 | Viewed by 3032
Abstract
Due to sensor malfunctions and poor atmospheric conditions, remote sensing images often miss important information/pixels, which affects downstream tasks, therefore requiring reconstruction. Current image reconstruction methods use deep convolutional neural networks to improve inpainting performances as they have a powerful modeling capability. However, deep convolutional networks learn different features with the same group of convolutional kernels, which restricts their ability to handle diverse image corruptions and often results in color discrepancy and blurriness in the recovered images. To mitigate this problem, in this paper, we propose an operator called Bilateral Convolution (BC) to adaptively preserve and propagate information from known regions to missing data regions. On the basis of vanilla convolution, the BC dynamically propagates more confident features, which weights the input features of a patch according to their spatial location and feature value. Furthermore, to capture different range dependencies, we designed a Multi-range Window Attention (MWA) module, in which the input feature is divided into multiple sizes of non-overlapped patches for several heads, and then these feature patches are processed by the window self-attention. With BC and MWA, we designed a bilateral convolution network for image inpainting. We conducted experiments on remote sensing datasets and several typical image inpainting datasets to verify the effectiveness and generalization of our network. The results show that our network adaptively captures features between known and unknown regions, generates appropriate content for various corrupted images, and has a competitive performance compared with state-of-the-art methods. Full article
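The "bilateral" weighting idea (combining spatial distance with feature-value similarity) can be illustrated with a toy NumPy example on a single window. This is a conceptual sketch of the weighting, not the BC layer itself.

```python
# Toy bilateral weighting over a local window: weights combine spatial distance and
# feature-value similarity to favour neighbours that resemble the centre value.
import numpy as np

def bilateral_weights(patch, sigma_s=1.0, sigma_r=0.5):
    k = patch.shape[0]
    c = k // 2
    yy, xx = np.mgrid[:k, :k]
    spatial = np.exp(-((yy - c) ** 2 + (xx - c) ** 2) / (2 * sigma_s ** 2))  # spatial term
    rng = np.exp(-((patch - patch[c, c]) ** 2) / (2 * sigma_r ** 2))         # value term
    w = spatial * rng
    return w / w.sum()

patch = np.array([[0.1, 0.2, 0.9],
                  [0.1, 0.2, 0.8],
                  [0.1, 0.3, 0.9]])
w = bilateral_weights(patch)
print(round(float((w * patch).sum()), 3))  # value propagated mostly from similar neighbours
```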

22 pages, 6172 KiB  
Article
Improved Central Attention Network-Based Tensor RX for Hyperspectral Anomaly Detection
by Lili Zhang, Jiachen Ma, Baohong Fu, Fang Lin, Yudan Sun and Fengpin Wang
Remote Sens. 2022, 14(22), 5865; https://doi.org/10.3390/rs14225865 - 19 Nov 2022
Viewed by 1436
Abstract
Recently, using spatial–spectral information for hyperspectral anomaly detection (AD) has received extensive attention. However, the test point and its neighborhood points are usually treated equally, without highlighting the test point, which is unreasonable. In this paper, an improved central attention network-based tensor RX (ICAN-TRX) is designed to extract hyperspectral anomaly targets. The ICAN-TRX algorithm consists of two parts, ICAN and TRX. In ICAN, a test tensor block, taken as the value tensor, is first reconstructed by a DBN to make the anomaly points more prominent. Then, in the reconstructed tensor block, the central tensor is used as a convolution kernel to perform a convolution operation with its tensor block. The resulting tensor, taken as the key tensor, is transformed into a weight matrix. Finally, after the correlation operation between the value tensor and the weight matrix, the new test point is obtained. In ICAN, the spectral information of a test point is emphasized, and the spatial relationships between the test point and its neighborhood points reflect their similarities. TRX is applied to the new HSI after ICAN, which allows more abundant spatial information to be used for AD. Five real hyperspectral datasets are selected to evaluate the performance of the proposed ICAN-TRX algorithm. The detection results demonstrate that ICAN-TRX achieves superior performance compared with seven other AD algorithms. Full article
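The central-attention step, in which the centre pixel's spectrum is correlated with its neighbourhood to build a weight map, can be sketched as follows (a conceptual NumPy illustration with toy data, not the ICAN-TRX code):

```python
# Sketch: correlate the centre pixel's spectrum with each neighbour in the window to build
# a weight map, then re-weight the window before anomaly detection.
import numpy as np

def central_attention(window):
    """window: (h, w, bands) local block of an HSI."""
    h, w, _ = window.shape
    center = window[h // 2, w // 2]
    weights = window.reshape(-1, window.shape[-1]) @ center   # correlation with centre spectrum
    weights = weights.reshape(h, w)
    weights /= np.abs(weights).max() + 1e-8                   # normalise for stability
    return window * weights[..., None]                        # emphasised block for the RX step

block = np.random.rand(5, 5, 50)
print(central_attention(block).shape)  # (5, 5, 50)
```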

24 pages, 7021 KiB  
Article
A Change Detection Method Based on Multi-Scale Adaptive Convolution Kernel Network and Multimodal Conditional Random Field for Multi-Temporal Multispectral Images
by Shou Feng, Yuanze Fan, Yingjie Tang, Hao Cheng, Chunhui Zhao, Yaoxuan Zhu and Chunhua Cheng
Remote Sens. 2022, 14(21), 5368; https://doi.org/10.3390/rs14215368 - 26 Oct 2022
Cited by 16 | Viewed by 2146
Abstract
Multispectral image change detection is an important application in the field of remote sensing. Multispectral images usually contain many complex scenes, such as ground objects with diverse scales and proportions, so the change detection task requires a feature extractor that excels at adaptive multi-scale feature learning. To address the above-mentioned problems, a multispectral image change detection method based on a multi-scale adaptive kernel network and a multimodal conditional random field (MSAK-Net-MCRF) is proposed. The multi-scale adaptive kernel network (MSAK-Net) extends the encoding path of the U-Net and designs a weight-sharing bilateral encoding path, which simultaneously extracts independent features of bi-temporal multispectral images without introducing additional parameters. A selective convolution kernel block (SCKB) that can adaptively assign weights is designed and embedded in the encoding path of MSAK-Net to extract multi-scale features in images. MSAK-Net retains the skip connections of the U-Net and embeds an upsampling module (UM) based on the attention mechanism in the decoding path, which gives the feature map a better expression of change information in both the channel dimension and the spatial dimension. Finally, the multimodal conditional random field (MCRF) is used to smooth the detection results of the MSAK-Net. Experimental results on two public multispectral datasets indicate the effectiveness and robustness of the proposed method when compared with other state-of-the-art methods. Full article
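The weight-sharing bilateral encoding path is essentially a Siamese encoder applied to both acquisition dates. The minimal sketch below shows that pattern with a simple feature difference, using placeholder channel counts and omitting the SCKB and MCRF components.

```python
# Sketch of a weight-sharing (Siamese) encoder for bi-temporal images: the same stem
# processes both dates, then the features are differenced for change cues.
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    def __init__(self, in_ch=4, feats=32):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_ch, feats, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU())

    def forward(self, t1, t2):
        f1, f2 = self.stem(t1), self.stem(t2)   # identical weights, no extra parameters
        return torch.abs(f1 - f2)               # simple change feature

enc = SiameseEncoder()
print(enc(torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64)).shape)  # (1, 32, 64, 64)
```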

20 pages, 9685 KiB  
Article
Gully Erosion Monitoring Based on Semi-Supervised Semantic Segmentation with Boundary-Guided Pseudo-Label Generation Strategy and Adaptive Loss Function
by Chunhui Zhao, Yi Shen, Nan Su, Yiming Yan and Yong Liu
Remote Sens. 2022, 14(20), 5110; https://doi.org/10.3390/rs14205110 - 13 Oct 2022
Cited by 5 | Viewed by 1805
Abstract
Gully erosion is a major threat to ecosystems, potentially leading to desertification, land degradation, and crop loss. Developing viable gully erosion prevention and remediation strategies requires regular monitoring of the gullies. Nevertheless, it is highly challenging to automatically obtain monitoring results for the gullies from the latest monitoring data by training on historical data acquired by different sensors at different times. To this end, this paper presents a novel semi-supervised semantic segmentation method with a boundary-guided pseudo-label generation strategy and an adaptive loss function. This method takes full advantage of the historical data with labels and the latest monitoring data without labels to obtain the latest monitoring results of the gullies. The boundary-guided pseudo-label generation strategy (BPGS), guided by the inherent boundary maps of real geographic objects, fuses multiple evidence data to generate reliable pseudo-labels. Additionally, we propose an adaptive loss function based on centroid similarity (CSIM) to further alleviate the impact of pseudo-label noise. To verify the proposed method, two datasets for gully erosion monitoring are constructed from satellite data acquired over northeastern China. Extensive experiments demonstrate that the proposed method is more appropriate for automatic gully erosion monitoring than four state-of-the-art methods, including supervised and semi-supervised methods. Full article
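One way to realize a centroid-similarity-based weighting of pseudo-labelled pixels is sketched below: each pixel's cross-entropy is scaled by the cosine similarity between its feature and its class centroid. This is an illustrative reading of the idea, not the paper's exact CSIM formulation.

```python
# Sketch: down-weight pseudo-labelled samples whose features sit far from their class centroid.
import torch
import torch.nn.functional as F

def centroid_weighted_ce(logits, pseudo_labels, features):
    """logits: (n, K), pseudo_labels: (n,), features: (n, d)."""
    n, _ = logits.shape
    weights = torch.zeros(n)
    for c in pseudo_labels.unique():
        mask = pseudo_labels == c
        centroid = features[mask].mean(0, keepdim=True)                     # class centroid
        weights[mask] = F.cosine_similarity(features[mask], centroid).clamp(min=0)
    ce = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (weights * ce).mean()

logits, feats = torch.randn(100, 2), torch.randn(100, 16)
labels = torch.randint(0, 2, (100,))
print(centroid_weighted_ce(logits, labels, feats))
```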

31 pages, 17920 KiB  
Article
A Novel Hybrid Attention-Driven Multistream Hierarchical Graph Embedding Network for Remote Sensing Object Detection
by Shu Tian, Lin Cao, Lihong Kang, Xiangwei Xing, Jing Tian, Kangning Du, Ke Sun, Chunzhuo Fan, Yuzhe Fu and Ye Zhang
Remote Sens. 2022, 14(19), 4951; https://doi.org/10.3390/rs14194951 - 4 Oct 2022
Cited by 2 | Viewed by 1978
Abstract
Multiclass geospatial object detection in high-spatial-resolution remote-sensing images (HSRIs) has recently attracted considerable attention in many remote-sensing applications as a fundamental task. However, the complexity and uncertainty of the spatial distribution among multiclass geospatial objects remain huge challenges for object detection in HSRIs. Most current remote-sensing object-detection approaches fall back on deep convolutional neural networks (CNNs). Nevertheless, most existing methods focus only on mining visual characteristics and lose sight of spatial or semantic relation discrimination, eventually degrading object-detection performance in HSRIs. To tackle these challenges, we propose a novel hybrid attention-driven multistream hierarchical graph embedding network (HA-MHGEN) to explore complementary spatial and semantic patterns for improving remote-sensing object-detection performance. Specifically, we first construct hierarchical spatial graphs for multiscale spatial relation representation. Semantic graphs are also constructed by integrating the word embeddings of object category labels into the graph nodes. Afterwards, we develop a self-attention-aware multiscale graph convolutional network (GCN) to derive stronger representations of intra- and interobject hierarchical spatial relations and contextual semantic relations, respectively. These two relation networks are followed by a novel cross-attention-driven spatial- and semantic-feature fusion module that utilizes a multihead attention mechanism to learn associations between diverse spatial and semantic correlations and endow them with a more powerful discrimination ability. With the collaborative learning of the three relation networks, the proposed HA-MHGEN grasps explicit and implicit relations from spatial and semantic patterns and boosts multiclass object-detection performance in HSRIs. Comprehensive experimental evaluations on three benchmarks, namely DOTA, DIOR, and NWPU VHR-10, demonstrate the effectiveness and superiority of our proposed method compared with other advanced remote-sensing object-detection methods. Full article
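The relation-reasoning components build on graph convolution over object nodes. A minimal normalized graph-convolution step over toy node features is sketched below; the hierarchical graphs, word embeddings, and attention mechanisms of HA-MHGEN are omitted.

```python
# Minimal graph-convolution step with a symmetrically normalised adjacency (toy values).
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    def __init__(self, in_dim=32, out_dim=32):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        adj_hat = adj + torch.eye(adj.size(0))                 # add self-loops
        deg_inv_sqrt = adj_hat.sum(1).pow(-0.5).diag()
        norm_adj = deg_inv_sqrt @ adj_hat @ deg_inv_sqrt       # symmetric normalisation
        return torch.relu(self.lin(norm_adj @ x))              # propagate + transform

nodes = torch.randn(5, 32)                                     # e.g., 5 detected objects
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.T) > 0).float()                              # make the graph undirected
print(SimpleGCNLayer()(nodes, adj).shape)                      # torch.Size([5, 32])
```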

Other


16 pages, 4639 KiB  
Technical Note
SG-Det: Shuffle-GhostNet-Based Detector for Real-Time Maritime Object Detection in UAV Images
by Lili Zhang, Ning Zhang, Rui Shi, Gaoxu Wang, Yi Xu and Zhe Chen
Remote Sens. 2023, 15(13), 3365; https://doi.org/10.3390/rs15133365 - 30 Jun 2023
Cited by 5 | Viewed by 1700
Abstract
Maritime search and rescue is a crucial component of the national emergency response system, which mainly relies on unmanned aerial vehicles (UAVs) to detect objects. Most traditional object detection methods focus on boosting the detection accuracy while neglecting the detection speed of the heavy model. However, improving the detection speed is essential, which can provide timely maritime search and rescue. To address the issues, we propose a lightweight object detector named Shuffle-GhostNet-based detector (SG-Det). First, we construct a lightweight backbone named Shuffle-GhostNet, which enhances the information flow between channel groups by redesigning the correlation group convolution and introducing the channel shuffle operation. Second, we propose an improved feature pyramid model, namely BiFPN-tiny, which has a lighter structure capable of reinforcing small object features. Furthermore, we incorporate the Atrous Spatial Pyramid Pooling module (ASPP) into the network, which employs atrous convolution with different sampling rates to obtain multi-scale information. Finally, we generate three sets of bounding boxes at different scales—large, medium, and small—to detect objects of different sizes. Compared with other lightweight detectors, SG-Det achieves better tradeoffs across performance metrics and enables real-time detection with an accuracy rate of over 90% for maritime objects, showing that it can better meet the actual requirements of maritime search and rescue. Full article
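Shuffle-GhostNet builds on Ghost-style feature generation, in which a small primary convolution is complemented by cheap depthwise operations. The sketch below shows that generic pattern with assumed channel counts, not the paper's Shuffle-GhostNet blocks.

```python
# Sketch of a Ghost-style block: a small primary convolution plus cheap depthwise
# operations generate the remaining "ghost" feature maps.
import torch
import torch.nn as nn

class GhostBlock(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, ratio=2):
        super().__init__()
        primary = out_ch // ratio
        self.primary = nn.Sequential(nn.Conv2d(in_ch, primary, 1), nn.ReLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(primary, out_ch - primary, 3, padding=1, groups=primary), nn.ReLU())

    def forward(self, x):
        y = self.primary(x)                      # a few "intrinsic" feature maps
        return torch.cat([y, self.cheap(y)], 1)  # plus cheap depthwise ghosts

print(GhostBlock()(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 64, 80, 80])
```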

15 pages, 10804 KiB  
Technical Note
Noised Phase Unwrapping Based on the Adaptive Window of Wigner Distribution
by Junqiu Chu, Xingling Liu, Haotong Ma, Xuegang Yu and Ge Ren
Remote Sens. 2022, 14(21), 5603; https://doi.org/10.3390/rs14215603 - 6 Nov 2022
Viewed by 1853
Abstract
A noised phase-unwrapping method is presented by using the Wigner distribution function to filter the phase noise and restore the gradient of the phase map. By using Poisson’s equation, the unwrapped phase map was obtained. Compared with the existing methods, the proposed method is theoretically simple, provides a more accurate representation, and can be implemented in light-field hardware devices, such as Shack-Hartmann sensors. Full article
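The final Poisson-equation step corresponds to classical least-squares phase unwrapping, which can be solved with a discrete cosine transform. The sketch below implements that standard step on a synthetic wrapped phase; it does not include the paper's Wigner-distribution filtering or adaptive window.

```python
# Least-squares phase unwrapping by solving Poisson's equation with a DCT
# (a standard textbook approach, not the authors' code).
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    return (p + np.pi) % (2 * np.pi) - np.pi

def unwrap_ls(psi):
    """psi: wrapped phase (radians). Returns the least-squares unwrapped phase."""
    dy = wrap(np.diff(psi, axis=0, append=psi[-1:, :]))   # wrapped row gradients
    dx = wrap(np.diff(psi, axis=1, append=psi[:, -1:]))   # wrapped column gradients
    rho = np.diff(dy, axis=0, prepend=0) + np.diff(dx, axis=1, prepend=0)  # divergence
    m, n = psi.shape
    yy, xx = np.mgrid[:m, :n]
    denom = 2 * (np.cos(np.pi * yy / m) + np.cos(np.pi * xx / n) - 2)
    denom[0, 0] = 1.0                                     # avoid division by zero (DC term)
    phi = idctn(dctn(rho, norm="ortho") / denom, norm="ortho")
    return phi - phi.mean()

true = np.fromfunction(lambda i, j: 0.02 * (i - 64) ** 2 / 64 + 0.1 * j, (128, 128))
est = unwrap_ls(wrap(true))
err = (est - est.mean()) - (true - true.mean())
print(float(np.abs(err).max()))  # ~0 for this smooth, residue-free example
```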
