Search Results (31)

Search Parameters:
Keywords = optical remote sensing maritime images

26 pages, 6668 KiB  
Article
Dark Ship Detection via Optical and SAR Collaboration: An Improved Multi-Feature Association Method Between Remote Sensing Images and AIS Data
by Fan Li, Kun Yu, Chao Yuan, Yichen Tian, Guang Yang, Kai Yin and Youguang Li
Remote Sens. 2025, 17(13), 2201; https://doi.org/10.3390/rs17132201 - 26 Jun 2025
Viewed by 612
Abstract
Dark ships, vessels deliberately disabling their AIS signals, constitute a grave maritime safety hazard, with detection efforts hindered by issues like over-reliance on AIS, inadequate surveillance coverage, and significant mismatch rates. This paper proposes an improved multi-feature association method that integrates satellite remote sensing and AIS data, with a focus on oriented bounding box course estimation, to improve the detection of dark ships and enhance maritime surveillance. Firstly, the oriented bounding box object detection model (YOLOv11n-OBB) is trained to break through the limitations of horizontal bounding box orientation representation. Secondly, by integrating position, dimensions (length and width), and course characteristics, we devise a joint cost function to evaluate the combined significance of multiple features. Subsequently, an advanced JVC global optimization algorithm is employed to ensure high-precision association in dense scenes. Finally, by integrating data from Gaofen-6 (optical) and Gaofen-3B (SAR) satellites, a day-and-night collaborative monitoring framework is constructed to address the blind spots of single-sensor monitoring during night-time or adverse weather conditions. Our results indicate that the detection model demonstrates a high average precision (AP50) of 0.986 on the optical dataset and 0.903 on the SAR dataset. The association accuracy of the multi-feature association algorithm is 91.74% in optical image and AIS data matching, and 91.33% in SAR image and AIS data matching. The association rate reaches 96.03% (optical) and 74.24% (SAR), respectively. This study provides an efficient technical tool for maritime safety regulation through multi-source data fusion and algorithm innovation. Full article
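The abstract does not give the paper's JVC implementation, feature weights, or gating threshold. As a rough sketch of the same idea, SciPy's `linear_sum_assignment` solves the identical globally optimal assignment over a joint position/size/course cost matrix; all weights, units, and the `max_cost` gate below are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def joint_cost(det, ais, w=(0.5, 0.3, 0.2)):
    """Combine position, size, and course differences into one cost.
    Weights and normalization scales are illustrative assumptions."""
    d_pos = np.hypot(det["x"] - ais["x"], det["y"] - ais["y"]) / 1000.0  # km
    d_size = abs(det["length"] - ais["length"]) / max(ais["length"], 1.0)
    d_course = abs((det["course"] - ais["course"] + 180) % 360 - 180) / 180.0
    return w[0] * d_pos + w[1] * d_size + w[2] * d_course

def associate(dets, ais_list, max_cost=1.0):
    cost = np.array([[joint_cost(d, a) for a in ais_list] for d in dets])
    rows, cols = linear_sum_assignment(cost)  # globally optimal, as JVC is
    matched = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
    # Detections with no AIS match within the gate are "dark ship" candidates.
    dark = [r for r in range(len(dets)) if r not in {m[0] for m in matched}]
    return matched, dark
```

Detections left unassigned (or gated out by `max_cost`) surface as candidate dark ships for further scrutiny.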

20 pages, 14434 KiB  
Article
Optimized Marine Target Detection in Remote Sensing Images with Attention Mechanism and Multi-Scale Feature Fusion
by Xiantao Jiang, Tianyi Liu, Tian Song and Qi Cen
Information 2025, 16(4), 332; https://doi.org/10.3390/info16040332 - 21 Apr 2025
Cited by 1 | Viewed by 463
Abstract
With the continuous growth of maritime activities and the shipping trade, the application of maritime target detection in remote sensing images has become increasingly important. However, existing detection methods face numerous challenges, such as small target localization, recognition of targets with large aspect ratios, and high computational demands. In this paper, we propose an improved target detection model, named YOLOv5-ASC, to address the challenges in maritime target detection. The proposed YOLOv5-ASC integrates three core components: an Attention-based Receptive Field Enhancement Module (ARFEM), an optimized SIoU loss function, and a Deformable Convolution Module (C3DCN). These components work together to enhance the model’s performance in detecting complex maritime targets by improving its ability to capture multi-scale features, optimize the localization process, and adapt to the large aspect ratios typical of maritime objects. Experimental results show that, compared to the original YOLOv5 model, YOLOv5-ASC achieves a 4.36 percentage point increase in mAP@0.5 and a 9.87 percentage point improvement in precision, while maintaining computational complexity within a reasonable range. The proposed method not only achieves significant performance improvements on the ShipRSImageNet dataset but also demonstrates strong potential for application in complex maritime remote sensing scenarios. Full article
(This article belongs to the Special Issue Computer Vision for Security Applications)

32 pages, 6751 KiB  
Article
SVIADF: Small Vessel Identification and Anomaly Detection Based on Wide-Area Remote Sensing Imagery and AIS Data Fusion
by Lihang Chen, Zhuhua Hu, Junfei Chen and Yifeng Sun
Remote Sens. 2025, 17(5), 868; https://doi.org/10.3390/rs17050868 - 28 Feb 2025
Cited by 2 | Viewed by 1242
Abstract
Small target ship detection and anomaly analysis play a pivotal role in ocean remote sensing technologies, offering critical capabilities for maritime surveillance, enhancing maritime safety, and improving traffic management. However, existing methodologies in the field of detection are predominantly based on deep learning models with complex network architectures, which may fail to accurately detect smaller targets. In the classification domain, most studies focus on synthetic aperture radar (SAR) images combined with Automatic Identification System (AIS) data, but these approaches have significant limitations: first, they often overlook further analysis of anomalies arising from mismatched data; second, there is a lack of research on small target ship classification using wide-area optical remote sensing imagery. In this paper, we develop SVIADF, a multi-source information fusion framework for small vessel identification and anomaly detection. The framework consists of two main steps: detection and classification. To address challenges in the detection domain, we introduce the YOLOv8x-CA-CFAR framework. In this approach, YOLOv8x is first utilized to detect suspicious objects and generate image patches, which are then subjected to secondary analysis using CA-CFAR. Experimental results demonstrate that this method achieves improvements in Recall and F1-score by 2.9% and 1.13%, respectively, compared to using YOLOv8x alone. By integrating structural and pixel-based approaches, this method effectively mitigates the limitations of traditional deep learning techniques in small target detection, providing more practical and reliable support for real-time maritime monitoring and situational assessment. In the classification domain, this study addresses two critical challenges. First, it investigates and resolves anomalies arising from mismatched data. Second, it introduces an unsupervised domain adaptation model, Multi-CDT, for heterogeneous multi-source data. 
This model effectively transfers knowledge from SAR–AIS data to optical remote sensing imagery, thereby enabling the development of a small target ship classification model tailored for optical imagery. Experimental results reveal that, compared to the CDTrans method, Multi-CDT not only retains a broader range of classification categories but also improves target domain accuracy by 0.32%. The model extracts more discriminative and robust features, making it well suited for complex and dynamic real-world scenarios. This study offers a novel perspective for future research on domain adaptation and its application in maritime scenarios. Full article
(This article belongs to the Section AI Remote Sensing)

23 pages, 7594 KiB  
Article
Spatiotemporal Point–Trace Matching Based on Multi-Dimensional Feature Fuzzy Similarity Model
by Yi Liu, Ruijie Wu, Wei Guo, Liang Huang, Kairui Li, Man Zhu and Pieter van Gelder
J. Mar. Sci. Eng. 2024, 12(10), 1883; https://doi.org/10.3390/jmse12101883 - 20 Oct 2024
Cited by 1 | Viewed by 990
Abstract
Identifying ships is essential for maritime situational awareness. Automatic identification system (AIS) data and remote sensing (RS) images provide information on ship movement and properties from different perspectives. This study develops an efficient spatiotemporal association approach that combines AIS data and RS images for point–track association. Ship detection and feature extraction from the RS images are performed using deep learning. The detected image characteristics and neighboring AIS data are compared using a multi-dimensional feature similarity model that considers similarities in space, time, course, and attributes. An efficient spatial–temporal association analysis of ships in RS images and AIS data is achieved using the interval type-2 fuzzy system (IT2FS) method. Finally, optical images with different resolutions and AIS records near the waters of Yokosuka Port and Kure are collected to test the proposed model. The results show that compared with the multi-factor fuzzy comprehensive decision-making method, the proposed method can achieve the best performance (F1 scores of 0.7302 and 0.9189, respectively, on GF1 and GF2 images) while maintaining a specific efficiency. This work can realize ship positioning and monitoring based on multi-source data and enhance maritime situational awareness. Full article
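The paper's interval type-2 fuzzy system is not reproducible from the abstract alone; a type-1 sketch with triangular memberships conveys the multi-dimensional similarity idea (space, time, course). The cut-off scales and weights below are assumptions for illustration only:

```python
def tri_membership(x, full=0.0, zero=1.0):
    """Similarity degree: 1.0 at/below `full`, falling linearly to 0 at `zero`."""
    if x <= full:
        return 1.0
    if x >= zero:
        return 0.0
    return (zero - x) / (zero - full)

def point_track_similarity(d_km, dt_min, dcourse_deg, w=(0.4, 0.3, 0.3)):
    """Weighted fuzzy similarity between a detected ship and an AIS track.
    Scales (2 km, 10 min, 45 deg) and weights are illustrative assumptions."""
    s_space = tri_membership(d_km, zero=2.0)
    s_time = tri_membership(dt_min, zero=10.0)
    s_course = tri_membership(dcourse_deg, zero=45.0)
    return w[0] * s_space + w[1] * s_time + w[2] * s_course
```

A detection/track pair scoring near 1.0 is a strong association candidate; pairs near 0.0 are rejected.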
(This article belongs to the Section Ocean Engineering)

25 pages, 10179 KiB  
Article
An Improved Physics-Based Dual-Band Model for Satellite-Derived Bathymetry Using SuperDove Imagery
by Chunlong He, Qigang Jiang and Peng Wang
Remote Sens. 2024, 16(20), 3801; https://doi.org/10.3390/rs16203801 - 12 Oct 2024
Cited by 2 | Viewed by 1406
Abstract
Shallow water bathymetry is critical for environmental monitoring and maritime security. Current widely used statistical models based on passive optical satellite remote sensing often rely on prior bathymetric data, limiting their application to regions lacking such information. In contrast, the physics-based dual-band log-linear analytical model (P-DLA) can estimate shallow water bathymetry without in situ measurements, offering significant potential. However, the quasi-analytical algorithm (QAA) used in the P-DLA is sensitive to non-ideal pixels, resulting in unstable bathymetry estimation. To address this issue and evaluate the potential of SuperDove imagery for bathymetry estimation in regions without prior bathymetric data, this study proposes an improved physics-based dual-band model (IPDB). The IPDB replaces the QAA with a spectral optimization algorithm that integrates deep and shallow water sample pixels to estimate diffuse attenuation coefficients for the blue and green bands. This allows for more accurate estimation of shallow water bathymetry. The IPDB was tested on SuperDove images of Dongdao Island, Yongxing Island, and Yongle Atoll. The results showed that SuperDove images are capable of estimating shallow water bathymetry in regions without prior bathymetric data. The IPDB achieved Root Mean Square Error (RMSE) values below 1.7 m and R2 values above 0.89 in all three study areas, indicating strong performance in bathymetric estimation. Notably, the IPDB outperformed the standard P-DLA model in accuracy. Furthermore, this study outlines four sampling principles that, when followed, ensure that variations in the spatial distribution of sampling pixels do not significantly impact model performance. This study also showed that the blue–green band combination is optimal for the analytical expression of the physics-based dual-band model. Full article
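The IPDB's own equations are not given in the abstract; the classic dual-band log-ratio model (Stumpf et al.) shows the general shape such blue/green models take, along with the RMSE metric the study reports. The coefficients `m1`, `m0`, and `n` below are illustrative, and would in practice be fit to reference depths:

```python
import numpy as np

def log_ratio_depth(r_blue, r_green, m1=20.0, m0=30.0, n=1000.0):
    """Classic dual-band log-ratio depth estimate. m1/m0 are tunable
    coefficients; n keeps both log arguments positive. Values are
    illustrative assumptions, not the IPDB's parameters."""
    return m1 * np.log(n * r_blue) / np.log(n * r_green) - m0

def rmse(pred, truth):
    """Root Mean Square Error, the accuracy metric used in the study."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(truth)) ** 2)))
```

Equal blue and green reflectances give a log ratio of 1, so the depth reduces to `m1 - m0`, which is how the two coefficients anchor the fit.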
(This article belongs to the Special Issue Advances in Remote Sensing of the Inland and Coastal Water Zones II)

19 pages, 8018 KiB  
Article
Characteristics of Yellow Sea Fog under the Influence of Eastern China Aerosol Plumes
by Jiakun Liang and Jennifer D. Small Griswold
Remote Sens. 2024, 16(13), 2262; https://doi.org/10.3390/rs16132262 - 21 Jun 2024
Cited by 1 | Viewed by 1272
Abstract
Sea fog is a societally relevant phenomenon that occurs under the influence of specific oceanic and atmospheric conditions including aerosol conditions. The Yellow Sea region in China regularly experiences sea fog events, of varying intensity, that impact coastal regions and maritime activities. The occurrence and structure of fog are impacted by the concentration of aerosols in the air where the fog forms. Along with industrial development, air pollution has become a serious environmental problem in Northeastern China. These higher pollution levels are confirmed by various satellite remote sensing instruments including the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite that observes aerosol and cloud properties. These observations show a clear influence of aerosol loading over the Yellow Sea region, which can impact regional sea fog. In this study, high-resolution data sets from MODIS Aqua L2 are used to investigate the relationships between cloud properties and aerosol features. Using a bi-variate comparison method, we find that, for most cases, larger values of COT (cloud optical thickness) are related to both a smaller DER (droplet effective radius) and higher CTH (cloud top height). However, in the cases where fog is thinner with many zero values in CTH, the larger COT is related to both a smaller DER and CTH. For fog cases where the aerosol type is dominated by smoke (e.g., confirmed fire activities in the East China Plain), the semi-direct effect is indicated and may play a role in determining fog structure such that a smaller DER corresponds with thinner fog and smaller COT values. Full article

21 pages, 6567 KiB  
Article
Identification and Positioning of Abnormal Maritime Targets Based on AIS and Remote-Sensing Image Fusion
by Xueyang Wang, Xin Song and Yong Zhao
Sensors 2024, 24(8), 2443; https://doi.org/10.3390/s24082443 - 11 Apr 2024
Cited by 5 | Viewed by 1987
Abstract
The identification of maritime targets plays a critical role in ensuring maritime safety and safeguarding against potential threats. While satellite remote-sensing imagery serves as the primary data source for monitoring maritime targets, it only provides positional and morphological characteristics without detailed identity information, presenting limitations as a sole data source. To address this issue, this paper proposes a method for enhancing maritime target identification and positioning accuracy through the fusion of Automatic Identification System (AIS) data and satellite remote-sensing imagery. The AIS utilizes radio communication to acquire multidimensional feature information describing targets, serving as an auxiliary data source to complement the limitations of image data and achieve maritime target identification. Additionally, the positional information provided by the AIS can serve as maritime control points to correct positioning errors and enhance accuracy. By utilizing data from the Jilin-1 Spectral-01 satellite imagery with a resolution of 5 m and AIS data, the feasibility of the proposed method is validated through experiments. Following preprocessing, maritime target fusion is achieved using a point-set matching algorithm based on positional features and a fuzzy comprehensive decision method incorporating attribute features. Subsequently, the successful fusion of target points is utilized for positioning error correction. Experimental results demonstrate a significant improvement in maritime target positioning accuracy compared to raw data, with over a 70% reduction in root mean square error and positioning errors controlled within 4 pixels, providing relatively accurate target positions that essentially meet practical requirements. Full article
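The positioning-correction step can be sketched in miniature: once image detections are fused with AIS records, the AIS positions act as control points, and a systematic geolocation offset can be estimated and removed. This mean-offset correction is a minimal stand-in for the paper's method, which the abstract does not fully specify:

```python
import numpy as np

def estimate_bias(img_pts, ais_pts):
    """Mean offset between image-derived and AIS positions of matched
    ships, treated as a systematic geolocation error (a simple stand-in
    for the paper's control-point correction)."""
    return np.mean(np.asarray(ais_pts, float) - np.asarray(img_pts, float), axis=0)

def correct(points, bias):
    """Apply the estimated offset to all image-derived positions."""
    return np.asarray(points, float) + bias
```

With several well-matched pairs, subtracting the mean offset is what drives the reported drop in root mean square error.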
(This article belongs to the Section Remote Sensors)

21 pages, 9955 KiB  
Article
A Recognition Model Incorporating Geometric Relationships of Ship Components
by Shengqin Ma, Wenzhi Wang, Zongxu Pan, Yuxin Hu, Guangyao Zhou and Qiantong Wang
Remote Sens. 2024, 16(1), 130; https://doi.org/10.3390/rs16010130 - 28 Dec 2023
Cited by 3 | Viewed by 1775
Abstract
Ship recognition with optical remote sensing images is currently widely used in fishery management, ship traffic surveillance, and maritime warfare. However, it currently faces two major challenges: recognizing rotated targets and achieving fine-grained recognition. To address these challenges, this paper presents a new model called Related-YOLO. This model utilizes the mechanisms of relational attention to stress positional relationships between the components of a ship, extracting key features more accurately. Furthermore, it introduces a hierarchical clustering algorithm to implement adaptive anchor boxes. To tackle the issue of detecting multiple targets at different scales, a small target detection head is added. Additionally, the model employs deformable convolution to extract the features of targets with diverse shapes. To evaluate the performance of the proposed model, a new dataset named FGWC-18 is established, specifically designed for fine-grained warship recognition. Experimental results demonstrate the excellent performance of the model on this dataset and two other public datasets, namely FGSC-23 and FGSCR-42. In summary, our model offers a new route to solve the challenging issues of detecting rotating targets and fine-grained recognition with remote sensing images, which provides a reliable foundation for the application of remote sensing images in a wide range of fields. Full article

24 pages, 18590 KiB  
Article
Heterogeneous Ship Data Classification with Spatial–Channel Attention with Bilinear Pooling Network
by Bole Wilfried Tienin, Guolong Cui, Roldan Mba Esidang, Yannick Abel Talla Nana and Eguer Zacarias Moniz Moreira
Remote Sens. 2023, 15(24), 5759; https://doi.org/10.3390/rs15245759 - 16 Dec 2023
Cited by 2 | Viewed by 1667
Abstract
The classification of ship images has become a significant area of research within the remote sensing community due to its potential applications in maritime security, traffic monitoring, and environmental protection. Traditional monitoring methods like the Automated Identification System (AIS) and the Constant False Alarm Rate (CFAR) have their limitations, such as challenges with sea clutter and the problem of ships turning off their transponders. Additionally, classifying ship images in remote sensing is a complex task due to the spatial arrangement of geospatial objects, complex backgrounds, and the resolution limitations of sensor platforms. To address these challenges, this paper introduces a novel approach that leverages a unique dataset termed Heterogeneous Ship data and a new technique called the Spatial–Channel Attention with Bilinear Pooling Network (SCABPNet). First, we introduce the Heterogeneous Ship data, which combines Synthetic Aperture Radar (SAR) and optical satellite imagery, to leverage the complementary features of the SAR and optical modalities, thereby providing a richer and more-diverse set of features for ship classification. Second, we designed a custom layer, called the Spatial–Channel Attention with Bilinear Pooling (SCABP) layer. This layer sequentially applies the spatial attention, channel attention, and bilinear pooling techniques to enhance the feature representation by focusing on extracting informative and discriminative features from input feature maps, then classify them. Finally, we integrated the SCABP layer into a deep neural network to create a novel model named the SCABPNet model, which is used to classify images in the proposed Heterogeneous Ship data. Our experiments showed that the SCABPNet model demonstrated superior performance, surpassing the results of several state-of-the-art deep learning models. SCABPNet achieved an accuracy of 97.67% on the proposed Heterogeneous Ship dataset during testing. 
This performance underscores SCABPNet’s capability to focus on ship-specific features while suppressing background noise and feature redundancy. We invite researchers to explore and build upon our work. Full article

17 pages, 1045 KiB  
Article
Ship Detection via Multi-Scale Deformation Modeling and Fine Region Highlight-Based Loss Function
by Chao Li, Jianming Hu, Dawei Wang, Hanfu Li and Zhile Wang
Remote Sens. 2023, 15(17), 4337; https://doi.org/10.3390/rs15174337 - 3 Sep 2023
Cited by 1 | Viewed by 1980
Abstract
Ship detection in optical remote sensing images plays a vital role in numerous civil and military applications, encompassing maritime rescue, port management and sea area surveillance. However, the multi-scale and deformation characteristics of ships in remote sensing images, as well as complex scene interferences such as varying degrees of clouds, obvious shadows, and complex port facilities, pose challenges for ship detection performance. To address these problems, we propose a novel ship detection method by combining multi-scale deformation modeling and fine region highlight-based loss function. First, a visual saliency extraction network based on multiple receptive field and deformable convolution is proposed, which employs multiple receptive fields to mine the difference between the target and the background, and accurately extracts the complete features of the target through deformable convolution, thus improving the ability to distinguish the target from the complex background. Then, a customized loss function for the fine target region highlight is employed, which comprehensively considers the brightness, contrast and structural characteristics of ship targets, thus improving the classification performance in complex scenes with interferences. The experimental results on a high-quality ship dataset indicate that our method realizes state-of-the-art performance compared to eleven considered detection models. Full article
(This article belongs to the Special Issue Deep Learning Techniques Applied in Remote Sensing)

27 pages, 21300 KiB  
Article
Mapping Pluvial Flood-Induced Damages with Multi-Sensor Optical Remote Sensing: A Transferable Approach
by Arnaud Cerbelaud, Gwendoline Blanchet, Laure Roupioz, Pascal Breil and Xavier Briottet
Remote Sens. 2023, 15(9), 2361; https://doi.org/10.3390/rs15092361 - 29 Apr 2023
Cited by 8 | Viewed by 3334
Abstract
Pluvial floods caused by extreme overland flow inland account for half of all flood damage claims each year along with fluvial floods. In order to increase confidence in pluvial flood susceptibility mapping, overland flow models need to be intensively evaluated using observations from past events. However, most remote-sensing-based flood detection techniques only focus on the identification of degradations and/or water pixels in the close vicinity of overflowing streams after heavy rainfall. Many occurrences of pluvial-flood-induced damages such as soil erosion, gullies, landslides and mudflows located further away from the stream are thus often unrevealed. To fill this gap, a transferable remote sensing fusion method called FuSVIPR, for Fusion of Sentinel-2 & Very high resolution Imagery for Pluvial Runoff, is developed to produce damage-detection maps. Based on very high spatial resolution optical imagery (from Pléiades satellites or airborne sensors) combined with 10 m change images from Sentinel-2 satellites, the Random Forest and U-net machine/deep learning techniques are separately trained and compared to locate pluvial flood footprints on the ground at 0.5 m spatial resolution following heavy weather events. In this work, three flash flood events in the Aude and Alpes-Maritimes departments in the South of France are investigated, covering over more than 160 km2 of rural and periurban areas between 2018 and 2020. Pluvial-flood-detection accuracies hover around 75% (with a minimum area detection ratio for annotated ground truths of 25%), and false-positive rates mostly below 2% are achieved on all three distinct events using a cross-site validation framework. FuSVIPR is then further evaluated on the latest devastating flash floods of April 2022 in the Durban area (South Africa), without additional training. 
Very good agreement with the impact maps produced in the context of the International Charter “Space and Major Disasters” are reached with similar performance figures. These results emphasize the high generalization capability of this method to locate pluvial floods at any time of the year and over diverse regions worldwide using a very high spatial resolution visible product and two Sentinel-2 images. The resulting impact maps have high potential for helping thorough evaluation and improvement of surface water inundation models and boosting extreme precipitation downscaling at a very high spatial resolution. Full article
(This article belongs to the Special Issue Remote Sensing of Floods: Progress, Challenges and Opportunities)

18 pages, 1532 KiB  
Article
Ship Classification in SAR Imagery by Shallow CNN Pre-Trained on Task-Specific Dataset with Feature Refinement
by Haitao Lang, Ruifu Wang, Shaoying Zheng, Siwen Wu and Jialu Li
Remote Sens. 2022, 14(23), 5986; https://doi.org/10.3390/rs14235986 - 25 Nov 2022
Cited by 7 | Viewed by 3495
Abstract
Ship classification based on high-resolution synthetic aperture radar (SAR) imagery plays an increasingly important role in various maritime affairs, such as marine transportation management, maritime emergency rescue, marine pollution prevention and control, marine security situational awareness, and so on. The technology of deep learning, especially the convolutional neural network (CNN), has shown excellent performance on ship classification in SAR images. Nevertheless, it still has some limitations in real-world applications that need to be taken seriously by researchers. One is the insufficient number of SAR ship training samples, which limits the learning of a satisfactory CNN, and the other is the limited information that SAR images can provide (compared with natural images), which limits the extraction of discriminative features. To alleviate the limitation caused by insufficient training data, one widely adopted strategy is to pre-train CNNs on a generic dataset with massive labeled samples (such as ImageNet) and fine-tune the pre-trained network on the target dataset (i.e., a SAR dataset) with a small number of training samples. However, recent studies have shown that due to the different imaging mechanisms between SAR and natural images, it is hard to guarantee that the pre-trained CNNs (even if they perform extremely well on ImageNet) can be effectively fine-tuned on a SAR dataset. On the other hand, to extract the most discriminative ship representation features from SAR images, the existing methods have carried out fruitful research on network architecture design, attention mechanism embedding, feature fusion, etc. Although these efforts improve the performance of SAR ship classification to some extent, they are usually based on more complex network architectures and higher-dimensional features, accompanied by higher storage and time costs.
Through the analysis of SAR image characteristics and CNN feature extraction mechanism, this study puts forward three hypotheses: (1) Pre-training CNN on a task-specific dataset may be more effective than that on a generic dataset; (2) a shallow CNN may be more suitable for SAR image feature extraction than a deep one; and (3) the deep features extracted by CNNs can be further refined to improve the feature discrimination ability. To validate these hypotheses, we propose to learn a shallow CNN which is pre-trained on a task-specific dataset, i.e., the optical remote sensing ship dataset (ORS) instead of on the widely adopted ImageNet dataset. For comparison purposes, we designed 28 CNN architectures by changing the arrangement of the CNN components, the size of convolutional filters, and pooling formulations based on VGGNet models. To further reduce redundancy and improve the discrimination ability of the deep features, we propose to refine deep features by active convolutional filter selection based on the coefficient of variation (COV) sorting criteria. Extensive experiments not only prove that the above hypotheses are valid but also prove that the shallow network learned by the proposed pre-training strategy and the feature refining method can achieve considerable ship classification performance in SAR images like the state-of-the-art (SOTA) methods. Full article
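The coefficient-of-variation (COV) feature refinement described above can be sketched as ranking deep-feature channels by std/mean over the training set and discarding the least variable ones. The `keep` ratio and the (n_samples, n_channels) feature layout are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def cov_select(features, keep=0.5):
    """Rank feature channels by coefficient of variation (std / mean) and
    keep the most discriminative fraction. `keep` is an illustrative ratio.
    features: (n_samples, n_channels) array of non-negative activations."""
    mean = features.mean(axis=0)
    cov = features.std(axis=0) / np.maximum(mean, 1e-8)
    order = np.argsort(cov)[::-1]             # highest COV first
    k = max(1, int(keep * features.shape[1]))
    idx = np.sort(order[:k])                  # restore original channel order
    return idx, features[:, idx]
```

Channels that barely vary across samples (COV near zero) carry little class information, so dropping them reduces feature dimensionality without sacrificing discrimination.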
(This article belongs to the Special Issue Remote Sensing for Maritime Monitoring and Vessel Identification)
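The COV-based feature refining idea can be sketched roughly as follows — a minimal NumPy illustration, assuming post-ReLU channel activations arranged as a (samples × channels) matrix; the function name and the simple top-k scheme are illustrative, not the paper's exact procedure:

```python
import numpy as np

def refine_by_cov(features: np.ndarray, keep: int) -> np.ndarray:
    """Keep the `keep` feature channels with the highest coefficient
    of variation (std / mean) computed across the sample axis.

    features: array of shape (num_samples, num_channels), assumed
    non-negative (e.g. post-ReLU activations).
    """
    mean = features.mean(axis=0)
    std = features.std(axis=0)
    # Guard against division by zero for dead (all-zero) channels.
    cov = np.where(mean > 0, std / np.maximum(mean, 1e-12), 0.0)
    # Indices of the `keep` most variable channels, treated here as
    # the most discriminative ones.
    selected = np.argsort(cov)[::-1][:keep]
    return features[:, np.sort(selected)]
```

Channels that barely vary across samples carry little class information, so sorting by COV and dropping the tail shrinks the feature dimension while retaining the channels most likely to separate ship classes.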
22 pages, 8117 KiB  
Article
Multi-Stage Feature Enhancement Pyramid Network for Detecting Objects in Optical Remote Sensing Images
by Kaihua Zhang and Haikuo Shen
Remote Sens. 2022, 14(3), 579; https://doi.org/10.3390/rs14030579 - 26 Jan 2022
Cited by 33 | Viewed by 3163
Abstract
The intelligent detection of objects in remote sensing images has gradually become a research hotspot worldwide, and optical remote sensing images are considered the most important because of the rich feature information they contain, such as shape, texture and color. Optical remote sensing image target detection is an important method for accomplishing tasks such as land use, urban planning, traffic guidance, military monitoring and maritime rescue. In this paper, a multi-stage feature pyramid network, namely the Multi-stage Feature Enhancement Pyramid Network (Multi-stage FEPN), is proposed, which can effectively solve the problems of the blurring of small-scale targets and large scale variations of targets detected in optical remote sensing images. The Content-Aware Feature Up-Sampling (CAFUS) and Feature Enhancement Module (FEM) used in the network effectively address the fusion of adjacent-stage feature maps. Compared with several representative frameworks, the Multi-stage FEPN performs better on a range of common detection metrics, such as model accuracy and detection accuracy. The mAP reaches 0.9124, and the top-1 detection accuracy reaches 0.921 on NWPU VHR-10. The results demonstrate that the Multi-stage FEPN provides a new solution for the intelligent detection of targets in optical remote sensing images. Full article
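The adjacent-stage fusion that CAFUS and the FEM address can be sketched in its simplest form — a NumPy illustration using plain nearest-neighbour upsampling as a stand-in for the paper's learned content-aware up-sampling; function names and the additive fusion are assumptions for illustration:

```python
import numpy as np

def upsample2x_nearest(x: np.ndarray) -> np.ndarray:
    """2x nearest-neighbour spatial upsampling of a (C, H, W) feature
    map — a crude stand-in for the learned Content-Aware Feature
    Up-Sampling (CAFUS) described in the abstract."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse_adjacent_stages(fine: np.ndarray, coarse: np.ndarray) -> np.ndarray:
    """Fuse two adjacent pyramid stages: bring the coarse (lower
    resolution) map up to the fine map's size, then add element-wise,
    as in a classic top-down feature pyramid pathway.

    fine: (C, H, W); coarse: (C, H // 2, W // 2).
    """
    return fine + upsample2x_nearest(coarse)
```

In a real pyramid network the upsampling kernel is content-dependent and the fused map passes through an enhancement module; the sketch only shows where the two stages meet.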
18 pages, 10791 KiB  
Article
High-Speed Lightweight Ship Detection Algorithm Based on YOLO-V4 for Three-Channels RGB SAR Image
by Jiahuan Jiang, Xiongjun Fu, Rui Qin, Xiaoyan Wang and Zhifeng Ma
Remote Sens. 2021, 13(10), 1909; https://doi.org/10.3390/rs13101909 - 13 May 2021
Cited by 109 | Viewed by 11683
Abstract
Synthetic Aperture Radar (SAR) has become an important technical means of marine monitoring in the field of remote sensing due to its all-day, all-weather capability. Ship monitoring in national territorial waters supports maritime law enforcement, maritime traffic control, and the maintenance of national maritime security, so ship detection has been a research hotspot. Since the shift from traditional detection methods to deep-learning-based methods, most research has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while transplanting optical-image detectors to SAR has often ignored the low signal-to-noise ratio, low resolution, single-channel format, and other characteristics arising from the SAR imaging principle. By constantly pursuing detection accuracy while neglecting detection speed and practical deployment, almost all such algorithms depend on powerful clustered desktop GPUs and cannot be deployed on the front line of marine monitoring to cope with changing realities. To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of the image information and the network's feature-extraction ability, built on the You Only Look Once version 4 (YOLO-V4) deep learning framework for the model architecture and training. The YOLO-V4-light network was tailored for real-time deployment, significantly reducing the model size, detection time, number of computational parameters, and memory consumption, and the network was refined for three-channel images to compensate for the accuracy loss caused by light-weighting.
The test experiments were completed entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The YOLO-V4-light ship detection algorithm proposed in this paper has great practical value in maritime safety monitoring and emergency rescue. Full article
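Turning a single-channel SAR amplitude image into a three-channel input for a detector like YOLO-V4 can be sketched as below — a NumPy illustration in which the three channel choices (raw amplitude, a 3×3 mean-filtered copy as a speckle-smoothing stand-in, and a horizontal gradient magnitude) are assumptions for illustration, not the paper's exact multi-channel fusion:

```python
import numpy as np

def sar_to_three_channels(amplitude: np.ndarray) -> np.ndarray:
    """Build a pseudo-RGB tensor from a single-channel SAR amplitude
    image so it can feed a three-channel detector.

    amplitude: (H, W) float array; returns (H, W, 3).
    """
    h, w = amplitude.shape
    padded = np.pad(amplitude, 1, mode="edge")
    # Crude 3x3 mean filter: average the nine shifted copies.
    smoothed = sum(
        padded[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ) / 9.0
    # Horizontal gradient magnitude to emphasise bright ship edges.
    grad = np.abs(np.diff(padded, axis=1))[1:-1, :-1]
    return np.stack([amplitude, smoothed, grad], axis=-1)
```

The point of the abstract's multi-channel idea is that complementary views of the same scene let the network extract richer features than a duplicated grayscale channel would.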
18 pages, 9546 KiB  
Article
Improved YOLOv3 Based on Attention Mechanism for Fast and Accurate Ship Detection in Optical Remote Sensing Images
by Liqiong Chen, Wenxuan Shi and Dexiang Deng
Remote Sens. 2021, 13(4), 660; https://doi.org/10.3390/rs13040660 - 11 Feb 2021
Cited by 106 | Viewed by 6595
Abstract
Ship detection is an important but challenging task in the field of computer vision, partly due to the minuscule ship objects in optical remote sensing images and interference from cloud occlusion and strong waves. Most current ship detection methods focus on boosting detection accuracy while neglecting detection speed. However, increasing ship detection speed is also indispensable, because it enables timely ocean rescue and maritime surveillance. To solve these problems, we propose an improved YOLOv3 (ImYOLOv3) based on an attention mechanism, aiming to achieve the best trade-off between detection accuracy and speed. First, to realize high-efficiency ship detection, we adopt the off-the-shelf YOLOv3 as our basic detection framework due to its fast speed. Second, to boost the performance of the original YOLOv3 on small ships, we design a novel and lightweight dilated attention module (DAM) to extract discriminative features for ship targets, which can be easily embedded into the basic YOLOv3. The integrated attention mechanism helps our model learn to suppress irrelevant regions while highlighting salient features useful for the ship detection task. Furthermore, we introduce a multi-class ship dataset (MSD) with explicitly supervised subclasses defined according to the scales and moving states of ships. Extensive experiments verify the effectiveness and robustness of ImYOLOv3, and show that our method can accurately detect ships at different scales against different backgrounds at real-time speed. Full article
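The mechanism behind a dilated attention module can be sketched in a single-channel NumPy toy — a 3×3 convolution with dilation spreads the receptive field, a sigmoid turns its response into a spatial attention map, and multiplying back onto the input suppresses irrelevant regions. The real DAM is a learned, multi-channel block inside YOLOv3; everything below is an illustrative simplification:

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def dilated_attention(feature: np.ndarray, kernel: np.ndarray,
                      dilation: int = 2) -> np.ndarray:
    """Single-channel sketch of dilated spatial attention.

    feature: (H, W) map; kernel: (3, 3) dilated-convolution weights.
    Returns the input reweighted by a sigmoid attention map.
    """
    h, w = feature.shape
    pad = dilation
    padded = np.pad(feature, pad, mode="constant")
    logits = np.zeros_like(feature)
    # 3x3 convolution with taps spaced `dilation` pixels apart,
    # enlarging the receptive field without extra parameters.
    for di in range(3):
        for dj in range(3):
            oi, oj = di * dilation, dj * dilation
            logits += kernel[di, dj] * padded[oi:oi + h, oj:oj + w]
    attention = sigmoid(logits)
    return feature * attention
```

The enlarged receptive field is what helps with small ships: context around a few bright pixels (wake, open water) decides whether they are boosted or suppressed.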