Article

Anomaly-Based Ship Detection Using SP Feature-Space Learning with False-Alarm Control in Sea-Surface SAR Images

1
Key Laboratory of Intelligent Computing and Signal Processing, Ministry of Education, Anhui University, Hefei 230601, China
2
Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei 230601, China
3
East China Institute of Photo-Electron ICs, Suzhou 215163, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(13), 3258; https://doi.org/10.3390/rs15133258
Submission received: 29 May 2023 / Revised: 18 June 2023 / Accepted: 22 June 2023 / Published: 24 June 2023
(This article belongs to the Special Issue SAR-Based Signal Processing and Target Recognition)

Abstract:
Synthetic aperture radar (SAR) can provide high-resolution and large-scale maritime monitoring, which is beneficial to ship detection. However, ship-detection performance is significantly affected by the complexity of the environment, such as uneven scattering of ship targets, speckle noise, and ship side lobes. In this paper, we present a novel anomaly-based ship-detection method using feature learning for superpixel (SP) processing cells. First, multi-feature extraction is carried out for each SP cell; to improve the ability to discriminate between ship targets and clutter, we use the boundary feature described by the Haar-like descriptor, the saliency texture feature described by the non-uniform local binary pattern (LBP), and the intensity attention contrast feature to construct a three-dimensional (3D) feature space. Besides feature extraction, the target classification decision is another key step in ship-detection processing; therefore, an improved clutter-only feature-learning (COFL) strategy with false-alarm control is designed. In the detection performance analyses, the public datasets HRSID and LS-SSDD-v1.0 are used to verify the method's effectiveness. Extensive experimental results show that the proposed method can significantly improve ship-detection performance, achieving a high detection rate and a low false-alarm rate in complex-background and multi-target marine environments.

1. Introduction

With the development of high-resolution synthetic aperture radar (SAR) technology, SAR has been widely applied in various fields, both military and civilian, because of its ability to operate in all weather and illumination conditions; in this respect, SAR has obvious advantages over optical and infrared sensors [1,2,3]. Therefore, the detection of ship targets in SAR images is an important application that has attracted a lot of attention and research in recent years.
The traditional pixel-level constant false-alarm rate (CFAR) algorithm is widely used. Scholars have successively put forward a series of CFAR methods, such as the cell-averaging CFAR (CA-CFAR), smallest-of CFAR (SO-CFAR), greatest-of CFAR (GO-CFAR), and ordered-statistic CFAR (OS-CFAR) methods [4,5]. CFAR detection methods mainly depend on clutter statistical modeling, i.e., exploiting the grayscale differences between ships and the clutter background to make the target-detection decision. Typical sea-clutter statistical models include the Rayleigh, Weibull, Log-normal, K, Gamma, and generalized Gamma distribution (GΓD) models [6,7,8,9,10,11,12]. Later, the advent of the kernel density estimation (KDE) approach significantly improved the goodness-of-fit (GoF) to the clutter background [13]. However, due to the complexity of sea-clutter scattering, precise statistical models are difficult to build, which degrades the performance of CFAR detectors based on statistical characteristics and raises the false-alarm rate. To adapt to different SAR scenes, researchers have selected the best clutter statistical model through GoF analyses of diverse candidate models, so that the target-detection rate remains high across various scenarios [14,15]. To improve target-detection performance and reduce the false-alarm rate, some scholars have made many improvements to the traditional pixel-level CFAR method and have achieved good detection results [16,17,18].
When utilizing the pixel-level CFAR method for ship target detection, the speckle noise is likely to cause false alarms. Through superpixel (SP) segmentation, the ship target can be regarded as one or more connected regions, which can not only better preserve the ship target contour but also reduce the effect of speckle noise. A new SP-level CFAR detector was designed by Pappas et al., which could not only reduce the false-alarm probability of ship target detection but also better maintain the ship shapes [19]. Moreover, to improve ship-detection accuracy, SP-level CFAR detectors are widely used in SAR ship-detection tasks [20,21,22]. Due to the influence of adjacent targets and side lobes of ships in multi-target environments, clutter modeling is inaccurate and the target-detection performance is poor. Thus, to advance the goodness-of-fit of sea clutter and enhance ship-detection capability in multi-target scenarios, the SP-level CFAR detection method was presented based on the truncated Gamma statistic [23]. An improved SP-level CFAR detector was designed by Li et al., who considered weighted information entropy (WIE) as an SP statistical feature and adopted the coarse-to-fine detection idea to achieve two-stage CFAR detection of ship targets [24]. At present, attention contrast enhancement theory has been widely applied to target-detection tasks, and the saliency detection of ship targets has a great potential [25,26,27,28,29]. The SP-based local contrast measure (SLCM) method has been presented to detect ships hidden in the strong noise background efficiently [30]. Lin et al. employed the SP-based fisher vector for feature extraction, which can describe the deep difference between target and background and improve target-detection ability in SAR images with a low signal-to-noise ratio (SNR) [31]. 
However, due to the complexity of sea-surface environments, the detection performance for small ship targets with weak scattering is significantly degraded when detection relies on traditional statistical characteristics. Therefore, it is necessary to further study saliency feature extraction in SAR images to achieve efficient detection of ship targets in complex environments.
With the rapid development of machine learning, researchers have applied machine-learning algorithms in the radar field to achieve radar signal processing. Ship-detection methods based on machine learning mainly include two types, namely the detection algorithms based on traditional machine learning, and deep learning. As a typical traditional machine-learning model for a binary classification task, the support vector machine (SVM) is widely utilized in ship target detection on the sea surface. From the perspective of the SVM training model, the problem of ship target detection can be regarded as a binary classification problem between ship targets and sea-clutter backgrounds. Therefore, ship-detection algorithms based on the feature extraction of SAR slice images and SVM binary classification have been proposed successively, which have wide applications and excellent detection effects. He et al. designed the SVM-based detection method using the constructed gray-level co-occurrence matrix (GLCM) texture feature samples [32]. However, because the detection performance for the SVM classifier depends on the extracted feature property, the detection rate is limited to only using GLCM texture features to distinguish ships and clutter. To improve ship-detection performance in non-uniform sea conditions, the idea of coarse-to-fine detection was adopted by Xiong et al., who combined the SVM classifier and the maximum entropy thresholding method to achieve ship detection [33]. Later, Li et al. utilized the Relief method to choose the optimal feature combination from many polarimetric rotation domain features which can distinguish candidate pixels between targets and clutter and then combined it with a common SVM classifier to achieve accurate binary classification [34]. Furthermore, since interference (such as noise and ship side lobes, etc.) 
in complex environments easily causes false alarms, suppressing false alarms is very important for ship target detection in SAR images. Yang et al. presented a false-alarm removal method using one-class SVM (OC-SVM) for ship target detection [35]. The parameter adjustment of the traditional SVM training model must be performed manually and depends largely on experience, which easily leads to unstable SVM classification performance. Therefore, researchers have made a great many improvements to conventional SVM. To reduce the false-alarm rate during docked-ship detection in complex near-shore port environments, Zou et al. utilized genetic operators to improve the traditional particle swarm optimization algorithm and adjusted the SVM parameters based on the optimized genetic operator particle swarm optimization (GOPSO) algorithm. Although the optimized GOPSO-SVM algorithm has higher classification accuracy and fewer false alarms, it still misses some detections [36]. In addition, an algorithm for ship detection based on simulated annealing by fuzzy matching-SVM (SAFM-SVM) was provided, which could adaptively optimize the detector parameters and complete self-adaptive feature screening [37].
With the development of deep learning in recent years, there are many ship-detection algorithms based on deep-learning networks. These algorithms are mainly divided into single-stage and two-stage detection algorithms. In terms of detection accuracy, two-stage methods are usually superior to single-stage. However, from the perspective of detection speed, the single-stage detection methods are faster. For the single-stage network structure, the typical You Only Look Once (YOLO) model is used for the ship target detection [38], and with the continuous improvement of network structure, scholars have presented a series of ship target detection methods based on improved YOLO algorithms, such as YOLOv3 [39], YOLOv5 [40], BiFA-YOLO [41] as well as some other improved YOLO models [42,43,44]. To improve the real-time performance of target detection in SAR images, based on the YOLOv2 model architecture, ref. [45] developed a new network architecture with fewer layers, namely YOLOv2-reduced. Its detection performance is similar to YOLOv2, but there is a significant improvement in detection speed. Region-based convolutional neural network (R-CNN) is the most basic two-stage network structure. Ship-detection algorithms based on faster R-CNN and many improved faster R-CNN have been proposed in recent years [46,47,48]. To further enhance the feature-extraction ability, Xia et al. concentrated mainly on the optimization of the backbone and neck parts of the ship-detection framework and designed the visual transformer ship-detection framework in SAR images based on contextual joint representation learning (CRTransSar) [49]. Zhou et al. provided the Doppler feature matrix fused with a multi-layer feature pyramid network (D-MFPN) which can effectively enhance the detecting capability of ship targets, especially for the moving ships [50]. 
Aiming at the problem of SAR image classification when only a few labeled data are available, a new framework to train a deep neural network for classifying SAR images without requiring a huge labeled training set was designed by Rostami et al. [51]. Considering the importance of SAR ship-detection speed in practical applications, Zhang et al. [52] designed a new grid convolutional neural network (G-CNN) architecture, which mainly includes a backbone convolutional neural network (B-CNN) and a detection convolutional neural network (D-CNN). This method achieves high-speed ship target detection for SAR images while maintaining detection accuracy in actual applications. To solve the problem of ship detection in complex inshore and offshore scenarios, a ship-detection method using the high-resolution ship-detection network (HR-SDNet) was provided by Wei et al., who fully utilized the feature maps of high-resolution and low-resolution convolutions to design a novel high-resolution feature pyramid network (HRFPN), which has better detection accuracy and robustness [53]. In addition, Liu et al. [54] proposed a framework for exploring multi-scale ship proposals, mainly consisting of two stages, hierarchical grouping and proposal scoring, which can effectively handle the large scale differences of ships in SAR images and thereby improve ship-detection performance. Meanwhile, the physical scattering mechanism of the ship target has been fused into network models to improve detection ability [55,56,57].
The motility of the sea surface results in complicated scattering characteristics that are related to the radar parameters, sea state, etc. Ship detection is a great challenge in high-resolution sea clutter, especially in high sea states. Detection methods based on traditional machine learning and deep learning rely on two implicit conditions: the training samples of clutter and ships are balanced, and the samples are diverse. Ship detection on the sea surface violates both conditions: the training samples of sea clutter can traverse the feature space in a short time, while ship-target samples are non-ergodic and difficult to obtain. Moreover, deep-learning methods require a large amount of data and a high computational cost in the learning stage (even though the detection stage is computationally cheap), which is unfavorable for fast implementation. Due to the sparsity of ship targets on the sea surface relative to sea clutter, ship-target samples can be regarded as outliers in the clutter sample space, and extracting ship targets from the clutter background can be regarded as an anomaly-based ship-detection problem in the sea-surface environment. The theory of anomaly detection has been widely applied to ship target detection tasks [58,59]. The feature space is critical for target-detection performance, and a saliency feature space based on SP cells should be further explored for single-channel SAR intensity images. Therefore, we study an anomaly-based ship-detection method for SPs with clutter-only feature learning. The main work of this paper is as follows:
  • To enhance the feature representation capability in the SP cell, we construct a three-dimensional (3D) feature space that contains the boundary feature described by Haar-like, texture feature described by non-uniform LBP, and intensity attention contrast feature.
  • The clutter-only feature-learning (COFL) model with false-alarm control is developed in the anomaly-based detection decision based on the established feature space.
  • We execute extensive experiments on SAR datasets collected from different satellites, and it is obvious that our proposed method has a state-of-the-art feature discriminative ability and good detection accuracy.
The remainder of this paper is organized as follows: Section 2 introduces the main contents of our ship target detection method in detail. Section 3 compares and analyzes our method with other benchmark methods, and proves the effectiveness of the proposed method. In Section 4, we summarize the conclusions.

2. Detection Methodology

Because strong noise points generally occupy a small number of bright pixels in complex clutter backgrounds, conventional pixel-level detection methods easily mistake strong noise points for target pixels, causing a great many false alarms. Moreover, the weak scattering characteristics of small ships are likely to reduce the ship-detection rate. Therefore, this paper proposes an anomaly-based ship target-detection algorithm based on SP cells using a clutter-only feature space. First, preprocessing is carried out, consisting of sea–land segmentation and SP segmentation. Second, we extract features from three aspects, namely the boundary feature, the texture feature, and the intensity attention contrast feature of the SP cell, to enhance the information representation ability. Finally, ship target detection is realized using the detection model obtained through 3D feature-space training. Figure 1 presents the specific flow of the proposed anomaly detection.

2.1. Preprocessing Operation

2.1.1. Sea–Land Segmentation

In sea–land scenes, the land will affect ship-detection performance, as shown in Figure 2a–c, so sea–land segmentation is a crucial step for ship detection. The maximum inter-class variance (OTSU) method is a non-parametric, unsupervised method proposed by Otsu in 1979, which automatically selects segmentation thresholds during image segmentation. Generally, the image is divided into foreground and background according to its gray-level features. The OTSU method obtains the best global threshold by maximizing the between-class variance of foreground and background, which is simple to calculate and maximizes the grayscale separability of the classes [60].
Therefore, to accurately detect ship targets in the marine environment of the SAR image and eliminate the interference caused by the land, we utilize the OTSU method to complete the effective segmentation between the sea surface and the non-sea surface. Figure 2d–f show OTSU segmentation results of SAR images shown in Figure 2a–c. It can be seen that the OTSU method has an excellent segmentation effect and can accurately extract the sea area from the original SAR scenes, which is convenient for the subsequent SAR sea-surface ship target detection.
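To make the thresholding step concrete, the sketch below implements OTSU from its between-class-variance definition in plain NumPy and applies it to a synthetic scene; the gamma-distributed clutter and the bright block are illustrative stand-ins for sea and land, not data from the paper.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Global threshold that maximizes the between-class variance (Otsu, 1979)."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist / hist.sum()                       # bin probabilities
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                           # background class weight
    w1 = 1.0 - w0                               # foreground class weight
    m0 = np.cumsum(p * centers)                 # unnormalized background mean
    mT = m0[-1]                                 # global mean
    valid = (w0 > 0) & (w1 > 0)
    # Between-class variance: w0 * w1 * (mu0 - mu1)^2.
    sigma_b = np.zeros(nbins)
    sigma_b[valid] = (mT * w0[valid] - m0[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# Synthetic scene: gamma-distributed sea clutter plus a bright "land" block.
rng = np.random.default_rng(0)
img = rng.gamma(2.0, 10.0, size=(128, 128))
img[:40, :60] += 150.0

t = otsu_threshold(img)
sea_mask = img < t      # keep only the sea surface for subsequent detection
```

The mask produced this way plays the role of the sea–land segmentation result that later stages restrict their processing to.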

2.1.2. SP Segmentation

SP segmentation agglomerates pixels with similar properties and locations into a cell, which can effectively suppress the influence of isolated clutter and speckle noise, and has become an important tool in computer vision. Simple linear iterative clustering (SLIC) is a common SP segmentation algorithm with excellent performance and fast segmentation speed, controlled by the size and compactness of the SP cells [24]. For single-channel SAR images, we adapt SLIC to the 3D space $[g, x, y]^T$, where $g$ represents the pixel intensity and $x$ and $y$ represent the spatial position. The spatial proximity and intensity similarity between any two pixels are calculated as:
$$d_s = \sqrt{(x_u - x_v)^2 + (y_u - y_v)^2}$$
$$d_c = \sqrt{(g_u - g_v)^2}$$
where $(x_u, y_u)$ and $(x_v, y_v)$ are the positions of pixels $u$ and $v$ in the SAR image, respectively, and $g_u$ and $g_v$ are their intensity values. The distance metric between any two pixels in a single-channel SAR image is expressed as [31]:
$$D = \sqrt{d_c^2 + \left(\frac{d_s}{S}\right)^2 M^2}$$
where $S$ denotes the SP size and $M$ is an adjustment parameter. $S$ is selected according to the SAR scenario under analysis, and extensive experiments show that the optimal range of $M$ is 0.1–0.3.
Figure 3b shows SP segmentation results with S = 15 and M = 0.1 , which verifies that SP segmentation using the SLIC method can maintain the boundary contour of the ship target. When performing SP segmentation, a ship will likely be divided into multiple SP cells due to the discontinuity of ship targets, or a target SP cell contains clutter in the edge. Therefore, the feature-extraction stage needs to consider segmentation inaccuracy for the effect of ship-detection performance.
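The clustering loop can be sketched as follows; this is a simplified SLIC in the $(g, x, y)$ space using the distance metric above, without the connectivity enforcement and border refinement of a full implementation (the window size and iteration count are illustrative choices, not values from the paper).

```python
import numpy as np

def slic_gray(img, S=15, M=0.1, n_iter=5):
    """Simplified SLIC for a single-channel image in (g, x, y) space.

    Pixel-to-center distance follows D = sqrt(d_c^2 + (d_s/S)^2 * M^2);
    connectivity enforcement and border refinement are omitted.
    """
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Seed cluster centers on a regular grid with spacing S.
    cy, cx = np.mgrid[S // 2:H:S, S // 2:W:S]
    centers = np.stack([img[cy, cx].ravel(),
                        cx.ravel().astype(float),
                        cy.ravel().astype(float)], axis=1)   # rows: (g, x, y)
    labels = np.zeros((H, W), dtype=int)
    for _ in range(n_iter):
        best = np.full((H, W), np.inf)
        for k, (g, x, y) in enumerate(centers):
            # Search only a (2S+1)-wide window around each center.
            y0, y1 = max(int(y) - S, 0), min(int(y) + S + 1, H)
            x0, x1 = max(int(x) - S, 0), min(int(x) + S + 1, W)
            d_c2 = (img[y0:y1, x0:x1] - g) ** 2
            d_s2 = (xs[y0:y1, x0:x1] - x) ** 2 + (ys[y0:y1, x0:x1] - y) ** 2
            D = np.sqrt(d_c2 + d_s2 / S ** 2 * M ** 2)
            upd = D < best[y0:y1, x0:x1]
            best[y0:y1, x0:x1][upd] = D[upd]
            labels[y0:y1, x0:x1][upd] = k
        # Move each center to the mean (g, x, y) of its members.
        for k in range(len(centers)):
            m = labels == k
            if m.any():
                centers[k] = [img[m].mean(), xs[m].mean(), ys[m].mean()]
    return labels

labels = slic_gray(np.random.default_rng(1).random((60, 60)), S=10, M=0.2, n_iter=3)
```

Because the intensity term enters $D$ unnormalized, the image should be scaled to a range comparable to $M$ (e.g., $[0, 1]$) before clustering.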

2.2. Feature Extraction Based on SP

2.2.1. Boundary Feature Extraction Based on Haar-like

Considering the ambiguity of ship contour driving from noise and other factors, conventional edge-detected operators cannot work well in the edge extraction of ships. Haar-like is a feature description operator with excellent edge feature-extraction performance, which can be roughly divided into four types, namely edge feature, linear feature, center-surrounding feature, and diagonal feature [61]. Consequently, the proposed method exploits edge feature templates of Haar-like to achieve the boundary feature extraction as shown in Figure 4.
According to the description of Haar-like edge features in different directions, we utilize the normal direction of the closed curve for the SP cell corresponding to the Haar-like edge feature template as shown in Figure 5, in which a solid red line represents the closed boundary curve of the SP cell with counterclockwise direction, and the blue arrow points to the normal direction of the boundary point.
According to the normal direction of the boundary pixel point, the convolution operator $F_\theta$ based on Haar-like theory is used to calculate the feature measure value of any boundary pixel of an SP cell. The operators for the different normal directions $\theta$ are as follows:
  • When $\theta = 0°$, $F_\theta = F_{\theta 1} = [E_1, -E_1]_{1 \times 2L}$;
  • When $\theta = 45°$, $F_\theta = F_{\theta 2} = \begin{bmatrix} 0 & E_2 \\ -E_2 & 0 \end{bmatrix}_{2L \times 2L}$;
  • When $\theta = 90°$, $F_\theta = F_{\theta 1}^T$;
  • When $\theta = 135°$, $F_\theta = F_{\theta 3} = \begin{bmatrix} E_3 & 0 \\ 0 & -E_3 \end{bmatrix}_{2L \times 2L}$;
  • When $\theta = 180°$, $F_\theta = -F_{\theta 1}$;
  • When $\theta = 225°$, $F_\theta = -F_{\theta 2}$;
  • When $\theta = 270°$, $F_\theta = -F_{\theta 1}^T$;
  • When $\theta = 315°$, $F_\theta = -F_{\theta 3}$;
where $E_1$ is a $1 \times L$ matrix whose elements are all 1, $E_2$ is an $L \times L$ matrix whose elements on the secondary diagonal are all 1 and whose other elements are all zero, and $E_3$ is the $L \times L$ identity matrix.
Therefore, the feature measure value at any boundary pixel point ( p , q ) for the SP cell can be defined as:
$$f_B(p, q) = F_\theta \otimes I = \sum_{m} \sum_{n} F_\theta(m, n)\, I(p + m, q + n), \quad \text{s.t.} \; \theta \in \{0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°\}$$
where ⊗ denotes the convolution operation, I ( p , q ) indicates the pixel intensity value at the boundary pixel point ( p , q ) . The value ranges of m and n are different for different normal directions at boundary points.
Since a ship may be divided into several targets (as shown in Figure 3b), the first K maximum boundary pixel feature metric values are selected to extract the boundary feature value of the ship target SP, and the calculation criterion is as follows:
$$f_1 = \frac{1}{K} \sum_{k=1}^{K} f_B^k$$
where $f_B^k$ represents the boundary feature metric value at the $k$-th boundary pixel point $(p_k, q_k)$ of the SP cell. Figure 6 shows the boundary feature values of different SP cells for Figure 3, and displays that the values of ship SP cells are higher than those of clutter SP cells.
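As a toy illustration of the boundary measure above, the sketch below applies only the horizontal edge template $[E_1, -E_1]$ ($\theta = 0°$) at given boundary pixels and averages the $K$ strongest responses; the full method rotates the template to each boundary point's normal direction.

```python
import numpy as np

def boundary_feature(img, boundary_pts, L=3, K=5):
    """Haar-like edge response at SP boundary points, averaged over the
    K strongest responses. Horizontal template [E1, -E1] only (theta = 0)."""
    _, W = img.shape
    vals = []
    for p, q in boundary_pts:
        if q - L < 0 or q + L > W:
            continue                      # template would leave the image
        left = img[p, q - L:q].sum()      # +1 half of the template
        right = img[p, q:q + L].sum()     # -1 half of the template
        vals.append(abs(left - right))
    vals = sorted(vals, reverse=True)[:K]
    return float(np.mean(vals)) if vals else 0.0

# Toy scene: a bright "ship" block on dark clutter.
img = np.full((32, 32), 1.0)
img[10:20, 10:20] = 12.0
ship_pts = [(r, 10) for r in range(10, 20)]     # left boundary of the block
sea_pts = [(r, 25) for r in range(10, 20)]      # homogeneous clutter
print(boundary_feature(img, ship_pts))          # → 33.0 (strong edge)
print(boundary_feature(img, sea_pts))           # → 0.0 (no edge)
```

The feature is large exactly where intensity changes abruptly across the SP boundary, which is the behavior Figure 6 reports for ship cells.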

2.2.2. Saliency Texture Feature Extraction Based on Non-Uniform LBP

Due to the complexity of ship structures and the differences in ship materials, the scattering of a ship target differs at different positions, resulting in fluctuations of texture and intensity. This paper exploits the texture difference between ship targets and clutter to extract texture features based on the SP cells, realizing an effective distinction between ship and clutter.
The LBP feature operator is generally applicable to texture feature extraction of local images, and the LBP texture feature can be reflected through the differences between the center pixel and its surrounding pixels in the rectangular window. To represent the texture information of the region, the LBP value is calculated as [62]:
$$L_{bp} = \sum_{i=0}^{P-1} s(I_i - I_c) \cdot 2^i, \quad \text{s.t.} \; s(I_i - I_c) = \begin{cases} 1, & I_i - I_c \geq 0 \\ 0, & I_i - I_c < 0 \end{cases}$$
where $I_c$ is the intensity value of the center pixel, $I_i$, $i = 0, 1, \ldots, P-1$, represents the intensity value of the $i$-th pixel in the local neighborhood, and there are $P$ neighbor pixels in total.
Affected by the structure and material of ships, the scattering uniformity of the ship target is poor, and there is an obvious texture difference between the ship target and clutter. Therefore, to better represent the LBP texture feature of the ship SP, the local saliency texture feature measure value for ship SP is used as follows:
$$f_2 = \max_{k} \left[ \log_2 \left( \frac{\bar{T}ex^T}{\bar{T}ex^{B_k}} + 1 \right) \times \bar{T}ex^T \right], \quad \text{s.t.} \; k = 1, 2, \ldots, K_B$$
where $Tex^T(p, q) = \sum_{j=0}^{L_1^2 - 2} \left( I_j - I(p, q) \right)^2$ represents the texture value at pixel $(p, q)$ of the target SP cell, $I(p, q)$ is the original intensity value at pixel $(p, q)$ of the target SP cell, and $I_j$, $j = 0, 1, \ldots, L_1^2 - 2$, is the intensity value of the $j$-th pixel in the local $L_1 \times L_1$ neighborhood. $\bar{T}ex^T$ is the average texture value of the center target SP cell and $\bar{T}ex^{B_k}$, $k = 1, 2, \ldots, K_B$, is the average texture value of the $k$-th neighboring clutter SP cell, where $K_B$ indicates the total number of clutter SP cells in the local neighborhood window. Moreover, as $L_1$ increases, the non-uniformity of the LBP texture becomes higher, which is conducive to detection [27,63]. Figure 7 shows the saliency texture feature values of different SP cells for Figure 3 and presents a better texture distinction between ships and clutter.
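For intuition, the minimal sketch below computes the basic 8-neighbor LBP code on a 3×3 window; the non-uniform variant used in this paper works on larger $L_1 \times L_1$ neighborhoods, so this shows only the core coding step.

```python
import numpy as np

def lbp_value(win):
    """LBP code of a 3x3 window: threshold the 8 neighbors against the
    center pixel (s = 1 when I_i - I_c >= 0) and pack the bits."""
    c = win[1, 1]
    nb = [win[0, 0], win[0, 1], win[0, 2], win[1, 2],
          win[2, 2], win[2, 1], win[2, 0], win[1, 0]]   # clockwise order
    return sum(int(v >= c) << i for i, v in enumerate(nb))

# A flat patch codes to 255 (every neighbor satisfies I_i - I_c >= 0);
# an isolated bright center codes to 0.
flat = np.ones((3, 3))
peak = np.array([[1.0, 1.0, 1.0], [1.0, 9.0, 1.0], [1.0, 1.0, 1.0]])
print(lbp_value(flat), lbp_value(peak))   # → 255 0
```

Uniform clutter thus produces near-constant codes, while the fluctuating scattering of a ship yields a diverse (non-uniform) code distribution that the texture measure exploits.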

2.2.3. Attention Contrast Feature Extraction Based on Intensity Information

The scattering intensity of ships and clutter to electromagnetic waves is different. The energy of the ship target in SAR images is relatively concentrated, while the energy distribution of the clutter background is uniform. Therefore, inspired by the idea of local contrast measurement (LCM) [30], to achieve the saliency representation of the ship intensity feature, we use the metric value of intensity attention contrast enhancement for the ship target as:
$$f_3 = \max_{k} \left[ \frac{1}{K_I} \sum_{i=1}^{K_I} I_T^i \times \frac{\mu_T}{\mu_B^k} \right], \quad \text{s.t.} \; k = 1, 2, \ldots, K_B$$
where $I_T^i$ represents the $i$-th largest intensity value in the target SP cell, $K_I$ is the number of maximal intensity values considered in the ship SP, $\mu_T$ is the average intensity value of the central target SP cell in the neighborhood window, and $\mu_B^k$, $k = 1, 2, \ldots, K_B$, represents the average intensity value of the $k$-th surrounding clutter background SP cell in the neighborhood window. Figure 8 shows the intensity attention contrast feature values of different SP cells for Figure 3, and we can see that there are obvious intensity differences between ships and sea clutter.
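Assuming each SP cell is given as a flat array of its pixel intensities, the contrast measure above can be sketched as:

```python
import numpy as np

def attention_contrast(target_sp, clutter_sps, K_I=3):
    """Intensity attention contrast f3: mean of the K_I largest target
    intensities, scaled by the largest ratio of the target-cell mean to
    a surrounding clutter-cell mean."""
    t = np.asarray(target_sp, dtype=float)
    top = np.sort(t)[::-1][:K_I].mean()       # 1/K_I * sum of K_I maxima
    mu_T = t.mean()
    return top * max(mu_T / np.asarray(b, dtype=float).mean()
                     for b in clutter_sps)

ship = np.array([40.0, 38.0, 35.0, 5.0, 6.0])   # concentrated bright returns
sea = [np.array([4.0, 5.0, 6.0]), np.array([5.0, 5.0, 5.0])]
print(attention_contrast(ship, sea))            # large value flags the ship
```

Using only the $K_I$ strongest returns makes the measure robust to the segmentation inaccuracy noted in Section 2.1.2, where a target cell may contain some clutter pixels at its edge.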
As shown in Figure 6, Figure 7 and Figure 8, there are significant differences between ship and clutter SP cells. Consequently, we construct a 3D feature vector from the above three features. The $h$-th clutter SP feature sample is recorded as $\mathbf{f}_h = [f_{1,h}, f_{2,h}, f_{3,h}]$, where $h = 1, 2, \ldots, H$ and $H$ is the number of samples. The clutter-only feature training set $F_H$ is expressed as:
$$F_H = \begin{bmatrix} \mathbf{f}_1 \\ \mathbf{f}_2 \\ \vdots \\ \mathbf{f}_H \end{bmatrix} = \begin{bmatrix} f_{1,1} & f_{2,1} & f_{3,1} \\ f_{1,2} & f_{2,2} & f_{3,2} \\ \vdots & \vdots & \vdots \\ f_{1,H} & f_{2,H} & f_{3,H} \end{bmatrix}$$
Moreover, to limit the different feature values to the same scale, the 3D feature set F H needs to be normalized. The normalized feature set F H is obtained by the following equation:
$$F_H' = \begin{bmatrix} \mathbf{f}_1' \\ \mathbf{f}_2' \\ \vdots \\ \mathbf{f}_H' \end{bmatrix} = \begin{bmatrix} \frac{f_{1,1} - Min_1}{Max_1 - Min_1} & \frac{f_{2,1} - Min_2}{Max_2 - Min_2} & \frac{f_{3,1} - Min_3}{Max_3 - Min_3} \\ \frac{f_{1,2} - Min_1}{Max_1 - Min_1} & \frac{f_{2,2} - Min_2}{Max_2 - Min_2} & \frac{f_{3,2} - Min_3}{Max_3 - Min_3} \\ \vdots & \vdots & \vdots \\ \frac{f_{1,H} - Min_1}{Max_1 - Min_1} & \frac{f_{2,H} - Min_2}{Max_2 - Min_2} & \frac{f_{3,H} - Min_3}{Max_3 - Min_3} \end{bmatrix}$$
where $(Min_1, Max_1)$, $(Min_2, Max_2)$, and $(Min_3, Max_3)$ represent the minimum and maximum values of the feature sets $\{f_{1,1}, f_{1,2}, \ldots, f_{1,H}\}$, $\{f_{2,1}, f_{2,2}, \ldots, f_{2,H}\}$, and $\{f_{3,1}, f_{3,2}, \ldots, f_{3,H}\}$, respectively.
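This min–max scaling amounts to a column-wise operation on the $H \times 3$ feature matrix; a NumPy sketch with a made-up feature matrix:

```python
import numpy as np

def minmax_normalize(F):
    """Column-wise min-max scaling of an H x 3 feature set: each of the
    three features is mapped to [0, 1] independently."""
    F = np.asarray(F, dtype=float)
    mins, maxs = F.min(axis=0), F.max(axis=0)
    return (F - mins) / (maxs - mins)

F = np.array([[2.0, 10.0, 0.5],
              [4.0, 30.0, 1.5],
              [3.0, 20.0, 1.0]])
F_norm = minmax_normalize(F)   # rows become [0,0,0], [1,1,1], [0.5,0.5,0.5]
```

In deployment, the minima and maxima estimated from the clutter-only training set would also be applied to test samples so that both live on the same scale.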

2.3. Detection Decision by COFL with False-Alarm Control

Extracting a ship from sea clutter can be regarded as a binary classification problem, but SVM-based detection requires balanced training samples of the two classes to achieve high accuracy, whereas ships are outliers on the sea surface. Based on this fact, we use anomaly-detection theory to design the feature-learning metric. For the normalized 3D feature set $F_H' = \{\mathbf{f}_i \in \mathbb{R}^3 : i = 1, 2, \ldots, H\}$, the decision hypersphere $\Omega$ with false-alarm control is determined as follows:
$$\min_{R, \xi} \; R^2 + C \sum_{i=1}^{H} \xi_i \quad \text{s.t.} \; \|F(\mathbf{f}_i) - c\|^2 \leq R^2 + \xi_i, \; \xi_i \geq 0, \; i = 1, \ldots, H; \quad \left| \{\mathbf{f}_j : \mathbf{f}_j \in \Omega\} \right| = \lfloor H \times (1 - P_{fa}) \rfloor$$
where $R$ is the radius of the decision hypersphere $\Omega$, $c$ is the center of $\Omega$, $C$ is the penalty factor, and $\xi_i$ is a slack variable. $F(\mathbf{f}_i)$ is the mapping of $\mathbf{f}_i$ into a high-dimensional space, $\{\mathbf{f}_j : \mathbf{f}_j \in \Omega\}$ is the set of clutter samples contained in $\Omega$, $|\{\mathbf{f}_j : \mathbf{f}_j \in \Omega\}|$ is the number of samples in that set, $P_{fa}$ is the probability of false alarm, and $\lfloor \cdot \rfloor$ denotes the floor (round-down) operator.
According to the classical single-class classification model support vector domain description (SVDD) [64], the original optimization problem can be simplified to Equation (12) using the Lagrange multiplier method and KKT condition.
$$\min_{\alpha} \; \sum_{i=1}^{H} \sum_{j=1}^{H} \alpha_i \alpha_j k_f(\mathbf{f}_i, \mathbf{f}_j) - \sum_{i=1}^{H} \alpha_i k_f(\mathbf{f}_i, \mathbf{f}_i) \quad \text{s.t.} \; \sum_{i=1}^{H} \alpha_i = 1, \; 0 \leq \alpha_i \leq C, \; \left| \{\mathbf{f}_j : \mathbf{f}_j \in \Omega\} \right| = \lfloor H \times (1 - P_{fa}) \rfloor$$
where k f ( f i , f j ) is the Gauss kernel function. By solving Equation (12), a hypersphere model Ω can be obtained and used as the detection decision. If a sample is contained in the hypersphere Ω , it is considered to be clutter, otherwise, it is determined as the target.
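A decision model of this kind can be approximated with off-the-shelf tools: with a Gaussian kernel, one-class SVM is equivalent to SVDD, and its nu parameter bounds the fraction of training clutter left outside the decision region, playing a role analogous to P_fa. The sketch below uses scikit-learn with made-up normalized features and illustrative gamma/nu values; it is not the paper's exact constrained formulation.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Clutter-only training set: 3D feature samples, already scaled to [0, 1].
rng = np.random.default_rng(0)
clutter = rng.normal(0.3, 0.05, size=(500, 3)).clip(0.0, 1.0)

# nu upper-bounds the fraction of training samples outside the decision
# region, mimicking the constraint |{f in Omega}| = floor(H * (1 - P_fa)).
P_fa = 0.01
model = OneClassSVM(kernel="rbf", gamma=20.0, nu=P_fa).fit(clutter)

# Ship-like samples sit far from the clutter cloud in the feature space.
ships = np.array([[0.90, 0.80, 0.95],
                  [0.85, 0.90, 0.90]])
pred = model.predict(ships)                       # -1 means anomaly (ship)
fa_rate = (model.predict(clutter) == -1).mean()   # empirical false alarms
```

Training thus sees only clutter samples, matching the COFL setting in which ship samples are too scarce to learn from directly.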
Algorithm 1 summarizes the implementation of our proposed SAR ship detector based on the COFL model with false-alarm control.
Algorithm 1. The proposed ship detector based on COFL with false-alarm control.
  • Input: The original SAR images.
  • Step-1: Preprocessing.
  • Step-1.1: Sea–land segmentation of input scene SAR images using the OTSU method.
  • Step-1.2: SP segmentation by SLIC algorithm.
  • Step-2: Multi-feature extraction based on SP cell, and three methods of SP feature extraction are as follows:
  • Step-2.1: Boundary feature value is calculated by Equation (5).
  • Step-2.2: Saliency texture feature value is calculated by Equation (7).
  • Step-2.3: Intensity attention contrast value is calculated by Equation (8).
  • Step-3: Construct normalized clutter-only feature set F H according to Equations (9) and (10), and then train the classification decision model based on Equation (12).
  • Step-4: Detection based on the trained decision hypersphere.
  • Output: Detection results.

3. Experimental Results and Analysis

The effectiveness of our proposed feature space and ship-detection performance is verified in this section using high-resolution SAR images from the public datasets HRSID [65] and LS-SSDD-v1.0 [66], in which all real targets have been labeled. Table 1 lists the different imaging parameters of SAR images from HRSID and LS-SSDD-v1.0 datasets. We select multi-target sea-surface scenes and sea–land scenes for experimental analysis.

3.1. Effectiveness Analyses of Multi-Feature Extraction

To further verify that the proposed three-dimensional features (P3DF) can significantly distinguish ship targets from the clutter background, we present the 3D feature spatial distributions constructed with P3DF and with the optimal three-dimensional features (O3DF) of [33] (the O3DF-SVM-based method) for the two SAR images shown in Figure 9a,b.
Figure 10a,b show the 3D feature spaces constructed with the O3DF-SVM-based method for the training clutter samples and target samples of Figure 9a,b. Figure 10c,d present the 3D feature spaces constructed with our method for the clutter samples and target samples of Figure 9a,b. Compared with the features extracted by the O3DF-SVM-based method, our proposed features better discriminate ships from clutter, especially small ships; as seen from Figure 10c,d, the feature distributions of ship targets lie outside the feature space of the training clutter. Therefore, the 3D feature space proposed in this paper is more conducive to ship target detection.

3.2. Performance Analyses of Target Detection

In this subsection, target-detection experiments on many real SAR images from the public datasets HRSID and LS-SSDD-v1.0 validate the ship-detection performance of the proposed algorithm in different scenes. We show the detection results for four multi-target SAR images and compare the proposed method with the CA-CFAR method [5], the GΓD-CFAR method [15], the improved SP-CFAR method [24], the SLCM method [30], the adaptive SP-CFAR method [20], the O3DF-SVM-based method [33], and the P3DF-SVM-based method. For all comparison methods that require a false-alarm rate (the CA-CFAR, GΓD-CFAR, improved SP-CFAR, and adaptive SP-CFAR methods), the false-alarm probability is set to 10⁻⁵. The selected scenes cover different multi-target situations, such as sea–land scenes (Figure 11a–c), small targets (Figure 11b–d), and ship wakes (Figure 11c). Real ship targets are marked with white rectangles.
Figure 12, Figure 13, Figure 14 and Figure 15 present the ship-detection results of the seven comparison algorithms and the proposed algorithm. After ship-target clustering, correctly detected targets, false alarms, and missed targets are marked with red, green, and yellow rectangles, respectively. The CA-CFAR and GΓD-CFAR methods generate many false alarms because interferences (such as speckle noise and ship wakes) make the clutter statistical model inaccurate. Compared with these pixel-wise detectors, the improved SP-CFAR method reduces the probability of false alarms, but a few remain because strong interference is insufficiently suppressed. The SLCM and adaptive SP-CFAR methods miss small targets with weak scattering characteristics (Figure 12d,e, Figure 13d,e, Figure 14d,e and Figure 15d,e); however, the adaptive SP-CFAR method misses fewer ships than the SLCM method because its non-local SP topology structure avoids overestimating the threshold. The O3DF-SVM-based method can efficiently realize the binary classification of ship targets and clutter, but in actual SAR scenes the number of ship-target samples is very limited; the resulting imbalance of training samples degrades detection and can produce a small number of false alarms (Figure 13f, Figure 14f and Figure 15f). The P3DF-SVM-based method improves the ship-detection ability, but owing to the limitation of the SVM training decision, a few false targets remain (Figure 13g, Figure 14g and Figure 15g).
In contrast, the detection results of the proposed method in Figure 13h, Figure 14h and Figure 15h show a better classification effect than the SVM-based detectors. For the uniform sea surface of Figure 11a, the SVM methods and the proposed method achieve the same detection ability (Figure 12f–h). In addition, bright ship wakes often accompany moving ships, as shown in Figure 11c, and are easily misjudged as ship targets; comparing all the detection results of Figure 14 indicates that the proposed algorithm effectively reduces the false alarms caused by ship wakes.
Furthermore, the figure of merit (FoM) is adopted to evaluate the detection performance quantitatively; it is calculated as
FoM = N_d / (N_s + N_f)
where N_d, N_f, and N_s denote the numbers of correctly detected ship targets, false alarms, and real ships, respectively. The higher the FoM value, the better the target-detection performance.
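The FoM above is straightforward to compute; as a sanity check, the example values below reuse the Scene 1 CA-CFAR counts from Table 2 (N_d = 9, N_f = 10, and N_s = 9 real ships in the scene).

```python
def figure_of_merit(n_detected, n_false, n_real):
    """FoM = N_d / (N_s + N_f): correct detections over real ships plus false alarms."""
    return n_detected / (n_real + n_false)

# Scene 1, CA-CFAR method (Table 2): 9 detections, 10 false alarms, 9 real ships
print(round(figure_of_merit(9, 10, 9), 4))  # → 0.4737
```

A perfect result (all real ships detected, no false alarms) gives FoM = 1, matching the proposed method's entries for Scenes 1 and 2.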
The FoMs of the different methods are listed in Table 2. The CA-CFAR and GΓD-CFAR methods yield high false-alarm rates in complex multi-target SAR scenarios, whereas the improved SP-CFAR method reduces the probability of false alarms. The SLCM method misses some small ships with weaker scattering, while the adaptive SP-CFAR method improves the ship-target detection rate and reduces the probability of missed detection. Owing to the limited number of ship training samples, the discrimination accuracy of the SVM model for strong interferences is limited, resulting in a small number of false alarms for the O3DF-SVM-based and P3DF-SVM-based methods. The FoM values of the proposed method in Table 2 demonstrate more outstanding detection performance.
In addition, target-detection experiments on many scene SAR images yield the average and standard deviation of the FoM shown in Table 3. From Table 3, the proposed algorithm has better detection performance and robustness than the other comparative algorithms.
To evaluate the detection performance of the different methods more comprehensively, we use the receiver-operating-characteristic (ROC) curve to describe the relationship between the detection probability p_d and the observed false-alarm probability p_f, defined respectively as:
p_d = N_dtp / N_tp
p_f = N_dcp / N_cp
where N_tp and N_cp represent the total numbers of target pixels and clutter pixels, respectively, N_dtp is the number of correctly detected target pixels, and N_dcp is the number of clutter pixels falsely detected as target pixels.
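Given binary ground-truth and detection masks, the pixel-level rates above can be computed directly; a minimal sketch (the mask argument names are illustrative, not from the paper):

```python
import numpy as np

def pixel_roc_point(detection_mask, truth_mask):
    """Return (p_f, p_d): pixel-level false-alarm and detection probabilities."""
    target = truth_mask.astype(bool)        # target pixels (N_tp of them)
    det = detection_mask.astype(bool)
    p_d = np.logical_and(det, target).sum() / target.sum()       # N_dtp / N_tp
    p_f = np.logical_and(det, ~target).sum() / (~target).sum()   # N_dcp / N_cp
    return p_f, p_d
```

Sweeping the detector's threshold and collecting one (p_f, p_d) point per setting traces out an ROC curve like those in Figure 16.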
Figure 16 presents the ROC curves of the different algorithms for Figure 11a–d based on the real measured SAR data. Under the same detection probability p_d, the observed false-alarm probability p_f of our method is the smallest; that is, the proposed detection method outperforms the comparison methods in terms of ship-detection performance.
To compare the ship-detection performance and computational complexity of the proposed method with deep-learning-based methods, the YOLOv5-based detection method [40] is adopted for comparative analysis. Figure 17 presents the detection results for scene 2 and scene 4 (shown in Figure 11b,d), in which correctly detected and missed targets are again marked by red and yellow rectangles, respectively. As shown in Figure 17, the YOLOv5-based method cannot effectively detect small targets, whereas the proposed method improves small-ship detection performance. Moreover, deep-learning detection is data-driven, and a high-precision model requires vast amounts of training data, while the proposed anomaly-based method needs only a small number of samples to train the detection decision model, giving it a much lower computational cost in the learning stage. The average computational costs of the YOLOv5-based method and the proposed method in the testing stage are 57 ms and 42 ms, respectively, confirming the efficient execution capability of our method.

4. Conclusions

This paper investigates an anomaly-based SP ship-detection method for the sea surface, and the effectiveness of the proposed algorithm is verified on a large number of multi-target SAR images. Because of the effects of islands and speckle, preprocessing is indispensable and mainly comprises sea–land segmentation and SP segmentation. To enhance the detection of ships at different scales, a 3D feature space based on the boundary feature, saliency texture feature, and intensity attention contrast feature of the SP cell is designed. The anomaly-detection metric is built with flexible adjustment of the false-alarm rate, and the COFL model with false-alarm control can efficiently and accurately classify ship and clutter SPs. Extensive experiments and comparative analyses of different methods prove that our algorithm reduces the false-alarm rate in complex multi-target environments while significantly improving ship-detection performance. However, segmentation is often inaccurate for small, weakly scattering ships in complex scenes, which affects their detection. In future work, more effective SP feature-extraction methods and machine-learning models with higher classification accuracy will be explored to improve the detection of small and weakly scattering ships in complex scenes.

Author Contributions

Conceptualization, X.P.; methodology, X.P. and N.L.; software, N.L.; validation, N.L. and G.Z.; formal analysis, G.Z. and Z.W.; investigation, N.L. and X.P.; resources, L.Y.; data curation, X.P.; writing—original draft preparation, N.L.; writing—review and editing, N.L., X.P., L.Y., Z.H., J.C., Z.W. and G.Z.; supervision, L.Y., J.C. and Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant No. 62201004, in part by the Natural Science Foundation of Education Department of Anhui Province under Grant KJ2020A0030, in part by the Postdoctoral Fund of Anhui Province under Grant 2021B497 and in part by the Opening Foundation of the Key Laboratory of Intelligent Computing and Signal Processing under Grant 2020A009.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Allard, Y.; Germain, M.; Bonneau, O. Ship Detection and Characterization Using Polarimetric SAR Data. In Harbour Protection Through Data Fusion Technologies; Springer: Berlin/Heidelberg, Germany, 2009; pp. 243–250. [Google Scholar]
  2. Xu, G.; Zhang, B.; Chen, J.; Xing, M.; Hong, W. Sparse synthetic aperture radar imaging from compressed sensing and machine learning: Theories, applications and trends. IEEE Geosci. Remote Sens. Mag. 2022, 10, 32–69. [Google Scholar] [CrossRef]
  3. Zhang, B.; Xu, G.; Zhou, R.; Zhang, H.; Hong, W. Multi-channel back-projection algorithm for mmWave automotive MIMO SAR imaging with doppler-division multiplexing. IEEE J. Sel. Top. Signal Process. 2023, 17, 445–457. [Google Scholar] [CrossRef]
  4. Sor, R.; Sathone, J.S.; Deoghare, S.U.; Sutaone, M.S. OS-CFAR based on thresholding approaches for target detection. In Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 16–18 August 2018; pp. 1–6. [Google Scholar]
  5. Tao, D.; Anfinsen, S.N.; Brekke, C. Robust CFAR detector based on truncated statistics in multiple-target situations. IEEE Trans. Geosci. Remote Sens. 2016, 54, 117–134. [Google Scholar] [CrossRef]
  6. Ai, J.; Luo, Q.; Yang, X.; Yin, Z.; Xu, H. Outliers-robust CFAR detector of Gaussian clutter based on the truncated-maximum-likelihood-estimator in SAR imagery. IEEE Trans. Intell. Transp. Syst. 2020, 21, 2039–2049. [Google Scholar] [CrossRef]
  7. Almeida García, F.D.; Flores Rodriguez, A.C.; Fraidenraich, G.; Santos Filho, J.C.S. CA-CFAR detection performance in homogeneous Weibull clutter. IEEE Geosci. Remote Sens. Lett. 2019, 16, 887–891. [Google Scholar] [CrossRef]
  8. Liu, N.; Sun, Y.; Ding, H.; Song, J. Comparative analysis of classical statistical models based on real sea clutter. Comput. Simul. 2017, 34, 448–452. [Google Scholar]
  9. Li, W.; Zhang, Y.; Zhang, G. Sea clutter simulation research based on lognormal distribution. In Proceedings of the 2016 4th International Conference on Advanced Materials and Information Technology Processing (AMITP 2016), Guilin, China, 24–25 September 2016; pp. 545–548. [Google Scholar]
  10. Li, B.; Liu, X.; Du, S.; Li, W. CFAR detector based on the identification of sea clutter distribution characteristics. J. Phys. Conf. Ser. 2022, 2221, 1–10. [Google Scholar] [CrossRef]
  11. Saldanha, M.F.S.; Freitas, C.C.; Sant’Anna, S.J.S. Single channel SAR image segmentation using gamma distribution hipothesis test. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 4323–4326. [Google Scholar]
  12. Gao, G.; Ouyang, K.; Luo, Y.; Liang, S.; Zhou, S. Scheme of parameter estimation for generalized gamma distribution and its application to ship detection in SAR images. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1812–1832. [Google Scholar] [CrossRef]
  13. Zhou, H.; Li, Y.; Jiang, T. Sea clutter distribution modeling: A kernel density estimation approach. In Proceedings of the 2018 10th International Conference on Wireless Communications and Signal Processing (WCSP), Hangzhou, China, 18–20 October 2018; pp. 1–6. [Google Scholar]
  14. Xin, Z.; Liao, G.; Yang, Z.; Zhang, Y.; Dang, H. Analysis of distribution using graphical goodness of fit for airborne SAR sea-clutter data. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5719–5728. [Google Scholar] [CrossRef]
  15. Qin, X.; Zhou, S.; Zou, H. A CFAR detection algorithm for generalized gamma distributed background in high-resolution SAR images. IEEE Geosci. Remote Sens. Lett. 2013, 10, 806–810. [Google Scholar]
  16. Dai, H.; Du, L.; Wang, Y.; Wang, Z. A modified CFAR algorithm based on object proposals for ship target detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1925–1929. [Google Scholar] [CrossRef]
  17. Ai, J.; Yang, X.; Yan, H. A local CFAR detector based on gray intensity correlation in SAR imagery. In Proceedings of the 2018 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Valencia, Spain, 22–27 July 2018; pp. 697–700. [Google Scholar]
  18. Wang, C.; Wang, J.; Liu, X. A novel algorithm for ship detection in SAR images. In Proceedings of the 2019 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Dalian, China, 20–22 September 2019; pp. 1–5. [Google Scholar]
  19. Pappas, O.; Achim, A.; Bull, D. Superpixel-level CFAR detectors for ship detection in SAR imagery. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1397–1401. [Google Scholar] [CrossRef] [Green Version]
  20. Li, M.; Cui, X.; Chen, S. Adaptive superpixel-level CFAR detector for SAR inshore dense ship detection. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  21. Li, T.; Peng, D.; Shi, S. Outlier-robust superpixel-level CFAR detector with truncated clutter for single look complex SAR images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 5261–5274. [Google Scholar] [CrossRef]
  22. Zhang, L.; Zhang, Z.; Lu, S.; Xiang, D.; Su, Y. Fast Superpixel-Based Non-Window CFAR Ship Detector for SAR Imagery. Remote Sens. 2022, 14, 2092. [Google Scholar] [CrossRef]
  23. Li, T.; Peng, D.; Chen, Z.; Guo, B. Superpixel-level CFAR detector based on truncated gamma distribution for SAR images. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1421–1425. [Google Scholar] [CrossRef]
  24. Li, T.; Liu, Z.; Xie, R.; Ran, L. An improved superpixel-level CFAR detection method for ship targets in high-resolution SAR images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 184–194. [Google Scholar] [CrossRef]
  25. Yang, M.; Guo, C.; Zhong, H.; Yin, H. A curvature-Based saliency method for ship detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1590–1594. [Google Scholar] [CrossRef]
  26. Wang, Z.; Wang, R.; Fu, X.; Xia, K. Unsupervised ship detection for single-channel SAR images based on multiscale saliency and complex signal kurtosis. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  27. Li, N.; Pan, X.; Yang, L.; Huang, Z.; Wu, Z.; Zheng, G. Adaptive CFAR method for SAR ship detection using intensity and texture feature fusion attention contrast mechanism. Sensors 2022, 22, 8116. [Google Scholar] [CrossRef]
  28. Cheng, J.; Xiang, D.; Tang, J.; Zheng, Y.; Guan, D.; Du, B. Inshore ship detection in large-scale SAR images based on saliency enhancement and bhattacharyya-like distance. Remote Sens. 2022, 14, 2832. [Google Scholar] [CrossRef]
  29. Liang, Y.; Sun, K.; Zeng, Y.; Li, G.; Xing, M. An adaptive hierarchical detection method for ship targets in high-resolution SAR images. Remote Sens. 2020, 12, 303. [Google Scholar] [CrossRef] [Green Version]
  30. Wang, X.; Chen, C.; Pan, Z. Superpixel-based LCM detector for faint ships hidden in strong noise background SAR imagery. IEEE Geosci. Remote Sens. Lett. 2019, 16, 417–421. [Google Scholar] [CrossRef]
  31. Lin, H.; Chen, H.; Jin, K.; Zeng, L.; Yang, J. Ship detection with superpixel-level fisher vector in high-resolution SAR images. IEEE Geosci. Remote Sens. Lett. 2020, 17, 247–251. [Google Scholar] [CrossRef]
  32. He, G.; Xia, Z.; Chen, H.; Li, K.; Zhao, Z.; Guo, Y.; Feng, P. An adaptive ship detection algorithm for HRWS SAR images under complex background: Application to sentinel1a data. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2018, 3, 497–503. [Google Scholar] [CrossRef] [Green Version]
  33. Xiong, W.; Xu, Y.; Yao, L. A new ship target detection algorithm based on SVM in high resolution SAR image. Remote. Sens. Technol. Appl. 2018, 33, 119–127. [Google Scholar]
  34. Li, H.; Cui, X.; Chen, S. PolSAR ship detection with optimal polarimetric rotation domain features and SVM. Remote Sens. 2021, 13, 3932. [Google Scholar] [CrossRef]
  35. Yang, X.; Bi, F.; Yu, Y.; Chen, L. An effective false-alarm removal method based on OC-SVM for SAR ship detection. In Proceedings of the IET International Radar Conference 2015, Hangzhou, 14–16 October 2015; pp. 1–4. [Google Scholar]
  36. Zou, B.; Qiu, Y.; Zhang, L. Docked ships detection using PolSAR image based on GOPSO-SVM. In Proceedings of the 2019 IEEE Radar Conference (RadarConf), Boston, MA, USA, 22–26 April 2019; pp. 1–6. [Google Scholar]
  37. Zou, B.; Qiu, Y.; Zhang, L. Ship detection using PolSAR images based on simulated annealing by fuzzy matching. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  38. Jiang, S.; Zhu, M.; He, Y.; Zheng, Z.; Zhou, F.; Zhou, G. Ship detection with SAR based on Yolo. In Proceedings of the 2020 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1647–1650. [Google Scholar]
  39. Yu, H.; Li, Y.; Zhang, D. An improved YOLO v3 small-scale ship target detection algorithm. In Proceedings of the 2021 6th International Conference on Smart Grid and Electrical Automation (ICSGEA), Kunming, China, 29–30 May 2021; pp. 560–563. [Google Scholar]
  40. Xu, X.; Zhang, X.; Zhang, T. SAR ship detection using YOLOv5 algorithm with anchor boxes cluster. In Proceedings of the 2022 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 2139–2142. [Google Scholar]
  41. Sun, Z.; Leng, X.; Lei, Y.; Xiong, B.; Ji, K.; Kuang, G. BiFA-YOLO: A novel YOLO-based method for arbitrary-oriented ship detection in high-resolution SAR images. Remote Sens. 2021, 13, 4209. [Google Scholar] [CrossRef]
  42. Guo, Y.; Chen, S.; Zhan, R.; Wang, W.; Zhang, J. LMSD-YOLO: A lightweight YOLO algorithm for multi-scale SAR ship detection. Remote Sens. 2022, 14, 4801. [Google Scholar] [CrossRef]
  43. Xu, X.; Zhang, X.; Zhang, T. Lite-YOLOv5: A lightweight deep learning detector for on-board ship detection in large-scene sentinel-1 SAR images. Remote Sens. 2022, 14, 1018. [Google Scholar] [CrossRef]
  44. Tang, G.; Zhuge, Y.; Claramunt, C.; Men, S. N-YOLO: A SAR ship detection using noise-classifying and complete-target extraction. Remote Sens. 2021, 13, 871. [Google Scholar] [CrossRef]
  45. Chang, Y.L.; Anagaw, A.; Chang, L.; Wang, Y.; Hsiao, C.Y.; Lee, W.H. Ship detection based on YOLOv2 for SAR imagery. Remote Sens. 2019, 11, 786. [Google Scholar] [CrossRef] [Green Version]
  46. Ke, X.; Zhang, X.; Zhang, T.; Shi, J.; Wei, S. SAR ship detection based on an improved faster R-CNN using deformable convolution. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, 11–16 July 2021; pp. 3565–3568. [Google Scholar]
  47. Chai, B.; Chen, L.; Shi, H.; He, C. Marine ship detection method for SAR image based on improved faster RCNN. In Proceedings of the 2021 SAR in Big Data Era (BIGSARDATA), Nanjing, China, 22–24 September 2021; pp. 1–4. [Google Scholar]
  48. Liu, D.; Gao, S. Ship targets detection in remote sensing images based on improved faster-RCNN. J. Phys. Conf. Ser. 2021, 2132, 1–6. [Google Scholar] [CrossRef]
  49. Xia, R.; Chen, J.; Huang, Z.; Wan, H.; Wu, B.; Sun, L.; Yao, B.; Xiang, H.; Xing, M. CRTransSar: A visual transformer based on contextual joint representation learning for SAR ship detection. Remote Sens. 2022, 14, 1488. [Google Scholar] [CrossRef]
  50. Zhou, Y.; Fu, K.; Han, B.; Yang, J.; Pan, Z.; Hu, Y.; Yin, D. D-MFPN: A doppler feature matrix fused with a multilayer feature pyramid network for SAR ship detection. Remote Sens. 2023, 15, 626. [Google Scholar] [CrossRef]
  51. Rostami, M.; Kolouri, S.; Eaton, E.; Kim, K. Deep transfer learning for few-shot SAR image classification. Remote Sens. 2019, 11, 1374. [Google Scholar] [CrossRef] [Green Version]
  52. Zhang, T.; Zhang, X. High-speed ship detection in SAR images based on a grid convolutional neural network. Remote Sens. 2019, 11, 1206. [Google Scholar] [CrossRef] [Green Version]
  53. Wei, S.; Su, H.; Ming, J.; Wang, C.; Yan, M.; Kumar, D.; Shi, J.; Zhang, X. Precise and robust ship detection for high-resolution SAR imagery based on HR-SDNet. Remote Sens. 2020, 12, 167. [Google Scholar] [CrossRef] [Green Version]
  54. Liu, N.; Cao, Z.; Cui, Z.; Pi, Y.; Dang, S. Multi-scale proposal generation for ship detection in SAR images. Remote Sens. 2019, 11, 526. [Google Scholar] [CrossRef] [Green Version]
  55. Sun, Y.; Wang, Z.; Sun, X.; Fu, K. SPAN: Strong scattering point aware network for ship detection and classification in large-scale SAR imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1188–1204. [Google Scholar] [CrossRef]
  56. Gao, S.; Liu, H. Polarimetric SAR ship detection based on scattering characteristics. IEEE J. Miniat. Air Space Syst. 2022, 3, 197–203. [Google Scholar] [CrossRef]
  57. Zhang, T.; Yang, Z.; Xing, C.; Zeng, L.; Yin, J.; Yang, J. Ship detection from polsar imagery based on the scattering difference parameter. In Proceedings of the 2020 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1255–1258. [Google Scholar]
  58. Wang, N.; Li, B.; Xu, Q.; Wang, Y. Automatic ship detection in optical remote sensing images based on anomaly detection and SPP-PCANet. Remote Sens. 2019, 11, 47. [Google Scholar] [CrossRef] [Green Version]
  59. Zhai, L.; Li, Y.; Su, Y. A novel ship detection algorithm based on anomaly detection theory for SAR images. In Proceedings of the 2016 Progress in Electromagnetic Research Symposium (PIERS), Shanghai, China, 8–11 August 2016; pp. 2868–2872. [Google Scholar]
  60. Chen, X.; Sun, J.; Yin, K.; Yu, J. Sea-land segmentation algorithm of SAR image based on Otsu method and statistical characteristic of sea area. J. Data Acquis. Process. 2014, 29, 603–608. [Google Scholar]
  61. Abdullah, A.; Albashish, D. Empirical comparison on boosted cascade of Haar-like features to histogram of oriented gradients for person detection. In Proceedings of the 2021 International Conference on Electrical Engineering and Informatics (ICEEI), Kuala Terengganu, Malaysia, 12–13 October 2021; pp. 1–6. [Google Scholar]
  62. Karanwal, S. Improved LBP and discriminative LBP: Two novel local descriptors for face recognition. In Proceedings of the 2022 IEEE International Conference on Data Science and Information System (ICDSIS), Hassan, India, 29–30 July 2022; pp. 1–6. [Google Scholar]
  63. Ravi Kumar, Y.B.; Ravi Kumar, C.N. Local binary pattern: An improved LBP to extract nonuniform LBP patterns with Gabor filter to increase the rate of face similarity. In Proceedings of the International Conference on Cognitive Computing and Information Processing (CCIP), Mysuru, India, 12–13 August 2016; pp. 1–5. [Google Scholar]
  64. Huang, W.; Lu, S.; Tang, X. A method using clustering and SVDD for quality detection. In Proceedings of the 2021 33rd Chinese Control and Decision Conference (CCDC), Kunming, China, 22–24 May 2021; pp. 4069–4072. [Google Scholar]
  65. Wei, S.; Zeng, X.; Qu, Q.; Wang, M.; Su, H.; Shi, J. HRSID: A high-resolution SAR images dataset for ship detection and instance segmentation. IEEE Access 2020, 8, 120234–120254. [Google Scholar] [CrossRef]
  66. Zhang, T.; Zhang, X.; Ke, X.; Zhan, X.; Shi, J.; Wei, S.; Pan, D.; Li, J.; Su, H.; Zhou, Y.; et al. LS-SSDD-v1.0: A deep learning dataset dedicated to small ship detection from large-scale Sentinel-1 SAR Images. Remote Sens. 2020, 12, 2997. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the proposed method.
Figure 2. Sea–land segmentation results for different scenes. (a) scene 1; (b) scene 2; (c) scene 3; (d) result of scene 1; (e) result of scene 2; (f) result of scene 3.
Figure 3. SP segmentation result. (a) original SAR image; (b) segmentation result for (a).
Figure 4. Haar-like edge features in different directions. (a) θ = 0°; (b) θ = 45°; (c) θ = 90°; (d) θ = 135°; (e) θ = 180°; (f) θ = 225°; (g) θ = 270°; (h) θ = 315°.
Figure 5. Schematic diagram about the normal direction of the boundary pixel point.
Figure 6. Boundary feature values of target and clutter SPs for Figure 3.
Figure 7. Saliency texture feature values of target and clutter SPs for Figure 3.
Figure 8. Intensity attention contrast feature values of target and clutter SPs for Figure 3.
Figure 9. Different SAR scenes for feature-space analysis. (a) scene A; (b) scene B. Small targets are marked by red circles.
Figure 10. 3D feature distributions of SP cells for different scenes shown in Figure 9. (a) 3D feature space constructed with O3DF-SVM-based method for scene A; (b) 3D feature space constructed with O3DF-SVM-based method for scene B; (c) 3D feature space constructed with our method for scene A; (d) 3D feature space constructed with our method for scene B.
Figure 11. Original multi-target scenes. (a) scene 1; (b) scene 2; (c) scene 3; (d) scene 4.
Figure 12. Different detection results of scene 1. (a) CA-CFAR method; (b) G Γ D-CFAR method; (c) improved SP-CFAR method; (d) SLCM method; (e) adaptive SP-CFAR method; (f) O3DF-SVM-based method; (g) P3DF-SVM-based method; (h) proposed method.
Figure 13. Different detection results of scene 2. (a) CA-CFAR method; (b) G Γ D-CFAR method; (c) improved SP-CFAR method; (d) SLCM method; (e) adaptive SP-CFAR method; (f) O3DF-SVM-based method; (g) P3DF-SVM-based method; (h) proposed method.
Figure 14. Different detection results of scene 3. (a) CA-CFAR method; (b) G Γ D-CFAR method; (c) improved SP-CFAR method; (d) SLCM method; (e) adaptive SP-CFAR method; (f) O3DF-SVM-based method; (g) P3DF-SVM-based method; (h) proposed method.
Figure 15. Different detection results of scene 4. (a) CA-CFAR method; (b) G Γ D-CFAR method; (c) improved SP-CFAR method; (d) SLCM method; (e) adaptive SP-CFAR method; (f) O3DF-SVM-based method; (g) P3DF-SVM-based method; (h) proposed method.
Figure 16. ROC curves for different methods. (a) ROC for Figure 11a; (b) ROC for Figure 11b; (c) ROC for Figure 11c; (d) ROC for Figure 11d.
Figure 17. Detection results of different scenes based on the YOLOv5 model. (a) Detection result of scene 2 shown in Figure 11b. (b) Detection result of scene 4 shown in Figure 11d.
Table 1. Imaging parameters of SAR images from HRSID and LS-SSDD-v1.0 datasets.
Dataset | Satellite | Imaging Mode | Incident Angle (°) | Resolution (m) | Polarization
HRSID | Sentinel-1, TerraSAR-X | SM, ST, HS | 27.6–34.8, 20–45 | 0.5, 1, 3 | HH, HV, VV
LS-SSDD-v1.0 | Sentinel-1 | IW | 27.6–34.8 | 5 × 20 | VV, VH
Note: SM: strip-map mode; ST: staring spotlight; HS: high-resolution spotlight; IW: interferometric wide-swath.
Table 2. FoMs of Different Methods in Different Scenes.
| Scene | Method | N_d | N_f | FoM |
|---|---|---|---|---|
| Scene 1 | CA-CFAR method | 9 | 10 | 0.4737 |
| | GΓD-CFAR method | 9 | 5 | 0.6429 |
| | Improved SP-CFAR method | 9 | 2 | 0.8182 |
| | SLCM method | 8 | 0 | 0.8889 |
| | Adaptive SP-CFAR method | 8 | 0 | 0.8889 |
| | O3DF-SVM-based method | 9 | 0 | 1 |
| | P3DF-SVM-based method | 9 | 0 | 1 |
| | Proposed method | 9 | 0 | 1 |
| Scene 2 | CA-CFAR method | 12 | 8 | 0.6000 |
| | GΓD-CFAR method | 12 | 6 | 0.6667 |
| | Improved SP-CFAR method | 12 | 4 | 0.7500 |
| | SLCM method | 9 | 0 | 0.7500 |
| | Adaptive SP-CFAR method | 10 | 0 | 0.8333 |
| | O3DF-SVM-based method | 12 | 2 | 0.8571 |
| | P3DF-SVM-based method | 12 | 1 | 0.9231 |
| | Proposed method | 12 | 0 | 1 |
| Scene 3 | CA-CFAR method | 12 | 11 | 0.5217 |
| | GΓD-CFAR method | 12 | 5 | 0.7059 |
| | Improved SP-CFAR method | 12 | 4 | 0.7500 |
| | SLCM method | 11 | 2 | 0.7857 |
| | Adaptive SP-CFAR method | 11 | 1 | 0.8462 |
| | O3DF-SVM-based method | 12 | 2 | 0.8571 |
| | P3DF-SVM-based method | 12 | 2 | 0.8571 |
| | Proposed method | 12 | 1 | 0.9231 |
| Scene 4 | CA-CFAR method | 52 | 13 | 0.8000 |
| | GΓD-CFAR method | 52 | 8 | 0.8667 |
| | Improved SP-CFAR method | 52 | 6 | 0.8966 |
| | SLCM method | 47 | 0 | 0.9038 |
| | Adaptive SP-CFAR method | 49 | 0 | 0.9423 |
| | O3DF-SVM-based method | 52 | 3 | 0.9455 |
| | P3DF-SVM-based method | 52 | 2 | 0.9630 |
| | Proposed method | 52 | 1 | 0.9811 |
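The FoM values in Table 2 are consistent with the commonly used figure of merit FoM = N_d / (N_gt + N_f), where N_d is the number of correctly detected ships, N_f the number of false alarms, and N_gt the number of ground-truth ships in the scene (N_gt is inferred here from the table, e.g., 9 ships in Scene 1 and 52 in Scene 4; the paper's exact definition may differ). A minimal sketch:

```python
def figure_of_merit(n_d: int, n_f: int, n_gt: int) -> float:
    """Figure of merit: detected ships over ground-truth ships plus false alarms."""
    return n_d / (n_gt + n_f)

# Scene 1, CA-CFAR: 9 detections, 10 false alarms, 9 ground-truth ships
print(round(figure_of_merit(9, 10, 9), 4))   # 0.4737

# Scene 4, proposed method: 52 detections, 1 false alarm, 52 ground-truth ships
print(round(figure_of_merit(52, 1, 52), 4))  # 0.9811
```

Under this definition, a perfect detector (all ships found, no false alarms) attains FoM = 1, matching the entries for the SVM-based and proposed methods in Scene 1.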
Table 3. The average values and standard deviation values of FoM for different algorithms.
| | CA-CFAR Method | GΓD-CFAR Method | Improved SP-CFAR Method | SLCM Method | Adaptive SP-CFAR Method | O3DF-SVM-Based Method | P3DF-SVM-Based Method | Proposed Method |
|---|---|---|---|---|---|---|---|---|
| Average value | 0.6157 | 0.7127 | 0.8081 | 0.8312 | 0.8570 | 0.9090 | 0.9450 | 0.9724 |
| Standard deviation | 0.1166 | 0.0949 | 0.0624 | 0.0548 | 0.0412 | 0.0387 | 0.0283 | 0.0200 |
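The averages and standard deviations in Table 3 summarize each method's FoM across test scenes; note that they appear to be computed over the paper's full experiment set, not only the four scenes of Table 2, so the sketch below (using Table 2's values for two methods) is illustrative only:

```python
from statistics import mean, pstdev

# FoM per method across the four scenes of Table 2 (two methods shown)
fom = {
    "CA-CFAR": [0.4737, 0.6000, 0.5217, 0.8000],
    "Proposed": [1.0000, 1.0000, 0.9231, 0.9811],
}
for name, vals in fom.items():
    # pstdev = population standard deviation over the listed scenes
    print(f"{name}: mean={mean(vals):.4f}, std={pstdev(vals):.4f}")
```

The pattern in Table 3 (the proposed method has both the highest mean FoM and the lowest standard deviation) indicates it is simultaneously the most accurate and the most stable across scenes.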

Citation: Pan, X.; Li, N.; Yang, L.; Huang, Z.; Chen, J.; Wu, Z.; Zheng, G. Anomaly-Based Ship Detection Using SP Feature-Space Learning with False-Alarm Control in Sea-Surface SAR Images. Remote Sens. 2023, 15, 3258. https://doi.org/10.3390/rs15133258