Article

A Constant False Alarm Rate Detection Method for Sonar Imagery Targets Based on Segmented Ordered Weighting

1
National Key Laboratory of Underwater Acoustic Technology, Harbin Engineering University, Harbin 150001, China
2
Key Laboratory of Marine Information Acquisition and Security (Harbin Engineering University), Ministry of Industry and Information Technology, Harbin 150001, China
3
College of Underwater Acoustic Engineering, Harbin Engineering University, Harbin 150001, China
4
Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen 518000, China
*
Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(4), 819; https://doi.org/10.3390/jmse13040819
Submission received: 27 February 2025 / Revised: 9 April 2025 / Accepted: 15 April 2025 / Published: 20 April 2025
(This article belongs to the Section Ocean Engineering)

Abstract

:
Achieving reliable target detection in the field of sonar imagery represents a significant challenge due to the complex underwater interference patterns characterized by speckle noise, tunnel effects, and low-signal-to-noise ratio (SNR) environments. Constant false alarm rate (CFAR) detection is a fundamental target detection method in sonar target recognition. However, conventional CFAR methods face several limitations, including a slow computational speed, a high false alarm rate (FAR), and a notable missed detection rate (MDR). To address these limitations, this study proposes an innovative segmentation–detection framework. The proposed framework employs a global segmentation algorithm to identify regions of interest containing potential targets, which is followed by localized two-dimensional CFAR detection. This hierarchical framework can significantly improve computational efficiency while reducing the FAR, thus enabling the practical implementation of advanced, computationally intensive CFAR detection methods in real-time target detection in sonar imagery. In addition, an innovative segmented-ordered-weighting CFAR (SOW-CFAR) detection method that integrates multiple weighting windows to implement ordered weighting of reference cells is developed. This method can effectively reduce both the FAR and MDR through optimized reference cell processing. The experimental results demonstrate that the proposed method can achieve superior detection performance in sonar imagery applications compared to the existing methods. The proposed SOW-CFAR detection method can achieve fast and accurate target detection in the sonar imagery field.

1. Introduction

Imaging sonar is one of the most crucial high-tech marine exploration devices in the field of contemporary ocean engineering. Imaging sonars have found extensive applications in submarine construction and maintenance, marine mapping, and oceanic resource exploration [1,2,3]. However, the detection performance of imaging sonars during practical operation is significantly affected by several factors, including complex and variable marine acoustic fields, environmental noise, and unstable platform motion. This can result in a low signal-to-noise ratio (SNR) of target echoes, prevalent speckle noise in sonar images, and inconsistent imaging quality [4,5,6]. Therefore, achieving reliable target detection while maintaining low values of the false alarm rate (FAR) and missed detection rate (MDR) in complex environments remains a critical challenge in the field of sonar signal processing. Constant false alarm rate (CFAR) detection technology has demonstrated superior capability in controlling false alarms in environments with Gaussian-distributed backgrounds [7,8]. Recently, with continuous advancements in CFAR detection methods, this technique has gained widespread adoption in sonar imagery applications [9,10].
To enable autonomous target detection in side-scan sonar imagery, Villar et al. [11,12] extended the cell-averaging CFAR (CA-CFAR) detection methodology from one-dimensional to two-dimensional processing and achieved satisfactory performance in pipeline detection using side-scan sonar systems. However, although the CA-CFAR algorithm can achieve good computational efficiency and optimal performance in homogeneous backgrounds, its detection capability significantly deteriorates in heterogeneous environments, particularly in multiple-target interference scenarios. To address these limitations, Verma et al. [13] developed a variability index CFAR (VI-CFAR) detection algorithm by integrating the CA method with the greatest of (GO) and smallest of (SO) techniques for sonar target detection. The results indicated that the VI-CFAR method could reduce the CFAR loss in homogeneous environments while maintaining reasonable robustness for heterogeneous backgrounds characterized by interference targets and clutter transitions. However, its detection performance degraded when interference targets appeared in both reference windows simultaneously. Wang et al. [14] designed an enhanced detection method by considering the distinct noise distribution characteristics in both the azimuth and range dimensions of sonar imagery. They extended the weighted-cell-averaging CFAR (WCA-CFAR) technique to two-dimensional sonar image processing and achieved balanced target detection performance using multibeam bathymetric sonar water column imagery. Nevertheless, using this approach in practice requires prior knowledge of background statistics, which might limit its implementation in dynamic underwater environments. The aforementioned methods belong to the category of mean-based CFAR detection methods.
Another significant category of CFAR detection methods includes ordered-statistics (OS)-based methods, such as ordered-statistics CFAR (OS-CFAR) [15], censored-mean CFAR (CM-CFAR) [16], and weighted-window CFAR (WW-CFAR) [17]. The OS-based methods have demonstrated superior robustness compared to their mean-based counterparts in heterogeneous environments.
Villar et al. [15] applied the 2D OS-CFAR detection method to the field of side-scan sonar imagery. They proposed a new approach to improve the algorithmic efficiency of the 2D OS-CFAR using distributive histograms and the optimal breakdown point concept. Wang et al. [16] developed the censored-mean CFAR (CM-CFAR) detection method specifically for detecting terrain targets in multibeam bathymetric sonar water column imagery. This approach outperformed the OS-CFAR method in terms of detection accuracy. The enhanced adaptability is due to the use of an expanded reference cell configuration, which provides more comprehensive environmental sampling. However, the CM-CFAR method demonstrates an elevated MDR at low SNR, which is a common scenario in sonar target detection applications. Meng et al. [17] introduced a weighted window technique into the OS-based detection, applied reduced weighting coefficients to both the higher- and lower-order samples, and proposed the weighted-window CFAR (WW-CFAR) detection method for radar target detection. The WW-CFAR could maintain a superior detection rate in low-SNR environments but exhibited increased susceptibility to false alarms in heterogeneous backgrounds, such as sonar imagery.
Therefore, maintaining an optimal detection performance in complex sonar imaging environments still represents a significant technical challenge. Furthermore, advanced detection algorithms, such as the CM-CFAR and WW-CFAR methods, exhibit substantial computational complexity. In addition, comprehensive detection processing of full-frame sonar imagery requires extensive computational resources and long processing times, which indicates the necessity of the development of efficient detection methods to enhance real-time operational capabilities in practical sonar target detection applications.
To address the aforementioned challenges, this study proposes a segmentation–detection framework for rapid CFAR detection in sonar imagery. The proposed framework employs a dual-stage approach, where the global target region segmentation is performed by Otsu’s thresholding method first, and then local advanced two-dimensional CFAR detection is conducted. In high-resolution sonar images where targets occupy minimal pixel areas, using Otsu’s segmentation method can effectively eliminate both non-target regions and obvious interference sources that can trigger false alarms, thus significantly reducing the computational complexity and FAR value in subsequent detection processes. Based on the proposed detection framework, a segmented-ordered-weighting CFAR (SOW-CFAR) detection method is developed. In the proposed method, the local detection phase incorporates a hybrid method that combines rectangular and trapezoidal ordered weighting windows. The rectangular weighting window is employed under high-SNR conditions to suppress strong interference, whereas the trapezoidal weighting window is used in low-SNR scenarios to prevent missed detections. Such an adaptive weighting strategy enhances the proposed method’s robustness in complex sonar imaging environments and improves its overall detection performance. The effectiveness of the proposed segmentation–detection framework and SOW-CFAR method is verified by the target detection experiments conducted on the water column images acquired by multibeam bathymetric sonar, evaluating the related performance metrics.
The main contributions of this work can be summarized as follows:
(1)
A segmentation–detection framework that integrates global Otsu segmentation with local two-dimensional CFAR detection is proposed, which can significantly reduce the computational complexity and FAR in subsequent detection processes;
(2)
A SOW-CFAR detection method is developed. Considering the identification results of potential target regions, this method employs a hybrid framework that combines rectangular and trapezoidal ordered weighting windows in the local refined CFAR detection phase. This approach enhances the algorithm’s adaptability to complex sonar imaging environments, reduces the MDR value, and improves the overall detection performance;
(3)
Comprehensive lake trials are conducted to perform a detailed comparative analysis of the proposed segmentation–detection framework and SOW-CFAR detection method. The empirical evaluation results validate the proposed algorithm’s performance against the widely used CA and OS-CFAR methods.
The remainder of this paper is organized as follows. Section 2 introduces a one-dimensional CFAR detection model, describes a segmentation–detection framework for rapid target detection in the sonar imagery field in detail, and provides a detailed explanation of the SOW-CFAR detection method. Section 3 provides a comparative performance analysis of various CFAR detection methods through lake trials and demonstrates the effectiveness of the proposed approaches. Section 4 discusses the advantages of the proposed method, and analyzes the underlying reasons for its superior performance over the existing methods. Section 5 analyzes the proposed method’s limitations and outlines potential directions for future research. Finally, Section 6 concludes this study and briefly presents future application directions.

2. Proposed Method

2.1. Fundamental Principles of CFAR Detection

The segmentation–detection framework and SOW-CFAR method based on the fundamental principles of one-dimensional CFAR theory are proposed to achieve rapid and reliable target detection in the field of sonar imagery. To facilitate a comprehensive analysis of the proposed solutions, this study first introduces the theoretical framework of one-dimensional CFAR detection. The schematic diagram of a one-dimensional CFAR detector structure is displayed in Figure 1.
A CFAR detector uses a sliding window structure and generates adaptive thresholds locally to maintain a relatively constant false alarm rate under ideal conditions. In Figure 1, the orange cell, denoted by D, represents the detection statistic in the cell under test (CUT) in the presented framework. Next to the CUT, there are two guard cells, which prevent target energy leakage into the reference cells, thus preserving the accuracy of the local clutter power estimation process. The cells on both sides of the CUT, marked in a light blue color, denote the reference cells, which are represented by x_i (i = 1, …, R), where R = 2n defines the total length of the reference sliding window, and n is the length of the single-sided reference window. Assume that Z denotes the local estimate of clutter power derived from the reference cells x_i, and T represents the threshold multiplier; then, the adaptive decision criterion can be expressed as follows:
$$D \mathop{\gtrless}_{H_0}^{H_1} T Z, \tag{1}$$
where H_1 represents the hypothesis of target presence, and H_0 denotes the hypothesis of target absence.
The envelope of Gaussian-distributed clutter follows a Rayleigh distribution. However, after square-law detection, each reference cell sample in the detection window exhibits an exponential distribution, which is characterized by the probability density function (PDF) expressed by
$$f(x) = \frac{1}{\lambda} \exp\left(-\frac{x}{\lambda}\right), \quad x \geq 0. \tag{2}$$
Under hypothesis H_0, where no target is present, the parameter λ denotes the total average power level of the background clutter plus noise, denoted by μ. In contrast, under hypothesis H_1, where a target is present, λ is expressed by μ(1 + SNR), where SNR denotes the ratio of the signal power to the average power of the clutter-plus-noise interference. Thus, it can be written that
$$\lambda = \begin{cases} \mu, & H_0, \\ \mu(1 + \mathrm{SNR}), & H_1. \end{cases} \tag{3}$$
Assume that the reference cell samples are independent and identically distributed (IID) with λ equal to μ; then, under the threshold S = TZ, which is a random variable, the false alarm probability (P_fa) is calculated using the statistical expectation over Z as follows:
$$P_{fa} = E_S\!\left[\Pr\left(D \geq S \mid H_0\right)\right] = \int_0^{\infty} f_Z(z) \int_{Tz}^{\infty} \frac{1}{\mu} e^{-x/\mu}\, \mathrm{d}x\, \mathrm{d}z = \int_0^{\infty} e^{-Tz/\mu} f_Z(z)\, \mathrm{d}z = \left. M_Z(t) \right|_{t = T/\mu}, \tag{4}$$
where f_Z(z) is the PDF of variable Z, and M_Z(t) = ∫₀^∞ e^{−tz} f_Z(z) dz is the moment-generating function (MGF) of variable Z.
When t = T/(μ(1 + SNR)), Equation (4) reduces to the detection probability (P_d) formula under a uniform clutter background, which is expressed by
$$P_d = \left. M_Z(t) \right|_{t = T/(\mu(1 + \mathrm{SNR}))}. \tag{5}$$
In the CA-CFAR method, the estimation of the background power level is conducted based on the mean value of R reference cell samples, which is expressed by
$$Z_{CA} = \frac{1}{R} \sum_{i=1}^{R} x_i, \tag{6}$$
where x_i follows an exponential distribution, which is a special case of the Gamma distribution with a shape parameter of 1, that is, x_i ∼ G(1, 1/μ).
Further, considering that all reference cell samples are IID, it follows that Z_CA ∼ G(R, R/μ); thus, the MGF of Z_CA can be expressed by
$$M_{Z,CA}(t) = \left(1 + \frac{\mu t}{R}\right)^{-R}. \tag{7}$$
By employing the MGF, the P f a and P d values of the CA-CFAR detection method can be respectively obtained by
$$P_{fa,CA} = \left. M_{Z,CA}(t) \right|_{t = T/\mu} = \left(1 + \frac{T}{R}\right)^{-R}, \tag{8}$$
$$P_{d,CA} = \left. M_{Z,CA}(t) \right|_{t = T/(\mu(1 + \mathrm{SNR}))} = \left(1 + \frac{T}{R(1 + \mathrm{SNR})}\right)^{-R}. \tag{9}$$
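The CA-CFAR false alarm expression can be inverted in closed form to obtain the threshold multiplier T for a desired P_fa. The following sketch (Python, an illustrative stand-in for the MATLAB environment used later in the experiments) shows this inversion together with the corresponding detection probability:

```python
def ca_cfar_threshold_multiplier(R: int, pfa: float) -> float:
    """Invert Eq. (8): P_fa = (1 + T/R)^(-R)  =>  T = R * (P_fa^(-1/R) - 1)."""
    return R * (pfa ** (-1.0 / R) - 1.0)

def ca_cfar_pd(R: int, T: float, snr: float) -> float:
    """Eq. (9): detection probability of CA-CFAR under exponential clutter."""
    return (1.0 + T / (R * (1.0 + snr))) ** (-R)

# Example: R = 128 reference cells and a desired P_fa of 2%,
# matching the parameter settings used in the lake-trial experiments
R, pfa = 128, 0.02
T = ca_cfar_threshold_multiplier(R, pfa)
# Round-trip check: plugging T back into Eq. (8) recovers P_fa
assert abs((1.0 + T / R) ** (-R) - pfa) < 1e-9
```

Note that the threshold multiplier depends only on R and the desired P_fa, not on the clutter power μ, which is the CFAR property discussed below.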
However, in the OS-CFAR detection method, the detection process begins by sorting R reference cell samples in ascending order, which can be expressed by
$$x_{(1)} \leq x_{(2)} \leq \cdots \leq x_{(R)}. \tag{10}$$
Next, the kth ordered sample is selected as an estimate of the background power level Z, which is defined by
$$Z_{OS} = x_{(k)}. \tag{11}$$
The PDF of Z_OS is expressed by
$$f_{Z,OS}(z) = \frac{k}{\mu} \binom{R}{k} e^{-(R-k+1)z/\mu} \left(1 - e^{-z/\mu}\right)^{k-1}. \tag{12}$$
The MGF of Z_OS is calculated by
$$M_{Z,OS}(t) = k \binom{R}{k} \frac{\Gamma(R - k + t\mu + 1)\, \Gamma(k)}{\Gamma(R + t\mu + 1)}. \tag{13}$$
Using the MGF value calculated by Equation (13), the P f a and P d values of the OS-CFAR detection method are respectively obtained by
$$P_{fa,OS} = \left. M_{Z,OS}(t) \right|_{t = T/\mu} = k \binom{R}{k} \frac{\Gamma(R - k + T + 1)\, \Gamma(k)}{\Gamma(R + T + 1)}, \tag{14}$$
$$P_{d,OS} = \left. M_{Z,OS}(t) \right|_{t = T/(\mu(1 + \mathrm{SNR}))} = k \binom{R}{k} \frac{\Gamma\!\left(R - k + \frac{T}{1 + \mathrm{SNR}} + 1\right) \Gamma(k)}{\Gamma\!\left(R + \frac{T}{1 + \mathrm{SNR}} + 1\right)}. \tag{15}$$
Based on the above derivations, it can be concluded that both the CA-CFAR and OS-CFAR methods exhibit the CFAR property because their P_fa and P_d values are independent of μ.
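As a numerical illustration, the OS-CFAR false alarm probability of Equation (14) can be evaluated stably in log-gamma form and checked against the equivalent product form P_fa = ∏_{i=0}^{k−1} (R − i)/(R − i + T); this is an illustrative sketch, not the authors' implementation:

```python
from math import lgamma, exp, log

def os_cfar_pfa(R: int, k: int, T: float) -> float:
    """Eq. (14): P_fa = k * C(R,k) * Gamma(R-k+T+1) * Gamma(k) / Gamma(R+T+1),
    evaluated in log space because the gamma terms over- and underflow."""
    log_pfa = (
        log(k)
        + lgamma(R + 1) - lgamma(k + 1) - lgamma(R - k + 1)  # log C(R, k)
        + lgamma(R - k + T + 1) + lgamma(k) - lgamma(R + T + 1)
    )
    return exp(log_pfa)

def os_cfar_pfa_product(R: int, k: int, T: float) -> float:
    """Equivalent product form: P_fa = prod_{i=0}^{k-1} (R - i) / (R - i + T)."""
    p = 1.0
    for i in range(k):
        p *= (R - i) / (R - i + T)
    return p

# The two forms agree, and T = 0 gives P_fa = 1 (no thresholding)
assert abs(os_cfar_pfa(128, 96, 20.0) / os_cfar_pfa_product(128, 96, 20.0) - 1) < 1e-9
assert abs(os_cfar_pfa_product(64, 48, 0.0) - 1.0) < 1e-12
```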
Traditional one-dimensional CFAR detectors employ a detection window that slides along a single axis, which can significantly reduce their robustness when background characteristics exhibit rapid variations. Further, due to the complex marine environment and the application of spatial filtering algorithms, sonar imagery often shows substantial noise fluctuations in both the azimuth and range dimensions. Therefore, to achieve superior detection performance, this study adopts a two-dimensional detection window approach. The schematic diagram of two common types of two-dimensional detection windows is illustrated in Figure 2. Unlike one-dimensional detection windows, two-dimensional detection windows, which include rectangular and cross detection window configurations, offer distinct advantages by considering the noise characteristics in the vertical range and horizontal azimuth dimensions simultaneously. The cross detection window configuration has substantially fewer reference cells than rectangular windows of the same dimensions, which reduces computational complexity significantly. Consequently, this study employs a cross detection window to maintain computational efficiency. When the background clutter power is estimated using a two-dimensional cross detection window, the total number of reference cells is given by R = 4n.
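As an illustration of the cross-shaped window, the following sketch collects the R = 4n reference cells around a CUT while skipping the guard cells; the function and its parameters are hypothetical helpers illustrating the window of Figure 2, not the paper's code:

```python
import numpy as np

def cross_window_reference_cells(img, row, col, n=4, guard=2):
    """Collect the R = 4n reference cells of a cross-shaped 2D window.

    Each of the four arms of the CUT at (row, col) contributes n reference
    cells; the innermost `guard` cells on every arm are skipped, and cells
    falling outside the image are dropped.
    """
    cells = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # up, down, left, right
        for step in range(guard + 1, guard + n + 1):
            r, c = row + dr * step, col + dc * step
            if 0 <= r < img.shape[0] and 0 <= c < img.shape[1]:
                cells.append(img[r, c])
    return np.asarray(cells)

# Exponential (square-law detected) background, CUT in the image interior
img = np.random.default_rng(0).exponential(size=(64, 64))
ref = cross_window_reference_cells(img, 32, 32, n=4, guard=2)
assert ref.size == 16  # 4 arms x n = 4 cells each
```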

2.2. Segmentation–Detection Framework

In sonar imagery data, targets occupy an extremely small fraction of data points, and therefore, performing two-dimensional CFAR detection on every pixel would require extensive computational resources and processing times. To address this challenge, this study proposes a segmentation–detection framework for rapid CFAR-based target detection in sonar images. The proposed segmentation–detection framework is illustrated in Figure 3, where it can be seen that it includes two sequential stages, which are as follows:
(1)
Global Segmentation Stage: In this stage, a global threshold is calculated based on the distinct distribution characteristics between the target and background data, which enables rapid segmentation of potential target regions, yielding precisely segmented target regions.
(2)
Local Adaptive Detection Stage: In this stage, a two-dimensional CFAR sliding window is applied to the sonar data based on the segmented target regions to calculate localized adaptive thresholds. Subsequently, localized 2D-CFAR detection is performed, generating precise target detection results.
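The two stages above can be sketched as a single pipeline, with hypothetical callables standing in for Otsu segmentation and the local 2D-CFAR test:

```python
import numpy as np

def segment_then_detect(img, segment_fn, local_cfar_fn):
    """Two-stage segmentation-detection pipeline (Figure 3).

    Stage 1 applies a global threshold to flag candidate target pixels;
    stage 2 runs a local 2D-CFAR test only on those candidates, so most
    background pixels are never visited by the sliding window.
    """
    candidates = segment_fn(img)                      # boolean mask, stage 1
    detections = np.zeros_like(candidates, dtype=bool)
    for r, c in zip(*np.nonzero(candidates)):         # stage 2: local CFAR
        detections[r, c] = local_cfar_fn(img, r, c)
    return detections

# Toy usage with a fixed global threshold and a trivial local test
img = np.random.default_rng(2).exponential(1.0, (32, 32))
img[10, 10] = 50.0                                     # planted bright target
out = segment_then_detect(
    img,
    segment_fn=lambda x: x > 10.0,
    local_cfar_fn=lambda x, r, c: x[r, c] > 4.0 * np.median(x),
)
assert out[10, 10]
```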
Figure 3. The block diagram of the proposed segmentation–detection framework.
Consider a sonar image frame with a size of M × N, where the intensity level range is {0, 1, …, L − 1}, and let f_i denote the frequency of intensity level i. Then, the probability of occurrence of intensity i is expressed by
$$P_i = \frac{f_i}{M \times N}, \quad P_i \geq 0, \quad \sum_{i=0}^{L-1} P_i = 1. \tag{16}$$
By using an intensity threshold k as a boundary, data points whose intensity levels belong to the set {0, 1, …, k} are classified as background (I_0), and data points whose intensity levels belong to the set {k + 1, k + 2, …, L − 1} are classified as target (I_1), thus partitioning the entire dataset into two classes denoted by I_0 and I_1. The probabilities (w_0, w_1) of the two classes and their mean intensities (μ_0, μ_1) are respectively defined as follows:
$$w_0 = \sum_{i=0}^{k} P_i, \quad w_1 = \sum_{i=k+1}^{L-1} P_i = 1 - w_0, \quad \mu_0 = \sum_{i=0}^{k} \frac{i P_i}{w_0}, \quad \mu_1 = \sum_{i=k+1}^{L-1} \frac{i P_i}{w_1}. \tag{17}$$
The global mean intensity of an image is expressed by
$$\mu_T = \sum_{i=0}^{L-1} i P_i. \tag{18}$$
The inter-class variance between classes I 0 and I 1 is calculated by
$$S_B = w_0 (\mu_0 - \mu_T)^2 + w_1 (\mu_1 - \mu_T)^2. \tag{19}$$
Otsu’s method [18] selects an optimal intensity threshold k* by maximizing the inter-class variance, that is, the S_B value, which quantifies the separation between the two classes. The optimal threshold is defined by
$$k^* = \arg\max_{0 \leq k \leq L-1} S_B. \tag{20}$$
By iterating through the sonar data, any data point whose intensity is larger than the predetermined optimal intensity segmentation threshold is classified as a target; otherwise, it is classified as a background. The application of Otsu’s method can effectively isolate target regions while removing the majority of the background, thus significantly reducing the computational load of subsequent detection algorithms. In addition, the segmentation process based on Otsu’s method can also remove some obvious interference sources that could potentially trigger false alarms, which further reduces the false alarm rate of a detection system.
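A compact implementation of Otsu's threshold selection (Eqs. (16)–(20)) can be written with cumulative histograms; this is a generic sketch of the standard method, not the authors' code, and it uses the equivalent form S_B = (μ_T w_0 − μ(k))² / (w_0 w_1), where μ(k) is the cumulative first moment:

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Otsu's method: pick k* maximizing the inter-class variance S_B."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()                      # P_i, Eq. (16)
    w0 = np.cumsum(p)                          # background class probability
    mu = np.cumsum(np.arange(levels) * p)      # cumulative first moment mu(k)
    mu_T = mu[-1]                              # global mean, Eq. (18)
    w1 = 1.0 - w0
    # Compact equivalent of Eq. (19): S_B = (mu_T*w0 - mu)^2 / (w0*w1)
    with np.errstate(divide="ignore", invalid="ignore"):
        s_b = (mu_T * w0 - mu) ** 2 / (w0 * w1)
    s_b = np.nan_to_num(s_b)                   # empty classes contribute 0
    return int(np.argmax(s_b))                 # k*, Eq. (20)

# Bimodal toy data: dark background plus a small bright "target" patch
rng = np.random.default_rng(1)
img = rng.normal(40, 5, (100, 100)).clip(0, 255)
img[40:50, 40:50] = rng.normal(200, 5, (10, 10)).clip(0, 255)
k_star = otsu_threshold(img.astype(int))
assert 50 < k_star < 200   # threshold falls between the two modes
mask = img > k_star        # True marks candidate target pixels
```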

2.3. CFAR Detection Based on Segmented Ordered Weighting

Based on the segmentation–detection framework proposed in Section 2.2, this study presents the SOW-CFAR detection method. First, Otsu’s method is employed to segment the region where the targets are located. Then, a local two-dimensional ordered-weighting CFAR detection is performed using a cross detection window that simultaneously considers the noise characteristics in both the azimuth and range dimensions. The schematic diagram of the local two-dimensional ordered-weighting CFAR detection in SOW-CFAR is displayed in Figure 4.
Inspired by the OS-based methods, the proposed SOW-CFAR method sorts the reference cells in ascending order (i.e., from the smallest to the largest) in the sliding window, yielding ordered samples, which can be expressed as
$$x_{(1)} \leq x_{(2)} \leq \cdots \leq x_{(R)}. \tag{21}$$
It should be noted that higher-order samples (e.g., x_(R)) might overestimate the background power, thus suppressing weak targets; in contrast, lower-order samples (e.g., x_(1)) might underestimate the background power, thus increasing false alarms. To address this problem, the proposed method assigns weighting coefficients ω_i (i = 1, 2, …, R) to the ordered samples and uses the weighted sum for background power estimation. The weighted sum is defined by
$$Z = \sum_{i=1}^{R} \omega_i x_{(i)}. \tag{22}$$
Although the original reference cells are statistically independent, the ordered statistics are not. To resolve this problem, this study introduces auxiliary variables [19], and an auxiliary variable ν_i is defined as follows:
$$\nu_i = (R + 1 - i) \left( x_{(i)} - x_{(i-1)} \right), \quad x_{(0)} = 0. \tag{23}$$
The auxiliary variables are all IID and follow an exponential distribution defined by
$$f(\nu_i) = \frac{1}{\mu} \exp\left(-\frac{\nu_i}{\mu}\right), \quad \nu_i \geq 0. \tag{24}$$
Through algebraic manipulation, the weighted sum Z can be expressed as follows:
$$Z = \sum_{i=1}^{R} \omega_i x_{(i)} = \sum_{i=1}^{R} \frac{\nu_i}{c_i}, \tag{25}$$
where
$$c_i = \frac{R + 1 - i}{\sum_{j=i}^{R} \omega_j}. \tag{26}$$
Thus, the MGF of the weighted sum Z becomes
$$M_Z(t) = \prod_{i=1}^{R} \frac{c_i}{\mu t + c_i}. \tag{27}$$
Further, using the MGF, the P_fa and P_d values are respectively calculated by
$$P_{fa} = \left. M_Z(t) \right|_{t = T/\mu} = \prod_{i=1}^{R} \frac{c_i}{T + c_i} = \prod_{i=1}^{R} \frac{(R + 1 - i)\big/\sum_{j=i}^{R} \omega_j}{T + (R + 1 - i)\big/\sum_{j=i}^{R} \omega_j}, \tag{28}$$
$$P_d = \left. M_Z(t) \right|_{t = T/(\mu(1 + \mathrm{SNR}))} = \prod_{i=1}^{R} \frac{c_i}{\frac{T}{1 + \mathrm{SNR}} + c_i} = \prod_{i=1}^{R} \frac{(R + 1 - i)\big/\sum_{j=i}^{R} \omega_j}{\frac{T}{1 + \mathrm{SNR}} + (R + 1 - i)\big/\sum_{j=i}^{R} \omega_j}. \tag{29}$$
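Equations (26) and (28) can be evaluated directly from a weighting vector. As a sanity check, uniform weights ω_i = 1/R give c_i = R and recover the CA-CFAR expression P_fa = (1 + T/R)^(−R); the helper below is an illustrative sketch:

```python
import numpy as np

def cfar_pfa_from_weights(weights, T):
    """P_fa of an ordered-weighting CFAR detector via Eqs. (26) and (28):
    c_i = (R + 1 - i) / sum_{j=i}^{R} w_j, then P_fa = prod_i c_i / (T + c_i)."""
    w = np.asarray(weights, dtype=float)
    R = w.size
    tail = np.cumsum(w[::-1])[::-1]          # sum_{j=i}^{R} w_j for each i
    c = (R - np.arange(R)) / tail            # (R + 1 - i) with i = 1..R
    return float(np.prod(c / (T + c)))

# Uniform weights w_i = 1/R must reproduce Eq. (8): P_fa = (1 + T/R)^(-R)
R, T = 16, 2.0
pfa_weighted = cfar_pfa_from_weights(np.full(R, 1.0 / R), T)
pfa_ca = (1.0 + T / R) ** (-R)
assert abs(pfa_weighted - pfa_ca) < 1e-12
```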
In practice, the weighting coefficients can be configured as a rectangular or a trapezoidal window, depending on the scenario. When the weighting coefficients are set as a rectangular window, they are defined as follows:
$$\omega_j = \frac{\rho_j}{\sum_{l=1}^{R} \rho_l}, \tag{30}$$
where
$$\rho_l = \begin{cases} 0, & 1 \leq l \leq R_1, \\ 1, & R_1 + 1 \leq l \leq R - R_2, \\ 0, & R - R_2 + 1 \leq l \leq R. \end{cases} \tag{31}$$
A rectangular window assigns zero weighting coefficients to both the lower- and higher-order samples, thus effectively excluding these extremes, and uses the average of the remaining samples to estimate the background power. Notably, conventional detection methods, including the CA, OS, and CM-CFAR methods, can be regarded as specialized implementations of the rectangular window framework.
The weighting coefficients of the CA, OS, and CM-CFAR methods using a rectangular window as a weighting function are respectively expressed by
$$\rho_{l,CA} = 1, \quad 1 \leq l \leq R, \tag{32}$$
$$\rho_{l,OS} = \begin{cases} 0, & 1 \leq l \leq k - 1, \\ 1, & l = k, \\ 0, & k + 1 \leq l \leq R, \end{cases} \tag{33}$$
$$\rho_{l,CM} = \begin{cases} 1, & 1 \leq l \leq R_1, \\ 0, & R_1 + 1 \leq l \leq R. \end{cases} \tag{34}$$
However, when a trapezoidal window is applied as a weighting function, ρ_l is defined as follows:
$$\rho_l = \begin{cases} \dfrac{1 - \alpha}{R_1}(l - 1) + \alpha, & 1 \leq l \leq R_1, \\ 1, & R_1 + 1 \leq l \leq R - R_2, \\ \dfrac{\beta - 1}{R_2}(l - R) + \beta, & R - R_2 + 1 \leq l \leq R, \end{cases} \quad (0 \leq \alpha \leq 1,\ 0 \leq \beta \leq 1). \tag{35}$$
A trapezoidal window assigns small weighting coefficients to both the lower- and higher-order samples rather than discarding them completely and uses the weighted sum of all ordered samples to estimate the background power. This approach ensures a more balanced and robust estimation compared to the rectangular window, particularly in heterogeneous clutter environments. The WW-CFAR method implements a trapezoidal window framework, which provides an enhanced detection performance by adaptively incorporating all reference samples while mitigating the impact of the extreme values.
A rectangular window enhances a detection algorithm’s robustness against multi-target interference, making it particularly suitable for short-range, high-SNR scenarios. However, its performance can significantly degrade in low-SNR environments and often result in missed detections. In contrast, a trapezoidal window ensures better target detection across different SNR conditions at the cost of increased false alarms. However, when a trapezoidal window is combined with a segmentation algorithm, its false alarm rate can be effectively mitigated. Therefore, by strategically integrating the rectangular and trapezoidal windows, a detection system can achieve superior overall performance, thus balancing robustness against interference with reliable target detection under diverse operational conditions. The SOW-CFAR employs a hybrid window strategy, where a rectangular window is used at close ranges to suppress strong sidelobe interference, and a trapezoidal window is adopted at long ranges to reduce missed detections of low-SNR marginal terrain echoes.
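The two weighting profiles can be generated as follows, with R_1 and R_2 denoting the numbers of down-weighted low- and high-order samples as in Equations (31) and (35). The range/SNR-based switch between the two windows is only indicated in the comments, since the exact switching rule is scenario-dependent:

```python
import numpy as np

def rho_rectangular(R, R1, R2):
    """Rectangular profile, Eq. (31): zero out the R1 lowest and R2 highest
    ordered samples, unit weight for the samples in between."""
    rho = np.zeros(R)
    rho[R1:R - R2] = 1.0
    return rho

def rho_trapezoidal(R, R1, R2, alpha, beta):
    """Trapezoidal profile, Eq. (35): ramp from alpha up to 1 over the first
    R1 samples, flat at 1, then ramp down to beta over the last R2 samples."""
    l = np.arange(1, R + 1, dtype=float)
    rho = np.ones(R)
    rho[:R1] = (1.0 - alpha) / R1 * (l[:R1] - 1.0) + alpha
    rho[R - R2:] = (beta - 1.0) / R2 * (l[R - R2:] - R) + beta
    return rho

def normalize(rho):
    """Eq. (30): omega_j = rho_j / sum_l rho_l."""
    return rho / rho.sum()

# Hybrid strategy sketch: rectangular at close range (strong sidelobes),
# trapezoidal at long range (low-SNR marginal echoes)
R, R1, R2 = 16, 4, 4
w_rect = normalize(rho_rectangular(R, R1, R2))
w_trap = normalize(rho_trapezoidal(R, R1, R2, alpha=0.2, beta=0.2))
assert abs(w_rect.sum() - 1.0) < 1e-12 and abs(w_trap.sum() - 1.0) < 1e-12
assert w_rect[0] == 0.0 and w_trap[0] > 0.0  # trapezoid keeps extreme samples
```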

3. Experimental Verification Results

This section evaluates and compares the performance of multiple CFAR detection methods through lake trials, further validating the efficacy of the proposed methods.

3.1. Experimental Hardware and Software

The experiments were performed on a desktop computer running the Microsoft Windows 11 operating system, equipped with an Intel Core i7-11800H CPU and 32 GB of RAM. In addition, MATLAB R2019b was used as a computational platform in algorithm simulations, and validation trials were conducted using actual lake data.

3.2. Experimental Data

In this study, Songhua Lake in northeastern China was selected as the experimental site to evaluate the performance of the proposed detection methods. An HT-series multibeam bathymetric sonar system developed by Harbin Engineering University was employed for underwater terrain data acquisition. The HT-series multibeam bathymetric sonar system is illustrated in Figure 5. This system’s main technical parameters were as follows: The center frequency was 200 kHz, the transmitted signal was a single-frequency rectangular pulse, and the receiving linear array consisted of 108 elements. It employed a Mills Cross array architecture, as shown in Figure 6.
The experimental data were collected during November 2023, and Figure 7 shows the experimental device during navigation. The raw echo data acquired by the HT-series multibeam bathymetric sonar system underwent sequential processing, which included amplification, filtering, demodulation, and beamforming processes conducted to generate echo signals in various directions. Figure 8 shows a portion of the echo signals coming from various directions.
The echo signals were organized by direction to generate a water column image. Two representative image frames are illustrated in Figure 9, and they were selected as test datasets. These images exhibit continuous strip-like features corresponding to the topographic targets, and approximately 4300 pixels and 5000 pixels were occupied by terrain targets in Images 1 and 2, respectively. The sonar imagery captured by the system contained terrain targets with significant SNR variations between the central and marginal regions, as well as notable sidelobe interference, which made these data ideal for the rigorous evaluation of the proposed algorithm.

3.3. Experimental Results

3.3.1. Performance Test of Proposed Segmentation–Detection Framework

To validate the effectiveness of the proposed segmentation–detection framework, this study conducted comparative tests with widely used CFAR methods, including the CA and OS-CFAR methods, which were explained in detail in Section 2.1.
The proposed segmentation–detection framework was applied to the two comparison methods, yielding the segmented-cell-averaging CFAR (SCA-CFAR) and segmented-ordered-statistics CFAR (SOS-CFAR) detection methods. Both the SCA-CFAR and SOS-CFAR detection methods first executed the Otsu-based global segmentation process, followed by localized CA- or OS-CFAR operations performed on the segmented regions.
The multibeam sonar imagery, Image 1, with approximately 4300 pixels occupied by terrain targets, was selected to compare the SCA- and SOS-CFAR methods with the CA- and OS-CFAR methods using identical parameters (n = 32, R = 128, P_fa = 2%). The detection results are shown in Figure 10.
The following six metrics were selected to evaluate the detection performance of all the methods:
(1)
Precision: This denoted a proportion of true targets among all detected positives;
(2)
Accuracy: This metric indicated a proportion of correct predictions among all samples;
(3)
Recall: This indicated a proportion of true targets correctly identified among all true targets;
(4)
MDR: This represented a proportion of undetected targets;
(5)
FAR: This denoted a proportion of false positives among detected positives;
(6)
Elapsed time: This was the computational time required for detection processing.
The following four key parameters were defined: True Positives (TPs), which represented the number of samples correctly classified as targets; True Negatives (TNs), which denoted the samples accurately identified as background; False Positives (FPs), which indicated the count of background samples erroneously detected as targets; and False Negatives (FNs), which corresponded to the instances where true targets were mistakenly classified as background. Accordingly, the calculation formulas of the first five evaluation metrics were defined as follows:
$$\text{Precision} = \frac{TP}{TP + FP}, \tag{36}$$
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \tag{37}$$
$$\text{Recall} = \frac{TP}{TP + FN}, \tag{38}$$
$$\text{MDR} = \frac{FN}{TP + FN}, \tag{39}$$
$$\text{FAR} = \frac{FP}{FP + TN}. \tag{40}$$
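For completeness, the five metrics can be computed directly from the confusion-matrix counts; the counts used below are illustrative, not the paper's data:

```python
def detection_metrics(tp, tn, fp, fn):
    """Evaluation metrics of Eqs. (36)-(40) from confusion-matrix counts."""
    return {
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "recall": tp / (tp + fn),
        "mdr": fn / (tp + fn),          # missed detection rate = 1 - recall
        "far": fp / (fp + tn),          # false alarm rate, Eq. (40)
    }

# Illustrative counts only
m = detection_metrics(tp=900, tn=9000, fp=50, fn=100)
assert abs(m["recall"] + m["mdr"] - 1.0) < 1e-12
```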
The quantitative performance metrics of the detection results of the four methods for Image 1 are summarized in Table 1.
By comparing the results presented in Figure 10a,b, as well as those displayed in Figure 10c,d, it can be observed that the SCA- and SOS-CFAR methods significantly reduced the number of false alarms compared to the conventional CA- and OS-CFAR methods, particularly in the close-range regions lacking terrain targets. As presented in Table 1, applying the proposed segmentation–detection framework decreased the FAR of the OS-CFAR method from 18.371% to 4.372%. This improvement could be attributed to the global segmentation step in the proposed framework, which effectively eliminated obvious interference sources that could potentially trigger false alarms, thus reducing the FAR of the detection method.
In addition, the SCA-CFAR method completed the detection process in 0.313 s, surpassing the CA-CFAR (3.936 s), OS-CFAR (10.687 s), and SOS-CFAR (0.498 s) methods. This efficiency gain could be attributed to the global segmentation step of the proposed framework, which eliminated most background regions, bypassing unnecessary local detection operations.
Consequently, the segmentation–detection framework could successfully reduce both the computational complexity and the FAR of the CA and OS-CFAR detection methods in sonar imagery target detection tasks.
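To make the segmentation–detection framework concrete, the sketch below illustrates the general idea under simplifying assumptions: a fixed global threshold first isolates candidate pixels, and a local CA-CFAR test with an exponential background model is then applied only at those pixels. The function name, the row-wise one-dimensional reference-cell layout, and the default parameters are illustrative assumptions, not the paper's exact 2D implementation:

```python
import numpy as np

def segment_then_detect(img, global_thresh, n=32, pfa=0.02, guard=2):
    """Segmentation-detection sketch: a global threshold selects candidate
    pixels, then a local CA-CFAR test is applied only at those pixels."""
    # CA-CFAR scale factor for an exponential background: Pfa = (1 + a/n)^(-n)
    alpha = n * (pfa ** (-1.0 / n) - 1.0)
    half = n // 2
    candidates = np.argwhere(img > global_thresh)
    detections = np.zeros(img.shape, dtype=bool)
    for r, c in candidates:
        # reference cells taken along the row, skipping guard cells
        lo = img[r, max(0, c - guard - half):max(0, c - guard)]
        hi = img[r, c + guard + 1:c + guard + 1 + half]
        ref = np.concatenate([lo, hi])
        if ref.size == 0:
            continue
        if img[r, c] > alpha * ref.mean():
            detections[r, c] = True
    return detections
```

Because the expensive per-pixel CFAR test runs only on the (sparse) candidate set rather than on every pixel, the runtime scales with the number of candidates instead of the image size, which is the source of the speed-ups reported above.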

3.3.2. Performance Test of Proposed SOW-CFAR Detection Method

To validate the effectiveness of the proposed SOW-CFAR method, this study conducted comparative tests with the CA and OS-CFAR methods. The multibeam sonar image, Image 2, in which approximately 5000 pixels are occupied by terrain targets, was used to compare the proposed SOW-CFAR method with the CA and OS-CFAR methods under identical parameters $(n = 32,\ R = 128,\ P_{fa} = 2\%)$. The detection results are displayed in Figure 11, and the quantitative metrics are summarized in Table 2.
By comparing the results presented in Figure 11a–c, it can be concluded that the SOW-CFAR method produced fewer false alarms than the conventional CA and OS-CFAR methods in the close-range regions lacking terrain targets. As presented in Table 2, the SOW-CFAR method achieved a precision of 96.357% and a FAR of 3.643%, outperforming the CA and OS-CFAR methods. The SOW-CFAR method completed the detection process in 0.607 s, compared to 4.016 s for CA-CFAR and 10.855 s for OS-CFAR, even though the inherent computational complexity of SOW-CFAR is higher than that of the other two methods. These improvements can be attributed to the proposed segmentation–detection framework, which removed most background regions and thus avoided computationally intensive detection operations on non-critical regions.
In the low-SNR far-range marginal terrain regions, the SOW-CFAR method achieved a significantly higher detection count of topographic targets compared to the conventional CA and OS-CFAR detectors and had an MDR of 21.148%, thus outperforming the CA and OS-CFAR methods with the MDR values of 57.443% and 46.038%, respectively, as presented in Table 2. This improvement could be attributed to the trapezoidal weighting window employed by the SOW-CFAR method in these regions, which computed a lower amplitude threshold, thus enabling the detection of a larger number of topographic targets.
Consequently, the SOW-CFAR method demonstrated significant advantages over the CA and OS-CFAR methods in terms of computational efficiency, FAR, and MDR.

4. Discussion

In the proposed segmentation–detection framework, a global segmentation threshold, calculated from the distinct distribution characteristics of the target and background data, is applied to the entire image, which significantly reduces the number of pixels requiring refined 2D-CFAR detection. This approach drastically reduces the computational time while enhancing efficiency. As explained in Section 3.3.1, although the SCA-CFAR method achieved the shortest detection time of 0.313 s in the experimental tests, the OS-CFAR method exhibited the most substantial computational speed improvement, with its detection time reduced from 10.687 s to 0.498 s. Notably, the OS-CFAR method involves sorting operations, which inherently impose higher computational complexity than the CA-CFAR method. This implies that the proposed framework provides greater computational acceleration for algorithms with higher complexity. As presented in Section 3.3.2, leveraging the segmentation–detection framework, the SOW-CFAR method, despite having the highest inherent complexity, achieved a runtime of 0.607 s in the experiments, outperforming both the CA (4.016 s) and OS-CFAR (10.855 s) methods and demonstrating real-time processing capability.
The SCA and SOS-CFAR methods achieved lower FAR compared to the CA and OS-CFAR methods in Section 3.3.1. This performance enhancement can be attributed to the global segmentation stage, which determined an optimal intensity threshold by maximizing inter-class variance values. Through iterative sonar data processing, interference sources with intensity values less than this optimized segmentation threshold were systematically classified as background noise and eliminated. Experimental results demonstrated that prominent false-alarm-inducing interference could be effectively suppressed during rapid target region isolation. However, the proposed framework exhibited limited effectiveness in mitigating tunnel-effect interference caused by strong direct-path reflections, as evidenced by retained high-amplitude artifacts being incorrectly classified as valid targets during the segmentation phase.
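The inter-class-variance maximization described above is Otsu's method [18]. A minimal histogram-based sketch follows; the bin count and implementation details are our assumptions, not the paper's exact segmentation code:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Global segmentation threshold chosen by maximizing the inter-class
    variance of the intensity histogram (Otsu's method, sketch)."""
    hist, edges = np.histogram(np.ravel(img), bins=bins)
    p = hist / hist.sum()                       # normalized histogram
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                           # class-0 (background) weight
    w1 = 1.0 - w0                               # class-1 (target) weight
    mu = np.cumsum(p * centers)                 # cumulative first moment
    mu_t = mu[-1]                               # global mean intensity
    # inter-class variance for every candidate split point
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    sigma_b = np.nan_to_num(sigma_b, nan=0.0, posinf=0.0)
    return centers[np.argmax(sigma_b)]
```

Pixels below the returned threshold are classified as background and excluded from the subsequent local CFAR stage, which is how low-intensity interference sources are suppressed before detection.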
By strategically integrating rectangular and trapezoidal windows, the SOW-CFAR method achieved lower FAR and MDR values compared to the other methods in Section 3.3.2. In high-SNR near-range regions, even when strong interferences evaded global segmentation, the rectangular window could partially mitigate them by discarding high-order ordered samples during background power estimation, thus suppressing artifacts like tunnel effects. In low-SNR far-range marginal terrain regions, unlike the rectangular window, the trapezoidal window assigned smaller weights to low- and high-order samples instead of fully discarding them, using a weighted sum of all ordered samples to estimate background power. Compared to the rectangular window, the trapezoidal approach engaged more reference cells in background estimation, enhancing equilibrium and robustness. This resulted in a lower detection threshold, thus significantly improving the terrain target recall rate and reducing the MDR value. As shown in Table 2, the SOW-CFAR method achieved an MDR of 21.148%, outperforming the CA (57.443%) and OS-CFAR (46.038%) methods.
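The contrast between the two windows can be illustrated with a toy ordered-weighting background-power estimator. The trim counts, ramp length, and minimum weight below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def ordered_weighted_power(ref, window="rectangular", k_lo=4, k_hi=8, ramp=4):
    """Background power estimate from sorted (ordered) reference cells
    under a rectangular or trapezoidal weighting window (sketch)."""
    x = np.sort(np.asarray(ref, dtype=float))   # order statistics
    n = x.size
    if window == "rectangular":
        # keep middle-order samples; discard k_lo lowest and k_hi highest
        w = np.zeros(n)
        w[k_lo:n - k_hi] = 1.0
    else:
        # trapezoidal: down-weight (not discard) low- and high-order samples
        w = np.ones(n)
        w[:ramp] = np.linspace(0.2, 1.0, ramp)
        w[n - ramp:] = np.linspace(1.0, 0.2, ramp)
    return float(np.sum(w * x) / np.sum(w))
```

With a single strong outlier among the reference cells, the rectangular window discards it entirely, the trapezoidal window retains it at reduced weight, and a plain cell-averaging mean is pulled up by it the most, which mirrors the FAR/MDR trade-off discussed above.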

5. Limitations

Although the proposed segmentation–detection framework and SOW-CFAR method perform exceptionally well in sonar image target detection, several limitations remain in real-time practical applications:
First, global segmentation processing may cause weak targets to be missed. During the global segmentation stage, the global threshold calculated from the distinct distribution characteristics of targets and background allows rapid segmentation of potential target regions, but it risks misclassifying low-intensity targets as background, increasing the likelihood of missed detections. Implementing a hybrid segmentation strategy that integrates the global threshold with locally adaptive thresholds could improve weak target preservation and overall detection performance.
Second, the exponential distribution assumption may not adequately capture the spatiotemporal statistical characteristics of actual sonar backgrounds. Modern multibeam bathymetric sonar systems with high range resolution and narrow beam patterns significantly reduce the number of scatterers per resolution cell, resulting in reverberation amplitude probability density functions that differ from traditional Rayleigh distributions. Although this study prioritized computational efficiency by using the exponential distribution for tail approximation of sonar signal intensity statistics (which is superior to the Rayleigh distribution), the observed 3.643% false alarm rate of SOW-CFAR in practical detection still exceeds the theoretical design value of 2%, indicating model mismatch in certain reverberation regions. Systematic research in [20] reveals that the K-distribution and generalized Pareto distribution can more accurately describe sonar data statistics. However, complex distribution models often prevent closed-form expressions for calculating false alarm and detection probabilities in CFAR implementations, increasing computational complexity and limiting practical applicability. Future research will focus on finding the best balance between model accuracy and computational efficiency for CFAR detectors.
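The model-mismatch effect can be illustrated numerically: under an exponential background, the CA-CFAR scale factor $\alpha = n\,(P_{fa}^{-1/n} - 1)$ yields the nominal false alarm rate, whereas a heavier-tailed background with the same mean power inflates it. The distributions and parameters below are illustrative choices for the sketch, not fits to the sonar data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, pfa = 32, 0.02
# CA-CFAR factor for exponential samples: Pfa = (1 + alpha/n)^(-n)
alpha = n * (pfa ** (-1.0 / n) - 1.0)

trials = 200_000
# exponential background: measured FAR matches the nominal design Pfa ...
ref = rng.exponential(1.0, size=(trials, n))
cut = rng.exponential(1.0, size=trials)          # cell under test, noise only
far_exp = np.mean(cut > alpha * ref.mean(axis=1))

# ... but a heavier-tailed background (lognormal, same unit mean) does not
heavy = rng.lognormal(mean=-0.5, sigma=1.0, size=(trials, n + 1))
far_heavy = np.mean(heavy[:, 0] > alpha * heavy[:, 1:].mean(axis=1))
print(far_exp, far_heavy)
```

In this sketch the empirical FAR under the exponential model stays near the 2% design value, while the heavier-tailed background produces a measurably higher FAR with the same threshold factor, which is consistent with the observed 3.643% exceeding the theoretical 2%.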

6. Conclusions

This study aims to address the main challenges in CFAR detection in sonar imagery. First, considering that targets are extremely sparse in high-resolution sonar data, this study designs a segmentation–detection framework tailored for rapid CFAR-based target detection. In addition, by employing the principle of global threshold segmentation, the computational complexity is significantly reduced while suppressing false alarms effectively. Second, a SOW-CFAR method is developed by integrating rectangular and trapezoidal weighting windows to maintain robust detection performance in complex sonar imaging environments characterized by heterogeneous backgrounds and noise interference. The results of the experimental tests conducted using the lake data validate that the proposed segmentation–detection framework can increase the detection efficiency and accuracy of the CFAR algorithms for sonar targets. Compared to the conventional methods, the SOW-CFAR method demonstrates superior performance across all evaluation metrics, achieving optimal precision, minimal false alarms, and near-real-time processing capabilities, thus providing an effective solution to the problems in practical underwater target detection applications, particularly for detecting seabed terrain targets in multibeam bathymetric sonar water column imagery.

Author Contributions

Conceptualization, W.N. and H.L.; Methodology, W.N. and J.W. (Jian Wang); Software, W.N.; Validation, W.N. and J.W. (Jiani Wen); Formal Analysis, W.N. and T.X.; Investigation, W.N. and J.W. (Jiani Wen); Resources, H.L.; Data Curation, Y.H.; Writing—Original Draft, W.N.; Writing—Review & Editing, W.N. and J.W. (Jian Wang); Visualization, Y.H.; Supervision, H.L. and J.W. (Jian Wang); Project Administration, J.W. (Jian Wang) and H.L.; Funding Acquisition, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China, under grant No. 2021YFC3101803; in part by the Marine Economy Development Special Project of Guangdong Province, under grant No. GDNRC[2024]43; in part by the National Natural Science Foundation of China, under grant No. U1906218; and in part by the Natural Science Foundation of Heilongjiang Province, under grant No. ZD2020D001.

Data Availability Statement

The datasets presented in this article are not readily available because they contain proprietary sonar signal parameters from an ongoing research project and are subject to pending intellectual property rights. Requests to access the datasets should be directed to the corresponding author at hsenli@126.com.

Acknowledgments

We are very grateful to the Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, for providing essential test equipment and devices that significantly supported this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, H.; Ye, X.; Zhou, H.; Huang, H. A Mapping Method Fusing Forward-Looking Sonar and Side-Scan Sonar. J. Mar. Sci. Eng. 2025, 13, 166. [Google Scholar] [CrossRef]
  2. Grządziel, A. The impact of side-scan sonar resolution and acoustic shadow phenomenon on the quality of sonar imagery and data interpretation capabilities. Remote Sens. 2023, 15, 5599. [Google Scholar] [CrossRef]
  3. Hong, S. A robust underwater object detection method using forward-looking sonar images. Remote Sens. Lett. 2023, 14, 442–449. [Google Scholar] [CrossRef]
  4. Zheng, L.; Hu, T.; Zhu, J. Underwater sonar target detection based on improved ScEMA YOLOv8. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1503505. [Google Scholar] [CrossRef]
  5. Zheng, K.; Liang, H.; Zhao, H.; Chen, Z.; Xie, G.; Li, L.; Lu, J.; Long, Z. Application and Analysis of the MFF-YOLOv7 Model in Underwater Sonar Image Target Detection. J. Mar. Sci. Eng. 2024, 12, 2326. [Google Scholar] [CrossRef]
  6. Ye, T.; Deng, X.; Cong, X.; Zhou, H.; Yan, X. Parallelisation Strategy of Non-local Means Filtering Algorithm for Real-time Denoising of Forward-looking Multi-beam Sonar Images. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 13226–13243. [Google Scholar] [CrossRef]
  7. Finn, H.M. Adaptive detection mode with threshold control as a function of spatially sampled clutter-level estimates. RCA Rev. 1968, 29, 414–465. [Google Scholar]
  8. Smith, M.E.; Varshney, P.K. Intelligent CFAR processor based on data variability. IEEE Trans. Aerosp. Electron. Syst. 2002, 36, 837–847. [Google Scholar] [CrossRef]
  9. Shuping, L.; Feng, D.; Ranwei, L. Robust Centralized CFAR Detection for Multistatic Sonar Systems. Chin. J. Electron. 2021, 30, 322–330. [Google Scholar] [CrossRef]
  10. He, X.; Xu, Y.; Liu, M.; Hao, C. Inhomogeneity Suppression CFAR Detection Based on Statistical Modeling. IEEE Trans. Aerosp. Electron. Syst. 2022, 59, 1231–1238. [Google Scholar] [CrossRef]
  11. Villar, S.A.; Acosta, G.G.; Senna, A.S.; Rozenfeld, A. Pipeline detection system from acoustic images utilizing CA-CFAR. In Proceedings of the 2013 OCEANS-San Diego, San Diego, CA, USA, 23–27 September 2013; pp. 1–8. [Google Scholar]
  12. Villar, S.A.; Acosta, G.G. Accumulated CA-CFAR Process in 2-D for Online Object Detection From Sidescan Sonar Data. IEEE J. Ocean. Eng. 2014, 40, 558–569. [Google Scholar]
  13. Verma, A.K. Variability index constant false alarm rate (VI-CFAR) for sonar target detection. In Proceedings of the 2008 International Conference on Signal Processing, Communications and Networking, Chennai, India, 4–6 January 2008; pp. 138–141. [Google Scholar]
  14. Wang, J.; Li, H.; Huo, G.; Li, C.; Wei, Y. A multi-beam seafloor constant false alarm detection method based on weighted element averaging. J. Mar. Sci. Eng. 2023, 11, 513. [Google Scholar] [CrossRef]
  15. Villar, S.A.; Menna, B.V.; Torcida, S.; Acosta, G.G. Efficient approach for OS-CFAR 2D technique using distributive histograms and breakdown point optimal concept applied to acoustic images. IET Radar Sonar Navig. 2019, 13, 2071–2082. [Google Scholar] [CrossRef]
  16. Wang, J.; Li, H.; Du, W.; Xing, T.; Zhou, T. Seafloor terrain detection from acoustic images utilizing the fast two-dimensional CMLD-CFAR. Int. J. Nav. Archit. Ocean. Eng. 2021, 13, 187–193. [Google Scholar] [CrossRef]
  17. Xiangwei, M.; Guan, J.; You, H. Performance analysis of the weighted window CFAR algorithms. In Proceedings of the 2003 International Conference on Radar (IEEE Cat. No.03EX695), Adelaide, Australia, 3–5 September 2003; pp. 354–357. [Google Scholar]
  18. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. Syst. Man Cybern. IEEE Trans. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  19. David, D. Order Statistics; Wiley: New York, NY, USA, 1981. [Google Scholar]
  20. Shen, J.; Gini, F.; Greco, M.; Zhou, T. Statistical Analyses of Measured Forward-Looking Sonar Echo Data in a Shallow Water Environment. In Proceedings of the 2022 IEEE 12th Sensor Array and Multichannel Signal Processing Workshop (SAM), Trondheim, Norway, 20–23 June 2022; pp. 106–110. [Google Scholar]
Figure 1. The schematic diagram of the one-dimensional CFAR detector structure.
Figure 2. The schematic diagram of two common types of two-dimensional detection windows: (a) rectangular detection window; (b) cross detection window.
Figure 4. The schematic diagram of the local two-dimensional ordered-weighting CFAR detection in SOW-CFAR.
Figure 5. A picture of the HT-series multibeam bathymetric sonar system.
Figure 6. Schematic diagram of the Mills Cross array architecture.
Figure 7. Picture of the HT-series multibeam bathymetric sonar system during navigation.
Figure 8. The echo signals in different directions: (a) −60 deg; (b) −20 deg; (c) 20 deg; (d) 60 deg.
Figure 9. The water column images acquired by the multibeam bathymetric sonar: (a) Image 1; (b) Image 2.
Figure 10. The detection results of different methods for Image 1: (a) CA-CFAR; (b) SCA-CFAR; (c) OS-CFAR; (d) SOS-CFAR.
Figure 11. The detection results of Image 2: (a) CA-CFAR; (b) OS-CFAR; (c) SOW-CFAR.
Table 1. Detection performance metrics of the four algorithms for Image 1.

| Method | CA | SCA | OS | SOS |
|---|---|---|---|---|
| Precision (%) | 81.333 | 92.519 | 81.629 | **95.628** |
| Accuracy (%) | 99.974 | 99.991 | 99.969 | **99.994** |
| Recall (%) | 42.456 | 42.456 | **52.407** | **52.407** |
| MDR (%) | 57.544 | 57.544 | **47.593** | **47.593** |
| FAR (%) | 18.667 | 7.479 | 18.371 | **4.372** |
| Elapsed time (s) | 3.936 | **0.313** | 10.687 | 0.498 |

Boldface values indicate the optimal performance for each metric.
Table 2. Detection performance metrics of various algorithms for Image 2.

| Method | CA | OS | SOW |
|---|---|---|---|
| Precision (%) | 77.827 | 78.424 | **96.357** |
| Accuracy (%) | 99.963 | 99.955 | **99.991** |
| Recall (%) | 42.557 | 53.962 | **78.852** |
| MDR (%) | 57.443 | 46.038 | **21.148** |
| FAR (%) | 22.173 | 21.576 | **3.643** |
| Elapsed time (s) | 4.016 | 10.855 | **0.607** |

Boldface values indicate the optimal performance for each metric.
