Article

Fuzzy Hypothesis Testing for Radar Detection: A Statistical Approach for Reducing False Alarm and Miss Probabilities

by Ahmed K. Elsherif 1,†, Hanan Haj Ahmad 2,3,*,†, Mohamed Aboshady 4,† and Basma Mostafa 5,6,†
1 Department of Mathematics, Military Technical College, Cairo, Egypt
2 Department of Basic Science, The General Administration of Preparatory Year, King Faisal University, Al Ahsa 31982, Saudi Arabia
3 Department of Mathematics and Statistics, College of Science, King Faisal University, Al Ahsa 31982, Saudi Arabia
4 Department of Basic Science, Faculty of Engineering, The British University in Egypt, El Sherook City, Cairo, Egypt
5 Operations Research Department, Faculty of Computers & Artificial Intelligence, Cairo University, Cairo, Egypt
6 Faculty of Artificial Intelligence & Computing, Horus University, New Damietta, Egypt
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2025, 13(14), 2299; https://doi.org/10.3390/math13142299
Submission received: 26 April 2025 / Revised: 5 June 2025 / Accepted: 11 July 2025 / Published: 17 July 2025
(This article belongs to the Special Issue New Advance in Applied Probability and Statistical Inference)

Abstract

This paper addresses a fundamental challenge in statistical radar detection systems: optimizing the trade-off between the probability of a false alarm ($P_{FA}$) and the probability of a miss ($P_M$). These two metrics are inversely related and critical for performance evaluation. Traditional detection approaches often enhance one aspect at the expense of the other, limiting their practical applicability. To overcome this limitation, a fuzzy hypothesis testing framework is introduced that improves decision making under uncertainty by incorporating both crisp and fuzzy data representations. The methodology is divided into three phases. In the first phase, we reduce the probability of false alarm $P_{FA}$ while maintaining a constant probability of miss $P_M$ using crisp data characterized by deterministic values and classical statistical thresholds. In the second phase, the inverse scenario is considered: minimizing $P_M$ while keeping $P_{FA}$ fixed. This is achieved through parameter tuning and refined threshold calibration. In the third phase, a strategy is developed to simultaneously enhance both $P_{FA}$ and $P_M$, despite their inverse correlation, by adopting adaptive decision rules. To further strengthen system adaptability, fuzzy data are introduced, which effectively model imprecision and ambiguity. This enhances robustness, particularly in scenarios where rapid and accurate classification is essential. The proposed methods are validated using both real and synthetic radar measurements, demonstrating their ability to enhance detection reliability across diverse conditions. The findings confirm the applicability of fuzzy hypothesis testing for modern radar systems in both civilian and military contexts, providing a statistically sound and operationally applicable approach for reducing detection errors and optimizing system performance.

1. Introduction

Uncertainty in real-world data arises mainly from two sources: randomness and fuzziness. Randomness is typically modeled using probability theory, while fuzziness results from imprecision or vagueness in measurement and classification. Hypothesis testing is a cornerstone of statistics, especially when the data involve fuzziness and uncertainty. Recently, fuzzy hypothesis testing has garnered attention from researchers, resulting in the development of new algorithms and enhanced methods for real-world applications. These advancements strengthen the foundations of statistics and enhance the reliability of techniques used to address complex data challenges. Overall, the growth of fuzzy hypothesis testing is crucial for enhancing statistical understanding and making informed decisions in uncertain situations [1].
Real-world decision making under uncertainty is a central challenge in many signal processing domains, particularly in radar detection systems. Traditional binary hypothesis testing frameworks assume that data are crisp and precisely defined. However, radar environments are often subject to noise, clutter, and fluctuating signal properties, which reduce the reliability of such crisp decision-making models. In these scenarios, the boundary between the noise and target signals becomes ambiguous, especially under conditions of low signal-to-noise ratio (SNR) or environmental interference.
Fuzzy hypothesis testing (FHT) has emerged as a promising alternative for managing uncertainty in classification problems. Introduced as an extension of classical statistical testing, FHT incorporates fuzzy set theory to represent imprecise or vague data, enabling a gradual transition between decision states rather than binary outcomes. This makes it particularly well suited to radar applications, where received signals often deviate from idealized models. Unlike conventional approaches that rely on sharp critical regions, FHT enables the modulation of rejection boundaries using fuzzy membership functions, offering a controlled balance between false alarms and missed detections.
This study proposes a fuzzy hypothesis testing algorithm specifically tailored to radar detection. The proposed approach leverages fuzzy data representations to enhance adaptability in setting decision thresholds without sacrificing scientific rigor. It introduces a methodology that allows for independent and even simultaneous improvement of the probability of false alarm ($P_{FA}$) and the probability of miss ($P_M$), despite their traditionally inverse relationship. This contribution is especially valuable in radar systems where operational constraints may demand prioritizing one performance measure over the other.
While previous works have explored fuzzy statistical methods in signal detection, few have comprehensively addressed their application to radar systems under practical operating conditions. The gap is particularly pronounced in the development of algorithmic frameworks capable of context-aware modulation of detection thresholds. Recent literature highlights the need for more resilient decision models in structural health monitoring [2], education systems [3], risk management [4], and structural reliability analysis [5,6], reinforcing the potential of FHT in diverse domains.
To address this need, the present work introduces the following contributions:
  • A nonparametric fuzzy hypothesis testing framework for radar detection.
  • An algorithm that enables adaptive trade-offs between $P_{FA}$ and $P_M$.
  • A demonstration of controlled threshold modulation based on fuzzy rejection regions.
  • Experimental validation using both synthetic and real radar datasets.
The remainder of this paper is structured as follows: Section 2 outlines the necessary fuzzy logic with problem statement and motivation. Preliminaries are introduced in Section 3. Fuzzy hypothesis testing in radar contexts and the problem formulation with the statistical basis are presented in Section 4, Section 5 and Section 6. Section 7 describes the proposed fuzzy testing algorithm. Section 8 presents the experimental results, and Section 9 discusses the implications, limitations, and future research directions.

2. Problem Statement and Motivation

Statistical hypothesis testing is a crucial component of statistical inference, where a hypothesis about a population is evaluated based on sample data. The null hypothesis, denoted $H_0$, represents the default assumption, while the alternative hypothesis, $H_1$, suggests a competing claim. A statistical procedure determines whether to reject or accept $H_0$. This study focuses on integrating fuzziness into the hypotheses and data rather than relying on precise definitions.
One important application is radar detection, a binary decision problem where accurately classifying signal and noise is essential. The decision space can be simplified to two hypotheses:
  • $H_0$: The received signal is noise.
  • $H_1$: The received signal contains a target.
The receiver must determine whether to accept or reject $H_0$ in the presence of channel disturbances. Traditional radar detection methods rely on crisp hypotheses, assuming well-defined decision boundaries. However, environmental uncertainties, noise fluctuations, and signal interference challenge the accuracy of these models. A flexible framework is therefore introduced that enhances decision making under uncertain conditions by incorporating fuzzy hypothesis testing.
The motivation for fuzzy hypothesis testing stems from the limitations of classical statistical methods, which rely on precisely defined hypotheses. In radar detection systems, uncertainties in noise characteristics and variations in signal behavior often reduce the reliability of crisp decision models. Incorporating fuzziness into hypothesis testing enhances robustness and enables better accommodation of real-world imperfections in signal classification.
Despite its advantages, fuzzy hypothesis testing presents challenges. It often involves complex calculations compared to traditional crisp methods, and the selection of fuzzy parameters can influence decision outcomes. However, these challenges are outweighed by the benefits of improved adaptability and robustness in uncertain environments.
This study is motivated by the limitations of crisp decision frameworks in radar systems. The key contributions of this research include:
  • The development of a fuzzy hypothesis testing algorithm tailored to radar environments.
  • Demonstration of how fuzzy data allow for controlled modulation of detection thresholds without compromising scientific rigor.
  • Analytical methods that permit the independent or simultaneous enhancement of the probability of false alarm ($P_{FA}$) and the probability of miss ($P_M$), despite their typical inverse relationship.
  • Empirical validation using both synthetic and real-world radar datasets, confirming the practical viability and accuracy of the approach.

Fuzzy Hypothesis Testing: Background and Relevance to Radar

Fuzzy hypothesis testing (FHT) generalizes classical hypothesis testing by incorporating fuzzy logic, which allows for imprecise data representation and gradual decision boundaries [7,8]. This approach is particularly valuable in radar systems, where signal ambiguity and environmental noise hinder the effectiveness of crisp statistical decision rules.
Unlike parametric testing frameworks, FHT avoids rigid assumptions about distributional form, enabling more robust inference under uncertainty [9]. It provides a structured methodology for distinguishing overlapping signal and noise distributions using fuzzy sets, thereby reflecting real-world radar conditions more accurately.
Figure 1 provides a visual interpretation of how fuzzy logic enables the controlled modulation of detection thresholds in radar decision systems. The two overlapping distributions correspond to the noise hypothesis $H_0$ and the target hypothesis $H_1$, with the x-axis representing a continuous decision statistic (e.g., test statistic or signal strength). Traditional crisp decision making enforces a rigid binary threshold at a specific point (dashed vertical line), forcing a complete switch from one decision to another. In contrast, the introduction of a fuzzy threshold region (shaded area) allows for a gradual transition between hypotheses based on the degree of overlap with the critical region. This approach supports partially confident decisions and better accounts for signal ambiguity and measurement imprecision. Such fuzzy-based modulation facilitates more flexible and mission-specific radar decisions while preserving statistical validity. By explicitly modeling uncertainty instead of ignoring it, the proposed framework avoids the risk of overconfidence often associated with sharp thresholds, thereby maintaining scientific rigor.
Earlier works by Tanaka et al. [10] and Casals et al. [11] initiated the integration of fuzzy sets in hypothesis testing, followed by Arnold [12,13], who reinterpreted type I and type II errors under fuzzy frameworks. More recent contributions include Bayesian fuzzy tests [14], fuzzy p-values [7], and fuzzy confidence intervals [15].
For radar-specific applications, Elsherif et al. [16] and Parchami et al. [17] demonstrated the practical utility of FHT in managing false alarm and miss trade-offs, especially in cluttered detection environments. Recent advancements further explore fuzzy Neyman–Pearson tests [18], machine learning integration [1], and nonparametric fuzzy testing [9].
This study builds upon these foundations by introducing an adaptive fuzzy testing framework specifically tailored to radar applications. The method accounts for operational context and detection requirements, allowing for the dynamic modulation of threshold regions. Validation is provided on both synthetic and empirical radar data.
Recent studies have expanded the role of fuzzy logic beyond classical decision theory into broader domains involving uncertainty, such as structural health monitoring [2], educational decision systems [3], and structural safety assessment under incomplete data [5]. These works provide methodological insights applicable to radar systems, where ambiguity and contextual adaptation are essential. For example, the work of Kovács [6] introduces fuzzy evaluation metrics for uncertain structural behavior, inspiring the extension of fuzzy decision tools in radar-based classification.
Unlike parametric frameworks, nonparametric fuzzy testing [9] enables radar systems to model ambiguity in received signals without relying on prior distribution assumptions, thereby improving signal classification robustness. This flexibility is essential for effective binary decision making in radar environments with highly variable or unknown signal and noise distributions. Radar detection systems must navigate an inherent trade-off between the probability of false alarm ($P_{FA}$) and the probability of miss ($P_M$), which are typically inversely related. To address this challenge, the present study adopts three statistically grounded and operationally relevant strategies. First, reducing $P_{FA}$ while maintaining a fixed $P_M$ is achieved by widening the acceptance region of the null hypothesis $H_0$, which corresponds in practical radar systems to broadening the defined noise region. This strategy is effective in clutter-rich environments affected by terrain or atmospheric interference. Second, reducing $P_M$ while keeping $P_{FA}$ constant is achieved by increasing the separation between $H_0$ and $H_1$, thus enhancing signal distinction in low-clutter radar scenarios. Finally, when neither individual strategy is sufficient, simultaneous improvement of both $P_{FA}$ and $P_M$ is realized by increasing the sample size, which operationally equates to aggregating more signal observations before making a decision. These adaptations support the practical viability of the proposed fuzzy hypothesis testing framework under diverse radar conditions.

3. Preliminary Concepts

This section introduces key concepts in fuzzy hypothesis testing and the treatment of fuzzy numerical data, which form the foundation for the models and analysis presented in later sections.
 Definition 1. 
Fuzzy Hypothesis: A hypothesis of the form $H : \omega$ is termed a fuzzy hypothesis, denoted as $H(\omega)$, where $\omega$ belongs to a fuzzy subset of the parameter space $\Theta$. This subset is characterized by a membership function $\mu : \Theta \to [0, 1]$, which assigns a degree of truth to each possible value of the parameter.
Definition 2. 
Left and Right Borders of the Mean: For a fuzzy random variable, let $x_i^L$ and $x_i^R$ represent the left and right bounds of the mean of each sample, respectively. The left and right fuzzy mean bounds are computed as:
$$\bar{x}_L = \frac{1}{n}\sum_{i=1}^{n} x_i^L, \qquad \bar{x}_R = \frac{1}{n}\sum_{i=1}^{n} x_i^R.$$
Refer to [19] for further derivations.
Definition 3. 
Standard Deviation of Fuzzy Random Variables: The standard deviation for a fuzzy random variable is defined as:
$$S_c = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i^c - \bar{x}_c\right)^2},$$
where $\bar{x}_c$ is the average of all $x_i^c$ values. See [19] for more details.
Definition 4. 
Left and Right Borders of the Standard Deviation: For a fuzzy random variable, the left and right boundaries of the standard deviation are given by:
$$S_c^L = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i^L - \bar{x}_L\right)^2}$$
and
$$S_c^U = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i^R - \bar{x}_R\right)^2},$$
where $\bar{x}_L$ and $\bar{x}_R$ are the left and right fuzzy mean bounds of Definition 2 [19].
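As a concrete illustration of Definitions 2 and 4, the following sketch computes the left and right fuzzy mean bounds and the corresponding standard deviation borders. The sample values are invented for illustration only:

```python
import math

# Hypothetical fuzzy samples: (left, right) borders of each observation's mean.
samples = [(4.2, 5.0), (3.8, 4.6), (4.5, 5.3), (4.0, 4.8)]

n = len(samples)
xL = [s[0] for s in samples]
xR = [s[1] for s in samples]

# Definition 2: left and right fuzzy mean bounds.
mean_L = sum(xL) / n
mean_R = sum(xR) / n

# Definition 4: left and right borders of the standard deviation
# (sample standard deviation of each border sequence).
std_L = math.sqrt(sum((x - mean_L) ** 2 for x in xL) / (n - 1))
std_U = math.sqrt(sum((x - mean_R) ** 2 for x in xR) / (n - 1))

print(mean_L, mean_R, std_L, std_U)
```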
Definition 5. 
Decibel-milliwatts (dBm): A logarithmic unit used to express power relative to 1 milliwatt, commonly used in radar and wireless systems. It is defined as:
$$\text{Power (dBm)} = 10 \cdot \log_{10}\!\left(\frac{P \text{ (in watts)}}{1\ \text{mW}}\right).$$
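A minimal helper for Definition 5, converting watts to dBm (e.g., 1 W corresponds to 30 dBm and 1 mW to 0 dBm):

```python
import math

def to_dbm(power_watts: float) -> float:
    """Convert power in watts to dBm (reference level: 1 mW)."""
    return 10.0 * math.log10(power_watts / 1e-3)

print(to_dbm(1.0))    # 1 W  -> 30 dBm
print(to_dbm(1e-3))   # 1 mW -> 0 dBm
```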

4. Reducing Probability of False Alarm Under a Fixed Probability of Miss

Radar detection systems must navigate the inherent trade-off between false alarms and missed detections. A false alarm, or type I error, occurs when the system erroneously classifies noise or background interference as a valid target (i.e., rejecting the null hypothesis $H_0$ when it is true). Conversely, a missed detection, or type II error, arises when an actual target is present but goes undetected (i.e., failing to reject $H_0$ when the alternative hypothesis $H_1$ is true). This section focuses on reducing the probability of false alarm ($P_{FA}$) while maintaining a fixed probability of miss ($P_M$). Such an adjustment is essential in scenarios where it is critical to maximize the detection of potential threats, even at the risk of occasional false positives, as seen in early warning and surveillance radar systems [20].

4.1. Problem Formulation

Let $X = (x_1, x_2, \ldots, x_n)$ be an independent random sample representing the received radar signal power, assumed to follow a normal distribution with unknown mean $\mu$ and known variance $\sigma^2$. The detection problem is formulated as a one-sided hypothesis test:
$$H_0 : \theta = \theta_0 \quad (\text{no target; noise only}),$$
$$H_1 : \theta > \theta_0 \quad (\text{target present; signal detected}),$$
where θ denotes the true mean signal level.
  • Under the null hypothesis $H_0$, the observed data are attributed solely to background noise and environmental clutter.
  • Under the alternative hypothesis $H_1$, the presence of a target induces a statistically significant increase in the received signal power.
The receiver decides whether to reject $H_0$ in favor of $H_1$ based on the test statistic derived from the sample mean. The false alarm probability, $P_{FA}$, is computable under $H_0$ since the distribution is fully specified. However, the probability of a miss, $P_M$, depends on the true distribution under the alternative hypothesis $H_1$, which requires specifying a particular $\theta > \theta_0$. If no specific alternative hypothesis is provided, $P_M$ cannot be directly determined [20,21]. Since the objective is to reduce $P_{FA}$ while maintaining a fixed $P_M$, the decision threshold must be adjusted accordingly. In this case, the acceptance region of $H_0$ must be widened to achieve a lower false alarm rate without increasing missed detections [20,22].
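For a concrete sense of how $P_{FA}$ follows from the decision threshold, the sketch below evaluates the right-tail probability of the standardized sample mean under $H_0$. All numeric values are illustrative assumptions, not taken from the paper's experiments:

```python
import math

def q_func(z: float) -> float:
    """Right-tail probability of the standard normal: Q(z) = P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Illustrative (assumed) values: noise mean, known std, sample size.
theta0, sigma, n = 0.0, 1.0, 25
threshold = 0.4   # decision threshold on the sample mean

# P_FA = P(sample mean > threshold | H0) = Q((threshold - theta0) / (sigma/sqrt(n)))
p_fa = q_func((threshold - theta0) / (sigma / math.sqrt(n)))
print(p_fa)
```

Here the standardized threshold is $z = 2.0$, giving $P_{FA} \approx 0.023$.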

4.2. Theoretical Justification

Reducing the probability of false alarm ($P_{FA}$) under a fixed probability of miss ($P_M$) is achievable by widening the acceptance region of the null hypothesis. This leads to a higher decision threshold and, consequently, fewer false alarms.
Lemma 1. 
For a fixed sample size $n$ and known variance $\sigma^2$, the type I error probability ($P_{FA}$) and the type II error probability ($P_M$) are inversely related. That is, reducing $P_{FA}$ increases $P_M$, and vice versa, as shown in Figure 2.
Proof. 
Two scenarios illustrate this relationship:
  • Case 1: Define the hypothesis test so that the null hypothesis is
    $$H_0 : \theta = \theta_0,$$
    and the alternative hypothesis is
    $$H_1 : \theta > \theta_0,$$
    which corresponds to a right-tailed test. The probability of a false alarm is defined as:
    $$P_{FA} = P(\text{reject } H_0 \mid H_0 \text{ is true}).$$
    Assume the decision threshold is $\theta_1$, where $\theta_1 > \theta_0$. Then the probability of a false alarm is:
    $$P_{FA_1} = P\left(Z > \frac{\theta_1 - \theta_0}{\sigma/\sqrt{n}}\right),$$
    where $Z$ follows a standard normal distribution and $n$ is the sample size.
  • Case 2: Adjust the decision threshold by widening the acceptance region. The hypotheses are redefined as:
    $$H_0 : \theta_0 \leq \theta \leq \theta_2 \quad (\text{no target}),$$
    $$H_1 : \theta > \theta_2 \quad (\text{target detected}),$$
    with $\theta_2 > \theta_1$. Then the false alarm probability becomes:
    $$P_{FA_2} = P\left(Z > \frac{\theta_2 - \theta_0}{\sigma/\sqrt{n}}\right).$$
    Since $\theta_2 > \theta_1$, it follows that:
    $$\frac{\theta_2 - \theta_0}{\sigma/\sqrt{n}} > \frac{\theta_1 - \theta_0}{\sigma/\sqrt{n}},$$
    and hence
    $$P_{FA_2} < P_{FA_1}.$$
For an illustration, see Figure 3.    □
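Lemma 1 can be checked numerically: with assumed values for $\theta_0$, $\sigma$, and $n$, raising the decision threshold from $\theta_1$ to $\theta_2 > \theta_1$ lowers the false alarm probability. The numbers below are illustrative assumptions:

```python
import math

def q_func(z: float) -> float:
    """Right-tail probability of the standard normal: Q(z) = P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

theta0, sigma, n = 0.0, 1.0, 16   # assumed illustrative values
theta1, theta2 = 0.5, 0.8         # theta2 > theta1: widened acceptance region

p_fa_1 = q_func((theta1 - theta0) / (sigma / math.sqrt(n)))
p_fa_2 = q_func((theta2 - theta0) / (sigma / math.sqrt(n)))

# Widening the acceptance region (raising the threshold) lowers P_FA.
assert p_fa_2 < p_fa_1
print(p_fa_1, p_fa_2)
```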

Implications

Lemma 1 underscores the fundamental trade-off between detection sensitivity and specificity in radar systems. Increasing the decision threshold (i.e., widening the acceptance region of $H_0$) results in fewer false alarms but at the cost of missing more genuine targets. This trade-off must be carefully balanced in system design, depending on whether the application prioritizes reducing false alarms (e.g., air traffic control) or minimizing missed detections (e.g., military threat detection). The mathematical formulation formalizes these performance considerations and provides a foundation for tuning detection systems based on operational priorities.
Lemma 1 and its proof explicitly show that reducing the false alarm rate $P_{FA}$ can be achieved by widening the acceptance region of the null hypothesis $H_0$. Initially, the hypothesis test follows the standard form, where a signal is detected if the received power exceeds a decision threshold $\theta_1$. By shifting the decision threshold to a higher value $\theta_2$ (where $\theta_2 > \theta_1$), the probability of false alarm decreases. This occurs because the acceptance region for $H_0$ is expanded, making it less likely that noise will be misclassified as a target.
Raising the threshold associated with $H_0$ decreases the probability of false alarm while maintaining a constant probability of miss $P_M$. This relationship is visualized in Figure 2.
These results illustrate that adjusting the decision threshold provides direct control over false alarm rates in radar detection systems. Understanding this trade-off is critical when designing systems that must balance sensitivity (the ability to detect true signals) against specificity (the ability to avoid false positives), according to the operational requirements and constraints of the specific radar application.

5. Reducing Probability of Miss for Fixed Probability of False Alarm

Radar detection systems must strike a balance between sensitivity (detecting true targets) and specificity (avoiding false alarms). In certain applications, maintaining a low false alarm rate ($P_{FA}$) is a strict requirement. In such cases, improving the probability of detection (equivalently, reducing the probability of miss $P_M$) without increasing $P_{FA}$ becomes critical. For instance, in certain test cases, a decrease in $P_{FA}$ is achieved at the cost of a slight increase in $P_M$, while in others, the opposite trade-off is favored. Moreover, selected configurations illustrate that the fuzzy algorithm can simultaneously reduce both $P_{FA}$ and $P_M$ compared to classical crisp methods, which typically enforce a strict inverse relationship. This flexibility confirms the method's practical versatility and its ability to support mission-specific radar detection strategies.
This section explores minimizing $P_M$ while keeping $P_{FA}$ constant. Reducing the probability of a type II error ($P_M$) while maintaining a fixed type I error rate ($P_{FA}$) is achievable by increasing the separation between the null hypothesis ($H_0$) and the alternative hypothesis ($H_1$). As the difference between their corresponding parameter values widens, the overlap between the distributions under $H_0$ and $H_1$ diminishes, thereby lowering the chance of missed detections. This principle underpins the enhanced detectability of signals in radar systems with improved signal-to-noise ratios (SNRs). Practically, increasing the separation corresponds to improved radar specifications, such as increased transmitter power or enhanced antenna gain, strategically reducing misses while maintaining stringent false alarm constraints.
Assume $x_1, x_2, \ldots, x_n$ are independent and identically distributed samples of the received radar signal power, following a normal distribution with unknown mean $\mu$ and known variance $\sigma^2$. The values of $x$ represent the received signal power.
The hypotheses are defined as:
$$H_0 : \theta = \theta_0 \quad (\text{no target}), \qquad H_1 : \theta = \theta_1 \quad (\text{target present}).$$
To quantify $P_M$ precisely, the alternative hypothesis must specify a deterministic value $\theta_1$. This ensures that the miss probability calculation is meaningful and aligned with operational radar requirements, particularly when stringent false alarm constraints exist. This relationship between signal separation and $P_M$ is formalized rigorously in Lemma 2, providing the theoretical underpinning for achieving reduced misses under fixed false alarm constraints.

5.1. Theoretical Justification

Lemma 2. 
For a fixed probability of false alarm $P_{FA}$, increasing the difference between the true signal power $\theta_1$ and the hypothesized threshold $\theta_0$ leads to a decreasing probability of miss $P_M$, and vice versa. That is, the probability of miss is inversely related to the separation between $\theta_1$ and $\theta_0$. This can be illustrated by two cases:
  • Case 1: $H_1 : \theta = \theta_0 + a$;
  • Case 2: $H_1 : \theta = \theta_0 + b$, where $b > a > 0$.
Then, it follows that:
$$P_M(\theta_0 + b) < P_M(\theta_0 + a).$$
Proof. 
Assume the null hypothesis $H_0 : \theta = \theta_0$ and the alternative hypothesis $H_1 : \theta = \theta_1$ with $\theta_1 > \theta_0$. The sample mean $\bar{X}$ follows a normal distribution with known variance $\sigma^2$ and sample size $n$.
A two-sided hypothesis test is conducted, with $H_0$ being accepted if the received signal power falls within the symmetric interval $(\theta_0 - 0.5, \theta_0 + 0.5)$. The acceptance region is fixed to ensure a constant probability of false alarm $P_{FA}$.
The probability of miss $P_M$ corresponds to the probability that the received signal power lies within the acceptance region under $H_1$.
Two cases are considered:
  • Case 1: $H_0 : \theta = \theta_0$, $H_1 : \theta = \theta_0 + a$. Since the distribution is normal:
    $$P_{M_1} = P\left(\frac{(\theta_0 - 0.5) - (\theta_0 + a)}{\sigma/\sqrt{n}} < Z < \frac{(\theta_0 + 0.5) - (\theta_0 + a)}{\sigma/\sqrt{n}}\right) = P(Z_1 < Z < Z_2).$$
  • Case 2: $H_1 : \theta = \theta_0 + b$ with $b > a$. Then:
    $$P_{M_2} = P\left(\frac{(\theta_0 - 0.5) - (\theta_0 + b)}{\sigma/\sqrt{n}} < Z < \frac{(\theta_0 + 0.5) - (\theta_0 + b)}{\sigma/\sqrt{n}}\right) = P(Z_1' < Z < Z_2').$$
Since $b > a$, the standardized interval in Case 2 lies further into the left tail of the standard normal distribution, so $P_{M_2} < P_{M_1}$.    □
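A quick numerical check of Lemma 2, using the acceptance region $(\theta_0 - 0.5, \theta_0 + 0.5)$; the parameter values are illustrative assumptions:

```python
import math

def phi(z: float) -> float:
    """Standard normal CDF."""
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def p_miss(theta0: float, shift: float, sigma: float, n: int) -> float:
    """P(sample mean falls in (theta0-0.5, theta0+0.5) | true mean = theta0+shift)."""
    se = sigma / math.sqrt(n)
    z_lo = ((theta0 - 0.5) - (theta0 + shift)) / se
    z_hi = ((theta0 + 0.5) - (theta0 + shift)) / se
    return phi(z_hi) - phi(z_lo)

theta0, sigma, n = 0.0, 1.0, 9   # assumed illustrative values
a, b = 0.8, 1.5                  # b > a: larger separation from theta0

pm_a = p_miss(theta0, a, sigma, n)
pm_b = p_miss(theta0, b, sigma, n)

# Larger separation between H0 and H1 -> smaller probability of miss.
assert pm_b < pm_a
print(pm_a, pm_b)
```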

5.2. Implications

This proof focuses on the scenario where $P_{FA}$ remains constant while minimizing the probability of miss $P_M$. In this case, the alternative hypothesis ($H_1 : \theta = \theta_1$) is assumed to be deterministic, allowing for a precise calculation of $P_M$. The proof establishes that the probability of a miss is highest when $\theta_1$ is close to $\theta_0$, as the noise and signal distributions overlap more significantly.
Two cases are considered: one where $\theta_1 = \theta_0 + a$ and another where $\theta_1 = \theta_0 + b$ with $b > a$. Using properties of the normal distribution, it is shown that $P_{M_2} < P_{M_1}$, meaning that as $\theta_1$ moves further from $\theta_0$, the probability of miss decreases. These results emphasize the importance of maximizing the power of the test ($1 - P_M$), which quantifies the radar system's ability to distinguish between $H_0$ and $H_1$, to enhance sensitivity in radar detection systems, especially in environments where false alarm rates must be tightly controlled.
Figure 3 illustrates this relationship, showing how the probability of miss is inversely related to the probability of false alarm.
The proofs in Section 4.2 and Section 5.1 emphasize key considerations in radar detection, particularly the trade-offs between false alarms and missed detections. Understanding these relationships is crucial for optimizing detection performance in practical applications.

Versatility Implication

This lemma demonstrates how adjusting the parameter $\theta_1$ allows for targeted control of the probability of miss $P_M$ while maintaining a fixed $P_{FA}$. Such flexibility enables radar operators to tune detection systems for contexts where false alarms are intolerable, confirming the theoretical versatility of the proposed method.

6. Enhancing Probability of Miss and False Alarms Simultaneously

In the previous two sections, the focus was on reducing one type of error while keeping the other constant: either minimizing the probability of false alarm ($P_{FA}$) at a fixed probability of miss ($P_M$), or minimizing $P_M$ while maintaining a constant $P_{FA}$. In contrast, this section investigates how increasing the sample size can simultaneously decrease both $P_{FA}$ and $P_M$, thereby enhancing the overall detection performance in radar systems and statistical decision-making applications.

Theoretical Justification

Lemma 3. 
Increasing the sample size improves (i.e., decreases) both the probability of false alarm $P_{FA}$ and the probability of miss $P_M$.
Proof. 
The probability of false alarm and the probability of miss are calculated in two cases with different sample sizes $n_2 > n_1$.
  • Case 1: Assume $x_1, x_2, \ldots, x_{n_1}$ is an independent random sample following a normal probability density function with unknown mean $\mu$ and known variance $\sigma^2$. The values of $x$ represent the received signal power.
The probability of a false alarm is:
$$P_{FA_1} = P\left(\frac{(\theta_1 - 0.5) - \theta_0}{\sigma/\sqrt{n_1}} < Z < \frac{(\theta_1 + 0.5) - \theta_0}{\sigma/\sqrt{n_1}}\right) = P(Z_{FA_1}^{1} < Z < Z_{FA_1}^{2}),$$
and the probability of miss is:
$$P_{M_1} = P\left(\frac{(\theta_0 - 0.5) - \theta_1}{\sigma/\sqrt{n_1}} < Z < \frac{(\theta_0 + 0.5) - \theta_1}{\sigma/\sqrt{n_1}}\right) = P(Z_{M_1}^{1} < Z < Z_{M_1}^{2}).$$
  • Case 2: Assume $x_1, x_2, \ldots, x_{n_2}$ is an independent random sample, where $n_2 > n_1$.
The probability of false alarm is:
$$P_{FA_2} = P\left(\frac{(\theta_1 - 0.5) - \theta_0}{\sigma/\sqrt{n_2}} < Z < \frac{(\theta_1 + 0.5) - \theta_0}{\sigma/\sqrt{n_2}}\right) = P(Z_{FA_2}^{1} < Z < Z_{FA_2}^{2}),$$
and the probability of miss is:
$$P_{M_2} = P\left(\frac{(\theta_0 - 0.5) - \theta_1}{\sigma/\sqrt{n_2}} < Z < \frac{(\theta_0 + 0.5) - \theta_1}{\sigma/\sqrt{n_2}}\right) = P(Z_{M_2}^{1} < Z < Z_{M_2}^{2}).$$
Since $n_2 > n_1$, it follows that:
$$\frac{\sigma}{\sqrt{n_2}} < \frac{\sigma}{\sqrt{n_1}}.$$
Thus, the corresponding standardized intervals for larger $n$ lie further into the tails of the standard normal distribution, leading to:
$$P_{FA_2} < P_{FA_1} \quad \text{and} \quad P_{M_2} < P_{M_1}.$$
Hence, increasing the sample size simultaneously reduces both the probability of false alarm and the probability of miss, as illustrated in Figure 4 and Figure 5.    □
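Lemma 3 can likewise be verified numerically. The sketch below assumes the noise-acceptance region $(\theta_0 - 0.5, \theta_0 + 0.5)$ and the target-declaration region $(\theta_1 - 0.5, \theta_1 + 0.5)$, with illustrative parameter values:

```python
import math

def phi(z: float) -> float:
    """Standard normal CDF."""
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def interval_prob(lo: float, hi: float, mean: float, se: float) -> float:
    """P(lo < sample mean < hi) when the sample mean ~ N(mean, se^2)."""
    return phi((hi - mean) / se) - phi((lo - mean) / se)

theta0, theta1, sigma = 0.0, 2.0, 1.0   # assumed illustrative values

results = []
for n in (4, 16):
    se = sigma / math.sqrt(n)
    # false alarm: mean lands in the target region although the true mean is theta0
    p_fa = interval_prob(theta1 - 0.5, theta1 + 0.5, theta0, se)
    # miss: mean lands in the noise region although the true mean is theta1
    p_m = interval_prob(theta0 - 0.5, theta0 + 0.5, theta1, se)
    results.append((n, p_fa, p_m))
    print(n, p_fa, p_m)

# Both error probabilities shrink as the sample size grows.
assert results[1][1] < results[0][1] and results[1][2] < results[0][2]
```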

Versatility Implication

Lemma 3 illustrates that increasing the sample size benefits both error metrics simultaneously—a contrast to the inverse relationship seen in traditional crisp tests (Lemma 1). This simultaneous improvement substantiates the claim that the fuzzy-based strategy offers superior flexibility in error management.

7. Testing Hypotheses About the Mean Received Signal with Fuzzy Data

This section elaborates on the fuzzy hypothesis testing procedure applied to radar detection. The proposed method, as summarized in Algorithm 1, utilizes triangular fuzzy numbers to account for measurement uncertainty and assesses the presence of a target based on fuzzy observations.
Algorithm 1 Radar Detection Based on Fuzzy Hypotheses
Inputs:
  • Fuzzy observations $\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_n$, each defined as a symmetric triangular fuzzy number:
    $$\tilde{x}_i = (x_i^L, x_i^C, x_i^U), \qquad x_i^C = x_i^L + \epsilon, \qquad x_i^U = x_i^C + \epsilon$$
  • Null and alternative hypotheses:
    $$H_0 : \theta = \theta_0 \quad (\text{noise}), \qquad H_1 : \theta > \theta_0 \quad (\text{target})$$
  • Hypothesized mean (threshold): $\theta_0$
  • Known standard deviation: $\sigma$
  • Number of samples: $n$
  • Desired type I error (false alarm probability): $P_{FA} = \alpha$
  • $\theta_1$: Assumed radar-specific received signal power, such that $\theta_1 > \theta_0$.
Output: Decision on whether to accept or reject $H_0$, based on the proportion of the fuzzy test statistic interval $\tilde{Z}_S$ that lies in the critical region:
  • If $0\%$ of $\tilde{Z}_S$ overlaps the critical region: completely accept $H_0$ ($100\%$ noise).
  • If $100\%$ of $\tilde{Z}_S$ overlaps the critical region: completely reject $H_0$ ($100\%$ target).
  • Otherwise, the decision is determined by the degree of partial overlap between the fuzzy test statistic interval $\tilde{Z}_S$ and the critical region, as shown in Figure 6.
1: Compute critical value: $Z_\alpha = \dfrac{\theta_1 - \theta_0}{\sigma/\sqrt{n}}$
2: Compute fuzzy means: $\bar{x}^L = \dfrac{1}{n}\sum x_i^L$, $\bar{x}^U = \dfrac{1}{n}\sum x_i^U$
3: Compute fuzzy test interval: $Z_S^L = \dfrac{\bar{x}^L - \theta_0}{\sigma/\sqrt{n}}$, $Z_S^U = \dfrac{\bar{x}^U - \theta_0}{\sigma/\sqrt{n}}$
4: Define $\tilde{Z}_S = [Z_S^L, Z_S^U]$
5: if $Z_S^U < Z_\alpha$ then
6:     Accept $H_0$ (no target detected)
7: else if $Z_S^L > Z_\alpha$ then
8:     Reject $H_0$ (target detected)
9: else
10:     Partial overlap case
11:     Compute rejection percentage: $\dfrac{\text{Area of } \tilde{Z}_S \text{ in the critical region}}{\text{Total area of } \tilde{Z}_S} \times 100$
12:     if rejection percentage > radar-specific threshold (chosen according to radar type) then
13:         Reject $H_0$
14:     else
15:         Accept $H_0$
16:     end if
17: end if
Figure 6. Comparison between the fuzzy test statistic interval Z ˜ S and the crisp rejection threshold Z P F A . The shaded region illustrates the partial overlap used to compute the rejection confidence level.
Let $\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_n$ be independent and identically distributed (i.i.d.) fuzzy random samples, as shown in Figure 7, drawn from a normal probability density function with unknown mean $\mu$ and known variance $\sigma^2$. Each sample is modeled as a symmetric triangular fuzzy number to reflect observational uncertainty. The fuzzy hypothesis test is formulated as:
$H_0: \theta = \theta_0$ (no target), $H_1: \theta > \theta_0$ (target present),
subject to a fixed type I error probability P F A = α .
The following steps detail the algorithm:
  • Fuzzy Observation Modeling: Each received signal power x i is modeled as a symmetric triangular fuzzy number x ˜ i = ( x i L , x i C , x i U ) , where x i C = x i L + ϵ and x i U = x i C + ϵ . All n fuzzy samples are assumed to be independent and identically distributed (i.i.d.).
  • Hypothesis Specification: The radar detection task is framed as a one-sided hypothesis test:
    $H_0: \theta = \theta_0$ (no target), $H_1: \theta > \theta_0$ (target present).
    The type I error (false alarm probability) is denoted by P F A = α , and θ 1 > θ 0 reflects the assumed signal power under H 1 .
  • Critical Threshold Calculation: Compute the crisp critical value Z α based on θ 1 , θ 0 , and the known standard deviation σ :
    $Z_\alpha = \dfrac{\theta_1 - \theta_0}{\sigma/\sqrt{n}}.$
  • Fuzzy Mean Computation: Calculate the lower and upper bounds of the fuzzy sample mean:
    $\bar{x}^L = \dfrac{1}{n}\sum_{i=1}^{n} x_i^L, \qquad \bar{x}^U = \dfrac{1}{n}\sum_{i=1}^{n} x_i^U.$
  • Fuzzy Test Statistic Construction: Transform the fuzzy mean into a standardized fuzzy test interval:
    $Z_S^L = \dfrac{\bar{x}^L - \theta_0}{\sigma/\sqrt{n}}, \qquad Z_S^U = \dfrac{\bar{x}^U - \theta_0}{\sigma/\sqrt{n}}.$
    Define the fuzzy test statistic as the interval Z ˜ S = [ Z S L , Z S U ] .
  • Overlap Evaluation:
    • If Z S U < Z α , the fuzzy test interval lies entirely left of the critical threshold: fully accept H 0 .
    • If Z S L > Z α , the fuzzy interval is fully in the critical region: fully reject H 0 .
    • Otherwise, compute the percentage of the interval Z ˜ S that lies beyond Z α (i.e., in the rejection region).
  • Decision Rule: Based on the overlap percentage and radar-specific tolerance:
    • If the rejection percentage exceeds the application-specific threshold, reject H 0 .
    • Otherwise, accept H 0 .
  • Output: Return a decision (reject or accept H 0 ) with an associated confidence level, based on the quantified overlap. This supports explainable and tunable radar decisions under fuzzy uncertainty.
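The steps above can be sketched in Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name is hypothetical, the critical value $Z_\alpha$ is passed in directly (the worked example in Section 8 uses the $\alpha$-quantile 1.65), the triangular-area computation assumes the symmetric membership described in the text, and the 50% default rejection threshold stands in for the radar-specific value of Step 12.

```python
from math import sqrt

def fuzzy_radar_test(xbar_l, xbar_u, theta0, sigma, n, z_alpha,
                     reject_threshold_pct=50.0):
    """Fuzzy hypothesis test sketch: returns (decision, rejection percentage).

    xbar_l, xbar_u : lower/upper fuzzy sample means
    z_alpha        : crisp critical value delimiting the rejection region
    """
    se = sigma / sqrt(n)
    zs_l = (xbar_l - theta0) / se            # lower end of fuzzy test interval
    zs_u = (xbar_u - theta0) / se            # upper end of fuzzy test interval

    if zs_u < z_alpha:                       # interval entirely left of Z_alpha
        return "accept H0", 0.0
    if zs_l > z_alpha:                       # interval entirely in critical region
        return "reject H0", 100.0

    # Partial overlap: compare triangular areas on either side of Z_alpha,
    # treating Z~_S as a symmetric triangular membership of height 1.
    center = (zs_l + zs_u) / 2.0             # peak of the triangle
    area_total = 0.5 * (zs_u - zs_l)
    if z_alpha >= center:                    # cut falls on the descending slope
        mu = (zs_u - z_alpha) / (zs_u - center)
        area_reject = 0.5 * (zs_u - z_alpha) * mu
    else:                                    # cut falls on the ascending slope
        mu = (z_alpha - zs_l) / (center - zs_l)
        area_reject = area_total - 0.5 * (z_alpha - zs_l) * mu
    pct = 100.0 * area_reject / area_total
    decision = "reject H0" if pct > reject_threshold_pct else "accept H0"
    return decision, pct
```

With the worked example from Section 8 ($\bar{x}^L = 79.8333$, $\bar{x}^U = 80.8333$, $\theta_0 = 80$, $\sigma = 2$, $n = 45$, $Z_\alpha = 1.65$, negative signs disregarded as in the text), this sketch yields a rejection percentage close to the paper's 23.2%.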

Versatility Implication: Tunable Thresholds and Operational Priorities

Recent studies have highlighted the necessity of adaptive thresholding in radar systems to accommodate varying operational contexts. For example, Tomkins et al. [23] developed a dual adaptive differential threshold method for detecting faint and strong echo features in radar observations of winter storms, emphasizing the importance of customizable thresholds in different meteorological conditions. Similarly, Liu et al. [24] proposed adaptive radar detection architectures tailored for heterogeneous environments, demonstrating that detection thresholds must be adjusted based on the specific characteristics of the radar system and its operating environment. These findings support the approach of adjusting the overlap threshold in our fuzzy hypothesis testing algorithm to align with the operational priorities of various radar types.
The fuzzy detection framework in Algorithm 1 enables risk-aware decision making by assigning graded rejection likelihoods when the fuzzy test statistic Z ˜ S partially overlaps the rejection region. This soft decision logic enhances interpretability and allows radar configurations to adaptively set confidence thresholds.
The proposed fuzzy detection framework enables mission-adaptive radar decision making by controlling the modulation of detection thresholds. Unlike classical crisp hypothesis testing, which enforces a rigid inverse relationship between the false alarm probability ( P F A ) and the miss probability ( P M ), the fuzzy methodology introduces a flexible decision layer. This flexibility stems from evaluating the degree of overlap between the fuzzy test statistic interval Z ˜ S and the rejection threshold Z α , as illustrated in Figure 6. Partial overlap enables the assignment of soft rejection probabilities, allowing radar systems to tailor detection responses according to context-specific requirements.
Decision thresholds can be adapted by adjusting one or more of the following:
  • The fuzzification width ϵ , which governs the uncertainty spread in triangular fuzzy numbers;
  • The assumed target signal strength θ 1 , influencing the critical value Z α ;
  • The minimum overlap percentage required to reject the null hypothesis H 0 , which translates the overlap into a probabilistic decision.
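As a small illustration of the first knob, widening $\epsilon$ widens the standardized interval $\tilde{Z}_S$ and therefore enlarges the set of borderline cases that receive a graded rather than binary decision. The crisp mean and constants below are assumed for illustration only:

```python
from math import sqrt

theta0, sigma, n = 80.0, 2.0, 45   # assumed constants for illustration
xbar_c = 80.3                      # assumed crisp sample mean
se = sigma / sqrt(n)

widths = []
for eps in (0.1, 0.5, 1.0):               # fuzzification half-width per observation
    zs_l = (xbar_c - eps - theta0) / se   # x_i^L = x_i^C - eps lowers the mean
    zs_u = (xbar_c + eps - theta0) / se   # x_i^U = x_i^C + eps raises the mean
    widths.append(zs_u - zs_l)            # interval width = 2*eps / se
print(widths)                             # strictly increasing in eps
```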

Radar-Type Sensitivity and Adaptive Threshold Selection

Step 12 of Algorithm 1 formalizes the decision-making process in ambiguous detection scenarios, where the fuzzy test statistic Z ˜ S partially overlaps the rejection region. Rather than applying a universal threshold, the required degree of overlap is selected based on the operational context of the radar system. Different radar applications—such as short-range navigation, long-range surveillance, synthetic aperture radar (SAR), or weather observation—impose distinct performance priorities.
Military detection systems may favor configurations that lower P M at the cost of a higher P F A , especially in threat-sensitive environments. In contrast, meteorological radars might prioritize reducing false alarms due to the practical cost of incorrect precipitation detection. This operational diversity is addressed in our framework by adjusting the minimum required overlap to ensure confident classification of a detection.
Recent studies have underscored the importance of such adaptive thresholding. Tomkins et al. [23] proposed a dual adaptive differential thresholding technique for winter storm detection using radar, accommodating both weak and strong echoes through dynamic threshold calibration. Similarly, Liu et al. [24] introduced context-aware detection architectures that adjust radar thresholds based on environmental variability and system constraints. These findings align with our fuzzy detection approach, in which the overlap threshold serves as a tunable control knob, linking statistical evidence to operational risk.
By quantifying overlap rather than enforcing binary cutoff values, our method bridges the gap between mathematical rigor and practical flexibility. The fuzzy rejection percentage becomes an interpretable metric of decision confidence, offering radar operators actionable insights under uncertainty.
This capacity to transition smoothly between fully rejected, fully accepted, and partially overlapping classifications empowers operators to set adaptive priorities aligned with system constraints, mission criticality, and environmental variability, solidifying the framework’s operational versatility.

8. Experiments and Results

This section presents practical examples of radar decision criteria for enhancing the probability of miss and the probability of false alarm simultaneously. All experiments were implemented in Python 3.10, using standard scientific computing libraries including NumPy for numerical operations and Matplotlib for visualization. The implementation was executed in a Jupyter Notebook environment under Anaconda 2023.03 (64-bit) on a Windows 11 machine.
Let $x_1, x_2, \ldots, x_{45}$ be independent and identically distributed (i.i.d.) random samples of the received signal power over a 1-millisecond interval, each following a normal probability density function with unknown mean $\mu$ and known variance $\sigma^2 = 4$. Suppose that the available data $\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_{45}$ are observed as fuzzy numbers rather than crisp values. For simplicity, all fuzzy numbers are assumed to be symmetric triangular fuzzy numbers. The fuzzy hypothesis is tested with a type I error probability $P_{FA} = \alpha = 0.05$.
$H_0: \mu = 80~\text{dBm}~(10 \times 10^{-9}~\text{mW})$ (no target), $\quad H_1: \mu > 80~\text{dBm}~(10 \times 10^{-9}~\text{mW})$ (target).
The sample data are real, measured from a landline surveillance radar for one target over 1 millisecond, expressed in dBm, as shown in Table 1.
  • Critical threshold: $Z_{P_{FA}} = 1.65$.
  • Averaged fuzzy bounds: $\bar{x}^L = 79.8333$ and $\bar{x}^U = 80.8333$.
    Note: The negative sign is disregarded in the hypothesis test for $\theta_0$, $\bar{x}^L$, and $\bar{x}^U$ due to the conversion from nanowatts to dBm.
  • Fuzzy test statistic:
    $\tilde{Z}_S = \left[ \dfrac{79.8333 - 80}{2/\sqrt{45}},\ \dfrac{80.8333 - 80}{2/\sqrt{45}} \right] = [-0.5601,\ 2.793]$
The probability of rejecting the null hypothesis H 0 is defined by the proportion of the fuzzy test statistic interval Z ˜ S that lies in the critical region:
$$\text{Prob}(\text{reject } H_0) = \frac{\text{Area of } \tilde{Z}_S \text{ in the critical region}}{\text{Total area of } \tilde{Z}_S} \times 100\%.$$
Using the fuzzy test statistic:
$\tilde{Z}_S = [-0.5601,\ 2.793], \qquad Z_\alpha = 1.65,$
only the portion from Z α to the upper bound of Z ˜ S lies in the critical region, as shown in Figure 8.
Assuming a linear (triangular) membership over the interval, where $0.681741$ is the membership height of $\tilde{Z}_S$ at $Z_\alpha = 1.65$:
$$\text{Prob}(\text{reject } H_0) = \frac{\frac{1}{2}\,(2.793 - 1.65)\,(0.681741)}{\frac{1}{2}\,(2.793 + 0.5601)} \times 100\% = \frac{0.389614}{1.67655} \times 100\% \approx 23.239\%.$$
Hence, the probability of accepting the alternative hypothesis $H_1$ (i.e., confirming the presence of a target) is approximately:
$$\text{Prob}(\text{accept } H_1) \approx 23.239\%.$$
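The arithmetic of this example can be replayed directly from the rounded values quoted in the text; the membership height at the cut point and the two triangular areas reproduce the reported rejection percentage to within rounding:

```python
# Values quoted in the worked example (rounded as in the text)
zs_l, zs_u, z_alpha = -0.5601, 2.793, 1.65

center = (zs_l + zs_u) / 2.0                   # peak of the symmetric triangle
mu_cut = (zs_u - z_alpha) / (zs_u - center)    # membership height at Z_alpha
area_reject = 0.5 * (zs_u - z_alpha) * mu_cut  # triangle beyond the cut
area_total = 0.5 * (zs_u - zs_l)               # whole triangle (height 1)
pct_reject = 100.0 * area_reject / area_total
print(round(mu_cut, 6), round(pct_reject, 3))
```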

8.1. Dimensions of Operational and Statistical Versatility in Fuzzy Radar Detection

The application to real radar data illustrated in Table 1 confirms that under certain radar configurations, the fuzzy algorithm allows simultaneous improvement in both P F A and P M , surpassing the classical constraint of reciprocal behavior. This adaptability confirms the suitability of the fuzzy framework for real-time, context-aware radar environments where rigid binary decisions may be insufficient.
The results demonstrate that the method yields a nuanced rejection probability of 23.239% when the test interval partially overlaps with the critical region. This capability bridges the gap between binary decisions and real-world ambiguity, confirming the method’s operational versatility.
The final decision depends on operational parameters such as:
  • Type of radar system (e.g., surveillance, SAR, guidance);
  • Geographical position (urban, rural, coastal);
  • Purpose of detection (military, civilian).
Figure 9 presents the outcome distribution of radar signal classifications using the fuzzy hypothesis testing algorithm. Out of a total of 100 radar signal samples:
  • 40 samples were categorized as “Accept H 0 (No Target)”, suggesting that the received signals were consistent with noise and did not provide sufficient evidence to indicate the presence of a target.
  • 30 samples led to a “Reject H 0 (Target Detected)” decision, implying that the signal characteristics significantly deviated from the hypothesized noise model, thereby indicating target detection.
  • 30 samples were classified in the “Overlap (Uncertain)” category. These samples exhibited partial overlap between the fuzzy test interval and the critical region, reflecting ambiguous signal characteristics where neither hypothesis could be confidently favored.
The distribution indicates that while the model demonstrates a tendency to confidently accept or reject hypotheses in 70% of the cases, a significant proportion (30%) of the samples fall into the uncertain region, underlining the inherent fuzziness and ambiguity in real radar signal processing. This highlights the utility of fuzzy inference in capturing uncertainty rather than enforcing binary decisions.

8.2. Evaluation Metrics and Confusion Matrix

To evaluate the performance of the proposed fuzzy hypothesis testing algorithm for radar signal detection, a simulated dataset of 500 samples was used, comprising 250 samples under the null hypothesis ( H 0 : noise) and 250 under the alternative hypothesis ( H 1 : target present). The model was applied, and the results are summarized in the confusion matrix depicted in Figure 10; the statistics are summarized in Table 2.
The derived performance metrics are as follows:
  • Accuracy: 88.4%;
  • Precision ( H 1 ): 87.6%;
  • Recall ( H 1 ): 90.3%;
  • F1 Score: 89.0%.
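The four figures follow mechanically from the confusion matrix in Table 2 (TN = 208, FP = 33, FN = 25, TP = 234):

```python
tn, fp, fn, tp = 208, 33, 25, 234   # counts from Table 2

accuracy  = (tp + tn) / (tp + tn + fp + fn)   # 442 / 500
precision = tp / (tp + fp)                    # 234 / 267
recall    = tp / (tp + fn)                    # 234 / 259
f1 = 2 * precision * recall / (precision + recall)

print(round(100 * accuracy, 1),   # 88.4
      round(100 * precision, 1),  # 87.6
      round(100 * recall, 1),     # 90.3
      round(100 * f1, 1))         # 89.0
```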
These results indicate strong performance by the fuzzy radar detection algorithm. The accuracy of 88.4% demonstrates the model’s overall reliability across both classes. The high precision value of 87.6% implies a low false alarm rate, ensuring that most detections are indeed true targets. Furthermore, the recall of 90.3% indicates the algorithm’s effectiveness in correctly identifying actual targets, making it suitable for critical applications such as military radar systems and urban surveillance, where missed detections could be consequential. The balanced F1 score confirms that the model performs consistently across both sensitivity and specificity measures. This evaluation highlights the fuzzy hypothesis testing algorithm’s robustness in handling uncertainty and imprecision in radar signal detection, validating its use in real-world detection systems.
The advantages of this proposed method over traditional approaches are clearly illustrated in Table 3. This innovative approach introduces a nuanced decision-making process that incorporates varying degrees of acceptance and rejection, which enhances its practicality and scientific reliability for real-world applications. Additionally, this method is versatile, making it suitable for different radar systems, including surveillance radar, which serves distinct functions compared to tracking radar. The positioning of the radar is also crucial for optimizing its performance. Ultimately, this method applies to both military and civilian radar applications, facilitating advancements across multiple fields.
This comprehensive comparison (Table 3) reveals the enhanced responsiveness of the proposed method to signal ambiguities commonly encountered in field-deployed radar systems. Unlike traditional crisp thresholds, which rigidly dichotomize decisions, the fuzzy approach accommodates real-world imperfections, such as environmental noise and positioning variance. This nuanced adaptability is particularly beneficial in military surveillance, urban monitoring, and emerging applications such as drone detection or operations in cluttered terrain.
There are two main types of radars: civilian and military. Civilian radar is primarily used for detection, functioning effectively because airplanes generally move at relatively slow speeds and remain at manageable distances from the ground.
Military radars, on the other hand, are more complex and can be categorized into two primary types: surveillance radars and tracking radars. In traditional radar systems, a threshold level is established to differentiate between target signals and background noise. If the power of the target signal falls below this threshold, the radar may mistakenly identify it as noise (resulting in a probability of a miss). Conversely, if noise power exceeds the threshold, the radar might incorrectly classify it as a target (leading to a probability of a false alarm).
To address these challenges, a fuzzy algorithm has been developed, which does not rely on strict decision-making criteria. Instead, it defines a target with a certain probability based on the radar’s analysis. For instance, tracking radars require a new decision to be made with every pulse, allowing designers to set a specific, relatively low acceptance value to determine whether to accept or reject a received signal. Similarly, surveillance radars, which are responsible for protecting critical areas, also require designers to adjust an acceptance value to ensure effective monitoring.

9. Conclusions and Future Research Directions

This study addresses a critical gap in radar detection where traditional crisp hypothesis testing falls short in handling uncertainties inherent in real-world signal environments. Motivated by the limitations of fixed thresholds in ambiguous detection scenarios, a fuzzy hypothesis testing framework is specifically designed for radar applications. The approach enables the controlled modulation of detection thresholds through the use of fuzzy data, enhancing the system’s adaptability without compromising scientific rigor. A novel algorithm is developed that facilitates the independent and simultaneous improvement of both the probability of false alarm ( P F A ) and the probability of miss ( P M ), thereby challenging the classical assumption of their inverse relationship. The proposed methodology is validated through extensive experimentation on synthetic and real-world radar datasets, demonstrating practical utility, accuracy, and robustness under diverse operational conditions.
While the proposed framework is developed under the assumption that the radar signal follows a normal distribution, this choice was primarily made to enable analytical derivations and closed-form expressions for key detection metrics such as the probability of false alarm ( P F A ) and probability of miss ( P M ). However, we acknowledge that in real-world radar environments, signal distributions may deviate from normality due to clutter, jamming, or multi-modal interference, potentially following exponential or compound-Gaussian models. In such cases, the parametric design used here may yield suboptimal thresholding and less accurate detection probabilities.
Despite this, the fuzzy hypothesis testing framework introduced in this paper is not inherently tied to a specific distribution. The use of fuzzy-valued test statistics and partial rejection intervals provides a flexible foundation that can be recalibrated for non-Gaussian scenarios by estimating the underlying distribution or using empirical (nonparametric) density estimation techniques. Although nonparametric methods offer greater flexibility, they often require larger sample sizes due to the limited use of distributional information. Future work will also consider the integration of established nonparametric techniques such as the Sign test, Signed-Rank test, Rank-Sum test, Kruskal–Wallis test, and the Runs test to enrich the detection capability. This work advances the theoretical foundation of statistical decision making under uncertainty. It lays a practical foundation for more intelligent and context-adaptive radar systems applicable to both civilian surveillance and defense operations. Future work will also explore the integration of kernel-based fuzzy estimators and robust distribution-free test constructions to generalize the method for broader applicability, while preserving its ability to model uncertainty and support soft decision making in radar signal detection.
Additionally, while classical evaluation metrics such as the confusion matrix are widely used in crisp classification tasks, their direct application to fuzzy systems remains problematic due to the continuous and overlapping nature of fuzzy membership values. This complexity complicates the definition of true positives, false positives, and other elements of the confusion matrix, which traditionally rely on binary decisions. In fuzzy radar detection, signals may exhibit partial or uncertain class memberships, necessitating more nuanced evaluation tools.
Future work should explore robust extensions of the confusion matrix suited to fuzzy classifiers. These may include fuzzy-logic-based aggregation functions (e.g., t-norms, fuzzy implications), entropy-driven evaluation measures, degree-weighted precision and recall, and interpretable thresholding strategies that reflect the underlying signal ambiguity. Advancing such evaluation techniques will improve the precision, interpretability, and diagnostic value of performance evaluation in fuzzy radar systems [25].

Author Contributions

Conceptualization, B.M., M.A. and A.K.E.; methodology, B.M. and A.K.E.; software, A.K.E., M.A. and B.M.; validation, H.H.A. and M.A.; formal analysis, H.H.A., B.M., A.K.E. and M.A.; investigation, B.M., A.K.E. and H.H.A.; resources, M.A. and A.K.E.; data curation, B.M. and H.H.A.; supervision, A.K.E. and H.H.A.; writing—original draft, H.H.A., B.M., A.K.E. and M.A.; writing—review and editing, H.H.A., B.M., M.A. and A.K.E.; visualization, H.H.A., B.M., M.A. and A.K.E.; project administration, B.M. and A.K.E.; funding acquisition, H.H.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [GRANT No. KFU252568].

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Alam, N.M.F.H.N.B.; Ku Khalif, K.M.N.; Jaini, N.I.; Gegov, A. The Application of Z-Numbers in Fuzzy Decision Making: The State of the Art. Information 2023, 14, 400.
  2. Zhang, H.; Fan, Y.; Wang, L.; Hu, Z. Fuzzy Bayesian reliability analysis method for structural health monitoring with epistemic uncertainty. Eng. Struct. 2022, 270, 114979.
  3. Xiao, L.; Liu, S. A fuzzy-set qualitative comparative analysis of leadership and teacher professional development in Chinese primary and secondary schools. Int. J. Leadersh. Educ. 2021, 26, 742–757.
  4. Guo, X.; Ji, J.; Khan, F.; Ding, L.; Yang, Y. Fuzzy Bayesian network based on an improved similarity aggregation method for risk assessment of storage tank accident. Process Saf. Environ. Prot. 2021, 149, 817–830.
  5. Peng, F.; Zhang, X.; Zhao, X. Reliability analysis method of structures based on fuzzy random field model. Eng. Struct. 2020, 229, 111565.
  6. Kovács, T. Damage assessment of bridge structures using the fuzzy logic methodology. Period. Polytech. Civ. Eng. 2023, 67, 562–572.
  7. Filzmoser, P.; Viertl, R. Testing Hypotheses with Fuzzy Data: The Fuzzy p-value. Metrika 2004, 59, 21–29.
  8. Chen, K.S.; Huang, T.H.; Lin, J.S.; Kao, W.Y.; Lo, W. Fuzzy Testing Method of Process Incapability Index. Mathematics 2024, 12, 623.
  9. Hesamian, G.; Johannssen, A.; Chukhrova, N. A Three-Stage Nonparametric Kernel-Based Time Series Model Based on Fuzzy Data. Mathematics 2023, 11, 2800.
  10. Ishibuchi, H.; Nii, M.; Tanaka, K. Fuzzy-arithmetic-based approach for extracting positive and negative linguistic rules from trained neural networks. In Proceedings of the FUZZ-IEEE'99, 1999 IEEE International Fuzzy Systems Conference (Cat. No. 99CH36315), Seoul, Republic of Korea, 22–25 August 1999; Volume 3, pp. 1382–1387.
  11. Casals, M.; Gil, M.; Gil, P. The fuzzy decision problem: An approach to the problem of testing statistical hypotheses with fuzzy information. Eur. J. Oper. Res. 1986, 27, 371–382.
  12. Arnold, B.F. An Approach to Fuzzy Hypothesis Testing. Metrika 1996, 44, 119–126.
  13. Arnold, B.F. Testing Fuzzy Hypotheses with Crisp Data. Fuzzy Sets Syst. 1998, 94, 323–333.
  14. Taheri, S.; Behboodian, J. Neyman–Pearson Lemma for fuzzy hypotheses testing. Metrika 1999, 49, 3–17.
  15. Chachi, J.; Taheri, S.M.; Viertl, R. Testing statistical hypotheses based on fuzzy confidence intervals. Austrian J. Stat. 2012, 41, 267–286.
  16. Elsherif, A.K.; Tang, C.; Zhang, L. Fuzzy hypotheses testing using fuzzy data and confidence interval in radar decision criteria. Evol. Syst. 2015, 6, 67–74.
  17. Parchami, A.; Taheri, S.M.; Mashinchi, M. Testing fuzzy hypotheses based on vague observations: A p-value approach. Stat. Pap. 2012, 53, 469–484.
  18. Song, C.; Li, B. On exact Bayesian credible sets for discrete parameters. Stat. Probab. Lett. 2025, 218, 110295.
  19. Wu, H.C. Statistical hypotheses testing for fuzzy data. Inf. Sci. 2005, 175, 30–56.
  20. Van Trees, H.L. Detection, Estimation, and Modulation Theory, Part I: Detection, Estimation, and Linear Modulation Theory; John Wiley & Sons: Hoboken, NJ, USA, 2001.
  21. Levy, B.C. Principles of Signal Detection and Parameter Estimation; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008.
  22. Kay, S.M. Fundamentals of Statistical Signal Processing, Volume 2: Detection Theory; Prentice Hall: Upper Saddle River, NJ, USA, 1998.
  23. Tomkins, L.M.; Yuter, S.E.; Miller, M.A. Dual adaptive differential threshold method for automated detection of faint and strong echo features in radar observations of winter storms. Atmos. Meas. Tech. 2024, 17, 3377–3399.
  24. Liu, J.; Massaro, D.; Orlando, D.; Farina, A. Radar Adaptive Detection Architectures for Heterogeneous Environments. IEEE Trans. Signal Process. 2020, 68, 4307–4319.
  25. Sarkar, A.; Pal, S.K. Confusion matrix for fuzzy classifiers: A measure theoretic approach. Soft Comput. 2022, 26, 3967–3982.
Figure 1. Visualization of controlled modulation of detection thresholds using fuzzy logic.
Figure 2. Illustration of the inverse relationship between the probability of false alarm and the probability of miss.
Figure 3. Illustration showing the case where α F A 1 < α F A 2 .
Figure 4. Illustration showing the case where P F A 2 < P F A 1 .
Figure 5. Illustration showing the case where P M 2 < P M 1 .
Figure 7. Membership function of radar signal data modeled as symmetric triangular fuzzy numbers. Each fuzzy observation x ˜ i is characterized by a lower bound x i L , a central value x i C , and an upper bound x i U .
Figure 8. Comparison between $\tilde{Z}_S$ and $Z_{P_{FA}}$ for the real surveillance radar data.
Figure 9. Distribution of decision outcomes based on the fuzzy-valued radar test statistic. Unlike crisp tests, the fuzzy method acknowledges overlap cases, where partial rejection likelihoods are computed.
Figure 10. Confusion matrix of the fuzzy detection algorithm showing accuracy, precision, recall, and F1-score.
Table 1. Radar data measured from landline surveillance radar in 1 ms, expressed in dBm.
x L: −79.9, −79.2, −79, −80, −80.5, −79.4, −81, −79.2, −78, −79.9, −79.9, −79.9
x U: −80, −79.3, −79.1, −80.1, −80.6, −79.5, −81.1, −79.3, −78.1, −80, −80, −80
x L: −78.9, −81.3, −82, −79, −79.8, −80.9, −82, −78.6, −78, −79, −78.6, −81.3
x U: −79, −81.4, −82.1, −79.1, −79.9, −81, −82.1, −78.5, −78.1, −79.1, −78.7, −81.4
x L: −80, −80.3, −79.8, −82, −78, −79.2, −79.9, −79.6, −81, −80, −81.6, −79.2
x U: −80.1, −80.4, −79.9, −82.1, −78.1, −79.3, −80, −79.9, −81.1, −80.1, −81.7, −79.3
x L: −78, −78.4, −81.7, −81, −79.1, −79.8, −79, −78.9, −80.9
x U: −78.1, −78.5, −81.8, −81.1, −79.2, −79.9, −79.1, −79, −81
Table 2. Confusion matrix of fuzzy radar detection (500 samples).
Actual \ Predicted | $H_0$ (No Target) | $H_1$ (Target)
$H_0$ (No Target) | 208 (True Negative) | 33 (False Positive)
$H_1$ (Target) | 25 (False Negative) | 234 (True Positive)
Table 3. Comparison between crisp and fuzzy hypotheses in radar detection.
Aspect | Crisp Hypothesis Testing | Fuzzy Hypothesis Testing (Proposed)
Decision Model | Binary (strict accept/reject) based on a fixed threshold | Graduated decision making based on partial membership and overlap
Sensitivity to Signal Variation | Highly sensitive to noise and signal fluctuation | Robust under uncertainty and noise due to fuzzy modeling
Threshold Mechanism | Static threshold values; lacks adaptability | Dynamic acceptance intervals based on fuzzy logic
Interpretation | Straightforward but may be oversimplified in real-world settings | Richer semantic interpretation of borderline and uncertain cases
Flexibility Across Systems | Rigid; less suited for dynamic environments | Flexible; adaptable to different radar types and deployment contexts
Domain Suitability | Ideal for well-controlled environments (e.g., industrial automation) | Suited for complex, uncertain environments (e.g., urban surveillance, battlefield scenarios)
Application Example | Legacy tracking systems with stable noise profiles | Surveillance radar, UAV detection, maritime monitoring, and border control
