Article

WTC-MobResNet: A Deep Learning Approach for Detecting Wind Turbine Clutter in Weather Radar Data

1 College of Electronic Engineering, Chengdu University of Information Technology, Chengdu 610225, China
2 China Meteorological Administration Tornado Key Laboratory, Beijing 100871, China
3 Jiangsu Meteorological Observation Center, Nanjing 210041, China
4 Key Laboratory of Transportation Meteorology of China Meteorological Administration, Nanjing Joint Institute for Atmospheric Sciences, Nanjing 210041, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(16), 2763; https://doi.org/10.3390/rs17162763
Submission received: 12 June 2025 / Revised: 31 July 2025 / Accepted: 6 August 2025 / Published: 9 August 2025
(This article belongs to the Section AI Remote Sensing)

Abstract

With the rapid expansion of Wind Parks (WPs), Wind Turbine Clutter (WTC) has become a significant challenge due to the interference it causes with data from next-generation Doppler weather radars. Traditional clutter detection methods struggle to strike a balance between detection accuracy and efficiency. This study proposes a deep learning model named WTC-MobResNet, which integrates the architectures of MobileNet and ResNet and is specifically designed for WTC detection tasks. The model combines the lightweight characteristics of MobileNet with the residual learning capabilities of ResNet, enabling efficient extraction of WTC features from weather radar echo data and achieving precise identification of WTC. The experimental results demonstrate that the proposed model achieves an ACC of 98.21%, a PRE of 97.52%, a POD of 98.99%, and an F1 score of 98.25%, outperforming several existing deep learning models in both detection accuracy and false alarm control. These results confirm the potential of WTC-MobResNet for real-world operational applications.

1. Introduction

Weather radar, as a crucial observational tool in meteorological science, weather forecasting, and environmental monitoring, plays a vital role in ensuring the accuracy of forecasts and the timeliness of disaster warnings [1]. However, the data quality of weather radar is affected by many factors, including the inhomogeneity of atmospheric conditions, terrain obstruction [2], building reflection, Wind Park (WP) interference [3], etc. Among them, the impact of WPs has become increasingly prominent in recent years, and current detection methods have difficulty achieving a good balance between accuracy and computing resources [4]. Therefore, developing a Wind Turbine Clutter (WTC) detection method that can ensure high accuracy and efficiently utilize computing resources is of great significance to improving the quality of weather radar data and the accuracy of weather forecasts.
As a pollution-free, renewable, and sustainable source of clean energy, wind energy has attracted growing attention worldwide. According to the Global Wind Energy Report 2024 [5], the global wind power industry achieved historic growth in the past year: new installed capacity reached 117 GW, a year-on-year increase of 50%, and cumulative installed capacity exceeded 1 TW for the first time. Wind turbine structures in WPs mainly comprise the mast, nacelle, and blades. The mast, a large metallic structure, causes strong reflections and blockage of weather radar signals, resulting in false echoes that interfere with weather detection [6,7]. While a stationary mast does not affect Doppler velocities or spectral widths [8], the rotating blades driven by the nacelle introduce dynamic scattering and Doppler features similar to those of meteorological targets. This leads to misinterpretation by radar systems, a phenomenon known as WTC [9]. Unlike ground clutter and precipitation, WTC exhibits highly variable characteristics that depend on several complex and dynamic factors, including the relative position between the radar and the wind turbines, as well as the turbines’ operational status and blade orientation. These spatial and temporal factors produce non-uniform echo patterns, making it extremely difficult to achieve robust classification using fixed thresholds of polarimetric variables alone. In practical meteorological operations, accurately detecting WTC is essential not only for ensuring the reliability of weather radar forecasts but also for preventing misinterpretation by forecasters. Spurious echoes generated by wind turbines can be mistaken for convective storms or precipitation, potentially resulting in false warnings and forecast errors. Moreover, effective identification and localization of WTC provide a vital foundation for the future development of WTC filtering algorithms, which are key to enhancing the overall quality and usability of weather radar data.
In the field of WTC detection, current research mainly focuses on analyzing radar data using time-domain and spectral features. Gallardo et al. [10] proposed identifying WTC based on large radar cross-sections (RCSs) and Doppler spectral widths by applying thresholds to signal strength and spectral width. However, this method is limited by the relative positions of the radar and turbines, reducing its generalizability across locations. Cheong et al. [11] employed Level-II radar data (reflectivity, Doppler velocity, and spectral width) together with a fuzzy inference system (FIS) for WTC detection under normal weather conditions, but it struggled with complex phenomena such as thunderstorms and tornadoes. Building on this, He et al. [12] used WSR-88D radar data to construct feature distribution histograms and membership functions, enabling adaptive WTC detection; yet their work was confined to a single radar site, lacking validation across broader regions. In 2023, Su et al. [13] used single-polarization radar base data and a fuzzy logic algorithm to distinguish WTC from precipitation echoes and ground clutter. However, that method did not incorporate dual-polarization data, which offer richer microphysical information and are increasingly adopted in modern radar systems in China. This omission limits detection accuracy, as our findings indicate that dual-polarization variables are highly effective in distinguishing WTC from meteorological echoes and ground clutter. Specifically, WTC exhibits very low and often negative differential reflectivity (ZDR) due to asymmetric scattering from turbine structures, whereas precipitation typically yields positive or near-zero ZDR, and ground clutter is near zero. Additionally, the correlation coefficient (CC) for WTC is significantly lower (typically 0.24–0.48) because of metallic scattering and phase instability from rotating blades. These differences, especially in ZDR and CC, make dual-polarization radar an effective tool for identifying WTC under various weather conditions.
In recent years, with the rapid development of deep learning technology, its application in many fields has made breakthrough progress, especially in image recognition, target detection, and data analysis [14]. In meteorology, the cross-integration of deep learning has likewise brought new ideas and solutions to traditional forecasting and clutter recognition methods. Weather radar data are typically high-dimensional, noisy, strongly coupled in space and time, and nonlinearly complex [15]. These characteristics continue to pose challenges for traditional physical models and statistical methods in modeling and recognition efficiency. The introduction of deep learning, especially architectures represented by convolutional neural networks (CNNs) [16], enables the automatic learning of discriminative spatial features from massive meteorological data, and it is widely used in tasks such as weather target detection, echo classification, and quantitative estimation of precipitation (QPE) from radar echoes [17]. In addition, newer network structures such as the Transformer [18] and self-attention mechanisms [19] have shown great potential in modeling long-distance spatial dependencies and multi-channel feature fusion, and they have gradually been introduced into radar clutter detection and multi-source meteorological data fusion analysis.
To address the shortcomings of traditional methods, namely their reliance on threshold settings that adapt poorly to complex meteorological conditions, the limited generalization ability of fuzzy logic detection, and the underutilization of dual-polarization radar data, we propose a novel model that takes dual-polarization Doppler weather radar data as input and combines the lightweight architecture of MobileNet [20] with the deep residual learning capability of ResNet [21]. This hybrid network, named WTC-MobResNet, is designed to efficiently and accurately identify and separate WTC from large volumes of weather radar data.
The organization of this paper is as follows: Section 2 provides a detailed description of the weather radar data used in this study and presents the data preprocessing procedures. Section 3 introduces the design and architecture of the proposed WTC-MobResNet model, explaining the integration of MobileNet and ResNet modules for effective WTC detection. Section 4 reports the experimental results, including model performance evaluations and case studies under various weather conditions. Finally, Section 5 summarizes the main findings, discusses limitations, and suggests potential directions for future research in the field of WTC detection.

2. Data and Preprocessing

2.1. Weather Radar Data

The Z9513 weather radar data used in this study were obtained from the S-band dual-polarization new-generation weather radar located in Nantong City, Jiangsu Province, China. Approximately 48.5 km northeast of the radar lies the WP area known as the Rudong Offshore WPs (Figure 1). The Rudong Offshore WPs constitute China’s first flexible direct-current offshore wind power project and the largest offshore WP in Asia [22]. The project achieved full-capacity grid connection on 25 December 2021. It comprises 265 wind turbines and generates approximately 9.04 million kilowatt-hours of electricity per day, sufficient to meet the annual electricity consumption of about 4000 average households. The CINRAD-SA radar system supports four types of volume coverage patterns (VCPs): VCP11, VCP21, VCP31, and VCP32 [23]. The radar data used in this study were collected under the VCP21 scanning mode, which includes nine elevation angles and has a temporal resolution of 6 min. The spatial resolution for reflectivity, differential reflectivity, correlation coefficient, and differential phase shift is 250 m, with a maximum detection range of 460 km. The spatial resolution for Doppler velocity and spectrum width is also 250 m, but their maximum detection range is 230 km.
Since the overall height of the wind turbines in the Rudong Offshore WPs is about 150 to 250 m and the WP is 48.5 km from the radar site, the radar beam height exceeds 1.5 km once the scanning elevation angle is raised to 1.5°, far above the height range of the wind turbines. Therefore, WTC interference from the wind turbines cannot be observed in the radar data at higher elevation angles. The actual interference of the WPs with the Nantong Z9513 weather radar at different elevation angles is shown in Figure 2.
Finally, the six types of Level-II radar data products used in this study (reflectivity, radial velocity, spectrum width, differential reflectivity, differential phase shift, and correlation coefficient) were all collected at an elevation angle of 0.5°. In addition, precipitation intensity is classified according to the reflectivity factor (unit: dBZ): reflectivity below 8 dBZ is judged as clear sky; 8–15 dBZ as light rain; 15–33 dBZ as moderate rain; and above 33 dBZ as heavy rain. Table 1 lists all dataset sample cases involved in this experiment; a total of 12 weather radar cases were used.
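For concreteness, the classification rule above can be written as a small function. This is a minimal sketch; the function name and the assignment of the exact boundary values (8, 15, and 33 dBZ) to the lighter category are our own assumptions, since the paper does not specify how boundary values are handled.

```python
def classify_precipitation(reflectivity_dbz: float) -> str:
    """Map a reflectivity value (dBZ) to the weather category used in this
    study. Boundary values are assigned to the lighter category here (an
    assumption; the paper leaves exact boundaries unspecified)."""
    if reflectivity_dbz < 8:
        return "clear sky"
    elif reflectivity_dbz <= 15:
        return "light rain"
    elif reflectivity_dbz <= 33:
        return "moderate rain"
    else:
        return "heavy rain"
```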

2.2. Data Preprocessing

After the precision upgrade of the Nantong radar, its fundamental weather radar data still follows the standard base data format and supports streaming transmission [24]. However, since the six types of Level-II data products collected by the radar have not undergone azimuth correction [25], it is necessary to apply azimuth correction to all six products to ensure that the 0th radial data point in the processed output always begins from true north and proceeds sequentially.
After azimuth correction, based on prior geographical information about the WPs, radar data affected by WTC in all six channels are segmented using an 8 × 8 sliding window (corresponding to roughly 2 km × 2 km on the ground). We considered whether a smaller sliding window could be used: although it would provide more refined data samples, it would struggle to capture the complete spatial characteristics of WTC, degrading the model’s detection ability. A larger sliding window, while including more background information, would also introduce a large amount of redundant and irrelevant area, increasing the difficulty of model training. Therefore, an 8 × 8 data block was ultimately determined to be the most appropriate. The stride was set to 1 to ensure that the sliding blocks fully covered the entire area affected by WTC. Radar echo data not affected by WP interference were subjected to the same segmentation process and used as negative samples for classification model training. However, the segmented blocks from real radar data contained a large number of NaN (NULL) values. To address this, we performed data cleaning by setting a threshold: any 8 × 8 data block with more than 40% NaN was considered invalid and excluded from the dataset. After this initial cleaning, a small portion of blocks still contained NaN, so further processing was applied. For blocks within WTC regions, NaN values were filled with −33, which emphasizes boundary contrast while preserving the original data characteristics. The value of −33 corresponds to the standard invalid radial data value in CINRAD/SA radar systems, calculated by the following formula [26]:
$$\text{Radial Data} = \frac{\text{Stored Value} - \text{Offset}}{\text{Scale}}$$
where the default Offset is 66, and Scale is 2. Under this setting, when the Stored Value is 0, the corresponding Radial Data is −33. This conventionally represents missing or unusable data in radar products and avoids introducing misleading numerical features during model training. For blocks outside the WTC regions, NaN values were filled with the mean value of the entire 8 × 8 block to ensure smoother data transitions.
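For clarity, the segmentation and NaN-handling rules above can be sketched in NumPy. This is a minimal sketch under stated assumptions: the function names are ours, and filling non-WTC blocks with the per-channel block mean (rather than a single mean over all six channels) is our interpretation of "the mean value of the entire 8 × 8 block".

```python
import numpy as np

def extract_blocks(volume, block=8, stride=1):
    """Slide an 8 x 8 window (stride 1) over a (6, n_radials, n_bins) scan
    and yield (radial_idx, bin_idx, 6 x 8 x 8 block) tuples."""
    _, n_radials, n_bins = volume.shape
    for i in range(0, n_radials - block + 1, stride):
        for j in range(0, n_bins - block + 1, stride):
            yield i, j, volume[:, i:i + block, j:j + block]

def clean_block(block, inside_wtc, max_nan_ratio=0.40, fill_invalid=-33.0):
    """Apply the cleaning rules to one 6 x 8 x 8 block.
    Returns None if the block exceeds the 40% NaN threshold and is discarded."""
    nan_mask = np.isnan(block)
    if nan_mask.mean() > max_nan_ratio:
        return None                        # >40% NaN: block is invalid
    block = block.copy()
    if inside_wtc:
        block[nan_mask] = fill_invalid     # -33: CINRAD/SA invalid radial value
    else:
        # fill with the per-channel mean of the valid samples in the block
        means = np.nanmean(block, axis=(1, 2), keepdims=True)
        block = np.where(nan_mask, means, block)
    return block
```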
To prevent class imbalance during model training, we applied random undersampling [27], which randomly selects a subset of the majority class (negative samples) to match the number of WTC (positive) samples, ensuring a balanced dataset. After cleaning, the dataset was constructed by labeling all WTC data as positive samples (class = 1) and all non-clutter radar data as negative samples (class = 0).
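A minimal sketch of the undersampling step, assuming the negative blocks are held in a Python list; the seed is ours, added only for reproducibility.

```python
import numpy as np

def random_undersample(negatives, n_positives, seed=0):
    """Randomly keep as many negative (non-clutter) blocks as there are
    positive (WTC) blocks, producing a balanced 1:1 dataset."""
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(negatives), size=n_positives, replace=False)
    return [negatives[i] for i in keep]
```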
Although the radar data used were collected at different times, they all originated from the same radar site. To improve the model’s generalization ability, we performed data augmentation [28], including 90°, 180°, and 270° rotations, as well as horizontal and vertical flipping. This not only allowed the model to learn clutter features from different angles and patterns but also reduced the risk of overfitting to specific samples, enabling it to focus more on essential data characteristics and improving detection accuracy and adaptability in real radar echo scenarios. The radar dataset’s construction process is illustrated in Figure 3.
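The augmentation set can be sketched in NumPy as follows; rotations and flips act on the two spatial axes only, so the six radar channels stay aligned. The function name and the exact variant bookkeeping are our own choices.

```python
import numpy as np

def augment(block):
    """Return the original 6 x 8 x 8 block plus its 90/180/270-degree
    rotations and horizontal/vertical flips, applied identically
    across all six channels."""
    variants = [block]
    for k in (1, 2, 3):                      # 90, 180, 270 degrees
        variants.append(np.rot90(block, k=k, axes=(1, 2)))
    variants.append(block[:, :, ::-1])       # horizontal flip (range bins)
    variants.append(block[:, ::-1, :])       # vertical flip (radials)
    return variants
```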
Finally, a total of 120,660 samples were obtained: 60,330 positive and 60,330 negative (random undersampling kept the number of negative samples equal to the number of positive samples). Each sample is a three-dimensional matrix of 6 (channels) × 8 (radials) × 8 (range bins), where 6 represents the six weather radar data channels (reflectivity, radial velocity, spectrum width, differential reflectivity, differential phase shift, and correlation coefficient), and the 8 × 8 grid represents the local sub-blocks clipped from the radar’s polar-coordinate data. Detailed sample information is provided in Table 2.

3. The Design of WTC-MobResNet

3.1. Overall Framework

In this study, we propose a deep learning model named WTC-MobResNet, which integrates MobileNet and ResNet architectures and is specifically designed for the detection of WTC in weather radar echo maps. MobileNet is known for its lightweight architecture using depthwise separable convolutions, while ResNet addresses the degradation problem in deep networks through residual connections. By combining the strengths of both architectures, our model aims to remain lightweight while enhancing its ability to learn complex features.
In the MobileNet component, we employ depthwise-separable convolutions to reduce computational cost and the number of parameters. Each MobileNet block mainly consists of a depthwise convolution layer and a pointwise convolution layer: the depthwise convolution uses grouped convolution (one group per input channel) to reduce computational complexity, while the pointwise (1 × 1) convolution mixes information across channels and adjusts the channel count. This design helps MobileNet maintain high accuracy with a significantly reduced model size and computation.
The ResNet component enhances the model’s learning capability by introducing residual blocks. Each residual block includes two convolutional layers and a ReLU activation connected by a skip connection that allows gradients to flow directly through the network. This helps alleviate the vanishing gradient problem commonly encountered in deep networks [29], enabling the model to learn deeper feature representations while maintaining efficient training. The overall model structure is shown in Figure 4.
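To make the two building blocks concrete, here is a minimal PyTorch sketch. The channel counts, normalization placement, and bias settings are illustrative assumptions rather than the exact operational implementation.

```python
import torch
import torch.nn as nn

class MobileNetBlock(nn.Module):
    """Depthwise-separable convolution: a 3x3 depthwise conv with one group
    per input channel, followed by a 1x1 pointwise conv that mixes channels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class ConvResBlock(nn.Module):
    """Residual block: two 3x3 conv layers plus an identity skip connection,
    so gradients can flow directly through the addition."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, kernel_size=3, padding=1, bias=False)
        self.conv2 = nn.Conv2d(ch, ch, kernel_size=3, padding=1, bias=False)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.act(self.conv1(x))
        out = self.conv2(out)
        return self.act(out + x)   # skip connection
```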
The model architecture comprises three stages: feature extraction, feature enhancement, and final classification; the detailed structural parameters are given in Table A1 of Appendix A.
In the feature extraction stage, the input data is divided into training, validation, and test sets with a ratio of 8:1:1. The dataset is split at the well-prepared sample level, meaning that each sample is exclusively assigned to one of the three sets, with no temporal overlap between them. This ensures that the validation and test samples originate from radar cases that are completely independent of those used during training, thereby providing a more robust and unbiased evaluation of the model’s generalization performance. The data is then passed through the MobileNet block for preliminary feature extraction. Since the traditional MobileNet is designed for three-channel image input and our WTC dataset consists of six-channel radar data, we modified the first convolutional layer of MobileNet to accommodate the input characteristics of the radar data. In total, 15 MobileNet blocks were designed for this stage. Given that our input six-channel matrix has a spatial resolution of 8 × 8, we intentionally reduced the use of pooling layers to prevent excessive loss of spatial resolution, which may hinder the model’s ability to capture sufficient local features. Additionally, to facilitate smooth integration with the subsequent ResNet block, we introduce a fully connected layer after MobileNet to adjust the feature dimensions to match the input requirements of the ResNet part. Specifically, after passing through the 15 MobileNet blocks, the output feature map has a shape of (64, 4, 4), corresponding to 64 channels and a spatial size of 4 × 4. This feature map is then flattened into a 1D vector of length 1024 (64 × 4 × 4), which serves as the input to the fully connected layer. The FC layer reduces this 1024-dimensional vector to a 128-dimensional feature vector, effectively compressing the spatial information while preserving key semantic features. This transformation ensures compatibility with the subsequent ResNet block, enhancing the model’s overall detection capability.
In the feature enhancement stage, we employ a ResNet architecture to further refine and enhance the extracted features. ResNet utilizes residual connections to effectively mitigate the vanishing gradient problem in deep network training, and it captures deeper feature representations. Moreover, the multiple convolutional layers and nonlinear activation units in ResNet significantly improve the model’s discriminative power in identifying WTC, enabling it to better distinguish between WTC and other meteorological echo features.
In the final classification stage, the enhanced features are passed through a fully connected layer and then classified using the Sigmoid activation function to perform binary classification for WTC detection. The Sigmoid function compresses any input value into a range between 0 and 1. After passing the feature vector through the Sigmoid function, we apply a threshold of 0.5 for classification: samples with a computed value of >0.5 are classified as positive samples, and those with a value of <0.5 are classified as negative samples. The Sigmoid function is defined as follows:
$$\mathrm{Sigmoid}(x) = \frac{1}{1 + \exp(-x)} \quad (1)$$
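Putting the three stages together, the following sketch shows the end-to-end layout, reusing MobileNetBlock from the sketch above. One caveat: Table A1 applies 3 × 3 convolutions in the residual stage after an unspecified reshape of the 128-dimensional vector, so this sketch substitutes Linear-layer residual blocks as a stand-in; the stem configuration follows Table A1, and all other details are assumptions.

```python
import torch
import torch.nn as nn

class LinearResBlock(nn.Module):
    """Residual block on the 128-d feature vector. Stand-in for the paper's
    conv-based residual blocks (Table A1), whose reshape is unspecified."""
    def __init__(self, dim=128):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.fc2(self.act(self.fc1(x))) + x)

class WTCMobResNetSketch(nn.Module):
    """Three-stage layout from Section 3.1 / Table A1: conv stem adapted to
    6-channel input -> 15 MobileNet blocks -> flatten (64*4*4 = 1024) ->
    FC to 128 -> 4 residual blocks -> FC to 1 -> sigmoid."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),   # 8x8 -> 4x4
        )
        blocks = [MobileNetBlock(32, 64)] + [MobileNetBlock(64, 64) for _ in range(14)]
        self.mobile = nn.Sequential(*blocks)
        self.compress = nn.Linear(64 * 4 * 4, 128)   # 1024 -> 128
        self.enhance = nn.Sequential(*[LinearResBlock(128) for _ in range(4)])
        self.head = nn.Linear(128, 1)

    def forward(self, x):                    # x: (N, 6, 8, 8)
        f = self.mobile(self.stem(x))        # (N, 64, 4, 4)
        f = self.compress(f.flatten(1))      # (N, 128)
        f = self.enhance(f)
        return torch.sigmoid(self.head(f)).squeeze(1)   # probability in (0, 1)

# Classification with the 0.5 threshold described above:
# pred = (model(batch) > 0.5).long()        # 1 = WTC, 0 = non-WTC
```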

3.2. Loss Function

In the field of deep learning, commonly used loss functions include hinge loss, Kullback–Leibler divergence loss, contrastive loss, and binary cross-entropy (BCE) [30] loss, each with its unique advantages and applicable scenarios. In this study, we chose to use the BCE loss function. Its mathematical expression is as follows:
$$L_{BCE} = -\frac{1}{N}\sum_{i=1}^{N}\left[\, y_i \log \hat{y}_i + (1 - y_i)\log\left(1 - \hat{y}_i\right) \right] \quad (2)$$
where $N$ denotes the number of samples, $y_i$ represents the true label of the $i$-th sample (0 or 1), and $\hat{y}_i$ denotes the predicted probability for the $i$-th sample. This loss function calculates the cross-entropy between the true labels and the predicted probabilities, measuring how well the model’s output aligns with the true distribution. In the WTC detection task, the model aims to determine whether each radar echo belongs to a WTC region (positive sample) or a non-WTC region (negative sample), essentially framing the problem as probabilistic binary classification. The BCE loss function is particularly well suited for this purpose, as it directly reflects the divergence between predicted probabilities and the binary ground truth, and it provides informative gradient signals even when predictions are highly incorrect. Given the spatial non-stationarity and complex background interference inherent in weather radar data, BCE enhances the model’s ability to capture subtle distinctions between WTC and non-WTC echoes, thereby improving the robustness and reliability of the detection results.
In the actual training process, we observed that the model began to overfit at about 130 epochs; that is, it performed well on the training set, but its detection ability on the test set decreased [31]. To improve convergence stability and suppress overfitting, this study introduced an L2 regularization (weight decay) strategy. The basic idea of L2 regularization is to add a penalty term to the original loss function $L_{BCE}$ to constrain the magnitude of the model parameters, thereby improving the model’s generalization ability [32]. Its mathematical expression is as follows:
$$L_{reg} = L_{BCE} + \lambda \sum_{i} w_i^{2} \quad (3)$$
where $L_{BCE}$ is the original binary cross-entropy loss, $w_i$ denotes the weight parameters of the model, and $\lambda$ is the hyperparameter controlling the strength of regularization, which determines the contribution of the regularization term to the total loss. In this study, $\lambda$ was set to 0.01 to strike a balance between model complexity and generalization ability.
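A minimal PyTorch training-step sketch tying Equations (2) and (3) together, reusing WTCMobResNetSketch from the sketch in Section 3.1. The optimizer choice (Adam) and learning rate are our assumptions; the paper only specifies $\lambda = 0.01$, which the optimizer’s weight_decay applies as an L2 penalty on the weights (up to a constant factor).

```python
import torch
import torch.nn as nn

model = WTCMobResNetSketch()             # from the Section 3.1 sketch
criterion = nn.BCELoss()                 # binary cross-entropy on probabilities
# weight_decay realizes the L2 penalty of Equation (3) with lambda = 0.01
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.01)

def train_step(x, y):
    """One optimization step: x is (N, 6, 8, 8) radar blocks, y is (N,) 0/1 labels."""
    model.train()
    optimizer.zero_grad()
    prob = model(x)                      # probabilities in (0, 1)
    loss = criterion(prob, y.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```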
The experimental results show that after introducing L2 regularization, the model’s performance on the validation set improved significantly, with overfitting effectively mitigated and generalization error noticeably reduced. Finally, after 230 epochs of training, we selected the model parameters that achieved the best performance on the validation set and used this model to detect WTC in new radar echo data in order to evaluate its practical effectiveness in the WTC detection task.

4. Results

4.1. Model Evaluation

In the task of WTC detection, to comprehensively evaluate the model’s performance, we used two additional deep learning models for comparative experiments. The first is the ResNet model, which leverages residual connections to address the vanishing gradient and degradation issues commonly encountered in deep neural networks. The second is ShuffleNet, a lightweight network that reduces computational costs while maintaining high accuracy through the use of grouped convolutions and channel shuffle mechanisms [33].
Furthermore, we adopted a series of classification indicators: training loss, validation loss, training accuracy, and validation accuracy on the training and validation sets, as well as accuracy (ACC), precision (PRE), probability of detection (POD), F1 score, critical success index (CSI), and false alarm rate (FAR) [34] on the test set. Table 3 lists the meaning of each parameter in the classification indicators.
In the model training and validation phases, the training loss and validation loss reflect the error variation trends during training and validation processes, which help determine whether the model is overfitting or underfitting. The training accuracy and validation accuracy measure the classification accuracy on the training and validation sets, providing insight into the model’s convergence. The actual results of the three models on the training and validation sets are shown in Figure 5.
In Figure 5a,d, it can be observed that the WTC-MobResNet model performs well on both the training and validation sets, with high accuracy and low loss. This indicates that the model not only learns effective features but also demonstrates strong generalization ability. In Figure 5b,e, although ResNet shows good performance on the training set, its validation loss and accuracy fluctuate more significantly. In Figure 5c,f, ShuffleNet exhibits a rapid decline in training loss and relatively stable validation loss, but its accuracy still falls short of WTC-MobResNet’s. A comprehensive analysis is presented in Table 4.
During the testing phase, ACC (Equation (4)) is used as a general metric of overall classification performance, representing the proportion of correctly classified samples. However, relying solely on ACC may not sufficiently reflect the model’s actual detection effectiveness. Therefore, we further introduce PRE (Equation (5)), POD (Equation (6)), and the F1 score (Equation (7)). PRE measures the proportion of correctly predicted positive samples among all predicted positives, indicating the false alarm level. POD measures the proportion of actual positive samples that are correctly identified, reflecting the model’s detection capability. The F1 score, as the harmonic mean of PRE and POD, provides a balanced evaluation between these two metrics.
In addition, considering the practical requirements of WTC detection, we pay particular attention to the CSI (Equation (8)) and FAR (Equation (9)). CSI evaluates the model’s WTC detection performance by jointly considering hits, misses, and false alarms: the higher the CSI, the stronger the model’s ability to detect clutter. The FAR evaluates the model’s false alarms, that is, the proportion of samples predicted as WTC that are actually non-WTC. The lower this index, the lower the false detection rate, which helps reduce interference with normal meteorological targets.
$$ACC = \frac{TP + TN}{TP + TN + FP + FN} \quad (4)$$
$$PRE = \frac{TP}{TP + FP} \quad (5)$$
$$POD = \frac{TP}{TP + FN} \quad (6)$$
$$F1\ \mathrm{score} = \frac{2 \times PRE \times POD}{PRE + POD} \quad (7)$$
$$CSI = \frac{TP}{TP + FN + FP} \quad (8)$$
$$FAR = \frac{FP}{TP + FP} \quad (9)$$
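The six test-set metrics follow directly from the confusion-matrix counts; a minimal NumPy sketch (zero-division guards omitted for brevity):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Compute the six test-set metrics (Equations (4)-(9)) from binary
    labels and predictions (1 = WTC, 0 = non-WTC)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))   # hits
    tn = np.sum((y_pred == 0) & (y_true == 0))   # correct negatives
    fp = np.sum((y_pred == 1) & (y_true == 0))   # false alarms
    fn = np.sum((y_pred == 0) & (y_true == 1))   # misses
    pre = tp / (tp + fp)
    pod = tp / (tp + fn)
    return dict(
        ACC=(tp + tn) / (tp + tn + fp + fn),
        PRE=pre,
        POD=pod,
        F1=2 * pre * pod / (pre + pod),
        CSI=tp / (tp + fn + fp),
        FAR=fp / (tp + fp),
    )
```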
The final indicator score results of the three models on the test set are shown in Table 5.
As shown in Figure 6, WTC-MobResNet achieves the best performance in the WTC detection task, with an overall ACC of 0.9821, a PRE of 0.9752, a POD of 0.9899, an F1 score of 0.9825, a CSI of 0.9656, and the lowest FAR, of only 0.0172.
In comparison, ResNet has an ACC of 0.9545, a PRE of 0.9425, a POD of 0.9692, an F1 score of 0.9557, a CSI of 0.9151, and a FAR of 0.0574. Although its overall performance is slightly lower than that of WTC-MobResNet, it still demonstrates good detection capability.
ShuffleNet shows relatively balanced detection capabilities, with an ACC of 0.9606, a PRE of 0.9547, a POD of 0.9792, an F1 score of 0.9668, a CSI of 0.9358, and a FAR of 0.0452, indicating that it retains lightweight advantages while maintaining detection performance.
Overall, WTC-MobResNet outperforms other models in terms of ACC, FAR, and comprehensive detection capabilities, verifying the effectiveness of its structural design and its application potential in actual radar interference detection. To provide a more detailed visual comparison, Figure 7 presents the specific detection results of the three deep learning models. Additionally, individual range bins identified as WTC are marked with black circles on the radar echo plots for clearer localization and analysis.
As can be observed in Figure 7, the WTC-MobResNet model achieves the most precise localization of WTC, effectively capturing the spatial distribution of the interference with minimal false alarms. In contrast, the ResNet model exhibits a degree of misdetection, erroneously identifying non-clutter regions as WTC. While ShuffleNet also delivers reasonably good detection performance, it shows signs of under-detection and misses some areas of clutter. These results further demonstrate the superior detection capability and spatial accuracy of the proposed WTC-MobResNet architecture.

4.2. Case Test

To evaluate the effectiveness of the WTC-MobResNet model in detecting WTC under different weather conditions, base data collected by the Nantong Z9513 radar (whose WPs are located 48.5 km east–northeast of the radar station) under four different weather conditions were selected for detection. Additionally, to verify the generalization ability of the model, the Z9740 radar in Changsha (whose WPs are located 6–40 km east of the radar station) was also tested. Figures 8–12 show the detection results of the WTC-MobResNet model on the Nantong radar, and Figures 13 and 14 show its detection results on the Changsha radar.

4.2.1. Nantong Radar Case

The four Nantong radar cases cover the following weather conditions: clear sky on 10 June 2023; light rain on 19 October 2021; moderate rain at 22:21 on 15 April 2023; and heavy rain at 23:39 on 15 April 2023. The following figure shows the detection result under clear-sky conditions.
Under the clear-sky conditions in Figure 8, because the blade speed of the wind turbines is low, the WTC does not exhibit many significant features in radial velocity, spectral width, or differential phase shift. However, in the reflectivity, differential reflectivity, and correlation coefficient products, the values of WTC differ markedly from those of meteorological echoes. Under clear-sky conditions, the WTC reflectivity is generally between 10 and 20 dBZ, the differential reflectivity is generally between −2.4 and 0 dB, and the correlation coefficient is generally between 0.24 and 0.48 (the correlation coefficient of typical meteorological targets is greater than 0.9, and that of ground clutter is between 0.4 and 0.8). Combined with the detection results of the WTC-MobResNet model, this demonstrates that the proposed WTC detection method is effective.
In the real-case test shown in Figure 10, we observed that when WPs are exposed to large-scale rainfall, the WTC is almost completely masked by the weather process. As a result, the radar echo data in the obscured areas are not contaminated by WTC. This phenomenon was also noted by Hall et al. [35]. However, the WTC-MobResNet model tends to misidentify some rainfall echoes as WTC under such conditions.
To address this issue, we further optimized the algorithm, enabling the enhanced WTC-MobResNet model to effectively detect WTC even when it is completely covered by widespread precipitation. To facilitate the analysis of the model’s improvement, the range bin identified as WTC is marked with black circles on the actual radar reflectivity echo images (Figure 11b,d). The final optimized results are as follows:
Based on the data collected at the Nantong radar site under the meteorological conditions shown in Figures 8–12, when the clutter caused by wind turbines coincides with rainy weather, it is difficult to distinguish the precipitation echo from the WTC using single-polarization radar data products alone (such as reflectivity, radial velocity, and spectral width). This is because the towers and rotating blades of wind turbines produce echo signals that are very similar to meteorological targets in the reflectivity, radial velocity, and spectral width maps. However, in dual-polarization radar data products (such as differential reflectivity, differential phase shift, and correlation coefficient), significant differences between WTC and meteorological echoes can be observed (for example, the differential reflectivity of WTC is usually negative, while its correlation coefficient is between 0.24 and 0.48). Finally, by comparing the detection results of WTC-MobResNet with ground truth labels, we conducted a quantitative evaluation of its performance. The evaluation metrics, summarized in Table 6, present a comparative analysis of the WTC-MobResNet model’s performance under four different weather conditions using data from the Nantong Z9513 radar, demonstrating its robust detection capability. Therefore, incorporating dual-polarization radar data products into the model is a meaningful strategy for improving the accuracy of WTC detection.

4.2.2. Changsha Radar Case

To test the generalization ability of the model, WTC detection was also performed on the Hunan Changsha Z9740 radar, which was not included in the training dataset (Figure 13 and Figure 14). This radar site has long been affected by WTC interference. The final detection results are as follows.
By comparing the detection results with the actual radar echo distributions in reflectivity, differential reflectivity, and correlation coefficient, as well as the known geographic location of the wind turbines, it is evident that the WTC-MobResNet model accurately identifies regions of Wind Turbine Clutter. Furthermore, since the Changsha radar site was not included in the training dataset, the effective detection performance on this unseen site demonstrates the model’s strong generalization capability across different radar sources.

5. Conclusions

This study proposes a deep learning model for WTC detection, named WTC-MobResNet. The model integrates the lightweight structure of MobileNet and the residual learning capability of ResNet. By leveraging multi-channel data from dual-polarization weather radar, the proposed approach achieves efficient and accurate detection of WTC. The main conclusions of this work are summarized as follows:
  • Deep learning was innovatively incorporated into WTC detection: building on existing traditional detection methods, a deep learning approach was developed to solve the problem of WTC detection in weather radar. The experimental results demonstrate that the proposed model achieves an accuracy of 98.21% on the test set, with a precision of 97.52% and a false alarm rate of 1.72%. Detection runs in seconds, enabling near real-time operation.
  • This study makes full use of multi-channel data from dual-polarization weather radar, including reflectivity, radial velocity, spectrum width, differential reflectivity, differential phase shift, and correlation coefficients. The experimental results demonstrate that dual-polarization parameters, such as differential reflectivity and the correlation coefficient, exhibit significant discriminative capabilities in WTC detection. This finding further confirms their effectiveness under complex meteorological conditions.
  • Case test results indicate that when WPs encounter large-scale rainfall, the WTC is almost completely covered by the weather process, so the radar echo data in the covered area are not contaminated by WTC; the unmodified model nevertheless produces false detections there. We therefore optimized the model using the correlation coefficient parameter: in the individual case tests, samples whose correlation coefficient channel has a block mean greater than 0.95 are excluded from detection (the correlation coefficient of meteorological echoes is greater than 0.95), achieving a more accurate detection effect (see the sketch after this list).
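A minimal sketch of this correlation coefficient gate; the channel index and the convention of zeroing the model’s output probability are our assumptions.

```python
import numpy as np

CC_CHANNEL = 5   # assumed channel order: Z, V, W, ZDR, PhiDP, CC

def cc_precipitation_gate(blocks, probs, cc_threshold=0.95):
    """Suppress detections on blocks whose mean correlation coefficient
    exceeds 0.95, i.e. blocks dominated by meteorological echo."""
    probs = probs.copy()
    for k, block in enumerate(blocks):               # block: (6, 8, 8)
        if np.nanmean(block[CC_CHANNEL]) > cc_threshold:
            probs[k] = 0.0                           # force non-WTC
    return probs
```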
Finally, the goal of this study is to provide a more effective solution for mitigating WTC interference in weather radar observations and more effective monitoring methods for practitioners in the weather radar field. Although the proposed WTC-MobResNet model has achieved notable performance improvements in WTC detection tasks, the current study does not yet consider the heterogeneous impacts of WPs under varying geographical and meteorological conditions. Future work will incorporate WTC samples from multiple radar sites across different regions and weather scenarios to further enhance the model’s generalization capability and operational applicability.

Author Contributions

Conceptualization, Y.G., Q.Z., Y.L., F.Z., Z.R. and H.W.; methodology, Y.G., Q.Z., Y.L. and Z.R.; software, Y.G., Z.R., Q.Z. and F.Z.; validation, Y.G., Q.Z., Y.L. and Z.R.; formal analysis, Y.G., Q.Z., Y.L. and H.W.; investigation, Q.Z.; resources, Q.Z. and H.W.; data curation, Q.Z.; writing—original draft preparation, Y.G. and Q.Z.; writing—review and editing, Y.G., Q.Z., Y.L. and Z.R.; visualization, Y.G. and Q.Z.; supervision, Q.Z., Y.L., F.Z. and H.W.; project administration, Q.Z. and Y.L.; funding acquisition, Q.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (U2342216 and U20B2061), the China Meteorological Administration Tornado Key Laboratory (grant TKL202309), the China Meteorological Administration projects (CMAJBGS202316), Beijige Fund of Nanjing Joint Institute for Atmospheric Sciences (BJG202501), the Joint Research Project for Meteorological Capacity Improvement (22NLTSY009), and Key Scientific Research Projects of Jiangsu Provincial Meteorological Bureau (KZ202203).

Data Availability Statement

The original contributions presented in the study are included in this article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors are grateful for the use of S-band radar data from the Jiangsu Detection Center. The authors are also grateful to the researchers whose published papers contain information used and cited in this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
WPs: Wind Parks
WTC: Wind Turbine Clutter
RCS: Radar Cross-Section
CNN: Convolutional Neural Network
FIS: Fuzzy Inference System
VCP: Volume Coverage Pattern
BCE: Binary Cross-Entropy
QPE: Quantitative Estimation of Precipitation
MSE: Mean Squared Error

Appendix A

Appendix A.1. Model Structure

Table A1. Model structure overview and configuration.
| Stage | Layer/Block Type | Configuration | Output Shape | Notes |
|---|---|---|---|---|
| Input | Radar Data Input | 6 channels, 8 × 8 | (120,660, 6, 8, 8) | Six radar features (Z, V, W, ZDR, ΦDP, CC) |
| Stage 1: Feature Extraction | Conv2D + BN + ReLU + Pooling | Conv: 3 × 3, stride = 1; Pool: 2 × 2, stride = 2 | (120,660, 32, 4, 4) | First conv adapted to 6-channel input |
| | MobileNet Block × 15 | Depthwise Conv: 3 × 3 + Pointwise Conv: 1 × 1, BN, ReLU | (120,660, 64, 4, 4) | Depthwise-separable conv preserves spatial shape |
| | Flatten | - | (120,660, 1024) | 64 × 4 × 4 = 1024 |
| Stage 2: Feature Enhancement | Fully Connected Layer | Linear layer, output units: 128 | (120,660, 128) | Feature compression before ResNet |
| | ResNet Block × 4 | Conv1 (128, 3 × 3) + Conv2 (128, 3 × 3) + skip + ReLU | (120,660, 128) | Maintains shape via skip connection |
| Stage 3: Classification | Fully Connected Output Layer | Linear layer, output units: 1 | (120,660, 1) | Binary classification output |
| | Sigmoid Activation | σ(x) = 1/(1 + exp(−x)) | (120,660, 1) | Converts output to probability |
| Output | Threshold-Based Classification | Threshold = 0.5 | (120,660, 1) → (120,660,) | Class = 1 (WTC) or Class = 0 (non-WTC echo) |

Appendix B

Appendix B.1. MobileNet Introduce

MobileNet is a lightweight convolutional neural network designed for efficient feature extraction with minimal computational overhead. Its core innovation lies in the use of depthwise-separable convolutions, which decompose standard convolutions into a depthwise convolution and a pointwise convolution operation. This significantly reduces the number of parameters and floating point operations (FLOPs) while preserving representational capacity.
Figure A1. Depthwise-separable convolution.
In our study, MobileNet is adapted to process six-channel dual-polarization radar data by modifying the first convolutional layer to accept six input channels. The network is used as a front-end extractor to capture low-level semantic features from 8 × 8 radar echo blocks. To retain spatial information critical for WTC identification, we minimize pooling operations. The MobileNet output is compressed through a fully connected layer before being passed to the ResNet-based enhancement module.

Appendix B.2. ResNet Introduction

ResNet is a deep convolutional neural network architecture renowned for its use of residual learning to address the degradation problem in very deep networks. The key innovation lies in the introduction of skip connections, which allow the network to learn identity mappings and alleviate the vanishing gradient issue, thereby enabling stable training of deeper models. Each residual block typically consists of two or more convolutional layers, with the input added directly to the output after these layers. This design allows for more efficient gradient flow and improved feature representation across layers.
Figure A2. Residual block.
In our study, ResNet is utilized as the feature enhancement module following the MobileNet backbone. After preliminary feature extraction, the compressed feature vector is reshaped into a suitable format and fed into a sequence of residual blocks. Each block refines and deepens the representation by learning discriminative patterns that distinguish Wind Turbine Clutter (WTC) from meteorological echoes. The use of four consecutive ResNet blocks enhances the network’s capacity to capture complex spatial dependencies while maintaining structural stability through residual connections. This enables more robust identification of subtle interference patterns characteristic of WTC.

References

  1. Kostis, T.G.; Goudosis, A.K.; Dagkinis, I.; Volos, C.K.; Nikitakos, N.V. Wind Turbines & Weather Radar: A Review of the Problem. In Proceedings of the 2nd International Conference on Applied and Computational Mathematics (ICACM ’13), Athens, Greece, 14–16 May 2013; Volume 13, pp. 53–59. [Google Scholar]
  2. Germann, U.; Boscacci, M.; Clementi, L.; Gabella, M.; Hering, A.; Sartori, M.; Sideris, I.V.; Calpini, B. Weather radar in complex orography. Remote Sens. 2022, 14, 503. [Google Scholar] [CrossRef]
  3. Krich, S.I.; Montanari, M.; Amendolare, V.; Berestesky, P. Wind turbine interference mitigation using a waveform diversity radar. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 805–815. [Google Scholar] [CrossRef]
  4. Ren, Z.; Zeng, Q.; He, J.; Wang, H.; Li, L. A Novel Algorithm for Identifying Radar Clutter Caused by Wind Farm Interference. In Proceedings of the 2024 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Zhuhai, China, 22–24 November 2024; Volume 69, pp. 1–5. [Google Scholar]
  5. Global Wind Energy Council. Global Wind Report. 2024. Available online: https://www.gwec.net/reports (accessed on 9 June 2024).
  6. Lepetit, T.; Simon, J.; Petex, J.F.; Cheraly, A.; Marcellin, J.P. Radar cross-section of a wind turbine: Application to weather radars. In Proceedings of the 2019 13th European Conference on Antennas and Propagation (EuCAP), Krakow, Poland, 31 March–5 April 2019; Volume 14, pp. 1–3. [Google Scholar]
  7. Mazel, D.; Egan, M. Wind turbine radar interference mitigation: Factors to evaluate future radar-based solutions. In Proceedings of the 2024 Integrated Communications, Navigation and Surveillance Conference (ICNS), Herndon, VA, USA, 23–25 April 2024; Volume 19, pp. 1–9. [Google Scholar]
  8. Xing, B.; Mu, J. A method for wind turbine clutter recognition and mitigation. In Proceedings of the IET International Radar Conference (IET IRC 2020), Online, 4–6 November 2020; Volume 9, pp. 975–979. [Google Scholar]
  9. Hood, K.; Torres, S.; Palmer, R. Automatic detection of wind turbine clutter for weather radars. J. Atmos. Ocean. Technol. 2010, 27, 1868–1880. [Google Scholar] [CrossRef]
  10. Gallardo-Hernando, B.; Pérez-Martínez, F.; Aguado-Encabo, F. Detection and mitigation of wind turbine clutter in C-band meteorological radar. IET Radar Sonar Navig. 2010, 4, 520–527. [Google Scholar] [CrossRef]
  11. Cheong, B.L.; Palmer, R.; Torres, S. Automatic wind turbine identification using level-II data. In Proceedings of the 2011 IEEE RadarCon (RADAR), Kansas City, MO, USA, 23–27 May 2011; Volume 15, pp. 271–275. [Google Scholar]
  12. He, W.; Guo, S.; Wang, X.; Wu, R. Weather Radar Wind Farms Clutters Detection and Identification Method Based on Level-II Data and Fuzzy Logic Inference. J. Electron. Inf. Technol. 2016, 38, 3252–3260. [Google Scholar]
  13. Su, T.; Guo, J. Identification and Removal of Wind Turbine Clutter in the New Generation Weather Radar. J. Ocean. Meteorol. Res. 2023, 43, 45–58. [Google Scholar]
  14. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  15. Saltikoff, E.; Friedrich, K.; Soderholm, J.; Lengfeld, K.; Nelson, B.; Becker, A.; Hollmann, R.; Urban, B.; Heistermann, M.; Tassone, C. An overview of using weather radar for climatological studies: Successes, challenges, and potential. Bull. Am. Meteorol. Soc. 2019, 100, 1739–1752. [Google Scholar] [CrossRef]
  16. Chua, L.O.; Yang, L. Cellular neural networks: Theory. IEEE Trans. Circuits Syst. 1988, 35, 1257–1272. [Google Scholar] [CrossRef]
  17. Kumar, B.; Haral, H.; Kalapureddy, M.; Singh, B.B.; Yadav, S.; Chattopadhyay, R.; Pattanaik, D.; Rao, S.A.; Mohapatra, M. Utilizing deep learning for near real-time rainfall forecasting based on radar data. Phys. Chem. Earth Parts A/B/C 2024, 135, 103600. [Google Scholar] [CrossRef]
  18. Tang, Y.; Wang, Y.; Guo, J.; Tu, Z.; Han, K.; Hu, H.; Tao, D. A survey on transformer compression. arXiv 2024, arXiv:2402.05964. [Google Scholar]
  19. Shaw, P.; Uszkoreit, J.; Vaswani, A. Self-attention with relative position representations. arXiv 2018, arXiv:1803.02155. [Google Scholar]
  20. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  21. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  22. Xu, Z.; Zhang, H.; Wang, Y.; Wang, X.; Xue, S.; Liu, W. Dynamic detection of offshore wind turbines by spatial machine learning from spaceborne synthetic aperture radar imagery. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 1674–1686. [Google Scholar] [CrossRef]
  23. Zhang, Q.; Wang, X.; Liang, H.; Li, R.; Dai, Y.; Liu, J.; Li, F.; Liu, X. Assumption and Discussion of Volume Scanning Strategy of New Generation Weather Radar in China I. In Proceedings of the 2021 CIE International Conference on Radar (Radar), Haikou, China, 15–19 December 2021; pp. 1153–1158. [Google Scholar]
  24. National Research Council; Division on Earth and Life Studies; Board on Atmospheric Sciences and Climate; Committee on the Assessment of the National Weather Service’s Modernization Program. The National Weather Service Modernization and Associated Restructuring: A Retrospective Assessment; National Academies Press: Washington, DC, USA, 2012. [Google Scholar]
  25. Su, T.; Ge, J.; Zhang, H. Review of the Development of Dual-Polarization Weather Radar Systems in China. J. Ocean. Meteorol. Res. 2018, 38, 31–33. [Google Scholar]
  26. Yu, J.; Cui, Z.; Li, Z.; Liao, X.; Du, Y. Research on image classification algorithms based on deep learning. In Proceedings of the 12th International Scientific and Practical Conference “Modern Thoughts on the Development of Science: Ideas, Technologies and Theories”, Amsterdam, The Netherlands, 26–29 March 2024; International Science Group: New York, NY, USA, 2024. 336p. [Google Scholar]
  27. Hasanin, T.; Khoshgoftaar, T. The effects of random undersampling with simulated class imbalance for big data. In Proceedings of the 2018 IEEE international conference on information reuse and integration (IRI), Salt Lake City, UT, USA, 6–9 July 2018; pp. 70–79. [Google Scholar]
  28. Maharana, K.; Mondal, S.; Nemade, B. A review: Data pre-processing and data augmentation techniques. Glob. Transit. Proc. 2022, 3, 91–99. [Google Scholar] [CrossRef]
  29. Wu, D.; Wang, Y.; Xia, S.T.; Bailey, J.; Ma, X. Skip connections matter: On the transferability of adversarial examples generated with resnets. arXiv 2020, arXiv:2002.05990. [Google Scholar]
  30. Li, Q.; Jia, X.; Zhou, J.; Shen, L.; Duan, J. Rediscovering bce loss for uniform classification. arXiv 2024, arXiv:2403.07289. [Google Scholar]
  31. Ying, X. An overview of overfitting and its solutions. J. Phys. Conf. Ser. 2019, 1168, 022022. [Google Scholar] [CrossRef]
  32. Loshchilov, I.; Hutter, F. Decoupled weight decay regularization. arXiv 2017, arXiv:1711.05101. [Google Scholar]
  33. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856. [Google Scholar]
  34. Hossin, M.; Sulaiman, M.N. A review on evaluation metrics for data classification evaluations. Int. J. Data Min. Knowl. Manag. Process 2015, 5, 1. [Google Scholar]
  35. Hall, W.; Rico-Ramirez, M.A.; Krämer, S. Offshore wind turbine clutter characteristics and identification in operational C-band weather radar measurements. Q. J. R. Meteorol. Soc. 2017, 143, 720–730. [Google Scholar] [CrossRef]
Figure 1. Nantong Weather Radar and Rudong Offshore WPs (red marker—Nantong Weather Radar; orange rectangle—Rudong Offshore WP).
Figure 2. WTC interference at different elevation angles (red fan shape—WTC).
Figure 3. WTC dataset construction.
Figure 4. Model architecture.
Figure 5. Model performance. (a) Training and validation loss of WTC-MobResNet. (b) Training and validation loss of ResNet. (c) Training and validation loss of ShuffleNet. (d) Training and validation accuracy of WTC-MobResNet. (e) Training and validation accuracy of ResNet. (f) Training and validation accuracy of ShuffleNet.
Figure 6. Model index analysis. (a) ACC; (b) PRE; (c) POD; (d) F1 score; (e) CSI; (f) FAR.
Figure 7. Comparison of three model detection results. (a) Real radar echo map; (b) WTC-MobResNet detection results; (c) ResNet detection results; (d) ShuffleNet detection results; (e) WTC-MobResNet detection results marked in the radar echo image; (f) ResNet detection results marked in the radar echo image; (g) ShuffleNet detection results marked in the radar echo image (orange box—echo area displayed in (e–g)).
Figure 8. WTC-MobResNet detection under clear sky conditions. (a) WTC-MobResNet detection results. (b) True reflectivity echo map. (c) True radial velocity echo map. (d) True spectrum width echo map. (e) True differential reflectivity echo map. (f) True differential phase shift echo map. (g) True correlation coefficient echo map (orange box—echo area shown in the right sub-image; black circle—WTC).
Figure 9. WTC-MobResNet detection under light rain conditions. (a) WTC-MobResNet detection results. (b) True reflectivity echo map. (c) True radial velocity echo map. (d) True spectrum width echo map. (e) True differential reflectivity echo map. (f) True differential phase shift echo map. (g) True correlation coefficient echo map (orange box—echo area shown in the right sub-image; black circle—WTC).
Figure 10. WTC-MobResNet detection under moderate rain conditions. (a) WTC-MobResNet detection results. (b) True reflectivity echo map. (c) True radial velocity echo map. (d) True spectrum width echo map. (e) True differential reflectivity echo map. (f) True differential phase shift echo map. (g) True correlation coefficient echo map (orange box—echo area shown in the right sub-image; black circle—WTC).
Figure 11. Comparison of WTC-MobResNet before and after optimization. (a) WTC-MobResNet detection results before optimization. (b) Reflectivity echo marker map before optimization. (c) WTC-MobResNet detection results after optimization. (d) Reflectivity echo marker map after optimization (orange box—echo area shown in the right sub-image).
Figure 12. WTC-MobResNet detection results under heavy rain conditions. (a) WTC-MobResNet detection results. (b) True reflectivity echo map. (c) True radial velocity echo map. (d) True spectrum width echo map. (e) True differential reflectivity echo map. (f) True differential phase shift echo map. (g) True correlation coefficient echo map (orange box—echo area shown in the right sub-image; black circle—WTC).
Figure 13. WTC detection results of WTC-MobResNet for the Changsha radar at 8:03. (a) WTC-MobResNet detection results. (b) True reflectivity echo map. (c) True radial velocity echo map. (d) True spectrum width echo map. (e) True differential reflectivity echo map. (f) True differential phase shift echo map. (g) True correlation coefficient echo map (orange box—echo area shown in the right sub-image; black circle—WTC).
Figure 14. WTC detection results of WTC-MobResNet for the Changsha radar at 8:25. (a) WTC-MobResNet detection results. (b) True reflectivity echo map. (c) True radial velocity echo map. (d) True spectrum width echo map. (e) True differential reflectivity echo map. (f) True differential phase shift echo map. (g) True correlation coefficient echo map (orange box—echo area shown in the right sub-image; black circle—WTC).
Table 1. Weather cases used in the dataset.

| Date | Time (UTC+8) | Radar Station | Station City | Weather Conditions |
|---|---|---|---|---|
| 20230415 | 17:21 | Z9513 | Nantong City | Clear sky |
| 20230415 | 17:39 | Z9513 | Nantong City | Clear sky |
| 20230415 | 18:15 | Z9513 | Nantong City | Clear sky |
| 20230415 | 18:33 | Z9513 | Nantong City | Light rain |
| 20230415 | 18:50 | Z9513 | Nantong City | Light rain |
| 20230415 | 20:27 | Z9513 | Nantong City | Moderate rain |
| 20230415 | 21:33 | Z9513 | Nantong City | Heavy rain |
| 20230610 | 08:22 | Z9513 | Nantong City | Clear sky |
| 20230610 | 21:10 | Z9513 | Nantong City | Moderate rain |
| 20230716 | 18:50 | Z9513 | Nantong City | Moderate rain |
| 20230716 | 21:00 | Z9513 | Nantong City | Moderate rain |
| 20230716 | 23:05 | Z9513 | Nantong City | Light rain |
Table 2. Sample statistics.

| Date | Time (UTC+8) | Positive Samples | Negative Samples | Total |
|---|---|---|---|---|
| 20230415 | 17:21 | 4897 | 4897 | 9794 |
| 20230415 | 17:39 | 4956 | 4956 | 9912 |
| 20230415 | 18:15 | 4730 | 4730 | 9460 |
| 20230415 | 18:33 | 5032 | 5032 | 10,064 |
| 20230415 | 18:50 | 4946 | 4946 | 9892 |
| 20230415 | 20:27 | 5213 | 5213 | 10,426 |
| 20230415 | 21:33 | 5062 | 5062 | 10,124 |
| 20230610 | 08:22 | 5189 | 5189 | 10,378 |
| 20230610 | 21:10 | 5048 | 5048 | 10,096 |
| 20230716 | 18:50 | 4876 | 4876 | 9752 |
| 20230716 | 21:00 | 5134 | 5134 | 10,268 |
| 20230716 | 23:05 | 5247 | 5247 | 10,494 |
Table 3. Classification indicator definitions.

| Predicted Class | True Class: 1 (Yes WTC) | True Class: 0 (No WTC) |
|---|---|---|
| 1 (Yes WTC) | TP (True Positives) | FP (False Positives) |
| 0 (No WTC) | FN (False Negatives) | TN (True Negatives) |
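The scores reported in Tables 5 and 6 follow directly from these confusion-matrix counts. The snippet below is a minimal Python sketch (not the authors' implementation) of the six metrics, assuming the standard radar-verification definitions; in particular, FAR is assumed here to be the false alarm ratio FP/(TP + FP), and the example counts passed to the function are hypothetical.

```python
# Minimal sketch of the verification metrics in Tables 5 and 6, computed from
# the confusion-matrix counts of Table 3. Definitions are the standard
# radar-verification conventions; FAR as FP/(TP+FP) is an assumption.

def wtc_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute ACC, PRE, POD, F1, CSI, and FAR from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)  # overall accuracy
    pre = tp / (tp + fp)                   # precision
    pod = tp / (tp + fn)                   # probability of detection (recall)
    f1 = 2 * pre * pod / (pre + pod)       # harmonic mean of PRE and POD
    csi = tp / (tp + fp + fn)              # critical success index
    far = fp / (tp + fp)                   # false alarm ratio (assumed convention)
    return {"ACC": acc, "PRE": pre, "POD": pod, "F1": f1, "CSI": csi, "FAR": far}

# Hypothetical counts, for illustration only:
print(wtc_metrics(tp=4897, fp=120, fn=50, tn=4800))
```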
Table 4. Model performance analysis. “↑” means that a higher score is better, while “↓” means that a lower score is better.

| Model | Training Loss ↓ | Validation Loss ↓ | Training Accuracy ↑ | Validation Accuracy ↑ |
|---|---|---|---|---|
| WTC-MobResNet | 0.0342 | 0.0346 | 0.9812 | 0.9800 |
| ResNet | 0.0462 | 0.1306 | 0.9756 | 0.9482 |
| ShuffleNet | 0.0658 | 0.1002 | 0.9674 | 0.9591 |
Table 5. Model performance on the test set.

| Model | ACC ↑ | PRE ↑ | POD ↑ | F1-Score ↑ | CSI ↑ | FAR ↓ |
|---|---|---|---|---|---|---|
| WTC-MobResNet | 0.9821 | 0.9752 | 0.9899 | 0.9825 | 0.9656 | 0.0172 |
| ResNet | 0.9545 | 0.9425 | 0.9692 | 0.9557 | 0.9151 | 0.0574 |
| ShuffleNet | 0.9606 | 0.9547 | 0.9792 | 0.9668 | 0.9358 | 0.0452 |
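As a consistency check on Table 5, note that F1 is the harmonic mean of PRE and POD, and that CSI can be expressed in terms of PRE and POD alone (dividing the numerator and denominator of TP/(TP + FP + FN) by TP). For the WTC-MobResNet row:

\[
\mathrm{F1} = \frac{2\,\mathrm{PRE}\cdot\mathrm{POD}}{\mathrm{PRE}+\mathrm{POD}} = \frac{2(0.9752)(0.9899)}{0.9752+0.9899} \approx 0.9825,
\qquad
\mathrm{CSI} = \left(\frac{1}{\mathrm{PRE}}+\frac{1}{\mathrm{POD}}-1\right)^{-1} \approx 0.9656.
\]

Both values reproduce the tabulated entries, supporting the internal consistency of the reported metrics.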
Table 6. Evaluation metrics of WTC-MobResNet under Z9513 radar cases.

| Radar Case | Weather Condition | ACC | PRE | POD | F1 Score | CSI | FAR |
|---|---|---|---|---|---|---|---|
| 2023.6.10 12:59 | Clear Sky | 0.9842 | 0.9731 | 0.9874 | 0.9857 | 0.9593 | 0.0187 |
| 2021.10.19 8:05 | Light Rain | 0.9798 | 0.9683 | 0.9837 | 0.9809 | 0.9579 | 0.0241 |
| 2023.4.15 22:21 | Moderate Rain | 0.9493 | 0.9391 | 0.9583 | 0.9523 | 0.9178 | 0.0543 |
| 2023.4.15 23:39 | Heavy Rain | 0.9568 | 0.9454 | 0.9626 | 0.9583 | 0.9257 | 0.0478 |
