Explainable Machine Learning for Tower-Radar Monitoring of Wind Turbine Blades: Fine-Grained Blade Recognition Under Changing Operational Conditions
Abstract
1. Introduction
1.1. Motivation for Explainable Machine Learning
1.2. Perturbations, Model Robustness and Generalization
1.3. Related Works and Novelty
2. Materials and Methods
2.1. Data Acquisition
2.2. Properties of the Dataset
2.2.1. Problem Geometry
2.2.2. Fundamentals of Frequency-Modulated Continuous Waves (FMCW)
2.2.3. Scattering
2.3. Convolutional Classifiers
2.4. Visual Explanations
- Identify the target layer (usually the final convolutional layer), which should both capture high-level features and retain spatial information,
- compute the gradients of the score z for the target class with respect to the k-th feature map of layer L,
- average the gradients to obtain the importance of each feature map (the normalization constant is omitted for simplicity),
- weight the feature maps by these importances to create a coarse localization map highlighting influential regions,
- upsample the heatmap for overlay onto the original radargram or for combination with the results of Guided Backpropagation.
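The weighting stage of the steps above can be sketched in a few lines. The following is a minimal NumPy illustration; the function name `grad_cam_map` and the array shapes are our assumptions for exposition, not the authors' implementation:

```python
import numpy as np

def grad_cam_map(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Coarse Grad-CAM localization map.

    feature_maps: activations A^k of the target layer, shape (K, H, W).
    gradients:    dz/dA^k for the target-class score z, same shape.
    """
    # Global-average-pool the gradients: one importance weight per feature map
    alpha = gradients.mean(axis=(1, 2))              # shape (K,)
    # Weighted combination of the feature maps
    cam = np.tensordot(alpha, feature_maps, axes=1)  # shape (H, W)
    # ReLU keeps only regions with a positive influence on the class score
    return np.maximum(cam, 0.0)
```

The resulting (H, W) map would then be upsampled (e.g., bilinearly) to the radargram resolution before overlay.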
3. Results and Discussion
3.1. Classifier Robustness Across Turbine Locations
3.1.1. Identically Distributed Train and Test Data
3.1.2. Re-Using Trained Classifiers for Other Similar Turbines
3.2. Classifier Robustness Across EOCs
3.2.1. Rotation Speed
3.2.2. Nacelle Orientation and Pitch Angle
3.3. Visual Explanations of Radargrams
3.3.1. Custom CNN on Original Radargrams
3.3.2. ResNet-18 on Original Radargrams
3.3.3. Custom CNN and ResNet-18 on Cropped Radargrams
3.3.4. ResNet-18 on Augmented Radargrams
3.4. Impact of Weather Conditions on Radargrams
3.5. “Structural Fingerprint” Hypothesis
4. Conclusions
- Thanks to the visual explanations, the high-intensity areas in the radargrams that are decisive for blade classification were identified. In contrast, the remaining background of the crescent turned out to be obstructive in some instances. A foreground-detection step could therefore be introduced, leading to background removal or context-based adaptive cropping to aid blade recognition.
- In the current work, perturbations were investigated, and the close relationship between image transformations was pointed out. Augmentations help to enlarge or balance datasets; tower-radar monitoring, as a recent computer-vision topic, will likely also benefit from imagery of related domains as well as from synthetic data [12].
- The intricate but isolated task of recognizing individual rotor blades was tackled. Practical tower-radar monitoring systems nonetheless face multi-task challenges. The growth of wind energy often conflicts with the conservation of wildlife: to protect species, wind turbines are not approved at all in sensitive locations, or existing ones have to be temporarily switched off at designated times. To resolve this green-green dilemma (climate versus wildlife protection), novel detection capabilities are being developed for birds and bats in the vicinity of wind parks [71,72].
- By addressing the impact of changing EOCs, the fundamentally dynamic setting of tower-radar monitoring was acknowledged. In addition, the continual-learning challenge should be treated, in which the classifier adapts to new and changing data as they emerge during monitoring. Here, only weak or noisy labels are available [73], and regularization [74], for example, must strike a balance between preservation and flexibility.
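The preservation-versus-flexibility trade-off mentioned in the last point can be illustrated by a quadratic regularizer in the spirit of elastic weight consolidation. This is a minimal NumPy sketch under our own naming; the penalty weight `lam` and the per-parameter importances are illustrative assumptions, not values from this work:

```python
import numpy as np

def consolidated_loss(task_loss: float,
                      theta: np.ndarray,
                      theta_old: np.ndarray,
                      importance: np.ndarray,
                      lam: float = 1.0) -> float:
    """Task loss plus a quadratic penalty anchoring parameters that were
    important under previously monitored conditions.

    importance: per-parameter importance (e.g., a Fisher-information
    estimate); large values favor preservation, small values flexibility.
    """
    penalty = 0.5 * lam * np.sum(importance * (theta - theta_old) ** 2)
    return task_loss + penalty
```

Tuning `lam` moves the classifier along the preservation-flexibility axis: `lam = 0` recovers plain fine-tuning on the new monitoring data, while large `lam` freezes the parameters deemed important for earlier conditions.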
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| EOC | Environmental and operational condition |
| FMCW | Frequency-modulated continuous wave |
| TRM | Tower-radar monitoring |
| ML | Machine learning |
| AI | Artificial intelligence |
| LIME | Local interpretable model-agnostic explanations |
| UAV | Unmanned aerial vehicle |
| CNN | Convolutional neural network |
| SAR | Synthetic aperture radar |
| SCADA | Supervisory control and data acquisition |
| FFT | Fast Fourier transform |
| ReLU | Rectified linear unit |
| MaxPool | Maximum pooling |
| ROI | Region of interest |
| SNR | Signal-to-noise ratio |
References
- Niemi, J.; Tanttu, J.T. Deep learning-based automatic bird identification system for offshore wind farms. Wind Energy 2020, 23, 1394–1407. [Google Scholar] [CrossRef]
- Rosa, I.M.D.; Marques, A.T.; Palminha, G.; Costa, H.; Mascarenhas, M.; Fonseca, C.; Bernardino, J. Classification success of six machine learning algorithms in radar ornithology. Ibis 2016, 158, 28–42. [Google Scholar] [CrossRef]
- Zhao, H.; Chen, G.; Hong, H.; Zhu, X. Remote structural health monitoring for industrial wind turbines using short-range Doppler radar. IEEE Trans. Instrum. Meas. 2021, 70, 8002609. [Google Scholar] [CrossRef]
- The European Parliament and the Council of the European Union. Regulation Laying Down Harmonised Rules on Artificial Intelligence. 2024. Available online: http://data.europa.eu/eli/reg/2024/1689/oj (accessed on 11 August 2025).
- Samek, W.; Müller, K.R. Towards explainable artificial intelligence. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Springer International Publishing: Cham, Switzerland, 2019; pp. 5–22. [Google Scholar] [CrossRef]
- Meyers, R.A. Mathematics of Complexity and Dynamical Systems; Springer Science & Business Media: New York, NY, USA, 2011. [Google Scholar] [CrossRef]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should i trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar] [CrossRef]
- Truong, J.; Rudolph, M.; Yokoyama, N.H.; Chernova, S.; Batra, D.; Rai, A. Rethinking sim2real: Lower fidelity simulation leads to higher sim2real transfer in navigation. In Proceedings of the 6th Conference on Robot Learning; PMLR: New York, NY, USA, 2023; Volume 205, pp. 859–870. [Google Scholar]
- Gatys, L.A.; Ecker, A.S.; Bethge, M. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2414–2423. [Google Scholar] [CrossRef]
- Jing, Y.; Yang, Y.; Feng, Z.; Ye, J.; Yu, Y.; Song, M. Neural style transfer: A review. IEEE Trans. Vis. Comput. Graph. 2019, 26, 3365–3385. [Google Scholar] [CrossRef]
- Mälzer, M.; Beck, S.; Alipek, S.; Reichart, E.; Moll, J.; Krozer, V.; Oikonomopoulos, C.; Kassner, J.; Hägelen, M.; Heinecke, T.; et al. Radar-based structural monitoring of wind turbines blades: Field results from two operational wind turbines. In Proceedings of the 14th International Workshop on Structural Health Monitoring, Stanford, CA, USA, 12–14 September 2023. [Google Scholar] [CrossRef]
- Kexel, C. SiWiRoRa—Simulated Wind-Turbine Rotor-Blade Radargrams; Zenodo: Geneva, Switzerland, 2024. [Google Scholar] [CrossRef]
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199. [Google Scholar] [CrossRef]
- Mumuni, A.; Mumuni, F. Data augmentation: A comprehensive survey of modern approaches. Array 2022, 16, 100258. [Google Scholar] [CrossRef]
- Buslaev, A.; Iglovikov, V.I.; Khvedchenya, E.; Parinov, A.; Druzhinin, M.; Kalinin, A.A. Albumentations: Fast Image Augmentation Library. 2024. Available online: https://albumentations.ai (accessed on 11 August 2025).
- Gohar, I.; Yew, W.K.; Halimi, A.; See, J. Review of state-of-the-art surface defect detection on wind turbine blades through aerial imagery: Challenges and recommendations. Eng. Appl. Artif. Intell. 2025, 144, 109970. [Google Scholar] [CrossRef]
- Liu, H.; Liu, S.; Liu, Z.; Niu, B.; Xie, J.; Luo, C.; Shi, Z. Wind turbine blade surface defect detection model based on improved you only look once version 10 small and integrated compression. Eng. Appl. Artif. Intell. 2025, 159, 111645. [Google Scholar] [CrossRef]
- Jia, X.; Chen, X. AI-based optical-thermal video data fusion for near real-time blade segmentation in normal wind turbine operation. Eng. Appl. Artif. Intell. 2024, 127, 107325. [Google Scholar] [CrossRef]
- Streser, E.; Alipek, S.; Rao, M.; Simon, J.; Moll, J.; Kraemer, P.; Krozer, V. Radar-based damage detection in a wind turbine blade using convolutional neural networks: A proof-of-concept under fatigue loading. Sensors 2025, 25, 3337. [Google Scholar] [CrossRef] [PubMed]
- Moll, J.; Arnold, P.; Mälzer, M.; Krozer, V.; Pozdniakov, D.; Salman, R.; Rediske, S.; Scholz, M.; Friedmann, H.; Nuber, A. Radar-based structural health monitoring of wind turbine blades: The case of damage detection. Struct. Health Monit. 2018, 17, 815–822. [Google Scholar] [CrossRef]
- Tang, W.; Blanche, J.; Mitchell, D.; Harper, S.; Flynn, D. Characterisation of composite materials for wind turbines using frequency modulated continuous wave sensing. J. Compos. Sci. 2023, 7, 75. [Google Scholar] [CrossRef]
- Alipek, S.; Mälzer, M.; Beck, S.; Kexel, C.; Moll, J.; Krozer, V.; Kassner, J.; Heinecke, T.; Rose, J.; Berger, M. Potential and limitations of anomaly detection via tower-radar monitoring of wind turbine blades in regular operation with convolutional networks. e-J. Nondestruct. Test. 2024, 29, 1–11. [Google Scholar] [CrossRef]
- Tu, W.; Deng, W.; Gedeon, T. A closer look at the robustness of contrastive language-image pre-training (clip). Adv. Neural Inf. Process. Syst. 2023, 36, 13678–13691. [Google Scholar]
- Hendrycks, D.; Dietterich, T. Benchmarking neural network robustness to common corruptions and perturbations. arXiv 2019, arXiv:1903.12261. [Google Scholar] [CrossRef]
- Mintun, E.; Kirillov, A.; Xie, S. On interaction between augmentations and corruptions in natural corruption robustness. Adv. Neural Inf. Process. Syst. 2021, 34, 3571–3583. [Google Scholar]
- Oveis, A.H.; Giusti, E.; Ghio, S.; Meucci, G.; Martorella, M. LIME-Assisted automatic target recognition with SAR images: Toward incremental learning and explainability. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 9175–9192. [Google Scholar] [CrossRef]
- Qiu, Z.; Rivaz, H.; Xiao, Y. Is visual explanation with Grad-CAM more reliable for deeper neural networks? A case study with automatic pneumothorax diagnosis. In Machine Learning in Medical Imaging; Springer: Cham, Switzerland, 2023; pp. 224–233. [Google Scholar] [CrossRef]
- Galli, A.; Marrone, S.; Moscato, V.; Sansone, C. Reliability of explainable artificial intelligence in adversarial perturbation scenarios. In Pattern Recognition. ICPR International Workshops and Challenges; Springer: Cham, Switzerland, 2021; pp. 243–256. [Google Scholar] [CrossRef]
- Czerkawski, M.; Clemente, C.; Michie, C.; Andonovic, I.; Tachtatzis, C. Robustness of deep neural networks for micro-Doppler radar classification. In Proceedings of the 2022 23rd International Radar Symposium (IRS), Gdansk, Poland, 12–14 September 2022; pp. 480–485. [Google Scholar] [CrossRef]
- Chen, T.; Zhang, L.; Guo, W.; Zhang, Z.; Datcu, M. Analyzing the Adversarial Robustness and Interpretability of Deep SAR Classification Models: A Comprehensive Examination of Their Reliability. Remote Sens. 2025, 17, 1943. [Google Scholar] [CrossRef]
- Oveis, A.H.; Giusti, E.; Cantelli-Forti, A.; Martorella, M. XAI-Driven Resilient Image Classification in the Presence of Adversarial Perturbations. IET Radar Sonar Navig. 2025, 19, e70041. [Google Scholar] [CrossRef]
- Tien, T.B.; Quang, T.V.; Ngoc, L.N.; Ngoc, H.T. Enhancing time series data classification for structural damage detection through out-of-distribution representation learning. Structures 2024, 65, 106766. [Google Scholar] [CrossRef]
- Shu, J.; Ding, W.; Zhang, J.; Lin, F.; Duan, Y. Continual-learning-based framework for structural damage recognition. Struct. Control Health Monit. 2022, 29, e3093. [Google Scholar] [CrossRef]
- Golubović, D.; Erić, M.; Vukmirović, N. High-resolution Doppler and azimuth estimation and target detection in HFSWR: Experimental study. Sensors 2022, 22, 3558. [Google Scholar] [CrossRef]
- Walenczykowska, M.; Kawalec, A. Application of continuous wavelet transform and artificial naural network for automatic radar signal recognition. Sensors 2022, 22, 7434. [Google Scholar] [CrossRef]
- Dai, W.; Dai, C.; Qu, S.; Li, J.; Das, S. Very deep convolutional neural networks for raw waveforms. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 421–425. [Google Scholar] [CrossRef]
- Manwell, J.F.; McGowan, J.G.; Rogers, A.L. Wind Energy Explained: Theory, Design and Application; John Wiley & Sons: Hoboken, NJ, USA, 2010. [Google Scholar] [CrossRef]
- Kim, B.S.; Jin, Y.; Lee, J.; Kim, S. FMCW radar estimation algorithm with high resolution and low complexity based on reduced search area. Sensors 2022, 22, 1202. [Google Scholar] [CrossRef]
- Moll, J.; Maetz, T.; Maelzer, M.; Krozer, V.; Mischke, K.; Krause, S.; Bagemiel, O.; Nuber, A.; Kremling, S.; Kurin, T.; et al. Radar-based monitoring of glass fiber reinforced composites during fatigue testing. Struct. Control Health Monit. 2021, 28, e2812. [Google Scholar] [CrossRef]
- Cortés, E.; Sánchez, F.; O’Carroll, A.; Madramany, B.; Hardiman, M.; Young, T.M. On the material characterisation of wind turbine blade coatings: The effect of interphase coating–laminate adhesion on rain erosion performance. Materials 2017, 10, 1146. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
- Wightman, R. PyTorch Image Models. 2019. Available online: https://github.com/rwightman/pytorch-image-models (accessed on 11 August 2025).
- Masters, D.; Luschi, C. Revisiting small batch training for deep neural networks. arXiv 2018, arXiv:1804.07612. [Google Scholar] [CrossRef]
- Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Computer Vision—ECCV 2014; Springer: Cham, Switzerland, 2014; pp. 818–833. [Google Scholar] [CrossRef]
- Springenberg, J.T.; Dosovitskiy, A.; Brox, T.; Riedmiller, M. Striving for simplicity: The all convolutional net. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar] [CrossRef]
- Sundararajan, M.; Taly, A.; Yan, Q. Axiomatic Attribution for Deep Networks. arXiv 2017. [Google Scholar] [CrossRef]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar] [CrossRef]
- Meta. Captum—Model Interpretability for PyTorch. 2020. Available online: https://captum.ai/api/guided_grad_cam.html (accessed on 11 August 2025).
- Dubey, S.R.; Singh, S.K.; Chaudhuri, B.B. Activation functions in deep learning: A comprehensive survey and benchmark. Neurocomputing 2022, 503, 92–108. [Google Scholar] [CrossRef]
- Agarap, A.F. Deep Learning using Rectified Linear Units (ReLU). arXiv 2019, arXiv:1803.08375. [Google Scholar] [CrossRef]
- Farrar, C.; Dervilis, N.; Worden, K. The past, present and future of structural health monitoring: An overview of three ages. Strain 2025, 61, e12495. [Google Scholar] [CrossRef]
- Chen, Y.; Griffith, D.T. Blade mass imbalance identification and estimation for three-bladed wind turbine rotor based on modal analysis. Mech. Syst. Signal Process. 2023, 197, 110341. [Google Scholar] [CrossRef]
- Krogh, C.; Hermansen, S.M.; Lund, E.; Kepler, J.; Jakobsen, J. A matter of course: Generating optimal manufacturing instructions from a structural layup plan of a wind turbine blade. Compos. Part A Appl. Sci. Manuf. 2023, 172, 107599. [Google Scholar] [CrossRef]
- Tong, K.; Wu, Y. Deep learning-based detection from the perspective of small or tiny objects: A survey. Image Vis. Comput. 2022, 123, 104471. [Google Scholar] [CrossRef]
- Talebi, H.; Milanfar, P. Learning to resize images for computer vision tasks. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 497–506. [Google Scholar] [CrossRef]
- Montanari, A.; Zhong, Y. The interpolation phase transition in neural networks: Memorization and generalization under lazy training. Ann. Stat. 2022, 50, 2816–2847. [Google Scholar] [CrossRef]
- Mallinar, N.R.; Simon, J.B.; Abedsoltan, A.; Pandit, P.; Belkin, M.; Nakkiran, P. Benign, Tempered, or Catastrophic: Toward a Refined Taxonomy of Overfitting. In Proceedings of the Advances in Neural Information Processing Systems, New Orleans, LA, USA, 28 November–9 December 2022. [Google Scholar]
- Kexel, C.; Mueller, H.; Alipek, S.; Moll, J. Mast-bound and Too Curious: Overcoming Drift in Wind-Tower Radar for Blade Monitoring Using Pre-training, Augmentation and Weight Consolidation Due to Correlated Conditions. In Proceedings of the 2nd Latin-American Workshop on Structural Health Monitoring (LATAM-SHM), Santiago, Chile, 7–9 January 2026. [Google Scholar] [CrossRef]
- Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? Adv. Neural Inf. Process. Syst. 2014, 27, 3320–3328. [Google Scholar] [CrossRef]
- Ardalan, Z.; Subbian, V. Transfer learning approaches for neuroimaging analysis: A scoping review. Front. Artif. Intell. 2022, 5, 780405. [Google Scholar] [CrossRef] [PubMed]
- Kornblith, S.; Shlens, J.; Le, Q.V. Do better imagenet models transfer better? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2661–2671. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar] [CrossRef]
- Yang, R.; Wang, R.; Deng, Y.; Jia, X.; Zhang, H. Rethinking the random cropping data augmentation method used in the training of CNN-based SAR image ship detector. Remote Sens. 2020, 13, 34. [Google Scholar] [CrossRef]
- Liu, L.; Ouyang, W.; Wang, X.; Fieguth, P.; Chen, J.; Liu, X.; Pietikäinen, M. Deep learning for generic object detection: A survey. Int. J. Comput. Vis. 2020, 128, 261–318. [Google Scholar] [CrossRef]
- Rossi, L.; Karimi, A.; Prati, A. A novel region of interest extraction layer for instance segmentation. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 2203–2209. [Google Scholar] [CrossRef]
- Buslaev, A.; Iglovikov, V.I.; Khvedchenya, E.; Parinov, A.; Druzhinin, M.; Kalinin, A.A. Albumentations: Fast and flexible image augmentations. Information 2020, 11, 125. [Google Scholar] [CrossRef]
- Ali, Z.; Elsayed, M.; Tiwari, G.; Ahmad, M.; Le Kernec, J.; Heidari, H.; Gupta, S. Impact of receiver thermal noise and PLL RMS jitter in radar measurements. IEEE Trans. Instrum. Meas. 2024, 73, 2002710. [Google Scholar] [CrossRef]
- Zadeh, A.T.; Mälzer, M.; Simon, J.; Beck, S.; Moll, J.; Krozer, V. Range–Doppler analysis for rain detection at Ka-band: Numerical and experimental results from laboratory and field measurements. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1027–1033. [Google Scholar] [CrossRef]
- Mälzer, M.; Taremi Zadeh, A.; Beck, S.; Moll, J.; Krozer, V. Towards radar barriers for animal fatality detection at wind turbines: Numerical and preliminary experimental results. IET Radar Sonar Navig. 2020, 14, 1767–1772. [Google Scholar] [CrossRef]
- Zadeh, A.T.; Diyap, M.; Moll, J.; Krozer, V. Towards localization and classification of birds and bats in windparks using multiple FMCW-radars at Ka-band. Prog. Electromagn. Res. M 2022, 109, 1–12. [Google Scholar] [CrossRef]
- Verwimp, E.; Aljundi, R.; Ben-David, S.; Bethge, M.; Cossu, A.; Gepperth, A.; Hayes, T.L.; Hüllermeier, E.; Kanan, C.; Kudithipudi, D.; et al. Continual learning: Applications and the road forward. Trans. Mach. Learn. Res. 2024, 2835–8856. Available online: https://openreview.net/forum?id=axBIMcGZn9 (accessed on 2 February 2026).
- Mundt, M.; Hong, Y.; Pliushch, I.; Ramesh, V. A wholistic view of continual learning with deep neural networks: Forgotten lessons and the bridge to active and open world learning. Neural Netw. 2023, 160, 306–336. [Google Scholar] [CrossRef] [PubMed]
- Mälzer, M.; Beck, S.; Moll, J. Radar-based sensing of wind turbines blades based on 35 GHz FMCW sensors installed at operational wind turbine towers. In Proceedings of the 14th International Workshop on Structural Health Monitoring (IWSHM 2023); Zenodo: Geneva, Switzerland, 2024. [Google Scholar] [CrossRef]
| Train / Finetune / Test | vb06 / - / vb06 | vb07 / - / vb07 | vb06 / - / vb07 | vb07 / - / vb06 | vb06 / vb07 / vb07 | vb07 / vb06 / vb06 | vb06 / vb07 / vb07 | vb07 / vb06 / vb06 |
|---|---|---|---|---|---|---|---|---|
| ROT1 | 0.981 | 0.910 | 0.193 | 0.478 | 0.881 | 0.968 | 0.749 | 0.917 |
| ROT2 | 0.979 | 0.898 | 0.452 | 0.208 | 0.863 | 0.971 | 0.727 | 0.896 |
| ROT3 | 0.980 | 0.863 | 0.336 | 0.217 | 0.813 | 0.971 | 0.716 | 0.910 |
| Train → Test | Low → Low | High → High | Low → High | High → Low | Both → Both | Both → Mid | Both * → Mid * | Both ** → Mid ** |
|---|---|---|---|---|---|---|---|---|
| ROT1 | 0.962 | 0.929 | 0.478 | 0.344 | 0.970 | 0.543 | 0.679 | 0.868 |
| ROT2 | 0.965 | 0.913 | 0.324 | 0.398 | 0.965 | 0.545 | 0.657 | 0.866 |
| ROT3 | 0.979 | 0.944 | 0.007 | 0.173 | 0.975 | 0.635 | 0.769 | 0.921 |
| Meta-Parameter | Low | Mid | High |
|---|---|---|---|
| rotation speed [rpm] | {7.0, 9.0} | {9.5, 11.5} | {12.5, 13.5} |
| nacelle orientation [°] | {200, 270} * | | |
| pitch angle [°] | {−2.5, −1.5} * | | |
| wind direction [°] | {150, 300} * | | |
| wind speed [m/s] | {4, 7} | {7, 8} | {8, 11.5} |
| Train → Test | Low → Low | High → High | Low → High | High → Low | Both → Both | Both → Mid |
|---|---|---|---|---|---|---|
| ROT1 | 0.926 | 0.999 | 0.235 | 0.430 | 0.971 | 0.458 |
| ROT2 | 0.935 | 0.991 | 0.400 | 0.264 | 0.971 | 0.344 |
| ROT3 | 0.950 | 0.992 | 0.291 | 0.101 | 0.979 | 0.538 |
| Train → Test | Low → Low | High → High | Low → High | High → Low | Both → Both | Both → Mid |
|---|---|---|---|---|---|---|
| ROT1 | 0.985 | 0.972 | 0.503 | 0.459 | 0.986 | 0.889 |
| ROT2 | 0.985 | 0.974 | 0.470 | 0.082 | 0.985 | 0.696 |
| ROT3 | 0.990 | 0.973 | 0.065 | 0.013 | 0.986 | 0.821 |
| Meta-Parameter | Low | Mid | High |
|---|---|---|---|
| rotation speed [rpm] | {0, 13.5} * | | |
| nacelle orientation [°] | {200, 220} | {225, 245} | {250, 270} |
| pitch angle [°] | {−5, 2} * | | |
| wind direction [°] | {190, 230} | {225, 245} | {240, 280} |
| wind speed [m/s] | {0, 10.5} * | | |
| Meta-Parameter | Low | Mid | High |
|---|---|---|---|
| rotation speed [rpm] | {12.25, 13.25} * | | |
| nacelle orientation [°] | {237.5, 262.5} * | | |
| pitch angle [°] | {−3, 3} | {5, 10} | {12, 18} |
| wind direction [°] | {237.5, 262.5} * | | |
| wind speed [m/s] | {8.25, 11.5} | {10, 16} | {14.5, 18.5} |
| | Train ROT1 | Train ROT2 | Train ROT3 | Test ROT1 | Test ROT2 | Test ROT3 | Test * ROT1 | Test * ROT2 | Test * ROT3 |
|---|---|---|---|---|---|---|---|---|---|
| raw | 0.996 | 0.996 | 0.999 | 0.980 | 0.976 | 0.988 | 0.744 | 0.677 | 0.820 |
| crop | 0.953 | 0.953 | 0.971 | 0.946 | 0.946 | 0.964 | 0.757 | 0.814 | 0.823 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Alipek, S.; Kexel, C.; Moll, J. Explainable Machine Learning for Tower-Radar Monitoring of Wind Turbine Blades: Fine-Grained Blade Recognition Under Changing Operational Conditions. Sensors 2026, 26, 1083. https://doi.org/10.3390/s26041083