An Improved YOLOv11n Model Based on Wavelet Convolution for Object Detection in Soccer Scenes
Abstract
1. Introduction
- (1)
- The proposed C3k2-WTConv module integrates wavelet convolution [9] into the C3k2 architecture, leveraging the orthogonal symmetry of quadrature mirror filters to achieve balanced frequency-space decomposition, enhancing multi-scale feature representation while reducing model parameters.
- (2)
- The introduced Channel Prior Convolutional Attention (CPCA) [10] mechanism incorporates symmetric operations (e.g., average-max pooling pairs and multi-scale convolutional kernels) to effectively direct feature focus toward critical regions, significantly improving detection accuracy without compromising inference speed.
- (3)
- The incorporation of the InnerShape-IoU loss function substantially improves bounding box generalization performance.
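The quadrature-mirror symmetry invoked in contribution (1) is easiest to see with the Haar filter pair: the high-pass filter is the sign-alternated mirror of the low-pass filter, and together they form an orthogonal, perfectly invertible decomposition. The following is a minimal 1-D NumPy sketch for illustration only (the WTConv module in the paper operates on 2-D feature maps):

```python
import numpy as np

# 1-D Haar quadrature mirror filter pair: the high-pass filter mirrors
# the low-pass filter with alternating signs, giving an orthogonal,
# perfectly invertible two-band decomposition.
low = np.array([1.0, 1.0]) / np.sqrt(2.0)    # approximation (low-pass)
high = np.array([1.0, -1.0]) / np.sqrt(2.0)  # detail (high-pass)

def haar_decompose(x):
    """Single-level Haar analysis: stride-2 correlation with each filter."""
    pairs = x.reshape(-1, 2)
    return pairs @ low, pairs @ high  # approximation and detail coefficients

def haar_reconstruct(a, d):
    """Single-level Haar synthesis: inverts the decomposition exactly."""
    return (np.outer(a, low) + np.outer(d, high)).reshape(-1)

x = np.array([4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 8.0, 0.0])
a, d = haar_decompose(x)
x_rec = haar_reconstruct(a, d)
print(np.allclose(x, x_rec))  # → True: orthogonality makes reconstruction exact
```

Because the two filters are orthogonal (`low @ high == 0`), no information is lost in the split between frequency bands, which is what allows wavelet convolution to trade spatial resolution for frequency selectivity without discarding features.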
2. Related Works
2.1. Object Detection in Soccer Scenes
2.2. Wavelet Transforms in Deep Learning
3. Methods
3.1. Wavelet Convolution
3.1.1. The Wavelet Transform as Convolutions
3.1.2. Convolution in the Wavelet Domain
Algorithm 1: Wavelet Convolution (WConv)
Input: input tensor X; convolution kernel size k
Output: output tensor Y
1: // Step 1: Decomposition via the wavelet transform (WT)
2: Z ← HaarWT(X) ▷ using the kernels in Equation (2)
3: // Step 2: Frequency-domain convolution
4: Z′ ← DepthwiseConv2D(Z, Wdw, kernel_size = k) ▷ Wdw is a learnable kernel; Z′ holds the processed subbands
5: // Step 3: Reconstruction via the inverse wavelet transform (IWT)
6: Y ← IWT(Z′) ▷ using transposed convolution
7: return Y
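Algorithm 1 can be sketched end-to-end in NumPy. This is a single-level, single-channel illustration under assumed conventions (2×2 Haar analysis kernels, one 3×3 depthwise kernel per subband); the actual WTConv implementation [9] uses learnable kernels inside a deep-learning framework:

```python
import numpy as np

# 2-D Haar analysis kernels: outer products of the 1-D low/high-pass
# pair yield the LL, LH, HL, HH subband filters (cf. Equation (2)).
l = np.array([1.0, 1.0]) / np.sqrt(2.0)
h = np.array([1.0, -1.0]) / np.sqrt(2.0)
KERNELS = [np.outer(a, b) for a in (l, h) for b in (l, h)]  # LL, LH, HL, HH

def haar_wt(x):
    """Stride-2 2-D Haar transform of an (H, W) array -> (4, H/2, W/2)."""
    H, W = x.shape
    tiles = x.reshape(H // 2, 2, W // 2, 2).transpose(0, 2, 1, 3)  # 2x2 tiles
    return np.stack([np.einsum('ijkl,kl->ij', tiles, k) for k in KERNELS])

def inverse_haar_wt(z):
    """Exact inverse: transposed (stride-2) convolution with the same kernels."""
    out = np.zeros((z.shape[1] * 2, z.shape[2] * 2))
    for sub, k in zip(z, KERNELS):
        out += np.kron(sub, k)  # place each 2x2 kernel back at stride 2
    return out

def depthwise_conv3x3(z, w):
    """Per-subband 3x3 convolution with zero padding (the learnable Wdw)."""
    zp = np.pad(z, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(z)
    for c in range(z.shape[0]):
        for i in range(z.shape[1]):
            for j in range(z.shape[2]):
                out[c, i, j] = np.sum(zp[c, i:i + 3, j:j + 3] * w[c])
    return out

def wconv(x, w):
    """Wavelet convolution: decompose, filter each subband, reconstruct."""
    return inverse_haar_wt(depthwise_conv3x3(haar_wt(x), w))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
identity = np.zeros((4, 3, 3)); identity[:, 1, 1] = 1.0  # identity kernels
print(np.allclose(wconv(x, identity), x))  # → True: WT then IWT is lossless
```

With identity depthwise kernels the pipeline reduces to WT followed by IWT, so the input is recovered exactly; this perfect-reconstruction property is what Step 3 of Algorithm 1 relies on. Note also how a small k applied in the half-resolution wavelet domain corresponds to a larger effective receptive field in the original resolution.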
3.2. Network Architecture
- (1)
- A novel C3k2-WTConv module is designed to replace the original C3k2 module;
- (2)
- An additional P2 detection branch is introduced specifically to enhance small object detection performance;
- (3)
- The CPCA mechanism is incorporated into the Neck network;
- (4)
- The InnerShape-IoU loss function is adopted for bounding box regression.
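For item (4), the Inner-IoU component can be sketched as follows. This is a simplified illustration of the auxiliary-box idea only (both boxes are shrunk about their centres by a ratio factor before computing IoU) and omits the scale- and shape-weighted penalty terms that Shape-IoU contributes to the full InnerShape-IoU loss; the (cx, cy, w, h) box format and the default ratio are assumptions for this sketch:

```python
def _corners(cx, cy, w, h, ratio):
    # Auxiliary "inner" box: same centre, width/height scaled by `ratio`.
    return (cx - w * ratio / 2, cy - h * ratio / 2,
            cx + w * ratio / 2, cy + h * ratio / 2)

def inner_iou(box1, box2, ratio=0.75):
    """IoU computed on auxiliary boxes scaled about each box centre.

    Boxes are (cx, cy, w, h). A ratio < 1 shrinks both boxes, which the
    Inner-IoU work argues accelerates regression for high-IoU samples.
    """
    x1a, y1a, x2a, y2a = _corners(*box1, ratio)
    x1b, y1b, x2b, y2b = _corners(*box2, ratio)
    iw = max(0.0, min(x2a, x2b) - max(x1a, x1b))
    ih = max(0.0, min(y2a, y2b) - max(y1a, y1b))
    inter = iw * ih
    union = (x2a - x1a) * (y2a - y1a) + (x2b - x1b) * (y2b - y1b) - inter
    return inter / union

# Two 4x4 boxes offset by one unit: shrinking sharpens the overlap signal.
print(round(inner_iou((0, 0, 4, 4), (1, 0, 4, 4)), 3))  # → 0.5
```

In a training loop the quantity `1 - inner_iou(pred, target)` would serve as the regression loss term, with the shape-aware weighting of Shape-IoU [37] layered on top to obtain the combined InnerShape-IoU objective.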
3.3. C3k2-WTConv Module
3.4. Channel Prior Convolutional Attention Mechanism (CPCA)
3.5. The InnerShape-IoU Loss Function
4. Experiment
4.1. Dataset
4.2. Evaluation Metrics
4.3. Experimental Environment and Parameter Settings
4.4. Experiment Results
4.5. Visualization and Detailed Performance Analysis
4.5.1. Visualization of Detection Results
4.5.2. Per-Class Performance Breakdown
- (1)
- Dramatic Improvement on the Critical “Sports Ball” Class: The most significant advancement is observed on the sports ball class—the smallest and most challenging object. Our method elevates the AP@0.5 to 0.701, which constitutes a substantial increase of 19.2% over the strongest baseline (YOLOv8n, 0.588) and a 31% gain over the baseline YOLOv11n (0.535). This leap in performance quantitatively confirms that our proposed innovations—specifically the P2 detection head and enhanced feature representation—are highly effective for small object detection, directly addressing a core challenge in soccer analytics.
- (2)
- Sustained High Performance on Larger Objects: For the larger goal and person classes, all models (including baselines) already perform at a very high level (AP@0.5 > 0.95), nearing the performance ceiling for these categories. Our method successfully maintains this superior performance, with AP scores on par with the best baselines. This indicates that the architectural complexities introduced for small object detection do not compromise the model’s capability on larger, less challenging objects.
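The relative gains quoted in point (1) can be checked directly against the per-class AP@0.5 values for the sports ball class:

```python
# AP@0.5 on the sports-ball class, taken from the per-class results table.
ours, yolov8n, yolov11n = 0.701, 0.588, 0.535

gain_vs_v8 = (ours - yolov8n) / yolov8n * 100    # relative gain over YOLOv8n, %
gain_vs_v11 = (ours - yolov11n) / yolov11n * 100  # relative gain over YOLOv11n, %
print(round(gain_vs_v8, 1), round(gain_vs_v11, 1))  # → 19.2 31.0
```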
4.6. Ablation Study
4.6.1. Impact of the Improved Modules on Model Performance
- (1)
- Effectiveness of the P2 Branch.
- (2)
- Effectiveness of the C3k2-WTConv Module.
- (3)
- Combined Effectiveness of P2 Branch and C3k2-WTConv.
- (4)
- Further Improvement with CPCA and InnerShape-IoU.
4.6.2. Impact of C3k2-WTConv Module Placement on Model Performance
4.6.3. Impact of CPCA Mechanism Placement on Model Performance
4.6.4. Impact of Different Attention Mechanisms on Model Performance
5. Conclusions
- (1)
- Enhancing Generalization: A primary focus will be on exploring domain adaptation and generalization techniques to mitigate the performance drop across datasets, which is the key challenge identified above.
- (2)
- Extending to Multi-Object Tracking: The robust detection capabilities of our model provide a strong foundation for extension to multi-object tracking (MOT) in dynamic sports scenarios, enabling holistic video analysis.
- (3)
- Enabling Edge Deployment: We will explore lightweight optimizations and model compression techniques to facilitate the real-time deployment of our system on edge devices, such as embedded systems at sporting venues.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Yoon, H.S.; Bae, Y.J.; Yang, Y.K. A Soccer Image Sequence Mosaicking and Analysis Method Using Line and Advertisement Board Detection. ETRI J. 2002, 24, 443–454. [Google Scholar] [CrossRef]
- Vandenbroucke, N.; Macaire, L.; Postaire, J.-G. Color Image Segmentation by Pixel Classification in an Adapted Hybrid Color Space. Application to Soccer Image Analysis. Comput. Vis. Image Underst. 2003, 90, 190–216. [Google Scholar] [CrossRef]
- Gerke, S.; Singh, S.; Linnemann, A.; Ndjiki-Nya, P. Unsupervised Color Classifier Training for Soccer Player Detection. In Proceedings of the 2013 Visual Communications and Image Processing (VCIP), Kuching, Malaysia, 17–20 November 2013. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar] [CrossRef]
- Ultralytics YOLO Repository. Available online: https://github.com/ultralytics/yolov5/releases (accessed on 8 October 2024).
- Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 7464–7475. [Google Scholar]
- Ultralytics YOLO Repository. Available online: https://github.com/ultralytics/ultralytics (accessed on 10 March 2025).
- Finder, S.E.; Amoyal, R.; Treister, E.; Freifeld, O. Wavelet Convolutions for Large Receptive Fields. In Computer Vision—ECCV 2024; Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G., Eds.; Springer Nature: Cham, Switzerland, 2025; pp. 363–380. [Google Scholar] [CrossRef]
- Huang, H.; Chen, Z.; Zou, Y.; Lu, M.; Chen, C.; Song, Y.; Zhang, H.; Yan, F. Channel Prior Convolutional Attention for Medical Image Segmentation. Comput. Biol. Med. 2024, 178, 108784. [Google Scholar] [CrossRef]
- Kim, H.; Nam, S.; Kim, J. Player Segmentation Evaluation for Trajectory Estimation in Soccer Games. In Proceedings of the Image and Vision Computing, Palmerston North, New Zealand, 26–28 November 2003; pp. 159–162. [Google Scholar]
- Nunez, J.R.; Facon, J.; de Souza Brito, A. Soccer Video Segmentation: Referee and Player Detection. In Proceedings of the 15th International Conference on Systems, Signals and Image Processing, Bratislava, Slovakia, 25–28 June 2008. [Google Scholar]
- Mačkowiak, S. Segmentation of Football Video Broadcast. Int. J. Electron. Telecommun. 2013, 59, 75–84. [Google Scholar] [CrossRef]
- Baysal, S.; Duygulu, P. Sentioscope: A Soccer Player Tracking System Using Model Field Particles. IEEE Trans. Circuits Syst. Video Technol. 2016, 26, 1350–1362. [Google Scholar] [CrossRef]
- Manafifard, M.; Ebadi, H.; Moghaddam, H. A Survey on Player Tracking in Soccer Videos. Comput. Vis. Image Underst. 2017, 159, 19–46. [Google Scholar] [CrossRef]
- Hsu, H.K.; Hung, W.C.; Tseng, H.Y.; Yao, C.J.; Tsai, Y.H.; Maneesh, S.; Yang, M.H. Progressive Domain Adaptation for Object Detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 1–5 March 2020; pp. 749–757. [Google Scholar]
- Inoue, N.; Furuta, R.; Yamasaki, T.; Aizawa, K. Cross-Domain Weakly-Supervised Object Detection Through Progressive Domain Adaptation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2018, Salt Lake City, UT, USA, 18–22 June 2018; pp. 5001–5009. [Google Scholar] [CrossRef]
- Hurault, S.; Ballester, C.; Haro, G. Self-supervised Small Soccer Player Detection and Tracking. In Proceedings of the 3rd International Workshop on Multimedia Content Analysis in Sports, Seattle, WA, USA, 16 October 2020; pp. 9–18. [Google Scholar]
- Komorowski, J.; Kurzejamski, G.; Sarwas, G. FootAndBall: Integrated Player and Ball Detector. arXiv 2019, arXiv:1912.05445. [Google Scholar]
- Lu, K.; Chen, J.; Little, J.J.; He, H. Lightweight Convolutional Neural Networks for Player Detection and Classification. Comput. Vis. Image Underst. 2018, 172, 77–87. [Google Scholar] [CrossRef]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
- Sorano, D.; Carrara, F.; Cintia, P.; Falchi, F.; Pappalardo, L. Automatic Pass Annotation from Soccer Video Streams Based on Object Detection and LSTM. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Valletta, Malta, 27–29 February 2020; pp. 475–490. [Google Scholar]
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end Object Detection with Transformers. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020. [Google Scholar]
- Qi, M.; Zheng, K.D. Soccer Video Object Detection Based on Deep Learning with Attention Mechanism. Intell. Comput. Appl. 2022, 12, 143–145+154. [Google Scholar]
- He, Y.Y. Improved YOLOX-S-based Video Target Detection Method for Football Matches. J. Sci. Teach. Coll. Univ. 2024, 44, 30–35. [Google Scholar]
- Daubechies, I. Ten Lectures on Wavelets; SIAM: Philadelphia, PA, USA, 1992. [Google Scholar]
- Duan, Y.; Liu, F.; Jiao, L.; Zhao, P.; Zhang, L. SAR Image Segmentation Based on Convolutional-Wavelet Neural Network and Markov Random Field. Pattern Recognit. 2017, 64, 255–267. [Google Scholar] [CrossRef]
- Williams, T.; Li, R. Wavelet Pooling for Convolutional Neural Networks. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Finder, S.E.; Zohav, Y.; Ashkenazi, M.; Treister, E. Wavelet Feature Maps Compression for Image-to-image CNNs. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2022. [Google Scholar]
- Liu, P.; Zhang, H.; Zhang, K.; Lin, L.; Zuo, W. Multi-level Wavelet-CNN for Image Restoration. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 773–782. [Google Scholar]
- Alaba, S.; Ball, J. WCNN3D: Wavelet Convolutional Neural Network-Based 3D Object Detection for Autonomous Driving. Sensors 2022, 22, 7010. [Google Scholar] [CrossRef]
- Guth, F.; Coste, S.; De Bortoli, V.; Mallat, S. Wavelet Score-based Generative Modeling. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2022. [Google Scholar]
- Phung, H.; Dao, Q.; Tran, A. Wavelet Diffusion Models are Fast and Scalable Image Generators. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 10199–10208. [Google Scholar]
- Mallat, S.; Peyré, G. A Wavelet Tour of Signal Processing: The Sparse Way; Elsevier Academic Press: Cambridge, MA, USA, 2008. [Google Scholar]
- Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Publishing House of Electronics Industry: Beijing, China, 2010. [Google Scholar]
- Zhang, H.; Zhang, S. Shape-IoU: More Accurate Metric Considering Bounding Box Shape and Scale. arXiv 2023, arXiv:2312.17663v2. [Google Scholar]
- Zhang, H.; Xu, C.; Zhang, S. Inner-IoU: More Effective Intersection over Union Loss with Auxiliary Bounding Box. arXiv 2023, arXiv:2311.02877v4. [Google Scholar]
- Soccer-Detection Dataset Repository. Available online: https://github.com/Qunmasj-Vision-Studio/Soccer-Detectiin118 (accessed on 10 March 2025).
- Wang, C.Y.; Yeh, I.H.; Mark Liao, H.Y. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. In Computer Vision—ECCV 2024; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2025; Volume 15089, pp. 1–21. [Google Scholar] [CrossRef]
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. arXiv 2024, arXiv:2405.14458v1. [Google Scholar]
- Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023. [Google Scholar] [CrossRef]
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Computer Vision—ECCV 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 3–19. [Google Scholar]
- Xu, W.; Wan, Y. ELA: Efficient Local Attention for Deep Convolutional Neural Networks. arXiv 2024, arXiv:2403.01123v1. [Google Scholar] [CrossRef]
- Cai, X.; Lai, Q.; Wang, Y.; Wang, W.; Sun, Z.; Yao, Y. Poly Kernel Inception Network for Remote Sensing Detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2024, Seattle, WA, USA, 17–21 June 2024; pp. 27706–27716. [Google Scholar] [CrossRef]
- Ouyang, D.; He, S.; Zhang, G.; Luo, M. Efficient Multi-Scale Attention Module with Cross-Spatial Learning. arXiv 2023, arXiv:2305.13563. [Google Scholar]
- Lau, K.W.; Po, L.M.; Rehman, Y.A.U. Large Separable Kernel Attention: Rethinking the Large Kernel Attention Design in CNN. Expert Syst. Appl. 2024, 236, 121352. [Google Scholar] [CrossRef]
- Wan, D.; Lu, R.; Shen, S.; Xu, T.; Lang, X.; Ren, Z. Mixed Local Channel Attention for Object Detection. Eng. Appl. Artif. Intell. 2023, 123, 106442. [Google Scholar] [CrossRef]
- Baidu Inc. Football Dataset; Baidu AI Studio: Beijing, China, 2023; Available online: https://aistudio.baidu.com/datasetdetail/254098 (accessed on 8 September 2025).
- Torralba, A.; Efros, A.A. Unbiased Look at Dataset Bias. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 1521–1528. [Google Scholar] [CrossRef]
Methods | Precision | Recall | mAP0.5 | mAP0.5:0.95 | Params/M | GFLOPs | ImgLatency/ms |
---|---|---|---|---|---|---|---|
YOLOv5n | 0.877 | 0.796 | 0.838 | 0.515 | 1.68 | 4.1 | 3.2 |
YOLOv7-tiny | 0.876 | 0.813 | 0.859 | 0.537 | 5.73 | 13 | 1.7 |
YOLOv8n | 0.883 | 0.78 | 0.838 | 0.545 | 2.87 | 8.1 | 2.1 |
YOLOv9-tiny | 0.85 | 0.772 | 0.827 | 0.543 | 2.5 | 10.7 | 2.9 |
YOLOv10n | 0.817 | 0.759 | 0.819 | 0.520 | 2.57 | 8.2 | 1.4 |
YOLOv11n | 0.848 | 0.773 | 0.822 | 0.533 | 2.46 | 6.3 | 2.0 |
WCC-YOLO | 0.886 | 0.842 | 0.875 | 0.560 | 2.44 | 10.5 | 2.3 |
Methods | Precision | Recall | mAP0.5 | mAP0.5:0.95 |
---|---|---|---|---|
YOLOv8n | 0.871 ± 0.013 | 0.785 ± 0.006 | 0.838 ± 0.002 | 0.538 ± 0.007 |
YOLOv9-t | 0.861 ± 0.014 | 0.770 ± 0.002 | 0.829 ± 0.008 | 0.545 ± 0.003 |
YOLOv11n | 0.855 ± 0.015 | 0.774 ± 0.009 | 0.827 ± 0.008 | 0.534 ± 0.005 |
WCC-YOLO | 0.875 ± 0.011 | 0.837 ± 0.003 | 0.874 ± 0.001 | 0.558 ± 0.003 |
Methods | mAP0.5 Goal | mAP0.5 Person | mAP0.5 Sports Ball | mAP0.5 All | mAP0.5:0.95 Goal | mAP0.5:0.95 Person | mAP0.5:0.95 Sports Ball | mAP0.5:0.95 All |
---|---|---|---|---|---|---|---|---|
YOLOv8n | 0.960 | 0.965 | 0.588 | 0.838 | 0.765 | 0.615 | 0.256 | 0.545 |
YOLOv9-t | 0.960 | 0.965 | 0.556 | 0.827 | 0.756 | 0.622 | 0.250 | 0.543 |
YOLOv11n | 0.968 | 0.962 | 0.535 | 0.822 | 0.762 | 0.613 | 0.225 | 0.533 |
WCC-YOLO | 0.966 | 0.959 | 0.701 | 0.875 | 0.765 | 0.610 | 0.306 | 0.560 |
Methods | Precision | Recall | mAP0.5 | mAP0.5:0.95 | Params/M | GFLOPs | ImgLatency/ms |
---|---|---|---|---|---|---|---|
YOLOv11n | 0.848 | 0.773 | 0.822 | 0.533 | 2.46 | 6.3 | 2.2 |
YOLOv11n + P2 | 0.874 | 0.840 | 0.867 | 0.549 | 2.54 | 10.2 | 2.3 |
YOLOv11n + WTConv | 0.865 | 0.789 | 0.844 | 0.542 | 2.36 | 6.2 | 2.2 |
YOLOv11n + P2 + WTConv | 0.852 | 0.835 | 0.869 | 0.555 | 2.43 | 10.0 | 2.3 |
YOLOv11n + P2 + WTConv + CPCA | 0.881 | 0.847 | 0.878 | 0.558 | 2.44 | 10.5 | 2.3 |
YOLOv11n + P2 + WTConv + CPCA + InnerShape-IoU (WCC-YOLO) | 0.886 | 0.835 | 0.875 | 0.560 | 2.44 | 10.5 | 2.3 |
Methods | Precision | Recall | mAP0.5 | mAP0.5:0.95 | Params/M | GFLOPs | ImgLatency/ms |
---|---|---|---|---|---|---|---|
Backbone (P4 + P5) | 0.858 | 0.812 | 0.851 | 0.549 | 2.48 | 10.4 | 2.4 |
Backbone | 0.869 | 0.818 | 0.856 | 0.548 | 2.48 | 10.5 | 2.3 |
all | 0.886 | 0.835 | 0.875 | 0.560 | 2.44 | 10.5 | 2.3 |
Methods | Precision | Recall | mAP0.5 | mAP0.5:0.95 | Params/M | GFLOPs | ImgLatency/ms |
---|---|---|---|---|---|---|---|
Backbone (in C2PSA) | 0.906 | 0.816 | 0.861 | 0.549 | 2.45 | 10.1 | 2.2 |
Backbone (after C2PSA) | 0.849 | 0.835 | 0.864 | 0.548 | 2.59 | 10.6 | 2.2 |
Neck (before P2 head) | 0.886 | 0.835 | 0.875 | 0.560 | 2.44 | 10.5 | 2.3 |
Neck (before P2 + P3 head) | 0.889 | 0.813 | 0.865 | 0.548 | 2.45 | 10.7 | 2.4 |
Backbone (after C2PSA) +Neck (before P2) | 0.873 | 0.825 | 0.867 | 0.560 | 2.60 | 10.9 | 2.4 |
Methods | Precision | Recall | mAP0.5 | mAP0.5:0.95 | Params/M | GFLOPs | ImgLatency/ms |
---|---|---|---|---|---|---|---|
SE [41] | 0.886 | 0.827 | 0.862 | 0.543 | 2.43 | 10.1 | 2.3 |
CBAM [42] | 0.855 | 0.831 | 0.850 | 0.548 | 2.43 | 10.1 | 2.3 |
ELA [43] | 0.863 | 0.821 | 0.849 | 0.545 | 2.44 | 10.1 | 2.3 |
CAA [44] | 0.861 | 0.832 | 0.861 | 0.553 | 2.43 | 10.3 | 2.1 |
EMA [45] | 0.854 | 0.815 | 0.858 | 0.546 | 2.43 | 10.2 | 2.2 |
LSKA [46] | 0.878 | 0.815 | 0.860 | 0.547 | 2.43 | 10.2 | 2.3 |
MLCA [47] | 0.850 | 0.824 | 0.857 | 0.545 | 2.43 | 10.1 | 2.3 |
CPCA | 0.886 | 0.835 | 0.875 | 0.560 | 2.44 | 10.5 | 2.3 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wu, Y.; Geng, L.; Guo, X.; Wu, C.; Yu, G. An Improved YOLOv11n Model Based on Wavelet Convolution for Object Detection in Soccer Scenes. Symmetry 2025, 17, 1612. https://doi.org/10.3390/sym17101612