A Scene Knowledge Integrating Network for Transmission Line Multi-Fitting Detection
Abstract
1. Introduction
- (1) Severe occlusion. There are generally varying degrees of occlusion between fittings, caused by the cameras' shooting angles and the fittings' connection modes. As shown in Figure 1a, fittings such as yoke plates, u-type hanging rings and hanging boards are occluded by the shielding rings, so the occluded regions lack features. At the same time, the region proposals for the shielding rings contain noisy features contributed by the other fittings.
- (2) Tiny-scale objects. As shown in Figure 1b, tiny-scale fittings such as hanging boards and u-type hanging rings occupy only a minute proportion of the whole image, owing to the camera range and the physical scale of the fittings, so their region-proposal features carry little information.
- (1) We define the aggregation of fittings as a "scene," incorporating electrical power industry knowledge and operators' recognition habits to improve multi-fitting detection. As shown in Figure 3, we exemplify eleven common scenes that assist multi-fitting detection. The Scene Knowledge Integrating Network (SKIN) integrates this knowledge through the scene filter and scene structure information modules.
- (2) The scene filter module collects global context and fine-grained visual information using a Gated Recurrent Unit (GRU), encoding it into scene features that are passed on as scene semantic features for further processing (a minimal sketch follows this list).
- (3) The scene structure information module encodes and learns scene structure information from a scene-fitting prior matrix, integrating this information with the scene semantic features to improve the detection of occluded and small fittings (illustrative sketches appear under Section 3.3 below).
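To make contribution (2) concrete, the following is a minimal PyTorch sketch of a GRU-based scene filter of the kind described above. The module name `SceneFilter`, the hidden size, and the use of the global context vector as the GRU's initial hidden state are our illustrative assumptions, not the authors' verified implementation.

```python
import torch
import torch.nn as nn

class SceneFilter(nn.Module):
    """Illustrative sketch: fold per-region proposal features into a single
    scene feature with a GRU, whose gates decide how much fine-grained
    evidence from each region to keep (the "filtering")."""

    def __init__(self, feat_dim: int = 1024, hidden_dim: int = 1024):
        super().__init__()
        # Global context enters as the GRU's initial hidden state (assumption).
        self.global_proj = nn.Linear(feat_dim, hidden_dim)
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)

    def forward(self, region_feats: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        # region_feats: (B, N, feat_dim) pooled region-proposal features
        # global_feat:  (B, feat_dim) image-level context vector
        h0 = self.global_proj(global_feat).unsqueeze(0)   # (1, B, hidden_dim)
        _, h_n = self.gru(region_feats, h0)
        return h_n.squeeze(0)                             # (B, hidden_dim) scene feature

# Usage with dummy tensors:
scene_filter = SceneFilter()
regions = torch.randn(2, 32, 1024)     # 32 proposals per image
context = torch.randn(2, 1024)         # e.g., global average-pooled backbone feature
scene_feature = scene_filter(regions, context)   # (2, 1024)
```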
2. Related Work
2.1. Fitting Detection
2.2. Object Detection Model Integrating Knowledge
3. Methods
3.1. Overview
3.2. Scene Filter Module
3.2.1. The Generation of the Original Scene Feature
3.2.2. The Filtering of the Original Scene Feature
3.2.3. The Constraint of the Scene Filtering
- (1) First, the scene label space is constructed. Specifically, a scene label $y_i \in \{0,1\}^{C+1}$ is assigned to the $i$-th image, where $C$ is the number of scene categories and the extra dimension represents the no-scene case. When the image contains scenes, the values of their corresponding dimensions are 1 and the others are 0; if the image contains no scene, the extra dimension is set to 1.
- (2) Second, a scene classifier is constructed to perform the scene classification task. The filtered scene feature is mapped into the scene label space through the scene classifier (a minimal sketch follows this list).
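The exact classifier and loss are not recoverable from this extraction; the snippet below is a minimal sketch of the scene-filtering constraint as multi-label classification over the $(C+1)$-dimensional label space described above, assuming a single linear classifier and a binary cross-entropy loss. The feature dimension and class indices are placeholders.

```python
import torch
import torch.nn as nn

NUM_SCENES = 11                      # the paper exemplifies eleven common scenes
# Label space has one extra "no scene" dimension at index NUM_SCENES.
scene_classifier = nn.Linear(1024, NUM_SCENES + 1)

# Multi-label target for an image containing scenes 2 and 5:
target = torch.zeros(1, NUM_SCENES + 1)
target[0, [2, 5]] = 1.0
# For an image with no scene, only the extra dimension would be set:
# target[0, NUM_SCENES] = 1.0

scene_feature = torch.randn(1, 1024)            # filtered scene feature from the GRU
logits = scene_classifier(scene_feature)        # map into the scene label space
loss = nn.BCEWithLogitsLoss()(logits, target)   # constraint on the scene filtering
loss.backward()
```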
3.3. Scene Structure Information Module
3.3.1. Scene-Fitting Prior Matrix Construction
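The construction details do not survive in this extraction. As one plausible reading, the sketch below estimates a scene-fitting prior matrix from training annotations as the conditional frequency with which each fitting class appears when a given scene is present; the counting scheme, the smoothing term, and the function name `build_prior_matrix` are assumptions for illustration only.

```python
import numpy as np

NUM_SCENES, NUM_FITTINGS = 11, 14    # 11 scenes; 14 fitting classes (cf. the dataset table)

def build_prior_matrix(annotations, eps: float = 1e-6) -> np.ndarray:
    """annotations: iterable of (scene_ids, fitting_ids) pairs, one per image.
    Returns a (NUM_SCENES x NUM_FITTINGS) matrix whose entry (s, f) estimates
    how often fitting class f occurs in images where scene s is present."""
    counts = np.zeros((NUM_SCENES, NUM_FITTINGS))
    scene_totals = np.zeros(NUM_SCENES)
    for scene_ids, fitting_ids in annotations:
        for s in scene_ids:
            scene_totals[s] += 1
            for f in set(fitting_ids):   # count each fitting class once per image
                counts[s, f] += 1
    return counts / (scene_totals[:, None] + eps)

# Toy example: one image whose scene 0 contains fitting classes 4 (HB) and 6 (YP).
prior = build_prior_matrix([([0], [4, 6])])
```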
3.3.2. The Network Structure of SSIM
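No architecture diagram is preserved here, so the following sketch only illustrates the general pattern named in contribution (3): weight the prior matrix rows by scene scores derived from the scene semantic feature, embed the resulting expected-fitting distribution, and add it to each region feature. Every layer, dimension, and the class name `SSIMModule` are assumptions rather than the published design.

```python
import torch
import torch.nn as nn

class SSIMModule(nn.Module):
    """Sketch of a scene structure information module: propagates
    scene-conditioned fitting priors into the region features."""

    def __init__(self, prior: torch.Tensor, feat_dim: int = 1024):
        super().__init__()
        self.register_buffer("prior", prior)                 # (S, F) scene-fitting matrix
        self.fitting_embed = nn.Embedding(prior.shape[1], feat_dim)
        self.scene_scores = nn.Linear(feat_dim, prior.shape[0])

    def forward(self, region_feats: torch.Tensor, scene_feat: torch.Tensor) -> torch.Tensor:
        # region_feats: (B, N, D); scene_feat: (B, D)
        scene_w = self.scene_scores(scene_feat).softmax(dim=-1)   # (B, S) scene weights
        fitting_w = scene_w @ self.prior                          # (B, F) expected fittings
        structure = fitting_w @ self.fitting_embed.weight         # (B, D) structure feature
        return region_feats + structure.unsqueeze(1)              # broadcast over proposals

# Usage with dummy tensors:
prior = torch.rand(11, 14)
ssim = SSIMModule(prior)
enhanced = ssim(torch.randn(2, 32, 1024), torch.randn(2, 1024))   # (2, 32, 1024)
```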
4. Experiment
4.1. Experiment Settings
4.1.1. Dataset Description
4.1.2. Experiment Environment and Hyperparameter Setting
4.2. Comparison with State-of-the-Art Models
- (1) In subfigure (a), our model detects two inverted bag-type suspension clamps.
- (2) In subfigure (b), our model detects the small fitting targets connecting the grading ring and the yoke plate, specifically the u-type hanging ring.
- (3) In subfigures (c) and (d), our model detects the occluded link plates.
- (4) In subfigure (e), our model detects the previously missed yoke plate, and the misdetected wedge-type strain clamp bounding box is also correctly rectified.
- (5) In subfigure (f), our model corrects the false detection of a u-type hanging ring and accurately detects the previously missed yoke plate.
4.3. Ablation Analysis
4.4. More Discussion
- (1) Severe occlusion: In some cases, when fittings are extensively obscured by other components, the model's ability to infer their presence is reduced, even with the assistance of scene knowledge.
- (2) Extreme scale variations: Very small fittings, which occupy only a few pixels, pose challenges due to limited visual information, making them harder to detect accurately.
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Jenssen, R.; Roverso, D. Automatic autonomous vision-based power line inspection: A review of current status and the potential role of deep learning. Int. J. Electr. Power Energy Syst. 2018, 99, 107–120.
- Zhao, Z.; Qi, H.; Nie, L. A Review of Visual Inspection of Transmission Lines Based on Deep Learning. Guangdong Electr. Power 2019, 32, 13.
- Gao, R.; Cheng, X.; Fan, B. A Brief Discussion on the Necessity of Using X-ray Inspection for Defects in the Tension Lines of Transmission Lines. China Equip. Eng. 2020, 21, 181–182.
- Fang, Z.; Lin, W.; Fan, S.; Ma, Y.; Gao, X.; Wu, H. Defect Identification Method for Small Fittings of Transmission Line Towers Based on Hierarchical Recognition Model. Power Inf. Commun. Technol. 2020, 18, 16–24.
- Zhao, Z.; Zhang, W.; Qi, Y.; Zhai, J.; Zhao, Q. Causal Classification Method for Defects in Transmission Line Fittings by Integrating Deep Features. J. Beijing Univ. Aeronaut. Astronaut. 2021, 47, 461–468.
- Chen, R.; Xu, H. Research on UAV Power Inspection Technology for High-Voltage Transmission Lines. Electron. Test. 2021, 20, 92–94.
- Huang, Z.; Wang, H.; Zhai, X.; Wang, Y.Q.; Gao, C. Research and Application of Autonomous Inspection Methods for Transmission Lines Using Drones. J. Comput. Technol. Autom. 2021, 40, 157–161.
- Shen, J.; Zhang, X.; Chen, Y.; Wang, H.; Huang, Z.; Ji, Y. Drone Inspection Methods for Transmission Lines in Complex Scenarios. Eng. Surv. 2021, 49, 73–78.
- Liu, X.; Miao, X.; Jiang, H.; Chen, J. Data analysis in visual power line inspection: An in-depth review of deep learning for component detection and fault diagnosis. Annu. Rev. Control 2020, 50, 253–277.
- Peng, X.; Qian, J.; Wu, G.; Mai, X.; Wei, L.; Rao, Z. Fully Autonomous Inspection System for Overhead Transmission Lines Using Robots and Demonstration Applications. High Volt. Eng. 2017, 43, 2582–2591.
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149.
- Cai, Z.; Vasconcelos, N. Cascade R-CNN: Delving into high quality object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6154–6162.
- Liu, Y.; Wang, R.; Shan, S.; Chen, X. Structure inference net: Object detection using scene-level context and instance-level relationships. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6985–6994.
- Zhang, Z.; Hoai, M. Object detection with self-supervised scene adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 21589–21599.
- Sagar, A.S.; Chen, Y.; Xie, Y.; Kim, H.S. MSA R-CNN: A comprehensive approach to remote sensing object detection and scene understanding. Expert Syst. Appl. 2024, 241, 122788.
- Xie, X.; Cheng, G.; Li, Q.; Miao, S.; Li, K.; Han, J. Fewer is more: Efficient object detection in large aerial images. Sci. China Inf. Sci. 2024, 67, 112106.
- Li, Z.; Du, X.; Cao, Y. GAR: Graph assisted reasoning for object detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 1–5 March 2020; pp. 1295–1304.
- Shu, X.; Liu, R.; Xu, J. A Semantic Relation Graph Reasoning Network for Object Detection. In Proceedings of the 2021 IEEE 10th Data Driven Control and Learning Systems Conference (DDCLS), Suzhou, China, 14–16 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1309–1314.
- Tan, L.; Wang, Y.; Shen, C. Obstacle Vision Detection and Recognition Algorithm for De-icing Robots on Transmission Lines. J. Instrum. Meas. 2011, 32, 8.
- Jin, L.; Hu, J.; Yan, S. Image-Based Fault Diagnosis Method for Spacers of High-Voltage Transmission Lines. High Volt. Eng. 2013, 39, 1040–1045.
- Wang, W.; Zhang, J.; Han, J.; Liu, L.; Zhu, M. Detection Method for Wire Breakage and Foreign Object Defects in Transmission Lines Based on UAV Images. Comput. Appl. 2015, 35, 2404–2408.
- Wan, L.; Wu, S.; Xie, F.; Liu, Q.; Dai, J.C. Monitoring System for Tension Splice Clamps of Transmission Lines Based on Image Processing. J. Wuhan Univ. (Eng. Ed.) 2020, 53, 1106–1111.
- Liu, H. Research on Visual Recognition Methods for Obstacles in High-Voltage Transmission Line; Harbin Institute of Technology: Harbin, China, 2017.
- Guo, S. Research on Obstacle Recognition and Localization for Line Inspection Robots Based on Binocular Vision; Shandong University of Science and Technology: Qingdao, China, 2020.
- Tang, Y.; Han, J.; Wei, W.; Ding, J.; Peng, X. Research on Component Recognition and Defect Detection in Transmission Lines Using Deep Learning. Electron. Meas. Technol. 2018, 41, 60–65.
- Zhang, Y.; Wu, G.; Liu, Z.; Yang, S.; Xu, W. Transfer Learning for Detection of Shock Absorbers and Clamps in Transmission Lines Based on YOLOv3 Network. Comput. Appl. 2020, 40, 188–194.
- Jiao, R.T.; Ni, H.; Wang, Z. Research on Identification of Shock Absorbers in Transmission Lines Based on Faster R-CNN Algorithm. J. Chang. Eng. Inst. (Nat. Sci. Ed.) 2021, 22, 38–43.
- Xu, H.; Jiang, C.; Liang, X.; Lin, L.; Li, Z. Reasoning-RCNN: Unifying Adaptive Global Reasoning into Large-Scale Object Detection. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; IEEE: Piscataway, NJ, USA, 2019.
- Chen, X.; Gupta, A. Spatial memory for context reasoning in object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4086–4096.
- Jiang, C.; Xu, H.; Liang, X.; Lin, L. Hybrid knowledge routed modules for large-scale object detection. Adv. Neural Inf. Process. Syst. 2018, 31, 1559–1570.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Cho, K.; van Merriënboer, B.; Bahdanau, D.; Bengio, Y. On the properties of neural machine translation: Encoder-decoder approaches. arXiv 2014, arXiv:1409.1259.
- Galleguillos, C.; Rabinovich, A.; Belongie, S. Object categorization using co-occurrence, location and appearance. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–8.
- Chen, Z.; Wei, X.S.; Wang, P.; Guo, Y. Multi-label image recognition with graph convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5177–5186.
- Zhai, Y.; Yang, X.; Wang, Q.; Zhao, Z.; Zhao, W. Hybrid Knowledge R-CNN for Transmission Line Multi-fitting Detection. IEEE Trans. Instrum. Meas. 2021, 70, 1–12.
- Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic Differentiation in PyTorch. 2017. Available online: https://openreview.net/pdf?id=BJJsrmfCZ (accessed on 29 October 2017).
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 21–37.
- Chen, Q.; Wang, Y.; Yang, T.; Zhang, X.; Cheng, J.; Sun, J. You only look one-level feature. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13039–13048.
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
- Kamal, M.R.M.; Shahbudin, S.; Rahman, F.Y.A. Photovoltaic (PV) Module Defect Image Classification Analysis Using EfficientNetV2 Architectures. In Proceedings of the 2023 IEEE 14th Control and System Graduate Research Colloquium (ICSGRC), Shah Alam, Malaysia, 5 August 2023; pp. 236–241.
- Kulkarni, U.; Gurlahosur, S.V.; Babar, P.; Muttagi, S.I.; Soumya, N.; Jadekar, P.A.; Meena, S.M. Facial Key points Detection using MobileNetV2 Architecture. In Proceedings of the 2023 IEEE 8th International Conference for Convergence in Technology (I2CT), Lonavla, India, 7–9 April 2023; pp. 1–6.
- Guo, M.; Xu, T.; Liu, J.; Liu, Z.; Jiang, P.; Mu, T.; Zhang, S.; Martin, R.R.; Cheng, M.; Hu, S. Attention mechanisms in computer vision: A survey. Comput. Vis. Media 2022, 8, 331–368.
- Ma, J.; Tang, L.; Fan, F.; Huang, J.; Mei, X.; Ma, Y. SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer. IEEE/CAA J. Autom. Sin. 2022, 9, 1200–1217.
- Ning, X.; Tian, W.J.; Yu, L.N.; Li, W. A Brain-Inspired CIRA-DETR Full Inference Method for Small and Occluded Object Detection. J. Comput. Sci. 2022, 45, 2080–2092.
- Zhang, H.; Li, F.; Liu, S.; Zhang, L.; Su, H.; Zhu, J.; Ni, L.M.; Shum, H.Y. DINO: DETR with improved denoising anchor boxes for end-to-end object detection. arXiv 2022, arXiv:2203.03605.
- Zong, Z.; Song, G.; Liu, Y. DETRs with collaborative hybrid assignments training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 6748–6758.
Fitting Name | Training #Images | Training #Objects | Testing #Images | Testing #Objects | Total Objects
---|---|---|---|---|---
PT | 56 | 98 | 27 | 50 | 148
BT | 497 | 1735 | 150 | 463 | 2198
CT | 254 | 923 | 32 | 110 | 1033
WT | 24 | 62 | 12 | 42 | 104
HB | 825 | 3800 | 146 | 577 | 4377
UT | 707 | 2767 | 138 | 357 | 3124
YP | 794 | 1531 | 161 | 264 | 1795
PG | 55 | 64 | 20 | 24 | 88
SH | 265 | 924 | 94 | 260 | 1184
SP | 289 | 536 | 42 | 64 | 600
GR | 438 | 701 | 101 | 153 | 854
SR | 381 | 959 | 43 | 97 | 1056
WE | 246 | 279 | 77 | 83 | 362
AB | 506 | 1979 | 66 | 223 | 2202
Total | 1330 | 16,358 | 318 | 2767 | 19,125
Models | mAP50 | PT | BT | CT | WT | HB | UT | YP | PG | SH | SP | GR | SR | WE | AB | Time (ms/image)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
SSD300 | 51.4 | 78.3 | 85.5 | 40.8 | 11.0 | 29.2 | 23.2 | 58.3 | 5.0 | 82.9 | 69.5 | 92.6 | 71.5 | 97.7 | 54.2 | 8
SSD512 | 74.3 | 91.2 | 90.1 | 53.9 | 41.9 | 63.6 | 59.2 | 74.9 | 51.8 | 90.4 | 74.8 | 92.9 | 76.4 | 99.6 | 79.2 | 36
RetinaNet | 69.8 | 81.2 | 93.2 | 44.9 | 71.3 | 60.0 | 58.3 | 68.9 | 7.7 | 88.1 | 74.0 | 91.4 | 62.9 | 99.3 | 76.6 | 50
YOLOv5 | 71.3 | 86.7 | 73.7 | 60.8 | 77.3 | 55.7 | 68.4 | 63.7 | 42.3 | 88.8 | 78.4 | 90.7 | 54.8 | 97.6 | 58.6 | 33
YOLOv8 | 75.4 | 85.7 | 77.4 | 69.6 | 74.9 | 60.3 | 74.5 | 71.2 | 58.6 | 90.7 | 63.9 | 91.6 | 79.5 | 98.6 | 59.7 | 127
R-FCN | 67.0 | 76.3 | 35.4 | 59.3 | 73.3 | 57.6 | 48.7 | 78.4 | 52.7 | 72.5 | 62.4 | 87.7 | 69.9 | 94.7 | 68.4 | 230
EfficientNetV2 | 68.7 | 47.5 | 64.7 | 69.2 | 74.7 | 58.9 | 43.8 | 59.7 | 56.1 | 87.6 | 69.3 | 90.6 | 77.8 | 93.0 | 68.7 | 20
MobileNetV2 | 59.4 | 48.5 | 64.5 | 45.3 | 62.8 | 37.7 | 29.6 | 57.8 | 50.3 | 68.7 | 70.8 | 69.4 | 59.6 | 93.5 | 73.6 | 5
Swin Transformer | 75.2 | 87.4 | 79.4 | 86.5 | 73.4 | 69.9 | 76.3 | 74.2 | 32.9 | 89.8 | 76.3 | 76.1 | 51.2 | 99.8 | 80.2 | 214
DETR | 72.6 | 74.7 | 73.6 | 62.8 | 67.5 | 73.8 | 63.6 | 73.7 | 44.2 | 74.3 | 87.5 | 95.2 | 69.7 | 97.6 | 57.7 | 145
DINO | 75.8 | 91.2 | 87.5 | 58.0 | 78.2 | 63.0 | 72.0 | 81.5 | 34.0 | 80.0 | 87.5 | 85.5 | 72.0 | 90.5 | 79.8 | 210
CO-DETR | 75.5 | 90.8 | 73.0 | 57.5 | 77.5 | 67.5 | 71.0 | 80.5 | 33.5 | 85.5 | 87.0 | 83.2 | 71.5 | 99.0 | 79.2 | 175
Baseline | 71.4 | 81.6 | 89.2 | 56.0 | 64.7 | 49.6 | 49.6 | 78.8 | 33.3 | 81.1 | 86.4 | 89.7 | 62.7 | 100.0 | 76.9 | 158
Ours | 76.3 | 91.0 | 93.8 | 58.6 | 79.0 | 48.7 | 52.5 | 82.3 | 34.8 | 90.8 | 88.2 | 96.1 | 72.6 | 100.0 | 80.4 | 193
Models | SFM | SSIM | AP50:95 (%) | AP50 (%) | AR1 (%) | AR100 (%)
---|---|---|---|---|---|---
Baseline | | | 38.4 | 73.6 | 26.5 | 46.8
Baseline + SFM | √ | | 41.5 (+3.1) | 76.9 (+3.3) | 27.5 (+1.0) | 49.6 (+2.8)
Ours | √ | √ | 42.0 (+3.6) | 78.4 (+4.8) | 27.4 (+0.9) | 49.9 (+3.1)
Prior Matrix | AP50:95 | AP50 | AP75 | AR1 | AR100
---|---|---|---|---|---
Ones Prior Matrix | 41.1 | 76.4 | 40.4 | 27.3 | 49.7
Random Prior Matrix | 41.1 | 76.7 | 40.7 | 27.6 | 49.6
Scene-Fitting Prior Matrix | 42.0 | 78.4 | 41.2 | 27.4 | 49.9
Experiments | AP50:95 | AP50 | AP75 | AR1 | AR10 | AR100
---|---|---|---|---|---|---
 | 41.6 | 76.9 | 41.3 | 27.5 | 49.4 | 49.6
 | 41.9 | 77.3 | 41.4 | 27.4 | 49.8 | 49.9
 | 41.6 | 77.2 | 41.0 | 27.1 | 49.2 | 49.4
 | 41.8 | 77.9 | 40.9 | 27.4 | 49.4 | 49.5
 | 42.0 | 78.4 | 41.2 | 27.4 | 49.8 | 49.9
 | 41.1 | 77.7 | 41.3 | 27.7 | 49.3 | 49.4
 | 41.7 | 77.6 | 42.7 | 27.3 | 49.5 | 49.7
 | 42.0 | 77.4 | 41.3 | 28.0 | 49.9 | 50.0
 | 41.2 | 77.1 | 40.0 | 27.7 | 49.3 | 49.5
Experiments | AP50:95 | AP50 | AP75 | AR1 | AR10 | AR100
---|---|---|---|---|---|---
 | 40.7 | 76.2 | 39.9 | 27.4 | 49.6 | 49.7
 | 42.0 | 78.4 | 41.2 | 27.4 | 49.8 | 49.9
 | 41.1 | 77.2 | 41.0 | 27.0 | 49.2 | 49.3
 | 41.1 | 77.3 | 40.4 | 27.4 | 49.2 | 49.3
 | 42.0 | 77.0 | 42.5 | 27.9 | 50.1 | 50.2