Research on Arc Sag Measurement Methods for Transmission Lines Based on Deep Learning and Photogrammetry Technology
Abstract
1. Introduction
2. Related Work
2.1. Survey Area Overview
2.2. Data Acquisition
3. CM-Mask-RCNN-Based Transmission Line Spacer Bar Automatic Segmentation Network
3.1. CM-Mask-RCNN Framework
3.2. Mask-RCNN Network
3.3. Coordinate Attention Block
3.4. Multi-Head Self-Attention Block
4. Transmission Line Arc Sag Measurement
4.1. Extraction of Spacer Bar Center Coordinates Based on CM-Mask-RCNN
4.2. Recovery of 3D Information in the Center of the Spacer Bar
4.2.1. Calculation of External Azimuth Parameters with Bundle Adjustment
4.2.2. Space Front Rendezvous Algorithm for Computing Ground Coordinates of Spacer Center
4.3. Establishment of Transmission Line Model and Arc Sag Measurement
5. Experimental Results and Analysis
5.1. Dataset and Evaluation Criteria
5.2. CM-Mask-RCNN Model Training
5.3. Comparison and Analysis of Transmission Line Spacer Bar Extraction Experiments
5.4. Transmission Line Arc Sag Measurement Experiment
6. Discussion
6.1. Ablation Experiments
6.2. The Superiority of Combining CAB Attention Mechanism with MHSA Attention Mechanism
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Camera, GNSS | Parameters |
---|---|
Image sensor | 20.48 million total pixels |
Video resolution | 4K: 3840 × 2160 |
Maximum photo resolution | 5472 × 3078 (16:9); 4864 × 3648 (4:3); 5472 × 3648 (3:2) |
Hovering accuracy | With RTK enabled and working normally: vertical ±0.1 m, horizontal ±0.1 m. Without RTK: vertical ±0.1 m (visual positioning working properly) or ±0.5 m (GNSS positioning working properly); horizontal ±0.3 m (visual positioning working properly) or ±1.5 m (GNSS positioning working properly) |
Positioning accuracy | Vertical: 1.5 cm + 1 ppm (RMS); horizontal: 1 cm + 1 ppm (RMS) |
Num | Camera | k1 | k2 | p1 | p2 | a | b | x0 | y0 | f |
---|---|---|---|---|---|---|---|---|---|---|
1 | Phantom 4 RTK | 1.06 × 10−9 | −9.60 × 10−17 | 3.23 × 10−7 | 9.19 × 10−7 | −3.42 × 10−6 | −0.00027 | 1911.867 | 1055.782 | 2668.048 |

k1, k2: radial distortion; p1, p2: tangential distortion; a, b: non-square correction factors; x0, y0, f: interior orientation elements (principal point and focal length, in pixels).
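A minimal sketch of how these calibration values might be applied, assuming the common Brown distortion model (sign and ordering conventions for p1, p2 vary between photogrammetric toolchains, so treat this as illustrative rather than the paper's exact formulation):

```python
# Calibration values from the table above (Phantom 4 RTK).
K1, K2 = 1.06e-9, -9.60e-17        # radial distortion coefficients
P1, P2 = 3.23e-7, 9.19e-7          # tangential distortion coefficients
X0, Y0 = 1911.867, 1055.782        # principal point (pixels)

def distortion_correction(x, y):
    """Return the (dx, dy) Brown-model correction at pixel (x, y)."""
    xb, yb = x - X0, y - Y0                # offsets from the principal point
    r2 = xb * xb + yb * yb                 # squared radial distance (pixels^2)
    radial = K1 * r2 + K2 * r2 * r2        # k1*r^2 + k2*r^4
    dx = xb * radial + P1 * (r2 + 2 * xb * xb) + 2 * P2 * xb * yb
    dy = yb * radial + P2 * (r2 + 2 * yb * yb) + 2 * P1 * xb * yb
    return dx, dy
```

The correction vanishes at the principal point and grows to a few pixels near the image corners, which is the order of magnitude these coefficients imply for a 5472-pixel-wide frame.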
Parameter | Value |
---|---|
Weight decay | 0.0001 |
Learning rate (lr) | 0.001 |
Max iterations | 36,000 |
ims_per_batch | 4 |
batch_size_per_image | 128 |
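The parameter names above match Detectron2's solver and ROI-head options, so the setup can be sketched as a plain mapping (the Detectron2 key names are an assumption about the training framework, not something the paper states):

```python
# Hyperparameters from the table above, keyed by their likely Detectron2
# config names (framework and key mapping are assumptions).
train_cfg = {
    "SOLVER.WEIGHT_DECAY": 0.0001,
    "SOLVER.BASE_LR": 0.001,                      # lr
    "SOLVER.MAX_ITER": 36000,                     # max iterations
    "SOLVER.IMS_PER_BATCH": 4,                    # images per batch
    "MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE": 128,  # RoIs sampled per image
}
```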
Method | AP |
---|---|
Ours | 73.399 |
YOLACT++ | 71.081 |
U-Net | 71.352 |
MRCNN | 71.159 |
SE-MRCNN | 72.299 |
CBAM-MRCNN | 72.441 |
Num | P-num | X | Y | Z |
---|---|---|---|---|
1 | A | **7818.455 | **67,936.626 | 781.100 |
 | 1 | **7843.433 | **67,957.120 | 778.112 |
 | 2 | **7889.561 | **67,994.134 | 774.038 |
 | 3 | **7929.365 | **68,026.084 | 773.759 |
 | 4 | **7975.983 | **68,063.355 | 773.138 |
 | 5 | **8027.400 | **68,104.606 | 774.481 |
 | 6 | **8068.621 | **68,137.530 | 777.582 |
 | 7 | **8113.931 | **68,173.788 | 782.366 |
 | B | **8142.398 | **68,196.196 | 786.410 |
… | … | … | … | … |
10 | A | **1688.332 | **71,433.326 | 783.372 |
 | 1 | **1687.413 | **71,466.734 | 781.056 |
 | 2 | **1685.257 | **71,525.498 | 778.137 |
 | 3 | **1683.751 | **71,576.037 | 777.594 |
 | 4 | **1681.730 | **71,641.541 | 778.871 |
 | 5 | **1680.008 | **71,695.496 | 781.641 |
 | 6 | **1678.343 | **71,755.476 | 787.080 |
 | B | **1677.107 | **71,791.992 | 790.971 |
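The span-local plane coordinates used for curve fitting can be obtained by projecting each spacer center onto the vertical plane through the two suspension points A and B. A sketch using span 1 from the table above (masked leading digits and thousands separators dropped; this assumes the masked prefix is identical within a span, so coordinate differences are unaffected; small residuals against the published values may reflect the paper's exact projection method):

```python
import math

# Suspension points A, B and spacer 1 from span 1 in the table above.
A = (7818.455, 67936.626, 781.100)
B = (8142.398, 68196.196, 786.410)
P = (7843.433, 67957.120, 778.112)

def span_local(p, a, b):
    """Project p into the vertical plane through a-b.

    x: horizontal distance from a along the a-b direction;
    y: height of p above the straight chord a-b at that x.
    """
    span = math.hypot(b[0] - a[0], b[1] - a[1])   # horizontal span length
    x = ((p[0] - a[0]) * (b[0] - a[0]) +
         (p[1] - a[1]) * (b[1] - a[1])) / span    # projection onto the chord direction
    chord_z = a[2] + (b[2] - a[2]) * x / span     # chord height at x
    y = p[2] - chord_z
    return x, y

x1, y1 = span_local(P, A, B)   # close to (32.25, -3.40) as in the plane-coordinate table
```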
Num | P-num | x | y |
---|---|---|---|
1 | A | 0 | 0 |
 | 1 | 32.2547 | −3.4008 |
 | 2 | 91.3517 | −8.2311 |
 | 3 | 142.3831 | −9.9629 |
 | 4 | 202.0574 | −10.5473 |
 | 5 | 267.9887 | −10.0476 |
 | 6 | 320.7794 | −7.6216 |
 | 7 | 378.8667 | −3.5803 |
 | B | 415.1426 | 0 |
… | … | … | … |
10 | A | 0 | 0 |
 | 1 | 33.3502 | −3.0228 |
 | 2 | 92.0913 | −7.1864 |
 | 3 | 142.6291 | −8.7997 |
 | 4 | 208.1781 | −8.9105 |
 | 5 | 262.2067 | −7.2840 |
 | 6 | 322.3114 | −3.1166 |
 | B | 358.9227 | 0 |
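With each span reduced to plane coordinates (x along the chord, y the offset below it), the arc sag can be estimated by fitting a curve and taking the largest vertical offset from the chord. A least-squares parabola sketch on the span-1 values above (the paper's transmission line model may instead be a catenary, so this is only an approximation):

```python
import numpy as np

# Span-1 plane coordinates from the table above; the chord A-B lies on y = 0.
x = np.array([0.0, 32.2547, 91.3517, 142.3831, 202.0574,
              267.9887, 320.7794, 378.8667, 415.1426])
y = np.array([0.0, -3.4008, -8.2311, -9.9629, -10.5473,
              -10.0476, -7.6216, -3.5803, 0.0])

a, b, c = np.polyfit(x, y, 2)       # least-squares parabola y = a*x^2 + b*x + c
x_v = -b / (2 * a)                  # location of the lowest point
sag = -(a * x_v**2 + b * x_v + c)   # largest vertical offset from the chord y = 0
```

For span 1 this simple fit gives a sag slightly above 10 m, the same order as the measured value reported for that span; differences come from the fitted model and the full 3D geometry.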
Num | Calculated Values/m | True Value/m | Error/m | Error Rate/% |
---|---|---|---|---|
1 | 10.781 | 11.033 | −0.252 | −2.28 |
2 | 11.041 | 11.011 | 0.03 | 0.27 |
3 | 10.884 | 10.808 | 0.076 | 0.70 |
4 | 16.05 | 16.344 | −0.294 | −1.80 |
5 | 16.764 | 16.768 | −0.004 | −0.02 |
6 | 15.876 | 16.441 | −0.565 | −3.44 |
7 | 7.378 | 7.557 | −0.179 | −2.37 |
8 | 7.985 | 7.913 | 0.072 | 0.91 |
9 | 9.05 | 9.251 | −0.201 | −2.17 |
10 | 9.208 | 9.025 | 0.183 | 2.03 |
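The error columns follow the convention error = calculated − true, with the error rate expressed as a percentage of the true value; a quick check on rows 1 and 6 of the table above:

```python
# Calculated and true sag values (m) for rows 1 and 6 of the table above.
rows = [(10.781, 11.033), (15.876, 16.441)]

for calc, true in rows:
    error = calc - true            # signed error in metres
    rate = 100.0 * error / true    # error rate in percent of the true value
    print(round(error, 3), round(rate, 2))   # → -0.252 -2.28, then -0.565 -3.44
```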
Method | AP | AP50 | AP75 | APs | APm |
---|---|---|---|---|---|
Ours | 73.399 | 98.757 | 93.827 | 69.862 | 76.536 |
MRCNN | 71.159 | 98.208 | 94.325 | 68.751 | 75.594 |
CAB-MRCNN | 72.341 | 98.575 | 96.010 | 69.131 | 75.811 |
MHSA-MRCNN | 72.128 | 98.645 | 94.768 | 69.363 | 76.196 |
Method | AP | AP50 | AP75 | APs | APm |
---|---|---|---|---|---|
Ours | 73.399 | 98.757 | 93.827 | 69.862 | 76.536 |
MRCNN | 71.159 | 98.208 | 94.325 | 68.751 | 75.594 |
CBMH-MRCNN | 72.309 | 98.889 | 93.524 | 69.511 | 75.936 |
SEMH-MRCNN | 72.065 | 98.920 | 95.743 | 69.519 | 75.381 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Song, J.; Qian, J.; Liu, Z.; Jiao, Y.; Zhou, J.; Li, Y.; Chen, Y.; Guo, J.; Wang, Z. Research on Arc Sag Measurement Methods for Transmission Lines Based on Deep Learning and Photogrammetry Technology. Remote Sens. 2023, 15, 2533. https://doi.org/10.3390/rs15102533