YOMO-Runwaynet: A Lightweight Fixed-Wing Aircraft Runway Detection Algorithm Combining YOLO and MobileRunwaynet
Abstract
1. Introduction
- One of the most significant challenges is the highly variable environmental conditions. Factors such as the changing weather, shadows, sun glare, and seasonal variations affect the visual appearance of the runway, complicating the detection process [22].
- Most current methods rely on traditional algorithms designed for a single runway scenario, which makes high-precision detection difficult across diverse runway scenarios.
- Existing algorithms struggle to meet the real-time operational requirements of embedded systems.
- A lightweight network structure following the YOMO inference framework is designed, combining the advantages of YOLOv10 and MobileNetV3 in feature extraction and operational speed.
- Firstly, a lightweight attention module is introduced into MnasNet, and the improved MobileNetV3 is used as the backbone network to enhance feature extraction efficiency. Then, PANet and SPPNet are incorporated to aggregate features from multiple effective feature layers. Finally, to reduce latency and improve efficiency, YOMO-Runwaynet generates a single optimal prediction for each object, eliminating the need for non-maximum suppression (NMS).
- Experiments conducted on the RK3588 embedded platform show that the proposed runway detection algorithm achieves an accuracy of 89.5%, with an inference speed of ≤40 ms per frame on a single-core NPU.
2. Methodology
2.1. YOLO ROI Object Detection
- Backbone: Responsible for feature extraction, the backbone network in YOLOv10 utilizes an enhanced version of CSPNet (cross-stage partial network) to improve gradient flow and reduce computational redundancy.
- Neck: It is designed to aggregate the features of different scales and pass them to the head, including the PAN (path aggregation network) layer for efficient multi-scale feature fusion.
- One-to-one head: During inference, it generates a single optimal prediction for each object, eliminating the need for non-maximum suppression (NMS), thus reducing latency and increasing efficiency.
- Consistent matching metric: During assignment, the one-to-one and one-to-many branches use the same unified metric to quantitatively evaluate the consistency between predictions and instances, enabling prediction-aware matching across both branches.
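The unified metric can be sketched in a few lines. The YOLOv10 paper defines it as m = s · p^α · IoU(b̂, b)^β, where p is the classification score, b̂ and b are the predicted and ground-truth boxes, and s is the spatial prior (whether the anchor point falls inside the instance). The function names and the default α, β values below are illustrative assumptions, not taken from this paper:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def matching_metric(inside, cls_score, pred_box, gt_box, alpha=0.5, beta=6.0):
    """m = s * p^alpha * IoU^beta. The same metric scores candidates for both the
    one-to-many (training) and one-to-one (inference) branches, so the one-to-one
    head learns to pick the same best sample as the richer one-to-many supervision.
    `inside` is the spatial prior s: whether the anchor point lies in the instance."""
    s = 1.0 if inside else 0.0
    return s * (cls_score ** alpha) * (iou(pred_box, gt_box) ** beta)
```

Because both branches rank candidates by the same m, the single prediction kept by the one-to-one head at inference coincides with the highest-quality sample seen during training, which is what lets NMS be dropped.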
2.2. Enhanced MobileNet
3. Architectural Design of Neural Networks
3.1. Architectural Design of YOMO-Runwaynet
- Backbone network: MobileNetV3 extracts three initial effective feature layers from the images.
- Enhanced feature extraction network: SPP and PANet fuse and refine the three initial effective feature layers into stronger multi-scale representations.
- Prediction network head: Uses the fused runway features to produce the final detection results.
3.2. Branch Calculation Method of YOMO-Runwaynet
- Backbone: Utilizes the improved MobileNetV3 for feature extraction.
- Neck: Employs SPP (spatial pyramid pooling) and PAN (path aggregation network) for feature aggregation.
- Head: Combines regression and classification to produce a single optimal prediction for each object during inference, eliminating the need for non-maximum suppression (NMS) to reduce latency and enhance efficiency.
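As a rough illustration of the SPP stage in the neck, the sketch below applies stride-1 max pooling at several kernel sizes to one feature map and concatenates the results channel-wise, which is the standard SPP pattern in YOLO-style networks. The kernel sizes and function names are assumptions, not taken from this paper; a real implementation would operate on batched tensors:

```python
def max_pool_same(fmap, k):
    """Stride-1 max pooling with 'same' output size over a 2D feature map.
    Out-of-range cells are skipped, which matches zero padding for the
    non-negative activations typical after ReLU-like nonlinearities."""
    h, w = len(fmap), len(fmap[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [
                fmap[i + di][j + dj]
                for di in range(-r, r + 1)
                for dj in range(-r, r + 1)
                if 0 <= i + di < h and 0 <= j + dj < w
            ]
            out[i][j] = max(window)
    return out


def spp(fmap, kernels=(5, 9, 13)):
    """Concatenate the input with its pooled variants along the channel axis,
    mixing local and large receptive fields without changing spatial size."""
    return [fmap] + [max_pool_same(fmap, k) for k in kernels]
```

The point of SPP here is that a long, thin runway benefits from context at several scales at once, and the pooled maps supply that context to PANet without altering the spatial resolution.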
4. Runway Geographic Information Point Extraction
4.1. Runway Keypoint Detection Method
4.2. Runway Landmark Wing Loss
5. Aerovista Runway Dataset
6. Testing and Verification
6.1. YOMO-Runwaynet Training and Testing
6.2. Geographic Beacon Point Detection Pixel Error
6.3. Algorithm Robustness Verification Results
6.4. Ablation Study and Performance of the Model
7. Conclusions
8. Patents
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Wang, Z.; Zhao, D.; Cao, Y. Visual Navigation Algorithm for Night Landing of Fixed-Wing Unmanned Aerial Vehicle. Aerospace 2022, 9, 615.
- Guo, M. Airport localization based on contextual knowledge complementarity in large scale remote sensing images. EAI Endorsed Trans. Scalable Inf. Syst. 2022, 9, e5.
- Yin, S.; Li, H.; Teng, L. Airport Detection Based on Improved Faster RCNN in Large Scale Remote Sensing Images. Sens. Imaging 2020, 21, 49.
- Wang, Q.; Feng, W.; Yao, L.; Zhuang, C.; Liu, B.; Chen, L. TPH-YOLOv5-Air: Airport Confusing Object Detection via Adaptively Spatial Feature Fusion. Remote Sens. 2023, 15, 3883.
- Li, H.; Kim, P.; Zhao, J.; Joo, K.; Cai, Z.; Liu, Z.; Liu, Y. Globally optimal and efficient vanishing point estimation in Atlanta world. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 153–169.
- Lin, Y.; Wiersma, R.; Pintea, S. Deep vanishing point detection: Geometric priors make dataset variations vanish. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 6103–6113.
- Zhang, L.; Cheng, Y.; Zhai, Z. Real-time Accurate Runway Detection based on Airborne Multi-sensors Fusion. Def. Sci. J. 2017, 67, 542–550.
- Xu, Y.; Cao, Y.; Zhang, Z. Monocular Vision Based Relative Localization For Fixed-wing Unmanned Aerial Vehicle Landing. Sensors 2022, 29, 1–14.
- Men, Z.C.; Jiang, J.; Guo, X.; Chen, L.J.; Liu, D.S. Airport runway semantic segmentation based on DCNN in high spatial resolution remote sensing images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 42, 361–366.
- Ding, W.; Wu, J. An airport knowledge-based method for accurate change analysis of airport runways in VHR remote sensing images. Remote Sens. 2020, 12, 3163.
- Chen, M.; Hu, Y. An image-based runway detection method for fixed-wing aircraft based on deep neural network. IET Image Process. 2024, 18, 1939–1949.
- Amit, R.A.; Mohan, C.K. A robust airport runway detection network based on R-CNN using remote sensing images. IEEE Aerosp. Electron. Syst. Mag. 2021, 36, 4–20.
- Hao, W.Y. Review on lane detection and related methods. Cogn. Robot. 2023, 3, 135–141.
- Zhou, S.; Jiang, Y.; Xi, J.; Gong, J.; Xiong, G.; Chen, H. A novel lane detection based on geometrical model and Gabor filter. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 59–64.
- Shen, Y.; Bi, Y.; Yang, Z.; Liu, D.; Liu, K.; Du, Y. Lane line detection and recognition based on dynamic ROI and modified firefly algorithm. Int. J. Intell. Robot. Appl. 2021, 5, 143–155.
- Wang, J.; Hong, W.; Gong, L. Lane detection algorithm based on density clustering and RANSAC. In Proceedings of the 2018 Chinese Control and Decision Conference (CCDC), Shenyang, China, 9–11 June 2018; pp. 919–924.
- Bhavadharini, R.M.; Sutha, J. A Robust Road Lane Detection Using Computer Vision Approach for Autonomous Vehicles. In Proceedings of the 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), Chennai, India, 18–19 April 2024; pp. 1–6.
- Wang, W. OpenCV-based Lane Line Detection Method for Mountain Curves. Acad. J. Sci. Technol. 2024, 10, 79–82.
- Kishor, S.; Nair, R.R.; Babu, T.; Sindhu, S.; Vilashini, S.V. Lane Detection for Autonomous Vehicles with Canny Edge Detection and General Filter Convolutional Neural Network. In Proceedings of the 2024 11th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 28 February–1 March 2024; pp. 1331–1336.
- Li, Z.; Lan, P.; Zhang, Q.; Yang, L.; Nie, Y. Lane Line Detection Network Based on Strong Feature Extraction from USFDNet. In Proceedings of the 2024 IEEE 4th International Conference on Power, Electronics and Computer Applications (ICPECA), Shenyang, China, 26–28 January 2024; pp. 229–234.
- Gong, X.; Abbott, L.; Fleming, G. A survey of techniques for detection and tracking of airport runways. In Proceedings of the 44th AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, USA, 9–12 January 2006; pp. 1436–1449.
- Zhao, Y.; Chen, D.; Gong, J. A Multi-Feature Fusion-Based Method for Crater Extraction of Airport Runways in Remote-Sensing Images. Remote Sens. 2024, 16, 573.
- Luo, Q.; Chen, J.; Zhang, X.; Zhang, T. Multi-scale target detection for airfield visual navigation of taxiing aircraft. In Proceedings of the 2024 4th International Conference on Neural Networks, Information and Communication (NNICE), Guangzhou, China, 19–21 January 2024; pp. 749–753.
- Zakaria, N.J.; Shapiai, M.I.; Abd Ghani, R.; Yassin, M.N.M.; Ibrahim, M.Z.; Wahid, N. Lane detection in autonomous vehicles: A systematic review. IEEE Access 2023, 11, 3729–3765.
- Haris, M.; Hou, J.; Wang, X. Lane line detection and departure estimation in a complex environment by using an asymmetric kernel convolution algorithm. Vis. Comput. 2023, 39, 519–538.
- Dai, J.; Wu, L.; Wang, P. Overview of UAV target detection algorithms based on deep learning. In Proceedings of the 2021 IEEE 2nd International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA), Chongqing, China, 17–19 December 2021; Volume 2, pp. 736–745.
- Li, N.; Liang, C.; Huang, L.Y.; Chen, J.; Min, J.; Duan, Z.X.; Li, J.; Li, M.C. Framework for Unknown Airport Detection in Broad Areas Supported by Deep Learning and Geographic Analysis. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6328–6338.
- Boukabou, I.; Kaabouch, N. Electric and magnetic fields analysis of the safety distance for UAV inspection around extra-high voltage transmission lines. Drones 2024, 8, 47.
- Wang, C.-Y.; Yeh, I.-H.; Liao, H.-Y. YOLOv9: Learning what you want to learn using programmable gradient information. arXiv 2024, arXiv:2402.13616.
- Li, C.; Li, L.; Geng, Y.; Jiang, H.; Cheng, M.; Zhang, B.; Ke, Z.; Xu, X.; Chu, X. YOLOv6 v3.0: A full-scale reloading. arXiv 2023, arXiv:2301.05586.
- Niu, S.; Nie, Z.; Li, G.; Zhu, W. Early Drought Detection in Maize Using UAV Images and YOLOv8+. Drones 2024, 8, 170.
- Hosang, J.; Benenson, R.; Schiele, B. Learning Non-maximum Suppression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6469–6477.
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324.
- Prasad, S.B.R.; Chandana, B.S. MobileNetV3: A deep learning technique for human face expressions identification. Int. J. Inf. Technol. 2023, 15, 3229–3243.
- Cao, Z.; Li, J.; Fang, L.; Yang, H.; Dong, G. Research on efficient classification algorithm for coal and gangue based on improved MobileNetV3-small. Int. J. Coal Prep. Util. 2024, 1–26.
- Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856.
Loss Function | Characteristic | Applicable Scenario
---|---|---
L1 | Insensitive to outliers | Suitable for data with substantial noise
L2 | Sensitive to outliers | Suitable for data with little noise
Wing Loss | Balances large and small errors | Suitable for varied scenarios, especially when outliers and noise coexist
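The Wing loss row above follows the piecewise definition of Feng et al. (2018): w·ln(1 + |x|/ε) for small errors and |x| − C beyond the threshold w, with C = w − w·ln(1 + w/ε) chosen so the two pieces meet continuously at |x| = w. A minimal sketch; the defaults w = 10, ε = 2 come from the original Wing loss paper and are not necessarily the values tuned here:

```python
import math


def wing_loss(error, w=10.0, epsilon=2.0):
    """Piecewise Wing loss: logarithmic near zero (amplifies the gradient of
    small keypoint errors), linear for large errors (limits outlier influence)."""
    x = abs(error)
    # C shifts the linear branch so both pieces join continuously at |x| = w.
    c = w - w * math.log(1.0 + w / epsilon)
    if x < w:
        return w * math.log(1.0 + x / epsilon)
    return x - c
```

The logarithmic region is what makes this loss attractive for runway corner points: sub-pixel residuals still produce meaningful gradients, while a grossly mislabeled corner only contributes linearly, as the "outliers and noise coexist" entry in the table suggests.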
No. | Runway Data Type | Flight Altitude Range (ft) | Acquisition Frequency (Hz) | Number of Images | Camera Pitch (°)
---|---|---|---|---|---
1 | Infrared mountain runway | 100–500 | 25 | 1315 | 3
2 | Infrared farmland runway | 100–550 | 25 | 1403 | 5
3 | Sunny mountain runway | 100–500 | 30 | 1300 | 4.5
4 | Rain and snow mountain runway | 100–500 | 30 | 1458 | 4.5
5 | Fog mountain runway | 100–500 | 30 | 1458 | 4.5
6 | Cloudy and sunny urban runway | 100–650 | 20 | 1850 | 3.5
7 | Fixed-wing UAV mountain runway | 100–800 | 25 | 2263 | 0
8 | Aircraft carrier platform runway | 100–400 | 30 | 857 | 3.5
Type | Weather | Runway Detection Accuracy (mAP) | Average Error of Runway Corner Points (px) | RMSE of Runway Corner Points (px)
---|---|---|---|---
Visible light simulation of the mountain runway data | Sunny | 93.5% | −0.8359 | 1.6 |
Visible light simulation of the mountain runway data | Rainy | 89.2% | −0.924 | 2.13 |
Visible light simulation of the mountain runway data | Snowy | 88.7% | 1.05 | 1.97 |
Visible light simulation of the mountain runway data | Foggy | 87.6% | 1.193 | 2.06 |
Noisy visible light urban runway data | Sunny | 88.2% | 0.759 | 1.53 |
Noisy visible light data for urban runway | Sunny | 86.3% | 2.32 | 2.52 |
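For reference, the two corner-point statistics reported above can be computed from signed per-corner pixel offsets as follows. The helper name is illustrative, and the signed (predicted minus ground truth) convention is an assumption consistent with the negative mean values in the table:

```python
import math


def pixel_error_stats(errors):
    """Mean signed error and RMSE over signed per-corner pixel offsets
    (predicted minus ground-truth coordinate). The signed mean exposes
    systematic bias (hence the negative table entries), while the RMSE
    captures the overall spread of the detections."""
    n = len(errors)
    mean_error = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return mean_error, rmse
```

Note that a small signed mean with a larger RMSE, as in several table rows, indicates errors that mostly cancel in direction but still scatter by a couple of pixels per corner.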
Network | FLOPs (M) | Params (M) | Latency on PC (ms) | Latency on RK3588 (ms) | APval (%)
---|---|---|---|---|---
YOMO-Runwaynet-n | 211.1 | 2.32 | 8 | 11 | 89.5
YOMO-Runwaynet-s | 575.1 | 2.675 | 13 | 18 | 91.6
YOMO-Runwaynet-l | 4160.7 | 2.86 | 21 | 25 | 95.0
YOLOv10-n [29] | 6700 | 2.3 | 14 | 16 | 86.0
ShuffleNet [36] | 723 | 2.95 | 45 | 58 | 82.0
MobileNetV3-s [33] | 66 | 2.9 | 10 | 14 | 67.4
MobileNetV3-n [33] | 219 | 5.4 | 15 | 19 | 75.2
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Dai, W.; Zhai, Z.; Wang, D.; Zu, Z.; Shen, S.; Lv, X.; Lu, S.; Wang, L. YOMO-Runwaynet: A Lightweight Fixed-Wing Aircraft Runway Detection Algorithm Combining YOLO and MobileRunwaynet. Drones 2024, 8, 330. https://doi.org/10.3390/drones8070330