A Low-Cost and Lightweight Real-Time Object-Detection Method Based on UAV Remote Sensing in Transportation Systems
Abstract
1. Introduction
- The limited memory of embedded devices leads to frequent memory accesses and data transfers, which considerably slow inference and degrade the real-time perception capability of UAV systems. Current models are not optimized for these small-memory hardware constraints.
- An in-depth analysis of the FasterNet backbone network is conducted, resulting in the reduction of feature-processing dimensionality and the introduction of the FasterNet-16 backbone network. This enhancement enables faster and more efficient feature extraction in target detection.
- Multi-scale feature fusion in the neck module is evaluated, and a strategy built on multiple PConv convolutions is developed. This approach ensures effective feature utilization while keeping the structural footprint of the target-detection network minimal (a sketch of the PConv operator follows this list).
- Extensive experiments are conducted on various datasets and three different embedded platforms. Compared with state-of-the-art methods, the proposed model improves both detection precision and inference speed on the Rockchip RK3566 embedded platform.
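Both the FasterNet-16 backbone and the neck described above build on the PConv (partial convolution) operator of FasterNet [34]. The following is a minimal PyTorch sketch of that operator; the split ratio n_div = 4 and the 3 × 3 kernel are FasterNet defaults assumed for illustration, not necessarily the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial convolution (PConv) from FasterNet [34]: apply a regular
    convolution to only 1/n_div of the channels and pass the rest through
    untouched, which cuts FLOPs and, importantly, memory accesses."""

    def __init__(self, channels: int, n_div: int = 4, kernel_size: int = 3):
        super().__init__()
        self.conv_channels = channels // n_div              # channels that are convolved
        self.pass_channels = channels - self.conv_channels  # channels passed through
        self.partial_conv = nn.Conv2d(
            self.conv_channels, self.conv_channels, kernel_size,
            stride=1, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split along the channel dimension, convolve one slice, re-concatenate.
        x1, x2 = torch.split(x, [self.conv_channels, self.pass_channels], dim=1)
        return torch.cat((self.partial_conv(x1), x2), dim=1)

# Example: a 64-channel feature map; only 16 channels go through the 3x3 conv.
feat = torch.randn(1, 64, 40, 40)
assert PConv(64)(feat).shape == feat.shape
```

Because only a fraction of the channels is convolved, PConv reduces both arithmetic and, crucially for the small-memory devices targeted here, the volume of feature-map traffic per layer.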
2. Related Work
2.1. Applications of UAV Remote-Sensing Technology in Transportation Systems
2.2. Lightweight Object-Detection Methods for Embedded Devices in UAVs
2.3. Optimization Strategies for Lightweight Model Performance
3. Design of a Memory-Efficient Lightweight Object-Detection Model
3.1. Memory Consumption Analysis
3.2. Design of a Lightweight Backbone Network Based on FasterNet
3.3. Multi-Scale Feature-Fusion Neck Module Based on PConv
3.4. Detection Head and Loss Functions
1. Classification loss. The classification loss is computed as follows:
2. Bounding box regression loss. The bounding box regression loss is defined using the smooth L1 loss function:
3. Objectness loss. The objectness loss, assessing the likelihood of an object's presence, is calculated by:
4. Combined loss function. The overall loss, combining the three individual components, is given by:
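The equations themselves are not reproduced above. A standard formulation consistent with descriptions 1 through 4, assuming binary cross-entropy for the classification and objectness terms and illustrative weights λ_cls, λ_box, λ_obj (only the smooth L1 form is named in the text), would be:

```latex
% Hedged reconstruction: a standard YOLO-style loss; the symbols and the
% weights lambda_* are illustrative, not taken from the paper.
\begin{align}
L_{\mathrm{cls}} &= -\sum_{i \in \mathcal{P}} \sum_{c=1}^{C}
  \bigl[ y_{i,c} \log \hat{p}_{i,c} + (1 - y_{i,c}) \log (1 - \hat{p}_{i,c}) \bigr], \\
L_{\mathrm{box}} &= \sum_{i \in \mathcal{P}} \sum_{m \in \{x,y,w,h\}}
  \operatorname{smooth}_{L1}\!\bigl(t_i^{m} - \hat{t}_i^{m}\bigr),
  \quad
  \operatorname{smooth}_{L1}(d) =
  \begin{cases}
    0.5\, d^{2}, & |d| < 1, \\
    |d| - 0.5, & \text{otherwise},
  \end{cases} \\
L_{\mathrm{obj}} &= -\sum_{i}
  \bigl[ o_i \log \hat{o}_i + (1 - o_i) \log (1 - \hat{o}_i) \bigr], \\
L &= \lambda_{\mathrm{cls}} L_{\mathrm{cls}}
   + \lambda_{\mathrm{box}} L_{\mathrm{box}}
   + \lambda_{\mathrm{obj}} L_{\mathrm{obj}},
\end{align}
```

where \(\mathcal{P}\) is the set of positive (object-matched) predictions, \(y_{i,c}\) and \(\hat{p}_{i,c}\) are the target and predicted class probabilities, \(t_i^m\) and \(\hat{t}_i^m\) the target and predicted box offsets, and \(o_i\), \(\hat{o}_i\) the target and predicted objectness scores.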
4. Experimental Results
4.1. Embedded Devices
- AllWinner H618: Engineered by Zhuhai AllWinner Technology, the H618 is an embedded processor designed for security surveillance and smart Internet of Things (IoT) connectivity applications [46,47,48]. It uses the ARM Cortex-A53 architecture with four processor cores at a peak frequency of 1.5 GHz. The H618 development board used in this study is equipped with 1 GB of memory.
4.2. Dataset
- VisDrone Dataset [49]: The VisDrone dataset, used for the experiments, was developed by the AISKYEYE team at Tianjin University, China. It consists of 288 video clips with 261,908 frames and 10,209 static images, captured across 14 cities in China under varying conditions using different drone models. The dataset includes over 2.6 million bounding boxes annotating diverse objects such as pedestrians, vehicles, and bicycles in urban and rural settings. Additional attributes such as scene visibility, object class, and occlusion are provided to support robust evaluation of object-detection algorithms.
- COCO Dataset [50]: The COCO dataset offers a rich diversity of 80 common object categories. Its images span a wide range of scenes and environments, including indoor and outdoor settings and urban and rural areas, which enhances a model's ability to generalize across contexts. Compared with the VisDrone dataset, COCO contains more object categories and larger targets, providing a complementary perspective for evaluating the detection model.
- Traffic-Net Dataset [51]: The Traffic-Net dataset contains 4400 images across four classes (accident, dense traffic, fire, and sparse traffic), with 900 training and 200 test images per class. Designed for training machine-learning models for real-time traffic-condition detection and monitoring, Traffic-Net enables systems to perceive and respond to diverse traffic scenarios. Applying the proposed model to this dataset evaluates its effectiveness in UAV-based traffic management.
4.3. Evaluation Metrics
4.4. Comparison Methods
4.5. Methods Evaluation
4.6. Comprehensive Performance Evaluation
4.7. UAV Application in Transportation Analysis
5. Discussion
6. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Xiao, T.; Chen, C.; Pei, Q.; Jiang, Z.; Xu, S. SFO: An adaptive task scheduling based on incentive fleet formation and metrizable resource orchestration for autonomous vehicle platooning. IEEE Trans. Mob. Comput. 2023, 23, 7695–7713. [Google Scholar] [CrossRef]
- Sánchez-Montero, M.; Toscano-Moreno, M.; Bravo-Arrabal, J.; Serón Barba, J.; Vera-Ortega, P.; Vázquez-Martín, R.; Fernandez-Lozano, J.J.; Mandow, A.; García-Cerezo, A. Remote planning and operation of a UGV through ROS and commercial mobile networks. In Proceedings of the Iberian Robotics Conference, Zaragoza, Spain, 23–25 November 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 271–282. [Google Scholar]
- Liu, Z.; Chen, C.; Wang, Z.; Cong, L.; Lu, H.; Pei, Q.; Wan, S. Automated Vehicle Platooning: A Two-Stage Approach Based on Vehicle-Road Cooperation. IEEE Trans. Intell. Veh. 2024; early access. [Google Scholar] [CrossRef]
- Lyu, X.; Li, X.; Dang, D.; Dou, H.; Wang, K.; Lou, A. Unmanned aerial vehicle (UAV) remote sensing in grassland ecosystem monitoring: A systematic review. Remote Sens. 2022, 14, 1096. [Google Scholar] [CrossRef]
- Butilă, E.V.; Boboc, R.G. Urban traffic monitoring and analysis using unmanned aerial vehicles (UAVs): A systematic literature review. Remote Sens. 2022, 14, 620. [Google Scholar] [CrossRef]
- Liu, Q.; Li, Z.; Yuan, S.; Zhu, Y.; Li, X. Review on vehicle detection technology for unmanned ground vehicles. Sensors 2021, 21, 1354. [Google Scholar] [CrossRef]
- Yao, H.; Qin, R.; Chen, X. Unmanned aerial vehicle for remote sensing applications—A review. Remote Sens. 2019, 11, 1443. [Google Scholar] [CrossRef]
- Zhu, J.; Sun, K.; Jia, S.; Li, Q.; Hou, X.; Lin, W.; Liu, B.; Qiu, G. Urban traffic density estimation based on ultrahigh-resolution UAV video and deep neural network. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2018, 11, 4968–4981. [Google Scholar] [CrossRef]
- Nousi, P.; Mademlis, I.; Karakostas, I.; Tefas, A.; Pitas, I. Embedded UAV real-time visual object detection and tracking. In Proceedings of the 2019 IEEE International Conference on Real-time Computing and Robotics (RCAR), Irkutsk, Russia, 4–9 August 2019; pp. 708–713. [Google Scholar]
- Luo, X.; Wu, Y.; Wang, F. Target detection method of UAV aerial imagery based on improved YOLOv5. Remote Sens. 2022, 14, 5063. [Google Scholar] [CrossRef]
- Niu, K.; Yan, Y. A Small-Object-Detection Model Based on Improved YOLOv8 for UAV Aerial Images. In Proceedings of the 2023 2nd International Conference on Artificial Intelligence and Intelligent Information Processing (AIIIP), Hangzhou, China, 27–29 October 2023; pp. 57–60. [Google Scholar]
- Zhou, H.; Ma, A.; Niu, Y.; Ma, Z. Small-object detection for UAV-based images using a distance metric method. Drones 2022, 6, 308. [Google Scholar] [CrossRef]
- Tang, G.; Ni, J.; Zhao, Y.; Gu, Y.; Cao, W. A survey of object detection for UAVs based on deep learning. Remote Sens. 2024, 16, 149. [Google Scholar] [CrossRef]
- Amarasingam, N.; Salgadoe, A.S.A.; Powell, K.; Gonzalez, L.F.; Natarajan, S. A review of UAV platforms, sensors, and applications for monitoring of sugarcane crops. Remote Sens. Appl. Soc. Environ. 2022, 26, 100712. [Google Scholar] [CrossRef]
- Min, X.; Zhou, W.; Hu, R.; Wu, Y.; Pang, Y.; Yi, J. LWUAVDet: A lightweight UAV object detection network on edge devices. IEEE Internet Things J. 2024, 11, 24013–24023. [Google Scholar] [CrossRef]
- Wu, W.; Liu, A.; Hu, J.; Mo, Y.; Xiang, S.; Duan, P.; Liang, Q. EUAVDet: An Efficient and Lightweight Object Detector for UAV Aerial Images with an Edge-Based Computing Platform. Drones 2024, 8, 261. [Google Scholar] [CrossRef]
- Ma, X. FastestDet: Ultra Lightweight Anchor-Free Real-Time Object Detection Algorithm. 2022. Available online: https://github.com/dog-qiuqiu/FastestDet (accessed on 14 July 2022).
- Ahmed, F.; Mohanta, J.; Keshari, A.; Yadav, P.S. Recent advances in unmanned aerial vehicles: A review. Arab. J. Sci. Eng. 2022, 47, 7963–7984. [Google Scholar] [CrossRef] [PubMed]
- Chen, C.; Wang, W.; Liu, Z.; Wang, Z.; Li, C.; Lu, H.; Pei, Q.; Wan, S. RLFN-VRA: Reinforcement Learning-based Flexible Numerology V2V Resource Allocation for 5G NR V2X Networks. IEEE Trans. Intell. Veh. 2024; early access. [Google Scholar] [CrossRef]
- Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Vehicle detection from UAV imagery with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6047–6067. [Google Scholar] [CrossRef]
- Zhang, Z.; Zhu, L. A review on unmanned aerial vehicle remote sensing: Platforms, sensors, data processing methods, and applications. Drones 2023, 7, 398. [Google Scholar] [CrossRef]
- Adade, R.; Aibinu, A.M.; Ekumah, B.; Asaana, J. Unmanned Aerial Vehicle (UAV) applications in coastal zone management—A review. Environ. Monit. Assess. 2021, 193, 154. [Google Scholar] [CrossRef]
- Radoglou-Grammatikis, P.; Sarigiannidis, P.; Lagkas, T.; Moscholios, I. A compilation of UAV applications for precision agriculture. Comput. Netw. 2020, 172, 107148. [Google Scholar] [CrossRef]
- Chen, N.; Li, Y.; Yang, Z.; Lu, Z.; Wang, S.; Wang, J. LODNU: Lightweight object detection network in UAV vision. J. Supercomput. 2023, 79, 10117–10138. [Google Scholar] [CrossRef]
- Hua, W.; Chen, Q.; Chen, W. A new lightweight network for efficient UAV object detection. Sci. Rep. 2024, 14, 13288. [Google Scholar] [CrossRef] [PubMed]
- Yue, M.; Zhang, L.; Huang, J.; Zhang, H. Lightweight and Efficient Tiny-Object Detection Based on Improved YOLOv8n for UAV Aerial Images. Drones 2024, 8, 276. [Google Scholar] [CrossRef]
- Fan, Q.; Li, Y.; Deveci, M.; Zhong, K.; Kadry, S. LUD-YOLO: A novel lightweight object detection network for unmanned aerial vehicle. Inf. Sci. 2024, 686, 121366. [Google Scholar] [CrossRef]
- Yang, Z.; Li, Y.; Wang, B.; Ding, S.; Jiang, P. A lightweight sea surface object detection network for unmanned surface vehicles. J. Mar. Sci. Eng. 2022, 10, 965. [Google Scholar] [CrossRef]
- Zhou, H.; Hu, F.; Juras, M.; Mehta, A.B.; Deng, Y. Real-time video streaming and control of cellular-connected UAV system: Prototype and performance evaluation. IEEE Wirel. Commun. Lett. 2021, 10, 1657–1661. [Google Scholar] [CrossRef]
- Ghazali, M.H.M.; Teoh, K.; Rahiman, W. A systematic review of real-time deployments of UAV-based LoRa communication network. IEEE Access 2021, 9, 124817–124830. [Google Scholar] [CrossRef]
- Cao, Z.; Kooistra, L.; Wang, W.; Guo, L.; Valente, J. Real-time object detection based on UAV remote sensing: A systematic literature review. Drones 2023, 7, 620. [Google Scholar] [CrossRef]
- Zhang, P.; Zhong, Y.; Li, X. SlimYOLOv3: Narrower, faster and better for real-time UAV applications. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019. [Google Scholar]
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
- Chen, J.; Kao, S.-H.; He, H.; Zhuo, W.; Wen, S.; Lee, C.-H.; Chan, S.-H.G. Run, don't walk: Chasing higher FLOPS for faster neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 12021–12031. [Google Scholar]
- Chen, C.; Si, J.; Li, H.; Han, W.; Kumar, N.; Berretti, S.; Wan, S. A High Stability Clustering Scheme for the Internet of Vehicles. IEEE Trans. Netw. Serv. Manag. 2024, 21, 4297–4311. [Google Scholar] [CrossRef]
- Ye, T.; Qin, W.; Zhao, Z.; Gao, X.; Deng, X.; Ouyang, Y. Real-time object detection network in UAV-vision based on CNN and transformer. IEEE Trans. Instrum. Meas. 2023, 72, 2505713. [Google Scholar] [CrossRef]
- Cheng, Q.; Wang, H.; Zhu, B.; Shi, Y.; Xie, B. A real-time UAV target detection algorithm based on edge computing. Drones 2023, 7, 95. [Google Scholar] [CrossRef]
- Kaiser, L.; Gomez, A.N.; Chollet, F. Depthwise separable convolutions for neural machine translation. arXiv 2017, arXiv:1706.03059. [Google Scholar]
- Hossain, S.M.M.; Deb, K.; Dhar, P.K.; Koshiba, T. Plant leaf disease recognition using depth-wise separable convolution-based models. Symmetry 2021, 13, 511. [Google Scholar] [CrossRef]
- Mittapalli, P.S.; Tagore, M.; Reddy, P.A.; Kande, G.B.; Reddy, Y.M. Deep learning based real-time object detection on Jetson Nano embedded GPU. In Microelectronics, Circuits and Systems: Select Proceedings of Micro2021; Springer: Berlin/Heidelberg, Germany, 2023; pp. 511–521. [Google Scholar]
- Süzen, A.A.; Duman, B.; Şen, B. Benchmark analysis of Jetson TX2, Jetson Nano and Raspberry Pi using deep CNN. In Proceedings of the 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey, 26–28 June 2020; pp. 1–5. [Google Scholar]
- Kareem, A.A.; Hammood, D.A.; Alchalaby, A.A.; Khamees, R.A. A performance of low-cost NVIDIA Jetson Nano embedded system in the real-time Siamese single object tracking: A comparison study. In Proceedings of the International Conference on Computing Science, Communication and Security, Gandhinagar, India, 6–7 February 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 296–310. [Google Scholar]
- Kim, J.; Kim, S. Hardware accelerators in embedded systems. In Artificial Intelligence and Hardware Accelerators; Springer: Berlin/Heidelberg, Germany, 2023; pp. 167–181. [Google Scholar]
- Liu, Q.; Wu, Z.; Wang, J.; Peng, W.; Li, C. Implementation and acceleration of driving behavior detection model based on Rockchip RV1126+ Yolov5s. In Proceedings of the 2024 International Conference on Power Electronics and Artificial Intelligence, Zhuhai, China, 23–25 August 2024; pp. 902–906. [Google Scholar]
- Pomšár, L.; Brecko, A.; Zolotová, I. Brief overview of Edge AI accelerators for energy-constrained edge. In Proceedings of the 2022 IEEE 20th Jubilee World Symposium on Applied Machine Intelligence and Informatics (SAMI), Poprad, Slovakia, 2–5 March 2022; pp. 461–466. [Google Scholar]
- Lee, J.K.; Jamieson, M.; Brown, N.; Jesus, R. Test-driving RISC-V Vector hardware for HPC. In Proceedings of the International Conference on High Performance Computing, Hamburg, Germany, 23–25 May 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 419–432. [Google Scholar]
- Shi, H. Research on the Application of the Internet of Things Based on A-IoT. In Proceedings of the 2022 6th International Seminar on Education, Management and Social Sciences (ISEMSS 2022), Chongqing, China, 15–17 July 2022; Atlantis Press: Dordrecht, The Netherlands, 2022; pp. 1496–1502. [Google Scholar]
- Brown, N.; Jamieson, M.; Lee, J.K. Experiences of running an HPC RISC-V testbed. arXiv 2023, arXiv:2305.00512. [Google Scholar]
- Du, D.; Zhu, P.; Wen, L.; Bian, X.; Lin, H.; Hu, Q.; Peng, T.; Zheng, J.; Wang, X.; Zhang, Y.; et al. VisDrone-DET2019: The vision meets drone object detection in image challenge results. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019. [Google Scholar]
- Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 740–755. [Google Scholar]
- Cao, F.; Chen, S.; Zhong, J.; Gao, Y. Traffic Condition Classification Model Based on Traffic-Net. Comput. Intell. Neurosci. 2023, 2023, 7812276. [Google Scholar] [CrossRef]
- Krichen, M.; Adoni, W.Y.H.; Mihoub, A.; Alzahrani, M.Y.; Nahhal, T. Security challenges for drone communications: Possible threats, attacks and countermeasures. In Proceedings of the 2022 2nd International Conference of Smart Systems and Emerging Technologies (SMARTTECH), Riyadh, Saudi Arabia, 9–11 May 2022; pp. 184–189. [Google Scholar]
- Ko, Y.; Kim, J.; Duguma, D.G.; Astillo, P.V.; You, I.; Pau, G. Drone secure communication protocol for future sensitive applications in military zone. Sensors 2021, 21, 2057. [Google Scholar] [CrossRef]
Layer Name | Output Feature Map Size | Layer Composition | # Channels / # Modules
---|---|---|---
Embedded layer | | Conv_4_16_4, BN | 16 channels
First group | | |
Merging layer | | Conv_2_32_2, BN | 32 channels
Second group | | |
Merging layer | | Conv_2_64_2, BN | 64 channels
Third group | | |
Merging layer | | Conv_2_128_2, BN | 128 channels
Fourth group | | |
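The Conv_k_c_s notation is read here, by assumption from FasterNet's embedding/merging convention [34], as a k × k convolution producing c output channels with stride s. Under that reading, a minimal PyTorch sketch of the backbone's downsampling path is as follows (the FasterNet block groups between these layers are omitted):

```python
import torch.nn as nn

def conv_bn(k: int, c_in: int, c_out: int, s: int) -> nn.Sequential:
    """Conv_k_cout_s from the table: a k x k convolution with c_out output
    channels and stride s, followed by BatchNorm (notation assumed to
    follow FasterNet's convention [34])."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=k, stride=s, bias=False),
        nn.BatchNorm2d(c_out),
    )

# Downsampling path of FasterNet-16 as listed in the table above.
embedding = conv_bn(4, 3, 16, 4)     # Conv_4_16_4: 1/4 resolution, 16 channels
merging_1 = conv_bn(2, 16, 32, 2)    # Conv_2_32_2: 1/8 resolution, 32 channels
merging_2 = conv_bn(2, 32, 64, 2)    # Conv_2_64_2: 1/16 resolution, 64 channels
merging_3 = conv_bn(2, 64, 128, 2)   # Conv_2_128_2: 1/32 resolution, 128 channels
```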
Device | Architecture | Max Frequency | Memory | Power Consumption | Compute (GFLOPS) | Bandwidth (MB/s) | Price (RMB)
---|---|---|---|---|---|---|---
Jetson Nano | Cortex-A57 × 4 | 1.5 GHz | 4 GB | 10 W | 43.449 | 3706.2 | 1299
Rockchip RK3566 | Cortex-A55 × 4 | 1.8 GHz | 2 GB | 5 W | 57.396 | 3080.8 | 299
AllWinner H618 | Cortex-A53 × 4 | 1.5 GHz | 1 GB | 5 W | 47.691 | 1323.3 | 99
Configuration | Parameters |
---|---|
CPU model | Intel Core i7-10700 |
Memory capacity | 32 GB |
GPU model | Nvidia RTX3090 |
GPU memory capacity | 24 GB |
Operating system | Ubuntu 22.04 |
CUDA version | 12.01 |
Training framework | PyTorch 1.13 |
Parameter | Setting |
---|---|
Optimizer | Adam |
Learning rate | 0.001 |
Learning rate decay strategy | Cosine |
Batch size | 64 |
CUDA version | 12.01 |
Training framework | PyTorch 1.13 |
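As a reproducibility aid, a minimal PyTorch 1.13 training skeleton matching these settings might look as follows. The model, data pipeline, and epoch count are placeholders, since only the optimizer, learning rate, cosine decay strategy, and batch size (64, set when building the data loader) are specified above:

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

# Placeholders: the detection network, the training loop body, and EPOCHS
# are not specified in the tables above and are assumptions of this sketch.
EPOCHS = 300

model = torch.nn.Conv2d(3, 16, 3)  # stand-in for the detection network
optimizer = Adam(model.parameters(), lr=0.001)       # Adam, lr = 0.001
scheduler = CosineAnnealingLR(optimizer, T_max=EPOCHS)

for epoch in range(EPOCHS):
    # ... per batch: forward pass, loss computation, loss.backward(),
    #     optimizer.step(), optimizer.zero_grad() ...
    scheduler.step()  # cosine learning-rate decay, stepped once per epoch
```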
Model | mAP | AP50 | Input Resolution | Inference Time | Model Parameters |
---|---|---|---|---|---|
YOLOv8s | 17.3% | 31.00% | 640 × 640 | 140.72 ms | 11.2 M
YOLO-FastestV2 | 10.1% | 20.05% | 320 × 320 | 19.86 ms | 0.25 M
FastestDet | 10.3% | 21.00% | 352 × 352 | 20.47 ms | 0.24 M
Proposed model | 10.8% | 21.50% | 320 × 320 | 12.13 ms | 0.4035 M
Proposed model | 11.5% | 23.05% | 352 × 352 | 15.27 ms | 0.4035 M
Embedded Device | Nvidia Jetson Nano | Rockchip RK3566 | AllWinner H618
---|---|---|---|
Architecture | Cortex-A57 × 4 | Cortex-A55 × 4 | Cortex-A53 × 4
Operation performance (GFLOPS) | 43.449 | 57.396 | 47.691 |
Cache bandwidth (MB/s) | 3706.2 | 3080.8 | 1323.3 |
YOLOv8s (single-core) | 506.74 ms | 353.46 ms | 832.76 ms |
YOLOv8s (dual-core) | 283.40 ms | 203.13 ms | 486.68 ms |
YOLOv8s (quad-core) | 174.82 ms | 140.72 ms | 356.32 ms |
FastestDet (single-core) | 34.16 ms | 30.06 ms | 58.00 ms |
FastestDet (dual-core) | 22.32 ms | 23.36 ms | 40.30 ms |
FastestDet (quad-core) | 15.93 ms | 20.47 ms | 37.63 ms |
Proposed model (320 × 320) (single-core) | 30.46 ms | 21.09 ms | 47.74 ms
Proposed model (320 × 320) (dual-core) | 16.13 ms | 14.66 ms | 29.60 ms |
Proposed model (320 × 320) (quad-core) | 13.86 ms | 12.13 ms | 24.66 ms |
Proposed model (352 × 352) (single-core) | 36.52 ms | 24.78 ms | 56.32 ms |
Proposed model (352 × 352) (dual-core) | 22.18 ms | 16.88 ms | 37.50 ms |
Proposed model (352 × 352) (quad-core) | 17.23 ms | 15.27 ms | 30.41 ms |
Model | Inference Time | mAP | EPF (J) | CPFPS (RMB) | CPI
---|---|---|---|---|---
YOLOv8s | 140.72 ms | 17.3% | 0.4925 | 42.06 | 6.20
YOLO-FastestV2 | 19.86 ms | 10.1% | 0.0695 | 5.94 | 21.49
FastestDet | 20.47 ms | 10.3% | 0.0716 | 6.12 | 20.95
Proposed model (320 × 320) | 12.13 ms | 10.8% | 0.0425 | 3.63 | 34.02
Proposed model (352 × 352) | 15.27 ms | 11.5% | 0.0534 | 4.57 | 27.61
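The derived columns are consistent with two simple definitions, inferred here from the numbers rather than stated in this extract: EPF as power draw times per-frame latency (about 3.5 W on the RK3566) and CPFPS as board price divided by throughput in FPS. The sketch below reproduces several table entries under those assumed definitions; CPI, the paper's cost-performance index, is not reconstructed because its exact formula is not recoverable from these columns alone.

```python
# Inferred definitions (assumptions, not stated in this extract): EPF is
# reproduced by power_draw * latency with power_draw ~= 3.5 W, and CPFPS
# by device_price / FPS using the RK3566 price of 299 RMB.
POWER_W = 3.5      # inferred effective power draw on the RK3566
PRICE_RMB = 299.0  # RK3566 board price from the device table

def energy_per_frame(latency_s: float) -> float:
    """EPF (J): energy consumed to process one frame."""
    return POWER_W * latency_s

def cost_per_fps(latency_s: float) -> float:
    """CPFPS (RMB): board price divided by achievable frames per second."""
    return PRICE_RMB * latency_s  # price / (1 / latency) == price * latency

print(round(energy_per_frame(0.14072), 4))  # 0.4925 -> matches the YOLOv8s row
print(round(cost_per_fps(0.02047), 2))      # 6.12   -> matches the FastestDet row
print(round(energy_per_frame(0.01213), 4))  # 0.0425 -> proposed model (320 x 320)
print(round(cost_per_fps(0.01213), 2))      # 3.63   -> proposed model (320 x 320)
```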