Development of a High-Precision and Lightweight Detector and Dataset for Construction-Related Vehicles
Abstract
1. Introduction
- We propose a new high-precision and lightweight vehicle detector. The detector uses DenseNet121 as the backbone network, which greatly improves feature propagation and reuse. In addition, depth-wise separable convolution is introduced outside the backbone network to reduce the computational cost and the number of parameters, and the H-Swish activation function is employed to strengthen non-linear feature extraction.
- We propose a new image dataset comprising 8425 images across 13 categories of construction-related vehicles. The dataset alleviates the scarcity of publicly available image datasets for this class of vehicles.
- We performed a series of experiments with 17 popular state-of-the-art (SOTA) detection models on the proposed dataset. The results show that the proposed detector achieves higher detection accuracy with lower computational cost and fewer parameters than the other models.
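As background for the depth-wise separable convolution mentioned in the contributions, the sketch below compares the weight count of a standard convolution against its depth-wise separable factorization (illustrative helper names, not the authors' code):

```python
def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def dsc_params(c_in, c_out, k):
    """Depth-wise separable convolution: one k x k filter per input
    channel, followed by a 1 x 1 point-wise convolution."""
    return c_in * k * k + c_in * c_out

# Example: a 256 -> 256 channel layer with a 3 x 3 kernel.
standard = conv_params(256, 256, 3)    # 589,824 weights
separable = dsc_params(256, 256, 3)    # 67,840 weights
print(f"reduction: {standard / separable:.1f}x")
```

The roughly 8.7× reduction for a 3 × 3 kernel is why replacing standard convolutions in the neck and head shrinks both parameters and FLOPs.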
2. Related Work
2.1. Methods Based on Digital Image Processing and Machine Learning
2.2. Methods Based on Deep Learning
3. Methodology
3.1. The Basic Detection Framework
3.2. Improvement of the Backbone Feature Extraction Network
3.3. The Lightweight Neck and Head with DSC
3.4. The H-Swish Activation Function
- Like the ReLU function, it has no upper bound. This prevents gradient saturation, which would otherwise slow training considerably, so the function helps accelerate the training of detection models.
- It has a lower bound (the left half-axis of x gradually tends to 0), which produces a stronger regularization effect and helps prevent overfitting.
- It is non-monotonic: small negative values are preserved, which stabilizes the gradient flow in the network. Many commonly used activation functions discard negative values entirely, leaving the corresponding neurons unable to update.
- It is continuous everywhere and differentiable almost everywhere, which makes the network easier to train.
- In summary, the H-Swish activation function offers robust generalization capability and efficient optimization, and can significantly enhance the recognition accuracy of neural networks.
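The function itself is simple to state: H-Swish(x) = x · ReLU6(x + 3) / 6, where ReLU6(z) = min(max(z, 0), 6). A minimal plain-Python sketch of this piecewise-linear approximation of Swish:

```python
def relu6(z):
    """ReLU capped at 6: min(max(z, 0), 6)."""
    return min(max(z, 0.0), 6.0)

def h_swish(x):
    """H-Swish(x) = x * ReLU6(x + 3) / 6 -- a hardware-friendly
    approximation of Swish that avoids the sigmoid's exponential."""
    return x * relu6(x + 3.0) / 6.0

print(h_swish(6.0))   # 6.0 -- behaves like the identity for large x
print(h_swish(-4.0))  # tends to 0 on the negative half-axis
```

Because ReLU6 saturates at 6, the function matches x exactly for x ≥ 3 and is 0 for x ≤ -3, with a smooth-looking transition in between.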
3.5. The Proposed Vehicle Detector, YOLOC
4. Dataset
4.1. Data Collection
4.2. Data Pre-Processing
5. Experiment
5.1. Experimental Setting
5.2. Evaluation Metrics
6. Results and Analysis
6.1. Comparison with YOLOv4
6.2. Comparison with Other Detectors
6.3. Ablation Study
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Layer | Output Size | DenseNet121 |
---|---|---|
Convolution | 112 × 112 | 7 × 7 conv, stride 2 |
Pooling | 56 × 56 | 3 × 3 max pool, stride 2 |
Dense Block_1 | 56 × 56 | 6 × (1 × 1 conv, 3 × 3 conv) |
Transition Layer_1 | 56 × 56 | 1 × 1 conv |
 | 28 × 28 | 2 × 2 average pool, stride 2 |
Dense Block_2 | 28 × 28 | 12 × (1 × 1 conv, 3 × 3 conv) |
Transition Layer_2 | 28 × 28 | 1 × 1 conv |
 | 14 × 14 | 2 × 2 average pool, stride 2 |
Dense Block_3 | 14 × 14 | 24 × (1 × 1 conv, 3 × 3 conv) |
Transition Layer_3 | 14 × 14 | 1 × 1 conv |
 | 7 × 7 | 2 × 2 average pool, stride 2 |
Dense Block_4 | 7 × 7 | 16 × (1 × 1 conv, 3 × 3 conv) |
Classification Layer | 1 × 1 | 7 × 7 global average pool |
 | | 1000-D fully connected, softmax |
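The channel counts implied by this table can be reproduced from DenseNet121's growth rate k = 32 and compression factor 0.5 (each 1 × 1 transition conv halves the channel count). A small sketch of that bookkeeping:

```python
def densenet121_channels(stem=64, growth=32, blocks=(6, 12, 24, 16), compression=0.5):
    """Track the channel count after each dense block of DenseNet121."""
    c = stem
    trace = []
    for i, layers in enumerate(blocks):
        c += layers * growth          # each dense layer appends `growth` channels
        trace.append(c)
        if i < len(blocks) - 1:       # transitions follow all but the last block
            c = int(c * compression)  # 1 x 1 conv halves channels; pool halves spatial size
    return trace

print(densenet121_channels())  # [256, 512, 1024, 1024]
```

The dense connectivity is visible in the arithmetic: each block's output concatenates its input with every layer's new feature maps, rather than replacing them.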
Camera Parameter | Value |
---|---|
Product model number | iDS-2DY9437IX-A/SP |
Manufacturer | Hikvision |
Horizontal range | 360° |
Vertical range | (−20°, 90°) |
Horizontal preset point speed | 0.1°–210°/s |
Vertical preset point speed | 0.1°–150°/s |
Operating temperature | (−40 °C, 70 °C) |
Working humidity | <95% |
Weight | 8 kg |
Protection level | IP67 |
Index | Category Name | Description |
---|---|---|
1 | Big truck | Big truck refers to large freight vehicles used for long-distance transportation or carrying large amounts of cargo, usually with large loads and sizes. |
2 | Boxcar | Boxcar refers to trucks with an independent closed structure compartment used to carry goods. |
3 | Bulldozer | Bulldozer refers to self-propelled mechanical devices used to excavate, transport, and discharge soil with a dozer knife in front of the tractor. |
4 | Concrete truck | Concrete truck refers to special trucks used to mix and transport concrete. |
5 | Crane closed | Crane closed refers to the cranes with the hanger and electric hoist in the closed state, without any operation. |
6 | Crane | Crane refers to the multi-action lifting machinery equipment that carries and lifts heavy objects horizontally within a certain range, and this category represents the cranes with the hangers and electric hoists in the open state. |
7 | Digger | Digger refers to mechanical devices used for digging and loading and unloading materials. |
8 | Drill | Drill refers to the mechanical equipment with drilling tools used for core exploration and obtaining physical geological data. |
9 | Earth vehicle | Earth vehicle refers to standardized freight vehicles used to transport medium loads, often for a wide range of transport tasks. |
10 | Fuel tank | Fuel tank refers to enclosed vehicles used to store and transport fuel for vehicles, such as cars and aircraft. |
11 | Small truck | Small truck refers to small freight vehicles that transport small goods or equipment, often used for short distances, or transportation tasks that require access to tight spaces. |
12 | Tower | Tower refers to the lifting equipment used to bear the load of the boom rope and the balance arm rope. |
13 | Tractor | Tractor refers to the agricultural machinery and equipment used to pull and drive the working machinery to complete various mobile operations. |
GPU Server | Configuration Information |
---|---|
Architecture | x86_64 |
CPU op-mode(s) | 32-bit and 64-bit |
Model name | Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20 GHz |
GPU | NVIDIA GeForce GTX Titan X |
CUDA memory | 12 GB |
System | Ubuntu 18.04 |
Category | AP (YOLOC) | Precision (YOLOC) | Recall (YOLOC) | F1 Score (YOLOC) | AP (YOLOv4) | Precision (YOLOv4) | Recall (YOLOv4) | F1 Score (YOLOv4) |
---|---|---|---|---|---|---|---|---|
Big truck | 96.66% | 94.23% | 96.71% | 95.45% | 92.42% | 91.47% | 86.94% | 89.15% |
Boxcar | 99.70% | 98.11% | 100.00% | 99.05% | 98.41% | 97.34% | 97.86% | 97.60% |
Bulldozer | 99.89% | 97.87% | 97.87% | 97.87% | 98.32% | 94.12% | 97.39% | 95.73% |
Concrete truck | 99.78% | 97.87% | 97.87% | 97.87% | 99.37% | 98.00% | 94.23% | 96.08% |
Crane closed | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% |
Crane | 96.87% | 96.30% | 94.55% | 95.42% | 91.74% | 89.47% | 87.63% | 88.54% |
Digger | 99.90% | 100.00% | 96.92% | 98.44% | 96.67% | 94.71% | 93.23% | 93.96% |
Drill | 98.96% | 96.67% | 96.67% | 96.67% | 84.09% | 69.23% | 83.72% | 75.79% |
Earth vehicle | 94.08% | 87.50% | 88.89% | 88.19% | 91.47% | 92.94% | 84.95% | 88.77% |
Fuel tank | 100.00% | 100.00% | 100.00% | 100.00% | 99.63% | 93.10% | 96.43% | 94.74% |
Small truck | 94.27% | 86.67% | 95.12% | 90.70% | 83.85% | 77.33% | 87.88% | 82.27% |
Tower | 99.83% | 95.83% | 95.83% | 95.83% | 89.39% | 83.33% | 83.33% | 83.33% |
Tractor | 80.42% | 95.00% | 73.08% | 82.61% | 82.66% | 96.97% | 78.05% | 86.49% |
Total | 96.95% | 95.85% | 94.89% | 95.24% | 92.92% | 90.62% | 90.13% | 90.19% |
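For reference, the per-category metrics in these tables follow the standard detection-metric definitions; a minimal sketch with illustrative counts (not the paper's data):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 score from true positives,
    false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts only: 90 correct detections, 10 false alarms, 5 misses.
p, r, f1 = detection_metrics(tp=90, fp=10, fn=5)
print(f"P={p:.2%}  R={r:.2%}  F1={f1:.2%}")
```

AP then averages precision over the precision-recall curve for one category, and mAP averages AP over all 13 categories.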
Detector | mAP | Precision | Recall | F1 Score | Parameters | FLOPs |
---|---|---|---|---|---|---|
SSD | 94.73% | 94.03% | 92.38% | 93.14% | 25.22 MB | 117.65 G |
RetinaNet | 95.98% | 89.21% | 94.40% | 91.50% | 36.58 MB | 71.06 G |
Faster R-CNN | 93.59% | 73.34% | 95.38% | 82.14% | 136.94 MB | 252.88 G |
YOLOv5-L | 93.62% | 85.82% | 87.30% | 83.99% | 46.70 MB | 48.49 G |
YOLOv5-M | 94.57% | 89.89% | 93.00% | 90.83% | 21.11 MB | 21.44 G |
YOLOv5-S | 91.32% | 85.38% | 90.05% | 86.87% | 7.10 MB | 7.01 G |
YOLOv5-X | 93.62% | 92.80% | 89.03% | 90.60% | 87.33 MB | 92.13 G |
YOLOX | 84.64% | 80.68% | 74.41% | 76.40% | 99.01 MB | 119.19 G |
EfficientDet-D0 | 90.04% | 88.82% | 84.51% | 86.20% | 3.84 MB | 4.81 G |
EfficientDet-D1 | 90.64% | 80.61% | 86.61% | 81.56% | 6.56 MB | 11.65 G |
EfficientDet-D2 | 89.33% | 82.73% | 89.23% | 85.12% | 8.02 MB | 20.84 G |
EfficientDet-D3 | 91.07% | 85.34% | 86.76% | 85.76% | 11.92 MB | 47.48 G |
YOLOv3 | 94.28% | 93.15% | 91.22% | 91.98% | 61.59 MB | 65.68 G |
YOLOv4 | 92.92% | 90.62% | 90.13% | 90.19% | 64.00 MB | 60.00 G |
YOLOv7 | 94.89% | 94.17% | 93.51% | 93.69% | 37.26 MB | 44.50 G |
YOLOv8 | 95.39% | 91.26% | 92.45% | 91.58% | 11.17 MB | 28.82 G |
DETR | 93.59% | 92.48% | 91.13% | 91.67% | 36.74 MB | 31.93 G |
YOLOC | 96.95% | 95.85% | 94.89% | 95.24% | 16.08 MB | 26.09 G |
Detector | DenseNet121 | DSC | H-Swish | mAP | Precision | Recall | F1 Score |
---|---|---|---|---|---|---|---|
YOLOv4 | × | × | × | 92.92% | 90.62% | 90.13% | 90.19% |
YD | √ | × | × | 95.88% | 94.17% | 91.53% | 92.61% |
YM | × | √ | × | 93.59% | 92.48% | 91.13% | 91.67% |
YA | × | × | √ | 94.77% | 93.27% | 93.86% | 93.53% |
YDM | √ | √ | × | 96.06% | 93.69% | 94.59% | 94.07% |
YDA | √ | × | √ | 96.56% | 95.36% | 94.49% | 94.92% |
YMA | × | √ | √ | 95.50% | 94.56% | 94.30% | 94.38% |
YOLOC | √ | √ | √ | 96.95% | 95.85% | 94.89% | 95.24% |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, W.; Zhang, S.; Zhou, L.; Luo, N.; Xu, M. Development of a High-Precision and Lightweight Detector and Dataset for Construction-Related Vehicles. Electronics 2023, 12, 4996. https://doi.org/10.3390/electronics12244996