Robust Pedestrian Detection and Intrusion Judgment in Coal Yard Hazard Areas via 3D LiDAR-Based Deep Learning
Abstract
1. Introduction
- A 3D point cloud object detection network, EFT-RCNN, is proposed. The network takes Voxel-RCNN as its baseline and introduces three key improvements to counter the interference that the cluttered backgrounds common in coal yards cause for pedestrian detection: (a) an EnhancedVFE module strengthens the extraction of geometric features from voxel data; (b) FocalConv is used to reconstruct the 3D backbone, concentrating feature learning on foreground regions and suppressing noise from the cluttered background; (c) TeBEVPooling optimizes generation of the bird's eye view (BEV) and improves the quality of feature fusion.
- A point–region hierarchical judgment method is proposed. The method analyzes the spatial relationship between pedestrians and the hazardous area progressively through a prejudge layer, a warning layer, and an alarm layer, avoiding the limitations of traditional single-step intrusion judgment and preventing accidents more effectively.
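The voxel pipeline that EnhancedVFE builds on can be illustrated with a minimal sketch. This is not the paper's module: it only shows the generic VoxelNet-style preprocessing that a voxel feature encoding (VFE) layer consumes — grouping raw LiDAR points into a voxel grid and augmenting each point with its offset from the voxel centroid. The grid size and per-voxel point cap are arbitrary illustrative values.

```python
import numpy as np

def voxelize(points, voxel_size=(0.16, 0.16, 4.0), max_pts=32):
    """Group raw LiDAR points (N, 4: x, y, z, intensity) into voxels.

    Returns a dict mapping voxel grid index -> (max_pts, 7) array, where
    each point is augmented with its offset from the voxel centroid, the
    basic geometric augmentation used by VoxelNet-style VFE layers.
    Underfilled voxels are zero-padded; overfilled ones are truncated.
    """
    coords = np.floor(points[:, :3] / np.array(voxel_size)).astype(np.int64)
    buckets = {}
    for pt, c in zip(points, map(tuple, coords)):
        buckets.setdefault(c, []).append(pt)
    out = {}
    for c, pts in buckets.items():
        pts = np.asarray(pts[:max_pts])
        centroid = pts[:, :3].mean(axis=0)
        feats = np.concatenate([pts, pts[:, :3] - centroid], axis=1)  # (n, 7)
        pad = np.zeros((max_pts - len(pts), feats.shape[1]))
        out[c] = np.vstack([feats, pad])
    return out
```

A learned VFE layer would then run a shared MLP plus pooling over each voxel's `(max_pts, 7)` tensor to produce one feature vector per voxel.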
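The hierarchical judgment can be sketched with an even-odd point-in-polygon test (the algorithm family the paper cites [42]) combined with distance-to-boundary thresholds. The prejudge/warning/alarm layer names follow the paper, but the 2 m and 4 m thresholds and the `judge` function are illustrative assumptions, not the authors' implementation.

```python
import math

def point_in_polygon(p, poly):
    """Even-odd rule: cast a ray toward +x and count edge crossings."""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def _dist_to_segment(p, a, b):
    """Shortest distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def judge(p, hazard_poly, warning_dist=2.0, prejudge_dist=4.0):
    """Three-level judgment: alarm inside the polygon, else grade by distance."""
    if point_in_polygon(p, hazard_poly):
        return "alarm"
    n = len(hazard_poly)
    d = min(_dist_to_segment(p, hazard_poly[i], hazard_poly[(i + 1) % n])
            for i in range(n))
    if d <= warning_dist:
        return "warning"
    if d <= prejudge_dist:
        return "prejudge"
    return "safe"
```

In practice the pedestrian position would be the footprint center of a detected 3D bounding box projected onto the ground plane.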
2. Methods
2.1. Offline Training and Online Detection
2.2. Pedestrian Object Detection Network EFT-RCNN
2.2.1. EFT-RCNN
2.2.2. EnhancedVFE
2.2.3. Reconstruction of 3D Backbone by FocalConv
2.2.4. Reconstruction of MAP to BEV by TeBEVPooling
2.3. Point–Region Hierarchical Judgment Method
3. Experiments and Results
3.1. Pedestrian Object Experiments Using a Public Dataset
3.1.1. Dataset
3.1.2. Experimental Setup and Evaluation Metric
3.1.3. Experimental Results of the Application of the EFT-RCNN Network to a Public Dataset
3.2. On-Site Pedestrian Intrusion Detection Experiments
3.2.1. On-Site Deployment
3.2.2. Results of Pedestrian Intrusion Detection in the Coal Yard Environment
3.2.3. Static Pedestrians Grading Judgment Results in the Coal Yard
4. Discussion and Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Huang, H.; Hu, H.; Xu, F.; Zhang, Z.; Tao, Y. Skeleton-based automatic assessment and prediction of intrusion risk in construction hazardous areas. Saf. Sci. 2023, 164, 106150. [Google Scholar] [CrossRef]
- Li, Q.; Yang, Y.; Yao, G.; Wei, F.; Li, R.; Zhu, M.; Hou, H. Classification and application of deep learning in construction engineering and management—A systematic literature review and future innovations. Case Stud. Constr. Mater. 2024, 21, e04051. [Google Scholar] [CrossRef]
- Mei, X.; Zhou, X.; Xu, F.; Zhang, Z. Human Intrusion Detection in Static Hazardous Areas at Construction Sites: Deep Learning–Based Method. J. Constr. Eng. Manag. 2023, 149, 04022142. [Google Scholar] [CrossRef]
- Tang, G.; Ni, J.; Zhao, Y.; Gu, Y.; Cao, W. A Survey of Object Detection for UAVs Based on Deep Learning. Remote Sens. 2024, 16, 149. [Google Scholar] [CrossRef]
- Duong, H.-T.; Le, V.-T.; Hoang, V.T. Deep Learning-Based Anomaly Detection in Video Surveillance: A Survey. Sensors 2023, 23, 5024. [Google Scholar] [CrossRef]
- Guo, P.; Shi, T.; Ma, Z.; Wang, J. Human intrusion detection for high-speed railway perimeter under all-weather condition. Railw. Sci. 2024, 3, 97–110. [Google Scholar] [CrossRef]
- Segireddy, S.; Koneru, S.V. Wireless IoT-based intrusion detection using LiDAR in the context of intelligent border surveillance system. In Proceedings of the Smart Innovation, Systems and Technologies (SIST), Singapore, 27 September 2020; pp. 455–463. [Google Scholar]
- Nan, Z.; Zhu, G.; Zhang, X.; Lin, X.; Yang, Y. Development of a High-Precision Lidar System and Improvement of Key Steps for Railway Obstacle Detection Algorithm. Remote Sens. 2024, 16, 1761. [Google Scholar] [CrossRef]
- Li, X.; Hu, Y.; Jie, Y.; Zhao, C.; Zhang, Z. Dual-Frequency Lidar for Compressed Sensing 3D Imaging Based on All-Phase Fast Fourier Transform. J. Opt. Photonics Res. 2023, 1, 74–81. [Google Scholar] [CrossRef]
- Shi, T.; Guo, P.; Wang, R.; Ma, Z.; Zhang, W.; Li, W.; Fu, H.; Hu, H. A Survey on Multi-Sensor Fusion Perimeter Intrusion Detection in High-Speed Railways. Sensors 2024, 24, 5463. [Google Scholar] [CrossRef] [PubMed]
- Li, X.; Xiao, Y.; Wang, B.; Ren, H.; Zhang, Y.; Ji, J. Automatic targetless LiDAR–camera calibration: A survey. Artif. Intell. Rev. 2022, 56, 9949–9987. [Google Scholar] [CrossRef]
- Wang, M.; Huang, R.; Xie, W.; Ma, Z.; Ma, S. Compression Approaches for LiDAR Point Clouds and Beyond: A Survey. ACM Trans. Multimedia Comput. Commun. Appl. 2025, 21, 1–31. [Google Scholar] [CrossRef]
- Wang, M.; Huang, R.; Liu, Y.; Li, Y.; Xie, W. suLPCC: A Novel LiDAR Point Cloud Compression Framework for Scene Understanding Tasks. IEEE Trans. Ind. Inform. 2025, 21, 3816–3827. [Google Scholar] [CrossRef]
- Gong, B.; Zhao, B.; Wang, Y.; Lin, C.; Liu, H. Lane Marking Detection Using Low-Channel Roadside LiDAR. IEEE Sens. J. 2023, 23, 14640–14649. [Google Scholar] [CrossRef]
- Zhang, Z.; Chen, P.; Huang, Y.; Dai, L.; Xu, F.; Hu, H. Railway obstacle intrusion warning mechanism integrating YOLO-based detection and risk assessment. J. Ind. Inf. Integr. 2024, 38, 100571. [Google Scholar] [CrossRef]
- Zhang, Z.; Yang, N.; Yang, Y. Autonomous navigation and collision prediction of port channel based on computer vision and lidar. Sci. Rep. 2024, 14, 11300. [Google Scholar] [CrossRef] [PubMed]
- Hu, K.; Chen, Z.; Kang, H.; Tang, Y. 3D vision technologies for a self-developed structural external crack damage recognition robot. Autom. Constr. 2024, 159, 105262. [Google Scholar] [CrossRef]
- Dong, Y.; Liu, Y.; He, B.; Li, L.; Li, J. Dynamic Object Detection and Instance Tracking Based on Spatiotemporal Sector Grids. IEEE/ASME Trans. Mechatron. 2025, 1–11. [Google Scholar] [CrossRef]
- Jin, X.; Yang, H.; He, X.; Liu, G.; Yan, Z.; Wang, Q. Robust LiDAR-Based Vehicle Detection for On-Road Autonomous Driving. Remote Sens. 2023, 15, 3160. [Google Scholar] [CrossRef]
- Chen, S.; Li, X.; Ma, S.; Wang, S.; Ren, X. DBSCAN-Based Dynamic Object Recognition and Semantic Information Entropy-Assisted Vehicle LiDAR Odometry. IEEE Trans. Instrum. Meas. 2025, 74, 8509013. [Google Scholar] [CrossRef]
- Zhou, Y.; Tuzel, O. VoxelNet: End-to-end learning for point cloud based 3D object detection. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4490–4499. [Google Scholar]
- Yan, Y.; Mao, Y.; Li, B. SECOND: Sparsely Embedded Convolutional Detection. Sensors 2018, 18, 3337. [Google Scholar] [CrossRef] [PubMed]
- Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. PointPillars: Fast encoders for object detection from point clouds. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 12689–12697. [Google Scholar]
- Shi, G.; Li, R.; Ma, C. PillarNet: Real-time and high-performance pillar-based 3D object detection. In Proceedings of the 17th European Conference on Computer Vision (ECCV), Tel Aviv, Israel, 23–27 October 2022; pp. 35–52. [Google Scholar]
- Chen, Y.; Liu, J.; Zhang, X.; Qi, X.; Jia, J. VoxelNeXt: Fully sparse VoxelNet for 3D object detection and tracking. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023; pp. 21674–21683. [Google Scholar]
- Shi, S.; Guo, C.; Jiang, L.; Wang, Z.; Shi, J.; Wang, X.; Li, H. PV-RCNN: Point-voxel feature set abstraction for 3D object detection. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 10529–10538. [Google Scholar]
- Shi, S.; Wang, Z.; Shi, J.; Wang, X.; Li, H. From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 2647–2664. [Google Scholar] [CrossRef]
- Shi, S.; Wang, X.; Li, H. PointRCNN: 3D object proposal generation and detection from point cloud. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 770–779. [Google Scholar]
- Deng, J.; Shi, S.; Li, P.; Zhou, W.; Zhang, Y.; Li, H. Voxel R-CNN: Towards high performance voxel-based 3D object detection. In Proceedings of the 35th AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 2–9 February 2021; pp. 1201–1209. [Google Scholar]
- Ye, Q.; Fang, Y.; Zheng, N. Performance evaluation of struck-by-accident alert systems for road work zone safety. Autom. Constr. 2024, 168, 105837. [Google Scholar] [CrossRef]
- Kulinan, A.S.; Park, M.; Aung, P.P.W.; Cha, G.; Park, S. Advancing construction site workforce safety monitoring through BIM and computer vision integration. Autom. Constr. 2023, 158, 105227. [Google Scholar] [CrossRef]
- Newaz, M.T.; Ershadi, M.; Jefferies, M.; Davis, P. A critical review of the feasibility of emerging technologies for improving safety behavior on construction sites. J. Saf. Res. 2024, 89, 269–287. [Google Scholar] [CrossRef]
- Miao, Y.; Tang, Y.; Alzahrani, B.A.; Barnawi, A.; Alafif, T.; Hu, L. Airborne LiDAR Assisted Obstacle Recognition and Intrusion Detection Towards Unmanned Aerial Vehicle: Architecture, Modeling and Evaluation. IEEE Trans. Intell. Transp. Syst. 2020, 22, 4531–4540. [Google Scholar] [CrossRef]
- Darwesh, A.; Wu, D.; Le, M.; Saripalli, S. Building a smart work zone using roadside LiDAR. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 2602–2609. [Google Scholar]
- Shi, H.; Zhao, J.; Mu, R. Design and implementation of laser radar-based railway foreign object intrusion detection system. In Proceedings of the 2023 5th International Conference on Electronics and Communication, Network and Computer Technology (ECNCT), Guangzhou, China, 18–20 August 2023; pp. 304–307. [Google Scholar]
- Wu, J.D.; Le, M.; Ullman, J.; Huang, T.; Darwesh, A.; Saripalli, S. Development of a Roadside LiDAR-Based Situational Awareness System for Work Zone Safety: Proof-of-Concept Study; (Report No. TTI 05-03); Office of the Secretary of Transportation (OST), U.S. Department of Transportation (US DOT): Washington, DC, USA, 2023.
- Heng, L.; Shuang, D.; Skitmore, M.; Qinghua, H.; Qin, Y. Intrusion warning and assessment method for site safety enhancement. Saf. Sci. 2016, 84, 97–107. [Google Scholar] [CrossRef]
- Ma, C.; Gou, S.; Li, P.; Yang, Y. Synergistic monitoring system via LiDAR and visual sensors for detecting wildlife intrusion. In Proceedings of the 2024 IEEE 19th Conference on Industrial Electronics and Applications (ICIEA), Kristiansand, Norway, 5–8 August 2024; pp. 1–6. [Google Scholar]
- Graham, B.; Engelcke, M.; Van Der Maaten, L. 3D semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 9224–9232. [Google Scholar]
- Chen, Y.; Li, Y.; Zhang, X.; Sun, J.; Jia, J. Focal sparse convolutional networks for 3D object detection. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5428–5437. [Google Scholar]
- Wu, H.; Wen, C.; Li, W.; Li, X.; Yang, R.; Wang, C. Transformation-equivariant 3D object detection for autonomous driving. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; pp. 2795–2802. [Google Scholar]
- Galetzka, M.; Glauner, P.O. A simple and correct even-odd algorithm for the point-in-polygon problem for complex polygons. arXiv 2012, arXiv:1207.3502. [Google Scholar]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The kitti vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
- Zhou, C.; Zhang, Y.; Chen, J.; Huang, D. OcTr: Octree-based transformer for 3D object detection. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 5166–5175. [Google Scholar]
- Hu, J.S.; Kuai, T.; Waslander, S.L. Point density-aware voxels for LiDAR 3D object detection. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 8469–8478. [Google Scholar]
- Xia, Q.; Ye, W.; Wu, H.; Zhao, S.; Xing, L.; Huang, X.; Deng, J.; Li, X.; Wen, C.; Wang, C. HINTED: Hard instance enhanced detector with mixed-density feature fusion for sparsely-supervised 3D object detection. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 15321–15330. [Google Scholar]
- Hesai Technology. QT128C2X Mechanical LiDAR User Manual. Available online: https://www.hesaitech.com/downloads/#qt128 (accessed on 13 August 2025).
Method | 3D AP | Pedestrians@0.5 AP (Easy) | Pedestrians@0.5 AP (Moderate) | Pedestrians@0.5 AP (Hard) | Pedestrians@0.25 AP (Easy) | Pedestrians@0.25 AP (Moderate) | Pedestrians@0.25 AP (Hard) | FPS
---|---|---|---|---|---|---|---|---
SECOND [22] | 57.87 | 51.84 | 45.57 | 40.81 | 74.24 | 70.08 | 66.65 | 36.10 |
PointPillars [23] | 55.58 | 49.62 | 43.51 | 38.50 | 71.40 | 67.15 | 63.28 | 58.14 |
PillarNet [24] | 57.29 | 47.29 | 41.57 | 37.69 | 77.07 | 71.80 | 68.31 | 55.56 |
PV-RCNN [26] | 61.50 | 56.56 | 49.35 | 44.82 | 77.58 | 71.77 | 68.94 | 29.24 |
Part-A2 [27] | 62.67 | 60.81 | 52.22 | 46.67 | 77.48 | 71.47 | 67.36 | 35.46 |
Voxel-RCNN [29] | 60.54 | 58.91 | 51.95 | 47.15 | 72.78 | 68.09 | 64.37 | 55.87 |
OcTr [44] | 62.92 | 59.67 | 52.43 | 46.83 | 78.29 | 71.76 | 68.54 | 32.62 |
PDV [45] | 64.59 | 60.92 | 54.06 | 48.25 | 79.74 | 74.16 | 70.40 | 16.18 |
HINTED [46] | 62.10 | 60.32 | 53.51 | 47.38 | 76.74 | 70.49 | 64.14 | 18.05 |
Ours | 64.93 | 63.50 | 54.31 | 49.43 | 79.51 | 72.97 | 69.83 | 28.56 |
Method | Improvement 1 | Improvement 2 | Improvement 3 | 3D AP | BEV AP | FPS
---|---|---|---|---|---|---
Voxel-RCNN | | | | 60.54 | 62.19 | 55.87
(a) | ✓ | | | 63.21 | 65.54 | 34.48
(b) | | ✓ | | 62.53 | 64.35 | 30.58
(c) | | | ✓ | 63.22 | 65.04 | 44.84
Ours | ✓ | ✓ | ✓ | 64.93 | 66.87 | 28.56
Parameter | Value
---|---
Range capability | 20 m @ 10% reflectivity
Point rate | 864,000 pts/s (single return)
Field of view | 360° (H) × 105° (V)
Angular resolution | 0.4° (H) × 0.4° (V)
Range accuracy | ±2 cm
Metric | Detection Method | Scenario A | Scenario B | Scenario C | Average Value
---|---|---|---|---|---
Precision (%) | Baseline | 79.6 | 72.9 | 70.8 | 74.4
 | PointPillars | 63.7 | 58.6 | 55.2 | 59.2
 | PDV | 92.4 | 89.3 | 86.5 | 89.4
 | Ours | 97.0 | 92.1 | 89.6 | 92.9
FPS | Baseline | 12.56 | 11.28 | 13.51 | 12.45
 | PointPillars | 18.32 | 16.40 | 16.83 | 17.18
 | PDV | 5.64 | 5.31 | 5.28 | 5.41
 | Ours | 7.42 | 7.13 | 7.36 | 7.30
Location ID | Warning Level | Ground Truth (m) | Predicted Value (m) | Error (m) |
---|---|---|---|---|
L1-01 | L1 | 4.50 | 4.45 | 0.05 |
L2-01 | L2 | 3.50 | 3.54 | 0.04 |
L2-02 | L2 | 3.00 | 3.07 | 0.07 |
L2-03 | L2 | 2.00 | 2.03 | 0.03 |
L3-01 | L3 | 0.50 | 0.62 | 0.08 |
L3-02 * | In the hazardous area | - | - | - |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Zhao, A.; Zhao, Y.; Zheng, Q. Robust Pedestrian Detection and Intrusion Judgment in Coal Yard Hazard Areas via 3D LiDAR-Based Deep Learning. Sensors 2025, 25, 5908. https://doi.org/10.3390/s25185908