Article

Intelligent Detection of Hazardous Goods Vehicles and Determination of Risk Grade Based on Deep Learning

1
School of Artificial Intelligence, Wuchang University of Technology, Wuhan 430223, China
2
China Railway Wuhan Survey and Design Institute Co., Ltd., Building E5, Optics Valley Software Park, No. 1, Guanshan Avenue, Donghu High-Tech Zone, Wuhan 430050, China
3
School of Safety Science and Emergency Management, Wuhan University of Technology, Wuhan 430070, China
4
USTC iFLYTEK Co., Ltd., Hefei 230088, China
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(19), 7123; https://doi.org/10.3390/s22197123
Submission received: 15 July 2022 / Revised: 26 August 2022 / Accepted: 26 August 2022 / Published: 20 September 2022

Abstract: Currently, deep learning has been widely applied to object detection, and some scholars have applied it to vehicle detection. In this paper, the deep learning EfficientDet model is analyzed, and its advantages for the detection of hazardous goods vehicles are identified. An adaptive training model is built by optimizing the training process, and the trained model is used to detect hazardous goods vehicles. The detection results are compared with Cascade R-CNN and CenterNet, and the results show that the proposed method is superior to the other two methods in both computational complexity and detection accuracy. The proposed method is also suitable for detecting hazardous goods vehicles in different scenarios. We compile statistics on the number of hazardous goods vehicles detected at different times and places, and the risk grade of each location is determined from these statistics. Finally, a case study shows that the proposed method can be used to detect hazardous goods vehicles and determine the risk level of different places.

1. Introduction

The safe road transportation of hazardous goods bears on the safety of life and property of the country and its people, as well as on national economic development and social stability. Hazardous goods remain hazardous throughout the transportation process, from leaving the warehouse to arriving at the destination by vehicle. It is therefore particularly important to supervise the transportation of hazardous goods vehicles. This supervision covers vehicle travel dynamics, the frequency with which vehicles pass through a given place, and accidents. Dynamic supervision of vehicle travel is realized mainly through GPS positioning of the hazardous goods vehicle, while the frequency of passage through a given place and accident conditions can be obtained from the number of times a hazardous goods vehicle is detected by a camera and the duration of continuous detection. Environmental factors such as lighting conditions, partial occlusion, and cluttered backgrounds greatly affect the accuracy of hazardous goods vehicle detection. In order to improve the reliability and accuracy of such detection, scholars have carried out a great deal of research.
Vehicle detection methods mainly fall into image-based methods and deep learning-based methods. Image-based methods detect vehicle targets through image features such as histograms of oriented gradients. For example, Arthi R et al. [1] used feature transformations of the image for vehicle classification and detection. Matos F et al. [2] achieved vehicle detection by analyzing the edge features of vehicle images combined with principal component analysis. Although image-feature-based detection has low computational complexity and can detect vehicles quickly, it struggles where the vehicle is partially occluded or the illumination changes. In view of this, Iqbal U et al. [3] enhanced vehicle image features by fusing Sobel and SIFT features to realize vehicle detection. M.T. Pei et al. [4] used Sobel edge detection to detect vehicles in parking spaces and obtained detection scores for different types of vehicles. S. Ghaffarian [5] used a classifier based on fuzzy c-means clustering and hyperparameter optimization to detect vehicles and located vehicles in parking spaces based on the detection results. Because vehicle images have distinctive texture features, vehicle detection can also be realized from texture [6]. At present, the main disadvantage of detection methods based on texture and edge features is that they are strongly affected by illumination and by how much of the vehicle is visible.
With the continuous development of deep learning [7,8,9,10], more and more scholars have studied vehicle detection based on deep learning methods. X.J. Shen et al. [11] trained a convolutional neural network on vehicle images and applied the trained model to vehicle detection. X. Xiang et al. [12] proposed a vehicle detection method based on the Haar–AdaBoosting algorithm and a convolutional neural network. Tang T et al. [13] proposed a hyper region proposal network that can detect small vehicles photographed by distant cameras. In the process of vehicle detection, the vehicle is easily occluded by other objects. To detect vehicles under occlusion, Wang X et al. [14] introduced adversarial learning into the R-CNN detection process, which improves the accuracy of vehicle detection. The R-CNN algorithm offers high detection accuracy, but its detection speed is slow, making real-time vehicle detection difficult. To improve detection efficiency, Lu J et al. [15] applied YOLO-series algorithms to vehicle detection. SSD-based detection networks can detect vehicles quickly, but their accuracy is not especially high. To improve detection accuracy, Cao G et al. [16] integrated cascade and element modules into the SSD network to realize high-precision vehicle detection; however, integrating more modules reduces detection speed. At present, deep learning [17,18,19] has received extensive attention in the field of vehicle detection, and a two-time-scale discrete-time multi-agent system has been used to optimize multi-vehicle detection [20]. Therefore, this paper likewise adopts a deep learning method for vehicle detection.
Specifically, we optimize the training stage of the deep learning EfficientDet model and build a staged training model to realize fast and accurate vehicle detection.

2. Construction of Vehicle Detection Model

Firstly, an effective deep learning model is trained on hazardous goods vehicles, and the trained model is used to detect them. Deep learning-based target recognition networks mainly include Cascade R-CNN [21], SpineNet [22], and CenterNet [23,24]. In order to compare the performance of different deep learning networks, Table 1 shows the detection performance of several recognition network models. COCO mAP in Table 1 is the mean average precision used to measure detection performance; the larger the mAP value, the better the detection performance [25].
From Table 1, it is clear that the EfficientDet series has the best detection performance, followed by the Cascade R-CNN_ResNet model. The best-performing EfficientDet network is EfficientDet-D7x, whose recognition accuracy reaches 55.1 mAP, followed by EfficientDet-D3. The disadvantage of EfficientDet-D7x is that its complexity is too high to meet the demands of real-time vehicle detection. In view of this, this paper selects EfficientDet-D3 as the vehicle detection network.
In order to build a hazardous goods vehicle detection model optimized from EfficientDet-D3, the BiFPN depth $D_{BiFPN}$ is increased linearly and the BiFPN width $W_{BiFPN}$ is increased exponentially. The depth and width of BiFPN are given by Equation (1).
$$W_{BiFPN} = 64 \cdot \left(1.35^{\phi}\right), \qquad D_{BiFPN} = 3 + \phi \tag{1}$$
For the EfficientDet-D3 regression prediction network, the width of the prediction head is fixed to be the same as that of BiFPN ($W_{pred} = W_{BiFPN}$), and Equation (2) is used to increase its depth linearly.
$$D_{box} = D_{class} = 3 + \left\lfloor \phi / 3 \right\rfloor \tag{2}$$
In order to increase detection accuracy, the input image resolution needs to be increased. Since BiFPN uses feature levels 3–7 in the EfficientDet-D3 deep learning network, Equation (3) is used to increase the resolution linearly.
$$R_{input} = 512 + \phi \cdot 128 \tag{3}$$
Since $\phi = 3$ for EfficientDet-D3, the construction of the EfficientDet-D3 deep learning network model is completed. Figure 1 shows the EfficientDet-D3 network structure. We employed an ImageNet-pretrained EfficientNet as the backbone network. BiFPN serves as the feature network: it takes the level 3–7 features $P_3, P_4, P_5, P_6, P_7$ from the backbone and repeatedly applies top-down and bottom-up bidirectional feature fusion. The fused features are fed to the classification and box regression networks to generate the target category and the predicted bounding box; the weights of these two heads are shared across all feature levels.
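To make the scaling rules concrete, the following minimal sketch evaluates Equations (1)–(3) for a given compound coefficient $\phi$. The function name is illustrative and not part of any library API; note that the reference EfficientDet implementation additionally rounds the BiFPN width to a hardware-friendly value (160 channels for D3), whereas the raw formula gives 157.

```python
# A minimal sketch of the compound-scaling rules in Equations (1)-(3);
# names are illustrative, not part of any library API.

def efficientdet_scaling(phi: int) -> dict:
    """BiFPN width/depth, head depth, and input resolution for a given phi."""
    return {
        "w_bifpn": int(64 * (1.35 ** phi)),  # Eq. (1): width grows exponentially
        "d_bifpn": 3 + phi,                  # Eq. (1): depth grows linearly
        "d_box": 3 + phi // 3,               # Eq. (2): box/class head depth
        "r_input": 512 + phi * 128,          # Eq. (3): input resolution
    }

# EfficientDet-D3 corresponds to phi = 3:
print(efficientdet_scaling(3))
# {'w_bifpn': 157, 'd_bifpn': 6, 'd_box': 4, 'r_input': 896}
```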
Based on the EfficientDet-D3 deep learning network model, this paper establishes a hazardous goods vehicle detection model, as shown in Figure 2. In Figure 2, the EfficientDet backbone and BiFPN layers are first used to construct the deep learning network. Then, $D_{BiFPN}$ and $W_{BiFPN}$ are used for the classification of vehicles. The confidence loss of the detection model is given by Equation (4). Finally, the model is trained on hazardous goods vehicles and used to detect them.
$$L_{conf}(x, c) = -\sum_{i \in Pos}^{N} x_{ij}^{p} \log\left(\hat{c}_{i}^{p}\right) - \sum_{i \in Neg} \log\left(\hat{c}_{i}^{0}\right), \qquad \hat{c}_{i}^{p} = \frac{\exp\left(c_{i}^{p}\right)}{\sum_{p} \exp\left(c_{i}^{p}\right)} \tag{4}$$
where $x_{ij}^{p} \in \{1, 0\}$ indicates whether the $i$-th prior detection box matches the $j$-th ground-truth box of category $p$, and $c_{i}^{p}$ is the predicted confidence of category $p$ for the $i$-th prior box.
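For illustration, a minimal NumPy sketch of Equation (4) is given below. It assumes class index 0 is the background class, and all array names are placeholders rather than identifiers from the authors' code.

```python
import numpy as np

def confidence_loss(logits, matched_class, is_positive):
    """Softmax cross-entropy confidence loss of Equation (4).

    logits: (N, C) raw class scores c_i^p for N prior boxes and C classes.
    matched_class: (N,) index of the matched ground-truth category p.
    is_positive: (N,) bool mask of prior boxes matched to a ground truth.
    """
    # Numerically stable log-softmax: log c_hat_i^p
    z = logits - logits.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))

    # Positive boxes contribute -log c_hat_i^p for the matched category p;
    # negative boxes contribute -log c_hat_i^0 for the background class.
    pos_term = -log_softmax[is_positive, matched_class[is_positive]].sum()
    neg_term = -log_softmax[~is_positive, 0].sum()
    return pos_term + neg_term
```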

3. Experiment Analysis

3.1. Experiment Settings

Vehicle image data sets were collected in different scenes and at different times, as shown in Figure 3. The data set is divided into training, validation, and test sets: 2387 images for training, 211 for validation, and 146 for testing. A total of 2744 images were annotated with a labeling tool, and the annotation file generated for each image was converted into TensorFlow's unified TFRecord format by a Python script. The experimental server is equipped with an NVIDIA 3070 graphics card with 8 GB of video memory, 32 GB of RAM, and an Intel® Core™ i7-10700F CPU (Santa Clara, CA, USA). The operating system is Ubuntu 22.04, with CUDA 11.4 as the parallel computing framework.
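A minimal sketch of this conversion step is shown below. The feature keys follow the common TensorFlow Object Detection API convention; the exact keys and helper names used in the authors' script are not given in the paper and are assumptions here.

```python
import tensorflow as tf

def make_example(image_bytes, width, height, boxes, labels):
    """Build one tf.train.Example from an annotated image.

    boxes: list of (xmin, ymin, xmax, ymax) in pixels; labels: class ids.
    Coordinates are normalized to [0, 1], as the Object Detection API expects.
    """
    def _float(vals): return tf.train.Feature(float_list=tf.train.FloatList(value=vals))
    def _int64(vals): return tf.train.Feature(int64_list=tf.train.Int64List(value=vals))
    def _bytes(val):  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[val]))

    return tf.train.Example(features=tf.train.Features(feature={
        "image/encoded": _bytes(image_bytes),
        "image/object/bbox/xmin": _float([b[0] / width for b in boxes]),
        "image/object/bbox/ymin": _float([b[1] / height for b in boxes]),
        "image/object/bbox/xmax": _float([b[2] / width for b in boxes]),
        "image/object/bbox/ymax": _float([b[3] / height for b in boxes]),
        "image/object/class/label": _int64(labels),
    }))

with tf.io.TFRecordWriter("train.tfrecord") as writer:
    # writer.write(make_example(...).SerializeToString()) for each annotated image
    pass
```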
For vehicle detection, there are four possible detection outcomes, as listed in Table 2.
In Table 2, TP is the number of correctly detected vehicles; FP is the number of non-vehicles incorrectly detected as vehicles; FN is the number of vehicles that are missed; and TN is the number of correctly rejected non-vehicles. Based on these four outcomes, we use Precision, Recall, and F1-score to evaluate the performance of the model, defined as
$$\text{Precision} = \frac{TP}{TP + FP} \tag{5}$$
$$\text{Recall} = \frac{TP}{TP + FN} \tag{6}$$
$$\text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{7}$$
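These three measures follow directly from the confusion counts; the sketch below is a transcription of Equations (5)–(7), with example values taken from Table 6 later in the paper.

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Precision, Recall, and F1-score per Equations (5)-(7)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# The proposed method's counts from Table 6 (TP = 99, FP = 3, FN = 3)
# reproduce, up to rounding, the 97/97/97 row of Table 7:
print(detection_metrics(99, 3, 3))  # (0.9706, 0.9706, 0.9706)
```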

3.2. Training of the Model

According to the characteristics of the data set, the parameters in the model's configuration file are adjusted before training, including the number of classes, the batch size, the initial learning rate, and the data paths. The hazardous goods vehicle detection model is not affected by the vehicle model. At the beginning of each iteration, a batch of data is selected, and predictions are obtained through the deep learning algorithm [26,27,28,29].
According to the data set size and computer configuration, we first selected an initial batch size and then adjusted it according to the change in the loss value and the detection effect. The standard gradient descent algorithm was used to train the hazardous goods vehicle detection model: 2387 images were used for training and 211 images to verify the training effect. We used the TensorFlow-Slim module, which provides a simple but powerful way to define and train the model. During training, the loss function returns the value of the objective function at each iteration; the sum of the vehicle localization loss and the confidence loss is the index used to measure the performance of the model. To debug and optimize the training process, TensorFlow provides the visualization tool TensorBoard, which monitors and displays training by reading the recorded log files. The larger the batch size, the higher the memory utilization and the faster the data throughput; when training on a single NVIDIA GeForce RTX 2080Ti GPU (Santa Clara, CA, USA), we set the batch size to 16. The learning rate is one of the most important parameters affecting model performance: too large a learning rate leads to unstable training, while too small a learning rate leads to overfitting and slow convergence. In view of this, this paper uses the AdamW algorithm [30] to optimize the learning rate. During training, if the loss does not decrease for three consecutive cycles, the learning rate is reduced. The learning-rate parameters during training are adjusted according to the variation of the total loss value, as shown in Figure 4.
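A hedged tf.keras sketch of this optimizer and learning-rate policy follows. The authors worked with TF-Slim, so the calls below are the closest modern equivalents rather than their actual code; AdamW is available as tf.keras.optimizers.AdamW from TensorFlow 2.11 (earlier via tensorflow_addons), and the weight-decay value is an assumption.

```python
import tensorflow as tf

# Optimizer described in the text: AdamW with the initial learning rate 1e-3.
# The weight_decay value is an illustrative assumption.
optimizer = tf.keras.optimizers.AdamW(learning_rate=1e-3, weight_decay=4e-5)

# Drop the learning rate when the total loss fails to decrease for three
# consecutive epochs, mirroring the rule stated above.
plateau_cb = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="loss", factor=0.1, patience=3, verbose=1)

# These objects would then be passed to model.compile(...) and
# model.fit(..., batch_size=16, callbacks=[plateau_cb]) for the detector.
```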
According to Figure 4, the first stage ran from iteration 0 to 41,870 with an initial learning rate of 1e-3. As can be seen from Figure 5, the total loss value decreased rapidly to 0.26, indicating that the learning rate needed to be adjusted. The second stage ran from iteration 41,871 to 54,980 with the learning rate set to 1e-4 in order to determine whether further optimization was needed; Figure 5 shows that the total loss was reduced to 0.225 and leveled off around that value. Finally, from iteration 59,431 to the end, the learning rate was reduced to 1e-6, and the total loss remained smooth at 0.225. After all iterations, model training was complete: the total convergence time was about 37 h, the training score exceeded 99.5%, and the learning rate was adjusted three times in total, completing the construction of the deep learning vehicle detection model.
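Read off Figure 4, this staged schedule can be written as a piecewise-constant decay, as sketched below. The text does not state the rate used between iterations 54,981 and 59,430, so the intermediate 1e-5 value is inferred from the "adjusted three times" remark and is an assumption.

```python
import tensorflow as tf

# Staged learning-rate schedule from Figure 4 as a piecewise-constant decay.
# The 1e-5 stage between iterations 54,981 and 59,430 is an assumption.
lr_schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[41_870, 54_980, 59_430],
    values=[1e-3, 1e-4, 1e-5, 1e-6],
)

print(lr_schedule(42_000).numpy())  # 1e-4: second stage
```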

3.3. Ablation Experiments

In order to verify how the proposed modifications affect the accuracy and speed of the EfficientDet-D3 model, an ablation experiment was designed. We used the improved EfficientDet-D3 and the original EfficientDet-D3 to detect hazardous goods vehicles. In the training phase, 211 images were used to verify the training effect of the two models, and their training times and accuracies were obtained, as shown in Table 3.
Table 3 shows that the improved EfficientDet-D3 needs 4.2 h to finish training on the 2387 images, whereas the original EfficientDet-D3 takes 6.3 h. The training time of the improved model is thus much lower, showing that the proposed method improves training efficiency and shortens training time. Additionally, the training accuracy of the improved EfficientDet-D3 is almost the same as that of the original, showing that the optimization in this paper does not degrade training accuracy.
We used the original and improved EfficientDet-D3 to detect the 146 test images. According to Equations (5)–(7), we obtained the Precision, Recall, and F1-score of the two methods, as shown in Table 4.
Table 4 shows that the Precision, Recall, and F1-score of the original EfficientDet-D3 are identical; the reason is that the sum of TP and FP equals the sum of TP and FN. It is also clear from Table 4 that the Precision, Recall, and F1-score of the improved EfficientDet-D3 are greater than those of the original, indicating that the improved model performs better for the detection of hazardous goods vehicles.

3.4. Performance Analysis

(1) Comparison of different detection methods for hazardous goods vehicles
In order to verify the performance of the proposed method, we used the 146 test images to detect hazardous goods vehicles and compared the results with the Cascade R-CNN, CenterNet, and EfficientDet-D7x methods. The test set includes long-distance and short-range hazardous goods vehicles and is composed of 102 vehicle images and 44 non-vehicle images. Table 5 shows the number of parameters used by each of the four methods.
From Table 5, it is clear that the parameter count of the proposed method is much lower than those of the Cascade R-CNN, CenterNet, and EfficientDet-D7x methods: the proposed method detects hazardous goods vehicles with the fewest parameters. Cascade R-CNN uses the most parameters, far more than the other three methods. The parameter count of CenterNet is almost half that of Cascade R-CNN but still higher than those of EfficientDet-D7x and the proposed method. This shows that the computational complexity of Cascade R-CNN is the highest, followed by CenterNet, while the complexity of the proposed method is much lower than that of the other three methods.
We used the four methods to detect hazardous goods vehicles. Based on the definitions of TP, FP, TN, and FN, the detection results on the 146 test images were obtained, as shown in Table 6.
Table 6 shows that the TP values of the Cascade R-CNN and CenterNet methods are lower than those of EfficientDet-D7x and the proposed method, mainly because the former two are more easily affected by long-distance hazardous goods vehicles. The FP values of Cascade R-CNN, CenterNet, and EfficientDet-D7x are greater than that of the proposed method, likely because these three methods are more easily influenced by the non-vehicle images and identify vehicle-like objects as vehicles.
According to the TP, TN, FP, and FN, the tri-partite measures are calculated, as shown in Table 7.
The Precision of CenterNet is lower than that of the other three methods, yet its Recall is higher than that of Cascade R-CNN. The reason is that Precision depends on TP and FP while Recall depends on TP and FN, and the FN of CenterNet is lower than that of Cascade R-CNN. The Precision, Recall, and F1-score of EfficientDet-D7x and the proposed method are higher than those of Cascade R-CNN and CenterNet. Although the Recall of the proposed method is slightly lower than that of EfficientDet-D7x, its Precision is higher, and its F1-score is slightly higher as well. The Precision and F1-score of the proposed method are considerably higher than those of the other methods. The experimental results therefore show that, by reducing false identifications, the proposed method improves vehicle identification accuracy compared with Cascade R-CNN, CenterNet, and EfficientDet-D7x.
These four methods are used to detect hazardous goods vehicles, as shown in Figure 6.
As can be seen from Figure 6, the detection scores of all four methods for short-range vehicles are greater than 80%, showing that all four are suitable for vehicle detection in this case. Among them, the detection scores of the proposed method for the two hazardous goods vehicles are 98% and 99%, higher than those of the Cascade R-CNN and CenterNet methods and only slightly lower than those of the EfficientDet-D7x method, whose computational cost and memory requirements are much higher than those of the proposed method.
In order to evaluate detection efficiency, the time cost of detecting hazardous goods vehicles with the proposed method and the other three methods was measured, as shown in Figure 7. As can be seen from Figure 7, Cascade R-CNN has the longest detection time, about 160 ms. The detection times of EfficientDet-D7x and CenterNet are similar, at only about 40 ms, far below that of Cascade R-CNN. The detection time of the proposed method is much lower than that of Cascade R-CNN and slightly lower than those of CenterNet and EfficientDet-D7x. In terms of detection time, therefore, the proposed method is better than the other three methods.
(2) Comparative detection of hazardous goods vehicles in different scenarios
In order to evaluate whether the deep learning model constructed in this paper is applicable to the detection of hazardous goods vehicles in different scenarios, vehicles in different scenarios were detected, as shown in Figure 8. The higher the detection score of a hazardous goods vehicle, the better the detection effect.
It can be clearly seen from Figure 8 that the vehicle detection scores under the six different backgrounds are all greater than 90%. Even when there is interference from other objects around the vehicle body, the hazardous goods vehicle detection score remains above 90%, while the detection scores of background interference objects are below 50%, as shown in Figure 8d,f. This shows that the method detects hazardous goods vehicles well and is suitable for detection under different backgrounds.

4. Case Study

There are four hazardous goods warehouses in the Hanyang district of Wuhan, as shown in Figure 9. From the routes of hazardous goods vehicles marked in the figure, it is clear that all vehicles must pass along the yellow route in the upper left corner, at the position of camera one. We used the trained deep learning model to detect hazardous goods vehicles on the cameras at ten locations.
We set the detection period to one week, and the number of hazardous goods vehicles was counted every day, as shown in Figure 10.
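As a minimal sketch of this counting step, each confirmed detection can be treated as a (location, weekday) event and tallied over the monitoring period; the record format below is assumed, since the entries would in practice come from running the trained model on each camera stream.

```python
from collections import Counter

# One entry per detected hazardous goods vehicle: (location, weekday).
# Illustrative records only; real entries come from the camera detections.
detections = [
    ("location_1", "Monday"),
    ("location_1", "Monday"),
    ("location_2", "Tuesday"),
    # ...
]

daily_counts = Counter(detections)                      # (location, day) -> count
weekly_totals = Counter(loc for loc, _ in detections)   # location -> weekly total
print(weekly_totals["location_1"])                      # 2 for the toy records
```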
From Figure 10, it can be seen that the number of hazardous goods vehicles at location one is much greater than at location two, because all vehicles leaving the hazardous goods warehouses pass through location one. The numbers at locations two, three, and four are almost the same, indicating that vehicles are distributed similarly over the three roads. Location 10 has the fewest hazardous goods vehicles, since only vehicles from one warehouse pass through it. There are far fewer hazardous goods vehicles on Saturday than on other days, mainly because Saturday is a rest day and only a few vehicles go in and out.
From Figure 11, it can be seen that hazardous goods vehicles pass through all of locations 1 to 10, so all ten positions are potentially hazardous locations affected by hazardous goods. The number of vehicles at location 1 is much greater than at the other locations, so location 1 has the highest risk of being affected by hazardous goods. The risk at position 2 is similar to that at positions 3 and 4, and all three are higher than positions 5 to 10. Risk descends from location 5 to location 10, and location 10 clearly has the lowest risk level.
According to these statistics, the total number of hazardous goods vehicles can be used to evaluate the risk level of each location. The risk at the different locations is divided into six levels, and the levels are marked with circles of different sizes and colors, as shown in Figure 12.
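A simple way to realize this grading is to bin the weekly totals, as sketched below; the paper does not give the numeric thresholds between grades, so the bin edges here are illustrative placeholders only.

```python
import bisect

# Hypothetical ascending bin edges on weekly vehicle totals; the actual
# thresholds used for Figure 12 are not stated in the paper.
THRESHOLDS = [50, 100, 200, 400, 800]

def risk_grade(weekly_total: int) -> int:
    """Map a weekly vehicle count to a grade from 6 (lowest) to 1 (highest risk)."""
    # bisect_right yields 0..5 as totals grow; invert so heavier traffic
    # means a numerically smaller (more severe) grade.
    return 6 - bisect.bisect_right(THRESHOLDS, weekly_total)

print(risk_grade(900))  # 1: busiest location, highest risk
print(risk_grade(10))   # 6: quietest location, lowest risk
```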
Figure 12 depicts the different risk levels of the locations. The circle at location one is the largest and is colored red, showing that location one has the highest risk level, since all hazardous goods vehicles must pass through it. Locations two, three, and four share risk level 2. They are followed by location five, with risk level 3, whose circle is smaller than those of locations one through four and colored yellow. Location six has risk level 4, and locations seven, eight, and nine share the same level. Location ten has risk level 6, the lowest. Therefore, risk levels at different locations can be determined from the number of hazardous goods vehicles, and the risk level of each road section traversed by hazardous goods vehicles can be seen clearly on the map.

5. Conclusions

In this paper, a hazardous goods vehicle detection method based on deep learning is proposed, and a detection model based on the EfficientDet-D3 model is established. In the training stage of the EfficientDet-D3 model, to improve training efficiency, staged training and learning-rate settings are derived from the change in the total loss value, establishing an adaptive model training mechanism.
Comparing the detection model in this paper with the methods based on Cascade R-CNN and CenterNet, the proposed method uses the fewest parameters and has the lowest computational complexity. Its detection time for hazardous goods vehicles is comparable to that of CenterNet and far below that of Cascade R-CNN, while the detection accuracy of the three methods is basically the same. Considering computational complexity, time consumption, and detection accuracy together, the proposed method is better than the other two.
The detection model was used to detect hazardous goods vehicles in different scenes, and the results show that the method detects them accurately, with detection scores higher than 90%. The model was also applied to the detection of hazardous goods vehicles in four sections of Wuhan Petrogoods Company, where its accuracy again exceeded 90%, showing that the method can be used on different road sections. This paper further analyzed the detection of hazardous goods vehicles on the sections surrounding four hazardous goods warehouses in the Hanyang district of Wuhan and, from one week of detection results, obtained the number of hazardous goods vehicles passing through each section. The risk level of each section is derived from these passing counts, so the risk level of each section around the hazardous goods warehouses can be seen clearly on the map.

Author Contributions

Conceptualization, Q.A. and S.W.; methodology, Q.A.; software, H.W. and R.S.; validation, Z.L.; formal analysis, J.Y.; investigation, J.Y.; resources, R.S.; data curation, Q.A. and S.W.; writing—original draft preparation, Q.A.; writing—review and editing, R.S.; funding acquisition, Q.A. and H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the technology project of Hubei Province Safety Production special fund (Program SN: SJZX 20211006). This work was supported by the Opening Foundation of State Key Laboratory of Cognitive Intelligence, iFLYTEK (CIOS-2022SC07).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

  1. Arthi, R.; Padmavathi, S.; Amudha, J. Vehicle detection in static images using color and corner map. In Proceedings of the 2010 International Conference on Recent Trends in Information, Telecommunication and Computing, Kerala, India, 12–13 March 2010; pp. 244–246.
  2. Matos, F.; Souza, R. An image vehicle classification method based on edge and PCA applied to blocks. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Seoul, Korea, 14–17 October 2012; pp. 1688–1693.
  3. Iqbal, U.; Zamir, S.W.; Shahid, M.H.; Parwaiz, K.; Yasin, M.; Sarfraz, M.S. Image based vehicle type identification. In Proceedings of the 2010 International Conference on Information and Emerging Technologies, Karachi, Pakistan, 14–16 June 2010; pp. 1–5.
  4. Pei, M.T.; Shen, J.J.; Yang, M.; Jia, Y.D. Vehicle detection method in complex illumination environment. J. Beijing Univ. Technol. 2016, 36, 393–398.
  5. Ghaffarian, S.; Kasar, I. Automatic vehicle detection based on automatic histogram-based fuzzy. J. Appl. Remote Sens. 2016, 10, 12–21.
  6. Li, Y.; Li, B.; Tian, B.; Yao, Q. Vehicle Detection Based on the AND-OR Graph for Congested Traffic Conditions. IEEE Trans. Intell. Transp. Syst. 2013, 14, 984–993.
  7. Liu, H.; Liu, T.; Zhang, Z.; Sangaiah, A.K.; Yang, B.; Li, Y. ARHPE: Asymmetric Relation-Aware Representation Learning for Head Pose Estimation in Industrial Human-Computer Interaction. IEEE Trans. Ind. Inf. 2022, 18, 7107–7117.
  8. Liu, T.; Wang, J.; Yang, B.; Wang, X. NGDNet: Nonuniform Gaussian-label distribution learning for infrared head pose estimation and on-task behavior understanding in the classroom. Neurocomputing 2021, 436, 210–220.
  9. Liu, H.; Zheng, C.; Li, D.; Shen, X.; Lin, K.; Wang, J.; Zhang, Z.; Zhang, Z.; Xiong, N. EDMF: Efficient Deep Matrix Factorization with Review Feature Learning for Industrial Recommender System. IEEE Trans. Ind. Inf. 2022, 18, 4361–4371.
  10. Chen, X.; Wu, H.; Lichti, D.; Han, X.; Ban, Y.; Li, P.; Deng, H. Extraction of indoor objects based on the exponential function density clustering model. Inf. Sci. 2022, 607, 1111–1135.
  11. Shen, X.J.; Zhe, S.; Huang, Y.P.; Wang, Y. Deep convolution neural network parking space occupancy detection algorithm based on nonlocal operation. J. Electron. Inf. 2020, 42, 2269–2276.
  12. Xiang, X.; Lv, N.; Zhai, M.; El Saddik, A. Real-time parking occupancy detection for gas stations based on Haar-AdaBoosting and CNN. IEEE Sens. J. 2017, 17, 6360–6367.
  13. Tang, T.; Zhou, S.; Deng, Z.; Zou, H.; Lei, L. Vehicle detection in aerial images based on region convolutional neural networks and hard negative example mining. Sensors 2017, 17, 336–352.
  14. Wang, X.; Shrivastava, A.; Gupta, A. A-fast-RCNN: Hard positive generation via adversary for object detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2606–2615.
  15. Lu, J.; Ma, C.; Li, L.; Xing, X.; Zhang, Y.; Wang, Z.; Xu, J. A vehicle detection method for aerial image based on YOLO. J. Comput. Commun. 2018, 6, 98–107.
  16. Cao, G.; Xie, X.; Yang, W.; Liao, Q.; Shi, G.; Wu, J. Feature-fused SSD: Fast detection for small objects. In Proceedings of the Ninth International Conference on Graphic and Image Processing, Qingdao, China, 14–16 October 2017; Volume 1.
  17. Liu, H.; Liu, T.; Chen, Y.; Zhang, Z.; Li, Y. EHPE: Skeleton Cues-based Gaussian Coordinate Encoding for Efficient Human Pose Estimation. IEEE Trans. Multimed. 2022, 1–12.
  18. Liu, T.; Liu, H.; Li, Y.; Chen, Z.; Zhang, Z.; Liu, S. Flexible FTIR Spectral Imaging Enhancement for Industrial Robot Infrared Vision Sensing. IEEE Trans. Ind. Inf. 2020, 16, 544–554.
  19. Liu, T.; Liu, H.; Li, Y.; Zhang, Z.; Liu, S. Efficient Blind Signal Reconstruction with Wavelet Transforms Regularization for Educational Robot Infrared Vision Sensing. IEEE/ASME Trans. Mechatron. 2019, 24, 384–394.
  20. Su, H.S.; Long, M.K.; Zeng, Z.G. Controllability of two-time-scale discrete-time multiagent systems. IEEE Trans. Cybern. 2020, 50, 1440–1449.
  21. Cai, Z.; Vasconcelos, N. Cascade R-CNN: Delving into High Quality Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6154–6162.
  22. Du, X.; Lin, T.Y.; Jin, P.; Ghiasi, G.; Tan, M.; Cui, Y.; Le, Q.V.; Song, X. SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11589–11598.
  23. Zhou, X.; Wang, D.; Krähenbühl, P. Objects as points. arXiv 2019, arXiv:1904.07850. Available online: http://arxiv.org/abs/1904.07850 (accessed on 25 April 2019).
  24. Su, H.S.; Liu, Y.F.; Zeng, Z.G. Second-order consensus for multiagent systems via intermittent sampled position data control. IEEE Trans. Cybern. 2020, 50, 2063–2072.
  25. An, Q.; Chen, X.; Zhang, J.; Shi, R.; Yang, Y.; Huang, W. A Robust Fire Detection Model via Convolution Neural Networks for Intelligent Robot Vision Sensing. Sensors 2022, 22, 2929.
  26. Liu, H.; Fang, S.; Zhang, Z.; Li, D.; Lin, K.; Wang, J. MFDNet: Collaborative Poses Perception and Matrix Fisher Distribution for Head Pose Estimation. IEEE Trans. Multimed. 2022, 24, 2449–2460.
  27. Li, Z.; Liu, H.; Zhang, Z.; Liu, T.; Xiong, N. Learning Knowledge Graph Embedding with Heterogeneous Relation Attention Networks. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 3961–3973.
  28. Liu, H.; Nie, H.; Zhang, Z.; Li, Y.-F. Anisotropic angle distribution learning for head pose estimation and attention understanding in human-computer interaction. Neurocomputing 2021, 433, 310–322.
  29. Liu, T.; Liu, H.; Chen, Z.; Lesgold, A.M. Fast Blind Instrument Function Estimation Method for Industrial Infrared Spectrometers. IEEE Trans. Ind. Inf. 2018, 14, 5268–5277.
  30. Loshchilov, I.; Hutter, F. Decoupled weight decay regularization. arXiv 2017, arXiv:1711.05101. Available online: http://arxiv.org/abs/1711.05101 (accessed on 4 January 2019).
Figure 1. EfficientDet-D3 deep learning network.
Figure 2. Vehicle detection diagram.
Figure 3. Vehicle data set.
Figure 4. Variation of learning rate parameters.
Figure 5. Variation trend of total loss value.
Figure 6. Detection effect of different methods on hazardous goods vehicles. (a,b) Proposed method. (c,d) Cascade R-CNN method. (e,f) CenterNet method. (g,h) EfficientDet-D7x method.
Figure 7. Detection time of four different methods.
Figure 8. (a–f) Deep learning vehicle detection model for hazardous goods vehicle detection in different scenarios.
Figure 9. Four hazardous goods warehouses and ten CCD cameras at ten locations.
Figure 10. The number of hazardous goods vehicles at different positions from Monday to Saturday.
Figure 11. Total number of hazardous goods vehicles at each location from Monday to Saturday.
Figure 12. Different risk levels of different locations.
Table 1. Performance of different identification network models.

Recognition Network Model    Speed (ms)   COCO mAP
Cascade R-CNN_ResNet-101     410          42.8
CenterNet_DLA-34             31           41.6
RetinaNet_ResNet-101         32           39.9
EfficientDet-D1              16           40.5
EfficientDet-D3              37           45.6
EfficientDet-D7x             285          55.1
Table 2. Four possible outcomes of the vehicle detection.

Positive (Presence of Vehicle)   Negative (Absence of Vehicle)
True Positive (TP)               True Negative (TN)
False Positive (FP)              False Negative (FN)
Table 3. Training time and accuracy of the two methods.

                     EfficientDet-D3   Improved EfficientDet-D3
Training time (h)    6.3               4.2
Training accuracy    0.987             0.986
Table 4. Tri-partite measures of original and improved EfficientDet-D3.

Method                     Precision (%)   Recall (%)   F1-Score (%)
EfficientDet-D3            96.1            96.1         96.1
Improved EfficientDet-D3   97              97           97
Table 5. The number of parameters for the four different methods.

                  Cascade R-CNN   CenterNet   EfficientDet-D7x   Proposed Method
Parameters (MB)   345             185         77                 12
Table 6. The number of four possible outcomes.

Method             TP    TN   FP   FN
Cascade R-CNN      95    38   6    7
CenterNet          96    37   7    6
EfficientDet-D7x   100   39   5    2
Proposed method    99    41   3    3
Table 7. Tri-partite measures of different methods.

Method             Precision (%)   Recall (%)   F1-Score (%)
Cascade R-CNN      94              93.1         93.5
CenterNet          93.2            94.1         93.6
EfficientDet-D7x   95.2            98           96.6
Proposed method    97              97           97
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

