Article
Peer-Review Record

Real-Time Fire Detection Method for Electric Vehicle Charging Stations Based on Machine Vision

World Electr. Veh. J. 2022, 13(2), 23; https://doi.org/10.3390/wevj13020023
by Shiyu Zhang 1, Qing Yang 2,*, Yuchen Gao 2 and Dexin Gao 1
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 29 December 2021 / Revised: 9 January 2022 / Accepted: 13 January 2022 / Published: 18 January 2022

Round 1

Reviewer 1 Report

The authors have discussed a fire detection system for EV charging stations. The system is based on machine vision, and the authors have introduced a real-time detection algorithm called improved YOLOv4. The idea looks okay, but it requires the following revisions.

  1. The language of the paper is poor. It should be improved in all aspects. This is a very serious comment.
  2. The literature study is not sufficient. The authors should improve it by including recent publications.
  3. In what way is YOLOv4 improved? It is not clear in the paper.
  4. The authors have constructed a fire dataset. How do we know that it has been constructed? I strongly recommend depositing it in a repository and sharing the link. Without this, your results cannot be validated.
  5. In Fig. 2, each subfigure needs to be labeled so that readers can understand the different fire situations.
  6. From Fig. 11, I observe that the performance of v3 is better than v4, but in the text the authors claim that v4's results are better than v3's. Please check and justify.
  7. There are many spelling mistakes, grammatical mistakes, typos, etc.
  8. What are the drawbacks of the proposed YOLOv4? Discuss the challenges.

Author Response

Point 1: The language of the paper is poor. It should be improved in all aspects. This is a very serious comment.

 

Response 1: We have checked the technical English of our manuscript and corrected the errors found, some of which are as follows:

  1. In line 16, the word "greatly" was corrected to "significantly".
  2. In line 22, the statement "The experimental results show that the improved algorithm is fast and accurate in detecting not only large-size flames in real-time, but also small-size flames at the beginning of a fire, with a detection speed of 43 fps/s, mAP value of 91.53% and F1 value of 0.91." was corrected to "The experimental results show that the improved algorithm is fast and accurate in detecting large-size flames in real-time and small-size flames at the beginning of a fire, with a detection speed of 43 fps/s, mAP value of 91.53%, and F1 value of 0.91.".
  3. In line 283, the statement "we can learn that the growth of avg IoU values almost stagnates when the number of clusters reaches 9." was corrected to "we can learn that the growth of avg IoU values almost stagnates when clusters reach 9.".

Of course, we made more than just these changes; the details can be seen in the revised version.

 

Point 2: The literature study is not sufficient. The authors should improve the literature study by including recent publications.

Response 2: Thank you very much for raising this issue and pointing out our shortcomings. Our proposed fire detection method for electric vehicle charging stations is based on dynamic video detection, so most of the references in this paper concern dynamic video-based target detection methods. In the revised article, we have added five references: [5], [15], [16], [17], and [26]. From this literature, we can see that YOLOv4 is widely used in target detection and has achieved excellent detection results. We therefore chose the YOLOv4 algorithm and improved it according to the specific characteristics of the flame target.

 

Point 3: In what way is YOLOv4 improved? It is not clear in the paper.

 

Response 3: We improve the original YOLOv4 algorithm to cope with the complex and variable flame shape. The improvement is carried out in two steps.

  1. To avoid selecting a chance parameter model without generalization capability due to a single division of the training and test sets, the existing dataset is divided several times according to the cross-validation principle, which reduces chance effects and improves the generalization capability of the model. The dataset is divided into two parts: a training set, used to train the network model, and a test set, used to test the performance of the network model. The EV charging station fire dataset is divided into ten folds; one fold is selected as the test set and the others as the training set for training and validation. The process is performed ten times in sequence, as shown in Figure 3.
  2. We introduce the K-means algorithm into the original YOLOv4 model to obtain the improved YOLOv4-Kmeans algorithm. The images of the EV charging station flame dataset are clustered before being input to the training network, which compresses the training time and improves the model detection accuracy. The improved YOLOv4-Kmeans network model uses nine clustering centers, and the specific width and height parameter values of the anchor boxes are shown in Table 3. A minimal sketch of this clustering step is given after this list.
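As promised above, here is a minimal sketch of the anchor-clustering step, assuming `boxes` is an (N, 2) NumPy array of normalized (width, height) pairs extracted from the dataset labels. The function names and this NumPy implementation are ours for illustration and do not come from the authors' code; the 1 - IoU distance is the one commonly used for YOLO anchor clustering.

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between (width, height) pairs, with all boxes aligned at one corner."""
    inter = (np.minimum(boxes[:, None, 0], centroids[None, :, 0]) *
             np.minimum(boxes[:, None, 1], centroids[None, :, 1]))
    area_b = (boxes[:, 0] * boxes[:, 1])[:, None]
    area_c = (centroids[:, 0] * centroids[:, 1])[None, :]
    return inter / (area_b + area_c - inter)

def kmeans_anchors(boxes, k=9, iters=300, seed=0):
    """K-means over box shapes using the 1 - IoU distance."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to the centroid whose shape it overlaps most.
        assign = np.argmin(1.0 - iou_wh(boxes, centroids), axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else centroids[i] for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids[np.argsort(centroids.prod(axis=1))]  # small to large
```

The quantity `iou_wh(boxes, anchors).max(axis=1).mean()` is the avg IoU referred to in Response 1, whose growth stagnates once the number of clusters reaches 9; that elbow is why nine clustering centers are used.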

 

Point 4: The authors have constructed a fire dataset. How do we know that it has been constructed? I strongly recommend depositing it in a repository and sharing the link. Without this, your results cannot be validated.

 

Response 4: During the experiments, we constructed the EV charging station fire dataset according to our needs; the dataset was produced by the group members and is accurate, as shown in Table 1. In the current experiments, we further improve the YOLOv4 algorithm on this dataset.

Point 5: In Fig. 2, each subfigure needs to be labeled so that readers can understand the different fire situations.

 

Response 5: Thank you very much for pointing out this shortcoming. We have added an explanation of Figure 2 at line 227. Figure 2 serves to show the complex and varied shape of the flame; compared with conventional target detection, flame detection presents particular difficulties.

 

Point 6: From Fig. 11, I observe that the performance of v3 is better than v4, but in the text the authors claim that v4's results are better than v3's. Please check and justify.

 

Response 6: We are very sorry for this mistake. Due to a writing error, the position of the image was incorrect; it has been corrected, as shown in Figure 13(c) and (d). The experimental results show that YOLOv4 outperforms YOLOv3, as shown in Table 5.

 

Point 7: There are many spelling mistakes, grammatical mistakes, typos, etc.

 

Response 7: We have checked the technical English of our manuscript and corrected the errors found, some of which are as follows:

  1. In line 15, the word "real time" was corrected to "real-time".
  2. In line 42, the word "circuit" was corrected to "circuits".
  3. In line 215, the phrase "then the" was corrected to "The".
  4. In line 259, the phrase "as a measure of" was corrected to "to measure".
  5. In line 406, the word "kmeans" was corrected to "K-means".

Of course, we made more than just these changes; the details can be seen in the revised version.

 

Point 8: What are the drawbacks of the proposed YOLOv4? Discuss the challenges.

 

Response 8: The YOLOv4 algorithm needs to run on the computer's GPU, and the larger the GPU's video memory, the faster the computation. Taking the YOLOv4-Kmeans algorithm for fire detection as an example, one training session takes 7-8 days on an NVIDIA GTX 960M, 4-5 hours on an NVIDIA RTX 2080 Ti, and 1-2 hours on an NVIDIA RTX 3090. This shows that deep learning algorithms place specific requirements on the computer configuration. In addition, the weight file output by the YOLOv4 algorithm after training is 250 MB, so the algorithm is not suitable for running on mobile devices. The weight file of the improved lightweight network is 50 MB, and that algorithm can run on embedded devices.

 

We also corrected other minor errors and touched up the language of our manuscript, as detailed in the revised version.

 

Special thanks to you for your good comments.

Author Response File: Author Response.docx

Reviewer 2 Report

The work seems to be an interesting one:

Some figures are not clear

More experimental results must be added.

Please discuss the cost of a system based on the proposed algorithm.

Author Response

Point 1:  Some figures are not clear.

 

Response 1: Thank you very much for raising this issue; it was an oversight in our typesetting, and we have fixed it in the paper. We adjusted Figures 6, 7, and 13 and added Figures 4 and 11. We will continue to correct our mistakes during the subsequent revision process.

 

Point 2: More experimental results must be added.

 

Response 2: Thank you very much for pointing this out, and we are sorry for the omission. We added four images in Figure 11, taken from our experimental simulation results and corresponding to Table 5, to increase the credibility of the experimental results.

During the experiments, we used SSD, Faster R-CNN, YOLOv3, YOLOv4, and YOLOv4-Kmeans for simulation. The results show that the YOLO series algorithms perform better, so the YOLO algorithms are the main comparison in the paper. We added the experimental results of SSD and Faster R-CNN, as shown in Figure 13 and Table 5.

Table 5. Comparison of different model evaluation index parameters.

Types of models   AP (= mAP)   Recall    Precision   F1     Speed (per photo)   FPS (video)
SSD               76.95%       67.26%    82.17%      0.74   0.0235 s            28.20
Faster R-CNN      82.48%       90.39%    50.6%       0.65   0.0773 s            32.09
YOLOv3            81.81%       63.35%    95.19%      0.73   0.0332 s            38.63
YOLOv4            91.39%       80.80%    96.33%      0.88   0.0306 s            39.41
YOLOv4-Kmeans     91.53%       84.06%    98.10%      0.91   0.0258 s            42.57
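For reference, the metrics in Table 5 have their standard meanings (these are the conventional definitions, stated here for the reader's convenience, not formulas quoted from the paper), with TP, FP, and FN the numbers of true positives, false positives, and false negatives:

$$
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},
$$

and mAP is the mean of the per-class average precision; since flame is the only class here, AP = mAP, as the column header indicates.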

For more detailed revisions, see the revised version of the paper.

 

Point 3: Please discuss the cost of a system based on the proposed algorithm.

 

Response 3: This paper aims to perform target detection based on the existing monitoring equipment of EV charging stations; we only need to feed the surveillance video to the algorithm, so the cost of the monitoring equipment can be neglected. In Table 4, we list the specific configuration of the experimental equipment. After repeated experiments, we conclude that real-time target detection places specific requirements on the GPU; we recommend an NVIDIA GTX 1080 Ti, an NVIDIA RTX 2060, or a more advanced graphics card. The cost of the entire system is between $800 and $1500.

 

 

We also corrected other minor errors and touched up the language of our manuscript, as detailed in the revised version.

 

Special thanks to you for your good comments.

Reviewer 3 Report

A very interesting paper entitled "", which gives a valuable solution to the problem of real-time fire detection at charging stations. The topic is very interesting and the presented method is important. Using the YOLO tool is popular, but the authors' contribution compared to the basic YOLO form is not so clear.

So I recommend:

  • adding a table that compares the different YOLO versions;
  • presenting the contribution of YOLOv4 in relation to the other versions;
  • Can the authors give, in the perspectives section, more points related to this work? What kinds of actions can be executed after this detection?
  • An interesting point must be discussed at the end of the simulation phase: what is the system rapidity?

 

Globally, the work is interesting and I would like to see it online.

Author Response

Point 1: Adding a table which can compare the different YOLO versions.

 


Response 1: It is true, as the reviewer suggested, that we should compare the different YOLO versions. YOLOv4 is the fourth version of the YOLO algorithm. YOLOv1 laid the foundation of the algorithm, and the subsequent versions improve it and enhance its performance: YOLOv2 upgraded the feature extraction network to Darknet-19, Darknet-53 was proposed in the YOLOv3 algorithm, and YOLOv4 uses the more advanced CSPDarknet-53. The test results of the YOLO algorithms on the COCO dataset are shown in Figure 4; YOLOv4 achieves the best detection results.

Figure 4. Comparison of the proposed YOLOv4 and other state-of-the-art object detectors.

Due to the low detection performance of YOLOv1 and YOLOv2, they are now rarely used in target detection; the more popular algorithms are SSD, Faster R-CNN, YOLOv3, and YOLOv4. Table 5 shows performance index parameters such as F1, mAP, Precision, and Recall for the different models. From the comparison of AP, Recall, and Precision, it can be concluded that the improved YOLOv4-Kmeans network model improves all parameters significantly, with the most noticeable improvement in Recall.

Table 5. Comparison of different model evaluation index parameters.

Types of models   AP (= mAP)   Recall    Precision   F1     Speed (per photo)   FPS (video)
SSD               76.95%       67.26%    82.17%      0.74   0.0235 s            28.20
Faster R-CNN      82.48%       90.39%    50.6%       0.65   0.0773 s            32.09
YOLOv3            81.81%       63.35%    95.19%      0.73   0.0332 s            38.63
YOLOv4            91.39%       80.80%    96.33%      0.88   0.0306 s            39.41
YOLOv4-Kmeans     91.53%       84.06%    98.10%      0.91   0.0258 s            42.57

From the F1 values, it can be seen that the comprehensive performance of the improved YOLOv4-Kmeans network model is 24.66% higher than that of YOLOv3 and 3.4% higher than that of YOLOv4.
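These percentages can be verified directly from the F1 column of Table 5:

$$
\frac{0.91}{0.73} - 1 \approx 24.66\%, \qquad \frac{0.91}{0.88} - 1 \approx 3.4\%.
$$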

 

Point 2: Present the contribution of YOLOv4 in relation to the other versions.

 

Response 2: The YOLOv4 algorithm was proposed by Alexey Bochkovskiy and colleagues. In less than two years, it has been widely used in the field of target detection, achieved good detection results, and been praised by researchers. In the literature [15-23], the YOLOv4 network has been applied to target detection tasks such as agricultural product inspection, industrial safety, and robot vision, with good results. In this paper, we mainly draw on the application of YOLOv4 to dynamic target detection. We firmly believe that with the development of IoT and 5G technology, video-based dynamic target detection will be widely used.

 

  15. Li, T.; Lv, X.Y.; Lian, X.F.; Wang, G. YOLOv4_Drone: UAV image target detection based on an improved YOLOv4 algorithm. Computers and Electrical Engineering 2021, 93.
  16. Singha, S.; Aydin, B. Automated Drone Detection Using YOLOv4. Drones 2021, 5(3), 95.
  17. Fu, H.X.; Song, G.Q.; Wang, Y.C. Improved YOLOv4 Marine Target Detection Combined with CBAM. Symmetry 2021, 13(4), 623.
  18. Liu, H.S.; Fan, K.G.; Ouyang, Q.H.; Li, N. Real-Time Small Drones Detection Based on Pruned YOLOv4. Sensors 2021, 21(10), 3374.
  19. Yu, Z.W.; Shen, Y.G.; Shen, C.K. A real-time detection approach for bridge cracks based on YOLOv4-FPM. Automation in Construction 2021, 122, 103514.
  20. Kulshreshtha, M.; Chandra, S.S.; Randhawa, P.; Tsaramirsis, G.; Khadidos, A.; Khadidos, A.O. OATCR: Outdoor Autonomous Trash-Collecting Robot Design Using YOLOv4-Tiny. Electronics 2021, 10(18), 2292.
  21. Parico, A.I.B.; Ahamed, T. Real Time Pear Fruit Detection and Counting Using YOLOv4 Models and Deep SORT. Sensors 2021, 21(14), 4803.
  22. Kumar, S.; Gupta, H.; Yadav, D.; Ansari, I.A.; Verma, O.P. YOLOv4 algorithm for the real-time detection of fire and personal protective equipments at construction sites. Multimedia Tools and Applications 2021, 1-21.
  23. Schütz, A.K.; Schöler, V.; Krause, E.; Tobias, F.M.; Müller, T.; Freuling, C.M.; Conraths, F.J.; Stanke, M.; Homeier-Bachmann, T.; Lentz, H.H.K. Application of YOLOv4 for Detection and Motion Monitoring of Red Foxes. Animals 2021, 11(6), 1723.

 

Point 3: Can the authors give, in the perspectives section, more points related to this work? What kinds of actions can be executed after this detection?

 

Response 3: Thank you very much for asking this question. We firmly believe that with the development of IoT and 5G technology, EV charging stations will develop into unmanned, intelligent charging stations, where staff can manage multiple stations through a single monitoring device. We use an improved YOLOv4 algorithm for real-time fire detection, which automatically identifies fires as soon as they occur. In future work, we can connect this algorithm to the fire protection system of EV charging stations to achieve automatic detection, alarm, and fire extinguishing; a sketch of such a loop is given below. In this way, a fire response can be initiated in the shortest possible time, reducing fire damage and improving the operational safety of EV charging stations. We explain this issue in Chapter 5 of the paper.
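As a hypothetical illustration (not from the paper), the connection could look like the loop below, where `detect_flames` stands in for YOLOv4-Kmeans inference on a frame and `alarm` for the station's fire-protection interface; both names are ours.

```python
# Hypothetical detection-to-alarm loop; names are illustrative only.
CONFIRM_FRAMES = 5  # require several consecutive detections to filter flicker

def monitor(video_stream, detect_flames, alarm):
    consecutive = 0
    for frame in video_stream:
        boxes = detect_flames(frame)  # e.g. list of (x, y, w, h, confidence)
        consecutive = consecutive + 1 if boxes else 0
        if consecutive >= CONFIRM_FRAMES:
            alarm.trigger(boxes)      # alert staff and start extinguishing
            consecutive = 0
```

Requiring a few consecutive positive frames is one simple way to trade a fraction of a second of delay for far fewer false alarms.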

 

Point 4: An interesting point must be discussed at the end of the simulation phase: what is the system rapidity?

 

Response 4: The rapidity of the system is the time between when a fire occurs and when the system detects the flame target. Electrical equipment fires grow very quickly, and the flame spreads radially, which can easily set adjacent vehicles on fire and cause a large-area fire; shortening the detection time is therefore our main concern. In this paper, we propose an improved algorithm that achieves video detection at 40 fps with excellent tracking capability, compressing the fire response time to the millisecond scale. The corresponding description is given at line 508 of the paper.
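As a rough check on that claim: at the reported 40 fps, the per-frame processing time is

$$
t_{\text{frame}} = \frac{1}{40\ \text{frames/s}} = 25\ \text{ms},
$$

so each frame is analyzed within tens of milliseconds of capture; the end-to-end response time then depends mainly on how quickly the alarm logic reacts to a confirmed detection.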

 

We also corrected other minor errors and touched up the language of our manuscript, as detailed in the revised version.

 

Special thanks to you for your good comments.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

I am okay with the responses provided by the authors. However, I am still not happy with the dataset: the authors have skipped this comment without a proper response.
