Article
Peer-Review Record

Concrete Cracks Detection and Monitoring Using Deep Learning-Based Multiresolution Analysis

Electronics 2021, 10(15), 1772; https://doi.org/10.3390/electronics10151772
by Ahcene Arbaoui 1, Abdeldjalil Ouahabi 2,3,*, Sébastien Jacques 4 and Madina Hamiane 5
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Reviewer 5: Anonymous
Submission received: 11 June 2021 / Revised: 10 July 2021 / Accepted: 21 July 2021 / Published: 24 July 2021
(This article belongs to the Section Artificial Intelligence)

Round 1

Reviewer 1 Report

This article presents a novel implementation for crack monitoring in concrete structures. Please find my comments as follows:

  1. The Abstract contains some typos.
  2. Please improve the Introduction with literature about CNNs and related works.
  3. Please provide some figures of the NDT experimental images; also improve the Discussion subsection and give more details about your approach.
  4. Improve the Conclusions section.
  5. Some minor comments: the text has general formatting issues, including Section 3 paragraph formatting; text typos on line 501; a typo on line 124; Figure 10 formatting; lines 222-227; and line 213.

Author Response

Thank you for your thorough review of our paper.

Please find below our responses given point-by-point.

Please also refer to the revised manuscript whose changes are in red.

Author Response File: Author Response.pdf

Reviewer 2 Report

The article is devoted to the topic of automatic detection of hidden defects in concrete structures. As the main contribution, the authors present a new automatic detection method based on a convolutional neural network whose input is a wavelet multiresolution analysis. The topic of the article is interesting, but I must state at the outset of this review that the article does not meet the quality requirements for publication in the journal Electronics. In addition to unconvincing English, the main shortcomings of the article are the following:

(1) Unbalanced content structure of the article.

  • Too much space is devoted to a trivial discussion of the principles, advantages, and disadvantages of wavelet analysis (e.g., lines 133-169) and neural networks (almost all of Sections 2.3.1 and 2.3.2), even though the experimental part uses the standard CNNs AlexNet and ResNet50.
  • The experimental part of the work is insufficiently described. How much data was obtained in the experiment described in Section 2? What were the input and output signals? The authors give one example of a scalogram - was it the only input to the convolutional neural network? How were the data split for training and testing of the neural networks, i.e., what did the training, validation, and test parts look like?

(2) I consider the use of Figure 2 a big mistake. It is obviously a software-edited photo (and of poor quality), which was downloaded from the Internet!

(3) The methodological aspect of the experimental part is controversial. Although the authors compare the accuracy of detecting visible and invisible material defects, the comparison is based on two incomparable image databases.
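
The data-partition questions above could be answered with something as simple as the following sketch (a hypothetical illustration, not the authors' actual setup; the file names and the 70/15/15 split ratio are assumptions):

```python
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle and partition samples into training, validation, and test sets."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    return (items[:n_train],                 # training part
            items[n_train:n_train + n_val],  # validation part
            items[n_train + n_val:])         # test part (the remainder)

# Hypothetical inputs: one scalogram image per ultrasonic measurement.
scalograms = [f"scalogram_{i:03d}.png" for i in range(200)]
train_set, val_set, test_set = split_dataset(scalograms)
# 200 samples -> 140 / 30 / 30
```

Reporting the counts of each part (and whether the split is per-image or per-specimen) would answer the question directly.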

Author Response

Thank you for your thorough review of our paper.

Please find below our responses given point-by-point.

Please also refer to the revised manuscript whose changes are in red.

Author Response File: Author Response.pdf

Reviewer 3 Report

First of all, I would like to note that the paper is original research work with no suspicion of plagiarism. Non-destructive techniques for concrete crack detection have attracted much attention, being powerful tools for monitoring as well. Ultrasonic wave propagation methods have good practical implementations, and electronic devices based on this principle have a reasonable cost and can thus be widely applied.
However, I offer several comments that could improve the readability of this paper.
1. I suggest spell and typo checking to avoid "on a n this paper" (line 16) and similar errors.
2. Rethinking and reorganizing Section 2.3 is strongly recommended. Against the strong background of the previous Section 2.2 (which is well written), this section reads like a student's lightweight discussion of deep learning technologies. The title of Subsection 2.3.1, "Neuronal Networks", is abstract, without any connection to the main context. Figures 8 and 9 are trivial examples, and I think most readers will simply skim past that text and its generic pictures. I strongly recommend giving real examples closely related to the main topic.
3. A similar problem concerns Figures 11, 12, and 13. Readers of the high-quality journal Electronics should already be aware of the ReLU function and the max-pooling operation in the general case. I recommend replacing those figures with similar pictures that reflect operations on numerical values taken from the actually constructed ANN architecture.
4. Finally, I think the whole monitoring system structure could be presented in the results section. It need not be a system structure; it could be a flow diagram, for example. Such a figure would help readers understand the proper functioning of the proposed monitoring procedure (or system).

Author Response

Thank you for your thorough review of our paper.

Please find below our responses given point-by-point.

Please also refer to the revised manuscript whose changes are in red.

Author Response File: Author Response.pdf

Reviewer 4 Report

An undoubted achievement is the creation of your own image database, but without making it public, it is difficult to assess the published results. We do not know how the difficulty of this database compares with that of the well-known SDNET2018 dataset.

It is a good idea to use the F1 indicator defined in your work to evaluate the results. Figures 8, 9, and 11 add nothing (they are well known) and can be removed.

Instead of the above-mentioned figures, a more substantial presentation of the developed image database would be useful.

“The procedure for these images is described in Section 2. This is our main contribution here.” Section 2 has its own subtitle; wouldn't it be better to use it here?

Author Response

Thank you for your thorough review of our paper.

Please find below our responses given point-by-point.

Please also refer to the revised manuscript whose changes are in red.

Author Response File: Author Response.pdf

Reviewer 5 Report

  1. typo – a n? What is this trying to say?
  2. Which multi-resolution analysis is it based on?
  3. Which studied material?
  4. Several types? Which types?
  5. Describe dedicated wavelet
  6. Several scales? Which scales?

Describe your dataset. What are the classifications?

  1. Top-1 accuracy?
  2. This closing sentence is currently irrelevant.
  3. Reference?
  4. mechanical behavior – What do you mean?
  5. Reference?
  6. What are the equally devastating factors. Mention them here if you bring them up.
  7. Reference?
  8. Reference?
  9. Clean up list formatting here. Honestly, I’d scrap lines 81-91 altogether.
  10. Missing space: “Figure3”
  11. What sensors? Describe them in detail. Is there a max/min width? I see you describe it later; reformat to bring the sensors up and describe them in the same paragraph.
  12. Evolution is not the right word here.
  13. Missing space: “[24]of”
  14. Describe this replacement in detail
  15. Unclear
  16. Reference?
  17. Why are these objectives important?
  18. Reference?
  19. Again. Make sure this process is described in detail. Somewhat repetitive.
  20. So-called? Are they called that or not? What does “so-called” add to your paper?
  21. Reference?
  22. Called a wavelet, because
  23. Reference
  24. Such a way – what way. Be specific. Your paper is written in a way that the reader has to go and gather information from multiple other sources
  25. This section so far is long-winded and unclear. I recommend breaking the list down into subheaders and describing each item within the subheader
  26. in various fields do what?
  27. Reference
  28. This section should have its own subsection. Provide an introduction paragraph and a closing paragraph with the main takeaways.
  29. Are not all sections of your paper important?
  30. Neural Networks
  31. This section needs references throughout!! You’re missing over 30 sentences that require references.
  32. Absolutely not. Neural networks do not mimic the function of human brain neurons. Neural networks are software. Neural networks are function approximators. They use an action potential, but that is the only similarity to the human brain.
  33. No. Each neuron does not have a digital input and output. Each neuron has a weight modified through gradient descent. You can think of the weight like m in y = mx + b.
  34. What do you mean, behavior. Be specific.
  35. Transfer function? Do you mean activation function?
  36. Delete with or without loops.
  37. Which basic model? You’re describing a node below.
  38. Describe ReLu or Sigmoid. Those are the most common activation functions. Which did you use?
  39. Diffused? I don’t know what you mean.
  40. Each neuron has a set of parameters? What do you mean. This section does not have a single reference. What do you mean learning, or training?
  41. What do you mean? Which functions are more expensive than others?
  42. Type of connection? What other types of connections are there other than the propagation of latent features through the weights?
  43. Design? Do you mean architecture? Oh. No you don’t. Just say training a network requires…
  44. Just training. No one calls it the learning phase.
  45. No. The model never learns the output classes. It maps the weights to the provided labels using gradient descent with backpropagation.
  46. Exploitation? I’ve never heard this term before. Are you referring to validation? Or just running the model?
  47. No. You are not searching for weights. You are taking the derivative with respect to your error to modify the weights in a direction that reduces your loss.
  48. All examples is extremely rare (and frankly means you’re overfitting).
  49. Does “this is basically” add anything to your paper?
  50. Reference. What kind of diagnostics? Be specific
  51. This isn’t a different “configuration”.
  52. References.
  53. Why is this section here? It adds nothing as you move to overfitting
  54. It’s not a phenomenon. Bad dimensioning? That’s not the correct term
  55. Why?
  56. Remove learning by heart
  57. References. Why?
  58. Not be too important? What do you mean?
  59. You’re going from list to list and your flow is extremely confusing. This is written like an undergrad taking notes.
  60. Is “by definition” necessary here? You said this already
  61. This is in French?
  62. You said this already
  63. You said this already
  64. You haven’t mentioned a differentiable function at all yet.
  65. This is your first mention of a loss function.
  66. What is cross-entropy loss?
  67. Your first mention of back-propagation.
  68. Reference
  69. CNNs have actually been around since 1979. See fukushima1979
  70. How do they “detect their features”? What makes CNNs unique? CNNs are the classifier; it makes no sense to say “then train a classifier” as if they were different.
  71. Machine learning methods do not do anything “by hand”. That is the definition of machine learning.
  72. “in fact” why is this here? What do you mean linear filtering? Describe in detail.
  73. What 3 operations? Don’t just reference a figure.
  74. In your example it is crack vs non-crack (which you never formally defined). But it can be whatever output you want.
  75. “brick” not appropriate.
  76. “dragging”??
  77. How are these filters updated? What types of features do they learn? Is it the same at each layer of the network?
  78. “volume”??
  79. No. Depth is the number of layers.
  80. “pitch”?? I’ve never heard this term before.
  81. What is padding?
  82. BN Is not unique to CNNS.
  83. ReLU is not unique to CNNs. Why are they in this section?
  84. Also helps improve generalization
  85. Dropout is not unique to CNNS
  86. Describe bagging.
  87. “learn well”??
  88. What do you mean co-adaptation?
  89. Softmax is not unique to CNNs
  90. You haven’t described ResNet at all. Or transfer learning. Or referenced ResNet. You need to describe what transfer learning is. And also what skip connections are. And why you chose ResNet over other architectures.
  91. Remove intro paragraph here.
  92. Discussion?
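
Comment 90 asks for ResNet, skip connections, and transfer learning to be described; the core idea of a skip connection can be shown in a few lines (a toy illustration only, where `transform` merely stands in for a residual block's learned conv + BN + ReLU layers):

```python
def residual_block(x, transform):
    # ResNet's key idea: the block learns a correction F(x) and adds the
    # input back (y = F(x) + x), so an identity mapping is trivial to
    # represent and gradients flow directly through the skip connection.
    return [xi + fi for xi, fi in zip(x, transform(x))]

# Toy stand-in for the learned layers of one residual block.
double = lambda x: [2.0 * xi for xi in x]
assert residual_block([1.0, -1.0], double) == [3.0, -3.0]  # F(x) + x
```

Transfer learning, in turn, means reusing the convolutional layers of a network pretrained on a large dataset (e.g., ResNet50, typically pretrained on ImageNet) and retraining only a new classification head on the target data.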

Author Response

Thank you for your thorough review of our paper.

Please find below our responses given point-by-point.

Please also refer to the revised manuscript whose changes are in red.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

I am satisfied with your correction based on all reviewers' comments.

Some minor comments:

I suggest redesigning Figure 3 (use a modern style).

line 92 ... ?

Fix paragraph formatting throughout Section 3.

line 618: numeric typos

line 634: rewrite the sentence

Reviewer 3 Report

The paper has been significantly improved. The authors have cleared the vague issues, have added relevant figures, and have done necessary comments. I think this paper can be recommended for acceptance in the present form.

Reviewer 5 Report

  1. What are the crack types? How many classifications are you testing?
  2. “Machine is able to learn by itself”. What does this mean? Talk about the weights of a neural network being modified through training using gradient descent with backpropagation.
  3. “Most powerful deep learning architecture”. What do you mean by most powerful? This terminology does not apply to neural networks, as different architectures are often specialized for a specific task. CNNs are successful for images, but transformer models are also applicable to images and have excellent performance.
  4. No. CNNs do not automatically detect and extract features. Within the conv filters, the model learns spatial patterns of pixel values representative of features that help solve the given classification task.
  5. What is pooling? Why is it valuable?
  6. …? Cite additional papers more recent than 2012.
  7. Unnecessary
  8. Don’t use the term “intelligent computer system”. It’s a function approximator.
  9. It is not and/or software. I’m not sure what you mean by this.
  10. Classifying all examples in the training set correctly is unlikely.
  11. It’s not actually new. CNNs have been around since the 1980s: https://www.cs.princeton.edu/courses/archive/spr08/cos598B/Readings/Fukushima1980.pdf. AlexNet just brought modern computation to the approach.
  12. particularity? rephrase
  13. …?
  14. Don’t put an ! in a paper.
  15. What is Keras?
  16. Avoid terms such as “Nowadays”.
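
Comment 2 could be addressed with a minimal, self-contained sketch of what “learning” actually is here (illustrative only, not the authors' code): a single logistic “neuron” whose weights are repeatedly moved along the negative gradient of the cross-entropy loss.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, data):
    """Mean cross-entropy loss of a single logistic 'neuron'."""
    eps = 1e-12  # numerical safety: keep log() away from 0
    total = 0.0
    for (x1, x2), y in data:
        p = min(max(sigmoid(w[0] * x1 + w[1] * x2 + b), eps), 1.0 - eps)
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(data)

def step(w, b, data, lr=0.5):
    """One gradient-descent step: for sigmoid + cross-entropy, dL/dz = p - y."""
    gw1 = gw2 = gb = 0.0
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        gw1 += (p - y) * x1   # dL/dw1
        gw2 += (p - y) * x2   # dL/dw2
        gb += p - y           # dL/db
    n = len(data)
    # The weights are not "searched for": they move against the gradient.
    return [w[0] - lr * gw1 / n, w[1] - lr * gw2 / n], b - lr * gb / n

# Toy, linearly separable "crack / no-crack" data (purely illustrative).
rng = random.Random(0)
data = [((x1, x2), 1.0 if x1 + x2 > 0 else 0.0)
        for x1, x2 in ((rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(64))]

w, b = [0.0, 0.0], 0.0
before = loss(w, b, data)   # log(2) at the zero initialization
for _ in range(200):
    w, b = step(w, b, data)
after = loss(w, b, data)
assert after < before       # training reduced the loss
```

Backpropagation is exactly this chain-rule computation of the gradients, extended layer by layer through a deep network.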