
Algorithms 2019, 12(9), 183; https://doi.org/10.3390/a12090183

Article
An Intelligent Warning Method for Diagnosing Underwater Structural Damage
by 1, 1,* and 2,*
1 College of Civil Engineering, Northeast Forestry University, Harbin 150040, China
2 College of Science, Northeast Forestry University, Harbin 150040, China
* Authors to whom correspondence should be addressed.
Received: 16 July 2019 / Accepted: 26 August 2019 / Published: 30 August 2019

Abstract

A number of intelligent warning techniques have been implemented for diagnosing underwater infrastructure damage, partially replacing human-conducted on-site inspections. However, widely varying real-world conditions (e.g., adverse environments, limited sample spaces, and complex defect types) hinder the wide adoption of intelligent warning techniques. To overcome these challenges, this paper proposes an intelligent algorithm combining the gray level co-occurrence matrix (GLCM) with the self-organization map (SOM) for accurate diagnosis of underwater structural damage. To optimize the generative criterion for GLCM construction, a triangle algorithm was proposed based on orthogonal experiments. The constructed GLCMs were used to evaluate the texture features of the regions of interest (ROI) of micro-damage images of underwater structures and to extract texture characteristic parameters of the damage images. The digital feature screening (DFS) method was used to obtain the most relevant features as the input to the SOM network. Using the unique topology information of the SOM network, the classification result, recognition efficiency, and parameters such as the number of network layers, hidden-layer nodes, and learning steps were optimized. The robustness and adaptability of the proposed approach were tested on underwater structure images through the DFS method. The results show that the proposed method performs markedly better than existing approaches and can diagnose structural damage in realistic underwater situations.
Keywords:
structural health monitoring; digital image processing; damage; gray level co-occurrence matrix; self-organization map

1. Introduction

Underwater structures, including bridges and dams, are increasingly susceptible to deterioration as they are exposed to unavoidable factors such as wind, waves, corrosion, and hydraulic erosion. This inevitable process raises urgent warning issues, which have motivated regular monitoring of these structures. Thus, many research groups have proposed structural damage warning techniques [1,2,3]. Structural damage warning is an important part of the damage identification process, and solving the damage detection problem for large, complex engineering structures remains a great challenge [4,5,6].
Recently, a few satisfactory schemes for damage warning of underwater structures have been proposed [7,8]. One possible solution is combining visual inspection with machine learning [9]. Rutkowski and Prokopiuk proposed identifying the contamination source location in a water distribution system (WDS) using a learning vector quantization (LVQ) neural network classifier [10].
Chatterjee et al. proposed a neural network training method based on particle swarm optimization, which addresses damage prediction for multi-story reinforced concrete building structures by estimating their future damage possibility [11]. Kai Xu et al. proposed a new approach to damage detection of a concrete column structure subjected to blast loads using embedded piezoceramic smart aggregates [12]. According to research by Feng et al., almost all types of structural damage can be effectively identified by image processing [13,14,15]. Zhu et al. proposed a new retrieval method for crack defect properties, which locates crack points with state-of-the-art crack detection techniques and uses image thinning to identify the skeleton structure at those points [16]. Molero et al. applied ultrasound imaging to assess the extent of damage during concrete freeze–thaw cycles and extracted the NAAP damage index as an evaluation criterion [17]. German et al. presented a new method for automatically detecting the spalling area of reinforced concrete columns and measured its characteristics in image data [18]. Based on data provided by self-powered wireless sensors, Hasni et al. proposed an artificial-intelligence-based method for detecting fatigue cracking in steel beams [19].
Although these methods have achieved superior results by combining image properties with neural network models for damage identification, accurate damage diagnosis for real-world underwater structures remains difficult [20,21,22,23]. Acquiring labeled image data from real-world underwater situations is restricted, and identification accuracy is influenced by environmental conditions and the defect type. The gray level co-occurrence matrix is a typical algorithm in statistical texture analysis and is widely used in various fields because of its good discrimination ability [24,25]. As an important pattern recognition method, the self-organization map (SOM) neural network has been widely used in the automatic identification of signals, texts, images, and other data due to its capabilities in memory, learning, and generalization [26,27,28]. Additionally, the combination of the gray level co-occurrence matrix (GLCM) and the self-organization map (SOM) is widely used for tumor diagnosis, wood defect classification, and cloud classification [29,30,31,32]. However, few algorithms for detecting damage in underwater concrete structures combine the two methods. Therefore, an intelligent warning method is essential for the accurate diagnosis of structural damage. This paper combines the GLCM and the SOM neural network to improve the recognition performance for underwater structural damage.

2. Methodology

The construction of the GLCM is proposed for characterizing the micro-damage of underwater structures in order to obtain an effective micro-damage diagnostic factor. Figure 1 shows micro-damage patterns of common underwater structures. According to the analysis of the image properties, different types of micro-damage have a variety of texture primitives and exhibit their respective texture characteristics [33,34,35,36]. The GLCM can comprehensively evaluate the overall behavior of different damage types in gray space. It is therefore appropriate to use the GLCM as a texture analysis method to characterize the apparent micro-damage morphology of underwater structures.

2.1. Gray Level Co-Occurrence Matrix

The gray level co-occurrence matrix (GLCM) counts, over the entire image, the number of pixel pairs that satisfy a specified condition in gray space. Suppose the pixel pair coordinates are (x, y) and (x + dx, y + dy), and the pixel values are f(x, y) = i and f(x + dx, y + dy) = j. When the image gray level is g, the criterion of the sliding window is defined as follows. The pixel pair spacing is d, and the angle between the connecting line and the horizontal direction is θ. Among them, g, d, and θ form the generative criterion of the GLCM. P(i, j, d, θ) is the number of times such pixel pairs appear together. The GLCM can be expressed by Equation (1):
P(i, j, d, θ) = #{ [(x, y), (x + dx, y + dy)] | f(x, y) = i, f(x + dx, y + dy) = j },
where θ is 0°, 45°, 90°, and 135°.
Besides, dx and dy are the components of the offset vector, given as follows:
dx = d cos θ, dy = d sin θ.
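Equation (1) and the offsets above translate directly into code. The following is a minimal sketch, not the authors' implementation: the function name and interface are hypothetical, and the image is assumed to be pre-quantized to g gray levels.

```python
import numpy as np

def glcm(img, g, d, theta_deg):
    """Count co-occurring pixel pairs P(i, j, d, theta) per Equation (1).

    img: 2-D array already quantized to gray levels 0..g-1.
    d, theta_deg: generative step length and angle (0, 45, 90, or 135).
    """
    # Offset vector from Equation (2): dx = d*cos(theta), dy = d*sin(theta)
    dx = int(round(d * np.cos(np.radians(theta_deg))))
    dy = int(round(d * np.sin(np.radians(theta_deg))))
    P = np.zeros((g, g), dtype=np.int64)
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < cols and 0 <= y2 < rows:
                # pixel pair (x, y) -> (x+dx, y+dy) has gray values (i, j)
                P[img[y, x], img[y2, x2]] += 1
    return P
```

For the direction-averaged θ used later in Section 3.2, the matrices for the four angles would be computed separately and their feature values averaged.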

2.2. Feature Parameters

In order to quantitatively describe the direction, trend, and complexity of different micro-damage types in gray space, the gray level co-occurrence matrix was applied for feature extraction [30]. Table 1 shows the calculation formulas and texture characteristics of the parameters.
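Table 1 is not reproduced here, but most GLCM texture parameters follow standard Haralick-style definitions. The sketch below computes a few of them from a co-occurrence matrix under that assumption; the exact formulas in the paper's Table 1 may differ in detail.

```python
import numpy as np

def texture_features(P):
    """A few standard GLCM statistics (assumed Haralick-style definitions;
    the paper's Table 1 gives its exact formulas)."""
    p = P / P.sum()                          # normalize to joint probabilities
    g = p.shape[0]
    i, j = np.indices((g, g))
    asm = float(np.sum(p ** 2))              # angular second moment (energy)
    entropy = float(-np.sum(p[p > 0] * np.log(p[p > 0])))
    contrast = float(np.sum((i - j) ** 2 * p))
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * p))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * p))
    correlation = float(np.sum((i - mu_i) * (j - mu_j) * p) / (sd_i * sd_j))
    idm = float(np.sum(p / (1.0 + (i - j) ** 2)))  # inverse difference moment
    return {"asm": asm, "entropy": entropy, "contrast": contrast,
            "correlation": correlation, "idm": idm}
```

Feature vectors built this way (one value per parameter, averaged over the four angles) would then be screened and fed to the SOM network described next.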

2.3. SOM Networks Model

The self-organizing map (SOM) neural network draws on a unique property of biological neuronal systems: strengthening central neurons while inhibiting peripheral ones. This pattern of competition between neurons is a mechanism formed by the brain's self-organization [37]. SOM retains the spatial topology of the input neurons due to its unique neighborhood characteristics and obtains the best mapping by continuously adjusting the connection weights of the winning neurons and neighboring network nodes [38]. The SOM neural network closely combines the center-enhancing, surround-suppressing phenomenon of biological neuron systems with the self-organization characteristics of the human brain. Therefore, SOM retains the topology of the network and can detect faults from a limited sample space, effectively reducing the number of samples required by the intelligent diagnostic system. Figure 2 shows a typical self-organizing feature mapping network model consisting of an input layer and a competition layer. The input layer contains n neurons, and the competition layer is a two-dimensional planar array composed of m × n neurons [39]. This section utilizes SOM to identify micro-damage in structures.

2.4. Learning Steps

(i)
Initialization: Set random values in [0,1] as the initial connection weights between the input neurons and the output neurons. A set, Sj, of neighboring output neurons is selected, wherein Sj(0) represents the set of neighboring neurons of neuron j at time t = 0, and Sj(t) the set at time t, which shrinks as training proceeds.
(ii)
Set the input of the neural network: Make the sample feature parameters into the following matrix and input them to the SOM network:
X(n) = [x1(n), x2(n), x3(n), …, xN−1(n), xN(n)]^T.
(iii)
Calculate the Euclidean distance: The Euclidean distance, dij, from input layer neuron i to mapping layer neuron j is given by:
dij = ‖X − Wj‖ = √( ∑i=1…n [xi(t) − wij(t)]² ).
In the equation, wij is the weight of the input layer neuron, i, to the mapping layer neuron, j. Wj is the connection weight of the neuron, j, on the mapping layer.
(iv)
Obtain the winning neuron: The position of the winning neuron can be obtained by calculating the minimum Euclidean distance between the input vector and the weight vector. When the input vector is X and the winning neuron is denoted by c, the formula is expressed as:
‖X − Wc‖ = min_i ‖X − Wi‖, i = 1, 2, 3, …, M − 1, M,
where x is the input vector and the winning neuron is labeled c. Wc is the weight of the winning neuron, c. Wi is the connection weight of the neuron, i, on the mapping layer.
(v)
Adjust weight: The connection weight of the input neuron and all neurons in the competition neighborhood are corrected by Equation (6):
Δwij = wij(t + 1) − wij(t) = η(t)[xi(t) − wij(t)].
Among them, t is the discrete time step, and η(t) is the learning rate at time t; for example, η(t) = 1/t or η(t) = 0.2(1 − t/1000). The value range of η(t) is [0,1].
(vi)
Determine whether the output result meets the expected requirements: If the result meets the previously set requirements, then end; if not, return to step (ii) to continue.
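Steps (i) through (vi) above can be condensed into a minimal training loop. This is an illustrative sketch, not the authors' implementation: the Gaussian neighborhood function and the shrinking-radius schedule are common SOM choices assumed here, while the learning rate follows the η(t) = 0.2(1 − t/1000) option from step (v).

```python
import numpy as np

def train_som(X, m, n, steps=200, seed=0):
    """Minimal SOM following steps (i)-(vi) on an m x n competition layer."""
    rng = np.random.default_rng(seed)
    W = rng.random((m * n, X.shape[1]))            # (i) random weights in [0,1]
    grid = np.array([(r, c) for r in range(m) for c in range(n)], dtype=float)
    for t in range(1, steps + 1):
        x = X[rng.integers(len(X))]                # (ii) present an input
        d = np.linalg.norm(W - x, axis=1)          # (iii) Euclidean distances
        c = int(np.argmin(d))                      # (iv) winning neuron
        eta = 0.2 * (1 - t / 1000)                 # (v) learning rate (t <= 1000)
        # assumed schedule: neighborhood radius shrinks as training proceeds
        radius = max(0.3, (m + n) / 2 * (1 - t / steps))
        gd = np.linalg.norm(grid - grid[c], axis=1)
        h = np.exp(-(gd ** 2) / (2 * radius ** 2))  # Gaussian neighborhood
        W += eta * h[:, None] * (x - W)             # (v) weight update
    return W

def winner(W, x):
    """Index of the neuron whose weight vector is nearest to input x."""
    return int(np.argmin(np.linalg.norm(W - x, axis=1)))
```

After training, each sample is assigned to its winning neuron, and grouping the winning neurons yields the damage classes discussed in Section 3.5.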

3. Experiment and Result Analysis

A diagnosis model based on SOM and GLCM was established in order to obtain the results of the micro-damage recognition of underwater structures. The diagnosis steps are shown in Figure 3.
As shown in Figure 3, the diagnostic process consists of six steps. The diagnosis process is detailed on the left side, including the generation of grayscale co-occurrence matrices, the establishment of standard samples, the training of network models, and the acquisition of diagnostic results. The right side of Figure 3 is the information about the important parameters or results corresponding to the diagnostic steps.

3.1. Image Acquisition and Processing

An underwater micro-damage visual diagnosis system was established according to the characteristics of underwater structures, as shown in Figure 4. The system mainly contains an HCCK-102 series crack detection system (including an industrial camera, an industrial endoscope, and a control interface), a maintenance pool, concrete test blocks, and a computer. The industrial endoscope has a magnification of 40× and a measurement accuracy of 0 to 0.08 mm. Micro-damage images of the underwater concrete test blocks were obtained with this diagnosis system. Image clarity was severely reduced by turbid water, insufficient light, and the protective cover fitted to the micro-probe. The underwater micro-damage images were therefore processed by region of interest (ROI) interception, histogram equalization, and median filtering.
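The three preprocessing operations (ROI interception, histogram equalization, median filtering) can be sketched with plain NumPy. The ROI tuple interface and the 3×3 filter window are illustrative assumptions; in practice a library such as OpenCV would typically be used.

```python
import numpy as np

def preprocess(img, roi):
    """ROI interception, histogram equalization, 3x3 median filter.

    img: 2-D uint8 grayscale image; roi: (top, bottom, left, right),
    a hypothetical interface for this sketch.
    """
    t, b, l, r = roi
    patch = img[t:b, l:r]                    # 1) ROI interception
    # 2) histogram equalization via the cumulative distribution function
    hist = np.bincount(patch.ravel(), minlength=256)
    cdf = hist.cumsum() / patch.size
    eq = (cdf[patch] * 255).astype(np.uint8)
    # 3) 3x3 median filter (interior pixels only, for brevity)
    out = eq.copy()
    for y in range(1, eq.shape[0] - 1):
        for x in range(1, eq.shape[1] - 1):
            out[y, x] = np.median(eq[y - 1:y + 2, x - 1:x + 2])
    return out
```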

3.2. Triangle Algorithm

The accuracy of the extracted feature parameters depends on the optimal generation criteria, including the image gray level, g, the generative step length, d, and the generative angle, θ. Therefore, it is crucial to optimize these factors to obtain accurate and effective feature parameters. Generally, the orthogonal test method (OTM) and the full test method (FTM) are used to establish the generative criterion. However, the OTM omits part of the criterion space, and the FTM reduces detection efficiency. Therefore, a new triangulation algorithm for determining the generative criterion is presented in this paper. Figure 5 is a diagram of the algorithm.
Step 1:
The initial range of the factor of the generative criterion is determined according to the properties of the micro-damage image.
Step 2:
Confirm the generative angle, θ, by the theory of image rotation invariance. In Figure 5, sequence A is the generative step length, d (d1–dn), and sequence B is the image gray level, g (g1–gn).
Step 3:
The sequence A is linked to the sequence B, and θ is then joined to sequence A and sequence B, respectively.
Step 4:
Extract all dn-θ-gt combinations. Then, a triangular combination can form.
The algorithm was validated based on the underwater micro-damage diagnosis system. Based on the comparative analysis, the unique advantages of the triangulation algorithm were proven.
In order to ensure the rotation invariance of the image features, the average over the directions 0°, 45°, 90°, and 135° was selected as the generative angle, θ. Figure 6 shows the variation of the characteristic parameters with θ. The differences in the characteristic parameters of the micro-damage images between directions are small. Therefore, the generative angle, θ, of the generation criterion was fixed as this average.
The initial range of the generated standard factor was obtained based on the characteristics of the micro-damage image:
(1)
Generative angle, θ: The average of directions of 0°, 45°, 90°, and 135°.
(2)
Image gray level: gt = 2^(m+2); among them, t = m + 2, with t taken as an integer in [1,6].
(3)
Generative step length, d: Take the integer of [1,6].
The combination form of the generation standard, dn-θ-gt, was obtained by the triangulation algorithm. It has a total of n × t = n × (m + 2) = 36 groups. The OTM requires determination of the values of the two construction factors in advance to obtain the combined form. If the two construction factors select g and d, the d-θ-g forms a total of 1 × 1 × 4 = 4 groups. Additionally, the FTM does not specify the range of variation of any factor. If the values of d and g are the same as the triangulation algorithm, the d-θ-g forms a total of 6 × 6 × 4 = 144 groups. Experimental verification and comparison results showed that this method can efficiently obtain all the generation criteria of micro damage. The proposed triangulation algorithm obtained the appropriate combination form based on the image properties and effectively improved the detection efficiency.
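The combination counts above are easy to verify: with θ fixed to the direction average, the triangle algorithm enumerates only d–g pairs. A small sketch follows (the 2^t gray-level quantization is an assumption based on this section's notation):

```python
# Triangle algorithm: theta is fixed (direction-averaged), so only d and g
# vary, giving n x t = 6 x 6 = 36 combinations -- versus the FTM's
# 6 x 6 x 4 = 144 groups when all four angles are enumerated separately.
d_values = list(range(1, 7))               # generative step length d in [1, 6]
g_values = [2 ** t for t in range(2, 8)]   # six gray levels (assumed 2^t)
triangle = [(d, "theta_avg", g) for d in d_values for g in g_values]
print(len(triangle))   # 36
```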

3.3. Optimization of Generative Criterion

To improve the characterization of structural damage, the generative criterion of the GLCM was optimized based on the theory of difference maximization. Figure 7 shows the variation tendency with d, in which the selected characteristic parameters show a good correlation with the change trend of d. Figure 8 shows the variation with g; the selected characteristic parameters are sensitive to the changing trend of g.
It can be seen that when d > 4, the characteristic parameters become essentially stable; when d = 5, the sample groups are best distinguished; and when d = 6, the correlation, significant clustering, and cluster shade characteristic parameters show a clear downward trend in aggregation.
Figure 8 shows the variation with g of six characteristic parameters, including the angular second moment, entropy, maximum probability, etc. The analysis shows that when g = 5, each characteristic parameter exhibits high discreteness and high stability.
All the analyses show that the optimal generative criterion, dn-θ-gt, for underwater structure micro-damage is d5-θ-g7 (n = 5, t = 7), with θ taken as the average of the four directions 0°, 45°, 90°, and 135°.

3.4. Establishing a Standard Sample Label

For the purpose of obtaining the standard performance parameters of the SOM diagnostic model, the standard sample label was established using the digital feature screening (DFS) method under the optimal generative criteria (d5-θ-g7, where θ averages 0°, 45°, 90°, and 135°). The DFS method establishes screening criteria by observing the trends of the characteristic parameters for different diagnostic types. The screening criteria are determined by the size of the intersection interval between different damage types: the smaller the crossover interval, the better the screening result and the more suitable the parameter is for establishing a diagnostic model; otherwise, the parameter is not suitable for diagnosing underwater structural damage. Figure 9, Figure 10 and Figure 11 show the screening results.
In Figure 9, the characteristic parameters occupy their respective intervals, giving preferable screening results. In Figure 10, the two types of damage have a small interaction interval, so the screening results are ordinary. In Figure 11, the three characteristic parameters are interlaced with each other, and none has an independent interval.
The angular second moment, P1, entropy, P2, variance of grayscale, P3, correlation, P4, inverse matrix, P5, variance, P6, significant clustering, P7, and sums of mean, P8, were screened. Therefore, the DFS method could effectively solve these problems of characteristic parameter selection. The standard values of each parameter are given in Table 2.
Table 2 gives the standard value of the characteristic parameters corresponding to each kind of damage, which were obtained according to the average value of 200 images.

3.5. Training Network Model

The network model was constructed. The parameters of the network are shown in Table 3.
The created network topology is a two-layer network, and its input consists of the eight feature vectors [x1, x2, x3, x4, x5, x6, x7, x8]. The initial weight was set to 0.125. The neighborhood shape of each network node was set to a hexagon, and the neighborhood radius was set to 3. The competition layer has 8 × 8 = 64 neuron nodes. The training steps were set to 10, 50, 100, 200, 500, and 1000. Table 4 shows the classification results under different numbers of training steps; the numbers in the table represent the classification numbers.
Table 4 shows that when the number of training steps equals 10, a micro-damage diagnosis model is initially established, and the micro-damage is roughly divided into two groups. As the number of training steps increases to 50 and then 100, the diagnostic accuracy improves further, and the micro-damage is divided into three groups. When the number of training steps reaches 200, the micro-damage is effectively distinguished. However, at 500 and 1000 training steps, the classification results are the same as at 200 steps, so the additional steps have no practical significance. Therefore, 200 steps were chosen as the optimal value for the micro-damage diagnosis model.
To verify the reliability of parameter optimization, the topological structure of the winning neurons in Figure 12 was analyzed. The gray hexagon with the number in the figure corresponds to the classification label of the winning neurons.
The unique neighborhood characteristics of SOM and the spatial topology of the input neurons are preserved. Figure 12 shows that the winning neuron numbers are 1, 16, 49, and 64, occupying four different topological spatial locations. The classification result of the topological result graph is completely consistent with the clustering result obtained under the optimal parameters of the network. By analyzing the topology, the network parameters were optimized to build a better SOM network model. It is proven that the setting of various parameters of the network can meet the working requirements of an underwater structure diagnosis system and effectively distinguish the types of micro-damage.
Although the topological positions of the winning neurons were obtained, they cannot determine the damage type of each neuron. Therefore, the neuron distance map was obtained, as shown in Figure 13. Small hexagons represent neurons, and straight lines represent connections between neurons. The distance between neurons can be derived from the Euclidean distance formula. The color depth of the elongated hexagons connecting neurons represents the distance between them.
In Figure 13, the connected region of dark hexagons divides the whole neuron map into four sub-regions, corresponding to different damage types. The neurons connected by light-colored hexagons within a sub-region share the same damage type. Neurons 53 and 60 are surrounded by dark hexagons, corresponding to unknown damage types. Neuron 28 is far from the four damage types and is classified with the nearest class, micro-void defects; alternatively, it can be classified as another unknown defect, distinct from neurons 53 and 60. Therefore, the damage type corresponding to every neuron is obtained.
In order to determine the efficiency of the algorithm, test results were analyzed under different test conditions. Table 5 shows the classification of damage.
Table 5 describes the types of damage that the model can identify and the recognition rate for each of the four damage types. The data in Table 5 show that the proposed algorithm can not only detect the expected defect types but also diagnose other unknown defects in the underwater concrete. It can be inferred that when the concrete structure is healthy, Figure 11 will consist of hexagons of a uniform color. The classification accuracy, R, can be calculated by the formula R = (Nall − Nab)/Nall × 100%. While there are only four kinds of damage in the structure, each damage type can be accurately identified, and the algorithm can also accurately extract unknown defects. Therefore, the algorithm obtains good results in the cases of correct detection, false positives, and false negatives. This proves that the algorithm can be applied to the actual detection of underwater structural damage.

3.6. Application and Validation

In order to prove the practicability and reliability of the network parameters and indicators, this method was applied as an example in a measurement project. Figure 14 shows a schematic diagram of the image acquisition system. The micro-damage images obtained are shown in Figure 15.
A test sample was constructed as shown in Table 6. The case of the predicted classification of the sample is shown in Figure 16.
This paper used comparative analysis and graph analysis. We can conclude that each classified prediction sample corresponds to its damage state: all prediction samples were correctly classified, and the overall classification accuracy was 100%. This shows that the intelligent early warning method achieved satisfactory results and can be used as an intelligent and accurate diagnosis algorithm for underwater structural damage under complex constraints.

4. Conclusions

A vision-based approach for the diagnosis of damage on underwater structures was proposed by combining GLCM and SOM.
The underwater structural micro-damage image database required for training, validation, and application was built with NDT (non-destructive testing) equipment. To secure a wide range of adaptability, the damage images were drawn from the database randomly.
A triangle algorithm was proposed for generative criterion optimization, which extracted comprehensive combinations for the constructed GLCM. The generative criterion was confirmed based on the difference maximization principle as d5-θ-g7: θ is the texture average over the four directions (0°, 45°, 90°, 135°), the step length is d = 5, and the image gray level is g = 128.
The SOM standard performance parameters were obtained by applying the DFS method, including the angular second moment, entropy, variance of grayscale, correlation, inverse matrix, variance, significant clustering, and sums of mean. The number of layers, the number of hidden-layer nodes, and the learning steps were set to optimal values by training the SOM network model and analyzing the SOM topology diagram. A measurement project example proved the robustness and adaptability of the method. The application results showed high performance under complicated conditions, including adverse environmental conditions, limited sample space, and complex defect types.
The results show that the proposed method exhibits good performance and can diagnose structural damage in realistic underwater situations. In the future, intelligent warning techniques are expected to replace human-conducted on-site inspections for the detection of underwater infrastructure damage. The method will also be combined with unmanned aerial vehicles (UAVs) to monitor the damage of civil structures.

Author Contributions

K.L. took part in the entire researching process. J.W. contributed to the experiment design. D.Q. revised the manuscript.

Funding

This research was funded by the Fundamental Research Fund for the Central University (Project No.2572018AB31), Heilongjiang Provincial Postdoctoral Scientific Research Development Fund (Project No.LBH-Q15011).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Farrar, C.R.; Worden, K. An introduction to structural health monitoring. Philos. Trans. A Math. Phys. Eng. Sci. 2010, 365, 303–315. [Google Scholar] [CrossRef] [PubMed]
  2. Amezquita-Sanchez, J.P.; Adeli, H. Signal Processing Techniques for Vibration-Based Health Monitoring of Smart Structures. Arch. Comput. Methods Eng. 2016, 23, 1–15. [Google Scholar] [CrossRef]
  3. Bhuiyan, M.Z.A.; Wang, G.; Wu, J.; Cao, J.; Liu, X.; Wang, T. Dependable Structural Health Monitoring Using Wireless Sensor Networks. IEEE Trans. Dependable Secure Comput. 2017, 14, 363–376. [Google Scholar] [CrossRef]
  4. Park, J.Y.; Lee, J.R. Application of the ultrasonic propagation imaging system to an immersed metallic structure with a crack under a randomly oscillating water surface. J. Mech. Sci. Technol. 2017, 31, 4099–4108. [Google Scholar] [CrossRef]
  5. Sidibe, Y.; Druaux, F.; Lefebvre, D.; Maze, F.L. Signal processing and Gaussian neural networks for the edge and damage detection in immersed metal plate-like structures. Artif. Intell. Rev. 2016, 46, 289–305. [Google Scholar] [CrossRef]
  6. Abdeljaber, O.; Avci, O.; Kiranyaz, S.; Gabbouj, M.; Inman, J.M. Real-time vibration-based structural damage detection using one-dimensional convolutional neural networks. J. Sound Vib. 2017, 388, 154–170. [Google Scholar] [CrossRef]
  7. Bhuiyan, M.Z.A.; Cao, J.; Wang, G. Deploying Wireless Sensor Networks with Fault Tolerance for Structural Health Monitoring. In Proceedings of the IEEE 8th International Conference on Distributed Computing in Sensor Systems, Hangzhou, China, 16–18 May 2012. [Google Scholar]
  8. Zong, Z.; Zhong, R.; Zheng, P.; Qin, Z.; Liu, Q. Research Progress and Challenges of Bridge Structure Damage Prognosis and Safety Prognosis Based on Health Monitoring. China J. Highw. Transp. 2014, 27, 46–57. [Google Scholar]
  9. Alavi, A.H.; Hasni, H.; Jiao, P.; Borchani, W.; Lajnef, N. Fatigue cracking detection in steel bridge girders through a self-powered sensing concept. J. Constr. Steel Res. 2017, 128, 19–38. [Google Scholar] [CrossRef]
  10. Rutkowski, T.A.; Prokopiuk, F. Identification of the Contamination Source Location in the Drinking Distribution System Based on the Neural Network Classifier. IFAC-PapersOnLine 2018, 51, 15–22. [Google Scholar] [CrossRef]
  11. Chatterjee, S.; Sarkar, S.; Hore, S.; Dey, N.; Ashour, A.S.; Balas, V.E. Particle Swarm Optimization Trained Neural Network for Structural Failure Prediction of Multi-storied RC Buildings. Neural Comput. Appl. 2016, 28, 2005–2016. [Google Scholar] [CrossRef]
  12. Xu, K.; Deng, Q.; Cai, L.; Ho, S.; Song, G. Damage Detection of a Concrete Column Subject to Blast Loads Using Embedded Piezoceramic Transducers. Sensors 2018, 18, 1377. [Google Scholar] [CrossRef] [PubMed]
  13. Feng, D.; Feng, M.Q. Experimental validation of cost-effective vision-based structural health monitoring. Mech. Syst. Signal Process. 2017, 88, 199–211. [Google Scholar] [CrossRef]
  14. Gao, W.; Zhang, G.; Li, H.; Huo, L.; Song, G. A novel time reversal sub-group imaging method with noise suppression for damage detection of plate-like structures. Struct. Control Health Monit. 2017, 25, e2111. [Google Scholar] [CrossRef]
  15. Cha, Y.J.; Choi, W.; Suh, G.; Mahmoudkhani, S.; Büyüköztürk, O. Autonomous Structural Visual Inspection Using Region-Based Deep Learning for Detecting Multiple Damage Types. Comput. Aided Civ. Infrastruct. Eng. 2017, 33, 731–747. [Google Scholar] [CrossRef]
  16. Zhu, Z.; German, S.; Brilakis, I. Visual retrieval of concrete crack properties for automated post-earthquake structural safety evaluation. Autom. Constr. 2011, 20, 874–883. [Google Scholar] [CrossRef]
  17. Molero, M.; Aparicio, S.; Al-Assadi, G.; Casati, M.J.; Hernandez, M.G.; Anaya, J.J. Evaluation of freeze–thaw damage in concrete by ultrasonic imaging. NDT E Int. 2012, 52, 86–94. [Google Scholar] [CrossRef]
  18. German, S.; Brilakis, I.; Desroches, R. Rapid entropy-based detection and properties measurement of concrete spalling with machine vision for post-earthquake safety assessments. Adv. Eng. Inform. 2012, 26, 846–858. [Google Scholar] [CrossRef]
  19. Hasni, H.; Alavi, A.H.; Jiao, P.; Lajnef, N. Detection of fatigue cracking in steel bridge girders: A support vector machine approach. Arch. Civ. Mech. Eng. 2017, 17, 609–622. [Google Scholar] [CrossRef]
  20. Vetrivel, A.; Gerke, M.; Kerle, N.; Nex, F.; Vosselman, G. Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning. ISPRS J. Photogramm. Remote Sens. 2017, 140, 45–59. [Google Scholar] [CrossRef]
  21. Yan, Y.; Cheng, L.; Wu, Z.; Yam, L.H. Development in Vibration-Based Structural Damage Detection Technique. Mech. Syst. Signal Process. 2007, 21, 2198–2211. [Google Scholar] [CrossRef]
  22. Rafiei, M.H.; Adeli, H. A novel unsupervised deep learning model for global and local health condition assessment of structures. Eng. Struct. 2018, 156, 598–607. [Google Scholar] [CrossRef]
  23. Cha, Y.J.; Choi, W.; Büyüköztürk, O. Deep Learning-Based Crack Damage Detection Using Convolutional Neural Networks. Comput. Aided Civ. Infrastruct. Eng. 2017, 32, 361–378. [Google Scholar] [CrossRef]
  24. Liu, L.; Fieguth, P.; Guo, Y.; Wang, X.; Pietikäinen, M. Local binary features for texture classification: Taxonomy and experimental study. Pattern Recognit. 2017, 62, 135–160. [Google Scholar] [CrossRef]
  25. Malegori, C.; Franzetti, L.; Guidetti, R.; Casiraghi, E.; Rossi, R. GLCM, an image analysis technique for early detection of biofilm. J. Food Eng. 2016, 185, 48–55. [Google Scholar] [CrossRef]
  26. Maddalena, L.; Petrosino, A. A Self-Organizing Approach to Background Subtraction for Visual Surveillance Applications. IEEE Trans. Image Process. 2008, 17, 1168–1177. [Google Scholar] [CrossRef]
  27. Li, X.; Zhu, D. An Adaptive SOM Neural Network Method to Distributed Formation Control of a Group of AUVs. IEEE Trans. Ind. Electron. 2018, 65, 8260–8270. [Google Scholar] [CrossRef]
  28. Merainani, B.; Rahmoune, C.; Benazzouz, D.; Ould-Bouamama, B. A novel gearbox fault feature extraction and classification using Hilbert empirical wavelet transform, singular value decomposition, and SOM neural network. J. Vib. Control 2017, 24, 2512–2531. [Google Scholar] [CrossRef]
  29. Li, Y.; Yao, X.; Li, W.; Li, C. Model Optimization of Wood Property and Quality Tracing Based on Wavelet Transform and NIR Spectroscopy. Spectrosc. Spectr. Anal. 2018, 38, 1384–1392. [Google Scholar]
  30. Kamruzzaman, M.; Elmasry, G.; Sun, D.W.; Allen, P. Non-destructive assessment of instrumental and sensory tenderness of lamb meat using NIR hyperspectral imaging. Food Chem. 2013, 141, 389–396. [Google Scholar] [CrossRef]
  31. Raju, P.; Rao, V.M.; Rao, B.P. Optimal GLCM combined FCM segmentation algorithm for detection of kidney cysts and tumor. Multimed. Tools Appl. 2019, 78, 18419–18441. [Google Scholar] [CrossRef]
  32. Ancy, C.A.; Nair, L.S. Tumour Classification in Graph-Cut Segmented Mammograms Using GLCM Features-Fed SVM. Intell. Eng. Inform. 2018, 695, 197–208. [Google Scholar]
  33. Chen, J.; Chen, Z.; Chi, Z.; Fu, H. Facial Expression Recognition in Video with Multiple Feature Fusion. IEEE Trans. Affect. Comput. 2018, 9, 38–50. [Google Scholar] [CrossRef]
  34. Oliveira, R.B.; Papa, J.P.; Pereira, A.S.; Tavares, J.M.R.S. Computational methods for pigmented skin lesion classification in images: Review and future trends. Neural Comput. Appl. 2016, 29, 613–636. [Google Scholar] [CrossRef]
  35. Torres-Alegre, S.; Fombellida, J.; Piñuela-Izquierdo, J.A.; Andina, D. AMSOM: Artificial metaplasticity in SOM neural networks—Application to MIT-BIH arrhythmias database. Neural Comput. Appl. 2018, 1–8. [Google Scholar] [CrossRef]
  36. Rodriguez-Galiano, V.F.; Chica-Olmo, M.; Abarca-Hernandez, F.; Atkinson, P.M.; Jeganathan, C. Random Forest classification of Mediterranean land cover using multi-seasonal imagery and multi-seasonal texture. Remote Sens. Environ. 2012, 121, 93–107. [Google Scholar] [CrossRef]
  37. Rokach, L. A survey of Clustering Algorithms. Data Mining Knowl. Discov. Handb. 2009, 16, 269–298. [Google Scholar]
  38. Vesanto, J.; Alhoniemi, E. Clustering of the self-organizing map. IEEE Trans. Neural Netw. 2000, 11, 586–600. [Google Scholar] [CrossRef]
  39. Kohonen, T. Essentials of the self-organizing map. Neural Netw. 2013, 37, 52–65. [Google Scholar] [CrossRef]
Figure 1. Micro-damage patterns.
Figure 2. The Self-Organizing Mapping (SOM) network model.
Figure 3. The diagram of the diagnostic process.
Figure 4. Acquisition system.
Figure 5. Algorithm.
Figure 6. Variations of θ.
Figure 7. Tendency of d (when g = 2⁸). Samples 1–9 represent nine sample images of a certain defect type.
Figure 8. Tendency of g (when d = 5). Samples 1–9 represent nine sample images of a certain defect type.
Figure 9. Parameters with preferable screening results.
Figure 10. Parameters with ordinary screening results.
Figure 11. Parameters with inferior screening results.
Figure 12. Winning neuron topology.
Figure 13. Distance between the neurons.
Figure 14. Image acquisition.
Figure 15. Micro-damage image. (a) Micro-honeycombs; (b) Micro-voids; (c) Micro-cracks; (d) Micro-depressions.
Figure 16. Sample prediction classifications.
Table 1. Formula of parameters and their texture features.
| No. | Parameter | Calculation Formula | Texture Characteristic |
|---|---|---|---|
| T1 | Angular second moment | $\sum_{i=1}^{g}\sum_{j=1}^{g} p^{2}(i,j,d,\theta)$ | Uniformity of the gray distribution and degree of texture. |
| T2 | Sum average | $\sum_{k=2}^{2g} k\,P_{X}(k)$ | Change of brightness. |
| T3 | Sum variance | $\sum_{k=2}^{2g}(k-T_{2})^{2}P_{X}(k)$ | Texture period size. |
| T4 | Maximum probability | $\max_{i,j}\,p(i,j,d,\theta)$ | Distribution of the dominant texture. |
| T5 | Sum entropy | $-\sum_{k=2}^{2g} P_{X}(k)\log[P_{X}(k)]$ | Texture complexity. |
| T6 | Variance | $\sum_{i=1}^{g}\sum_{j=1}^{g}(i-m)^{2}p(i,j,d,\theta)$ | Texture periodicity. |
| T7 | Grayscale variance | $\sum_{i=1}^{g}\sum_{j=1}^{g}(i-j)^{2}\,p^{2}(i,j,d,\theta)$ | Texture distribution. |
| T8 | Correlation | $\big[\sum_{i=1}^{g}\sum_{j=1}^{g} i\,j\,p(i,j,d,\theta)-u_{1}u_{2}\big]\big/(d_{1}d_{2})$ | Main direction of the texture. |
| T9 | Inverse difference moment | $\sum_{i=1}^{g}\sum_{j=1}^{g} p(i,j,d,\theta)\big/\big[1+(i-j)^{2}\big]$ | Local texture changes. |
| T10 | Cluster shade | $\sum_{i=1}^{g}\sum_{j=1}^{g}\big[(i-u_{1})+(j-u_{2})\big]^{3}p(i,j,d,\theta)$ | Texture uniformity. |
| T11 | Cluster prominence | $\sum_{i=1}^{g}\sum_{j=1}^{g}\big[(i-u_{1})+(j-u_{2})\big]^{4}p(i,j,d,\theta)$ | Texture uniformity. |
| T12 | Entropy | $-\sum_{i=1}^{g}\sum_{j=1}^{g} p^{2}(i,j,d,\theta)\log_{10} p(i,j,d,\theta)$ | Texture randomness. |

Note: $g$ takes a power of 2; $u_{1}=\sum_{i=1}^{g} i\sum_{j=1}^{g} p(i,j,d,\theta)$; $u_{2}=\sum_{j=1}^{g} j\sum_{i=1}^{g} p(i,j,d,\theta)$; $d_{1}^{2}=\sum_{i=1}^{g}(i-u_{1})^{2}\sum_{j=1}^{g} p(i,j,d,\theta)$; $d_{2}^{2}=\sum_{j=1}^{g}(j-u_{2})^{2}\sum_{i=1}^{g} p(i,j,d,\theta)$; $T_{2}=\sum_{k=2}^{2g} k\,P_{X}(k)$; $m$ is the mean of $p(i,j,d,\theta)$; $P_{X}(k)=\sum_{i=1}^{g}\sum_{j=1}^{g} p(i,j,d,\theta)\big|_{i+j=k}$, $k=2,3,\dots,2g$.
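The parameters in Table 1 are computed directly from a normalized co-occurrence matrix. The sketch below is a minimal illustration, not the paper's implementation: the function names, the 0°/45°/90°/135° offset convention, and the choice of features are our assumptions.

```python
import numpy as np

def glcm(img, d=1, theta=0, levels=8):
    """Normalized gray-level co-occurrence matrix p(i, j, d, theta).

    img: 2-D array of integer gray levels in [0, levels);
    theta: offset direction in degrees (0, 45, 90, or 135).
    """
    dy, dx = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}[theta]
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[img[y, x], img[y2, x2]] += 1  # count co-occurring gray pairs
    total = P.sum()
    return P / total if total else P

def texture_features(P):
    """Three of the Table 1 parameters, evaluated on a normalized GLCM."""
    g = P.shape[0]
    i, j = np.mgrid[1:g + 1, 1:g + 1]          # gray-level index grids
    return {
        "T1": (P ** 2).sum(),                  # angular second moment
        "T4": P.max(),                         # maximum probability
        "T9": (P / (1 + (i - j) ** 2)).sum(),  # inverse difference moment
    }
```

For a perfectly uniform image the matrix concentrates in a single cell, so T1, T4, and T9 all reach their maximum of 1, matching the "uniformity" interpretations in the table.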
Table 2. Normal sample.
| Damage Type | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 |
|---|---|---|---|---|---|---|---|---|
| Micro-honeycombs | 0.45219 | 1.23347 | 0.341173 | 0.659015 | 1.487459 | 16074.99305 | 0.6085123 | 2113537.15 |
| Micro-depressions | 0.735626 | 0.844082 | 0.245272 | 0.986124 | 1.270514 | 16039.67163 | −0.23044 | 2113536.83 |
| Micro-voids | 0.966008 | 0.645785 | 0.155829 | 3.719458 | 1.034531 | 16113.92133 | 0.185468 | 2113536.14 |
| Micro-cracks | 0.561931 | 0.114353 | 0.047756 | −1.768837 | 1.1816 | 16103.60313 | −0.487734 | 2113536.55 |
Table 3. Network parameters.
| Parameters | Value |
|---|---|
| Input layer nodes | x1–x8 (P1–P8) |
| Weight | 0.125 |
| Neighborhood shape | Hexagon, R = 3 |
| Neurons | 64 |
| Training steps | 10, 50, 100, 200, 500, 1000 |
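A configuration like Table 3 can be prototyped with a compact SOM. The sketch below is a minimal illustration on an 8 × 8 grid of 64 neurons with 8-dimensional inputs; it uses a rectangular grid with a Gaussian neighborhood rather than the paper's hexagonal topology, and the linear decay schedules and function names are our assumptions.

```python
import numpy as np

def train_som(data, grid=8, steps=1000, r0=3.0, lr0=0.5, seed=0):
    """Train a grid x grid SOM on the rows of `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    W = rng.random((grid, grid, n_features))              # neuron weight vectors
    coords = np.stack(np.mgrid[0:grid, 0:grid], axis=-1)  # neuron grid positions
    for t in range(steps):
        x = data[rng.integers(len(data))]
        # best-matching unit: the neuron whose weights are closest to x
        bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (grid, grid))
        frac = t / steps
        r = r0 * (1 - frac) + 1e-9        # shrinking neighborhood radius
        lr = lr0 * (1 - frac)             # decaying learning step
        d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-d2 / (2 * r ** 2))    # Gaussian neighborhood function
        W += lr * h[..., None] * (x - W)  # pull BMU and its neighbors toward x
    return W

def winner(W, x):
    """Grid index (row, col) of the winning neuron for sample x."""
    return np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), W.shape[:2])
```

Feeding the normalized feature vectors of the four damage types through `winner` after training reproduces the kind of winning-neuron assignments tabulated in Table 4, with longer training tending to separate the classes.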
Table 4. Classification results for different steps.
| Number of Training Steps | Micro-honeycombs | Micro-voids | Micro-depressions | Micro-cracks | Clustering Result |
|---|---|---|---|---|---|
| 10 | 55 | 37 | 37 | 55 | 50% |
| 50 | 43 | 37 | 37 | 55 | 75% |
| 100 | 43 | 1 | 37 | 37 | 75% |
| 200 | 49 | 1 | 16 | 64 | 100% |
| 500 | 49 | 1 | 16 | 64 | 100% |
| 1000 | 49 | 1 | 16 | 64 | 100% |
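The clustering result in Table 4 can be read as the share of the four damage classes that win distinct neurons: at 10 steps two classes share neuron 55 and two share neuron 37 (two distinct winners of four, i.e., 50%), while from 200 steps onward all four winners differ (100%). A one-line helper capturing this reading (the interpretation and function name are ours, not the paper's):

```python
def clustering_result(winning_neurons):
    """Percentage of damage classes mapped to unique winning neurons."""
    distinct = len(set(winning_neurons))
    return 100.0 * distinct / len(winning_neurons)
```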
Table 5. Classification of damage.
| Damage Type | Sample Classification Number | Classification Accuracy |
|---|---|---|
| Micro-honeycombs | 36, 41, 42, 43, 44, 49, 50, 51, 52, 57, 58, 59 | 80% |
| Micro-voids | 1, 2, 3, 4, 9, 10, 17, 18, 19, 20, 25, 26, 27, 33, 34, 35 | 93.33% |
| Micro-depressions | 5, 6, 7, 8, 11, 12, 13, 14, 15, 16, 21, 22, 23, 24, 29 | 100% |
| Micro-cracks | 30, 31, 32, 38, 39, 40, 45, 46, 47, 48, 54, 55, 56, 61, 62, 63, 64 | 86.66% |
| Unknown situation | 28, 37, 53, 60 | – |
Table 6. Damage prediction samples.
| Damage Type | Sample Number |
|---|---|
| Micro-honeycombs | 1–10 |
| Micro-voids | 11–20 |
| Micro-depressions | 21–30 |
| Micro-cracks | 31–40 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).