Article

Damage Diagnosis for Offshore Wind Turbine Foundations Based on the Fractal Dimension

Ervin Hoxha, Yolanda Vidal and Francesc Pozo

Campus Diagonal-Besòs (CDB), Control, Modeling, Identification and Applications (CoDAlab), Department of Mathematics, Escola d’Enginyeria de Barcelona Est (EEBE), Universitat Politècnica de Catalunya (UPC), Eduard Maristany, 16, 08019 Barcelona, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(19), 6972; https://doi.org/10.3390/app10196972
Submission received: 11 September 2020 / Revised: 28 September 2020 / Accepted: 29 September 2020 / Published: 5 October 2020
(This article belongs to the Special Issue Fault Diagnosis and Control Design Applications of Energy Systems)

Abstract

The cost-competitiveness of offshore wind depends heavily on its capacity to switch from preventive maintenance to condition-based maintenance; that is, to monitor the actual condition of the wind turbine (WT) to decide when and which maintenance needs to be done. In particular, structural health monitoring (SHM) of the foundation (support structure) condition is of utmost importance in offshore-fixed wind turbines. In this work, an SHM strategy is presented to monitor, online and during service, a WT offshore jacket-type foundation. Standard SHM techniques, such as guided waves with a known input excitation, cannot be used in a straightforward way in this particular application, where unknown external perturbations such as wind and waves are always present. To face this challenge, a vibration-response-only SHM strategy is proposed via machine learning methods. In this sense, the fractal dimension is proposed as a suitable feature to identify and classify different types of damage. The proposed proof-of-concept technique is validated in an experimental down-scaled laboratory jacket-type WT foundation undergoing different types of damage.

1. Introduction

The main purpose of structural health monitoring (SHM) is the timely diagnosis of damage that affects the integrity of a structure, and the determination of whether repair or reinforcement actions are required to avoid or delay its degradation. Generally, SHM strategies consist of the following steps:
(i) the strategic placement of sensors in the overall structure;
(ii) data collection and communication; and
(iii) analysis of the measured data.
It is important to note that, in a wide variety of applications, guided waves, a nondestructive approach, are the usual standard. This approach relies on exciting the structure with low-frequency ultrasonic waves and then sensing the reflected response waves. Thus, the method relies heavily on the fact that the input excitation is known and that other perturbations can be filtered or neglected. On the one hand, in civil infrastructures, such as bridges, it is feasible to assume that external perturbations can be neglected or filtered with respect to the induced excitation, see [1,2]. On the other hand, in other applications, as in aerospace, the structure can only be diagnosed with this approach when it is not in service. This strategy is used, for example, in [3], where a multiarea scanning ultrasonic system is built in a hangar to rapidly scan the overall airplane structure. This type of out-of-service diagnosis (the airplane can be diagnosed during no-flight conditions, when it is in the hangar) or neglect of the external perturbations (as in SHM for standard civil structures such as bridges or buildings) cannot be straightforwardly extrapolated to the main research area of the present work: wind turbines. Online and in-service SHM for wind turbines (WTs) is extremely important. WTs are very large structures subject to significant unknown external excitations, such as wind and, in the offshore case, waves. Thus, SHM strategies for WTs must be able to cope with significant unknown external excitations, which hinders the use of the standard exciting-and-sensing approach [4]. To face this challenge, in this work, a vibration-response-only SHM strategy is proposed to monitor, online and during service, a WT offshore fixed foundation by using only the excitation caused by the external and unknown perturbations.
Offshore wind power will expand dramatically in the next two decades, multiplying 15-fold by 2040 to a minimum of 345 gigawatts (GW) of installed capacity, according to the Offshore Wind Outlook 2019 report of the International Energy Agency [5]. However, this achievement will only be possible through the cost-competitiveness of offshore wind, which depends entirely on the capacity of SHM to switch from preventive to predictive maintenance [6]. Thus, SHM for offshore assets is imperative to guarantee their exploitability. Hence, in this work, an SHM methodology for offshore fixed foundations is proposed.
Nowadays, SHM systems for WTs are mostly deployed on blades [7] and towers [8], but research on SHM for offshore support structures is still scarce [9]. The state of the art in this very specific area has three main research lines:
(i) model-based, using, for example, the finite element method as in [10,11,12];
(ii) data-based, using solely experimental and/or real data; and
(iii) a hybrid approach that makes use of real and/or experimental data and numerical models.
Regarding the first option, the work of Stutzmann et al. [13] is noteworthy, where crack detection in monopile offshore foundations is accomplished based on numerical simulations of fatigue cracks. Regarding the second option, a comprehensive review of SHM of offshore WTs through the statistical pattern recognition paradigm is given in [14]. This review shows that the usual strategy for offshore WT damage detection is to identify changes in the modal properties. However, this strategy requires detailed attention to take into account the operational and environmental impact, and usually only damage detection (but not classification) is accomplished. For example, in [15] an SHM approach verified on a full-scale foundation is presented; however, the dynamic variability between different operational cases only allows the final results to indicate an overall stiffening of the structure, but not to conclude whether damage is present. Regarding the third approach, the work by Gomez et al. [16], based on acceleration response data and calibrated computer models, is noteworthy. However, this work relies on the usual operational modal analysis and inherits the difficulties of this type of approach, including the fact that only detection (but not classification of the damage type) is achieved. In this work, facing the challenge posed by the previous references, different damage types are taken into account and their classification is achieved on an experimental down-scaled jacket WT foundation. It should be noted that the experimental testbed is a reduced model, but well-founded for this proof-of-concept work, as it is comparable to that employed in the following works: (i) [17], where damage detection is achieved via damage indicators; (ii) [18], where damage detection is obtained via statistical time series analysis; (iii) [19], based on principal component analysis and support vector machines; and (iv) [20], where a deep learning approach based on convolutional neural networks is employed.
It is well known that machine learning requires a feature extraction preprocess. It is a challenge to find suitable features, sensitive to physical characteristics, that lead to the identification of the damage or fault [21]. In this work, the fractal dimension (FD) of the data time series is employed as the main feature. The FD has traditionally been used as a feature in medical applications. For example, in [22], experiments on intensive care unit data sets show that the FD characterizes the time series better than the correlation dimension; in [23], the FD is proven to be discriminant for the detection of epileptic seizures in intracranial electroencephalogram signals; and in [24], glaucomatous eye detection based on FD estimation is proposed. However, it was not until recently that the FD was explored as a feature for structural damage detection. It is important to note the recent work by Rezaie et al. [25], where the FD for crack pattern recognition is studied. It is also important to note the work by Wen et al. [26], where the FD is shown to be effective for the fault diagnosis of rolling element bearings and for coping with the effects of variation in operating conditions. In this work, the FD feature is proposed for the vibration-response signals, inspired by the physical insight that the different fractal structures of these signals should be capable of discriminating different types of damage in jacket-type offshore foundations.
The paper is arranged as follows. First, the laboratory test bed and the damage scenarios are briefly introduced in Section 2. Section 3 addresses the detailed statement of the developed damage diagnosis strategy, which encompasses the following steps:
(i) data collection and manipulation;
(ii) fractal dimension feature extraction by means of Katz’s algorithm; and
(iii) normalization and classification tools.
The experimental results are comprehensively stated in Section 4. Finally, conclusions are drawn in Section 5.

2. Experimental Test Bed

The reliability of the damage diagnosis approach presented in this paper is verified using different types of damage in an experimental test bed modeling a jacket-type WT as in [19]. For a very detailed description of the function generator, the amplifier and inertial shaker, the sensor network, the data acquisition system, how the vibration signals are acquired and how the time domain waveforms are processed, readers are referred to [19,20].
A brief characterization of the experimental setup of the small-scale wind turbine is given below. First, a function generator (model GW INSTEK AF-2005) is used to produce a white noise signal with four different amplitudes (0.5, 1, 2, and 3) that account for different wind speed regions. This signal is then amplified and used as input to a modal shaker (GW-IV47 from Data Physics) that induces vibration in the structure. The overall description of the test bench is displayed in Figure 1a.
The structure is 2.7 m high and consists of three parts:
(i) the top beam;
(ii) the tower; and
(iii) the jacket.
The top beam is 1 m wide and 0.6 m high, and the inertial shaker is attached to one of the ends of the beam. The tower is formed by three tubular sections joined with bolts. Finally, the jacket is a pyramidal structure composed of steel bars of different lengths, as well as steel sheets.
The vibration of the structure is measured by means of the data acquisition system cDAQ-9188 (National Instruments) and 8 triaxial accelerometers (model 356A17, PCB Piezotronics), optimally placed following the work by Zugasti [17], as can be seen in Figure 1b.
In this work, we have considered the same four structural states as in the work by Puruncajas et al. [20]. All of the structural states refer to the jacket bar illustrated in Figure 1a. These states are:
(i) the healthy structure with the original healthy steel bar;
(ii) the healthy structure where the original bar is replaced by a replica;
(iii) the structure with a 5 mm crack damaged bar; and
(iv) the structure with an unlocked bolt in the jacket.

3. Damage Diagnosis Strategy

In this section, the damage diagnosis strategy is stated. First, a detailed description of data collection and manipulation is given. On the one hand, how data are collected and reshaped is of utmost importance in machine learning in general, and for this specific application in particular, see [27,28]. On the other hand, it is well known that feature selection improves the classification performance, making the classifiers faster and more efficient [21]. In this regard, the fractal dimension feature is introduced for damage classification purposes, together with a physical insight into its nature for time series and a detailed explanation of Katz’s algorithm used to compute it. Finally, three machine learning classifiers are reviewed and tested for damage classification.

3.1. Data Collection and Manipulation

A total of 100 experimental tests have been conducted, covering the four amplitudes that represent the different wind speed regions. More precisely:
(i) 10 tests with the original healthy bar for each amplitude, i.e., 40 tests;
(ii) 5 tests with the replica bar for each amplitude, i.e., 20 tests;
(iii) 5 tests with the 5 mm crack bar for each amplitude, i.e., 20 tests; and
(iv) 5 tests with the unlocked bolt for each amplitude, i.e., 20 tests.
For each experimental test, the acceleration has been measured through 24 sensors during 59.51636719 s with a sampling frequency of 275.28 Hz, which leads to 16,384 time instants and a time step of about Δ = 0.0036328125 s.
The raw data of the k-th experimental test, k = 1, …, 100, can be arranged as the matrix X^(k) in Equation (1). Each of the 24 columns of the matrix X^(k) contains the 16,384 measures of one sensor:
X^{(k)} = \begin{pmatrix}
x_{1,1}^{(k)} & x_{1,2}^{(k)} & x_{1,3}^{(k)} & \cdots & x_{1,24}^{(k)} \\
x_{2,1}^{(k)} & x_{2,2}^{(k)} & x_{2,3}^{(k)} & \cdots & x_{2,24}^{(k)} \\
x_{3,1}^{(k)} & x_{3,2}^{(k)} & x_{3,3}^{(k)} & \cdots & x_{3,24}^{(k)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{16383,1}^{(k)} & x_{16383,2}^{(k)} & x_{16383,3}^{(k)} & \cdots & x_{16383,24}^{(k)} \\
x_{16384,1}^{(k)} & x_{16384,2}^{(k)} & x_{16384,3}^{(k)} & \cdots & x_{16384,24}^{(k)}
\end{pmatrix} \in \mathcal{M}_{16384 \times 24}(\mathbb{R}). \qquad (1)
Each column of the matrix X^(k) in Equation (1) is reshaped into a 64-by-256 matrix to build a new matrix Y^(k) ∈ M_{64×(256·24)}(ℝ) in Equation (2):
Y^{(k)} = \begin{pmatrix}
x_{1,1}^{(k)} & \cdots & x_{256,1}^{(k)} & x_{1,2}^{(k)} & \cdots & x_{256,2}^{(k)} & \cdots & x_{1,24}^{(k)} & \cdots & x_{256,24}^{(k)} \\
x_{257,1}^{(k)} & \cdots & x_{512,1}^{(k)} & x_{257,2}^{(k)} & \cdots & x_{512,2}^{(k)} & \cdots & x_{257,24}^{(k)} & \cdots & x_{512,24}^{(k)} \\
\vdots & & \vdots & \vdots & & \vdots & & \vdots & & \vdots \\
x_{16129,1}^{(k)} & \cdots & x_{16384,1}^{(k)} & x_{16129,2}^{(k)} & \cdots & x_{16384,2}^{(k)} & \cdots & x_{16129,24}^{(k)} & \cdots & x_{16384,24}^{(k)}
\end{pmatrix} \in \mathcal{M}_{64 \times (256 \cdot 24)}(\mathbb{R}). \qquad (2)
There are two main reasons for building the reshaped matrix Y^(k) in Equation (2):
(i) on the one hand, for a single experimental test, we create 64 rows; each one of these rows is what we call a sample;
(ii) on the other hand, each sample contains time-history measures of the whole set of sensors.
We will see in Section 4 that, when we want to diagnose whether a wind turbine is healthy or not, we just need to measure these 24 sensors during 256 time instants, that is, during 256Δ ≈ 0.93 s.
To define the matrix that contains all the data, the matrices Y^(k), k = 1, …, 100, from each experiment are stacked to define
Y = \begin{pmatrix} Y^{(1)} \\ Y^{(2)} \\ \vdots \\ Y^{(100)} \end{pmatrix} \in \mathcal{M}_{(64 \cdot 100) \times (256 \cdot 24)}(\mathbb{R}) = \mathcal{M}_{6400 \times 6144}(\mathbb{R}). \qquad (3)
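As an illustration, the data arrangement of Equations (1)–(3) can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions, not the authors’ code: raw_tests is a hypothetical stand-in for the 100 recorded tests, each an array of 16,384 time instants by 24 sensors.

    import numpy as np

    # Hypothetical stand-in for the 100 experimental tests of Equation (1):
    # each test is a 16384 x 24 array (time instants x sensors).
    rng = np.random.default_rng(0)
    raw_tests = [rng.standard_normal((16384, 24)) for _ in range(100)]

    def reshape_test(X):
        # Each sensor column (16,384 samples) is cut into 64 consecutive
        # segments of 256 samples, and the 24 resulting 64 x 256 blocks
        # are concatenated horizontally, as in Equation (2).
        blocks = [X[:, tau].reshape(64, 256) for tau in range(X.shape[1])]
        return np.hstack(blocks)  # shape (64, 256 * 24) = (64, 6144)

    # Stack the 100 reshaped tests vertically, as in Equation (3).
    Y = np.vstack([reshape_test(X) for X in raw_tests])
    assert Y.shape == (6400, 6144)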

3.2. Fractal Dimension

Fractal geometry was proposed by Benoît Mandelbrot [29] and is a relatively recent mathematical discipline that has found many applications in bioscience [30,31,32], engineering [33] and many other fields [34].
Euclidean geometry describes common geometric forms such as lines, planes, spheres or rectangular volumes. Each of these geometric objects has an integer dimension D, either 1, 2, or 3. However, many natural shapes do not harmonize with this integer-based idea of dimension.
In order to give meaning to noninteger dimensions, a more mathematical description of dimension proposed by P. Bourke [35] is based on “how the size of an object behaves as the linear dimension increases”. More precisely, consider, for instance, three objects with dimensions D = 1 (a line segment), D = 2 (a square), and D = 3 (a cube). If the line segment, the square and the cube are linearly scaled by a factor of 2, then the results are 2 copies, 4 copies, and 8 copies of the initial objects, respectively. In other words, the length (characteristic size) of the line segment is doubled (Figure 2a), the area (characteristic size) of the square increases by a factor of 4 (Figure 2b), and the volume (characteristic size) of the cube increases by a factor of 8 (Figure 2c).
The relation between the scaling factor S, the dimension D and the number of generated copies N (increasing size) can be generalized and expressed as:
N = S^{D}, \qquad (4)
which is equivalent to
D = \frac{\log(N)}{\log(S)}. \qquad (5)
Since D is defined in terms of N and S in Equation (5), it is possible to find the dimension, for instance, of the famous Koch curve [36]. In the case of the Koch curve, at each step, we divide the line segment into S = 3 segments of equal length and draw an equilateral triangle that has the middle segment as its base and points outward. Therefore, we have created N = 4 copies (the two external segments of the original line and the two sides of the triangle). Consequently, the fractal dimension D_Koch of the Koch curve is:
D_{\text{Koch}} = \frac{\log(4)}{\log(3)} \approx 1.2619. \qquad (6)
As is very well known, fractals are self-similar subsets of the Euclidean space whose fractal dimension, defined in Equation (5), surpasses their topological dimension. Fractals have the same appearance at different scales. In this sense, many time series of different processes can be considered as fractals, since many parts taken from these time series, scaled by proper factors, are similar to the whole series. Considering that the fractal dimension is, somehow, a measure of the complexity that repeats at each scale, it is very interesting to compute the fractal dimension of a time series. In this regard, there are several algorithms that can be applied to estimate the fractal dimension of a time series. The approach used in this paper is Katz’s algorithm, which is summarized in Section 3.2.1.

3.2.1. Katz’s Algorithm

For a given sensor τ = 1, …, 24, the time series used in this work are the rows of matrix Y in Equation (3). More precisely, for a given row i = 1, …, 6400 and a given sensor τ = 1, …, 24, the associated time series is composed of a sequence of ν = 256 points s^1_{i,τ}, s^2_{i,τ}, …, s^ν_{i,τ} ∈ ℝ², where

s_{i,\tau}^{j} = \left( j, \; Y[i, \, j + \nu(\tau - 1)] \right) \in \mathbb{R}^2, \qquad j = 1, \ldots, \nu,
where Y[α, β] represents the element in the α-th row and β-th column of matrix Y.
To estimate the fractal dimension of the time series, Katz [37] defines two magnitudes, L and d, see Figure 3. On the one hand, the total length L of the curve is defined as the sum of the distances between consecutive points. More precisely, for a given row i = 1, …, 6400 and a given sensor τ = 1, …, 24:

L_{i,\tau} = \sum_{j=1}^{\nu - 1} \left\| s_{i,\tau}^{j+1} - s_{i,\tau}^{j} \right\|_2 = \sum_{j=1}^{\nu - 1} \sqrt{1 + \left( Y[i, \, j + 1 + \nu(\tau - 1)] - Y[i, \, j + \nu(\tau - 1)] \right)^2}.
On the other hand, d is the diameter or planar extent of the time series, defined as the maximum distance between the first point in the time series and the rest of the points. More precisely, for a given row i = 1, …, 6400 and a given sensor τ = 1, …, 24:

d_{i,\tau} = \max_{j = 2, \ldots, \nu} \left\| s_{i,\tau}^{j} - s_{i,\tau}^{1} \right\|_2.
The last step in Katz’s algorithm is the normalization of both L_{i,τ} and d_{i,τ} by the average distance a_{i,τ} between two consecutive points. More precisely, for a given row i = 1, …, 6400 and a given sensor τ = 1, …, 24:

a_{i,\tau} = \frac{L_{i,\tau}}{\nu - 1}.
Finally, for a given row i = 1, …, 6400 and a given sensor τ = 1, …, 24, the fractal dimension z_{i,τ} can be expressed as:

z_{i,\tau} = \frac{\log\left( L_{i,\tau} / a_{i,\tau} \right)}{\log\left( d_{i,\tau} / a_{i,\tau} \right)} = \frac{\log(\nu - 1)}{\log\left( \dfrac{d_{i,\tau} (\nu - 1)}{L_{i,\tau}} \right)} = \frac{\log(\nu - 1)}{\log(\nu - 1) + \log\left( d_{i,\tau} / L_{i,\tau} \right)}.
Note that d_{i,τ} ≤ L_{i,τ}, where both d_{i,τ} and L_{i,τ} are positive real numbers. Therefore,

0 < \frac{d_{i,\tau}}{L_{i,\tau}} \leq 1,

which implies

\log\left( \frac{d_{i,\tau}}{L_{i,\tau}} \right) \leq 0.
The expression log(d_{i,τ}/L_{i,τ}) is zero if, and only if, the points s^j_{i,τ}, j = 1, …, ν, are all aligned. In this case, the fractal dimension is exactly 1. When the ratio d_{i,τ}/L_{i,τ} decreases from 1 to 1/(ν − 1), the fractal dimension of the time series increases up to 2. Even though the fractal dimension of a plane fractal never exceeds 2, the fractal dimension of a time series computed with Katz’s algorithm may exceed this value when the fraction d_{i,τ}/L_{i,τ} is less than 1/(ν − 1). However, the fractal dimension of a regular time series normally lies within the range [1, 2].
With the fractal dimensions z i , τ of the time series in matrix Y in Equation (3), we build a new matrix Z as:
Z = \begin{pmatrix}
z_{1,1} & z_{1,2} & \cdots & z_{1,24} \\
z_{2,1} & z_{2,2} & \cdots & z_{2,24} \\
\vdots & \vdots & \ddots & \vdots \\
z_{6400,1} & z_{6400,2} & \cdots & z_{6400,24}
\end{pmatrix} \in \mathcal{M}_{6400 \times 24}(\mathbb{R}). \qquad (10)
Specifically:
• z_{1,1} in matrix Z in Equation (10) is the fractal dimension of the time series x^{(1)}_{1,1}, x^{(1)}_{2,1}, …, x^{(1)}_{ν,1};
• more generally, z_{i,τ} is the fractal dimension of the time series x^{(β)}_{α,τ}, x^{(β)}_{α+1,τ}, …, x^{(β)}_{α+ν−1,τ}, where

\alpha = \begin{cases} \left( (i \bmod 64) - 1 \right) \nu + 1, & \text{if } i \bmod 64 \neq 0, \\ 63\nu + 1, & \text{if } i \bmod 64 = 0, \end{cases} \qquad \beta = \left( (i - 1) \operatorname{div} 64 \right) + 1,

and div and mod stand for the integer quotient and the remainder of an integer division, respectively.
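As a brief illustration, Katz’s estimator admits a direct implementation. The following minimal Python sketch (not the authors’ code) computes the fractal dimension of a single time series; applied row by row and sensor by sensor to the matrix Y, it yields the matrix Z of Equation (10).

    import numpy as np

    def katz_fd(y):
        # Katz fractal dimension of a 1-D series y = (y_1, ..., y_nu) whose
        # curve points are s_j = (j, y_j): consecutive abscissas differ by 1.
        y = np.asarray(y, dtype=float)
        nu = y.size
        # Total length L: sum of the distances between consecutive points.
        L = np.sum(np.sqrt(1.0 + np.diff(y) ** 2))
        # Planar extent d: largest distance from the first point to any other.
        offsets = np.arange(1, nu)
        d = np.max(np.sqrt(offsets ** 2 + (y[1:] - y[0]) ** 2))
        # z = log(nu - 1) / (log(nu - 1) + log(d / L)); the log base cancels.
        n = nu - 1.0
        return np.log(n) / (np.log(n) + np.log(d / L))

    # Matrix Z of Equation (10), with Y the 6400 x 6144 matrix of Equation (3):
    # Z = np.array([[katz_fd(Y[i, 256 * tau : 256 * (tau + 1)])
    #                for tau in range(24)] for i in range(Y.shape[0])])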

3.3. Normalization and Classification Tools

Although matrix Z in Equation (10) contains elements generally between 1 and 2, the data are normalized by column-wise scaling. This way, each column, and consequently each sensor, has the same influence on the posterior analysis. Otherwise, the sensors closest to the source of the excitation and furthest from the structural damage could have a superior influence and make it difficult to detect the damage. Column-wise scaling is performed by subtracting from the elements of each column the mean of that column, and dividing them by the standard deviation of the column.
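A minimal sketch of this column-wise scaling (assuming Z is the 6400 × 24 fractal-dimension matrix of Equation (10)):

    import numpy as np

    def column_scale(Z):
        # Center each column (sensor) on its mean and divide by its standard
        # deviation, so every sensor weighs equally in the classification.
        return (Z - Z.mean(axis=0)) / Z.std(axis=0)

    # Z_scaled = column_scale(Z)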
In this work, different classifiers have been used for the classification: k nearest neighbors (kNN) and support vector machines (SVM) with different kernels. In Section 3.3.1 and Section 3.3.2 these methods will be briefly reviewed. Finally, it is important to note that 5-fold cross validation has been used to evaluate the classifier models.

3.3.1. k Nearest Neighbor

The k nearest neighbor (kNN) algorithm has been used since the 1970s. It is a classification algorithm that predicts the class of a new observation based on the categories of its k nearest neighbors. Two elements are key to this approach:
(i) the one and only parameter k; and
(ii) the distance measure [38].
The most commonly used distance measures in machine learning are the Hamming distance, the Euclidean distance, the Manhattan distance and the Minkowski distance. In this paper, the Euclidean distance is used.
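As a brief illustration, a minimal kNN sketch with scikit-learn (Z_scaled and labels are assumed to hold the normalized feature matrix and the structural-state labels 0–3 introduced in Section 2; the variable names are illustrative):

    from sklearn.neighbors import KNeighborsClassifier

    # kNN with the two key ingredients named above: the number of neighbors k
    # and the distance measure (Euclidean, as in the paper).
    knn = KNeighborsClassifier(n_neighbors=20, metric="euclidean")
    # knn.fit(Z_scaled, labels)
    # predicted_states = knn.predict(Z_new)  # Z_new: new normalized samples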

3.3.2. Support Vector Machines (SVM)

The SVM is a supervised machine learning algorithm used for classification purposes, and it has been applied to a large variety of applications [39]. SVMs are based on the simple idea of finding the hyperplane (or decision boundary) that best divides the data into two classes.
Figure 4a illustrates three separating hyperplanes out of many possible ones. The goal is to choose the hyperplane with the widest margin separating both classes, see Figure 4b. In this context, the margin is defined as the smallest distance between any of the samples and the hyperplane. The data points closest to the separating hyperplane are called the support vectors; these points determine how wide the margin is. Let us consider a two-class example: a training data set x_1, …, x_N, N ∈ ℕ, with corresponding binary target values t_1, …, t_N ∈ {−1, 1}, where one class is labeled as red (corresponding to the positive target value 1) and the other one as blue (corresponding to the negative target value −1). Commonly, the hyperplane is expressed in the following form:
h(\mathbf{x}) = \boldsymbol{\omega} \cdot \mathbf{x} + b,
where ω is the weight vector and b is the bias term. Among all possible descriptions, the canonical hyperplane is used in this paper. The canonical hyperplane satisfies:
\boldsymbol{\omega} \cdot \mathbf{x}_{\text{red}}^{\text{sv}} + b = 1, \qquad \boldsymbol{\omega} \cdot \mathbf{x}_{\text{blue}}^{\text{sv}} + b = -1,
where x red sv and x blue sv represent the so-called support vectors (the closest samples with respect to the hyperplane) on the red and blue classes, respectively. The distance δ from the support vectors to the hyperplane is given by:
\delta\left( \mathbf{x}_{\{\text{red},\text{blue}\}}^{\text{sv}}, h \right) = \frac{\left| \boldsymbol{\omega} \cdot \mathbf{x}_{\{\text{red},\text{blue}\}}^{\text{sv}} + b \right|}{\| \boldsymbol{\omega} \|} = \frac{1}{\| \boldsymbol{\omega} \|}.
Since the margin is twice the distance from the support vectors to the hyperplane, the margin is 2/‖ω‖. As has been said, the goal is to maximize the margin 2/‖ω‖, which is equivalent to minimizing the inverse function ‖ω‖/2. This, in turn, is equivalent to solving:
\min_{\boldsymbol{\omega}, b} \frac{1}{2} \| \boldsymbol{\omega} \|^2 \quad \text{subject to} \quad h(\mathbf{x}_i)\, t_i \geq 1, \; i = 1, \ldots, N.
In order to find the extreme values of a function with multiple constraints, one possible approach is to use the Lagrange multipliers. With this approach, the previous minimization problem is re-expressed as:
\min_{\boldsymbol{\omega}, b, \alpha_i} L(\boldsymbol{\omega}, b; \alpha_i) = \min_{\boldsymbol{\omega}, b, \alpha_i} \left\{ \frac{1}{2} \| \boldsymbol{\omega} \|^2 - \sum_{i=1}^{N} \alpha_i \left[ (\boldsymbol{\omega} \cdot \mathbf{x}_i + b)\, t_i - 1 \right] \right\}, \qquad (11)
where α_i, i = 1, …, N, are the Lagrange multipliers. To find the extreme values, the partial derivatives with respect to ω and b are computed and equated to zero:
\frac{\partial L(\boldsymbol{\omega}, b; \alpha_i)}{\partial \boldsymbol{\omega}} = \boldsymbol{\omega} - \sum_{i=1}^{N} \alpha_i t_i \mathbf{x}_i = 0 \;\Longrightarrow\; \boldsymbol{\omega} = \sum_{i=1}^{N} \alpha_i t_i \mathbf{x}_i, \qquad (12)
\frac{\partial L(\boldsymbol{\omega}, b; \alpha_i)}{\partial b} = -\sum_{i=1}^{N} \alpha_i t_i = 0. \qquad (13)
Equation (12) shows that the weight vector ω is a linear combination of the training data set. Replacing Equations (12) and (13) into Equation (11), the minimization problem is expressed uniquely in terms of α_i, x_i and t_i:
\min_{\alpha_i} \left\{ \frac{1}{2} \left( \sum_{i=1}^{N} \alpha_i t_i \mathbf{x}_i \right) \cdot \left( \sum_{i=1}^{N} \alpha_i t_i \mathbf{x}_i \right) - \sum_{i=1}^{N} \alpha_i t_i \left( \sum_{j=1}^{N} \alpha_j t_j \mathbf{x}_j \right) \cdot \mathbf{x}_i - b \sum_{i=1}^{N} \alpha_i t_i + \sum_{i=1}^{N} \alpha_i \right\}. \qquad (14)
After some simple manipulations, Equation (14) is now expressed as:
\min_{\alpha_i} \left\{ \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j t_i t_j \, \mathbf{x}_i \cdot \mathbf{x}_j \right\}. \qquad (15)
As can be clearly seen, the optimization problem depends only on the dot product of pairs of training data. However, the data are frequently not linearly separable, so the margin constraint cannot be satisfied for any ω and b. One possible solution is to allow some data points to violate the margin constraints (soft margin), at the price of assigning them a cost. In this case, a penalty parameter C (box constraint) has to be considered to control the maximum penalty imposed on margin-violating observations, as well as slack variables ε_i that control the width of the margin. For the case of a linear kernel, the nonlinearly separable case can be generalized as:
\min_{\boldsymbol{\omega}, b, \varepsilon_i} \frac{1}{2} \| \boldsymbol{\omega} \|^2 + C \sum_{i=1}^{N} \varepsilon_i \quad \text{subject to} \quad h(\mathbf{x}_i)\, t_i \geq 1 - \varepsilon_i, \; \varepsilon_i \geq 0, \; i = 1, \ldots, N. \qquad (16)
The constrained minimization problem in Equation (16) can be rewritten, using Lagrange multipliers, as:
\min_{\alpha_i} \left\{ \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j t_i t_j \, \mathbf{x}_i \cdot \mathbf{x}_j \right\} \quad \text{subject to} \quad \sum_{i=1}^{N} \alpha_i t_i = 0, \quad 0 \leq \alpha_i \leq C, \; i = 1, \ldots, N. \qquad (17)
In many cases, even with a soft margin, the space is not linearly separable. In these cases, a transformation ϕ is used to map the original training data to another space. As mentioned before, the optimization depends only on dot products. Therefore, the transformation ϕ itself is not needed; instead, only the dot product
K(\mathbf{x}_i, \mathbf{x}_j) = \phi(\mathbf{x}_i) \cdot \phi(\mathbf{x}_j)
is needed, which is called the kernel function. In this work, we use two kernel functions, the quadratic kernel K_q and the Gaussian kernel K_G, defined as:
K_q(\mathbf{x}_i, \mathbf{x}_j) = \left( 1 + \frac{1}{\gamma^2} \, \mathbf{x}_i \cdot \mathbf{x}_j \right)^{2}, \qquad K_G(\mathbf{x}_i, \mathbf{x}_j) = \exp\left( -\frac{\| \mathbf{x}_i - \mathbf{x}_j \|^2}{\gamma^2} \right),
where γ is the so-called kernel scale.
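As a brief illustration, both kernels can be sketched with scikit-learn’s SVC (a sketch, not the authors’ implementation; note the different parameterization: SVC’s gamma plays the role of 1/γ² in the definitions above, coef0 = 1 supplies the additive constant of the quadratic kernel, and C is the box constraint):

    from sklearn.svm import SVC

    gamma_scale = 1.0  # kernel scale (the gamma of the text)
    C_box = 30.0       # box constraint

    # Quadratic kernel: (1 + (x_i . x_j) / gamma^2)^2
    svm_quadratic = SVC(kernel="poly", degree=2, coef0=1.0,
                        gamma=1.0 / gamma_scale**2, C=C_box)
    # Gaussian kernel: exp(-||x_i - x_j||^2 / gamma^2)
    svm_gaussian = SVC(kernel="rbf", gamma=1.0 / gamma_scale**2, C=C_box)
    # svm_gaussian.fit(Z_scaled, labels)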

4. Results

In this section, the results are organized as follows. First, the evaluation metrics used to assess the classification models are introduced and explained in Section 4.1. As detailed in Section 3.3.1 and Section 3.3.2, the classification models used in this work are kNN, quadratic SVM and Gaussian SVM. The results of the present approach using the fractal dimension to build the feature vector and kNN, quadratic SVM and Gaussian SVM are presented in Section 4.2, Section 4.3 and Section 4.4, respectively.
Figure 5 presents a flowchart summarizing the proposed damage diagnosis strategy. In a nutshell, the fractal dimension is computed for each time series (per sensor) of the baseline data and normalized, and the machine learning models are trained. Then, when new data from a structure to be diagnosed come in, their fractal dimension is computed and normalized, and the already trained kNN or SVM (quadratic or Gaussian) model is applied for the structural state classification.

4.1. Evaluation Metrics

Before the results are presented, in terms of multiclass confusion matrices, it is important to clearly describe the evaluation metrics used to assess the performance of each model. One of the most used metrics is the overall accuracy, which is defined as the number of correct predictions out of the total number of predictions. However, the overall accuracy alone does not always tell whether a model performs satisfactorily, especially if the test data are composed of imbalanced classes. Moreover, even in the case of balanced classes, the information provided by the overall accuracy is not enough to know how to improve the model. The metrics used in this work are accuracy, precision, recall, F1 score and specificity. These metrics, for both the binary and the multiclass classification problem, are defined in the next paragraphs.
Consider categorical labels where n ∈ ℕ observations x_1, …, x_n have to be assigned to ℓ ∈ ℕ predefined classes C_1, …, C_ℓ. In a binary classification problem, each observation x_i is to be classified into one, and only one, of two nonoverlapping classes (C_1 and C_2, or positive and negative). In a multiclass classification problem, the input x_i is to be classified into one, and only one, of ℓ nonoverlapping classes.

4.1.1. Metrics for a Binary Classification Problem

A confusion matrix is a table or matrix that summarizes the prediction results of a classification problem. It is not a metric itself but it helps to visually understand the metrics and types of errors the model is making. Table 1 represents the confusion matrix for the case of a binary classification problem, where two classes have been considered: positive and negative. The observations are distributed in two rows and two columns. The rows represent the actual classes, while the columns represent the predicted classes. The observations in the diagonal represent the correct decisions, while the elements in the antidiagonal represent the misclassifications.
More precisely, the four elements in a confusion matrix of a binary classification problem are:
  • True positive (tp): the number of positive observations predicted as positive;
  • True negative (tn): the number of negative observations predicted as negative;
  • False positive (fp): the number of negative observations wrongly predicted as positive;
  • False negative (fn): the number of positive observations wrongly predicted as negative.
The five metrics for the binary classification problem are then defined in Table 2 in terms of the elements of the confusion matrix. The F1 score is a particular case of the F_β score defined in [40] when β = 1.

4.1.2. Metrics for a Multiclass Classification Problem

Metrics for a multiclass classification problem are based on a generalization of the metrics in Table 2 to ℓ classes C_i, i = 1, …, ℓ [41,42]. More precisely, with respect to the class C_i, we define:
• tp_i as the true positive for C_i, that is, the number of observations that belong to the class C_i and are correctly labeled as C_i;
• tn_i as the true negative for C_i, that is, the number of observations that do not belong to the class C_i and are not labeled as C_i;
• fp_i as the false positive for C_i, that is, the number of observations that do not belong to the class C_i but are wrongly labeled as C_i; and
• fn_i as the false negative for C_i, that is, the number of observations that belong to the class C_i but are not labeled as C_i.
The quality of the overall multiclass classification is usually assessed in two ways: (i) macroaveraging, where all classes are treated equally; and (ii) microaveraging, where bigger classes are favored. Table 3 presents the metrics for the evaluation of a multiclass classification problem and considers only the macroaveraging case.
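As a brief illustration, these macro-averaged metrics can be computed directly from a multiclass confusion matrix; a minimal sketch matching the formulas of Table 3:

    import numpy as np

    def macro_metrics(cm):
        # cm: l x l confusion matrix (rows: actual class, columns: predicted).
        cm = np.asarray(cm, dtype=float)
        tp = np.diag(cm)
        fp = cm.sum(axis=0) - tp  # labeled C_i but belonging to another class
        fn = cm.sum(axis=1) - tp  # belonging to C_i but labeled otherwise
        tn = cm.sum() - tp - fp - fn
        acc = np.mean((tp + tn) / (tp + fn + fp + tn))  # average accuracy
        ppv = np.mean(tp / (tp + fp))                   # average precision
        tpr = np.mean(tp / (tp + fn))                   # average recall
        f1 = 2 * ppv * tpr / (ppv + tpr)                # average F1 score
        tnr = np.mean(tn / (tn + fp))                   # average specificity
        return acc, ppv, tpr, f1, tnr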
Finally, it is important to note that all the confusion matrices presented in the next subsections follow the same nomenclature. The rows represent the actual class and the columns represent the predicted class. Label 0 corresponds to the case when the structure is healthy; label 1 corresponds to the structure with a replica bar; label 2 corresponds to the structure with a 5 mm cracked bar; and label 3 corresponds to the structure with an unlocked bolt in the jacket.

4.2. Results of Fractal Dimension and kNN as Classification Method

As stated in Section 3.3.1, the one and only parameter of the kNN classifier is k, the number of neighbors. Table 4 shows the performance of the proposed approach using kNN as the classification method, in terms of the number of neighbors k. As described in Section 4.1.2, the metrics for the evaluation of this multiclass classification problem are the average accuracy, the average precision, the average recall, the average F1 score and the average specificity. The best results for each metric are highlighted in bold. The same results, as a function of the number of neighbors, are depicted in Figure 6. The best performance corresponds to the case where the number of neighbors is k = 20. It can be observed that increasing the number of neighbors further does not improve the performance indicators and only leads to a higher computational cost. Table 5 presents the confusion matrix for the best case (k = 20). In Table 4, the performance measures are presented using macroaveraging; in the confusion matrix in Table 5, however, precision and recall can be extracted for each class separately. Table 5 also presents the false negative rate (fnr), defined as 1 − tpr, and the false discovery rate (fdr), defined as 1 − ppv. From this confusion matrix, all the aforementioned metrics can be derived. In particular, it is noteworthy that an average accuracy of 96.9%, an average precision of 94.3% and an average specificity of 97.7% are obtained.
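The sweep over k with 5-fold cross validation can be sketched as follows (Z_scaled and labels as in the previous sketches; the values of k follow Table 4):

    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    for k in (1, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50):
        knn = KNeighborsClassifier(n_neighbors=k, metric="euclidean")
        # 5-fold cross validation, as used throughout the paper.
        scores = cross_val_score(knn, Z_scaled, labels, cv=5)
        print(f"k = {k:2d}: mean CV accuracy = {scores.mean():.3f}")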

4.3. Results of Fractal Dimension and Quadratic SVM as Classification Method

Table 6 summarizes the performance, using macroaveraging, of the proposed approach using quadratic SVM as the classification method, in terms of the box constraint C and the kernel scale γ hyperparameters. More precisely, we combine the box constraints C = 5, 10, 20, 30, 40 and 50 with the kernel scales γ = 0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20, 30 and 50. The best results for each metric are highlighted in bold. The same results, for a box constraint C = 30 and as a function of the kernel scale γ, are depicted in Figure 7. The best performance corresponds to the case where the box constraint is C = 30 and the kernel scale is γ = 1. Table 7 presents the confusion matrix for this case, where it is worth remarking that an average accuracy of 98.4%, an average precision of 96.5% and an average specificity of 98.9% are obtained.

4.4. Results of Fractal Dimension and Gaussian SVM as Classification Method

As in Section 4.3, Table 8 summarizes the performance of the proposed approach using Gaussian SVM as the classification method, in terms of both the box constraint C and the kernel scale γ. More precisely, we combine the box constraints C = 5, 10, 20, 30, 40 and 50 with the kernel scales γ = 0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20, 30 and 50. The best results for each metric are highlighted in bold. The same results, for a box constraint C = 50 and as a function of the kernel scale γ, are depicted in Figure 8. The best performance corresponds to the case where the box constraint is C = 50 and the kernel scale is γ = 1. Table 9 presents the confusion matrix for this case. From the confusion matrix, it is worth remarking that an average accuracy of 98.7%, an average precision of 97.3% and an average specificity of 99.1% are obtained.
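The joint sweep over C and γ can be sketched with a standard grid search (again a sketch assuming Z_scaled and labels; the grids follow Table 8, with SVC’s gamma standing in for 1/γ²):

    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    kernel_scales = [0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20, 30, 50]
    param_grid = {
        "C": [5, 10, 20, 30, 40, 50],
        "gamma": [1.0 / g**2 for g in kernel_scales],
    }
    # 5-fold cross-validated grid search for the Gaussian (RBF) SVM.
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    # search.fit(Z_scaled, labels)
    # print(search.best_params_, search.best_score_)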

4.5. Brief Discussion

Section 4.2, Section 4.3 and Section 4.4 present an optimization of the model hyperparameters for the kNN, quadratic SVM and Gaussian SVM, respectively. In each subsection, the confusion matrix for the best (optimized) model is presented. In this subsection, the best models are compared with one another; that is, a comparison among the kNN, quadratic SVM and Gaussian SVM methodologies is given. In particular, Figure 9 shows the accuracy, precision, recall, F1 score and specificity measures for the best kNN, quadratic SVM and Gaussian SVM models. It is noteworthy that the Gaussian SVM accomplishes the highest performance for all the indicators. Thus, it is the recommended approach to be employed with the proposed SHM strategy. It is also important to note that the quadratic SVM has a performance close to the Gaussian SVM, but the kNN falls far behind in all the indicators in general, and more markedly for the recall and F1 score measures; therefore, its use is inadvisable. As a final remark, the advantage of the Gaussian SVM over the quadratic SVM may depend on the nature of the data, or even on how these data are preprocessed and which features are extracted. In this sense, the superior performance of the Gaussian SVM has been reported in the literature as a machine learning model for the prediction of the viscosity of nanofluids [43] or, in the field of fault diagnosis, to obtain the operation status of a wind turbine [44].

5. Conclusions

In this work, a proof-of-concept damage diagnosis strategy that can be deployed online and during the WT service has been presented. This main contribution of the paper is accomplished by using only the vibration-response accelerometer signals instead of the standard approach based on guided waves. Furthermore, the methodology is based on machine learning techniques. In this regard, the second main contribution of this work is the introduction of the FD as a suitable feature to detect and classify different damage scenarios, inspired by the physical insight that the different fractal structures of the accelerometer signals should be capable of discriminating different types of damage. Three supervised machine learning classifiers have been studied and optimized for the specific problem. Finally, the proposed methodology has been validated in an experimental laboratory test bed where, for the best selected model (Gaussian SVM with box constraint C = 50 and kernel scale γ = 1), all the studied measures (average accuracy, average precision, average recall, average F1 score and average specificity) attained values higher than 97%. These results encourage future work to develop this proof-of-concept further. More tests, including changes in the damage location, as well as taking into account variable environmental and operating conditions (including waves), will be the focus of future work.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been partially funded by the Spanish Agencia Estatal de Investigación (AEI)-Ministerio de Economía, Industria y Competitividad (MINECO), and the Fondo Europeo de Desarrollo Regional (FEDER) through the research project DPI2017-82930-C2-1-R; and by the Generalitat de Catalunya through the research project 2017 SGR 388.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
acc   accuracy
FD    fractal dimension
fdr   false discovery rate
fn    false negative
fnr   false negative rate
fp    false positive
GW    gigawatts
kNN   k nearest neighbor
ppv   precision
SHM   structural health monitoring
SVM   support vector machine
tn    true negative
tnr   specificity
tp    true positive
tpr   recall
WT    wind turbine

References

1. Na, W.S.; Baek, J. A review of the piezoelectric electromechanical impedance based structural health monitoring technique for engineering structures. Sensors 2018, 18, 1307.
2. Chen, H.P. Structural Health Monitoring of Large Civil Engineering Structures; John Wiley & Sons: Hoboken, NJ, USA, 2018.
3. Shin, H.J.; Lee, J.R. Development of a long-range multi-area scanning ultrasonic propagation imaging system built into a hangar and its application on an actual aircraft. Struct. Health Monit. 2017, 16, 97–111.
4. Ozbek, M.; Meng, F.; Rixen, D.J. Challenges in testing and monitoring the in-operation vibration characteristics of wind turbines. Mech. Syst. Signal Process. 2013, 41, 649–666.
5. International Energy Agency. Offshore Wind Outlook 2019; World Energy Outlook Series; IEA: Paris, France, 2019.
6. Martinez-Luengo, M.; Shafiee, M. Guidelines and Cost-Benefit Analysis of the Structural Health Monitoring Implementation in Offshore Wind Turbine Support Structures. Energies 2019, 12, 1176.
7. Arnold, P.; Moll, J.; Mälzer, M.; Krozer, V.; Pozdniakov, D.; Salman, R.; Rediske, S.; Scholz, M.; Friedmann, H.; Nuber, A. Radar-based structural health monitoring of wind turbine blades: The case of damage localization. Wind Energy 2018, 21, 676–680.
8. Nguyen, T.C.; Huynh, T.C.; Yi, J.H.; Kim, J.T. Hybrid bolt-loosening detection in wind turbine tower structures by vibration and impedance responses. Wind Struct. 2017, 24, 385–403.
9. Mieloszyk, M.; Ostachowicz, W. An application of Structural Health Monitoring system based on FBG sensors to offshore wind turbine support structure model. Mar. Struct. 2017, 51, 65–86.
10. Li, M.; Kefal, A.; Oterkus, E.; Oterkus, S. Structural health monitoring of an offshore wind turbine tower using iFEM methodology. Ocean Eng. 2020, 204, 107291.
11. Zhao, X.; Lang, Z. Baseline model based structural health monitoring method under varying environment. Renew. Energy 2019, 138, 1166–1175.
12. Kim, H.C.; Kim, M.H.; Choe, D.E. Structural health monitoring of towers and blades for floating offshore wind turbines using operational modal analysis and modal properties with numerical-sensor signals. Ocean Eng. 2019, 188, 106226.
13. Stutzmann, J.; Ziegler, L.; Muskulus, M. Fatigue crack detection for lifetime extension of monopile-based offshore wind turbines. Energy Procedia 2017, 137, 143–151.
14. Martinez-Luengo, M.; Kolios, A.; Wang, L. Structural health monitoring of offshore wind turbines: A review through the Statistical Pattern Recognition Paradigm. Renew. Sustain. Energy Rev. 2016, 64, 91–105.
15. Weijtjens, W.; Verbelen, T.; De Sitter, G.; Devriendt, C. Foundation structural health monitoring of an offshore wind turbine—A full-scale case study. Struct. Health Monit. 2016, 15, 389–402.
16. Gomez, H.C.; Gur, T.; Dolan, D. Structural condition assessment of offshore wind turbine monopile foundations using vibration monitoring data. In Nondestructive Characterization for Composite Materials, Aerospace Engineering, Civil Infrastructure, and Homeland Security 2013; Int. Soc. Opt. Photonics 2013, 8694, 86940B.
17. Zugasti Uriguen, E. Design and Validation of a Methodology for Wind Energy Structures Health Monitoring. Ph.D. Thesis, Universitat Politècnica de Catalunya, Barcelona, Spain, 16 January 2014.
18. Spanos, N.A.; Sakellariou, J.S.; Fassois, S.D. Vibration-response-only statistical time series structural health monitoring methods: A comprehensive assessment via a scale jacket structure. Struct. Health Monit. 2019, 19, 736–750.
19. Vidal, Y.; Aquino, G.; Pozo, F.; Gutiérrez-Arias, J.E.M. Structural Health Monitoring for Jacket-Type Offshore Wind Turbines: Experimental Proof of Concept. Sensors 2020, 20, 1835.
20. Puruncajas, B.; Vidal, Y.; Tutivén, C. Vibration-Response-Only Structural Health Monitoring for Offshore Wind Turbine Jacket Foundations via Convolutional Neural Networks. Sensors 2020, 20, 3429.
21. Ruiz, M.; Mujica, L.E.; Alferez, S.; Acho, L.; Tutiven, C.; Vidal, Y.; Rodellar, J.; Pozo, F. Wind turbine fault detection and classification by means of image texture analysis. Mech. Syst. Signal Process. 2018, 107, 149–167.
22. Sarkar, M.; Leong, T.Y. Characterization of medical time series using fuzzy similarity-based fractal dimensions. Artif. Intell. Med. 2003, 27, 201–222.
23. El-Kishky, A. Assessing entropy and fractal dimensions as discriminants of seizures in EEG time series. In Proceedings of the 2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA), Montreal, QC, Canada, 2–5 July 2012; pp. 92–96.
24. Kolář, R.; Jan, J. Detection of glaucomatous eye via color fundus images using fractal dimensions. Radioengineering 2008, 17, 109–114.
25. Rezaie, A.; Mauron, A.J.; Beyer, K. Sensitivity analysis of fractal dimensions of crack maps on concrete and masonry walls. Autom. Constr. 2020, 117, 103258.
26. Wen, W.; Fan, Z.; Karg, D.; Cheng, W. Rolling element bearing fault diagnosis based on multiscale general fractal features. Shock Vib. 2015, 2015, 167902.
27. Vidal, Y.; Pozo, F.; Tutivén, C. Wind turbine multi-fault detection and classification based on SCADA data. Energies 2018, 11, 3018.
28. Pozo, F.; Vidal, Y.; Serrahima, J.M. On real-time fault detection in wind turbines: Sensor selection algorithm and detection time reduction analysis. Energies 2016, 9, 520.
29. Mandelbrot, B.B. The Fractal Geometry of Nature; W. H. Freeman: New York, NY, USA, 1983; Volume 173.
30. Sevcik, C. A procedure to estimate the fractal dimension of waveforms. arXiv 2010, arXiv:1003.5266.
31. Raghavendra, B.; Dutt, D.N. A note on fractal dimensions of biomedical waveforms. Comput. Biol. Med. 2009, 39, 1006–1012.
32. Higuchi, T. Approach to an irregular time series on the basis of the fractal theory. Phys. D Nonlinear Phenom. 1988, 31, 277–283.
33. Russ, J.C. Fractal dimension measurement of engineering surfaces. Int. J. Mach. Tools Manuf. 1998, 38, 567–571.
34. Breslin, M.; Belward, J. Fractal dimensions for rainfall time series. Math. Comput. Simul. 1999, 48, 437–446.
35. Bourke, P. An Introduction to Fractals; The University of Western Australia: Perth, Australia, 1991; Volume 5.
36. Addison, P.S. Fractals and Chaos: An Illustrated Course; CRC Press: Boca Raton, FL, USA, 1997.
37. Katz, M.J. Fractals and the analysis of waveforms. Comput. Biol. Med. 1988, 18, 145–156.
38. Mulak, P.; Talhar, N. Analysis of distance measures using k-nearest neighbor algorithm on KDD dataset. Int. J. Sci. Res. 2015, 4, 2101–2104.
39. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning; Springer: Berlin/Heidelberg, Germany, 2013; Volume 112.
40. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437.
41. Krüger, F. Activity, Context, and Plan Recognition with Computational Causal Behaviour Models. Ph.D. Thesis, University of Rostock, Rostock, Germany, December 2016.
42. Hameed, N.; Hameed, F.; Shabut, A.; Khan, S.; Cirstea, S.; Hossain, A. An Intelligent Computer-Aided Scheme for Classifying Multiple Skin Lesions. Computers 2019, 8, 62.
43. Shateri, M.; Sobhanigavgani, Z.; Alinasab, A.; Varamesh, A.; Hemmati-Sarapardeh, A.; Mosavi, A.; Shamshirband, S.S. Comparative Analysis of Machine Learning Models for Nanofluids Viscosity Assessment. Nanomaterials 2020, 10, 1767.
44. Wu, Z.; Wang, X.; Jiang, B. Fault Diagnosis for Wind Turbines Based on ReliefF and eXtreme Gradient Boosting. Appl. Sci. 2020, 10, 3258.
Figure 1. (a) The test bench detailing the location of the damaged bar (red circle); and (b) location of the sensors.
Figure 2. When objects of different dimension (e.g., (a) line, (b) square, and (c) cube) are linearly scaled by a factor of 2, their characteristic size will have different results associated to their dimension.
Figure 3. The diameter of a time series d is given by the distance between the first point and the point that provides the maximum distance.
Figure 4. (a) There are two classes (blue and red), which are separated by three hyperplanes (in this case lines) out of many possible; (b) optimal hyperplane that maximizes the margin between classes.
Figure 5. Flowchart summarizing the proposed damage diagnosis strategy.
Figure 6. Performance measures (per-unit) corresponding to the kNN strategy for the multiclass classification problem with respect to the number of neighbors k (horizontal axis).
Figure 7. Performance measures (per-unit) corresponding to the quadratic SVM strategy for the multiclass classification problem for a box constraint C = 30 and with respect to the kernel scale γ (horizontal axis).
Figure 8. Performance measures (per-unit) corresponding to the Gaussian SVM strategy for the multiclass classification problem for a box constraint C = 50 and with respect to the kernel scale γ (horizontal axis).
Figure 9. Performance measures (percentage) comparison among the different classifiers.
Table 1. Confusion matrix of a binary classification problem.

                              Predicted class
                              Positive               Negative
Actual class   Positive       True positive (tp)     False negative (fn)
               Negative       False positive (fp)    True negative (tn)
Table 2. Metrics for the evaluation of a binary classification problem.

Metric         Formula
accuracy       acc = (tp + tn)/(tp + fn + fp + tn)
precision      ppv = tp/(tp + fp)
recall         tpr = tp/(tp + fn)
F1 score       F1 = 2 · ppv · tpr/(ppv + tpr)
specificity    tnr = tn/(tn + fp)
Table 3. Metrics for the evaluation of multiclass classification problems, where ℓ is the number of classes.

Metric                Formula
average accuracy      acc̄ = (1/ℓ) Σ_{i=1}^{ℓ} (tp_i + tn_i)/(tp_i + fn_i + fp_i + tn_i)
average precision     ppv̄ = (1/ℓ) Σ_{i=1}^{ℓ} tp_i/(tp_i + fp_i)
average recall        tpr̄ = (1/ℓ) Σ_{i=1}^{ℓ} tp_i/(tp_i + fn_i)
average F1 score      F̄1 = 2 · ppv̄ · tpr̄/(ppv̄ + tpr̄)
average specificity   tnr̄ = (1/ℓ) Σ_{i=1}^{ℓ} tn_i/(tn_i + fp_i)
Table 4. Performance measures (per-unit) for the kNN method using different numbers of nearest neighbors (k). The cases with the best performance of each measure are highlighted in bold.

k     acc̄     ppv̄     tpr̄     F̄1      tnr̄
1     0.953   0.899   0.898   0.898   0.968
5     0.964   0.929   0.917   0.922   0.974
10    0.967   0.937   0.922   0.929   0.976
15    0.968   0.940   0.925   0.931   0.977
20    0.969   0.943   0.926   0.933   0.977
25    0.969   0.942   0.925   0.933   0.977
30    0.969   0.943   0.925   0.933   0.977
35    0.969   0.943   0.924   0.932   0.977
40    0.968   0.944   0.924   0.932   0.977
45    0.967   0.941   0.920   0.929   0.976
50    0.967   0.941   0.919   0.928   0.975
Table 5. Confusion matrix for the kNN algorithm when k = 20.

         0       1       2       3       tpr     fnr
0        2527    4       7       22      99%     1%
1        33      1186    60      1       93%     7%
2        6       68      1195    11      93%     7%
3        165     6       13      1096    86%     14%
ppv      93%     94%     94%     97%
fdr      7%      6%      6%      3%
Table 6. Performance measures (per-unit) corresponding to the quadratic SVM strategy for the multiclass classification problem using different box constraints (C) and different kernel scales (γ). The cases with the best performance of each measure are highlighted in bold.

C     γ      acc̄     ppv̄     tpr̄     F̄1      tnr̄
5     0.1    0.971   0.935   0.937   0.936   0.981
5     0.2    0.981   0.958   0.956   0.957   0.987
5     0.5    0.982   0.961   0.959   0.960   0.988
5     1      0.981   0.959   0.957   0.958   0.987
5     2      0.980   0.958   0.953   0.956   0.986
5     5      0.937   0.933   0.844   0.876   0.949
5     10     0.924   0.928   0.809   0.850   0.937
5     15     0.922   0.924   0.805   0.846   0.935
5     20     0.919   0.918   0.798   0.838   0.933
5     30     0.893   0.901   0.732   0.777   0.911
5     50     0.868   0.887   0.669   0.720   0.890
10    0.1    0.971   0.933   0.935   0.934   0.981
10    0.2    0.980   0.956   0.955   0.955   0.987
10    0.5    0.982   0.961   0.960   0.960   0.988
10    1      0.982   0.961   0.959   0.960   0.988
10    2      0.982   0.962   0.959   0.960   0.988
10    5      0.963   0.940   0.910   0.922   0.972
10    10     0.924   0.928   0.809   0.850   0.937
10    15     0.923   0.926   0.808   0.849   0.936
10    20     0.922   0.924   0.805   0.846   0.935
10    30     0.918   0.916   0.796   0.837   0.933
10    50     0.888   0.898   0.720   0.766   0.907
20    0.1    0.961   0.915   0.912   0.912   0.974
20    0.2    0.978   0.951   0.950   0.951   0.951
20    0.5    0.982   0.961   0.959   0.960   0.988
20    1      0.984   0.964   0.962   0.963   0.989
20    2      0.982   0.962   0.959   0.961   0.988
20    5      0.974   0.951   0.939   0.944   0.982
20    10     0.926   0.929   0.814   0.854   0.938
20    15     0.924   0.927   0.809   0.849   0.937
20    20     0.923   0.925   0.808   0.848   0.936
20    30     0.921   0.922   0.804   0.844   0.935
20    50     0.906   0.908   0.766   0.810   0.923
30    0.1    0.963   0.917   0.919   0.916   0.976
30    0.2    0.976   0.947   0.946   0.947   0.984
30    0.5    0.983   0.961   0.960   0.960   0.988
30    1      0.984   0.966   0.964   0.965   0.989
30    2      0.983   0.962   0.960   0.961   0.988
30    5      0.978   0.955   0.948   0.951   0.984
30    10     0.934   0.931   0.834   0.869   0.945
30    15     0.924   0.928   0.810   0.850   0.937
30    20     0.923   0.926   0.808   0.849   0.936
30    30     0.923   0.924   0.807   0.847   0.936
30    50     0.918   0.916   0.797   0.837   0.933
40    0.1    0.963   0.917   0.919   0.916   0.976
40    0.2    0.976   0.947   0.946   0.946   0.984
40    0.5    0.982   0.961   0.959   0.960   0.988
40    1      0.984   0.965   0.963   0.964   0.989
40    2      0.983   0.962   0.961   0.962   0.989
40    5      0.980   0.958   0.952   0.955   0.986
40    10     0.944   0.932   0.860   0.886   0.954
40    15     0.924   0.927   0.810   0.850   0.937
40    20     0.924   0.927   0.809   0.849   0.937
40    30     0.923   0.924   0.807   0.847   0.936
40    50     0.919   0.918   0.799   0.839   0.934
50    0.1    0.963   0.917   0.919   0.916   0.976
50    0.2    0.974   0.941   0.941   0.941   0.982
50    0.5    0.982   0.959   0.958   0.958   0.988
50    1      0.984   0.965   0.963   0.964   0.989
50    2      0.983   0.963   0.961   0.962   0.989
50    5      0.980   0.958   0.953   0.955   0.986
50    10     0.953   0.935   0.883   0.903   0.963
50    15     0.925   0.927   0.812   0.852   0.938
50    20     0.924   0.927   0.809   0.849   0.937
50    30     0.923   0.925   0.807   0.848   0.936
50    50     0.920   0.919   0.801   0.841   0.934
Table 7. Confusion matrix for the quadratic SVM algorithm for the case C = 30 (box constraint) and γ = 1 (kernel scale).

         0       1       2       3       tpr     fnr
0        2531    7       8       15      99%     1%
1        5       1214    61      0       95%     5%
2        11      40      1223    6       96%     4%
3        41      10      9       1220    95%     5%
ppv      98%     96%     94%     98%
fdr      2%      4%      6%      2%
Table 8. Performance measures (per-unit) corresponding to the Gaussian SVM strategy for the multiclass classification problem using different box constraints (C) and different kernel scales (γ). The cases with the best performance of each measure are highlighted in bold.

C     γ      acc̄     ppv̄     tpr̄     F̄1      tnr̄
5     0.1    0.757   0.734   0.396   0.397   0.800
5     0.2    0.833   0.833   0.586   0.634   0.863
5     0.5    0.939   0.921   0.849   0.876   0.951
5     1      0.983   0.965   0.960   0.963   0.988
5     2      0.978   0.959   0.946   0.952   0.984
5     5      0.925   0.930   0.813   0.853   0.938
5     10     0.924   0.929   0.810   0.851   0.937
5     15     0.922   0.926   0.806   0.847   0.936
5     20     0.919   0.919   0.798   0.839   0.933
5     30     0.893   0.901   0.733   0.778   0.912
5     50     0.868   0.886   0.669   0.720   0.890
10    0.1    0.756   0.730   0.395   0.396   0.800
10    0.2    0.832   0.828   0.584   0.631   0.862
10    0.5    0.940   0.923   0.851   0.879   0.952
10    1      0.985   0.969   0.965   0.967   0.990
10    2      0.977   0.956   0.953   0.954   0.985
10    5      0.944   0.936   0.859   0.887   0.954
10    10     0.924   0.928   0.810   0.850   0.937
10    15     0.924   0.928   0.809   0.850   0.937
10    20     0.922   0.925   0.806   0.846   0.935
10    30     0.919   0.917   0.797   0.837   0.933
10    50     0.888   0.898   0.721   0.767   0.908
20    0.1    0.756   0.730   0.395   0.396   0.800
20    0.2    0.831   0.822   0.581   0.627   0.627
20    0.5    0.940   0.922   0.852   0.879   0.952
20    1      0.986   0.970   0.967   0.968   0.990
20    2      0.984   0.967   0.961   0.963   0.988
20    5      0.969   0.948   0.922   0.932   0.976
20    10     0.925   0.930   0.812   0.853   0.938
20    15     0.924   0.928   0.809   0.850   0.937
20    20     0.924   0.927   0.809   0.849   0.937
20    30     0.921   0.922   0.804   0.844   0.935
20    50     0.909   0.910   0.773   0.817   0.925
30    0.1    0.756   0.730   0.395   0.396   0.800
30    0.2    0.830   0.821   0.581   0.627   0.862
30    0.5    0.940   0.923   0.852   0.879   0.952
30    1      0.987   0.972   0.969   0.970   0.991
30    2      0.985   0.968   0.963   0.966   0.989
30    5      0.976   0.956   0.942   0.948   0.982
30    10     0.930   0.930   0.824   0.861   0.942
30    15     0.924   0.928   0.809   0.850   0.937
30    20     0.924   0.928   0.809   0.850   0.937
30    30     0.923   0.924   0.807   0.847   0.936
30    50     0.918   0.916   0.796   0.837   0.933
40    0.1    0.756   0.730   0.395   0.396   0.800
40    0.2    0.830   0.819   0.580   0.626   0.861
40    0.5    0.940   0.923   0.852   0.879   0.952
40    1      0.987   0.972   0.970   0.971   0.991
40    2      0.985   0.969   0.965   0.967   0.990
40    5      0.978   0.957   0.946   0.951   0.984
40    10     0.936   0.931   0.841   0.873   0.948
40    15     0.924   0.928   0.811   0.851   0.937
40    20     0.924   0.927   0.809   0.849   0.937
40    30     0.923   0.925   0.808   0.848   0.936
40    50     0.919   0.918   0.799   0.839   0.934
50    0.1    0.756   0.730   0.395   0.396   0.800
50    0.2    0.830   0.818   0.579   0.625   0.861
50    0.5    0.940   0.922   0.852   0.878   0.952
50    1      0.987   0.973   0.970   0.971   0.991
50    2      0.985   0.969   0.965   0.967   0.990
50    5      0.979   0.959   0.950   0.954   0.985
50    10     0.946   0.933   0.865   0.890   0.956
50    15     0.925   0.928   0.812   0.852   0.938
50    20     0.924   0.927   0.809   0.849   0.937
50    30     0.923   0.926   0.808   0.848   0.936
50    50     0.920   0.920   0.802   0.842   0.934
Table 9. Confusion matrix for the Gaussian SVM algorithm for the case C = 50 (box constraint) and γ = 1 (kernel scale).

         0       1       2       3       tpr     fnr
0        2542    2       11      5       99%     1%
1        7       1231    40      2       96%     4%
2        1       45      1230    4       96%     4%
3        36      6       3       1235    96%     4%
ppv      98%     96%     97%     98%
fdr      2%      4%      3%      2%
