The MIVA-MFA-PNN model proposed in this paper consists of MIVA, MFA, and PNN. The MIVA is used to reduce the interference from redundant information in the multiple MS parameters in the input layer of the PNN. The MFA is used to optimize the parameter smoothing factor in the PNN and reduce the error caused by artificial determination. There are three improvements in the MFA compared to the standard firefly algorithm.
2.1. PNN Model
The PNN is a variant of the radial basis function network and is a feed-forward neural network. It is a supervised network classifier based on Bayesian decision theory, with the advantages of a simple learning process, high pattern-classification accuracy, strong fault tolerance, and good generalizability [33]. The topological structure of the PNN is shown in Figure 2.
In Figure 2, the first layer is the input layer. The input vector is X = (z1, z2, …, zk), where k is the number of neurons and zk is the kth neuron. The second layer is the pattern layer. The output of the second layer is given by
In Equation (1), wk is the weight of the neuron in the input layer, p is the dimension of each sample, and σ is the smoothing factor. The third layer is the summation layer, in which the output probabilities of the pattern-layer neurons belonging to the same class are accumulated. In the third layer, a weighted result is obtained using Equation (2):
In Equation (2), m denotes the number of classes to be identified, vm denotes the output of class m, Q denotes the total number of pattern-layer neurons belonging to class m, and l is the lth neuron in the pattern layer. The fourth layer is the output layer: the output corresponding to the maximum value in the summation layer is 1, and the other outputs are 0. The class associated with the output of 1 is the predicted class of the sample. As shown in Equation (3), y is the output category:
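To make the layer-by-layer computation concrete, the following Python sketch implements a generic PNN forward pass. It assumes the standard Gaussian kernel in the pattern layer and simple per-class averaging in the summation layer; the exact forms of Equations (1)–(3) may differ, and the function name pnn_predict is introduced here only for illustration.

```python
import numpy as np

def pnn_predict(x, train_X, train_y, sigma):
    """Classify one sample x with a probabilistic neural network (PNN).

    train_X : (Q_total, p) array of training samples (pattern-layer weights w_k)
    train_y : (Q_total,)   array with the class label of each training sample
    sigma   : smoothing factor of the Gaussian kernel
    """
    # Pattern layer: one Gaussian unit per training sample (assumed standard kernel).
    sq_dist = np.sum((train_X - x) ** 2, axis=1)
    pattern_out = np.exp(-sq_dist / (2.0 * sigma ** 2))

    # Summation layer: accumulate the pattern-layer outputs of each class.
    classes = np.unique(train_y)
    class_scores = np.array([pattern_out[train_y == c].mean() for c in classes])

    # Output layer: the class with the maximum summed probability outputs 1, the others 0.
    return classes[np.argmax(class_scores)]
```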
2.3. MFA
An improved firefly algorithm is used to determine the smoothing factor in this PNN. The firefly algorithm [39] is a swarm intelligence optimization algorithm based on the luminescence of fireflies and their mutually attractive group behavior. It has been successfully applied to optimization problems in many fields [40,41]. It offers the advantages of simplicity and few required parameters and performs better than the genetic algorithm or particle swarm optimization in optimizing some test functions.
Assuming that the solution space of the objective function to be optimized is d-dimensional, a group of fireflies (x1, x2, …, xn) is initialized randomly, where n is the number of fireflies and xi represents a possible solution of the objective function, i.e., the position of firefly i in the solution space. The absolute brightness of firefly i is recorded as Ii, and its value is equal to the value of the objective function at xi, that is,
The relative brightness of firefly i to firefly j is recorded as Iij, with a value of
where γ is the light absorption coefficient within the range [0.01, 100] and rij is the Euclidean distance from firefly i to firefly j.
The attractive force βij of firefly i on firefly j is given by
where β0 is the maximum attraction, i.e., the attraction at the light source (generally β0 = 1). A firefly is attracted to another firefly with greater brightness. The location updating formula is as follows
In Equation (7), t is the number of iterations; xi and xj are the spatial positions of fireflies i and j; and αε is a random disturbance term, where α is generally a constant within [0, 1] and ε is a uniformly distributed random number vector. By updating the positions and brightness, fireflies gather around other fireflies with higher brightness, and the optimal solution of the objective function can be obtained.
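A minimal Python sketch of one iteration of the standard firefly algorithm described above is given below. The exponential attraction and the uniform disturbance vector follow the usual formulation and are assumptions, since Equations (4)–(7) are not reproduced here.

```python
import numpy as np

def firefly_step(X, objective, beta0=1.0, gamma=1.0, alpha=0.2):
    """One iteration of the standard firefly algorithm (minimization).

    X : (n, d) array of positions of the n fireflies in the d-dimensional solution space.
    """
    n, d = X.shape
    # Absolute brightness I_i: for minimization, brighter means a lower objective value.
    I = -np.array([objective(x) for x in X])
    X_new = X.copy()
    for i in range(n):
        for j in range(n):
            if I[j] > I[i]:  # firefly i moves toward the brighter firefly j
                r_ij = np.linalg.norm(X[i] - X[j])
                beta_ij = beta0 * np.exp(-gamma * r_ij ** 2)   # assumed attraction form
                eps = np.random.uniform(-0.5, 0.5, d)          # random disturbance vector
                X_new[i] = X_new[i] + beta_ij * (X[j] - X_new[i]) + alpha * eps
    return X_new
```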
The standard firefly algorithm tends to converge to a local optimum; this premature convergence leads to poor global search ability, and convergence is slow when a function over a large range is optimized. Therefore, considering the properties of the smoothing factor in the PNN, the following three improvements are made to the standard firefly algorithm.
(1) Optimization of the initial positions of fireflies
When the smoothing factor of the PNN is optimized by the standard firefly algorithm, the position of the firefly represents the value of the smoothing factor. In the standard firefly algorithm, the initial position of the firefly is determined randomly. Considering that the value range of the smoothing factor is generally (0, 1], to reduce the search range and convergence time and improve the global search ability of fireflies, the initial positions of fireflies are uniformly distributed on (0, 1]. The initial position of the firefly is given by
where q is the number of fireflies.
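A minimal sketch of this uniform initialization on (0, 1] follows; the exact spacing prescribed by Equation (8) is an assumption here, placing the ith of q fireflies at i/q.

```python
import numpy as np

def init_smoothing_positions(q):
    # q fireflies placed evenly on (0, 1]; the i-th firefly starts at i/q
    # (the precise spacing of Equation (8) is an assumption).
    return np.arange(1, q + 1) / q
```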
(2) Improvement of the perturbed term in the position updating formula
If the random perturbation term in the position updating formula has a large step size in the early iterations of the algorithm, the fireflies can search for the optimal solution over the global range. If the step size decreases gradually in later iterations, a firefly can perform a fine search in a local area. To obtain this behavior, αε in Equation (7) is modified as follows
where MaxGeneration is the maximum number of iterations, t is the current iteration number, and rand is a random number uniformly distributed on [0, 1]. When the distance between fireflies is large, the second term (the attractive force) in the position updating Equation (7) has only a slight effect on the update of a firefly’s position. At this point, the random perturbation term lets the firefly move autonomously within [−rij, rij], so the algorithm can search a larger space. The coefficient (1 − t/MaxGeneration) of the random perturbation term decreases with the iterations. Therefore, the MFA has better global search ability in the early stage of the iterations and better local search ability in the later stage.
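A sketch of this improved perturbation term follows: its amplitude is proportional to rij and shrinks linearly with the iteration count. The exact form of Equation (9) is an assumption here.

```python
import numpy as np

def perturbation(t, max_generation, r_ij):
    # Shrinking coefficient (1 - t/MaxGeneration): large steps early, fine steps late.
    coeff = 1.0 - t / max_generation
    # rand on [0, 1] mapped to [-1, 1] so the resulting move lies within [-r_ij, r_ij].
    return coeff * (2.0 * np.random.rand() - 1.0) * r_ij
```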
(3) Improvement of the attractive force formula
From Equation (6), it can be found that when the distance rij between fireflies tends to positive infinity, that is, when the distance is large enough, the attractive force βij tends to zero, which is unfavorable for the position update. To solve this problem, a minimum attraction βmin is introduced. βmin guarantees that, even if the distance between fireflies is large, there is still a certain attractive force so that the positions of the fireflies are updated normally. The improved formula for the attractive force is given by
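A sketch of the improved attraction with the lower bound βmin follows; since Equation (10) is not reproduced here, the common form βmin + (β0 − βmin)exp(−γrij²) is assumed.

```python
import numpy as np

def attraction(r_ij, beta0=1.0, beta_min=0.2, gamma=1.0):
    # Decays with distance as in the standard algorithm, but never drops below beta_min,
    # so distant fireflies still attract each other and positions keep updating.
    return beta_min + (beta0 - beta_min) * np.exp(-gamma * r_ij ** 2)
```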
The error rate of the PNN predictions on the test samples is taken as the objective function of the MFA. The smoothing factor of the PNN model is then obtained by searching for the minimum error rate in the solution space, which reduces the error caused by determining the smoothing factor by trial and error. The pseudocode of the smoothing factor search in the PNN model is given in Appendix A.
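Putting the pieces together, the following condensed sketch shows how the MFA could search for the smoothing factor with the PNN test error rate as the objective. It reuses the hypothetical helpers pnn_predict, init_smoothing_positions, attraction, and perturbation sketched above; the authoritative procedure is the pseudocode in Appendix A.

```python
import numpy as np

def error_rate(sigma, train_X, train_y, test_X, test_y):
    # Objective: misclassification rate of the PNN on the test samples for a given sigma.
    preds = np.array([pnn_predict(x, train_X, train_y, sigma) for x in test_X])
    return np.mean(preds != test_y)

def mfa_optimize_sigma(train_X, train_y, test_X, test_y, q=20, max_generation=50,
                       beta0=1.0, beta_min=0.2, gamma=1.0):
    # Improvement (1): initial positions spread uniformly on (0, 1].
    sigmas = init_smoothing_positions(q)
    for t in range(max_generation):
        errors = np.array([error_rate(s, train_X, train_y, test_X, test_y) for s in sigmas])
        for i in range(q):
            for j in range(q):
                if errors[j] < errors[i]:                # firefly j is "brighter" (lower error)
                    r_ij = abs(sigmas[i] - sigmas[j])
                    beta_ij = attraction(r_ij, beta0, beta_min, gamma)   # improvement (3)
                    step = perturbation(t, max_generation, r_ij)         # improvement (2)
                    sigmas[i] = np.clip(sigmas[i] + beta_ij * (sigmas[j] - sigmas[i]) + step,
                                        1e-6, 1.0)
    # Return the smoothing factor with the lowest final error rate.
    final_errors = np.array([error_rate(s, train_X, train_y, test_X, test_y) for s in sigmas])
    return sigmas[np.argmin(final_errors)]
```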
To sum up, in the proposed MIVA-MFA-PNN model, the MIVA is used to reduce the dimension of the original evaluation index of the PNN. This reduces the interference from redundant information in the samples, the number of input-layer neurons in the PNN, and the complexity of the PNN structure. Considering the properties of the smoothing factor in the PNN, three improvements are made to the standard firefly algorithm, yielding the MFA, which improves the global search ability and the convergence rate and avoids the error caused by empirical determination of the smoothing factor. Therefore, when the new model is combined with real-time monitored MS information, good rockburst prediction results can be obtained for tunnels of hydropower projects.
The process of rockburst prediction based on the monitored MS information and the proposed MIVA-MFA-PNN model is as follows. First, a correlation analysis of the MS parameters is carried out; if the correlation among parameters is strong, the mean influence value of each parameter is calculated using the MIVA, and the strongly correlated parameters are combined with each other to form a new evaluation index for rockburst prediction. Second, the smoothing factor in the PNN is optimized by the MFA to reduce the error caused by empirical determination. Finally, the data of the new evaluation index are used as the input samples of the MIVA-MFA-PNN model to output the rockburst prediction result.
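As an orientation only, this end-to-end flow could be exercised roughly as follows, using random dummy data in place of the MIVA-combined evaluation indices (that combination step is not sketched here) and the hypothetical helpers sketched earlier.

```python
import numpy as np

# Dummy stand-ins for the combined MS evaluation indices produced by the MIVA step.
rng = np.random.default_rng(0)
train_X, train_y = rng.random((60, 3)), rng.integers(0, 4, 60)   # four rockburst classes
test_X, test_y = rng.random((20, 3)), rng.integers(0, 4, 20)

# Optimize the smoothing factor with the MFA, then predict a newly monitored sample.
sigma = mfa_optimize_sigma(train_X, train_y, test_X, test_y)
print(pnn_predict(rng.random(3), train_X, train_y, sigma))
```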