2.2.1. Principles of the MQPSO Algorithm
The Quantum-behaved Particle Swarm Optimization (QPSO) algorithm [21] is an optimization algorithm with a higher probability of converging to the global optimum. It builds on the Particle Swarm Optimization algorithm by incorporating the basic principles of particle motion from quantum mechanics. The QPSO algorithm uses both the local optimal and global optimal position information associated with each particle's current position to update that position. In the search space, the position of a particle is obtained by computing the probability density of the particle's appearance through the Schrödinger equation. During particle movement, the position of the $j$th particle in the $d$th dimension at the $t$th iteration is updated as follows:
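In the standard QPSO formulation, this update (Equation (2) in the text) can be written as below; the symbols $x_{j,d}(t)$ for the position, $p_{j,d}(t)$ for the local attractor, $L_{j,d}(t)$ for the characteristic length, and $u$ for the uniform random number are assumed here for readability and may differ from the paper's original notation:

$$
x_{j,d}(t+1) = p_{j,d}(t) \pm \frac{L_{j,d}(t)}{2}\,\ln\!\frac{1}{u}, \qquad u \sim U(0,1)
$$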
Among them, $p_{j,d}(t)$ is the local attractor of the $j$th particle in the $d$th dimension, $L_{j,d}(t)$ is the characteristic length of the potential well, and $u$ is a random number in [0, 1]. The expressions for the local attractor $p_{j,d}(t)$ and the characteristic length $L_{j,d}(t)$ are given in Equations (3) and (4).
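Commonly used forms of these two expressions in the QPSO literature, written with the same assumed symbols, are:

$$
p_{j,d}(t) = \frac{\varphi_1 P_{j,d}(t) + \varphi_2 G_d(t)}{\varphi_1 + \varphi_2}
$$

$$
L_{j,d}(t) = 2\beta \left| mbest_d(t) - x_{j,d}(t) \right|
$$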
Among them, $\varphi_1$ and $\varphi_2$ are random numbers between 0 and 1, $P_{j,d}(t)$ represents the local optimal (personal best) position of the particle, $G_d(t)$ represents the global optimal position of the particle swarm, $\beta$ is the contraction/expansion coefficient, and $mbest_d(t)$ represents the average of the best positions found by all particles. The expressions for the contraction/expansion coefficient $\beta$ and the average optimal (mean best) position [24] are given in Equations (5) and (6).
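In the commonly used QPSO formulation, the linearly decreasing coefficient and the mean best position take the following forms (the endpoint constants of $\beta$ vary between papers and are not specified here):

$$
\beta(t) = \beta_0 + (\beta_1 - \beta_0)\,\frac{T_{\max} - t}{T_{\max}}
$$

$$
mbest_d(t) = \frac{1}{M}\sum_{j=1}^{M} P_{j,d}(t)
$$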
In this context, $t$ denotes the iteration number of the QPSO algorithm, with $\beta_1$ and $\beta_0$ as the two endpoint values of the coefficient; $M$ represents the number of particles in the particle swarm. Substituting Equations (3)–(6) into Equation (2), we obtain the evolution formula for particles in the QPSO algorithm as follows:
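In its standard form, and with the same assumed symbols as above, this evolution equation (Equation (7) in the text) reads:

$$
x_{j,d}(t+1) = p_{j,d}(t) \pm \beta \left| mbest_d(t) - x_{j,d}(t) \right| \ln\!\frac{1}{u}, \qquad u \sim U(0,1)
$$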
Particles update their positions according to Equation (7); because previous movements no longer affect the next position update, the search exhibits greater randomness and stronger collective intelligence.
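As a minimal, illustrative sketch of this update rule (not the paper's own code), the following Python function performs one iteration of the standard QPSO evolution equation; array names such as `positions`, `pbest`, and `gbest` are assumptions, and the local attractor is computed in the equivalent single-coefficient form $p = \varphi P + (1-\varphi)G$:

```python
import numpy as np

def qpso_step(positions, pbest, gbest, beta, rng):
    """One QPSO position update: x <- p +/- beta * |mbest - x| * ln(1/u)."""
    M, D = positions.shape
    # Local attractor: random convex combination of personal and global bests,
    # equivalent to (phi1 * P + phi2 * G) / (phi1 + phi2).
    phi = rng.random((M, D))
    p = phi * pbest + (1.0 - phi) * gbest
    # Mean best position: average of all personal best positions.
    mbest = pbest.mean(axis=0)
    # Random number in (0, 1]; clamped away from zero to keep ln(1/u) finite.
    u = np.clip(rng.random((M, D)), 1e-12, None)
    step = beta * np.abs(mbest - positions) * np.log(1.0 / u)
    # The +/- sign is chosen with equal probability for each dimension.
    sign = np.where(rng.random((M, D)) < 0.5, 1.0, -1.0)
    return p + sign * step
```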
In theory, the QPSO intelligent optimization algorithm can find the optimal solution, but in practical applications it suffers from slow convergence and premature convergence [17]. There are two reasons for this. First, the particle swarm is initialized with a random distribution, which may lead to an uneven distribution of particles and ultimately trap the search in a local optimum. Second, the value of $\beta$ changes linearly, so the search speed and search accuracy remain essentially fixed throughout the search, which can cause premature convergence or low efficiency.
Therefore, this paper proposes the MQPSO algorithm, which makes the following improvements to QPSO: 1. introduce the Circle chaotic mapping sequence [22] to initialize the population; 2. use a nonlinear contraction/expansion coefficient update strategy.
- 1. Circle Chaotic Mapping for Population Initialization
Chaos, which arises in nonlinear systems, produces a random-like motion state from deterministic equations. It is an effective optimization tool, featuring ergodicity, non-periodicity, and sensitivity to initial values. During optimization, a chaotic map can replace the pseudo-random number generator. Therefore, to address the problems above, this paper adopts the Circle chaotic mapping method to generate the initial particle swarm. The defining formula of the Circle chaotic map is as follows:
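The Circle map is commonly cited in the following form (the constants shown are the usual ones and are assumed here rather than taken from Equation (8)):

$$
x_{i+1} = \operatorname{mod}\!\left( x_i + 0.2 - \frac{0.5}{2\pi}\sin(2\pi x_i),\; 1 \right)
$$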
In this formula, $\operatorname{mod}$ is the remainder function, and $x_i$ is the value of the $i$th mapping. Assuming there are 50 particles, the initial population generated by the Circle chaotic mapping, shown in Figure 4b, is more uniformly distributed than the population produced by the traditional initialization method, shown in Figure 4a. This property lets the swarm retain more information and enhances particle diversity, providing a solid foundation for the subsequent optimization steps of the algorithm.
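A minimal sketch of this initialization step, assuming the Circle-map form given above and a simple scaling of the chaotic values onto the search bounds (the per-dimension seed values and the bounds are illustrative choices, not taken from the paper):

```python
import numpy as np

def circle_map_init(n_particles, dim, lower, upper):
    """Generate an initial swarm with the Circle chaotic map and scale it
    from [0, 1) onto the search interval [lower, upper]."""
    seq = np.empty((n_particles, dim))
    # Distinct starting values per dimension so the dimensions decorrelate.
    x = np.linspace(0.1, 0.9, dim)
    for i in range(n_particles):
        x = np.mod(x + 0.2 - (0.5 / (2.0 * np.pi)) * np.sin(2.0 * np.pi * x), 1.0)
        seq[i] = x
    return lower + seq * (upper - lower)
```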
- 2. Nonlinear Contraction/Expansion Coefficient β
The contraction/expansion coefficient $\beta$ determines the search radius of the algorithm. This paper adopts a nonlinear descent strategy to update the coefficient dynamically. In the early stages of the search, $\beta$ is kept large so the algorithm can search globally and quickly locate the approximate region of the optimal solution. The value of $\beta$ is then reduced to perform a precise search within a small range around that region, improving both search efficiency and accuracy. The new formula for $\beta$ is as follows:
Among them, $\beta_1$ and $\beta_0$ retain the meanings defined above; $t$ represents the current iteration count, and $T_{\max}$ represents the maximum iteration count.
Setting the iteration count to 500 and plotting the contraction/expansion coefficient curves of the two algorithms gives Figure 5. At the beginning of the algorithm, the value of $\beta$ is large, which favors searching the global region. As the algorithm runs, the value of $\beta$ gradually decreases, which favors a precise search within a small range and makes it easier to obtain the optimal solution.
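For illustration only, the two schedules compared in Figure 5 could be sketched as follows; the endpoint values 1.0 and 0.5 and the quadratic nonlinear form are assumptions, not necessarily the expressions in Equations (5) and (9):

```python
def beta_linear(t, t_max, beta_1=1.0, beta_0=0.5):
    """Linear descent used in standard QPSO: decreases evenly over the run."""
    return beta_0 + (beta_1 - beta_0) * (t_max - t) / t_max

def beta_nonlinear(t, t_max, beta_1=1.0, beta_0=0.5):
    """Illustrative nonlinear descent: stays close to beta_1 early (wide global
    search) and falls toward beta_0 late (fine local search)."""
    return beta_0 + (beta_1 - beta_0) * (1.0 - (t / t_max) ** 2)
```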
2.2.2. Coefficient Optimization Steps Based on MQPSO
According to the characteristics of the MQPSO intelligent optimization algorithm, a fitness function is used to evaluate the quality of each particle's current position. The construction of the fitness function plays a crucial role in solving for the phase compensation digital filter coefficients. In this paper, the mean square error of the phase difference of the vibration sensor is selected as the fitness function [25], as shown in Equation (10).
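One common way to write such a mean-square phase-error objective, with symbols assumed as $f$ for the signal frequency, $f_{\max}$ for the sensor's maximum response frequency, $\varphi_c$ for the compensated phase, $\varphi_r$ for the reference phase, and $N$ for the number of evaluated frequency points, is:

$$
F = \frac{1}{N}\sum_{i=1}^{N} \bigl[\varphi_c(f_i) - \varphi_r(f_i)\bigr]^2, \qquad 0 < f_i \le f_{\max}
$$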
Among these, $f_{\max}$ represents the maximum response frequency of the sensor, $f$ represents the signal frequency, $\varphi_c$ denotes the phase of the compensated signal, and $\varphi_r$ denotes the phase of the reference signal. The mean square error (MSE) of the phase difference between the vibration sensors is used as the metric to evaluate the performance of the phase compensation filter. By minimizing this fitness function, the optimal filter coefficients can be obtained, thereby achieving accurate compensation of the vibration sensor phase. After the algorithm terminates, the position of the particle with the minimum fitness value over the entire run is taken as the optimal solution, that is, the final coefficients of the phase compensation filter.
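A minimal sketch of such a fitness evaluation for one candidate coefficient vector; the use of `scipy.signal.freqz`, the way the filter phase is added to the sensor phase, and all argument names are assumptions about how the measured responses are organized:

```python
import numpy as np
from scipy.signal import freqz

def phase_mse_fitness(coeffs, sensor_phase, ref_phase, freqs, fs):
    """MSE of the phase difference after compensation.
    coeffs       : candidate FIR compensation-filter coefficients (the particle)
    sensor_phase : measured sensor phase at freqs, in radians
    ref_phase    : reference-sensor phase at freqs, in radians
    freqs        : evaluation frequencies up to the sensor's maximum response
    fs           : sampling frequency in Hz"""
    _, h = freqz(coeffs, worN=freqs, fs=fs)          # filter frequency response
    compensated = sensor_phase + np.unwrap(np.angle(h))
    return np.mean((compensated - ref_phase) ** 2)
```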
The MQPSO intelligent optimization algorithm is used to optimize the coefficients of the phase compensation filter. The specific algorithm steps are as follows:
Step 1: Initialize the parameters of the particle swarm. The fitness values were calculated as the average of three runs for different numbers of iterations and particle-swarm sizes, and as the average of fifty runs for different filter orders. The results are shown in Figure 6. For the MQPSO algorithm, the number of iterations is set to 500, the number of particles is set to 50, and the order of the phase compensation filter is set to 17, which means the dimension of each particle is 17.
Step 2: Initialize the particle positions. Initialize the population according to the Circle chaotic mapping and take it as the first-generation particles.
Step 3: Calculate the fitness value of each initial particle according to the objective function Equation (10), and select the particle with the minimum fitness value as the optimal position in the current particle swarm.
Step 4: Update the position of each particle using Equation (7). Compare the fitness values of the updated particles with that of the current global optimum, and take the position with the minimum fitness as the new global optimum.
Step 5: When the number of iterations has not been reached, continue to update the particle positions according to Equation (7).
Step 6: When the number of iterations reaches the maximum value, or when the fitness value is less than $4 \times 10^{-5}$ and the change in the optimal solution over 50 consecutive iterations is less than $10^{-8}$, the result with the minimum fitness value is output as the phase compensation filter coefficients. If these requirements are not met, the number of iterations is increased.
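Putting Steps 1–6 together, a compact, illustrative sketch of the optimization loop is given below; it reuses the `circle_map_init`, `beta_nonlinear`, `qpso_step`, and fitness functions sketched above, and the coefficient search bounds and seed are assumptions:

```python
import numpy as np

def mqpso_optimize(fitness, dim=17, n_particles=50, t_max=500,
                   lower=-1.0, upper=1.0, fit_tol=4e-5,
                   stall_tol=1e-8, stall_iters=50, seed=0):
    """MQPSO search for the phase-compensation filter coefficients
    (illustrative sketch, not the paper's implementation)."""
    rng = np.random.default_rng(seed)
    # Steps 1-2: parameter setup and Circle-chaotic-map initialization.
    positions = circle_map_init(n_particles, dim, lower, upper)
    pbest = positions.copy()
    pbest_fit = np.array([fitness(p) for p in positions])
    # Step 3: the particle with the minimum fitness is the initial global optimum.
    best = pbest_fit.argmin()
    gbest, gbest_fit = pbest[best].copy(), pbest_fit[best]
    stall = 0
    for t in range(1, t_max + 1):
        # Steps 4-5: update positions with the nonlinear beta schedule.
        beta = beta_nonlinear(t, t_max)
        positions = qpso_step(positions, pbest, gbest, beta, rng)
        fits = np.array([fitness(p) for p in positions])
        improved = fits < pbest_fit
        pbest[improved] = positions[improved]
        pbest_fit[improved] = fits[improved]
        best = pbest_fit.argmin()
        # Step 6: stop when the fitness and stagnation criteria are both met.
        stall = stall + 1 if abs(gbest_fit - pbest_fit[best]) < stall_tol else 0
        gbest, gbest_fit = pbest[best].copy(), pbest_fit[best]
        if gbest_fit < fit_tol and stall >= stall_iters:
            break
    return gbest, gbest_fit
```

A call such as `mqpso_optimize(lambda c: phase_mse_fitness(c, sensor_phase, ref_phase, freqs, fs))` would then return the coefficient vector with the minimum fitness found, in the spirit of Step 6.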
The flow chart of the optimization steps is shown in Figure 7.