Dam Deformation Prediction Model Based on Multi-Scale Adaptive Kernel Ensemble

Abstract: Aiming at the noise and nonlinear characteristics present in the deformation monitoring data of concrete dams, this paper proposes a dam deformation prediction model based on a multi-scale adaptive kernel ensemble. The model incorporates Gaussian white noise as a random factor and uses the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) method to decompose the data set finely. Each modal component is evaluated by sample entropy (SE) analysis so that the data set can be reconstructed according to its sample entropy value while retaining the key information. In addition, the model uses the partial autocorrelation function (PACF) to determine the correlation between each intrinsic mode function (IMF) and the historical data. The global search whale optimization algorithm (GSWOA) is then used to accurately determine the parameters of the kernel extreme learning machine (KELM), which forms the basis of the multi-scale adaptive kernel dam deformation prediction model. The case analysis shows that CEEMDAN-SE-PACF can effectively extract signal features and identify significant components and trends, giving a better understanding of the internal deformation trend of the dam. In terms of algorithm optimization, the GSWOA significantly outperforms the WOA and other algorithms and has the best convergence. In terms of prediction performance, CEEMDAN-SE-PACF-GSWOA-KELM is superior to the CEEMDAN-WOA-KELM, GSWOA-KELM, CEEMDAN-KELM, and KELM models, showing higher accuracy and stronger stability. This improvement is manifested in the decrease of the root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE) and in a coefficient of determination (R2) value closer to 1. These research results provide a new method for dam safety monitoring and evaluation.


Introduction
In the long-term operation and maintenance of concrete dams, structural performance gradually degrades under the combined effects of multiple internal and external factors. As a significant indicator of structural performance degradation, dam deformation must be monitored to ensure structural integrity and operational safety. Accurate prediction of the deformation behavior of concrete dams is therefore a key measure for maintaining the safe operation and maintenance of dams [1,2]. The noise and nonlinear characteristics in the monitoring data have a significant impact on modeling accuracy. Although traditional statistical models are widely used in engineering because of their simplicity and computational efficiency, they have limitations in dealing with problems such as multicollinearity, so more advanced machine learning techniques should be used for optimization [3]. In recent years, with the rapid development of artificial intelligence, machine learning algorithms such as support vector machines (SVM) [4-7], artificial neural networks (ANN) [8], extreme learning machines (ELM) [9-11], recurrent neural networks (RNN) [12-16], and random forests (RF) [17,18] have been recognized for their powerful data-driven modeling capabilities and their ability to handle the complex nonlinear systems involved in dam deformation prediction. These methods improve the accuracy and robustness of prediction models by capturing the deep nonlinear dependence between the dam influence factors and the deformation.
Su et al. [4] proposed a dam deformation prediction model that identifies the significant nonlinear dynamic characteristics of dam deformation by combining support vector machines with phase space reconstruction, wavelet analysis, and particle swarm optimization (PSO). Compared with traditional models, it shows superior ability in explaining complex nonlinear relationships. Lin et al. [19] proposed a multi-step displacement prediction algorithm for concrete dams by combining CEEMDAN with the K-harmonic means (KHM) clustering algorithm and the extreme learning machine (ELM). The algorithm uses CEEMDAN to decompose the dam displacement sequence into different signals, uses KHM clustering to group the denoised data with similar features, and uses the sparrow search algorithm (SSA) to improve the KHM algorithm and avoid local optima. An engineering example shows that the model has good prediction performance and strong robustness, demonstrating its feasibility for multi-step prediction of dam displacement. Xu et al. [20] proposed a combined prediction model for concrete arch dam displacement that couples cluster analysis with long short-term memory (LSTM), CEEMDAN, least squares support vector machines (LSSVM), and PSO for signal residual correction. By mining the effective information in the residual sequence, the combined model achieves better generalization and robustness than traditional single models. Tang et al. [21] proposed a CEEMDAN-SSA-CNN-GRU dam deformation prediction model that uses CEEMDAN to decompose the noisy signal and SSA to further extract and reconstruct the high-frequency intrinsic mode functions (IMFs), obtaining components with an enhanced noise reduction effect. However, the model does not comprehensively analyze the correlation of each IMF from multiple perspectives, and it has some deficiencies in dealing with nonlinear fluctuations caused by unstable loads. Cao et al. [22] proposed a VMD-SE-ER-PACF-ELM hybrid model based on the decomposition-ensemble method to handle the fluctuation characteristics of dam deformation and obtain more accurate predictions. Although the model considers the correlation between IMF components, it shows some limitations in decomposing time series and handling high-dimensional nonlinear correlations. Jiang et al. [23] proposed a displacement prediction model for concrete arch dams based on isolation forest (IF) and the kernel extreme learning machine (KELM), which uses IF to eliminate outliers and exploits the robust nonlinear fitting ability of KELM; however, it mainly addresses outlier identification and the handling of significant nonlinear fluctuations. Zhou et al. [24] proposed a dam deformation prediction model based on the CEEMDAN-PSR-KELM framework, which uses CEEMDAN to decompose the deformation sequence and then reconstructs the phase space of each sequence before establishing a KELM prediction model for the reconstructed sequences.
Based on the above considerations, this paper adopts complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and introduces Gaussian white noise into the initial data set to promote comprehensive decomposition of the signal and minimize processing error. After decomposition, the correlation among the modal components is analyzed in depth, and the relationship between these components and the historical data is investigated. To reconstruct the decomposed modal components, sample entropy (SE) and partial autocorrelation function (PACF) analyses are used to evaluate the temporal correlation between each modal component and its historical counterpart. On this basis, a global search whale optimization algorithm and kernel extreme learning machine (GSWOA-KELM) prediction model with excellent nonlinear mapping ability is established for dam deformation. Actual monitoring data from the Xiaowan double-curvature arch dam are used to verify the effectiveness and accuracy of the proposed prediction method.

The CEEMDAN Method Is Employed for Decomposing Dam Data and Noise Reduction Purposes
The traditional EMD algorithm is a commonly used method for nonlinear and non-stationary data. It decomposes the original signal sequence into a series of IMF components according to the fluctuation scale, thereby smoothing the data. However, the EMD decomposition process is prone to mode mixing, which degrades the decomposition quality. The EEMD algorithm, on the other hand, exploits the uniform spectral distribution of white noise so that the original signal propagates over the whole time-frequency space against a consistent white-noise background [25]. Signals of different time scales are automatically distributed onto appropriate reference scales, giving the signal continuity across scales and thus suppressing mode confusion [26]. Although this method effectively alleviates the mode mixing of EMD, the influence of the added white noise persists, and the reconstruction error after decomposition is difficult to eliminate completely, which affects decomposition accuracy. Therefore, Torres et al. [27] introduced the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) method, which incorporates adaptive Gaussian white noise at each decomposition stage. This technique not only solves the inherent mode mixing problem of empirical mode decomposition (EMD) but also weakens the reconstruction error caused by the accumulated noise of ensemble empirical mode decomposition (EEMD), allowing more accurate signal reconstruction close to a near-zero error benchmark. In view of the complex nonlinear and non-stationary characteristics of dam deformation data, this paper uses CEEMDAN to decompose the original dam deformation data. Through this decomposition, CEEMDAN can effectively capture the important features in the signal; its adaptive noise handling automatically adjusts the degree of noise removal according to the signal characteristics. This enables CEEMDAN to extract the effective information in the signal more effectively, reduces the influence of noise interference on the prediction model, and improves the accuracy and stability of the prediction.
The CEEMDAN procedure is as follows:
(1) Gaussian white noise is added to the signal (dam deformation) $y(t)$ to obtain a new signal $y_i(t) = y(t) + \varepsilon\,\omega_i(t)$, $i = 1, 2, \ldots, I$, where $\omega_i(t)$ is the $i$th realization of white noise. Each new signal is decomposed by EMD to obtain the first-order intrinsic mode component $C_1^i(t)$.
(2) By ensemble-averaging the modal components obtained, the first intrinsic mode function (IMF) of the CEEMDAN decomposition is obtained: $IMF_1(t) = \frac{1}{I}\sum_{i=1}^{I} C_1^i(t)$.
(3) The residual signal is obtained by subtracting the IMF from the original signal: $r_1(t) = y(t) - IMF_1(t)$.
(4) A new signal is obtained by adding positive and negative pairs of Gaussian white noise to $r_1(t)$. The new signal is decomposed by EMD to obtain the first-order modal component $D_1^i(t)$, from which the second intrinsic mode component of the CEEMDAN decomposition is obtained: $IMF_2(t) = \frac{1}{I}\sum_{i=1}^{I} D_1^i(t)$.
(5) Subtracting $IMF_2$ from the above residual gives the second residual: $r_2(t) = r_1(t) - IMF_2(t)$.
(6) The above steps are repeated until the residual signal is a monotone function. If $K$ intrinsic mode components are obtained, the original signal $y(t)$ is decomposed as $y(t) = \sum_{k=1}^{K} IMF_k(t) + r_K(t)$, where $C_k^i(t)$ is the $k$th eigenmode component obtained after EMD decomposition of the $i$th noisy copy, $IMF_k(t)$ is the $k$th eigenmode component obtained by CEEMDAN decomposition, $I$ is the number of times white noise is added, $\varepsilon$ is the signal-to-noise ratio of the noise relative to the original sequence, $y(t)$ is the signal to be decomposed, and $r_K(t)$ is the final residual.

CEEMDAN Computational Efficiency Analysis
In order to quantitatively assess the performance and computational efficiency of the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) relative to the ensemble empirical mode decomposition (EEMD), this paper analyzes the decomposition results of a test signal P(t) produced by the two methods, together with a detailed evaluation of the algorithms' execution times [28].
Signal-to-noise ratio (SNR), measured in dB, is the ratio of the signal power ($P_S$) to the noise power ($P_N$) in an electronic device or system [29]; it can equivalently be expressed through the squared ratio of the voltage signals: $SNR = 10\log_{10}(P_S/P_N) = 20\log_{10}(V_S/V_N)$. Figure 1 shows the performance of the two methods at different signal-to-noise ratio levels. The horizontal axis (X axis) represents the SNR level of the original signal, covering noise conditions from low SNR (high noise) to high SNR (low noise). The vertical axis (Y axis) represents the SNR of the signal after decomposition and reconstruction; ideally, if the decomposition method perfectly extracts the signal from the noise, this value should be high. The EEMD and CEEMDAN algorithms are applied to the test signal P(t), and the decomposition results are shown in Figure 1. Analysis of the obtained data shows that the EEMD algorithm yields a poorer decomposition, with obvious modal frequency aliasing in its derived components, highlighting that EEMD cannot fully avoid the original defects of EMD in signal mode separation. In addition, 10 decomposition experiments were performed with each algorithm and the average run times were calculated: 0.061 s for CEEMDAN versus 0.273 s for EEMD. CEEMDAN thus reduces computation time by about 77.7%, requiring only about 22.3% of the EEMD run time. In view of these significant advantages in computational efficiency and decomposition performance, CEEMDAN is used as the main mechanism for decomposing the deformation time series data in this study. As Figure 1 shows, in the IMF1~IMF6 panels the SNR sensitivity of the two methods is roughly the same, but in the IMF7~IMF12 panels the CEEMDAN curve is significantly higher than that of EEMD, indicating that CEEMDAN performs better in SNR enhancement. Moreover, the output SNR of CEEMDAN increases markedly as the input signal-to-noise ratio grows, whereas for EEMD the SNR of IMF12 is similar to that of IMF1, suggesting that the SNR of the reconstructed signal is roughly the same regardless of the noise level of the original signal and that the noise suppression capability of EEMD is limited.
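The SNR definition above is straightforward to compute from sample power estimates; a minimal sketch (the function name is assumed):

```python
import math

def snr_db(signal, noise):
    # SNR in dB: ratio of signal power to noise power,
    # equivalently 20*log10 of the voltage (amplitude) ratio.
    sp = sum(x * x for x in signal) / len(signal)   # mean signal power
    np_ = sum(x * x for x in noise) / len(noise)    # mean noise power
    return 10.0 * math.log10(sp / np_)
```

For example, a signal with ten times the amplitude of the noise yields 20 dB.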

Sample Entropy (SE)
Sample entropy (SE) is a measure of time series complexity and an improvement on the approximate entropy algorithm, with better accuracy. As a nonlinear dynamic parameter, SE quantifies the complexity of a sequence and the probability that the sequence generates new patterns as the dimension changes; SE increases with sequence complexity and with the probability of generating new patterns. Because sample entropy can quantitatively characterize the self-similarity and complexity of time series data with only a small amount of data, it is widely used in the engineering field [30,31].
After CEEMDAN decomposition, the original dam displacement sequence generates multiple IMF components, each capturing different frequencies and modes existing in the original signal. However, processing all IMF components in practical applications increases the computational burden and reduces efficiency. Therefore, this paper streamlines the calculation model to improve the overall processing speed. Specifically, SE is used as the feature for reconstructing the IMF component sequence: by calculating and analyzing the sample entropy of each IMF component, the parts containing important information are identified and the redundant or noisy parts are eliminated. This reconstruction preserves the key information in the original signal while reducing computational complexity. Through the sample entropy analysis, the IMF components that significantly affect the dam displacement can be effectively identified, facilitating subsequent analysis and prediction. The sample entropy is computed as follows: (1) The modal decomposition residual is processed into a time series $X = \{x_1, x_2, \ldots, x_N\}$ of length N. In sequence order, the m-dimensional vectors $X_m(i) = \{x_i, x_{i+1}, \ldots, x_{i+m-1}\}$, $1 \le i \le N - m + 1$, are formed; each vector represents m consecutive x values starting at the $i$th point [32].
(2) Define the distance $d[X_m(i), X_m(j)]$ as the maximum absolute difference between the corresponding elements: $d[X_m(i), X_m(j)] = \max_{0 \le k \le m-1} |x_{i+k} - x_{j+k}|$. (3) For a given $X_m(i)$, count the number of $j$ ($1 \le j \le N - m$, $j \ne i$) for which $d[X_m(i), X_m(j)] \le r$, and denote it $B_i$; then define $B_i^m(r) = B_i/(N - m - 1)$ and $B^m(r) = \frac{1}{N-m}\sum_{i=1}^{N-m} B_i^m(r)$. Increase the dimension to m + 1 and repeat the above steps to obtain $A^m(r)$, where $B^m(r)$ is the probability that two sequences match for m points under a similarity tolerance r and $A^m(r)$ is the probability that two sequences match for m + 1 points. The sample entropy is defined as $SampEn(m, r) = \lim_{N\to\infty}\left[-\ln\frac{A^m(r)}{B^m(r)}\right]$. When N is finite, it can be estimated as $SampEn(m, r, N) = -\ln\frac{A^m(r)}{B^m(r)}$, where m is the embedding dimension, usually 1 or 2, and r is the similarity tolerance, usually 10~25% of the standard deviation of the original sequence.
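Steps (1)-(3) translate directly into code. The sketch below takes r as an absolute tolerance (in practice it would be set to 10~25% of the series' standard deviation, as noted above) and excludes self-matches:

```python
import math

def sample_entropy(series, m=2, r=0.2):
    # SampEn(m, r): negative log of the conditional probability that two
    # subsequences matching for m points (Chebyshev distance <= r) also
    # match for m + 1 points.
    n = len(series)

    def count_pairs(dim):
        nt = n - m  # same number of templates for dimensions m and m + 1
        count = 0
        for i in range(nt):
            for j in range(i + 1, nt):  # j > i excludes self-matches
                if max(abs(series[i + k] - series[j + k])
                       for k in range(dim)) <= r:
                    count += 1
        return count

    b = count_pairs(m)       # template pairs matching at length m
    a = count_pairs(m + 1)   # template pairs matching at length m + 1
    if a == 0 or b == 0:
        return float("inf")  # no matches: entropy is maximal/undefined
    return -math.log(a / b)
```

A perfectly periodic or constant series yields a sample entropy of 0, while irregular series yield larger values.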

Partial Autocorrelation Function (PACF)
The partial autocorrelation function (PACF) plays an important role in practical dam safety applications. PACF quantifies the exclusive correlation of a specific lag order in a time series by separating the influence of that lag from all shorter lags, allowing a more accurate evaluation of the correlation between a time series and its lagged values. In short, PACF analyzes the correlation of a time series at different lag orders while controlling for the contribution of intermediate lags, enabling a more precise understanding of the dynamic changes of time series data [22].
In actual dam deformation monitoring, the time series usually contains multiple modal and frequency components, which may fluctuate with the dam structure, environmental factors, and other external influences. In this paper, the original data are decomposed by CEEMDAN-SE, and the resulting IMF components and trend term reflect the different frequency components and variation rules of the dam displacement data. PACF can be used to analyze the correlation between these components and, in particular, helps identify the autocorrelation structure in each sequence. Through PACF analysis, the lag correlation of the IMF components can be found, further revealing potential change patterns in the dam displacement data. By identifying the autocorrelation structure, appropriate time delays can be selected as input variables of the prediction model, yielding a more accurate model. Therefore, analyzing the interdependence of the IMFs not only deepens the understanding of the dynamics of the time series but also allows a more precise selection of the input variable set, improving the accuracy and reliability of the prediction model and providing stronger support for dam safety management and monitoring [33].
The PACF method is used to evaluate the correlation. When the PACF value first falls within the 95% confidence interval and no subsequent outliers occur, the corresponding lag is taken as the delay time of the input variable. PACF is computed as follows. The covariance $\hat{c}_a$ at lag a is $\hat{c}_a = \frac{1}{N}\sum_{t=1}^{N-a}(x_t - \bar{x})(x_{t+a} - \bar{x})$, where $\bar{x}$ is the mean of the time series, M is the maximum lag coefficient, and a is the lag length. The autocorrelation function (ACF) at lag a, denoted $\hat{\rho}_a$, can be estimated as $\hat{\rho}_a = \hat{c}_a/\hat{c}_0$. The PACF at lag a, $\phi_{aa}$, follows from the Durbin-Levinson recursion: $\phi_{11} = \hat{\rho}_1$ and $\phi_{aa} = \left(\hat{\rho}_a - \sum_{j=1}^{a-1}\phi_{a-1,j}\,\hat{\rho}_{a-j}\right)\big/\left(1 - \sum_{j=1}^{a-1}\phi_{a-1,j}\,\hat{\rho}_j\right)$, with $\phi_{a,j} = \phi_{a-1,j} - \phi_{aa}\phi_{a-1,a-j}$, where 1 ≤ a ≤ M.

The Global Search Whale Optimization Algorithm (GSWOA)
The traditional whale optimization algorithm (WOA) is known for its streamlined structure and minimal parameterization; for multivariate function optimization it is competitive with earlier algorithms in both speed and accuracy [34,35]. However, the global search ability of the WOA is limited, and the accuracy of the optimal solution it finds is relatively low. This paper therefore adopts an improved WOA, the global search whale optimization algorithm (GSWOA), which integrates a global search strategy [36,37]. The refinements are as follows.
First, an inertia weight $\omega(t)$, a nonlinear function of the iteration number taking values in [0, 1], is added to the whale position update. The improved position update equations are
$X(t+1) = \omega(t)\,X^*(t) - A\,|C\,X^*(t) - X(t)|$ (encircling, $p < 0.5$, $|A| < 1$),
$X(t+1) = \omega(t)\,X_{rand}(t) - A\,|C\,X_{rand}(t) - X(t)|$ (global random search, $p < 0.5$, $|A| \ge 1$),
$X(t+1) = |X^*(t) - X(t)|\,e^{bl}\cos(2\pi l) + \omega(t)\,X^*(t)$ (spiral update, $p \ge 0.5$),
where t is the iteration number, $X$ is the whale position, $X^*$ is the global optimal position, $X_{rand}$ is a randomly chosen whale position, b is the spiral shape constant, l is a random number in [−1, 1], and p is a random number in [0, 1].
Second, to alleviate the overly homogeneous spiral motion of the whale rotation search caused by the constant coefficient b, a variable spiral position update mechanism is introduced: b is made to increase with each iteration, so that the spiral trajectory contracts from a larger formation to a smaller one.
Third, in the whale position update, constantly updating the optimal position can lead to low search efficiency and local optima. To improve the convergence speed of the algorithm, this paper introduces an optimal-neighborhood fluctuation search, in which random numbers $rand_1, rand_2 \in [0, 1]$ perturb the current optimum to generate a new candidate position $X_{new}$. If the new position is better than the current best position, the two are exchanged; otherwise, the optimal position remains unchanged.
For newly generated positions, a greedy selection criterion determines whether they survive:
$X(t+1) = \begin{cases} X_{new}, & f(X_{new}) < f(X(t)) \\ X(t), & \text{otherwise,} \end{cases}$
where $f(x)$ is the fitness value of position x.
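A compact sketch of a GSWOA-style loop on a toy objective is given below. The exact inertia-weight curve, spiral schedule, and fluctuation step are not fully specified in the text, so the forms used here (quadratic decay of the weight, linearly growing b, shrinking Gaussian perturbation) are assumptions for illustration only:

```python
import math
import random

def sphere(x):
    # Toy objective: minimum value 0 at the origin.
    return sum(v * v for v in x)

def gswoa_sketch(obj, dim=2, lo=-10.0, hi=10.0, pop=20, iters=200, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=obj)[:]
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters          # standard WOA coefficient decay
        w = 1.0 - (t / iters) ** 2         # nonlinear inertia weight (assumed form)
        b = 1.0 + t / iters                # spiral coefficient grows with t (assumed)
        for i in range(pop):
            A = 2.0 * a * rng.random() - a
            C = 2.0 * rng.random()
            p = rng.random()
            if p < 0.5:
                # encircling (|A| < 1) or global random search (|A| >= 1)
                ref = best if abs(A) < 1 else X[rng.randrange(pop)]
                new = [w * ref[d] - A * abs(C * ref[d] - X[i][d])
                       for d in range(dim)]
            else:
                # spiral update with variable coefficient b
                l = rng.uniform(-1.0, 1.0)
                new = [abs(best[d] - X[i][d]) * math.exp(b * l)
                       * math.cos(2 * math.pi * l) + w * best[d]
                       for d in range(dim)]
            new = [min(hi, max(lo, v)) for v in new]
            if obj(new) < obj(X[i]):       # greedy selection keeps improvements only
                X[i] = new
        # optimal-neighborhood fluctuation search around the incumbent best
        cand = [min(hi, max(lo, best[d] + 0.1 * (1.0 - t / iters) * rng.gauss(0, 1)))
                for d in range(dim)]
        if obj(cand) < obj(best):
            best = cand
        best = min(X + [best], key=obj)[:]
    return best, obj(best)
```

On the sphere function the loop drives the objective down by orders of magnitude from a random start, illustrating the interaction of the greedy selection and the fluctuation search.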

Kernel Extreme Learning Machine (KELM) Algorithm
The extreme learning machine (ELM) is a single-hidden-layer feedforward neural network whose weights and biases are assigned randomly, which makes its prediction performance variable. To address this, the kernel extreme learning machine (KELM) combines regularization and kernel methods to enhance the stability and generalization ability of the model [38].
ELM improves the generalization ability of the network by minimizing both the training error and the norm of the output weights. In the optimization process, a regularization coefficient C is introduced to balance the two, avoiding overfitting and improving model performance. The output weight is then
$\beta = H^{T}\left(\frac{I}{C} + HH^{T}\right)^{-1}Y.$
When the hidden-layer feature map $h(x)$ is unknown, the kernel matrix of the kernel extreme learning machine can be defined as
$\Omega = HH^{T}, \qquad \Omega_{i,j} = h(x_i)\cdot h(x_j) = K(x_i, x_j).$
The output function of KELM can then be described as
$f(x) = \left[K(x, x_1), \ldots, K(x, x_N)\right]\left(\frac{I}{C} + \Omega\right)^{-1}Y.$
The kernel function used in this paper is the Gaussian kernel function, defined as
$K(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right),$
where $\beta$ is the output weight of the learning model, H is the output matrix of the hidden layer, $H^{+}$ is the Moore-Penrose generalized inverse of H, I is the identity matrix of dimension N, Y is the output target vector, and $K(\cdot,\cdot)$ is the kernel function.
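Under the formulas above, training reduces to solving $(\Omega + I/C)\,\alpha = Y$ and predicting with the kernel row vector. A minimal pure-Python sketch follows (class and parameter names are assumed; `gamma` plays the role of $1/(2\sigma^2)$):

```python
import math

def rbf(u, v, gamma=1.0):
    # Gaussian kernel on real vectors; gamma = 1 / (2 * sigma^2).
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def solve(A, b):
    # Solve A x = b by Gauss-Jordan elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

class KELM:
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        # alpha = (Omega + I/C)^(-1) Y, with Omega the kernel Gram matrix.
        self.X = X
        n = len(X)
        K = [[rbf(X[i], X[j], self.gamma) + (1.0 / self.C if i == j else 0.0)
              for j in range(n)] for i in range(n)]
        self.alpha = solve(K, y)
        return self

    def predict(self, Xq):
        # f(x) = [K(x, x_1), ..., K(x, x_N)] alpha
        return [sum(a * rbf(x, xi, self.gamma) for a, xi in zip(self.alpha, self.X))
                for x in Xq]
```

With a large C the ridge term $I/C$ is negligible and the model nearly interpolates the training data; smaller C trades training error for smoothness.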

The Specific Steps of GSWOA Optimizing KELM Model
The parameters of the KELM model are mainly the kernel function type and the regularization coefficient, and the choice of kernel function has a significant impact on model performance. This paper therefore proposes a method to optimize the KELM parameters using GSWOA. The optimization steps are as follows:
Step 1 Initialize the whale population: A set of random whales is generated, each representing a set of candidate parameters of the KELM model.
Step 2 Calculate fitness: For the parameters corresponding to each whale, the KELM model is used to evaluate the performance on the training set or the verification set, such as calculating the fitness through the error of cross-validation.
Step 3 Determine the optimal solution: Find the current optimal solution in the whale population, which will guide other whales to update their positions.
Step 4 Update position: According to the search mechanism in the whale optimization algorithm, combined with the position of the current optimal solution, the position of the whale is updated.
Step 5 Iterative search: Repeat the above steps, update the optimal solution after each iteration, adjust the search behavior according to the global search strategy, and iterate until the termination condition is satisfied.
Step 6 Parameter determination and final model training: After the iteration, the optimal solution (optimal whale position) is used as the parameter of the KELM model.The optimized parameters are used to retrain the KELM model to ensure that the model has fully learned the data features.
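Steps 1-6 can be illustrated end to end. Since a full GSWOA is lengthy, the sketch below uses random draws of (C, γ) as a stand-in for the whale position updates; the fitness of Step 2 is the validation-set MSE of a one-dimensional KELM, and all names and search ranges are illustrative assumptions:

```python
import math
import random

def rbf(u, v, g):
    return math.exp(-g * (u - v) ** 2)  # 1-D Gaussian kernel for brevity

def kelm_val_mse(C, g, Xtr, ytr, Xv, yv):
    # Step 2: fitness = validation-set MSE of a KELM trained with (C, g).
    n = len(Xtr)
    # Solve (K + I/C) alpha = y by Gauss-Jordan elimination.
    M = [[rbf(Xtr[i], Xtr[j], g) + (1.0 / C if i == j else 0.0) for j in range(n)]
         + [ytr[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    alpha = [M[i][n] / M[i][i] for i in range(n)]
    pred = [sum(a * rbf(x, xi, g) for a, xi in zip(alpha, Xtr)) for x in Xv]
    return sum((p - t) ** 2 for p, t in zip(pred, yv)) / len(yv)

def tune_kelm(Xtr, ytr, Xv, yv, trials=40, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(trials):  # Steps 1, 4, 5: random draws stand in for whale moves
        C = 10.0 ** rng.uniform(0, 6)
        g = 10.0 ** rng.uniform(-2, 2)
        mse = kelm_val_mse(C, g, Xtr, ytr, Xv, yv)
        if best is None or mse < best[0]:
            best = (mse, C, g)           # Steps 3 and 6: keep the incumbent optimum
    return best
```

In Step 6, the winning (C, γ) pair would be used to retrain the KELM on the full training set.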

Combined Forecasting Modeling
The dam deformation prediction model constructed in this paper combines advanced signal processing and data analysis methods. The construction process of the combined forecasting model is shown in Figure 2; the detailed steps are as follows: (1) Data preprocessing: standardize the monitoring point data to eliminate unit differences and reduce the impact of outliers. (2) CEEMDAN decomposition of the processed data: configure the white noise level, augment the noisy signal, and extract the IMFs and residual by EMD iteration, with ensemble averaging performed to ensure the stability of the obtained IMFs. To verify the prediction performance of the proposed model, four statistical indicators are used: the coefficient of determination (R2), root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE):
$R^2 = 1 - \frac{\sum_{i=1}^{n}(\delta_i - \hat{\delta}_i)^2}{\sum_{i=1}^{n}(\delta_i - \bar{\delta})^2}, \quad RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\delta_i - \hat{\delta}_i)^2}, \quad MSE = \frac{1}{n}\sum_{i=1}^{n}(\delta_i - \hat{\delta}_i)^2, \quad MAE = \frac{1}{n}\sum_{i=1}^{n}\left|\delta_i - \hat{\delta}_i\right|,$
where n denotes the total number of samples; $\delta_i$ and $\hat{\delta}_i$ represent the measured and calculated displacements, respectively; and $\bar{\delta}$ denotes the average of the measured displacements.
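The four indicators can be computed directly from their definitions; a small sketch (the function name is assumed):

```python
def regression_metrics(y_true, y_pred):
    # RMSE, MSE, MAE, and R^2 for measured vs. predicted displacements.
    n = len(y_true)
    err = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in err) / n
    mae = sum(abs(e) for e in err) / n
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - sum(e * e for e in err) / ss_tot
    return {"RMSE": mse ** 0.5, "MSE": mse, "MAE": mae, "R2": r2}
```

A perfect prediction gives zero errors and R2 = 1, the target behavior noted in the abstract.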

Case Analysis
This case study concerns an important hydropower project located in the middle reaches of the Lancang River in Yunnan Province. The project includes a series of infrastructures: a concrete double-curvature arch dam, a plunge pool, an auxiliary dam, a spillway tunnel, and an extensive underground water diversion and power generation network. The focus is the Xiaowan double-curvature arch dam, with a dam height of 294.5 m, a normal water level of 1240 m, and an installed capacity of 4200 MW; it has a guaranteed output of 1778 MW and generates up to about 19 billion kWh of electricity per year. The top view of the dam is shown in Figure 3.

Data Preprocessing: Constructing Model Feature Factors
In collecting prototype dam deformation monitoring data, a series of unavoidable technical challenges arise, including equipment malfunction and data transmission failure, which leave a small amount of missing data in the data set. For partially missing monitoring data, this paper uses the cubic Hermite interpolation method to fill the gaps. This interpolation not only effectively restores the missing data but also maintains the local high-order continuity of the data, ensuring the integrity and accuracy of the data set and improving the reliability of subsequent analysis and model training. Based on an analysis of the influencing factors of dam deformation, the dam deformation displacement is composed of a hydraulic component δH, a temperature component δT, and an ageing component δθ [39].
The variables H and H0 denote the upstream water level at the given moment and the base elevation of the dam, respectively; t and t0 denote the monitoring time and the reference time; θ and θ0 are t/100 and t0/100, respectively; and a1i, b1i, b2i, c1, and c2 are the fitting coefficients. The water pressure influence factors are the powers of the water level term, $H^i - H_0^i$; the ageing influence factors are θ − θ0 and lnθ − lnθ0.
In order to bring different features to the same scale and reduce the numerical differences between features that would otherwise degrade model accuracy, data standardization is a necessary preprocessing step.
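The factor construction and standardization described above can be sketched as follows. The harmonic form of the temperature factors (sin/cos terms with annual and semi-annual periods) is the standard HST formulation and is assumed here, since the paper's exact equation is not reproduced; z-score standardization is used for the preprocessing step:

```python
import math

def hst_factors(H, H0, t, t0):
    # HST factor set: hydraulic (water level powers), seasonal (temperature
    # proxy, assumed harmonic form), and ageing terms.
    theta, theta0 = t / 100.0, t0 / 100.0
    hydraulic = [H ** k - H0 ** k for k in (1, 2, 3, 4)]
    seasonal = []
    for k in (1, 2):  # annual and semi-annual harmonics
        seasonal.append(math.sin(2 * math.pi * k * t / 365)
                        - math.sin(2 * math.pi * k * t0 / 365))
        seasonal.append(math.cos(2 * math.pi * k * t / 365)
                        - math.cos(2 * math.pi * k * t0 / 365))
    ageing = [theta - theta0, math.log(theta) - math.log(theta0)]
    return hydraulic + seasonal + ageing

def zscore(column):
    # Standardize one feature column to zero mean and unit variance.
    mu = sum(column) / len(column)
    sd = (sum((v - mu) ** 2 for v in column) / len(column)) ** 0.5
    return [(v - mu) / sd for v in column]
```

Each factor column would be standardized independently before being fed to the prediction model.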

Comparative Analysis of Decomposition and Reconstruction Techniques
To comprehensively evaluate the effectiveness of the EMD, EEMD, and CEEMDAN decomposition techniques in processing dam deformation monitoring data, a series of quantitative analyses was carried out. Specifically, the EEMD and EMD algorithms were applied to the standardized data set in parallel to compare their reconstruction errors. The comparison shows that the accuracy of EEMD reconstruction of the dam deformation signal is significantly lower than that of the traditional EMD method, a decline largely attributable to the ensemble size and the white noise added in EEMD. In contrast, the CEEMDAN algorithm shows stronger reconstruction performance: its error level is comparable to that of EMD, highlighting its advantage in signal reconstruction consistency. The results confirm that CEEMDAN improves the accuracy and efficiency of decomposition through noise cancellation. The reconstruction results of the three decomposition methods are shown in Figure 5.
The comparative analysis of the three decomposition methods shows that CEEMDAN performs best. In practical engineering applications, this method has the following advantages: (1) CEEMDAN extracts features and trends from dam deformation data more accurately, and prediction models based on these reconstructed data have higher accuracy and precision, enhancing the reliability and effectiveness of dam deformation prediction. (2) Dam safety management and maintenance require reliable engineering decisions, including maintenance planning, monitoring, and early warning; a more accurate deformation prediction model built with the help of CEEMDAN enhances the reliability of these decisions and improves the safety of the dam. (3) Dam monitoring and early warning systems rely on accurate data to identify potential anomalies and trigger timely interventions; CEEMDAN decomposition provides more reliable and consistent data, improving the performance of the monitoring and early warning system, reducing false positives and false negatives, and ensuring timely response to potential risks. (4) By suppressing false alarms and unnecessary maintenance tasks, CEEMDAN enables dam managers to formulate strategies and allocate resources better, reducing costs and improving operational efficiency. (5) As important infrastructure, the safety of the dam directly affects the stability and development of the surrounding areas; using CEEMDAN to predict deformation facilitates early detection of risks and preventive measures, safeguarding socioeconomic development and stability.
The practical significance of CEEMDAN's lower reconstruction error relative to the other two methods is that (1) prediction accuracy is improved, since a lower reconstruction error means the IMFs extracted from the original signal deviate less from the true signal; (2) the quality of signal analysis is improved, as CEEMDAN more accurately reveals the trend, periodic components, and outliers in the dam deformation signal, aiding understanding of the underlying dynamic process; and (3) by reducing the reconstruction error, CEEMDAN reduces, to a certain extent, the false alarms and missed detections caused by model prediction error.

Analysis of the Results of Sample Entropy and CEEMDAN
In practical dam safety applications, decomposing the monitoring data with the CEEMDAN algorithm is key to revealing the important features and patterns hidden in the data. CEEMDAN decomposes the original dam data into multiple IMF components, each representing components of different frequency and amplitude. To identify the key IMFs among these components, we compare them based on the sample entropy associated with each IMF. Specifically, sample entropy analysis identifies the IMFs with significant fluctuations and distinctive characteristics. Figure 6 shows the modal components after CEEMDAN-SE decomposition, which exhibit different characteristics and fluctuation modes. As Figure 7 shows, the sample entropy values of IMF1~IMF4 are higher than that of the overall data, so they represent high-frequency components with pronounced fluctuation characteristics; these may be related to subtle changes in the dam structure or to environmental factors. IMF5~IMF7 show periodic oscillation, possibly related to periodic changes in dam stress or the influence of surrounding geological conditions. In contrast, IMF8~IMF10 are low-frequency components reflecting the long-term trend of dam deformation, which may be related to gradual structural changes, temperature, and other factors.
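The text does not spell out the sample entropy estimator, so the sketch below implements the standard SampEn(m, r) definition, with the tolerance taken as a fraction of the signal's standard deviation (a common convention, assumed here). It reproduces the qualitative behavior described above: irregular, high-frequency components score higher than smooth, trend-like ones.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Standard SampEn(m, r) = -ln(A/B), where B counts template matches of
    length m and A of length m + 1 under the Chebyshev distance, with
    tolerance r = r_frac * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def count_matches(mm):
        # All overlapping templates of length mm.
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templ) - 1):
            # Chebyshev distance from template i to all later templates.
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

# Smooth signals are highly self-similar (low entropy); noise is not (high entropy).
t = np.linspace(0, 10, 400)
rng = np.random.default_rng(0)
print(sample_entropy(np.sin(t)) < sample_entropy(rng.standard_normal(400)))  # True
```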

The Final Model Input Variables Are Determined by PACF Analysis
In practical dam safety applications, to improve the accuracy and effectiveness of the prediction model, the CEEMDAN-SE method is used to finely decompose the original time series data into 10 IMF components. These IMFs capture fluctuation characteristics at different scales in the original data and serve as signal sources for the subsequent analysis.
PACF analysis is then applied to the 10 IMF components to quantify the direct relationship between time points in each series while eliminating the influence of indirect correlation, so as to study the correlation strength among the time series data points and select the best input feature set. As shown in Figure 8, computing the partial autocorrelation coefficients between each time series and its lagged versions reveals the significant correlations and determines the optimal input length for each GSWOA-KELM model. Table 1 details the optimal input-variable configuration for each IMF component, ensuring maximum correlation between the input features and the target output and thereby enhancing the predictive ability of the model. In this way, the data obtained from the CEEMDAN-SE decomposition can be fully exploited, in combination with the GSWOA-KELM model and PACF analysis, to build a more accurate and reliable prediction model and provide more effective tools for dam safety management. Selecting appropriate input variables is key to time series analysis and predictive modeling, and the PACF results in Figure 8 provide the statistical basis for that selection. Taking IMF1 as an example, the maximum significantly correlated lag is determined by identifying where the PACF first crosses the 95% confidence bounds. Specifically, if the PACF falls back inside the confidence bounds at a lag of five days, the lagged values from one to four days are significantly linearly correlated with the current observation. Based on this analysis, four consecutive lagged values, (t − 4)d to (t − 1)d, are selected as input variables, where d denotes days and t the current time point. These variables are used to predict the target value on the current day (td). In addition, in order to
evaluate the predictive ability of the model for dam operation and management from a broader perspective, this study extends its focus from predicting the current day (td) to predicting three days ahead ((t + 3)d) and six days ahead ((t + 6)d).
Figure 9 illustrates the selection of appropriate input variables for different prediction horizons based on the PACF results. This framework provides systematic guidance on which historical data points are most predictive when constructing prediction models. The input-variable selection strategy optimizes the predictive performance of the model and supports reliable dam operation and management.
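The lag-selection and window-construction steps can be sketched as follows. The PACF is estimated here with the Durbin-Levinson (Yule-Walker) recursion and the 95% band ±1.96/√N; the function names and the AR(2) toy series are illustrative assumptions, not the paper's data:

```python
import numpy as np

def pacf(x, nlags):
    """Partial autocorrelation via the Durbin-Levinson recursion."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..nlags], normalized so r[0] = 1.
    r = np.array([np.sum(x[:n - k] * x[k:]) / n for k in range(nlags + 1)])
    r = r / r[0]
    pac = np.zeros(nlags + 1); pac[0] = 1.0
    phi = np.zeros((nlags + 1, nlags + 1))
    for k in range(1, nlags + 1):
        num = r[k] - np.sum(phi[k - 1, 1:k] * r[1:k][::-1])
        den = 1.0 - np.sum(phi[k - 1, 1:k] * r[1:k])
        phi[k, k] = num / den
        for j in range(1, k):
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
        pac[k] = phi[k, k]
    return pac

def significant_lag(x, nlags=20):
    """Largest lag whose PACF lies outside the 95% band (+-1.96/sqrt(N))."""
    band = 1.96 / np.sqrt(len(x))
    pac = pacf(x, nlags)
    sig = [k for k in range(1, nlags + 1) if abs(pac[k]) > band]
    return max(sig) if sig else 1

def make_supervised(x, p, horizon=0):
    """Inputs are the p lagged values (t-p .. t-1); target is x[t + horizon]."""
    X = np.array([x[i:i + p] for i in range(len(x) - p - horizon)])
    y = x[p + horizon:]
    return X, y

# AR(2) toy series: the PACF should cut off after lag 2.
rng = np.random.default_rng(1)
x = np.zeros(600)
for t in range(2, 600):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
p = significant_lag(x)
X, y = make_supervised(x, p, horizon=3)   # predict (t + 3)d, as in the paper
print(p, X.shape, y.shape)
```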

Selection of Kernel Functions and Comparative Analysis of GSWOA-KELM Models
In this paper, GSWOA-KELM is used to model and predict the displacements at the two measuring points. The choice of kernel function in the KELM model strongly influences its performance and behavior. The kernel function maps the input data into a high-dimensional space, enhancing linear separability or enabling a better fit in that space. Different kernel functions produce different data mappings and model behaviors, with correspondingly different effects on model performance. The linear kernel maps the data into the original feature space without any nonlinear transformation, making it suitable for linearly separable scenarios, where it performs well. In contrast, the radial basis function (RBF) kernel evaluates the similarity between points by implicitly projecting the data into an infinite-dimensional space, using the negative exponential of the distance from each data point to a center. Although the RBF kernel usually performs well on nonlinear data, its parameters, especially the bandwidth, must be tuned carefully to avoid overfitting. Other kernels, such as the sigmoid kernel, may produce good results in specific scenarios, although they also require careful selection and adjustment for the problem at hand. The choice of kernel function must consider the data characteristics, problem complexity, and required model performance. With a well-chosen and well-tuned kernel, the model's generalization ability, fitting ability, and adaptability to new data can all be enhanced.
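A minimal KELM with an RBF kernel can be written in closed form. This is a generic sketch of the standard KELM solution β = (I/C + Ω)⁻¹y, not the authors' exact implementation; the class name and the toy data are assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel ELM: output weights beta = (I/C + Omega)^-1 y in closed form,
    where Omega is the kernel matrix and C the regularization coefficient."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        omega = rbf_kernel(self.X, self.X, self.gamma)
        n = len(self.X)
        self.beta = np.linalg.solve(np.eye(n) / self.C + omega, np.asarray(y, float))
        return self

    def predict(self, X):
        return rbf_kernel(np.asarray(X, dtype=float), self.X, self.gamma) @ self.beta

# Fit a noisy sine curve; in-sample predictions should track the target closely.
rng = np.random.default_rng(2)
X = np.linspace(0, 2 * np.pi, 200)[:, None]
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
model = KELM(C=1e3, gamma=2.0).fit(X, y)
rmse = float(np.sqrt(np.mean((model.predict(X) - y) ** 2)))
print(rmse)  # small training error
```

The closed-form solve is what makes KELM cheap to train compared with iterative networks: one kernel matrix and one linear system.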
Therefore, the prediction ability of the unoptimized KELM model is first evaluated on the original dam deformation data set. Then, on the uniform data set shown in Figure 10a, the prediction performance of KELM models with different kernel functions is compared, with the radar chart of the corresponding evaluation indicators given in Figure 10b. After the kernel type is chosen, the regularization parameter (C), the kernel parameter, and the number of hidden-layer nodes must be determined. GSWOA has been widely used in function optimization because of its high computational efficiency, fast convergence, and strong global search ability, so it is selected here to optimize the KELM parameters. GSWOA is also compared with traditional algorithms to evaluate its suitability. As shown in Figure 11, the convergence of GSWOA accelerates markedly after the 10th iteration, a behavior not observed in the other algorithms over the same span. This indicates that GSWOA offers superior convergence speed and computational efficiency. The specific parameters of each algorithm are listed in Table 2.
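The GSWOA variant itself is defined in the cited literature; the sketch below implements the standard WOA loop with one GSWOA-style modification discussed later in the paper, a variable spiral shape parameter, demonstrated on a toy sphere function. The shrinking-b schedule, function names, and settings are illustrative assumptions:

```python
import numpy as np

def woa(f, dim, bounds, n_whales=20, t_max=100, seed=0):
    """Whale Optimization Algorithm sketch. The spiral shape parameter b
    shrinks over iterations (a GSWOA-style variable spiral; plain WOA keeps
    b fixed); the other update rules follow the standard WOA."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_whales, dim))
    best = min(pos, key=f).copy()
    best_f = f(best)
    for t in range(t_max):
        a = 2.0 - 2.0 * t / t_max          # convergence factor: 2 -> 0
        b = 1.0 - t / t_max                # variable spiral shape parameter
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2.0 * a * r1 - a, 2.0 * r2   # coefficient vectors
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):       # exploit: encircle the best whale
                    pos[i] = best - A * np.abs(C * best - pos[i])
                else:                           # explore: move around a random whale
                    rand = pos[rng.integers(n_whales)]
                    pos[i] = rand - A * np.abs(C * rand - pos[i])
            else:                               # bubble-net spiral update
                l = rng.uniform(-1.0, 1.0, dim)
                pos[i] = np.abs(best - pos[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            pos[i] = np.clip(pos[i], lo, hi)
            fi = f(pos[i])
            if fi < best_f:
                best, best_f = pos[i].copy(), fi
    return best, best_f

# Minimize the 2-D sphere function (optimum 0 at the origin).
best, best_f = woa(lambda x: float(np.sum(x ** 2)), dim=2, bounds=(-5.0, 5.0))
print(best_f)  # close to 0
```

In the paper's setting, `f` would wrap the KELM cross-validation error as a function of (C, kernel parameter).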

Evaluate the Robustness and Computational Efficiency of the KELM Model
In practical dam safety applications, to evaluate the prediction performance of the proposed KELM model, the traditional BP, ELM, CNN, SVM, and GRU models are chosen for comparison. These models predict the dam deformation data using the same training/test split as the proposed model. To ensure consistency, all models are verified and compared directly from their initial configurations. The prediction results are compared and their performance evaluated through plots and evaluation indicators. Figure 12 shows the prediction results of each model; comparing predicted and measured values allows an intuitive assessment of fit and prediction accuracy. Comparing the proposed model with traditional neural network and machine learning models gives a comprehensive view of its performance in dam deformation prediction and helps select the most suitable model for practical application, providing a more reliable and effective prediction tool for dam safety management. As Figure 12 shows, the KELM model, as the final prediction framework of this study, delivers excellent prediction performance compared with the traditional models; see Table 3 for details. Compared with the CNN, SVM, and GRU models, the KELM model improves significantly on all evaluation indicators. Therefore, by predicting dam deformation at different measuring points, this paper confirms the general effectiveness of the proposed model, as well as its robustness in predicting dam deformation even when part of the original data is missing.
To verify the computational efficiency of the proposed model, the average execution time of 20 independent runs of each model was recorded in Table 4. Table 4 shows that the CNN and GRU models require longer running times on the same target sequence. For the CNN, the difference arises because additional convolutional layers and filters are often required to extract relevant features, which increases model complexity; the GRU, meanwhile, pays a computational price for its strength in retaining long-term dependencies in time series data. Compared with the CNN and GRU models, the KELM model is markedly more efficient. This indicates that (1) the KELM model usually requires less memory because it does not store large numbers of convolution-kernel parameters, so it performs well in parameter-storage efficiency and computational cost; (2) the KELM model has fewer parameters and a relatively simple structure, enhancing the interpretability of the prediction process; and (3) the KELM model is flexible in handling unstructured and sequential data, making it suitable for different data types.
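The timing protocol, averaging wall-clock time over 20 independent runs, can be reproduced with a small helper; the timed workload below is a stand-in, not one of the paper's models:

```python
import time
import numpy as np

def mean_runtime(fn, n_runs=20):
    """Average wall-clock execution time of fn over n_runs independent runs,
    mirroring how Table 4 reports computational cost."""
    total = 0.0
    for _ in range(n_runs):
        t0 = time.perf_counter()
        fn()
        total += time.perf_counter() - t0
    return total / n_runs

# Stand-in workload: a small linear solve in place of a model's training step.
avg = mean_runtime(lambda: np.linalg.solve(np.eye(50), np.ones(50)))
print(avg > 0.0)  # True
```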

Deformation Prediction Results and Comparative Analysis
In practical dam safety applications, to verify the effectiveness of the proposed dam deformation monitoring model, it is compared with the CEEMDAN-WOA-KELM, GSWOA-KELM, CEEMDAN-KELM, and KELM models. These models represent different methods and strategies, and comparing them with the proposed model evaluates its predictive performance. First, the dam deformation data are predicted with each model and the results compared. The lateral displacement of the direct plumb line is selected for measuring point A22-PL-02, and the longitudinal displacement for measuring point A22-PL-03, for the following reasons: (1) The structure and stress state of the dam differ significantly at different locations. Some measuring points are more susceptible to lateral forces, while others are mainly affected by vertical forces, so monitoring displacement in different directions helps to fully characterize the deformation behavior of the dam. (2) In dam deformation monitoring, key sections must focus on horizontal or vertical displacement to guard against structural instability or settlement, so the appropriate displacement direction should be selected for prediction according to location and importance.
(3) The characteristics of displacement data in different directions may affect the prediction performance of the model.By analyzing historical data and selecting displacement prediction in a specific direction, the accuracy and reliability of the model can be improved.
Figure 13 shows the prediction results of each model. Comparing the predicted values with the actual observations allows the fit and prediction accuracy to be evaluated. We also analyze the residuals of each model: Figure 14 shows the residual distributions, and the final prediction results are given in Table 5. These indicators objectively evaluate the prediction accuracy and goodness of fit of each model and help determine which performs best in dam deformation analysis. Through this comparative analysis, the optimal dam deformation model can be identified and its effectiveness in practical application verified. Figure 13 shows that the prediction performance of the CEEMDAN-SE-PACF-GSWOA-KELM model exceeds that of the CEEMDAN-WOA-KELM, GSWOA-KELM, CEEMDAN-KELM, and KELM models to varying degrees. For measuring point A22-PL-02 in Table 5, compared with the CEEMDAN-WOA-KELM model, RMSE, MSE, and MAE are reduced by 0.5992 mm, 1.1303 mm², and 0.5523 mm, respectively, and R² is increased by 6.83%. This shows that the GSWOA algorithm is effective in optimizing the key parameters of KELM, improving prediction accuracy relative to the WOA. Specifically, (1) GSWOA introduces a variable spiral position update, which increases global search diversity and the algorithm's ability to find the optimal solution; (2) GSWOA enhances search stability, reducing the risk of falling into local optima and improving robustness; and (3) GSWOA adapts to different optimization problems and shows stronger generalization in complex scenarios.
Compared with the GSWOA-KELM model, the RMSE, MSE, and MAE of the CEEMDAN-SE-PACF-GSWOA-KELM model are reduced by 0.3340 mm, 0.5414 mm², and 0.3702 mm, respectively, while R² is increased by 4.79%. The advantages of CEEMDAN-SE-PACF preprocessing are therefore as follows: (1) it effectively extracts the principal components of the signal and filters out noise components, improving data quality and accuracy; (2) it identifies key signal features, which helps in understanding the intrinsic structure of the data and improves prediction accuracy; and (3) it reduces the dimensionality of the signal, lowering data complexity, improving analysis efficiency, and reducing the risk of overfitting.
Compared with the CEEMDAN-KELM model, the RMSE, MSE, and MAE of the CEEMDAN-SE-PACF-GSWOA-KELM model are reduced by 1.2763 mm, 3.3046 mm², and 1.1409 mm, respectively, and R² is increased by 14.37%. This underscores the combined benefits of algorithm optimization and data preprocessing.
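The four evaluation indicators can be computed with a few lines of NumPy; these are the standard metric definitions, with R² computed against the variance of the observations:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MSE, MAE and R^2, the four indicators used in Tables 3 and 5."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    return {
        "RMSE": mse ** 0.5,
        "MSE": mse,
        "MAE": float(np.mean(np.abs(err))),
        "R2": 1.0 - mse / float(np.var(y_true)),
    }

m = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
print(m)  # MSE = 0.025, MAE = 0.15, R2 = 0.98
```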
The following can be seen from Figure 14: (1) the residuals of the CEEMDAN-SE-PACF-GSWOA-KELM model follow a normal distribution, while the other models show only varying degrees of bell-shaped symmetry, indicating merely approximate normality; (2) for the CEEMDAN-SE-PACF-GSWOA-KELM model, the residual mean tends to zero, indicating the smallest bias, whereas deviations from zero in the other models point to potential model bias; and (3) notably, the irregular residual distributions of the CEEMDAN-WOA-KELM, GSWOA-KELM, CEEMDAN-KELM, and KELM models indicate prediction bias or error in some scenarios.

Conclusions
To address data noise and nonlinearity, this paper uses the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) method, which improves the decomposition accuracy of the initial deformation sequence by introducing Gaussian white noise. Sample entropy (SE) is then used to evaluate the complexity of each intrinsic mode function (IMF) obtained from the decomposition, with the sample entropy of the initial data serving as the reference for appropriately reconstructing these modal components. In addition, partial autocorrelation function (PACF) analysis identifies the most relevant features between each IMF and its past data; these features form the input vectors of GSWOA-KELM, enhancing the model's prediction performance. The approach is verified on data from the Xiaowan double-curvature arch dam. Evaluating the model with various indicators leads to the following conclusions: (1) The proposed CEEMDAN-SE-PACF-GSWOA-KELM model achieves higher prediction accuracy than the other models. To handle the nonlinearity of the raw dam data, this paper compares the CEEMDAN and EEMD methods using reconstruction error and signal-to-noise ratio; the results show that CEEMDAN decomposes dam signals more accurately than EEMD, thereby improving the reliability of engineering decision-making in practical applications. (2) Effective management and maintenance of dams require reliable engineering decisions, including robust maintenance plans and monitoring strategies. To improve the accuracy of the CEEMDAN decomposition, SE and PACF are integrated into the decomposition process, which filters noise more effectively and improves the quality of the results. In addition, the SE and PACF methods help to identify prominent
signal features, thereby capturing the key components and trends in the signal. Through sample entropy and partial autocorrelation analysis, the frequency components and time series characteristics of the signal can be accurately determined, providing a more reliable basis for subsequent analysis and modeling. (3) To construct a more effective prediction model, the GSWOA algorithm is used to optimize the parameters of the KELM model. Its effectiveness is compared with traditional algorithms, revealing GSWOA's superior convergence characteristics. In the final comparative analysis, the prediction performance of the WOA-KELM and GSWOA-KELM models is contrasted, demonstrating GSWOA's ability to optimize the KELM parameters and obtain better predictions. (4) The robustness and computational efficiency of the KELM model are verified by comparison with several traditional prediction models, with the following advantages: a.
Compared with the BP model, the KELM model avoids the local-optimum problem by randomly initializing the feature weights, reducing the chance of converging to a suboptimal solution. b. Compared with the ELM model, the KELM model is more robust to the random weight initialization between the input layer and the hidden layer, ensuring more consistent prediction performance. c. Compared with the SVM model, the KELM model is more efficient on high-dimensional data, since its output weights are obtained in closed form rather than by iterative optimization. Overall, the robustness and computational efficiency of the KELM model are verified to varying degrees against the other traditional models.

A and C are coefficient matrices, computed as A = 2a·r1 − a and C = 2·r2, where r1 and r2 are random numbers in [0, 1]; t_max is the maximum number of iterations; and a is the convergence factor, which decreases linearly from 2 to 0 according to a = 2(1 − t/t_max).

The KELM output can be written as f(x) = h(x)H^T (I/C + HH^T)^(−1) T, where HH^T is the random hidden-layer matrix product of the ELM model, replaced in KELM by the kernel matrix Ω with Ω_ij = K(x_i, x_j); T is the target output matrix; C is the regularization coefficient; and the kernel function K carries its own kernel parameter.

(3) Sample entropy optimization of the decomposition data: the number of effective IMFs is determined by sample entropy to verify the integrity of the decomposition process. (4) PACF analysis of each IMF component: PACF is used to analyze the correlation between each IMF and its historical data and to select appropriate feature vectors for the model. (5) GSWOA-based optimization: GSWOA is used to optimize the kernel parameters and regularization coefficient of KELM, determining the optimal parameter set. (6) Parameterization of the KELM model: the GSWOA-optimized parameters are applied to the KELM model to establish the final prediction model. (7) Model evaluation and verification: the prediction accuracy of the model is evaluated on the test set using statistical indicators such as the mean square error (MSE) and the coefficient of determination (R²).

Figure 2. Predictive model for dam deformation analysis.

Figure 3. Aerial view of the dam. (a) Downstream view; (b) Upstream view.
The validity and accuracy of the dam deformation prediction model proposed in this paper are assessed using the monitoring data of the A22-PL-02 and A22-PL-03 instruments on the Xiaowan double-curvature arch dam, and this paper describes the advantages of the model in dam deformation analysis. To verify the reliability of the model, the monitoring data of the arch crown beam measuring points A22-PL-02 and A22-PL-03 from December 2008 to December 2016 were used, giving 2896 data sets for examining the model's deformation prediction ability. A total of 80% of the data (2316 sets) was allocated for model training, and the remaining 20% (580 sets) constituted the test set. The spatial distribution of the arch dam measuring points is shown in Figure 4a, and the related environmental factors in Figure 4b. These visualizations aid understanding of the geospatial dynamics and environmental background of dam operation, strengthening the robustness of the deformation analysis.
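The 80/20 chronological split described above can be reproduced directly; keeping the time order intact matters here, since shuffling a time series would leak future information into training:

```python
import numpy as np

data = np.arange(2896)            # stand-in for the 2896 monitoring records
n_train = int(len(data) * 0.8)    # 2316 samples for training
train, test = data[:n_train], data[n_train:]
print(len(train), len(test))      # 2316 580
```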

Figure 4. Layout of the dam plumb-line monitoring instrumentation. (a) Distribution of arch dam measurement points; (b) Chart of changes in environmental quantities.

Figure 5. Reconstruction error diagram for each decomposition method.

Figure 9. The process of determining the input and output variables of the IMF.

Figure 10. Comparative chart of KELM prediction results using different kernel functions and performance-metric radar chart. (a) Predictions comparing kernel functions; (b) Radar chart of evaluation indicators.

Figure 11. Fitness comparison curves between GSWOA and traditional models.
Compared with the BP model, the RMSE, MSE, and MAE of the KELM model are reduced by 0.1707 mm, 0.0882 mm², and 0.1630 mm, respectively, while R² is increased by 2.79%. The advantages of the KELM model over the BP model are as follows: (1) the KELM model trains faster because it solves for the output weights directly rather than through an iterative backpropagation algorithm; (2) the KELM model has fewer hyperparameters to tune, simplifying implementation and adjustment; and (3) by randomly initializing the feature weights, the KELM model largely avoids the BP model's tendency to become trapped in local optima.

Table 2. Specific parameter settings of each algorithm.

Table 3. Error indices of comparative methods.
Compared with the ELM model, the RMSE, MSE, and MAE of the KELM model are reduced by 0.8509 mm, 1.0185 mm², and 0.8124 mm, respectively, and R² is increased by 31.87%. The advantages of the KELM model over the ELM model are as follows: (1) the KELM model usually requires less parameter optimization, simplifying model tuning; (2) the KELM model is more robust to the random weight initialization between the input layer and the hidden layer, ensuring stable prediction performance; and (3) the KELM model readily supports online learning and can quickly update and adjust as new data arrive.

Table 4. Comparison of the computational efficiency of each model.

Table 5. Error indicators for the different combination methods.