Article

Development of Time Series Models and Algorithms: Creep Prediction for Low-Carbon Concrete Materials

1 School of Civil Engineering, Architecture and the Environment, Hubei University of Technology, Wuhan 430068, China
2 Innovation Demonstration Base of Ecological Environment Geotechnical and Ecological Restoration of Rivers and Lakes, Hubei University of Technology, Wuhan 430068, China
3 Wuhan Construction Engineering Group Co., Ltd., Wuhan 430014, China
* Author to whom correspondence should be addressed.
Materials 2025, 18(13), 3152; https://doi.org/10.3390/ma18133152
Submission received: 3 June 2025 / Revised: 23 June 2025 / Accepted: 26 June 2025 / Published: 3 July 2025

Abstract

In practical engineering applications, the use of low-carbon concrete materials is in line with the principles of sustainable development and helps to reduce the impact on the environment. Creep effects are particularly critical in the research on such materials. However, traditional characterization methods are time-consuming and often fail to account for the interactions of multiple factors. This study constructs a time-series database capturing the behavioral characteristics of low-carbon concrete materials over time. Three temporal prediction models—Artificial Neural Network (ANN), Random Forest (RF), and Long Short-Term Memory (LSTM) networks—were retrained for creep prediction. To address limitations in model architecture and algorithmic frameworks, an enhanced Adaptive Crowned Porcupine Optimization algorithm (ACCPO) was implemented. The improved performance of the ACCPO was validated using four diverse benchmark test functions. Post-optimization results showed remarkable improvements. For ANN, RF, and LSTM, single-metric accuracies increased by 20%, 19%, and 6%, reaching final values of 95.9%, 93.9%, and 97.8%, respectively. Comprehensive evaluation metrics revealed error reductions of 22.6%, 7.9%, and 8% across the respective models. These results confirm the rationality of the proposed temporal modeling framework and the effectiveness of the ACCPO algorithm. Among them, the ACCPO-LSTM time series model is the best choice.

1. Introduction

As a predominant material in civil engineering, concrete is extensively used in structures like bridges, roads, tunnels, and dams. However, the long-term performance of concrete in service, particularly its creep behavior, is a critical factor affecting structural durability and safety. For example, prestressed continuous rigid-frame bridges often show excessive long-term deflection and web cracking during prolonged use [1]. Concrete containment structures in nuclear power plants may likewise undergo gradual performance deterioration over their service lives [2]. In addition, the cracking of industrial concrete floors can lead to reduced load-bearing capacity and impaired structural durability [3]. These challenges highlight the need for systematic research on the creep characteristics of low-carbon concrete materials.
The adoption of low-carbon concrete materials, such as fly ash and silica fume, is driven by the need to promote sustainable development. As shown in Figure 1, these materials have the following characteristics: fine particle size, high specific surface area, and high chemical reactivity. An appropriate dosage of silica fume significantly improves the compressive strength, durability, and corrosion resistance of concrete [4,5,6,7]. For example, Hangzhou Hanglong Plaza (Figure 2a), the Wuhan Center Building (Figure 2b), and other buildings use such materials, effectively reducing carbon emissions and practicing the concept of green, low-carbon construction. However, compared to ordinary concrete, the creep behavior of low-carbon concrete materials remains unclear, especially under varying conditions, and the specific mechanisms influencing this behavior have not yet been fully elucidated [8,9,10]. Therefore, accurately predicting the creep behavior of low-carbon concrete materials has become an urgent challenge.
Traditional creep studies are based on extensive experimental data. These experiments typically require lengthy observation periods and are limited by the complexity of the experimental conditions [11]. To date, most international creep studies can be categorized as using nonlinear theoretical models. For example, Bu P, Li Y, Li Y, et al. employed fracture mechanics theory to analyze the actual energy release rate at crack tips in materials undergoing creep deformation [12]. This approach revealed the strain energy accumulation patterns in concrete microcracks under sustained loading, effectively demonstrating the coupling mechanism between concrete damage and creep. Internationally recognized prediction models include four versions proposed by the European Concrete Committee and the International Federation for Prestressing (CEB-FIP): the CEB-FIP (MC1970) model [13], the CEB-FIP (MC1978) model [13], the CEB-FIP (MC1990) model [14], and the FIB MC2010 model [15]. The American Concrete Institute (ACI) Committee 209 introduced the ACI-209R (1982) model [13] and the ACI-209R (1992) model [16] in 1982 and 1992, respectively. Subsequently, Professor Bažant and colleagues developed the B4 model [17] based on micro-prestressing solidification theory, incorporating comprehensive considerations of concrete strength, composition, and long-term creep behavior. Although these models progressively account for various factors influencing concrete creep, they exhibit certain limitations in predicting long-term deformation characteristics.
Recent advancements in data acquisition technologies and computational methodologies have propelled machine learning-based modeling approaches into increasing prominence within creep research. Machine learning techniques have emerged as effective tools for addressing the nonlinear behavior of concrete [18,19,20,21,22]. Notably, Taha et al. developed an artificial neural network (ANN) with a single hidden layer containing six neurons for masonry creep prediction; however, that study considered only four parameters and validated the model with a limited dataset of 14 samples, resulting in constrained generalizability [18]. Given that concrete creep represents typical time-series data, the integration of temporal modeling frameworks appears particularly promising. Bui-Tien et al. demonstrated the superiority of temporal models over conventional approaches through adaptive cells and deep learning methods in bridge damage assessment using time-series data [24]. Liu et al. developed a multivariate time-series model for asphalt pavement rutting prediction, which outperformed comparative frameworks including ARIMAX, Gaussian process, and mechanistic–empirical (M-E) models [25]. Wang established an LSTM network-based concrete creep model using experimental data, achieving satisfactory prediction accuracy; nevertheless, that model neglected the influence of material properties and environmental factors on creep behavior [23]. These studies collectively indicate that temporal modeling architectures exhibit distinct advantages in processing time-dependent, nonlinear data with inherent noise through neural networks and deep learning paradigms. Their demonstrated effectiveness stems from their inherent ability to capture temporal dependencies and complex interaction patterns within sequential data structures.
In machine learning models, parameter configuration directly determines predictive performance, making parameter optimization particularly critical [26]. Optimization algorithms exhibit unique advantages and are extensively used for tuning machine learning parameters. The Crested Porcupine Optimizer (CPO), proposed in 2024 [27], represents a novel metaheuristic algorithm. It features a robust global search ability, rapid convergence, minimal parameter requirements, easy implementation, and synergistic compatibility with other algorithms. These characteristics have enabled its application across diverse fields such as water resources management [28] and geological exploration [29]. While standard CPO demonstrates notable merits, challenges persist, including local optima entrapment, parameter sensitivity, computational efficiency limitations, and restricted adaptability in dynamic environments. Addressing these issues remains a significant research frontier. Algorithmic enhancement strategies have proven effective in improving optimization performance. For example, an in-depth analysis of various methods for improving the Sine Cosine Algorithm (SCA) not only comprehensively summarizes its advantages and disadvantages, but also provides a broader study of meta-heuristic optimization algorithms [30]. Huang et al. introduced a simulated degradation mechanism to counteract the insufficient global search capacity in particle swarm optimization, achieving marked improvement in solution accuracy [31].
In summary, this study exploits the time series characteristics of low-carbon concrete material data, together with multivariate inputs such as material properties, environmental parameters, and historical values, and establishes time series deep learning prediction models for creep based on improvements to three machine learning models: artificial neural networks (ANN), random forests (RF), and long short-term memory (LSTM) networks. To improve the predictive ability of the models, four strategies are used to optimize the CPO, establishing a new Adaptive Crowned Porcupine Optimization algorithm (ACCPO); combining the ACCPO with the time series models greatly improves prediction performance.

2. Materials and Methods

2.1. Database Description and Variable Analysis

The experimental data were collected from the NU concrete creep and shrinkage database established by Professor Bažant at Northwestern University, with 1439 sets of creep test data used as the basis [32]. The low-carbon concrete material data studied in this paper were compiled from this database.
Given the differences in parameter categories and test periods among various groups in the database, this study processed the database data to eliminate their impacts on model development:
  • In the description and analysis of the database, a unified coding rule is adopted for the selection of inputs and outputs and the handling of missing values;
  • For test groups of sufficient length in the database, given that concrete creep grows rapidly in the early stages and slowly in the later stages, data from the period between 1 and 60 days were selected; test groups covering less than 60 days were removed;
  • In processing the selected datasets, a logarithmic transformation is applied to time. This transformation yields a more uniform distribution of the creep time series data, thereby improving the predictive accuracy of the model;
  • Since the recording times differ between data groups, the time intervals of each group must be standardized so they can be input into the time series model. Because the data have been log-transformed, the recording times are approximately evenly spaced, so linear interpolation is used to unify the recording times of each data group;
  • Different input parameters have different dimensions and ranges. When loss functions are used to calculate errors, features with larger scales dominate model training; such input conditions can distort the data analysis and even prevent the model from converging. To eliminate dimensional differences between indicators, the data must be normalized (a minimal sketch of this preprocessing pipeline is given below).
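To make the pipeline above concrete, the following is a minimal sketch of the per-group preprocessing (60-day screening, log-transform of time, linear interpolation onto a uniform grid, and min-max normalization). The function and array names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def preprocess_group(times, creep, n_points=20):
    """Preprocess one test group: screen by length, log-transform time,
    resample to a uniform grid by linear interpolation, then normalize."""
    times = np.asarray(times, dtype=float)
    creep = np.asarray(creep, dtype=float)
    if times.max() < 60:                       # drop groups shorter than 60 days
        return None
    mask = (times >= 1) & (times <= 60)        # keep the 1-60 day window
    log_t, y = np.log10(times[mask]), creep[mask]
    grid = np.linspace(log_t.min(), log_t.max(), n_points)  # uniform log-time grid
    y_grid = np.interp(grid, log_t, y)         # linear interpolation in log-time
    y_norm = (y_grid - y_grid.min()) / (y_grid.max() - y_grid.min() + 1e-12)
    return grid, y_norm
```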
Given that the B4 model is based on the widely accepted micro-prestressing solidification theory, it is reasonable to select the model’s input parameters according to this theory [33]. An analysis of the creep data in the database revealed that material properties such as concrete strength, elastic modulus, and mix ratio have the greatest influence on creep, followed by geometric characteristics and environmental factors like temperature and humidity. Based on the parameter selection of the B4 model [33] and the above analysis, 12 feature parameters were chosen as the model’s inputs, as presented in Table 1.
For variables fc28 and E28, if one is missing, the other can be estimated using Equation (1) [34]; if both are missing, the record is removed.
$E_{28} = 4734\sqrt{f_{c28}}$ (1)
The units of E28 and fc28 are MPa.
Because cement type is also a key factor in creep, the cement types were coded by the authors so that this factor could be incorporated into the model. Cement types, including unknown, normal-setting R, rapid-setting RS, and slow-setting SL, are uniformly encoded according to the rules in Table 2.
The creep database for low-carbon concrete materials required for this study was obtained through the above-mentioned processing approach. Table 3 describes the minimum, maximum, and average values of input and output values in the database.
Multicollinearity, i.e., interdependence among the model’s input variables, is discussed in [35]. Ideally, the correlation between variables should be less than 0.8 for the model to be considered high-precision [36]. A correlation coefficient heat map, presented in Figure 3, illustrates the relationships between the input variables and the model output. Given the unique nature of time-series models and the need for unified coding rules, time points and cement types are not discussed. The heat map clearly shows that the impact of multicollinearity is relatively small. Additionally, Figure 4 shows the joint distribution of the input and output variables, providing a clear visualization of their data ranges.
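As an illustration, the 0.8 multicollinearity screen can be reproduced with a few lines of pandas. The DataFrame and its column names below are toy stand-ins, not the paper's data; in practice the frame would hold the 12 input features of Table 1.

```python
import numpy as np
import pandas as pd

# Toy stand-in for the feature table (column names are illustrative).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((100, 4)), columns=["fc28", "E28", "w_c", "RH"])

corr = df.corr()                                     # Pearson correlation matrix
flagged = [(a, b) for a in corr.columns for b in corr.columns
           if a < b and abs(corr.loc[a, b]) > 0.8]   # pairs above the 0.8 threshold
print("Potentially collinear pairs:", flagged)
```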

2.2. Crested Porcupine Optimizer

The Crested Porcupine Optimizer (CPO) was proposed in 2024, inspired by the defensive behaviors of the crested porcupine (CP) [27]. The crested porcupine employs four defense mechanisms to counter different threats, ranging from the mildest to the most aggressive: visual, auditory, olfactory, and physical attack. The first two mechanisms (visual and auditory) primarily reflect the exploratory behavior of the crested porcupine, while the latter two (olfactory and physical attack) reflect its exploitative behavior. The algorithm simulates these four defense behaviors by dividing the space around the CP into four zones of increasing aggressiveness from outer to inner: the outermost (first) defense zone implements the visual defense, the second implements the auditory defense, the third implements the olfactory defense, and the fourth implements physical attacks. The defense mechanism of each zone is activated according to the threat level posed by the predator. This mechanism helps accelerate the convergence of the algorithm while maintaining population diversity. The key techniques of the algorithm are population initialization, the cyclic reduction of the population in each iteration, and the four defense strategies.
For population initialization, the CPO algorithm is a search process that starts from a set of initial individuals. It can be expressed by Equation (2):
$X_i = L + r \times (U - L), \quad i = 1, 2, \ldots, N$ (2)
where N represents the number of individuals, i.e., the population size; $X_i$ is the i-th candidate solution in the search space; L and U are the lower and upper bounds of the search range, respectively; and r is a vector of random values between 0 and 1.
With the cyclic population reduction (CPR) technique, some CPs are temporarily removed from the population during the optimization process to accelerate convergence, and are later reintroduced into the population to preserve diversity. The population size at each evaluation is given by Equation (3):
$N = N_{\min} + \left(N' - N_{\min}\right) \times \left(1 - \frac{t \bmod \left(T_{\max}/T\right)}{T_{\max}/T}\right)$ (3)
where N is the current population size, Nmin is the minimum population size, N′ is the initial population size, t is the current number of function evaluations, T is the number of reduction cycles, Tmax is the maximum number of function evaluations, and mod denotes the modulo operator.
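A minimal sketch of Equation (3) as code follows; the function and variable names mirror the equation and are assumptions, not a published implementation.

```python
def cpr_population_size(t, n_init, n_min, t_max, cycles):
    """Cyclic population reduction, Eq. (3): the population shrinks linearly
    within each of the `cycles` cycles and is restored when a new one starts."""
    cycle = t_max / cycles                 # length of one reduction cycle, T_max / T
    frac = (t % cycle) / cycle             # position inside the current cycle
    return int(n_min + (n_init - n_min) * (1 - frac))

# Example: 100 agents, floor of 20, 10000 evaluations split into 2 cycles.
sizes = [cpr_population_size(t, 100, 20, 10000, 2) for t in (0, 2500, 5000, 7500)]
print(sizes)  # the population is restored at the start of the second cycle
```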
For the four defense strategies:
1.
First line of defense strategy
When the CP becomes aware of the predator, the predator has two choices: continue approaching or move away. A random value drawn from a normal distribution is used to simulate these two options: when the value lies between −1 and 1, the predator moves closer to the CP; otherwise, it moves away. This is defined in Equation (4):
$x_i^{t+1} = x_i^t + T_1 \times \left| 2 \times T_2 \times x_{CP}^t - y_i^t \right|$ (4)
Here, $x_i^{t+1}$ and $x_i^t$ are the positions of the i-th individual at iterations t + 1 and t, respectively; $x_{CP}^t$ is the best solution obtained so far; $y_i^t$ is a point generated between the current CP and a randomly selected CP from the population, representing the position of the predator at iteration t; T1 is a random number drawn from a normal distribution; and T2 is a random value in the interval [0, 1]. $y_i^t$ is calculated as in Equation (5):
$y_i^t = \frac{x_i^t + x_r^t}{2}$ (5)
where r is a random integer in [1, N], and N is the population size.
2.
Second defense strategy.
In this strategy, the CP threatens predators by emitting sounds; as the predator approaches, the sound grows louder. This is expressed in Equation (6):
$x_i^{t+1} = (1 - U_1) \times x_i^t + U_1 \times \left( y_i^t + T_3 \times \left( x_{r_1}^t - x_{r_2}^t \right) \right)$ (6)
where r1 and r2 are two random integers in [1, N], U1 is a randomly generated binary vector, and T3 is a random value between 0 and 1.
3.
Third defense strategy.
In this strategy, the CP secretes foul-smelling gases that diffuse around it and prevent predators from approaching. This is expressed in Equation (7):
$x_i^{t+1} = (1 - U_1) \times x_i^t + U_1 \times \left( x_{r_1}^t + S_i^t \times \left( x_{r_2}^t - x_{r_3}^t \right) - T_3 \times \delta \times \gamma_t \times S_i^t \right)$ (7)
where r3 is a random integer in [1, N]; δ is a parameter controlling the search direction, defined by Equation (8); $x_i^t$ is the position of the i-th individual at iteration t; $\gamma_t$ is the defense coefficient defined by Equation (9); T3 is a random value in the interval [0, 1]; and $S_i^t$ is the odor diffusion factor, defined by Equation (10):
$\delta = \begin{cases} +1, & \text{if } rand \le 0.5 \\ -1, & \text{otherwise} \end{cases}$ (8)
$\gamma_t = 2 \times rand \times \left( 1 - \frac{t}{t_{\max}} \right)^{t / t_{\max}}$ (9)
$S_i^t = \exp\left( \frac{f\left(x_i^t\right)}{\sum_{k=1}^{N} f\left(x_k^t\right) + \varepsilon} \right)$ (10)
where $f(x_i^t)$ denotes the objective function value of the i-th individual at iteration t, ε is a small value used to avoid division by zero, rand is a vector of random values generated between 0 and 1, N is the population size, t is the current iteration number, and tmax is the maximum number of iterations.
4.
Fourth defense strategy.
Finally, the CP adopts a physical attack strategy. When the predator is very close, the CP strikes it with its short, thick quills. The two bodies collide violently, which is modeled as a one-dimensional inelastic collision and expressed by Equation (11):
$x_i^{t+1} = x_{CP}^t + \left( \alpha (1 - T_4) + T_4 \right) \times \left( \delta \times x_{CP}^t - x_i^t \right) - T_5 \times \delta \times \gamma_t \times F_i^t$ (11)
where $x_{CP}^t$ is the best solution obtained, representing the CP; $x_i^t$ is the position of the i-th individual (the predator) at iteration t; α is the convergence speed factor; T4 and T5 are random values in the interval [0, 1]; $\gamma_t$ is the defense coefficient; and $F_i^t$ is the average force exerted on the CP by the i-th predator.
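To show how the four strategies interact, the following is a minimal, simplified Python sketch of one CPO position update. It is a reading of Equations (4)–(11), not the reference implementation; in particular, the binary vector U1, the exploration/exploitation split, the tradeoff threshold Tf, and the simplified force term F follow the descriptions in [27] but are assumptions here.

```python
import numpy as np

def cpo_update(X, i, x_best, t, t_max, fitness, alpha=0.2, Tf=0.8):
    """One position update for individual i, choosing among the four
    defense strategies of CPO (a simplified sketch of Eqs. (4)-(11))."""
    N, dim = X.shape
    rand = np.random.rand
    gamma = 2 * rand() * (1 - t / t_max) ** (t / t_max)   # defense factor, Eq. (9)
    delta = 1.0 if rand() < 0.5 else -1.0                 # search direction, Eq. (8)
    U1 = (np.random.rand(dim) > rand()).astype(float)     # assumed binary vector
    r1, r2, r3 = np.random.randint(0, N, 3)
    if rand() < 0.5:                                      # exploration phase
        y = (X[i] + X[r1]) / 2                            # predator position, Eq. (5)
        if rand() < 0.5:                                  # 1st: visual, Eq. (4)
            x_new = X[i] + np.random.randn(dim) * np.abs(2 * rand() * x_best - y)
        else:                                             # 2nd: auditory, Eq. (6)
            x_new = (1 - U1) * X[i] + U1 * (y + rand() * (X[r1] - X[r2]))
    else:                                                 # exploitation phase
        S = np.exp(fitness[i] / (fitness.sum() + 1e-12))  # odor factor, Eq. (10)
        if rand() < Tf:                                   # 3rd: olfactory, Eq. (7)
            x_new = (1 - U1) * X[i] + U1 * (X[r1] + S * (X[r2] - X[r3])
                                            - rand() * delta * gamma * S)
        else:                                             # 4th: attack, Eq. (11)
            T4, T5 = rand(), rand()
            F = gamma * (X[r1] - X[i])                    # simplified average force
            x_new = x_best + (alpha * (1 - T4) + T4) * (delta * x_best - X[i]) \
                    - T5 * delta * gamma * F
    return x_new
```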

2.3. Improving the Crested Porcupine Optimizer

To mitigate local optima entrapment, premature convergence, parameter sensitivity, and poor adaptability to dynamic environments when solving the objective function, multiple strategic improvements were applied to the conventional CPO algorithm. The specific steps are as follows.

2.3.1. Logistic Chaotic Mapping Step Size Adjustment

The global search capability of the algorithm can be improved by Logistic chaotic mapping, whose ergodicity and non-repeatability help the search avoid falling into local optima [37]. Therefore, the step size (dynamic_step) in the original algorithm is changed from a fixed or simply random value to a chaos-driven one, achieving a better balance between the exploration and exploitation phases. The Logistic chaotic map is expressed as follows:
$x_{n+1} = \mu x_n (1 - x_n)$ (12)
where μ is the control parameter, typically in (0, 4], and xn is the value at the n-th iteration, in [0, 1]. When μ lies between 3.57 and 4, the Logistic map enters a chaotic state, and the generated sequence exhibits good randomness and ergodicity.
The adjusted expressions are as follows:
chaos_value = 4 × chaos_value × (1 − chaos_value) (13)
dynamic_step = chaos_value × (1 − t / Max_iterations) (14)
where chaos_value is the current value of the chaotic sequence and Max_iterations denotes the maximum number of iterations.
This adjustment improves the global search ability of the algorithm, avoids entrapment in local optima, and produces sequences with good randomness and ergodicity.
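A minimal sketch of Equations (13) and (14): the chaos value is iterated with μ = 4 and then scaled by the remaining iteration budget. The seed value of 0.7 is an arbitrary choice; any start in (0, 1) away from the map's fixed points works.

```python
def chaotic_step(chaos_value, t, max_iterations):
    """Logistic-map step-size update, Eqs. (13)-(14); mu = 4 keeps the map
    in its fully chaotic regime."""
    chaos_value = 4.0 * chaos_value * (1.0 - chaos_value)       # Eq. (13)
    dynamic_step = chaos_value * (1.0 - t / max_iterations)     # Eq. (14)
    return chaos_value, dynamic_step

chaos = 0.7                      # arbitrary seed in (0, 1)
for t in range(5):
    chaos, step = chaotic_step(chaos, t, 1000)
    print(t, round(step, 4))
```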

2.3.2. Dynamic Adjustment of Population Size

The dynamic random walk strategy adjusts the search dynamically during the iterative process of the optimization algorithm, according to the current search state and historical information, to enhance the algorithm's exploration ability and strengthen its capacity to jump out of local optima [38]. In this algorithm, dynamically adjusting the population size also avoids the waste of computational resources caused by an oversized population. The dynamic random walk is expressed as follows:
$X_i^{t+1} = X_i^t + step \times randn(0, 1)$ (15)
where $X_i^t$ denotes the position of the i-th individual in the t-th generation, step is the step size, and randn(0,1) denotes a random number drawn from the standard normal distribution.
On this basis, we improve the initial expression using Equations (16)–(18):
cycle_length = Max_iterations / 2 (16)
phase = rem(t, cycle_length) / cycle_length (17)
New_Search_Agents = fix(N_min + (Search_Agents − N_min) × (1 − phase × phase)) (18)
where New_Search_Agents is the new population size, N_min is the minimum population size, Search_Agents is the initial population size, phase is the parameter of the current stage, t is the current iteration number, and cycle_length is the cycle length.
This adjustment prevents premature convergence caused by too small a population size and improves the efficiency and robustness of the algorithm.
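Equations (16)–(18) translate directly into code; this sketch assumes integer iteration counters and uses Python's % and int() in place of MATLAB's rem and fix.

```python
def new_population_size(t, search_agents, n_min, max_iterations):
    """Cyclic population resizing, Eqs. (16)-(18): the population shrinks
    quadratically within each half-run cycle and is then restored."""
    cycle_length = max_iterations // 2                 # Eq. (16)
    phase = (t % cycle_length) / cycle_length          # Eq. (17), rem(...)
    return int(n_min + (search_agents - n_min) * (1 - phase * phase))  # Eq. (18)

print([new_population_size(t, 100, 20, 1000) for t in (0, 250, 499, 500)])
# -> [100, 80, 20, 100]: restored at the start of the second cycle
```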

2.3.3. Adaptive Adjustment Strategy

Adaptive adjustment is a parameter adjustment strategy that allows the algorithm to automatically tune its parameters based on the characteristics of the objective function [39]. Here, this strategy is applied to the exploration and exploitation probabilities (exploration_prob and exploitation_prob) and to the parameters alpha and Tf. The fixed exploration and exploitation probabilities of the original algorithm are replaced with probabilities that adapt with the number of iterations, as shown in Equations (19)–(22).
exploration_prob = 0.5 × (1 − t / Max_iterations) (19)
exploitation_prob = 1 − exploration_prob (20)
alpha = 0.2 × (1 − t / Max_iterations) (21)
Tf = 0.8 × (1 − t / Max_iterations) (22)
After this adjustment, the algorithm conducts broader exploration in its early stages and deeper exploitation in its later stages.
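A compact sketch of the schedules in Equations (19)–(22); all four quantities decay linearly with the iteration counter.

```python
def adaptive_params(t, max_iterations):
    """Linearly decaying parameter schedules, Eqs. (19)-(22)."""
    decay = 1.0 - t / max_iterations
    exploration_prob = 0.5 * decay                 # Eq. (19)
    exploitation_prob = 1.0 - exploration_prob     # Eq. (20)
    alpha = 0.2 * decay                            # Eq. (21)
    Tf = 0.8 * decay                               # Eq. (22)
    return exploration_prob, exploitation_prob, alpha, Tf
```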

2.3.4. Boundary Handling

Regarding boundary handling, algorithms such as particle swarm optimization [40] and genetic algorithms [41] also encounter boundary issues during the optimization process. Furthermore, simplifying the boundary handling mechanism and reducing computational complexity are important in many practical applications, as complex boundary handling can slow the algorithm down and cause it to become stuck in a local optimum.
Therefore, in order to improve the stability and reliability of the algorithm, the complex boundary handling mechanism in the original algorithm was simplified to a more concise and effective method without affecting the functionality of the original algorithm. The simplified expression is as follows:
X(i, :) = max(min(X(i, :), Upperbound), Lowerbound) (23)
This adjustment allows the algorithm’s computational complexity to be reduced while ensuring that the search agent always remains within the valid search space.
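In NumPy, the clamp of Equation (23) is a single vectorized call; the search box below is illustrative.

```python
import numpy as np

lower_bound, upper_bound = -10.0, 10.0          # illustrative search box
X = np.random.uniform(-15, 15, size=(5, 3))     # agents, possibly out of bounds
X = np.clip(X, lower_bound, upper_bound)        # Eq. (23): max(min(X, U), L)
```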

2.4. Combined Implementation and Specific Steps

The specific steps of the algorithm’s improvement are shown in Figure 5.

2.5. Test Function

In the study of optimization algorithms, various test functions are employed to verify performance improvements on problems of varying complexity [42]. To validate the performance of the improved CPO, we select four representative functions: the Sphere function, the Max function, the Step function, and the Sum of Absolute Values and Product function. Using the distinct characteristics of these four functions, we conduct comparative experiments; the test functions are listed in Table 4. Functions f1 and f2 in Table 4 are unimodal test functions used to assess the optimization accuracy of the algorithm, while f3 and f4 are multimodal test functions used to evaluate the algorithm's ability to escape local optima. The search ranges of these functions are set according to each function's mathematical properties and the practical constraints of the problem, so that the suite comprehensively and accurately assesses algorithm performance in different scenarios, covering both global exploration challenges and local optimization tests. The optimization results on these four functions are analyzed in terms of optimization accuracy, convergence speed, and stability.
To compare the convergence speed and optimization accuracy of the algorithm before and after improvement more intuitively, the number of search agents (population size) is fixed at 100 and the maximum number of iterations at 1000. Figure 6 shows the optimization iteration curves for the four test functions, with the horizontal axis giving the number of iterations and the vertical axis the best objective function value found over the successive iterations. The ACCPO objective function curve decreases rapidly as the number of iterations increases, indicating that optimization performance is greatly improved.
Then, the solutions of ACCPO are compared in the 3D graphs derived from the test functions, and the optimal objective function values are derived under the same fixed settings (100 search agents, a maximum of 1000 iterations), as shown in Figure 7 and Table 5. Combining Figure 7 and Table 5, it can be seen that the combined improvement strategies effectively enhance the performance of the algorithm on these test functions.

3. Machine Learning Models

Two sliding-window schemes are used to convert the creep series into supervised training samples. Figure 8 shows a group with n time points: the first four points are read each time to predict the next point, and so on up to the n-th point; i.e., the sliding window slides node by node within the same test group.
The second case is sliding across different experimental groups, as shown in Figure 9. Here, after training on one group of data is completed, the sliding window jumps directly to the beginning of the next sample and begins traversing the second group, and so on, until all the data have been trained; a minimal sketch of this windowing is given below.
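The sketch below builds supervised pairs under both schemes, assuming a window of four points as in Figure 8; the helper name and the window-length default are illustrative.

```python
import numpy as np

def make_windows(groups, window=4):
    """Build (input, target) pairs with a sliding window. The window slides
    node by node inside a test group (Figure 8) and jumps to the start of
    the next group rather than crossing group boundaries (Figure 9)."""
    X, y = [], []
    for series in groups:                       # one creep series per test group
        for k in range(len(series) - window):
            X.append(series[k:k + window])      # the previous `window` points
            y.append(series[k + window])        # the next point to predict
    return np.array(X), np.array(y)

X, y = make_windows([np.linspace(0, 1, 10), np.linspace(0, 2, 8)])
print(X.shape, y.shape)   # (10, 4) (10,)
```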

3.1. ANN Model

The Artificial Neural Network (ANN) is not designed to account for the properties of time series per se [43]. The inputs and outputs of an ANN are independent, and it cannot capture temporal dependencies automatically the way a specialized LSTM can. Therefore, a sliding window is practically essential when processing time series with an ANN: it transforms the time series data into a format the ANN can use, allowing time series models to be trained in this way. An ANN usually consists of input, hidden, and output layers, and ANNs are widely used in model prediction due to their powerful nonlinear mapping capabilities [44].
Determining the optimal number of hidden neurons in an ANN model is very important. Therefore, in this manuscript, we first use an empirical formula to determine a range of values and then use the optimization algorithm to identify the optimal value. The expression is
$h = \sqrt{m + n} + a$ (24)
where m represents the number of nodes in the input layer, n represents the number of nodes in the output layer, and a is an integer in (1, 10).
Secondly, the ANN's weights and biases are optimized using the established ACCPO. Figure 10 shows the input, hidden, and output layers of the ANN.

3.2. RF Model

Random Forest is an ensemble learning method that obtains a final prediction by voting on or averaging the results of individual trees [45]. It is not specifically designed for time series data per se, but when dealing with time series problems the model can be adapted to achieve the desired results [46].
The key parameters of Random Forest mainly include the number of trees (more decision trees usually improve performance, but the computational cost increases accordingly) and the number of randomly selected features considered at each node split (usually the square root or the logarithm of the total number of features). In general, the choice of the number of features affects the bias and variance of the model. Figure 11 shows the RF model.
The performance of RF depends largely on its hyperparameter settings, such as the number of decision trees (n_estimators) and the maximum depth of each tree (max_depth). Tuning these hyperparameters with the established ACCPO therefore helps the RF achieve better accuracy and efficiency in the time series prediction task; an illustrative configuration is sketched below.
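For illustration, a scikit-learn configuration exposing these hyperparameters might look as follows; the specific values are placeholders standing in for the ones ACCPO would select, and the toy data merely make the snippet runnable.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy windowed data standing in for the 4-point creep windows.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, 4)), rng.random(200)

rf = RandomForestRegressor(
    n_estimators=200,      # number of decision trees (placeholder value)
    max_depth=12,          # maximum tree depth (placeholder value)
    max_features="sqrt",   # features considered at each split: sqrt of the total
    min_samples_leaf=2,    # minimum samples per leaf node
    random_state=42,
)
rf.fit(X_train, y_train)
print(rf.predict(X_train[:3]))   # next-step predictions for three windows
```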

3.3. LSTM Model

Compared with the machine learning models above, the Long Short-Term Memory recurrent neural network (LSTM) is itself suited to time series prediction [47]. For LSTM, the weights and biases, the settings of key hyperparameters, and the choice of sliding window have a large impact on the model. In this study, a new LSTM is constructed and trained on the processed dataset.
Regarding weights and biases in LSTM, LSTM recurrent neural networks introduce memory units (memory states) to control data transmission between hidden layers. A memory unit in an LSTM network consists of three gate structures: input gates, forget gates, and output gates. The input gates determine how much of the current input is retained in the current unit state; the forget gates determine how much of the previous unit state is retained in the current unit state; and the output gates determine how much of the current unit state is output. The structure is shown in Figure 12.
The structural functions of each gate of the LSTM network are shown below in Equations (25)–(27),
$I_t = \sigma\left( W_{i1} x_t + W_{i2} h_{t-1} + b_i \right)$ (25)
$f_t = \sigma\left( W_{f1} x_t + W_{f2} h_{t-1} + b_f \right)$ (26)
$O_t = \sigma\left( W_{O1} x_t + W_{O2} h_{t-1} + b_O \right)$ (27)
where It, ft, and Ot are the vector values of the input, forget, and output gates of an LSTM node at time t, respectively; σ is the sigmoid activation function; xt is the input at time t; bi, bf, and bO are the corresponding gate biases; the W·1 terms are the connection weights between the input node and the hidden node; the W·2 terms are the connection weights between the hidden node and the output node; and ht−1 is the output at time t − 1, representing the hidden state of the LSTM.
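A minimal NumPy sketch of one LSTM time step built from Equations (25)–(27). The candidate state g_t and the cell/hidden updates, which the equations above do not list explicitly, follow the standard LSTM cell, and the dictionary-based parameter layout is an assumption for readability.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step with the gates of Eqs. (25)-(27)."""
    i_t = sigmoid(W["i1"] @ x_t + W["i2"] @ h_prev + b["i"])   # input gate, Eq. (25)
    f_t = sigmoid(W["f1"] @ x_t + W["f2"] @ h_prev + b["f"])   # forget gate, Eq. (26)
    o_t = sigmoid(W["o1"] @ x_t + W["o2"] @ h_prev + b["o"])   # output gate, Eq. (27)
    g_t = np.tanh(W["g1"] @ x_t + W["g2"] @ h_prev + b["g"])   # candidate state
    c_t = f_t * c_prev + i_t * g_t                             # new cell state
    h_t = o_t * np.tanh(c_t)                                   # new hidden state
    return h_t, c_t
```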
For the optimization of the weights and biases, we use the Adam algorithm, which combines the advantages of two optimization algorithms, AdaGrad [48] and RMSProp [49]. It jointly considers the first-order and second-order moment estimates of the gradient and adapts the learning rate based on these estimates, with the following expressions:
$m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$ (28)
$v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$ (29)
where mt and vt are the first-order and second-order moment estimates of the current gradient, gt is the current gradient value, and β1 and β2 are the decay coefficients.
The values of mt and vt are usually corrected for bias; the corrected Adam update is as follows:
$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \varepsilon} \hat{m}_t$ (30)
where the bias-corrected moment estimates are given by
$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}$ (31)
$\hat{v}_t = \frac{v_t}{1 - \beta_2^t}$ (32)
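A minimal sketch of the full Adam update of Equations (28)–(32); the default η, β1, β2, and ε are the values commonly used in practice, not ones reported by the paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, eta=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam parameter update with bias-corrected moments, Eqs. (28)-(32)."""
    m = beta1 * m + (1 - beta1) * grad               # first moment, Eq. (28)
    v = beta2 * v + (1 - beta2) * grad ** 2          # second moment, Eq. (29)
    m_hat = m / (1 - beta1 ** t)                     # bias correction, Eq. (31)
    v_hat = v / (1 - beta2 ** t)                     # bias correction, Eq. (32)
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)   # update, Eq. (30)
    return theta, m, v
```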
Finally, when using LSTM models for time series prediction, selecting and optimizing hyperparameters is crucial for model performance. The hyperparameters that need to be optimized include the number of hidden layer nodes, the learning rate, and the batch size.

3.4. Model Evaluation Indicators

3.4.1. Single Indicator

In this study, the root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R2) are used to evaluate model performance. R2 mainly measures the correlation between the actual and predicted values: the closer R2 is to 1 and the smaller the RMSE and MAE, the higher the model accuracy. The mathematical expressions of the three evaluation metrics are as follows:
$R^2 = \frac{\left[ \sum_{k=1}^{N} \left( q_{0,k} - \bar{q}_0 \right)\left( q_{t,k} - \bar{q}_t \right) \right]^2}{\sum_{k=1}^{N} \left( q_{0,k} - \bar{q}_0 \right)^2 \sum_{k=1}^{N} \left( q_{t,k} - \bar{q}_t \right)^2}$ (33)
$MAE = \frac{1}{N} \sum_{k=1}^{N} \left| q_{0,k} - q_{t,k} \right|$ (34)
$RMSE = \sqrt{\frac{1}{N} \sum_{k=1}^{N} \left( q_{0,k} - q_{t,k} \right)^2}$ (35)
where N denotes the number of samples; $q_{0,k}$ denotes the actual value; $\bar{q}_0$ denotes the mean of the actual values; $q_{t,k}$ denotes the output value; $\bar{q}_t$ denotes the mean of the outputs; and $k \in [1, N]$.
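The three metrics of Equations (33)–(35) can be computed as follows; R2 is evaluated here in the squared-correlation form of Equation (33).

```python
import numpy as np

def evaluate(q0, qt):
    """Single-metric evaluation: R2 (Eq. 33), MAE (Eq. 34), RMSE (Eq. 35)."""
    q0, qt = np.asarray(q0, float), np.asarray(qt, float)
    cov = np.sum((q0 - q0.mean()) * (qt - qt.mean()))
    r2 = cov ** 2 / (np.sum((q0 - q0.mean()) ** 2) * np.sum((qt - qt.mean()) ** 2))
    mae = np.mean(np.abs(q0 - qt))
    rmse = np.sqrt(np.mean((q0 - qt) ** 2))
    return r2, mae, rmse
```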

3.4.2. Composite Indicators

In order to compare the prediction performance of different types of machine learning models, this paper combines the above three single statistical indicators (Equations (33)–(35)) into one comprehensive indicator, the Synthesis Performance Index (SPI) [50], as shown in Equation (36).
$SPI = \frac{1}{N} \sum_{j=1}^{N} \frac{P_j - P_{\min,j}}{P_{\max,j} - P_{\min,j}}$ (36)
where N is the number of statistical indicators used to measure prediction performance; here, N = 3 because R2, RMSE, and MAE are selected. Pj is the j-th statistical parameter, and Pmax,j and Pmin,j are the maximum and minimum values of the j-th parameter over the set of machine learning models considered. As Equation (36) shows, SPI lies in [0, 1]: the closer the SPI value is to 0, the better the overall prediction performance of the corresponding model; conversely, the closer it is to 1, the worse the performance. The SPI values obtained for the different machine learning models are therefore used to rank their prediction performance.
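A direct transcription of Equation (36) follows; `metrics` holds one model's values of the three indicators, and the min/max vectors are taken over all models being compared. Equation (36) is applied as stated; any direction-of-merit handling for R2 (where larger is better) is left to the cited formulation [50].

```python
import numpy as np

def spi(metrics, metrics_min, metrics_max):
    """Synthesis Performance Index, Eq. (36); a smaller SPI indicates
    better overall prediction performance."""
    P = np.asarray(metrics, float)
    Pmin = np.asarray(metrics_min, float)
    Pmax = np.asarray(metrics_max, float)
    return float(np.mean((P - Pmin) / (Pmax - Pmin)))
```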

4. Model Training Results

4.1. ACCPO-ANN Model

Based on the characteristics of the database and ANN, a three-layer feedforward ACCPO-ANN model was constructed. The range of the number of hidden layer neurons was obtained using Equation (24). The optimal value of 11 was obtained by optimizing the model using the ACCPO algorithm. In addition to these basic parameters, other relevant coefficients need to be determined, such as the maximum number of iterations and the learning rate. The parameters of the ACCPO-ANN model after hyperparameter optimization and global optimization are shown in Table 6.
Table 7 presents the evaluation metrics results of the three ANN models. After introducing the ACCPO algorithm, the predictive capability of the ANN models was significantly improved. The ACCPO-ANN model outperformed the CPO-ANN and traditional ANN models in both the training and testing datasets, achieving an R2 value of 0.9510 in explaining data variance and demonstrating excellent performance in reducing prediction errors (RMSE and MAE). Related studies indicate that if the R2 value is higher than 0.9, the model can be considered excellent [51]. Therefore, the ACCPO-ANN model is the most outstanding.
Figure 13 shows the regression scatter plots comparing the actual and predicted values of the three ANN models; the points represent data points, and the red dashed line indicates the ideal prediction line. Figure 14 shows the residual plots of the three models; the pink points represent residuals, and the red dashed line indicates the zero-error line. Combining the two figures, the data points of the traditional ANN model are generally distributed along the ideal prediction line but show deviations, and the residuals fluctuate over a wide range: the model captures the overall trend of the data, but its prediction accuracy is poor in some regions. The data points of the CPO-optimized ANN model cluster more closely around the ideal prediction line, especially in the region of smaller actual values, and the residual fluctuation range is significantly reduced, giving higher accuracy for smaller values. The ACCPO-optimized ANN model shows high prediction accuracy in both the smaller- and larger-value regions, with residuals concentrated near the zero-error line and a very small fluctuation range. This indicates that ACCPO successfully optimizes the ANN model and that the hybrid algorithm is effective.

4.2. ACCPO-RF Model

The combination of algorithms and RF is intended to optimize the number of random forest decision trees, leaf nodes, and the number of splitting features. The parameter configuration of the ACCPO-RF model is shown in Table 8.
Analysis of Table 9 shows that the RF model performs better on the test set than on the training set, indicating that the model generalizes well on the time series task. The optimization algorithms also have a positive impact on model performance: with the hybrid optimization, the model's performance improves by a small margin.
By analyzing Figure 15 and Figure 16, it can be observed that as the model is optimized, the range of residuals gradually narrows, particularly in regions with higher actual values, where the fluctuation range of residuals significantly decreases, indicating that the model’s prediction error gradually decreases. The correlation between predicted values and actual values gradually increases, with predicted values becoming more concentrated and closer to the ideal prediction line, indicating that the model’s predictive capability is gradually improving. The ACCPO-RF model outperforms the other two models in terms of fitting ability and generalization ability.

4.3. ACCPO-LSTM Model

In this study, we examine in depth the strategy for optimizing LSTM for the time series prediction task. The weights and biases, the sliding window, and several hyperparameters are optimized to achieve higher prediction accuracy. The parameters of the ACCPO-LSTM model are configured as in Table 10.
Table 11 shows the performance metrics of the ACCPO-LSTM model on different datasets. The ACCPO-LSTM model achieves better performance on both the training set and the test set compared to the traditional LSTM model and the CPO-LSTM model, indicating that it has obvious advantages in prediction accuracy. These results fully verify the effectiveness of the proposed sliding window optimization setting and hyperparameter tuning strategy.
The graph of Test_Set_Actual_vs_Predicted in Figure 17 clearly demonstrates the high degree of consistency between the model’s predicted and actual values in terms of the overall trend and fluctuation patterns. The model is able to accurately capture and restore both the short-term fluctuations and long-term trends of the data, which fully reflects its strong ability to capture various types of features in time series data. From the Test_Residual_Plot in Figure 18, we see that the residual distribution of the ACCPO-LSTM model is relatively uniform and random, which indicates that the model fits the time series data adequately and there is no obvious systematic bias. Meanwhile, the absolute values of the residual values are generally small, indicating that the difference between the predicted and actual values of the model is small, which further validates the high prediction accuracy of the model.

5. Results

We divide the nine different models into training sets and test sets to obtain two evaluation metric summary tables, as shown in Table 12 and Table 13.
  • Firstly, looking at Section 2.5 (Test Function), Figures 6 and 7 combined with Table 5 show that, after initial verification on the test functions, the performance of the CPO algorithm gradually improved into ACCPO is further enhanced, implying that the improvement of the algorithm is successful.
  • Secondly, the performances of the nine models on the training and test sets were compared using the three evaluation metrics, as shown in Figure 19. It can be seen that using the CPO algorithm for model performance enhancement is feasible, and that the improved ACCPO yields further gains over CPO, again verifying that the algorithm improvement is successful. Further, the ACCPO-LSTM model performed best overall: on the training set it achieved an R2 of 0.9784, an RMSE of 0.0205, and an MAE of 0.0096, surpassing all other models, and on the test set it maintained high accuracy, with an R2 of 0.9524, an RMSE of 0.0317, and an MAE of 0.014.
  • Thirdly, based on the three evaluation metrics, Figure 20 shows the effect of using the ACCPO algorithm to enhance the ANN, RF, and LSTM models on the training and test sets. The ACCPO-LSTM model exhibits the best performance on both sets, achieving the highest R2 and the lowest RMSE and MAE. It is thus evident that this model surpasses the other two.
Figure 19. Three evaluation metrics corresponding to the nine models: (a) training sets; (b) testing sets.
Figure 20. Three evaluation metrics for the ACCPO optimization models: (a) training sets; (b) testing sets.
This is then combined with the residual plots of the training and test sets, as shown in Figure 21 and Figure 22. The residuals of the ACCPO-LSTM model are more tightly clustered around the horizontal axis, indicating a closer alignment between predicted and actual values. Thus, residual analysis confirms that the ACCPO-LSTM model possesses the strongest predictive capability.
Thus, analysis of the three evaluation indicators together with the residual analysis leads to the following conclusion: the LSTM model enhanced by the ACCPO algorithm performs the best.
  • Finally, a comparative analysis of the models' composite indicator allows a clearer and more intuitive comparison, as shown in Figure 23. Since the SPI value ranges from 0 to 1, a smaller value indicates better overall model predictability. The radar chart clearly shows that the ACCPO-LSTM model achieves the smallest SPI value among all the machine learning models, indicating its superior overall predictive performance.
Overall, from the above four results, it can be concluded that the improvement of the algorithm is successful. The ACCPO optimization method performed well across multiple models, significantly enhancing their predictive capabilities, particularly for LSTM and ANN. Further, while RF exhibited weaker training set performance, its test set performance improved following ACCPO optimization.

6. Conclusions

This study investigates the creep of low-carbon concrete materials using machine learning techniques.
  • The original algorithm was enhanced with additional strategies, and its effectiveness was first tested using representative benchmark functions. The results show that the improved algorithm performs significantly better than the original;
  • By modifying the source code of the three models (ANN, RF, and LSTM) and training them on the time series data, it was shown that training the three models directly is feasible, but the results are generally mediocre;
  • The models were optimized using CPO and the proposed ACCPO, and the optimized models were used to train the data, improving model performance and achieving the best training results;
  • The predictive abilities of the models were evaluated using single indicators (R2, RMSE, MAE) and comprehensive indicators (SPI). The results show that the performances of the models optimized by ACCPO were improved, with ACCPO-LSTM being the optimal model;
  • The results of this study indicate that time series models demonstrate significantly superior computational performance compared to other types of models when studying subjects with time-dependent characteristics (such as creep). Therefore, in studies of building materials with such characteristics, it is recommended to prioritize time series models to obtain optimal computational results. Finally, this study still has limitations, such as validating the improved algorithm with only four test functions; future studies should consider expanding the number of test functions.

Author Contributions

Conceptualization, Z.Z. and H.L.; methodology, T.Y., J.C. and Y.W.; software, Z.Z.; validation, Z.Z.; formal analysis, K.W., T.Y., J.C. and Y.W.; investigation, Z.Z.; resources, Z.Z.; data curation, Z.Z.; writing—original draft preparation, Z.Z.; writing—review and editing, Z.Z., H.L., T.Y., J.C. and Y.W.; visualization, Z.Z.; supervision, H.L.; project administration, H.L.; funding acquisition, H.L., K.W. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Innovation Demonstration Base for Ecological Environment Geology and River-Lake Ecological Restoration and the Science and Technology Demonstration Project of the Ministry of Housing and Urban-Rural Development of the People’s Republic of China (2021-s-021).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Keyang Wu is employed by the Wuhan Construction Engineering Group Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Maldar, M.; Kianoush, M.R.; Lachemi, M. Time-dependant effects on curved precast segmentally constructed balanced cantilever bridges. Eng. Struct. 2024, 310, 118147. [Google Scholar] [CrossRef]
  2. Huang, X.; Kwon, O.-S.; Bentz, E.; Tcherner, J. Evaluation of CANDU NPP containment structure subjected to aging and internal pressure increase. Nucl. Eng. Des. 2017, 314, 82–92. [Google Scholar] [CrossRef]
  3. Silfwerbrand, J.L.; Farhang, A.A. Reducing crack risk in industrial concrete floors. ACI Mater. J. 2014, 111, 681–689. [Google Scholar] [CrossRef]
  4. Khan, M.I.; Siddique, R. Utilization of silica fume in concrete: Review of durability properties. Resour. Conserv. Recycl. 2011, 57, 30–35. [Google Scholar] [CrossRef]
  5. Sasanipour, H.; Aslani, F.; Taherinezhad, J. Effect of silica fume on durability of self-compacting concrete made with waste recycled concrete aggregates. Constr. Build. Mater. 2019, 227, 116598. [Google Scholar] [CrossRef]
  6. Hamada, H.M.; Abed, F.; Katman, H.Y.B.; Humada, A.M.; Al Jawahery, M.S.; Majdi, A.; Yousif, S.T.; Thomas, B.S. Effect of silica fume on the properties of sustainable cement concrete. J. Mater. Res. Technol. 2023, 24, 8887–8908. [Google Scholar] [CrossRef]
  7. Sahoo, S.; Parhi, P.K.; Panda, B.C. Durability properties of concrete with silica fume and rice husk ash. Clean. Eng. Technol. 2021, 2, 100067. [Google Scholar] [CrossRef]
  8. Lavergne, F.; Barthélémy, J.F. Confronting a refined multiscale estimate for the aging basic creep of concrete with a comprehensive experimental database. Cem. Concr. Res. 2020, 136, 106163. [Google Scholar] [CrossRef]
  9. Hilloulin, B.; Tran, V.Q. Interpretable machine learning model for autogenous shrinkage prediction of low-carbon cementitious materials. Constr. Build. Mater. 2023, 396, 132343. [Google Scholar] [CrossRef]
  10. Bouras, Y.; Li, L. Utilisation of machine learning techniques to model creep behaviour of low-carbon concretes. Buildings 2023, 13, 2252. [Google Scholar] [CrossRef]
  11. Hyde, T.H.; Sun, W.; Williams, J.A. Requirements for and use of miniature test specimens to provide mechanical and creep properties of materials: A review. Int. Mater. Rev. 2007, 52, 213–255. [Google Scholar] [CrossRef]
  12. Bu, P.; Li, Y.; Li, Y.; Wen, L.; Wang, J.; Zhang, X. Creep damage coupling model of concrete based on the statistical damage theory. J. Build. Eng. 2023, 63, 105437. [Google Scholar] [CrossRef]
  13. Lindley, S.; Brodsky, R.; Dahiya, A.; Roberts-Wollmann, C.L.; Koutromanos, I. Continued Monitoring of the Varina-Enon Bridge: Estimation of Effective Prestress; Virginia Transportation Research Council (VTRC): Charlottesville, VA, USA, 2022. [Google Scholar]
  14. Cao, J.; Zeng, P.; Liu, T.; Tu, B. Influence of mineral powder content and loading age on creep behavior of concrete members under axial compression. Results Eng. 2023, 19, 101304. [Google Scholar] [CrossRef]
  15. Walraven, J.; Bigaj, A. The 2010 fib Model Code for Structural Concrete: A new approach to structural engineering. Struct. Concr. 2011, 12, 139–147. [Google Scholar]
  16. ACI Committee. Prediction of Creep, Shrinkage and Temperature Effects in Concrete Structures; American Concrete Institute: Farmington Hills, MI, USA, 1992. [Google Scholar]
  17. RILEM Technical Committee. RILEM draft recommendation: TC-242.MDC multi-decade creep and shrinkage of concrete: Material model and structural analysis. Mater. Struct. 2015, 48, 753–770. [Google Scholar] [CrossRef]
  18. Taha, M.M.R.; Noureldin, A.; El-Sheimy, N.; Shrive, N.G. Artificial neural networks for predicting creep with an example application to structural masonry. Can. J. Civ. Eng. 2003, 30, 523–532. [Google Scholar] [CrossRef]
  19. Abed, M.M.; El-Shafie, A.; Osman, S.A.B. Creep predicting model in masonry structure utilizing dynamic neural network. J. Comput. Sci. 2010, 6, 597. [Google Scholar] [CrossRef]
  20. Karthikeyan, J.; Upadhyay, A.; Bhandari, N.M. Artificial neural network for predicting creep and shrinkage of high performance concrete. J. Adv. Concr. Technol. 2008, 6, 135–142. [Google Scholar] [CrossRef]
  21. El-Shafie, A.; Aminah, S. Dynamic versus static artificial neural network model for masonry creep deformation. Proc. Inst. Civ. Eng.-Struct. Build. 2013, 166, 355–366. [Google Scholar] [CrossRef]
  22. Nejati, F.; Mansourkia, A. Prediction of the compressive strength of lightweight concrete containing industrial and waste steel fibers using a multilayer synthetic neural network. Adv. Bridge Eng. 2023, 4, 20. [Google Scholar] [CrossRef]
  23. Wang, H. Research On Concrete Creep Based On Ensemble Learning and LSTM Artificial Intelligence Algorithms; Beijing Jiaotong University: Beijing, China, 2020. [Google Scholar]
  24. Bui-Tien, T.; Nguyen-Chi, T.; Le-Xuan, T.; Tran-Ngoc, H. Enhancing bridge damage assessment: Adaptive cell and deep learning approaches in time-series analysis. Constr. Build. Mater. 2024, 439, 137240. [Google Scholar] [CrossRef]
  25. Liu, J.; Cheng, C.; Zheng, C.; Wang, X.; Wang, L. Rutting prediction using deep learning for time series modeling and K-means clustering based on RIOHTrack data. Constr. Build. Mater. 2023, 385, 131515. [Google Scholar] [CrossRef]
  26. Ly, H.B.; Nguyen, T.A.; Tran, V.Q. Development of deep neural network model to predict the compressive strength of rubber concrete. Constr. Build. Mater. 2021, 301, 124081. [Google Scholar] [CrossRef]
  27. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowl.-Based Syst. 2024, 284, 111257. [Google Scholar] [CrossRef]
  28. Wu, X.-T.; Guo, X.; Yuan, X.-H.; Yan, L.-J.; Zeng, Z.-Q.; Lu, T. Monthly Runoff Interval Prediction Based on Crested Porcupine Optimizer CNN BiLSTM and Kernel Density Estimation. J. Chang. River Sci. Res. Inst. 2024, 7, 81. [Google Scholar]
  29. Ma, L.; Gao, W.; Tuo, L.; Zhang, P. Characteristics and prediction methods of coal spontaneous combustion for deep coal mining in the Ximeng mining area. Coal Geol. Explor. 2025, 53, 33–43. [Google Scholar] [CrossRef]
  30. Hamad, Q.S.; Saleh, S.A.M.; Suandi, S.A.; Samma, H.; Hamad, Y.S.; Hussien, A.G. A Review of Enhancing Sine Cosine Algorithm: Common Approaches for Improved Metaheuristic Algorithms. Arch. Comput. Methods Eng. 2024, 32, 2549–2606. [Google Scholar] [CrossRef]
  31. Huang, X.-Y.; Wu, K.-Y.; Wang, S.; Lu, T.; Lu, Y.-F.; Deng, W.-C.; Li, H.-M. Compressive strength prediction of rubber concrete based on artificial neural network model with hybrid particle swarm optimization algorithm. Materials 2022, 15, 3934. [Google Scholar] [CrossRef]
  32. Hubler, M.H.; Wendner, R.; Bazant, Z.P. Comprehensive database for concrete creep and shrinkage: Analysis and recommendations for testing and recording. ACI Mater. J. 2015, 112, 547. [Google Scholar] [CrossRef]
  33. Chen, Y.X. Prediction and Analysis of Concrete Shrinkage and Creep Behavior Based on LSTM Deep Learning Theory. Master’s Thesis, Xihua University, Chengdu, China, 2021. [Google Scholar]
  34. Zhu, J.; Wang, Y. Convolutional neural networks for predicting creep and shrinkage of concrete. Constr. Build. Mater. 2021, 306, 124868. [Google Scholar] [CrossRef]
  35. Dunlop, P.; Smith, S. Estimating key characteristics of the concrete delivery and placement process using linear regression analysis. Civ. Eng. Environ. Syst. 2003, 20, 273–290. [Google Scholar] [CrossRef]
  36. Smith, G.N. Probability and Statistics in Civil Engineering; Collins professional and technical books; Collins: London, UK, 1986; p. 244. [Google Scholar]
  37. Alawida, M.; Teh, J.S.; Mehmood, A.; Shoufan, A.; Alshoura, W.H. A chaos-based block cipher based on an enhanced logistic map and simultaneous confusion-diffusion operations. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 8136–8151. [Google Scholar] [CrossRef]
  38. Bao, Z.; Cui, G.; Chen, J.; Sun, T.; Xiao, Y. A novel random walk algorithm with compulsive evolution combined with an optimum-protection strategy for heat exchanger network synthesis. Energy 2018, 152, 694–708. [Google Scholar] [CrossRef]
  39. Kong, Z.; Yang, Q.F.; Zhao, J.; Xiong, J. Whale Optimization Algorithm Based on Adaptive Weight and Search Strategy. J. Northeast. Univ. (Nat. Sci.) 2020, 41, 35. (In Chinese) [Google Scholar]
40. Nayak, J.; Swapnarekha, H.; Naik, B.; Dhiman, G.; Vimal, S. 25 years of particle swarm optimization: Flourishing voyage of two decades. Arch. Comput. Methods Eng. 2023, 30, 1663–1725. [Google Scholar] [CrossRef]
  41. Sohail, A. Genetic algorithms in the fields of artificial intelligence and data sciences. Ann. Data Sci. 2023, 10, 1007–1018. [Google Scholar] [CrossRef]
  42. Halim, A.H.; Ismail, I.; Das, S. Performance assessment of the metaheuristic optimization algorithms: An exhaustive review. Artif. Intell. Rev. 2021, 54, 2323–2409. [Google Scholar] [CrossRef]
  43. Topçu, İ.B.; Sarıdemir, M. Prediction of rubberized concrete properties using artificial neural network and fuzzy logic. Constr. Build. Mater. 2008, 22, 532–540. [Google Scholar] [CrossRef]
  44. Duan, Z.H.; Kou, S.C.; Poon, C.S. Prediction of compressive strength of recycled aggregate concrete using artificial neural networks. Constr. Build. Mater. 2013, 40, 1200–1206. [Google Scholar] [CrossRef]
  45. Zhou, Q.; Zhou, H.; Li, T. Cost-sensitive feature selection using random forest: Selecting low-cost subsets of informative features. Knowl.-Based Syst. 2016, 95, 1–11. [Google Scholar] [CrossRef]
  46. Parmezan, A.R.S.; Souza, V.M.A.; Batista, G.E. Evaluation of statistical and machine learning models for time series prediction: Identifying the state-of-the-art and the best conditions for the use of each model. Inf. Sci. 2019, 484, 302–337. [Google Scholar] [CrossRef]
  47. Siłka, J.; Wieczorek, M.; Woźniak, M. Recurrent neural network model for high-speed train vibration prediction from time series. Neural Comput. Appl. 2022, 34, 13305–13318. [Google Scholar] [CrossRef]
  48. Duchi, J.; Hazan, E.; Singer, Y. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. J. Mach. Learn. Res. 2011, 12, 2121–2159. [Google Scholar]
  49. Tieleman, T.; Hinton, G. Lecture 6.5-rmsprop: Divide the Gradient by a Running Average of Its Recent Magnitude. COURSERA Neural Netw. Mach. Learn. 2012, 4, 26–31. [Google Scholar]
  50. Cook, R.; Lapeyre, J.; Ma, H.; Kumar, A. Prediction of compressive strength of concrete: Critical comparison of performance of a hybrid machine learning model with standalone models. J. Mater. Civ. Eng. 2019, 31, 04019255. [Google Scholar] [CrossRef]
  51. Iqbal, M.F.; Liu, Q.; Azim, I.; Zhu, X.; Yang, J.; Javed, M.F.; Rauf, M. Prediction of mechanical properties of green concrete incorporating waste foundry sand based on gene expression programming. J. Hazard. Mater. 2020, 384, 121322. [Google Scholar] [CrossRef]
Figure 1. (a) Silica fume; (b) fly ash.
Figure 2. (a) Hangzhou Hanglong Plaza; (b) Wuhan Center Building.
Figure 3. Heatmap of correlation coefficients.
Figure 4. Joint distribution plots of input and output variables: (a) a/c vs. creep; (b) c vs. creep; (c) cem vs. creep; (d) dt vs. creep; (e) E28 vs. creep; (f) fc28 vs. creep; (g) Jcreep vs. creep; (h) RH_test vs. creep; (i) T vs. creep; (j) t’ vs. creep; (k) V/S vs. creep; (l) w/c vs. creep.
Figure 5. Specific steps of the algorithm improvement.
Figure 6. Optimization iteration curves for the four test functions: (a) f1; (b) f2; (c) f3; (d) f4.
Figure 7. (a–d) 3D plots of the four test functions f1–f4 and the corresponding convergence curves of the objective function values.
Figure 8. Sliding-window scheme within the same group.
Figure 9. Different sliding-window schemes within groups.
Figure 10. Architecture of the ANN.
Figure 11. Random Forest model.
Figure 12. LSTM structure.
Figure 13. Regression scatter plots of actual vs. predicted values: (a) ANN; (b) CPO-ANN; (c) ACCPO-ANN.
Figure 14. Comparison of residuals on the test set: (a) ANN; (b) CPO-ANN; (c) ACCPO-ANN.
Figure 15. Regression scatter plots of actual vs. predicted values: (a) RF; (b) CPO-RF; (c) ACCPO-RF.
Figure 16. Comparison of residuals on the test set: (a) RF; (b) CPO-RF; (c) ACCPO-RF.
Figure 17. Regression scatter plots of actual vs. predicted values: (a) LSTM; (b) CPO-LSTM; (c) ACCPO-LSTM.
Figure 18. Comparison of residuals on the test set: (a) LSTM; (b) CPO-LSTM; (c) ACCPO-LSTM.
Figure 21. Residual analysis of the three models on the training set: (a) ACCPO-LSTM; (b) ACCPO-ANN; (c) ACCPO-RF.
Figure 22. Residual analysis of the three models on the test set: (a) ACCPO-LSTM; (b) ACCPO-ANN; (c) ACCPO-RF.
Figure 23. Radar plots of the SPI comparison: (a) training set; (b) test set.
Table 1. Selected input parameters.

| Identification Number | Notation | Significance |
|---|---|---|
| A1 | dt | Loading duration for creep, days |
| A2 | Jcreep | Historical creep compliance, 10⁻⁶/MPa |
| A3 | w/c | Water-cement ratio (by weight) |
| A4 | a/c | Aggregate-cement ratio (by weight) |
| A5 | c | Cement content, kg/m³ |
| A6 | cem | Cement type: unknown, normal-setting R, rapid-setting RS, or slow-setting SL |
| A7 | fc28 | Average cylinder strength at 28 days, MPa |
| A8 | E28 | Mean Young’s modulus at 28 days, MPa |
| A9 | V/S | Volume-to-surface-area ratio, mm |
| A10 | t’ | Age t’ at creep loading, days |
| A11 | T | Ambient temperature, °C |
| A12 | RH_test | Relative humidity of the environment, % (99 = sealed specimen; 100 = stored in water; 101 = steam; 85 = moist) |
Table 2. Encoding rules for cement types.

| Cement Type | Unknown | Normal-Setting R | Rapid-Setting RS | Slow-Setting SL |
|---|---|---|---|---|
| Encoding Rule | 0 | 1 | 2 | 3 |
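For illustration, Table 2’s mapping can be applied as a simple ordinal encoding. This is a minimal Python sketch under that assumption; the dictionary and function names are hypothetical, not the authors’ code.

```python
# Hypothetical sketch of the cement-type encoding in Table 2.
CEM_ENCODING = {
    "unknown": 0,
    "normal-setting R": 1,
    "rapid-setting RS": 2,
    "slow-setting SL": 3,
}

def encode_cem(cement_type: str) -> int:
    """Map a cement-type label to its integer code (Table 2)."""
    return CEM_ENCODING[cement_type]

assert encode_cem("rapid-setting RS") == 2
```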
Table 3. Statistical analysis of variables.

| Variable | Min | Max | Average |
|---|---|---|---|
| dt | 0 | 4166.67 | 170.05 |
| Jcreep | 16.8 | 270.6 | 55.88 |
| w/c | 0.2 | 0.61 | 0.32 |
| a/c | 1.27 | 7.41 | 3.56 |
| c | 266 | 595 | 439.52 |
| cem | 1 | 3 | 1.66 |
| fc28 | 43 | 136 | 91.87 |
| E28 | 32,500 | 50,800 | 42,959.47 |
| V/S | 12 | 37 | 25.31 |
| t’ | 0.66 | 91 | 14.17 |
| T | 18 | 23 | 19.99 |
| RH_test | 40 | 101 | 83.37 |
Table 4. Four test functions.

| Function | Dimension | Range of Values | Single Peak/Multiple Peaks |
|---|---|---|---|
| $f_1(x) = \sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | single peak |
| $f_2(x) = \max_i \lvert x_i \rvert$ | 30 | [−100, 100] | single peak |
| $f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30 | [−10, 10] | multiple peaks |
| $f_4(x) = \sum_{i=1}^{n} \lvert x_i \rvert + \prod_{i=1}^{n} \lvert x_i \rvert$ | 30 | [−100, 100] | multiple peaks |
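The four benchmark functions in Table 4 follow directly from their definitions. A minimal NumPy sketch (function names are illustrative):

```python
import numpy as np

def f1(x):
    # Sphere function: sum of squared coordinates.
    return np.sum(x ** 2)

def f2(x):
    # Largest absolute coordinate.
    return np.max(np.abs(x))

def f3(x):
    # Sum of squared cumulative partial sums.
    return np.sum(np.cumsum(x) ** 2)

def f4(x):
    # Sum plus product of absolute coordinates.
    ax = np.abs(x)
    return np.sum(ax) + np.prod(ax)

# Evaluate at random points within the ranges stated in Table 4.
rng = np.random.default_rng(0)
x = rng.uniform(-100, 100, size=30)
print(f1(x), f2(x), f4(x), f3(rng.uniform(-10, 10, size=30)))
```

All four functions attain their global minimum of 0 at the origin, so the CPO/ACCPO results in Table 5 can be read directly as distances from the true optimum.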
Table 5. Optimal objective-function values found on the four test functions.

| Function | CPO | ACCPO |
|---|---|---|
| f1 | 1.2268 × 10⁻⁹⁶ | 1.3146 × 10⁻²⁹⁶ |
| f2 | 6.0520 × 10⁻⁶¹ | 8.4366 × 10⁻¹⁶⁹ |
| f3 | 9.6280 × 10⁻⁵³ | 1.1896 × 10⁻¹⁵⁸ |
| f4 | 3.8164 × 10⁻¹⁰⁰ | 2.1828 × 10⁻³²⁰ |
Table 6. Model parameters of ACCPO-ANN.

| Parameter | Setting |
|---|---|
| Popsize | 30 |
| Maxgen | 100 |
| Activation function | tansig |
| Training function | trainlm |
| Epochs | 48 |
| Learning rate | 0.01 |
| Minimum performance gradient | 1 × 10⁻⁶ |
| Maximum validation failures | 6 |
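The activation (tansig) and training function (trainlm) in Table 6 are MATLAB Neural Network Toolbox settings. As a rough, non-authoritative Python analog, a scikit-learn MLPRegressor can mirror most entries; here tanh stands in for tansig, Adam substitutes for Levenberg-Marquardt (which scikit-learn does not provide), and the hidden-layer size is an assumption rather than a value from the paper.

```python
from sklearn.neural_network import MLPRegressor

# Approximate analog of the ACCPO-ANN settings in Table 6 (see caveats above).
ann = MLPRegressor(
    hidden_layer_sizes=(10,),  # assumed; the ACCPO search would tune this
    activation="tanh",         # analog of MATLAB's tansig
    solver="adam",             # stand-in for trainlm (Levenberg-Marquardt)
    learning_rate_init=0.01,   # Learning rate = 0.01
    max_iter=48,               # Epochs = 48
    tol=1e-6,                  # minimum performance gradient = 1e-6
    n_iter_no_change=6,        # analog of maximum validation failures = 6
)
# Typical usage: ann.fit(X_train, y_train); y_pred = ann.predict(X_test)
```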
Table 7. Comparison of results of the ANN models.

| Model | Dataset | R² | RMSE | MAE |
|---|---|---|---|---|
| ANN | training set | 0.7499 | 0.0940 | 0.0643 |
| ANN | test set | 0.7514 | 0.0908 | 0.0625 |
| CPO-ANN | training set | 0.9464 | 0.0441 | 0.0280 |
| CPO-ANN | test set | 0.9289 | 0.0464 | 0.0303 |
| ACCPO-ANN | training set | 0.9594 | 0.0374 | 0.0239 |
| ACCPO-ANN | test set | 0.9530 | 0.0419 | 0.0248 |
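For reference, the three metrics reported in Tables 7, 9, and 11 can be computed as below (a minimal sketch; function names are illustrative):

```python
import numpy as np

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    # Root-mean-square error.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    # Mean absolute error.
    return np.mean(np.abs(y_true - y_pred))
```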
Table 8. Model parameters of ACCPO-RF.

| Parameter | Setting |
|---|---|
| NumTrees | 100 |
| MinLeafSize | 15 |
| NumPredictorsToSample | 6 |
| OOBPrediction | on |
| OOBPredictorImportance | on |
| MinParentSize | 2 |
| MaxNumSplits | 50 |
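The parameter names in Table 8 follow MATLAB’s TreeBagger. A rough scikit-learn analog is sketched below; the mapping is approximate: MinParentSize maps onto min_samples_split, the two OOB switches collapse into oob_score, and MaxNumSplits = 50 is emulated with max_leaf_nodes = 51 (a binary tree with 50 splits has 51 leaves).

```python
from sklearn.ensemble import RandomForestRegressor

# Approximate analog of the ACCPO-RF settings in Table 8 (see caveats above).
rf = RandomForestRegressor(
    n_estimators=100,      # NumTrees = 100
    min_samples_leaf=15,   # MinLeafSize = 15
    max_features=6,        # NumPredictorsToSample = 6
    min_samples_split=2,   # closest analog of MinParentSize = 2
    max_leaf_nodes=51,     # emulates MaxNumSplits = 50
    oob_score=True,        # OOBPrediction / OOBPredictorImportance = on
)
# Typical usage: rf.fit(X_train, y_train); y_pred = rf.predict(X_test)
```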
Table 9. Comparison of results of the RF models.

| Model | Dataset | R² | RMSE | MAE |
|---|---|---|---|---|
| RF | training set | 0.8674 | 0.0642 | 0.0281 |
| RF | test set | 0.6519 | 0.1193 | 0.0623 |
| CPO-RF | training set | 0.9137 | 0.0455 | 0.0154 |
| CPO-RF | test set | 0.8363 | 0.0818 | 0.0303 |
| ACCPO-RF | training set | 0.9396 | 0.0433 | 0.0142 |
| ACCPO-RF | test set | 0.8505 | 0.0832 | 0.0297 |
Table 10. Model parameter settings for ACCPO-LSTM.

| Parameter | Setting |
|---|---|
| numFeatures | 11 |
| numResponses | 1 |
| maxEpochs | 300 |
| miniBatchSize | 128 |
| Search_Agents | 30 |
| Max_iterations | 50 |
| LearnRate | 0.01 |
| HiddenUnits | 223 |
| WindowSize | 7 |
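A minimal PyTorch sketch consistent with the sizes in Table 10 (11 input features, 223 hidden units, one response, window length 7, mini-batches of 128); the class name and everything not listed in the table are illustrative assumptions, not the authors’ implementation.

```python
import torch
import torch.nn as nn

class CreepLSTM(nn.Module):
    """Single-layer LSTM regressor sized per Table 10 (sketch)."""

    def __init__(self, num_features=11, hidden_units=223, num_responses=1):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden_units, batch_first=True)
        self.head = nn.Linear(hidden_units, num_responses)

    def forward(self, x):             # x: (batch, window, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # regress from the last time step

model = CreepLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # LearnRate = 0.01
x = torch.randn(128, 7, 11)  # miniBatchSize = 128, WindowSize = 7
print(model(x).shape)        # torch.Size([128, 1])
```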
Table 11. Comparison of results of the LSTM models.

| Model | Dataset | R² | RMSE | MAE |
|---|---|---|---|---|
| LSTM | training set | 0.9358 | 0.0358 | 0.0201 |
| LSTM | test set | 0.8965 | 0.0436 | 0.0238 |
| CPO-LSTM | training set | 0.9363 | 0.0357 | 0.0197 |
| CPO-LSTM | test set | 0.9207 | 0.0406 | 0.0226 |
| ACCPO-LSTM | training set | 0.9784 | 0.0205 | 0.0096 |
| ACCPO-LSTM | test set | 0.9524 | 0.0317 | 0.0140 |
Table 12. Prediction performance of the nine models on the training set.

| Model | R² | RMSE | MAE | SPI |
|---|---|---|---|---|
| LSTM | 0.9358 | 0.0358 | 0.0201 | 0.413 |
| CPO-LSTM | 0.9363 | 0.0357 | 0.0197 | 0.410 |
| ACCPO-LSTM | 0.9784 | 0.0205 | 0.0096 | 0.333 |
| ANN | 0.7499 | 0.0940 | 0.0643 | 0.667 |
| CPO-ANN | 0.9464 | 0.0441 | 0.0280 | 0.468 |
| ACCPO-ANN | 0.9594 | 0.0374 | 0.0239 | 0.441 |
| RF | 0.8674 | 0.0642 | 0.0281 | 0.495 |
| CPO-RF | 0.9137 | 0.0455 | 0.0154 | 0.399 |
| ACCPO-RF | 0.9396 | 0.0433 | 0.0142 | 0.416 |
Table 13. Prediction performance of the nine models on the test set.

| Model | R² | RMSE | MAE | SPI |
|---|---|---|---|---|
| LSTM | 0.8965 | 0.0436 | 0.0238 | 0.363 |
| CPO-LSTM | 0.9207 | 0.0406 | 0.0226 | 0.365 |
| ACCPO-LSTM | 0.9524 | 0.0317 | 0.0140 | 0.333 |
| ANN | 0.7514 | 0.0908 | 0.0625 | 0.624 |
| CPO-ANN | 0.9289 | 0.0464 | 0.0303 | 0.424 |
| ACCPO-ANN | 0.9530 | 0.0419 | 0.0248 | 0.434 |
| RF | 0.6519 | 0.1193 | 0.0623 | 0.666 |
| CPO-RF | 0.8363 | 0.0818 | 0.0303 | 0.452 |
| ACCPO-RF | 0.8505 | 0.0832 | 0.0297 | 0.474 |