Predicting the Posture of High-Rise Building Machines Based on Multivariate Time Series Neural Network Models

High-rise building machines (HBMs) play a critical role in the successful construction of super-high skyscrapers, providing essential support and ensuring safety. The HBM's climbing system relies on a jacking mechanism consisting of several independent jacking cylinders. A reliable control system is imperative to maintain the smooth posture of the construction steel platform (SP) under the action of the jacking mechanism. Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Temporal Convolutional Network (TCN) are three multivariate time series (MTS) neural network models that are used in this study to predict the posture of HBMs. The models take pressure and stroke measurements from the jacking cylinders as inputs, and their outputs determine the levelness of the SP and the posture of the HBM at various climbing stages. The development and training of these neural networks are based on historical on-site data, with the predictions subjected to thorough comparative analysis. The proposed LSTM and GRU prediction models perform similarly in predicting HBM posture, with median R² values of 0.903 and 0.871, respectively. However, the median MAE of the GRU prediction model is smaller, at 0.4, indicating stronger robustness. Additionally, sensitivity analysis showed that the change in the levelness of each position of the SP portion of the HBM is highly sensitive to the stroke and pressure of the jacking cylinder, which clarifies which cylinder should be used to adjust the posture of the HBM. The results show that the MTS neural network-based prediction model can be used to adjust the HBM posture and improve working stability by adjusting the jacking cylinder pressure values of the HBM.


Introduction
High-rise building machines (HBMs) are innovative self-elevating steel-structure building machines designed for constructing super high-rise buildings [1][2][3][4]. They offer several advantages: high automation, enhanced safety, simplified construction organization, assured structural quality, and standardized processes [5][6][7]. These features contribute to accelerated construction progress. Figure 1 illustrates the three-dimensional structural details of the HBM [8][9][10]. The levelness of the steel platform (SP) represents the HBM's posture changes [11][12][13]. During the climbing process of the HBM, the telescopic support device retracts from the structurally reinforced concrete shear wall, while the climbing system supports the SP to ascend along the column guide rail. Once the SP reaches the desired position, the telescopic support device is extended into the reserved hole of the structurally reinforced concrete shear wall. However, due to uneven stacking loads on the SP, the levelness of the SP may fluctuate. If excessive levelness deviation is not promptly corrected, the HBM may undergo severe shape distortion, leading to collisions with obstacles and halting the climbing [14]. In the early stages of HBM usage, workers employed the hanging-line-pendant method and plumbing instruments to measure the HBM's verticality.
They used jacks to lift parts with low levelness, applying corrective forces to gradually restore the SP's levelness while continuing to climb [15,16]. Subsequently, climbing control operators monitored the HBM's posture information directly via signal and image displays on the center console. Although this approach improved efficiency and reduced measurement errors, it incurred significant costs. The detection equipment enhanced production efficiency, minimized operational errors, ensured detection repeatability, and generated extensive monitoring data [17][18][19][20]. However, these measures still failed to prevent levelness deviations during the climbing process, to address the complexity of repairing such deviations, or to mitigate subjective errors in worker operations. Zuo et al. studied the determination of concrete strength during the construction of HBMs by proposing a remote real-time monitoring system [10]. Pan et al. collected the vibration frequency of HBMs and utilized a machine learning method to categorize and assess the four primary working states of this machinery [9]. They also proposed a method for monitoring and issuing warnings during the climbing process of an HBM [8]. However, these methods are primarily focused on monitoring and providing early warnings for the safe operation of HBMs; in-depth research on predicting the future working state of this machinery and dealing with potential dangers is still needed. We have observed that a change in the posture of an HBM is a multivariate time series (MTS)-related process. Neural networks (NNs) are a valuable tool for capturing the intricate patterns and dependencies in time series data [21][22][23][24][25][26]. NNs possess linear and nonlinear fitting capabilities, making them adept at handling such data, and NN technology has shown promising results in prediction tasks [27][28][29][30]. However, conventional NNs perform less well on tasks involving MTS information and sequences with strong temporal correlations. Recurrent neural networks (RNNs) [31][32][33][34][35] are a class of time series neural networks whose memory cells and feedback connections ease the flow of information within the model and handle strong sequential dependencies well. Using a recursive structure, RNNs retain contextual information from past observations and apply it to current or future predictions. However, RNNs encounter challenges when dealing with long time series, leading to gradient vanishing or exploding issues; consequently, traditional RNNs excel only at short-term dependencies. Advanced architectures such as the Long Short-Term Memory (LSTM) [36][37][38] and the Gated Recurrent Unit (GRU) [39][40][41][42] have been developed to address this. The LSTM employs memory cells and gating mechanisms to filter out noise and selectively capture long-term dependencies more efficiently. The GRU, in turn, simplifies the LSTM structure by using only hidden states to pass information, which reduces parameters and computational complexity while maintaining comparable performance in some situations. Recurrent architectures are not the only networks used to process time series data: Temporal Convolutional Networks (TCNs) [43][44][45][46] adapt the conventional use of two-dimensional convolutional kernels for image processing by employing one-dimensional convolutional kernels to extract local patterns and features. By stacking multiple convolutional layers with pooling operations, TCNs attain a receptive field large enough to find patterns and regularities across a range of time scales.
Previous studies have demonstrated the remarkable capabilities of neural networks in forecasting time series data. This study is organized as follows: Section 2 provides a detailed explanation of the characterization of the HBM through sensor-based monitoring during its climb, as well as the preprocessing techniques employed to handle these data effectively. Section 3 presents an in-depth exploration of three multivariate time series prediction models, namely the LSTM, GRU, and TCN models, together with the technical approach adopted to predict the HBM's posture in this study. Section 4 compares the prediction results of these three models with the actual values and analyzes the potential sources of the errors generated by the models. Finally, Sections 5 and 6 present the discussion and conclusions drawn from the research. It is shown that the proposed intelligent-prediction system outperforms conventional methods of operating HBMs and reduces the lag in correcting the HBM's posture.

Data Sources
In this work, we observed and analyzed the HBM climbing process at the West Tower of the Shenzhen Xinghe Yabao Building project (356 m) in China. With an average climbing frequency of once every five days, we used a dataset comprising 30 sets (days) of monitoring data. Each dataset, sampled at a rate of one sample per second, encompassed 168,048 monitoring data points. These data points comprised 68 sample characterizations, forming the foundation of our investigation. As shown in Figure 2, our monitoring focused on observing and recording the jacking stroke and pressure of the 26 climbing jacking cylinders throughout the climbing process. We also monitored the levelness of 16 crucial monitoring points on the steel platform beam, as shown in Figure 3.

The Pre-Processing of Monitoring Data
Figure 4 depicts the once-climbing process of the HBM, presenting the levelness of the SP, the jacking cylinder stroke, and the jacking cylinder pressure over time. This intricate climbing process can be divided into five distinct phases: the pre-climb stage, the preparation stage, the stable climbing stage, the closing of the climbing stage, and the climbing-into-position stage. Remarkably, the pre-climbing and climbing-into-position stages exhibited minimal changes in the posture of the HBM. On the other hand, the preparation stage and the closing of the climbing stage are influenced by equipment-operation errors arising from installing the jacking cylinders, such as support jacking cylinder position errors and jacking cylinder tilt. As the climb progresses into the stable climbing stage, the levelness deviation of the SP should be gradually eliminated. In addition, the initial levelness deviation observed in the stable climbing stage is not attributed to the climbing process itself. Consequently, our focus narrows to studying and analyzing the data solely from the stable climbing stage, where the SP's levelness is relatively stable and reliable. To mitigate the occurrence of data jumps during multiple stable climbs, we initialized the monitoring data for each levelness of the SP at the commencement of every stable climb, ensuring the accuracy and consistency of our analysis.

Methodology
Selecting an appropriate machine learning algorithm is crucial for extracting valuable insights from the sensor-monitoring data of the HBM-climbing process. This algorithm should uncover hidden patterns in the data and establish a mapping relationship between the HBM climbing parameters and postures. To accurately predict postures in HBMs, the chosen model must effectively handle MTS features and automatically capture the nonlinear relationships present in the data. This study employs three MTS neural networks (LSTM, GRU, and TCN) to predict the postures during HBM climbing.

LSTM Model
The LSTM model, introduced by Hochreiter and Schmidhuber (1997) [47], addresses the limitations of traditional RNNs by incorporating explicit memory-management mechanisms. By explicitly adding and subtracting information from the cell state, the LSTM model ensures that, in the absence of new input, each state cell remains constant over time (Equation (1)):

s_t = s_{t−1} (1)

Moreover, gated cells and carefully designed memory cells enable the model to preserve its long-term memory while maintaining the relevance of the most recent state. This prevents the information distortion, vanishing, and explosion that can occur in other models such as Neural Turing Machines. A typical LSTM cell, as illustrated in Figure 5a, consists of three key gates: the forget gate, the input gate, and the output gate. These gates manage the memory of the network by modulating the activation of the weighted-sum function W(*). As depicted in Equation (2) [47], the forget gate f_t decides which information should be retained or discarded. It applies a sigmoid function (σ(x) = 1/(1 + e^(−x))) to the previous cell state s_{t−1} and the current input information x_t, rescaling each dimension of the data to the range [0, 1]; the sigmoid output thus determines the relevance of the current information and which information will contribute to the computation of the cell state s_t. The input gate i_t determines which information should be stored in the cell state, likewise computing its activation with σ(x) from the previous cell state s_{t−1} and the current input x_t. The candidate cell state s̃_t is computed using the hyperbolic tangent function (ϕ(x) = (e^x − e^(−x))/(e^x + e^(−x))), which yields values ranging from −1 to +1. The outputs of these computations (s_{t−1} and s̃_t) are combined to update the current internal cell state s_t. Lastly, the output gate o_t determines the final output lstm_out and the next hidden state h_t: o_t multiplies the previous hidden state h_{t−1} and the current input information x_t with the sigmoid activation output, determining the information that the hidden layer h_t will carry.
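To make the gating scheme concrete, the following is a minimal NumPy sketch of a single LSTM cell step. This is an illustration only, not the implementation used in this study; the stacked weight matrix W and bias b (mapping the concatenated [h_{t−1}; x_t] to the four gate pre-activations) are assumptions of this sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, s_prev, W, b):
    """One LSTM step. W maps [h_prev; x_t] to the four gate pre-activations."""
    z = W @ np.concatenate([h_prev, x_t]) + b
    H = h_prev.size
    f_t = sigmoid(z[0*H:1*H])           # forget gate: what to discard from s_prev
    i_t = sigmoid(z[1*H:2*H])           # input gate: what to write to the cell state
    o_t = sigmoid(z[2*H:3*H])           # output gate: what the hidden state exposes
    s_cand = np.tanh(z[3*H:4*H])        # candidate cell state in (-1, 1)
    s_t = f_t * s_prev + i_t * s_cand   # explicit add/subtract memory update
    h_t = o_t * np.tanh(s_t)            # hidden state / cell output
    return h_t, s_t
```

The line `s_t = f_t * s_prev + i_t * s_cand` is where the "explicit adding and subtracting of information" described above takes place: with f_t ≈ 1 and i_t ≈ 0, the cell state is carried forward unchanged, recovering Equation (1).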

GRU Model
The GRU model (Cho et al., 2014 [39]) is a simplified variant of the RNN architecture. Unlike the LSTM model, the GRU model consists of only two gates: the reset gate and the update gate. Figure 5b illustrates the structure of the GRU model. As shown in Equation (3) [39], in contrast to the LSTM, the GRU eliminates the need for a separate hidden state h. Instead, it directly replaces the input gate i_t with (1 − z_t), where z_t represents the update gate. The reset gate r_t determines how the input information x_t is combined with the previous cell state s_{t−1}. The update gate z_t controls the retention of the previous cell state s_{t−1}. The cell state s_t is then updated by combining all the computed information.
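For comparison with the LSTM cell, a minimal NumPy sketch of one GRU step is given below. This is illustrative only; biases are omitted, and the weight names Wr, Wz, and Wc are assumptions of the sketch. Note how (1 − z_t) takes over the role of the LSTM input gate.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, s_prev, Wr, Wz, Wc):
    """One GRU step; each W maps [s_prev; x_t] (or its reset-gated form) to a gate."""
    v = np.concatenate([s_prev, x_t])
    r_t = sigmoid(Wr @ v)                                        # reset gate
    z_t = sigmoid(Wz @ v)                                        # update gate
    s_cand = np.tanh(Wc @ np.concatenate([r_t * s_prev, x_t]))   # candidate state
    s_t = z_t * s_prev + (1.0 - z_t) * s_cand                    # (1 - z_t) replaces i_t
    return s_t
```

Because the GRU maintains a single state vector and two gates instead of three, it has fewer parameters per hidden unit than the LSTM, which matches the reduced computational complexity noted above.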
TCN Model

The TCN model (Bai et al., 2018 [43]) is a one-dimensional dilated causal-convolution neural network (CNN) designed for solving time series problems. It incorporates the architecture of CNNs and introduces the concept of dilated causal convolutions. Figure 6 illustrates the main components of the TCN model. The TCN model utilizes a one-dimensional dilated causal convolution with a filter size of 3 and a residual network structure. The combination of causal convolution and dilated convolutions allows the convolutional layer to expand its receptive field while strictly adhering to temporal constraints. This larger receptive field enables the model to learn and extract historical information from multivariate time series data. The dilated convolution operation H(s) is applied at sequence position s of a one-dimensional time series input X = (x_1, x_2, ..., x_{t−1}), where f : {0, 1, ..., k − 1} is the filter of size k. The operation is defined as [43]

H(s) = (X ∗_d f)(s) = Σ_{i=0}^{k−1} f(i) · x_{s−d·i},

where d is the dilation factor. Additionally, the TCN model incorporates a residual network structure, which helps address issues like gradient vanishing or explosion and model degradation during deep neural network training. Typically, the TCN model consists of two layers of residual modules. Each module comprises dilated causal convolution, weight normalization, the ReLU activation function (f(x) = max(0, x), where x is the input), and dropout. The previous layer's output serves as the input to the dilated causal convolution of the next layer, ensuring that both the inputs x_i and outputs x_{i+1} of the module have the same dimensions. An additional 1 × 1 convolution is applied to restore the original number of channels.
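As a concrete illustration of the dilated causal convolution defined above (a sketch, not the authors' implementation), the operation y[s] = Σ_i f(i)·x[s − d·i] can be written directly, treating the series as zero before its first element so causality is preserved:

```python
import numpy as np

def dilated_causal_conv(x, f, d):
    """Causal convolution of series x with filter f (size k) at dilation d:
    y[s] = sum_i f[i] * x[s - d*i], with x treated as zero before index 0."""
    k = len(f)
    y = np.zeros_like(x, dtype=float)
    for s in range(len(x)):
        for i in range(k):
            j = s - d * i
            if j >= 0:          # only past (and current) samples contribute
                y[s] += f[i] * x[j]
    return y
```

Stacking such layers with growing dilations (e.g. d = 1, 2, 4, ...) is what gives the TCN its exponentially growing receptive field: with filter size k, the receptive field after layers with dilations 1, 2, and 4 is 1 + (k − 1)(1 + 2 + 4).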
The output of a residual module can be expressed as

o = Activation(x + F(x)),

where F(*) refers to the series of transformations within the residual module. The data obtained from different sensors often have varying size ranges, resulting in sample data in the dataset having different scales. We therefore perform standardized preprocessing on the raw data to prevent the model from being biased towards certain features due to magnitude differences during training. In this experiment, we apply min-max normalization to the sensor data samples, resulting in a value range of [−1, 1]. After normalization, the scaled values of the sensor data samples X_norm or Y_norm (where X represents the active input dataset and Y represents the passive output dataset) can be expressed as

X_norm = 2(X − X_min)/(X_max − X_min) − 1,

where X_max denotes the maximum value of the sample dataset and X_min denotes the minimum value of the sample dataset.
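The min-max normalization step above can be sketched as follows (illustrative; applying the scaling per sensor channel, i.e. per column, is an assumption of this sketch):

```python
import numpy as np

def minmax_scale(X):
    """Rescale each sensor channel (column) of X to the range [-1, 1]."""
    X_min = X.min(axis=0)
    X_max = X.max(axis=0)
    return 2.0 * (X - X_min) / (X_max - X_min) - 1.0
```

In practice the minima and maxima would be computed on the training set only and reused for the validation and test sets, so that no information leaks across the split.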
The dataset is partitioned into a training set (80%), a validation set (10%), and a test set (10%). The training set is utilized to train the neural network model. The validation set is used to assess the performance of the model during training, serving as a means to prevent overfitting and underfitting. Finally, the test set is used to evaluate the performance of the optimal model.
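A sketch of the 80/10/10 partition is shown below. The paper does not state whether the split is chronological or shuffled; a time-ordered split is the natural choice for multivariate time series data, so that is what is assumed here.

```python
def chronological_split(n, train=0.8, val=0.1):
    """Return index ranges for a time-ordered train/val/test split of n samples."""
    n_train = int(n * train)
    n_val = int(n * val)
    return (range(0, n_train),                    # earliest 80%: training
            range(n_train, n_train + n_val),      # next 10%: validation
            range(n_train + n_val, n))            # latest 10%: test
```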

The Evaluation of Predictive Performance
In this study, the mean absolute error (MAE) and the coefficient of determination (R²) are used as evaluation metrics for the prediction model:

MAE = (1/m) Σ_{i=1}^{m} |y_i − ŷ_i|,

R² = 1 − Σ_{i=1}^{m} (y_i − ŷ_i)² / Σ_{i=1}^{m} (y_i − ȳ)²,

where m represents the total number of moments, y_i denotes the actual value at moment i, ŷ_i refers to the predicted value of the model output at moment i, and ȳ denotes the average of all actual values. The MAE directly reflects the magnitude of the prediction errors; a smaller MAE indicates a better prediction effect. R² ranges between 0 and 1, where a value closer to 1 indicates a better fit of the model to the data, suggesting a stronger predictive capability.
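The two metrics can be computed as a straightforward sketch of the MAE and R² definitions above:

```python
import numpy as np

def mae(y, y_hat):
    """Mean absolute error between actual y and predicted y_hat."""
    return np.mean(np.abs(y - y_hat))

def r2(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot
```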

MTS-Prediction Architecture
The flowchart in Figure 7 illustrates the technical route employed in this study to predict the HBM postures. Initially, the sensor-monitoring data of the HBM are preprocessed. Subsequently, an MTS neural network model is trained using the training-set data, while the validation-set data are utilized for model selection and fine-tuning. This experiment employs the root mean square error (RMSE) as the loss function:

RMSE = sqrt( (1/m) Σ_{i=1}^{m} (y_i − ŷ_i)² )

In addition, the Adam optimizer was used to update the network parameters during model training. Figure 8 illustrates this study's three MTS neural network-model architectures. Notably, the LSTM and GRU model architectures are similar. The LSTM, GRU, and TCN functions are utilized in the hidden layers of these architectures. All three functions use the commonly adopted Xavier weight initialization: the input-gate, forget-gate, output-gate, and candidate-memory-cell weights in the LSTM function; the reset-gate, update-gate, and candidate-hidden-state weights in the GRU function; and the one-dimensional convolution kernel, residual block, and fully connected layer weights in the TCN function. All three models in this experiment use 3 hidden layers so that the prediction results are comparable, and all are set to the same learning rate η = 0.01. To prevent overfitting, the Dropout function (1 − p = 0.2) is employed as a regularization technique. Both the LSTM and GRU models apply the Dropout function after the hidden layers, while the TCN model incorporates the Dropout function within each TCN function. Finally, a fully connected layer is employed to achieve a dimensionality reduction in the output.
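The RMSE loss used for training can be sketched as follows (illustrative only; deep-learning frameworks would compute this on tensors with autograd, but the formula is the same):

```python
import numpy as np

def rmse(y, y_hat):
    """Root mean square error: the training loss of Equation above."""
    return np.sqrt(np.mean((y - y_hat) ** 2))
```

Because the square root is monotonic, minimizing RMSE is equivalent to minimizing the mean squared error, but the RMSE is reported in the same units as the levelness measurements, which makes the loss easier to interpret during training.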

Sensitivity Analysis
This study selected 26 jacking cylinder strokes and 26 jacking cylinder pressures as the active inputs X for the multivariate analysis of HBM-climb posture-prediction.The levelness of 16 monitoring points on the SP during the HBM climb was chosen as the passive output Y.
Figure 9a,b illustrates the Pearson correlation coefficients ρ_{X,Y} (see Equation (9)) between the jacking cylinder stroke and the SP levelness, and between the jacking cylinder pressure and the SP levelness, respectively. The Pearson correlation coefficient provides insight into the relationship between the jacking cylinder stroke, the jacking cylinder pressure, and the levelness of each position on the SP. Notably, the displacements of the SP at SZ01, SZ03, SZ04, SZ07, SZ08, SZ09, SZ12, and SZ14 show a more pronounced response to the activity of the jacking cylinder.
ρ_{X,Y} = cov(X, Y) / (σ_X · σ_Y), (9)

where cov(*, *) is the covariance and σ is the standard deviation.
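Equation (9) can be sketched directly for two 1-D series (an illustration; for real use this is equivalent to `np.corrcoef`):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation: cov(X, Y) / (sigma_X * sigma_Y)."""
    xc = x - x.mean()
    yc = y - y.mean()
    return np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2))
```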

Model Comparison and Evaluation
In this study, a posture-prediction model for the HBM is constructed using the LSTM, GRU, and TCN neural networks. These models were trained and validated using separate datasets. After 200 epochs with a batch size of 1024, the models yielded an intelligent-prediction model for the 16-point levelness of the SP in the HBM.
The intelligent-prediction model takes historical time series values of the monitored stroke and pressure of the jacking cylinders as input and produces predicted time series values of steel platform levelness as output. A test dataset was used to validate the final HBM posture-prediction model. The prediction results of the LSTM, GRU, and TCN models were compared with the actual levelness of the 16 monitoring points on the construction platform, as shown in Figure 10. Based on the fluctuating patterns and peaks of the overall levelness, the LSTM and GRU models exhibited a certain level of reliability in predicting the HBM postures, whereas the TCN model produced noticeably larger prediction errors. An in-depth analysis was conducted to evaluate the accuracy and predictive ability of the three intelligent-prediction models. The goodness-of-fit correlation coefficient (R²) and the mean absolute error (MAE) (see Equation (7)) were used as performance indicators, as detailed in Table 1. The TCN model consistently exhibited MAE values generally exceeding 1.0, indicating a substantial deviation between the predicted values and the actual monitoring values. Similarly, the R² values for the TCN model were predominantly below 0.6, indicating a poor fit. For the LSTM model, MAE values surpassing 1.0 were observed at monitoring points SZ13, SZ15, and SZ16, with a significant error of 7.305 at SZ13. The corresponding R² values were less than 0.8 at SZ06, SZ10, SZ13, and SZ16, indicating poor fitting at these monitoring points. The GRU model exhibited an MAE value greater than 1.0 only at monitoring point SZ13, reaching 1.128. Moreover, the R² values at SZ05, SZ06, and SZ16 fell below 0.8, indicating a weaker fit at these monitoring points. Notably, the R² value at SZ16 was a mere 0.268, suggesting an almost complete lack of fitting ability at this point. Observing the prediction results of the LSTM and GRU models, although not all the predicted values exactly matched the real levelness, the levelness of the building steel platform at points SZ01, SZ03, SZ04, SZ07, SZ08, SZ09, SZ12, SZ14, and SZ15 was predicted well, with similar fluctuation patterns and peaks in the predicted and actual monitoring values. This further validates the results of the sensitivity analysis between the multiple explanatory variables and the multiple observed variables in Section 4.1.
Figure 11 also demonstrates that the GRU model consistently had smaller MAE values, indicating higher accuracy in predicting the HBM posture.On the other hand, the LSTM model showcased R 2 values closer to 1 across all instances, indicating a better fit to the HBM posture.
However, monitoring points SZ10, SZ13, and SZ16 exhibited poor predictive performance across all three intelligent-prediction models, indicating a lack of sensitivity to jacking cylinder data and susceptibility to other influencing factors.The unsatisfactory prediction outcomes of the TCN model imply an inability to capture the significance of long-term features, requiring further enhancements such as introducing larger convolution kernels to capture the long-term trends of HBM postures.

Discussion
In this research, we utilized several neural network models, including LSTM, GRU, and TCN, to predict changes in the postures of the HBM based on historical data on the levelness of the SP and the stroke and pressure of the climbing jacking cylinders. We employed a multivariate time series model approach to address the challenge of multiple-output prediction. We conducted a thorough analysis by comparing and evaluating the prediction outcomes of these time series neural network methods. We selected the optimal solution based on the root mean square error, which effectively reduced model randomness and improved the correlation of the levelness at each monitoring point of the HBM. By training the multivariate time series neural network, we obtained a model capable of predicting the postures of the HBM. Leveraging the historical monitored data during the climbing process, our model enabled accurate predictions of subsequent data. Thus, the necessary adjustments could be made to each of the jacking cylinders even when the load distribution of the SP remained unknown.
During the climbing of the HBM, it is imperative to minimize the variation in the SP's postures. Drawing upon the comprehensive analysis presented earlier, it becomes clear that the GRU model exhibits superior accuracy in predicting the postures of the HBM. Consequently, the HBM operator can leverage the intelligent-prediction model with the GRU neural network to anticipate future changes in HBM postures during the climbing process, enabling a continuous adjustment of the HBM's postures in response to the predicted outcomes. Figure 12 illustrates the temporal data obtained from the 16 levelness sensors during the climbing of the HBM. As the HBM climbs, the readings from the 16 sensors consistently register changes in the measured levelness, and the levelness deviations across the 16 positions on the SP change dynamically with the climbing process.
During the intricate process of climbing the HBM, the stroke of the jacking cylinder directly drives the climb and thus strongly affects the HBM's overall posture dynamics, so relying on stroke adjustments alone to rectify the HBM's posture is clearly suboptimal. To further leverage the proposed model to offer guidance on the HBM's climbing state, we dynamically adjust the jacking pressure values across the jacking cylinders at the various positions. This approach enables us to dynamically rectify the HBM's postures, thereby minimizing the levelness deviations among different positions on the SP. Figure 9b showcases the results of a Pearson correlation analysis conducted between the levelness of the SP and the pressure exerted by the jacking cylinders. Remarkably, the fluctuations observed in the levelness at each position on the SP always correlate most strongly with the pressure values of one particular jacking cylinder. Consequently, when the levelness at a particular position is suboptimal, we adjust the corresponding jacking cylinder's pressure value, enhancing the levelness, diminishing the deviation in levelness across the SP, and rectifying the HBM's posture.
In summary, when significant deviations in the HBM's postures manifest, we can minimize the levelness deviations on the SP by augmenting the pressure value of the jacking cylinder that correlates positively with the low-levelness position point on the SP. This approach allows us to rectify the HBM's postures and attenuate the levelness deviations. By harnessing the predictive capabilities of the proposed model, we can anticipate forthcoming changes in the HBM's posture. Integrating the model into the HBM's intelligent system empowers the HBM to adjust its climbing parameters autonomously, ensuring smooth and secure operation. Figure 13 visually illustrates the process of adjusting the postures of the HBM during its climb. Throughout the actual climbing process, it is imperative to minimize the levelness deviations among the monitoring positions on the SP. Furthermore, it is crucial to keep the maximum and minimum values of the levelness deviations within a threshold (T_d) to mitigate the risk of the tube structure colliding with the reinforced concrete main structure of the building. Such collisions can lead to instability in the climbing process, posing a potential hazard to the HBM. The proposed model not only accurately predicts the climbing postures of the HBM but also provides specific adjustment recommendations to ensure the safe operation of the HBM and mitigate construction risks.

Conclusions
In this paper, we propose a multivariate time series neural network prediction system to control the HBM and maintain a smooth posture during construction operations. The system utilizes three multivariate time series neural network models: LSTM, GRU, and TCN. The pressure and stroke of the jacking cylinders serve as inputs, while the monitored levelness values of the sensors at 16 different positions on the SP are used as outputs. The main objective is to address the issue of unstable posture in the HBM caused by significant levelness deviations resulting from uneven stacking on the SP during the climbing process. By predicting future posture changes based on the climb data from the historical working stage of the HBM, we aim to proactively control the levelness deviations of the SP within a safe threshold.


Figure 4 .
Figure 4. One set of measured time-range data during one climb: (a) levelness-sensor monitoring data, (b) jacking cylinder-monitoring instruments monitoring data. To provide further insight into the study data, we conducted a brief analysis of the monitoring data collected during the stabilized climbing stage. The jacking cylinder stroke values ranged from −551.4 to 551.79, the jacking cylinder pressure values ranged from 0 to 53.4, and the SP levelness values ranged from −39.12 to 40.48.

Figure 7 .
Figure 7. Flowchart of the posture-prediction model framework for the HBM-climbing process.

Figure 9 .
Figure 9. Sensitivity analysis of sensor-monitoring information during climbing of HBM: (a) Sensitivity between jacking cylinder stroke and SP levelness, (b) sensitivity between jacking cylinder pressures and SP levelness.

Figure 10.
Figure 10. Predicted SP levelness using LSTM, GRU, and TCN models.

Figure 11 .
Figure 11. MAE and R² under LSTM and GRU model prediction.

Figure 12 .
Figure 12. Use of hydrostatic levelness sensors to monitor the time course of the levelness curves at different positions of the steel platform during the climbing process.

Figure 13 .
Figure 13. Schematic diagram of the posture adjustment during the HBM climbing process.

Table 1 .
MAE and R² of prediction using LSTM, GRU, and TCN models.