Post-Processing of High Formwork Monitoring Data Based on the Back Propagation Neural Networks Model and the Autoregressive—Moving-Average Model

Abstract: Many high formwork systems are currently equipped with health monitoring systems, and analysis of the data obtained can determine whether a high formwork poses a hazard. The post-processing of monitoring data has therefore become an issue of widespread concern. In this paper, we discuss the fitting of symmetrical high formwork monitoring data using a combined model built from the autoregressive-moving-average (ARMA) model and back propagation neural networks (BPNN). In the actual project, the symmetry of the high formwork system allows the analysis of local monitoring results to be extended to the whole structure. For the establishment of the ARMA model, accurate judgment of the model order has a significant impact. In this paper, BPNN are used to simulate the ARMA process; the order of the ARMA model is estimated by determining the optimal neural network structure, an approach suitable for both linear and nonlinear sequences. We validated this approach on ARMA model data simulated by Monte Carlo methods and compared it with the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). The length of the sequence, the coefficients, and the order of the ARMA model are considered as factors that influence the judgment effect. Under different conditions, the BPNN always shows an accuracy rate of more than 90%, while the BIC only has a high accuracy rate when the model order is low, and the accuracy of the AIC is below 50%. Finally, the proposed method successfully modeled the stress sequence and obtained the stress change trend. Compared with the AIC and the BIC, the efficiency of processing the time series is increased by about 50% when the order is obtained by BPNN.


Introduction
The safety management of high formwork is one of the important tasks of construction safety management, and the majority of failures occur due to inadequate site supervision and poor design [1]. Undoubtedly, the real-time monitoring of near-miss accidents provides insight into possible accidents and can significantly improve safety performance by allowing appropriate action to be taken before potentially impending accidents occur [2]. In view of this, many high formwork systems around the world have installed health monitoring systems of different sizes, and these have accumulated a large amount of data over long periods of time [3,4]. Therefore, how to process this huge volume of monitoring data accurately and in a timely manner has become key to high formwork condition assessment and performance prediction [5].
Structural damage leads to changes in the physical properties of the structure, and these changes are often reflected in the monitoring sequence [6,7]. In previous evaluations of structural safety performance, most of the data used were time-related [8][9][10][11][12].
The autoregressive-moving-average model is an important method for simulating stationary time series. Before discussing the order judgment of the time series, we need to analyze the structure of the ARMA model. {X_t} is an ARMA(p, q) process if {X_t} is stationary and if, for every t,

X_t − φ_1 X_{t−1} − · · · − φ_p X_{t−p} = Z_t + θ_1 Z_{t−1} + · · · + θ_q Z_{t−q}   (1)

where {Z_t} ∼ WN(0, σ^2) and the polynomials (1 − φ_1 z − · · · − φ_p z^p) and (1 + θ_1 z + · · · + θ_q z^q) have no common factors [17]. The p on the left-hand side of Equation (1) is the order of the autoregressive (AR) process; similarly, the q on the right-hand side is the order of the moving-average (MA) process. When they are equal, the ARMA model has mathematical symmetry. For a mathematical model, it is an important requirement for effectiveness to be able to fully represent the information of the sequence, and this is also true for the ARMA model. In Equation (1), Z_t is a sequence of white noise; its characteristic is that the value of Z_t does not affect the trend of X_t. At the same time, when the parameters of Equation (1) and X_t are known, the value of Z_t can also be determined within a certain range. These characteristics of Z_t provide the basis for determining the order of the ARMA model through a neural network.
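The Monte Carlo generation of such a series, used later in the simulations, can be sketched in Python; the burn-in length and random seed here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def simulate_arma(phi, theta, n, burn_in=200, seed=0):
    """Simulate an ARMA(p, q) series X_t driven by white noise Z_t ~ N(0, 1),
    following Equation (1): phi = [phi_1, ..., phi_p], theta = [theta_1, ..., theta_q]."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    total = n + burn_in
    z = rng.standard_normal(total)          # white noise, mean 0, variance 1
    x = np.zeros(total)
    for t in range(total):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ma = sum(theta[j] * z[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        x[t] = ar + z[t] + ma
    return x[burn_in:]                      # discard transient start-up values

x = simulate_arma([0.5, -0.3], [0.4], n=400)
```

For a causal, invertible choice of coefficients the series settles into a stationary regime after the burn-in, which is why the initial transient is discarded.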

Back Propagation Neural Networks (BPNN)
BPNNs discover intricate structures in large datasets by using the backpropagation algorithm to indicate how a machine should change its internal parameters [30]. The ARMA model in this paper is a linear time-invariant system, which can be effectively simulated using a BPNN. A BPNN is a multi-layer feedforward neural network trained according to the error back propagation algorithm. The complete neural network structure is composed of a large number of neurons. A typical BPNN consists of three layers: the input layer, the hidden layer, and the output layer. Generally, we use normalized data as the input layer. Unlike the input layer, the neurons in the hidden layer and the output layer have computational functions and similar definitions. For neurons in the hidden layer and the output layer, each iteration consists of two parts: forward propagation and back propagation. The forward propagation of a single neuron consists of two steps: first, calculate {z} through the weights and bias, and then calculate {a} through an activation function g(x), where {a} is the input to the next layer of neurons or to the output layer. According to the result of forward propagation, the weights and biases are updated through back propagation, which in the BPNN is computed by the gradient descent method. After many iterations, the neural network can fit the data with a small error. It is worth noting that the activation functions of the hidden layer and the output layer can be different. Figure 1 shows the single-neuron calculation and the BPNN structure.
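The forward and back propagation steps just described can be given a minimal numerical sketch (sigmoid hidden layer, linear output, gradient descent on the MSE loss); the layer sizes and learning rate below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BPNN:
    """Minimal one-hidden-layer BPNN sketch: sigmoid hidden layer,
    linear output, trained by gradient descent on the MSE loss."""

    def __init__(self, n_in, n_hidden, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((n_hidden, n_in)) * 0.1
        self.b1 = np.zeros((n_hidden, 1))
        self.W2 = rng.standard_normal((1, n_hidden)) * 0.1
        self.b2 = np.zeros((1, 1))
        self.lr = lr

    def forward(self, X):                      # X: (n_in, m) column vectors
        self.Z1 = self.W1 @ X + self.b1        # pre-activation of hidden layer
        self.A1 = sigmoid(self.Z1)             # a[1] = g1(Z[1])
        self.Z2 = self.W2 @ self.A1 + self.b2  # linear output layer
        return self.Z2                         # y_hat

    def backward(self, X, y):                  # y: (1, m) targets
        m = X.shape[1]
        dZ2 = (self.forward(X) - y) / m        # d(MSE)/d(Z2), up to a constant
        dW2 = dZ2 @ self.A1.T
        dZ1 = (self.W2.T @ dZ2) * self.A1 * (1 - self.A1)   # sigmoid derivative
        dW1 = dZ1 @ X.T
        self.W2 -= self.lr * dW2; self.b2 -= self.lr * dZ2.sum(1, keepdims=True)
        self.W1 -= self.lr * dW1; self.b1 -= self.lr * dZ1.sum(1, keepdims=True)
```

Repeated calls to `backward` drive the MSE down, mirroring the iterative fitting described above.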

When using the BPNN to simulate the ARMA model, we expect this method to be able to estimate the parameters of the ARMA model and to yield a method for determining the order of the model. Hossain et al., 2020, used artificial neural networks (ANN) to determine the order of the ARMA model, but their method did not consider the influence of bias in the formula derivation [20]. Therefore, we re-derive the relevant formulas.
Equation (1) can be rewritten in the following form:

X_t = φ_1 X_{t−1} + · · · + φ_p X_{t−p} + Z_t + θ_1 Z_{t−1} + · · · + θ_q Z_{t−q}   (2)

where X_t is the time series, Z_t is the noise sequence, and φ_i and θ_j are the coefficients of the ARMA model. Next, we compare the calculation method of the BPNN with Equation (2). Figure 1 shows the processing of the input data by a single neuron. From this, the calculation performed by the hidden-layer neurons on the input data is

Z^[1] = W^[1] X + b^[1]   (3)

a^[1] = g_1(Z^[1])   (4)

where X is a column vector composed of the input data, W^[1] is the weight matrix whose rows are the vectors W_i^T, b^[1] is a column vector composed of the biases, g_1 is the activation function used, and a^[1] is a column vector composed of the activation values. The output of the proposed BPNN can be written as

Z^[2] = W^[2] a^[1] + b^[2]   (5)

ŷ = a^[2] = g_2(Z^[2])   (6)

where the meaning of each symbol is similar to before. In this process, if we do not consider the biases, omit the activation function of the hidden layer, and set the activation function of the output layer to the linear activation function, we obtain

ŷ = W^[2] W^[1] X   (7)

Equation (7) is consistent with the conclusion derived by Hossain et al., 2020 [20]. Although the method described in Equation (7) can easily yield coefficient estimates of the ARMA model, it does not consider the influence of the bias and the nonlinear activation function on the neural network. The existence of bias is of great significance to the operation of neural networks: it can improve the accuracy of neural network classification and reduce noise in the evaluation process [31]. When we add bias, although the effect of neural network iteration is improved, the coefficient estimates can no longer be obtained as easily as in Equation (7). Because of the influence of the bias column vector, the ARMA coefficients can no longer be read off simply from the weight matrices of the BPNN.
Our other improvement to Equation (7) is the addition of a nonlinear activation function. This is not only because a nonlinear activation function makes better use of the computational capacity of the neural network, but also because the coefficient estimation of the ARMA model is itself a nonlinear process. In the symmetric formwork system, the BPNN can overcome the shortcoming of insufficient randomness in ARMA order estimation. It is worth noting that the coefficients of the model can be better estimated by the least-squares method once we can accurately determine the order of the model [32,33].
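The collapse in Equation (7) is easy to verify numerically: with zero bias and identity activations, the two layers reduce to the single matrix product W^[2]W^[1], which is why coefficient estimates can be read off directly only in the bias-free linear case. A sketch with arbitrary illustrative layer sizes:

```python
import numpy as np

# Sketch of Equation (7): with zero bias and linear (identity) activations
# in both layers, the two-layer network collapses to one linear map, so
# W2 @ W1 plays the role of the ARMA coefficient row vector.
rng = np.random.default_rng(0)
n_in, n_hidden = 5, 8                  # illustrative sizes, not from the paper
W1 = rng.standard_normal((n_hidden, n_in))
W2 = rng.standard_normal((1, n_hidden))

X = rng.standard_normal((n_in, 1))     # one input column vector
y_two_layer = W2 @ (W1 @ X)            # forward pass, no bias, g = identity
y_collapsed = (W2 @ W1) @ X            # equivalent single linear layer

assert np.allclose(y_two_layer, y_collapsed)
```

Adding a bias vector or a nonlinear g breaks this factorization, which is exactly why the coefficient read-off of Equation (7) no longer applies once bias is included.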

Simulation Settings
This section introduces the simulation methods separately from the simulation of data and the design of the artificial neural network. The paper mainly uses MATLAB 2017a to establish an analytical model, and the relevant calculations are also completed in the software.
In the real world, most systems can be modeled by ARMA(5, 5), so in this paper we set the maximum AR and MA orders to 5 (AR(1-5) and MA(0-5), where the numbers represent the range of values for the corresponding order) [34]. All the simulated datasets in this paper are generated by Monte Carlo simulation. We constructed the time series (X_t) from a randomly simulated noise series (Z_t) and the coefficients of the ARMA model. The expectation of the noise sequence is 0 and the variance is 1. The coefficients of the model were generated by a random method and met the conditions of causality and invertibility. The initial value of the time series was determined by the noise series.
For the neural network, we used Equation (8) to calculate the number of neurons in the hidden layer, and its value was a dynamic integer. The maximum number of training epochs was 100, the performance goal was set to 10^-7, and training was terminated when the MSE did not drop for 10 consecutive iterations. The neural network parameters were updated using the Adam optimizer, with the learning rate set to 0.01, chosen empirically [35]. Although the ARMA model is a linear time-invariant system, the rectified linear unit (ReLU) activation function for the hidden neurons cannot handle occasional discrete data well. Thus, we used a nonlinear activation function (sigmoid) instead [36][37][38].
Based on the above conditions, we built 30 neural networks (combination of AR (1-5) and MA (0-5)) to analyze time series. For different neural networks, we converted the time series into corresponding datasets, 80% of the processed data was used as the training set and the rest as the validation set. We used the MSE of the validation set as the basis for judging the order. Figures 2 and 3 show a system identification block diagram.
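The conversion of a simulated series into a per-network dataset with the 80/20 split can be sketched as follows; the exact input layout (p lagged values of X and q lagged values of Z) is our assumption based on the derivation in Section 2, and Z is available here only because the data are simulated:

```python
import numpy as np

def make_dataset(x, z, p, q, train_frac=0.8):
    """Convert a simulated series into supervised pairs for the ARMA(p, q)
    network: each input row holds the p lagged values of X and the q lagged
    values of Z, and the target is X_t."""
    m = max(p, q)
    rows, targets = [], []
    for t in range(m, len(x)):
        lagged_x = x[t - p:t][::-1]          # X_{t-1}, ..., X_{t-p}
        lagged_z = z[t - q:t][::-1]          # Z_{t-1}, ..., Z_{t-q}
        rows.append(np.concatenate([lagged_x, lagged_z]))
        targets.append(x[t])
    X = np.asarray(rows); y = np.asarray(targets)
    split = int(train_frac * len(X))         # 80% training, 20% validation
    return (X[:split], y[:split]), (X[split:], y[split:])
```

One such dataset is built for each of the 30 candidate (p, q) combinations, so that each candidate network sees inputs of the matching width.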

Pre-Simulation
According to the settings in Section 3.1, we verified the established model. We tested whether the neural network is suitable for simulating the ARMA model, in order to verify the correctness of the derivation in Section 3.2. On the other hand, the proposed method is used for order estimation, and special cases are discussed at the same time. Figure 4 shows the MSE loss values for a time series conforming to the ARMA(3, 2) model. Figure 4a,b represent the time series passed through the neural networks simulating ARMA(3, 2) and ARMA(1, 2), respectively. We use this example to illustrate the judgment theory of the neural networks. As shown in Figure 4a, when the time series passes through the correct model, the MSE shows a continuous downward trend and, after a sufficient number of epochs, reaches the target value (10^-7). In Figure 4b, when the time series passes through the mismatched model, the MSE reaches its optimal value (10^-3) at 33 epochs and does not drop again for ten consecutive iterations. The reason for this phenomenon is that the correct neural network model can approach an analytical solution after sufficient iterations, while for the wrong neural network models the value of the MSE will often not drop after reaching the critical point; only the correct model can obtain a satisfactory MSE. This is also the basis for judging the order of the ARMA model through the MSE.
Next, the problem we needed to solve was how to determine the order of the ARMA model through the BPNN. From Figures 1 and 4, we can see that when the best input layer of the BPNN is determined, the p and q of that input layer correspond to the best order of the ARMA model. Therefore, we expected the neural network's MSE loss to be smallest for the correct model order. Finally, the time series obtained an MSE through each of the 30 possible neural network structures. Figure 5 shows the MSE calculation results for two different time series. It can be seen from Figure 5a that, as the order increases, the MSE presents an obvious downward trend; the red circle marks the true model orders. After the critical point is reached, the MSE no longer changes significantly with increasing order, because high-order neural network models can reproduce low-order behavior. In the calculation, we found that the MSE can exhibit the special case shown in Figure 5b, which simulates an ARMA(2, 1) calculation. The three points marked by red circles in Figure 5 may all be the value of the order, and the MSE is relatively small at these three points. Therefore, in addition to comparing the MSE, we also introduce the gradient to determine the order of the model when there are multiple critical points. When the descending gradient at a critical point is the largest and the MSE is small, that point is considered to be the best value among the critical points. From Figure 5, we find that the asymmetric ARMA structure is more prone to judgment difficulties because the descending gradient of its MSE is gentler near the correct values.
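A selection rule of this kind might be sketched as follows; the tolerance used to group near-optimal orders is our assumption, and the gradient tiebreak described above is simplified here to choosing the lowest-order near-optimal model (the first critical point):

```python
import numpy as np

def select_order(mse_grid, tol=1.5):
    """Sketch of order selection: mse_grid[(p, q)] holds the validation MSE
    of the network built for ARMA(p, q). Candidate orders are those within
    a factor `tol` of the minimum MSE; among them we prefer the lowest
    (p, q), mirroring the observation that higher-order networks merely
    reproduce the low-order fit."""
    best = min(mse_grid.values())
    candidates = [pq for pq, m in sorted(mse_grid.items()) if m <= tol * best]
    return candidates[0]          # smallest (p, q) among near-optimal models

# Illustrative MSE grid with three near-optimal critical points, as in Figure 5b
mse = {(1, 0): 1e-2, (2, 1): 1.1e-7, (3, 1): 1.0e-7, (4, 2): 1.05e-7}
assert select_order(mse) == (2, 1)
```

In the illustrative grid, three orders reach essentially the same MSE, and the rule returns the lowest of them, ARMA(2, 1), matching the intent of the gradient criterion.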


Different Coefficients
In the process of simulating data conforming to the ARMA model, the coefficients are restricted by many conditions. According to the definition of the ARMA model, the coefficients need to meet the requirements of causality and invertibility. In [17], the judgments of causality and invertibility are given by Equations (9) and (10).
Causality is equivalent to the condition

φ(z) = 1 − φ_1 z − · · · − φ_p z^p ≠ 0 for all |z| ≤ 1   (9)

Invertibility is equivalent to the condition

θ(z) = 1 + θ_1 z + · · · + θ_q z^q ≠ 0 for all |z| ≤ 1   (10)

where φ(·) and θ(·) are the pth- and qth-degree polynomials. The complex variable z is used here since the zeros of a polynomial of degree p > 1 or q > 1 may be either real or complex. The set of complex z such that |z| = 1 is referred to as the unit circle. From Equations (9) and (10), the conditions of causality and invertibility are satisfied when the roots of φ(z) = 0 and θ(z) = 0 lie outside the unit circle. In the calculation, we found that, in addition to causality and invertibility, it is also necessary to consider whether the selected coefficients can effectively reflect the characteristics of the model. In order to improve the sensitivity of the model, we set the minimum absolute value of the coefficients to 0.1. This avoids coefficients so small that the corresponding polynomial terms become difficult to identify. In this section, we simulated three ARMA models, each of which used 25 sets of different coefficients to simulate time series. Since the symmetrical ARMA model was less prone to judgment difficulties, we used asymmetrical ARMA models here: ARMA(1, 2), ARMA(2, 3), and ARMA(4, 2). Table 1 lists the model coefficients obtained through the random method, and Figure 6 shows the verification of causality and invertibility.
In Figure 6, the roots of φ(z) = 0 and θ(z) = 0 for each model are shown in a different color. All roots lie outside the unit circle, which shows that the coefficients meet the requirements of causality and invertibility. For the coefficients in Table 1, we simulated 30 different realizations of the system's response with a time series length of 400. The goal was to determine the correct ARMA model order using the BPNN and compare its results with the AIC and the BIC. Another purpose was to study whether the performance of the different order-determination methods was consistent under random parameters. Model identification using the AIC and the BIC was performed using functions in MATLAB R2017a. Figure 7 is a stacked area diagram of the order estimation results. It shows that the order estimation accuracy of the BPNN is above 90%, that of the AIC is below 10%, and the results of the BIC are unstable. Comparing Figure 7a,b, the judgment results of the BPNN and the AIC are relatively stable, while the judgment efficiency of the BIC is significantly reduced and affected by the change of coefficients. For Figure 7c, the judgment effect of the BIC is basically the same as that of the AIC, and there is no obvious change in the BPNN. For the same model, the accuracy of the BPNN is the highest and that of the AIC is the lowest; the BIC is somewhere in between, but it is more sensitive to changes in the model coefficients. For different ARMA models, the order estimation results of the BPNN under different coefficients are relatively stable and accurate. In addition, the accuracy of the BIC is significantly reduced when the ARMA order is higher; for example, in ARMA(4, 2), its judgment effect is almost the same as that of the AIC. The influence of the change of order on the different judgment criteria is analyzed in detail in Section 4.3. On the whole, the judgment result of the BPNN has obvious advantages.
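The root-based screening of Equations (9) and (10) applied to randomly generated coefficients can be sketched as:

```python
import numpy as np

def is_causal_invertible(phi, theta):
    """Check Equations (9) and (10): the ARMA model is causal and invertible
    when all roots of phi(z) = 1 - phi_1 z - ... - phi_p z^p and
    theta(z) = 1 + theta_1 z + ... + theta_q z^q lie outside the unit circle."""
    # np.roots expects coefficients from the highest power down to the constant
    phi_poly = np.concatenate(([-c for c in phi[::-1]], [1.0]))
    theta_poly = np.concatenate((list(theta[::-1]), [1.0]))
    roots = np.concatenate((np.roots(phi_poly), np.roots(theta_poly)))
    return bool(np.all(np.abs(roots) > 1.0))

# AR(1) with phi_1 = 0.5 has its root at z = 2, outside the unit circle
assert is_causal_invertible([0.5], [0.4]) is True
# phi_1 = 2 puts the root at z = 0.5, inside the unit circle: rejected
assert is_causal_invertible([2.0], []) is False
```

In a random coefficient search, candidate sets failing this check (or containing coefficients with absolute value below 0.1) would simply be redrawn.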

Different Length
In Section 4.1, we discussed the effect of each judgment criterion under different coefficients. Next, we study the effect of the length of the time series on the accuracy. We selected a set of representative coefficients from the three models in Section 4.1, where ARMA (1, 2) selected the 13th group, ARMA (2, 3) selected the 21st group, and ARMA (4, 2) selected the 9th group. Then, we simulated 100 different responses with time series lengths of 200, 400, 600, 800, and 1000. The expressions of the three models are shown in Equation (11).
The model order estimation results are presented in Figure 8. From Figure 8, we can find that the BPNN can provide accurate order estimates for each length of the signal in models of different orders and the accuracy rate is above 90%. The accuracy of AIC and BIC has a certain upward trend with the increase of sequence length, while the estimated result of BIC can reach 90% under certain models. In Figure 8a,b, the judgment effect of the BIC and the BPNN is basically the same in the case of a long sequence. From Figure 8c, we can clearly find that the judgment result of the BPNN is much more accurate than the AIC and the BIC under the high-order model. In Figure 8c, the accuracy of AIC and BIC is below 40%. Similar to Section 4.2, the BIC performs better under low-order models than high-order models. In addition, we find that the accuracy of the AIC is low, but the effect is relatively stable under different models.

Different Order
In this section, we fixed the maximum AR and MA orders at 5 (AR(1-5) and MA(0-5)). We performed 100 Monte Carlo simulations on these 30 models and used the AIC, the BIC, and the BPNN to estimate the order. In Section 4.1, we found that the coefficients affect the estimate of the order; to make the results more representative, we randomized the coefficients of each model subject to the conditions of causality and invertibility. From Section 4.2, we knew that the length of the time series affects the estimation of the order, so we set the length of the series to 1000 for better performance of all the methods. Randomizing the coefficients allows the effect of BPNN order estimation to be studied across different models, and the longer time series raises the accuracy of the AIC and the BIC, allowing a fairer comparison with the estimation results of the BPNN. Figure 9 shows the order estimation results for the different models. In Figure 9, the accuracy of the BPNN is generally above 90%. The accuracy of the BIC is above 70% when the order is small, but it does not work well for higher-order models. Although the accuracy of the AIC is below 30%, its results are relatively stable. Under these conditions, the order estimation of the BPNN performs prominently under random coefficients and different model orders. Another point worth noting is that when the order of the model is higher, only the BPNN obtains satisfactory estimation results. From this example, it can be found that the BPNN still has an excellent estimation effect even when the ARMA model has mathematical symmetry. Hossain et al., 2020, simulated physiological systems through ARMA and BPNN. As with their findings, the BPNN always shows an accuracy rate of more than 90% under different conditions. However, we found that the AIC and BIC accuracies were low in our study, which may be due to a different coefficient selection; we did not apply much human intervention in the choice of coefficients.


High Formwork Safety Monitoring System
The high formwork safety monitoring system is a solution for real-time automatic safety monitoring of the major safety risk points during the pouring construction process of the tall formwork support system. The system uses wireless automatic networking, high-frequency continuous sampling, real-time data analysis, and on-site sound and light alarms. There are four main components: the collector, the analyzer, the cloud platform, and the client. The collector is responsible for sampling and uploading sensor data; the analyzer networks the collectors, transfers the data, raises alerts, and uploads the data to the cloud platform; the cloud platform is responsible for data storage, display, early warning, data analysis, and other server-side functions; and the client mainly implements the remote configuration of the data display and monitoring system on the cloud platform. The structure monitored in this paper has obvious symmetry, and the arrangement of the measuring points is also symmetrical and orderly. The composition of the high formwork safety monitoring system is shown in Figure 10. Figure 11 shows the 3D model of the high formwork and the installation scheme of the instruments. Table 2 lists the instrument-related parameters.

Application in Stress Sequence
In the past, the ARMA model was often used to remove noise from a time series to obtain the trend of the series [39][40][41]. These studies selected a number of different ARMA models to process the time series and obtained the optimal solution by comparing the results. Selecting a model through the results often takes more time when the amount of data is large, and an accurate estimation of the order can reduce the workload and facilitate the batch processing of data. In this subsection, we demonstrate that the proposed BPNN-based model order selection method can be used to analyze stress sequences. The time series we analyzed comes from part of the stress change of the high formwork system. According to the loading status of the system, the stress change of the high formwork can be divided into a loading phase and a load-stabilization phase. For stress time series in the use phase, the ARMA model generally satisfies the causality requirements of the series. If the time series contains data from the loading phase, then we often need to perform nonlinear processing (differencing) on the data to meet the requirements of causality. The accurate processing of monitoring data is an indispensable part of the high formwork safety monitoring system. Based on the initial data collected by the high formwork safety monitoring system in an actual project, this paper discusses the specific application of the proposed method in the post-processing of monitoring data. In the following, we use two examples to verify the effect of the order selection method proposed earlier.

Example 1.
This example considers a time series that meets the causality requirement; the data were obtained from engineering field measurements. The data include the stress changes at 37 positions, and the sequence used in this example is taken from one of them. Since increasing the length of the sequence has no adverse effect, we extracted a subsequence of length 5000, which covers the stress change during the load stabilization stage. We used the AIC, the BIC, and the BPNN to estimate the best order of the model. As before, 80% of the data were used as the training set for the BPNN and the rest as the test set; the parameter settings were also unchanged. Finally, the ARMA parameters were estimated by the least-squares method for each of the model orders given by the BPNN, the AIC, and the BIC, and the fitting results are shown in Figure 12b. To better describe the distribution of the data, we drew the envelope of the obtained sequence and calculated the average width between the upper and lower envelopes. Compared with the AIC and the BIC, the BPNN method reduces the average width of the envelope by 83.08% and 9.16%, respectively. For the data in this example, the BPNN and the BIC give similar and accurate judgments of the sequence trend, while the AIC yields a higher degree of dispersion. In addition, there is no obvious difference among the residuals of the three methods.
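The envelope-width metric used above can be computed in a few lines. The sketch below (NumPy; the helper name `envelope_width` is ours) locates the local extrema of a sequence, linearly interpolates the upper and lower envelopes through them, and averages their gap. This is one common definition of the envelope; the paper does not specify its exact construction, so this is an assumption.

```python
import numpy as np

def envelope_width(x):
    """Average gap between the upper and lower envelopes of a sequence."""
    idx = np.arange(len(x))
    # Strict local maxima and minima (interior points only)
    up = idx[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]
    lo = idx[1:-1][(x[1:-1] < x[:-2]) & (x[1:-1] < x[2:])]
    # Piecewise-linear envelopes through the extrema
    upper = np.interp(idx, up, x[up])
    lower = np.interp(idx, lo, x[lo])
    return float(np.mean(upper - lower))

# Sanity check: for a unit sine wave the envelopes sit at +1 and -1,
# so the average width is approximately 2
t = np.linspace(0, 20 * np.pi, 4001)
w = envelope_width(np.sin(t))
```

A narrower average width indicates that the fitted trend tracks the data more tightly, which is how the 83.08% and 9.16% reductions are interpreted.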

Example 2.
The data used in this example include both the loading phase and the load stabilization phase, and the length of the sequence is 29,610. The setup of the neural network is the same as in Example 1. Unlike Example 1, this time series does not meet the causality requirement, so it was processed differently: the first-differenced sequence meets the causality requirement. As before, we estimated the order of the differenced sequence and fitted it with an ARMA process; finally, we restored the sequence [42]. Figure 13 shows the processing results of the different judgment methods. The obvious difference between Figures 12a and 13a is that the former sequence is stable, while the latter has a clear rising stage, which is why the latter must be differenced. Compared with the AIC and the BIC, the BPNN method reduces the average width of the envelope by 51.91% and 52.14%, respectively. The trend obtained with the BPNN-estimated order is therefore more compact than those obtained with the AIC and the BIC, which shows that the model whose order was estimated by the BPNN extracts the noise more effectively.
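The differencing-and-restoration step can be sketched as follows (NumPy; a minimal illustration of the transform only, not the full ARMA pipeline, and the synthetic loading/stabilization series is our own example). Keeping the first value allows a cumulative sum to restore the original level exactly.

```python
import numpy as np

def difference(x):
    """First-order differencing; keep x[0] so the series can be restored."""
    return x[0], np.diff(x)

def restore(x0, d):
    """Invert first-order differencing via a cumulative sum."""
    return np.concatenate(([x0], x0 + np.cumsum(d)))

# Example: a series with a rising (loading) stage followed by a stable stage
rng = np.random.default_rng(1)
load = np.linspace(0.0, 50.0, 300)   # loading phase: clear rising trend
stable = np.full(700, 50.0)          # load stabilization phase
x = np.concatenate([load, stable]) + rng.standard_normal(1000) * 0.5

x0, d = difference(x)
# ... an ARMA model would be fitted to the (now stationary) series d here ...
x_back = restore(x0, d)
```

The rising stage makes the raw series non-causal for an ARMA fit, while the differenced series fluctuates around a constant level; after fitting, the same cumulative sum maps the smoothed differenced sequence back to the original scale.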

Conclusions
This paper proposes a modeling method for the high formwork monitoring data. The details of establishing the ARMA model and BPNN have been presented, and the algorithm of model order estimation by BPNN is introduced. Through the method of Monte Carlo simulation, we studied the accuracy of the three methods under different coefficients, different sequence lengths, and different model orders. For the actual measured stress series, we used three methods to estimate the model order and then used the least square method to estimate the model coefficients. Finally, we applied the established model to the symmetrical high formwork monitoring data. According to the simulation and application results, the following conclusions can be made.

•	For each system, the accuracy rate of the proposed model order selection method is above 90%, which is better than both the AIC and the BIC. At the same time, the BIC criterion outperforms the AIC when the model order is low;
•	In the Monte Carlo simulation, changing the model's coefficients affects the accuracy of the BIC judgment, and the instability increases significantly for higher-order models. However, the BPNN order judgment method still maintains an accuracy rate of more than 90%;
•	The mathematically symmetric ARMA model is more likely to cause errors in the BPNN method, so this type of model needs to be judged in conjunction with the MSE descent gradient.

•	The judgment efficiency of the AIC and the BIC increases as the length of the time series increases. The proposed BPNN order judgment method is not sensitive to changes in sequence length and maintains a relatively high accuracy rate;
•	Both the AIC and the BIC are sensitive to changes in the model order; in particular, the BIC cannot judge correctly when the model order is high, whereas the BPNN still maintains a good judgment effect;
•	For the measured data used in this paper that meet the causality requirement, the judgment effect of the BPNN does not differ significantly from that of the BIC, but the AIC is clearly inferior. When the time series does not meet the causality requirement, we transform it into a stationary series; the analysis shows that the processing efficiency with the BPNN-estimated order increases by about 50%;
•	The stress sequence of the high formwork can be processed by an ARMA process to obtain its change trend and noise sequence, which is a feasible way to obtain effective information from the stress sequence.
Owing to limited computing resources, models of all orders were not calculated when studying the effects of the sequence coefficients and the sequence length. In addition, the model established in this paper does not consider the impact of accidental factors. We will discuss the effects of accidental factors on model building in future studies and further study methods for predicting periodic data.