Article

Sensor Data Reconstruction for Dynamic Responses of Structures Using External Feedback of Recurrent Neural Network

Department of Architectural Engineering, Dankook University, Yongin 16890, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2023, 23(5), 2737; https://doi.org/10.3390/s23052737
Submission received: 2 February 2023 / Revised: 16 February 2023 / Accepted: 1 March 2023 / Published: 2 March 2023

Abstract

Sensor faults in the sensor networks deployed in structures can degrade the structural health monitoring system and lead to difficulties in structural condition assessment. Techniques for reconstructing the data of missing sensor channels have been widely adopted to restore a complete dataset from all sensor channels. In this study, a recurrent neural network (RNN) model combined with external feedback is proposed to enhance the accuracy and effectiveness of sensor data reconstruction for measuring the dynamic responses of structures. The model utilizes spatial correlation rather than spatiotemporal correlation by explicitly feeding the previously reconstructed time series of defective sensor channels back into the input dataset. Because of the nature of spatial correlation, the proposed method generates robust and precise results regardless of the hyperparameters set in the RNN model. To verify the performance of the proposed method, simple RNN, long short-term memory, and gated recurrent unit models were trained using the acceleration datasets obtained from laboratory-scale three- and six-story shear building frames.

1. Introduction

Structural health monitoring (SHM) systems typically include sensors and data acquisition systems and are, thus, referred to as sensor-based monitoring systems. Because monitoring systems increasingly rely on sensor technology, effective management of sensor networks has become important [1,2]. However, sensor faults are frequent and inevitable in real systems owing to problems such as noise, degradation, and harsh environmental conditions [3,4]. In particular, buildings are constituted of various elements, such as structures, facilities, and exterior materials, which makes the sensors installed in them difficult to maintain and repair [5,6]. Data loss from failed sensor channels, which results in sparse sensor networks [7], significantly degrades structural condition assessment and causes errors in the assessment of the structural health status [8,9]. Data recovery for defective sensor channels is therefore crucial not only for operating sensor networks but also for managing the quality of the SHM system [10,11].
Over the last decade, the recovery of missing data caused by the failure of parts of the sensor network deployed in structures has been extensively studied [12,13,14]. If sensors belong to dense sensor networks deployed in a structure, each sensor shares a certain level of correlation with the others [15]. The reconstruction of defective sensor data utilizes this correlation by analyzing the data measured by different sensors [16]. The correlation between sensor data collected for the dynamic behaviors of structures consists of a spatial correlation between sensor channels at a time instant and a temporal correlation in a sequence of the timeline, which is often referred to as spatiotemporal correlation [17]. Correlation analysis for sensor data reconstruction has been mainly conducted using the numerical method of black box model analysis instead of theoretical dynamic analysis based on mechanical interpretation.
Data reconstruction techniques using artificial neural networks have shown excellent performance in recovering lost data by learning nonlinear patterns in correlated data. For example, a convolutional neural network (CNN) was adopted to reconstruct the acceleration responses of bridges indirectly by processing transformed images of time-series data [18,19]. Because the transformation of time series to images involves an inherent loss of partial information in the time domain, recurrent neural networks (RNNs) have been applied in the field of data reconstruction as an alternative to CNN. The RNN is a looped architecture specialized for learning temporal patterns in time-sequential data and is, therefore, utilized in research on natural language and speech processing [20,21,22]. A dominant feature of an RNN is its cyclic structure, in which the current neural networks share information from previous neural networks at the learning stage. The cyclic structure is effective for processing data from a dynamic system in which the current state affects the state in subsequent time steps [23,24]. A series of variants of the RNN, such as long short-term memory (LSTM) [25], gated recurrent unit (GRU) [26], bidirectional RNN (BRNN) [27], and bidirectional LSTM (Bi-LSTM) [28], were also used to reconstruct lost data, and their superior performance was reported [17,29,30].
Numerous hyperparameters determining the network structure, such as the number of input vectors and hidden layers, learning rate, cost function, regularization parameter, batch size, and training epochs, must be set before training RNN models [31,32]. The repetitive trials of setting them and evaluating the resulting models are referred to as hyperparameter tuning [33]. The numbers of input vectors and hidden layers directly affect the number of parameters in the model and accordingly determine its accuracy, computational cost, and prediction speed. Setting optimized hyperparameters for excellent data reconstruction performance is a cumbersome task, and the related process is often omitted from data reconstruction studies [29,34].
In this study, by leveraging the spatial correlation between sensor channel data rather than the temporal correlation of the dynamic behavior of structures, an RNN model with external feedback is proposed for data reconstruction. The proposed method generates a robust and precise model by explicitly feeding the reconstructed data from the RNN model back into the input dataset for the next time step. Accordingly, the method improves the accuracy of the recovered data while simplifying the hyperparameter tuning process of the RNN model used to recover the lost data. The remainder of this paper is organized as follows: Section 2 describes the architecture of a general RNN model used to apply and verify the proposed technique. Section 3 explains the RNN model training and the data recovery method that employs the proposed technique. In Section 4, a performance verification of the proposed method is conducted using the vibration data collected from the structural model. The paper concludes with a brief summary and discussion in Section 5.

2. RNN Architecture for Sensor Data Reconstruction

Various RNN models have been derived according to parameter structures, such as the type of input–output data and the weights inside the layers [28,35]. For training on time series, the RNN structure has been upgraded so that its parameters can retain time-series characteristics over longer periods. Representative RNN models, namely simple RNN, LSTM, and GRU, are presented in Figure 1. These are used as reference models for comparing the effect of different layer structures in lost data reconstruction studies [36,37].
A simple RNN, which is a chain of general neural networks, sequentially delivers input data to the cells of adjacent layers; that is, it is the basic form of a recurrent model. Because a cell that has delivered data does not store information about the transmitted data, it cannot remember long-term time-series information. Figure 1a shows the structure of a simple RNN with one memory between the input and output networks. With the hyperbolic tangent as the activation function, the memory h_t used to predict the output vector is defined as

$$h_t = \tanh(U x_t + W h_{t-1} + b)$$

where x_t is the input vector, U and W are weight matrices, and b is the bias vector. Training the model is the process of adjusting the weights so that the output predicted from h_t, which is generated by U, W, and h_{t-1}, approaches the correct value y_t. Because h_{t-1} of the previous time step is involved in predicting h_t of the current time step, a recurrent structure of the neural network is established. This is a structure in which the results of the previous step are fed back into the computation of the current step, and through this, the RNN enables the processing of time-series data.
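As a concrete illustration of this recurrence, the step above can be sketched in a few lines of NumPy; the dimensions and random weights below are toy values for demonstration, not taken from the study.

```python
import numpy as np

def rnn_step(x_t, h_prev, U, W, b):
    """One simple-RNN step: h_t = tanh(U x_t + W h_{t-1} + b)."""
    return np.tanh(U @ x_t + W @ h_prev + b)

# Toy dimensions: 3 input features (e.g., operating sensor channels), 5 hidden units.
rng = np.random.default_rng(0)
U = rng.normal(scale=0.5, size=(5, 3))  # input-to-hidden weights
W = rng.normal(scale=0.5, size=(5, 5))  # hidden-to-hidden (recurrent) weights
b = np.zeros(5)                         # bias vector

h = np.zeros(5)                       # initial memory h_0
for x_t in rng.normal(size=(10, 3)):  # unroll over 10 time steps
    h = rnn_step(x_t, h, U, W, b)     # h_{t-1} feeds h_t: the internal feedback
```

Because tanh is bounded, the memory vector always stays within [-1, 1]; the unrolled loop makes explicit how each step reuses the previous step's output.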
LSTM and GRU are models in which the cell structure connecting the input and output vectors is modified to improve the poor long-term dependency of the simple RNN (Figure 1b,c). The most prominent feature of the LSTM is that a cell state parameter, z_t, is additionally shared between adjacent layers, and the value of the existing input vector is preserved so that the long-term memory storage capacity is improved. Several parameters determined by the sizes of the input and output vectors exist inside a single layer, and the output gate, forget gate, and input gate composed of these parameters protect and control the cell state. In particular, the forget gate directly determines how much of the previous vector's value to remember and how much to forget. The GRU is a simplified variant of the LSTM; it has fewer parameters and trains faster because the forget gate and input gate of the LSTM are merged into a single gate called the update gate.
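To make the role of the update gate concrete, a minimal GRU step can be sketched as follows; this is an illustration of the standard GRU equations (toy dimensions, random weights, biases omitted), not code from the study.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, P):
    """One GRU recurrence step (biases omitted for brevity).

    The update gate z plays the combined role of the LSTM forget and
    input gates: it decides how much of h_prev to keep versus replace.
    """
    z = sigmoid(P["Wz"] @ x_t + P["Uz"] @ h_prev)             # update gate
    r = sigmoid(P["Wr"] @ x_t + P["Ur"] @ h_prev)             # reset gate
    h_cand = np.tanh(P["Wh"] @ x_t + P["Uh"] @ (r * h_prev))  # candidate memory
    return (1 - z) * h_prev + z * h_cand                      # blended new memory

rng = np.random.default_rng(1)
P = {k: rng.normal(scale=0.5, size=(4, 4) if k.startswith("U") else (4, 3))
     for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}
h = np.zeros(4)
for x_t in rng.normal(size=(8, 3)):
    h = gru_step(x_t, h, P)
```

The single interpolation weight z replaces the two separate LSTM gates, which is why the GRU has fewer parameters than the LSTM for the same hidden size.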

3. Internal and External Feedback in RNN Model for Sensor Data Reconstruction

Figure 2a depicts the conventional training of an RNN model for the reconstruction of the i-th lost channel in an n-sensor network. As shown in the figure, an input dataset consisting of the n − 1 channels excluding the i-th channel is used for training the RNN model. Sensor data reconstruction utilizes the correlation in sensor data through black-box analysis. Using black-box analysis of the spatial correlation between sensor channels and the temporal correlation within the time series, the RNN model generates the data of the lost channel. In particular, the temporal correlation is analyzed through the inherent recurrence of the RNN model, which utilizes the output of the previous step at every time step. In this study, this inherent recurrence is referred to as internal feedback.
In addition to internal feedback, a method of feeding the reconstructed lost channel data back into the input dataset is proposed in Figure 2b. In the reconstruction of the i-th lost channel in the n-sensor network, the output of the i-th channel reconstructed by the model is incorporated into the input dataset at the next step. Thus, the input dataset of the RNN model consists of n channels. In this study, this looped structure is referred to as external feedback.
Figure 3 shows a schematic diagram representing the input–output dataset relationships of the RNN model for both the conventional and external feedback methods. As shown in Figure 3a, the input and the output (i.e., the reconstruction channel) are independently set to X_t^c and y_t, respectively, where c and t are the channel number and time step of the operating sensors. The input dataset X_n^c for predicting the n-th element y_n of the y_t channel consists of the following matrix of size (c × α):

$$X_n^c = \begin{bmatrix} x_{n-\alpha}^c & \cdots & x_{n-1}^c \\ \vdots & \ddots & \vdots \\ x_{n-\alpha}^1 & \cdots & x_{n-1}^1 \end{bmatrix}$$
In the proposed external feedback method, as shown in Figure 3b, the y_t channel is used simultaneously in the input matrix and the output vector. The input dataset X_n^c for predicting the n-th element y_n of the y_t channel consists of the following matrix of size ((c + 1) × α):

$$X_n^c = \begin{bmatrix} y_{n-\alpha} & \cdots & y_{n-1} \\ x_{n-\alpha}^c & \cdots & x_{n-1}^c \\ \vdots & \ddots & \vdots \\ x_{n-\alpha}^1 & \cdots & x_{n-1}^1 \end{bmatrix}$$
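Assembling the two input matrices from the channel histories can be sketched as follows; the array layout and function name are illustrative assumptions, with `alpha` denoting the window length α.

```python
import numpy as np

def build_input(x_hist, y_hist, n, alpha, external_feedback=True):
    """Input matrix X_n for predicting y_n.

    x_hist: (T, c) array of the c operating sensor channels.
    y_hist: (T,) array of the target channel (measured during training;
            previously reconstructed values during deployment).
    Returns (c x alpha) without feedback, ((c + 1) x alpha) with it.
    """
    window = x_hist[n - alpha:n].T  # rows: channels; columns: steps n-alpha .. n-1
    if external_feedback:
        # Prepend the fed-back target-channel history as an extra row.
        window = np.vstack([y_hist[n - alpha:n], window])
    return window

# Toy example: T = 12 steps, c = 3 operating channels, window alpha = 4.
x_hist = np.arange(36, dtype=float).reshape(12, 3)
y_hist = np.arange(12, dtype=float)
conventional = build_input(x_hist, y_hist, n=8, alpha=4, external_feedback=False)
proposed = build_input(x_hist, y_hist, n=8, alpha=4)
```

Here `conventional` has shape (3, 4) and `proposed` has shape (4, 4), matching the (c × α) and ((c + 1) × α) matrices above.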
The model learned through repeated training is used as a reconstruction model at the point in time when data loss occurred. The reconstructed data of the loss channel are fed back to the input data matrix at the next time step.
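The reconstruction loop at deployment time can be sketched as follows, with a dummy averaging function standing in for the trained RNN model (an assumption made purely to exercise the feedback loop):

```python
import numpy as np

def reconstruct(model, x_hist, y_seed, alpha):
    """Reconstruct a lost channel autoregressively with external feedback.

    model: callable mapping an input matrix to a scalar prediction.
    x_hist: (T, c) operating-channel data after the fault.
    y_seed: last alpha known values of the lost channel before the fault.
    """
    T = x_hist.shape[0]
    y = np.empty(T)
    y[:alpha] = y_seed                       # seed the feedback window
    for n in range(alpha, T):
        # ((c + 1) x alpha) input: previously reconstructed y plus measured x.
        X_n = np.vstack([y[n - alpha:n], x_hist[n - alpha:n].T])
        y[n] = model(X_n)                    # prediction is fed back next step
    return y

# Dummy model: averages its input matrix (illustration only).
y = reconstruct(lambda X: X.mean(), x_hist=np.ones((10, 2)), y_seed=np.ones(3), alpha=3)
```

The key point is that each predicted value immediately becomes part of the input window for the next prediction, which is the external feedback loop of Figure 2b.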

4. Experimental Verification

4.1. Vibration Experiment with Multi-Story Shear Building Model

To experimentally verify the performance of the proposed real-time feedback method for data reconstruction, a series of vibration experiments was conducted to collect the dynamic response data of a multi-DOF structure. The test structure was a three-story single-bay frame with a total height of 45 cm and a width of 16 cm (Figure 4). It was assembled from two structural elements: flexible steel plates (50 cm × 3 cm × 4 mm thick) and rigid aluminum plates (50 cm × 50 cm × 2 cm thick). The aluminum plates were used as floor plates, and both ends of the four steel plates were joined to L-shaped plates to support each floor plate. Such a prefabricated structure can be expanded to a six-story structure by repeatedly connecting the same structural elements, as explained in Section 4.4.
The structural model was mounted on a uniaxial shake table driven by a mechanical linear actuator, in which the rotary motion of an AC servo motor (HC-SFS502, Mitsubishi, Tokyo, Japan) was converted to linear motion. The column connected to the shake table was excited along a planar axis by an analog signal obtained by converting the digital signal generated in MATLAB Simulink using a data acquisition (DAQ) module, NI-9375 (National Instruments, Austin, TX, USA). Four accelerometers (731A, Wilcoxon, Frederick, MD, USA) with a 100 Hz sampling rate were installed at the center of the shake table and on each floor of the structure to measure the acceleration induced by the movement of the shake table. White noise and El Centro seismic signals were adopted as the excitations for the shake table to simulate the ambient and seismic motions of the structure. The accelerations measured from the shake table and the structure were used to verify the quality of the generated input signals and to train the RNN model, respectively.
To build a training dataset for model training, white noise with a frequency range of 0.5 to 30 Hz and a maximum acceleration of approximately 0.3 g was applied as an excitation for 60 s. The response of the building, with a maximum acceleration of approximately 0.2 g and a similar vibration trend on all three floors, was used for training and testing for durations of 40 s and 20 s, respectively (Figure 5a). Two types of validation datasets that were not involved in training were prepared to compare model accuracy: (1) the response of the building to white noise of 200 s duration with a frequency range of 0.5 to 30 Hz and a maximum acceleration of approximately 0.3 g, similar in acceleration magnitude to the dataset used for model generation (Figure 5b); (2) the response of the building excited for 50 s by the El Centro seismic signal with a peak value of 2 g, which is approximately 10 times greater than the maximum acceleration of the dataset used for model training (Figure 5c). In the validation dataset, the sensor on the third floor was assumed to be the lost channel, and its data were reconstructed by the RNN model.

4.2. RNN Model Training and Its Evaluation

Simple RNN, LSTM, and GRU were selected as the RNN models for the performance comparison of the proposed external feedback for sensor data reconstruction. The hyperparameters in Table 1 were identical for the three models. The numbers of input data and hidden layers were treated as variables to evaluate the stability of the models generated by the proposed method: the number of input data was increased from a minimum of 4 to a maximum of 40 at intervals of 4 (10 values), and the number of hidden layers from a minimum of 5 to a maximum of 100 at intervals of 5 (20 values). The other hyperparameters, such as the optimizer, training loss, learning rate, batch size, and maximum epochs, were fixed.
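Read together with the 2400 = 12 × 200 model count reported in the next paragraph, the hyperparameter grid can be enumerated as follows; this is a reading of the stated ranges, not code from the study.

```python
from itertools import product

input_vectors = list(range(4, 41, 4))   # 4, 8, ..., 40
hidden_layers = list(range(5, 101, 5))  # 5, 10, ..., 100
grid = list(product(input_vectors, hidden_layers))  # 200 grid points per situation

# 2 feedback methods x 3 model types x 2 excitations = 12 situations.
total_models = 12 * len(grid)
```

Each of the 200 grid points corresponds to one trained model per situation, giving 2400 models in total.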
Model training and evaluation were performed to quantitatively compare the performance of the external feedback (termed proposed) and the internal feedback inherent in the RNN model (termed existing). In total, 2400 (12 × 200) models were generated for the combinations of the numbers of input vectors and layers in 12 situations, depending on the model type (simple RNN, LSTM, GRU) and the type of excitation (white noise and El Centro seismic signals). Figure 6 shows the training and test losses of the RNN models as the epochs increase during training; each curve is the average of the 200 loss values acquired in each situation. Overall, there is a rapid loss reduction within the first 10 epochs; subsequently, the loss of the existing method gradually decreases up to 200 epochs, whereas that of the proposed method converges to its minimum at approximately 100 epochs. The converged losses of the proposed method were lower than those of the existing method.

4.3. Sensor Data Reconstruction from Trained RNN Models

The validation dataset, which was not used for training and testing, was input to the previously generated 2400 models, and the third-floor data, assumed to be the lost channel, were reconstructed. The root mean square error (RMSE, $\sqrt{\sum_k (y_k^{\mathrm{predicted}} - y_k^{\mathrm{measured}})^2 / N}$, where N is the number of data points) between the reconstructed and measured data is presented as 3D mesh plots in Figure 7. In the case of the existing method, starting from high values, the RMSEs tended to decrease up to 20 input vectors but gradually increased thereafter. In addition, unstable results were obtained when the number of layers was close to 100. In contrast, in the case of the proposed method, low RMSEs were obtained regardless of the number of inputs, confirming a stable trend of RMSEs, even though they slightly increased beyond 20 input vectors. Moreover, robust and low RMSEs were obtained for all models, except at a high number of hidden layers for the simple RNN with its simple layer structure. There were no differences according to the type of excitation signal used.
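The error metric itself is straightforward to compute; a minimal helper matching the RMSE definition above (illustrative, not code from the study):

```python
import numpy as np

def rmse(predicted, measured):
    """Root mean square error between reconstructed and measured signals."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean((predicted - measured) ** 2)))

# A perfect reconstruction gives zero error.
assert rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
```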
The dynamic response features of the structure can be effectively contained as the length of the input vector increases. The lengthened input vector increases the number of parameters inside the RNN model; therefore, the computation becomes more expensive. Thus, a tradeoff occurs in the determination of the input vector length. In the case of loss data reconstruction by correlation analysis of sensor data, it is confirmed that the dynamic response characteristics are related to temporal correlation, and optimization of the input vector length considering the tradeoff is required in the existing internal feedback. In contrast, the proposed external feedback method of feeding the reconstruction output of the lost channel back to the input dataset reduces the influence of temporal correlation by emphasizing the spatial correlation in the input data between sensor channels. In addition, it was verified that the proposed method can make the hyperparameter tuning process robust, even at a high number of layers.
Table 2 shows the number of input vectors and the RMSE of the model generating the least RMSE through hyperparameter optimization in each mesh. Because the difference in accuracy according to the number of layers was insignificant, it was fixed at 50. The number of input vectors of the existing method under the white noise and El Centro seismic signals averaged 18.7 and 25.3, respectively; in the case of the proposed method, it was reduced to 5.3 and 4, respectively. In addition, the model error evaluated by the RMSEs was reduced to 3.107 × 10⁻³ g and 7.277 × 10⁻² g for the proposed method, against 8.620 × 10⁻³ g and 20.316 × 10⁻² g for the existing method. This demonstrates that the proposed method reduces the dependency of the RNN model for sensor data reconstruction on the hyperparameter settings and improves its accuracy.
The responses of the structure reconstructed with the proposed external feedback method are compared with the measurements in Figure 8, where the model accuracies are 2.147 × 10⁻³ g and 4.878 × 10⁻² g for the white noise and El Centro seismic signal cases, respectively, as shown in Table 2.

4.4. Extended Six-Story Structure Model

To evaluate the effect of the complexity of the multi-DOF structural system on the proposed method, the same mass and stiffness system was extended to a six-story structure model with additional sensor channels. The three-story prefabricated structure model composed of steel plates, aluminum plates, and L-shaped plates was further assembled with members of the same size and expanded to six stories. The lost channel was assumed to be on the sixth floor, and white noise excitation was applied to the shake table. The other experimental conditions remained the same as for the previous test structure. As a result of the training, 1200 (2 × 3 × 200) models were generated according to the existing and proposed methods, model types, numbers of input vectors, and hidden layers. The lost channel of the 200 s validation dataset, which was not used for training and testing, was reconstructed. The RMSE of the model accuracy is presented in the 3D mesh plots in Figure 9. The overall trend was similar to that of the three-story structure model: in the case of the existing method, a high RMSE occurred when the number of input vectors was low and decreased sharply up to 16 input vectors, whereas in the case of the proposed method, a low RMSE was obtained regardless of the number of input vectors. Model instability at a high number of layers was found in all models of the existing method and in the simple RNN of the proposed method, but the LSTM and GRU of the proposed method resulted in low and stable RMSEs in all cases. The optimized models of both the existing and proposed methods were derived from the LSTM. The number of input vectors, the number of hidden layers, and the RMSE are 28, 32, and 7.319 × 10⁻³ g, respectively, for the existing method and 4, 24, and 2.871 × 10⁻³ g, respectively, for the proposed method. That is, the number of input vectors is reduced by more than six times, and the RMSE by more than two times.
Thus, it is demonstrated that the proposed method can generate a stable and high-accuracy RNN model, even when the complexity of the structure increases.
The performance of the proposed external feedback method using the LSTM was further evaluated under conditions of multiple sensor channel losses: the sixth-floor sensor loss (Case 1); the fifth- and sixth-floor sensor losses (Case 2); and the fourth-, fifth-, and sixth-floor sensor losses (Case 3). In comparison experiments on the previous six-story structure model involving the three reconstruction models, the data commonly generated for the sixth floor were compared, as depicted in Figure 10, and the quantitative information is tabulated in Table 3. Figure 10 shows the time histories of the measured and reconstructed data acquired in Cases 1 to 3. In general, the measured data and the three sets of reconstructed data were similar. In the zoomed plots, the reconstructed signals tended to be increasingly underestimated as the number of lost sensors increased.

5. Conclusions

In this study, a real-time external feedback loop was proposed for use with RNN models, and its effect was quantitatively evaluated through a series of experiments. The proposed RNN model with external feedback was demonstrated and verified with the vibration response datasets obtained from experiments with white noise and El Centro seismic signal excitation on a three-story structure model, and the experiment was extended to a six-story structure model. It was shown that the proposed method simplifies hyperparameter tuning and generates a more accurate model for the RNN-based reconstruction of lost data.
The accuracy of the RNN models was compared using the RMSE between the reconstructed and measured data. Based on the results of the case study, the following conclusions are drawn. In the three-story structure model experiment, the proposed method generated models with robust accuracy, regardless of the number of input data and layers, for the simple RNN, LSTM, and GRU models. Compared with the conventional RNN models, the number of input data was reduced by a factor of about four and the RMSEs by a factor of about three using the proposed external feedback RNN models. In the six-story structure model experiment, under scenarios in which the number of faulty sensors was increased to up to three channels, robust models with high accuracy were obtained. For the reconstructed signals on the sixth floor, only trivial differences between the reconstructions of the three fault scenarios were confirmed.

Author Contributions

Conceptualization, J.K.; Methodology, J.K.; Software, Y.-S.S.; Resources, Y.-S.S.; Writing—original draft, Y.-S.S.; Writing—review & editing, J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the research fund of Dankook University in 2022.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lin, Q.; Li, C. Nonstationary wind speed data reconstruction based on secondary correction of statistical characteristics. Struct. Control Health Monit. 2021, 28, e2783. [Google Scholar] [CrossRef]
  2. Kim, J.; Lynch, J.P. Autonomous decentralized system identification by Markov parameter estimation using distributed smart wireless sensor networks. J. Eng. Mech. 2012, 138, 478–490. [Google Scholar] [CrossRef] [Green Version]
  3. Kerschen, G.; De Boe, P.; Golinval, J.-C.; Worden, K. Sensor validation using principal component analysis. Smart Mater. Struct. 2004, 14, 36. [Google Scholar] [CrossRef]
  4. Sharifi, R.; Kim, Y.; Langari, R. Sensor fault isolation and detection of smart structures. Smart Mater. Struct. 2010, 19, 105001. [Google Scholar] [CrossRef]
  5. Bhuiyan, M.Z.A.; Wang, G.; Cao, J.; Wu, J. Deploying wireless sensor networks with fault-tolerance for structural health monitoring. IEEE Trans. Comput. 2013, 64, 382–395. [Google Scholar] [CrossRef]
  6. Vasar, C.; Filip, I.; Szeidert, I.; Borza, I. Fault detection methods for wireless sensor networks using neural networks. In Proceedings of the 2010 International Joint Conference on Computational Cybernetics and Technical Informatics, Timisoara, Romania, 27–29 May 2010; pp. 295–298. [Google Scholar]
  7. Xu, Y.; Sun, G.; Geng, T.; Zheng, B. Compressive sparse data gathering with low-rank and total variation in wireless sensor networks. IEEE Access 2019, 7, 155242–155250. [Google Scholar] [CrossRef]
  8. He, J.; Li, Y.; Zhang, X.; Li, J. Missing and Corrupted Data Recovery in Wireless Sensor Networks Based on Weighted Robust Principal Component Analysis. Sensors 2022, 22, 1992. [Google Scholar] [CrossRef]
  9. Kim, Y.; Bai, J.-W.; Albano, L.D. Fragility estimates of smart structures with sensor faults. Smart Mater. Struct. 2013, 22, 125012. [Google Scholar] [CrossRef]
  10. Lin, J.-F.; Li, X.-Y.; Wang, J.; Wang, L.-X.; Hu, X.-X.; Liu, J.-X. Study of building safety monitoring by using cost-effective MEMS accelerometers for rapid after-earthquake assessment with missing data. Sensors 2021, 21, 7327. [Google Scholar] [CrossRef]
  11. Bao, Y.; Yu, Y.; Li, H.; Mao, X.; Jiao, W.; Zou, Z.; Ou, J. Compressive sensing-based lost data recovery of fast-moving wireless sensing for structural health monitoring. Struct. Control Health Monit. 2015, 22, 433–448. [Google Scholar] [CrossRef]
  12. Vedavalli, P.; Ch, D. A Deep Learning Based Data Recovery Approach for Missing and Erroneous Data of IoT Nodes. Sensors 2022, 23, 170. [Google Scholar] [CrossRef]
  13. Bao, Y.; Tang, Z.; Li, H. Compressive-sensing data reconstruction for structural health monitoring: A machine-learning approach. Struct. Health Monit. 2020, 19, 293–304. [Google Scholar] [CrossRef] [Green Version]
  14. Shen, H.; Li, X.; Cheng, Q.; Zeng, C.; Yang, G.; Li, H.; Zhang, L. Missing information reconstruction of remote sensing data: A technical review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 61–85. [Google Scholar] [CrossRef]
  15. Kim, J.; Swartz, R.A.; Lynch, J.P.; Lee, J.-J.; Lee, C.-G. Rapid-to-deploy reconfigurable wireless structural monitoring systems using extended-range wireless sensors. Smart Struct. Syst. 2010, 6, 505–524. [Google Scholar] [CrossRef] [Green Version]
  16. Alippi, C.; Boracchi, G.; Roveri, M. On-line reconstruction of missing data in sensor/actuator networks by exploiting temporal and spatial redundancy. In Proceedings of the 2012 International Joint Conference on Neural Networks (IJCNN), Brisbane, QLD, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar]
  17. Jeong, S.; Ferguson, M.; Hou, R.; Lynch, J.P.; Sohn, H.; Law, K.H. Sensor data reconstruction using bidirectional recurrent neural network with application to bridge monitoring. Adv. Eng. Inform. 2019, 42, 100991. [Google Scholar] [CrossRef]
  18. Oh, B.K.; Glisic, B.; Kim, Y.; Park, H.S. Convolutional neural network–based data recovery method for structural health monitoring. Struct. Health Monit. 2020, 19, 1821–1838. [Google Scholar] [CrossRef]
  19. Fan, G.; Li, J.; Hao, H. Lost data recovery for structural health monitoring based on convolutional neural networks. Struct. Control Health Monit. 2019, 26, e2433. [Google Scholar] [CrossRef]
Figure 1. The RNN and its variant models. (a) Simple RNN. (b) LSTM. (c) GRU.
Figure 2. Schematic diagram of internal and external feedback in the RNN model. (a) Internal feedback. (b) External feedback.
Figure 3. Input and output data flow of internal and external feedback in the RNN. (a) Internal feedback. (b) External feedback.
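The external-feedback flow sketched in Figure 3b can be illustrated with a minimal loop: at each time step, the samples previously reconstructed for the defective channel are appended to the healthy-channel measurements and fed back as model input. The linear "model" below is a hypothetical stand-in for the trained RNN, and all names and sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def reconstruct_with_external_feedback(healthy, model, n_input=4):
    """healthy: (T, C) measurements from the intact channels.
    Returns the (T,) reconstructed series of the defective channel."""
    T, C = healthy.shape
    recon = np.zeros(T)
    for t in range(n_input, T):
        # Input window: recent healthy samples plus the previously
        # reconstructed (externally fed-back) samples of the faulty channel.
        window = np.concatenate([healthy[t - n_input:t].ravel(),
                                 recon[t - n_input:t]])
        recon[t] = model(window)
    return recon

# Dummy linear model standing in for the trained RNN (illustrative only).
rng = np.random.default_rng(0)
weights = rng.normal(size=(4 * 3 + 4,))  # 4 steps x 3 channels + 4 fed-back samples
model = lambda x: float(x @ weights)

healthy = rng.normal(size=(200, 3))  # e.g., three intact accelerometers
recon = reconstruct_with_external_feedback(healthy, model)
print(recon.shape)  # (200,)
```

The key point of the scheme is that `recon` itself, not an internal hidden state, closes the feedback loop, which is why the method exploits spatial rather than spatiotemporal correlation.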
Figure 4. Experimental setup for acquiring the vibration dataset of the structure.
Figure 5. Datasets acquired from the building model for each excitation type: from top to bottom, the plots show the signals of the third, second, and first floors and the shaking table, respectively. (a) White noise (training and test). (b) White noise (validation). (c) El Centro (validation).
Figure 6. Training and testing losses of the existing and proposed methods versus training epoch, for each RNN model and excitation type. (a) Simple RNN, white noise (proposed). (b) LSTM, white noise (proposed). (c) GRU, white noise (proposed). (d) Simple RNN, white noise (existing). (e) LSTM, white noise (existing). (f) GRU, white noise (existing). (g) Simple RNN, El Centro (proposed). (h) LSTM, El Centro (proposed). (i) GRU, El Centro (proposed). (j) Simple RNN, El Centro (existing). (k) LSTM, El Centro (existing). (l) GRU, El Centro (existing).
Figure 7. RMSE of existing and proposed methods for different RNN models and excitation types. (a) Simple RNN, white noise (proposed). (b) LSTM, white noise (proposed). (c) GRU, white noise (proposed). (d) Simple RNN, white noise (existing). (e) LSTM, white noise (existing). (f) GRU, white noise (existing). (g) Simple RNN, El Centro (proposed). (h) LSTM, El Centro (proposed). (i) GRU, El Centro (proposed). (j) Simple RNN, El Centro (existing). (k) LSTM, El Centro (existing). (l) GRU, El Centro (existing).
Figure 8. Acceleration time history of the measured and reconstructed data. (a) Structural vibration due to white noise. (b) Structural vibration caused by the El Centro seismic signal.
Figure 9. RMSE of existing and proposed methods for different RNN models. (a) Simple RNN (proposed). (b) LSTM (proposed). (c) GRU (proposed). (d) Simple RNN (existing). (e) LSTM (existing). (f) GRU (existing).
Figure 10. Acceleration time history of reconstructed data of the sixth floor.
Table 1. Hyperparameters set in the RNN models.

Hyperparameter                                  Value
Number of input data (min./max./interval)       4/40/4
Number of hidden layers (min./max./interval)    5/100/5
Optimizer                                       Adam
Training loss                                   MAE
Learning rate                                   0.001
Batch size                                      72
Maximum epochs                                  200
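The sweep ranges in Table 1 imply a grid of candidate configurations: the number of input data is varied from 4 to 40 in steps of 4, and the hidden-layer size (labelled "number of hidden layers" in the table) from 5 to 100 in steps of 5, with the optimizer, loss, learning rate, batch size, and epoch budget held fixed. A minimal sketch of enumerating that grid, assuming a simple exhaustive search rather than the authors' exact tuning procedure:

```python
# Grid implied by Table 1 (exhaustive enumeration is an assumption;
# the paper may have used a different search strategy).
n_inputs = list(range(4, 41, 4))    # 4, 8, ..., 40  (10 values)
n_hidden = list(range(5, 101, 5))   # 5, 10, ..., 100 (20 values)

grid = [(ni, nh) for ni in n_inputs for nh in n_hidden]
print(len(grid))  # 200 candidate (input length, hidden size) pairs
```

Each pair would then be trained with the fixed settings (Adam, MAE loss, learning rate 0.001, batch size 72, up to 200 epochs) and compared by test loss.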
Table 2. Number of input vectors (NIVs) and RMSE of the optimized RNN models.

Excitation    Method    Metric      Simple RNN      LSTM            GRU             Mean
White noise   Proposed  NIV         4               4               8               5.3
                        RMSE (g)    4.294 × 10⁻³    2.147 × 10⁻³    2.881 × 10⁻³    3.107 × 10⁻³
              Existing  NIV         20              16              20              18.7
                        RMSE (g)    11.495 × 10⁻³   5.256 × 10⁻³    9.109 × 10⁻³    8.620 × 10⁻³
El Centro     Proposed  NIV         4               4               4               4
seismic                 RMSE (g)    11.037 × 10⁻²   4.878 × 10⁻²    5.916 × 10⁻²    7.277 × 10⁻²
signal        Existing  NIV         16              28              32              25.3
                        RMSE (g)    27.910 × 10⁻²   15.373 × 10⁻²   17.666 × 10⁻²   20.316 × 10⁻²
Table 3. Quantitative information related to reconstructed data.

Metric      Measured Value   Case 1          Case 2          Case 3
Peak (g)    0.1609           0.1417          0.1446          0.1343
RMS (g)     0.0388           0.0377          0.0354          0.0333
RMSE (g)    -                2.781 × 10⁻³    5.751 × 10⁻³    8.815 × 10⁻³
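The three metrics reported in Table 3 are standard and easy to reproduce: the peak is the maximum absolute value of the reconstructed signal, the RMS is its root mean square, and the RMSE is computed against the measured signal. A short sketch using synthetic data in place of the experimental accelerations (the signals below are illustrative, not the paper's data):

```python
import numpy as np

def signal_metrics(measured, reconstructed):
    """Peak and RMS of the reconstructed signal, RMSE vs. the measured one."""
    peak = np.max(np.abs(reconstructed))
    rms = np.sqrt(np.mean(reconstructed ** 2))
    rmse = np.sqrt(np.mean((measured - reconstructed) ** 2))
    return peak, rms, rmse

# Synthetic stand-ins: a 5 Hz "measured" acceleration and a reconstruction
# with a small high-frequency error component.
t = np.linspace(0.0, 1.0, 500)
measured = 0.1 * np.sin(2 * np.pi * 5 * t)
reconstructed = measured + 0.001 * np.sin(2 * np.pi * 50 * t)

peak, rms, rmse = signal_metrics(measured, reconstructed)
```

On real data, the same function applied to each fault case yields the columns of Table 3 directly.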