Article

Effects of Data Augmentation on the Nine-Axis IMU-Based Orientation Estimation Accuracy of a Recurrent Neural Network

Inertial Motion Capture Lab, School of ICT, Robotics & Mechanical Engineering, Hankyong National University, Anseong 17579, Republic of Korea
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(17), 7458; https://doi.org/10.3390/s23177458
Submission received: 1 May 2023 / Revised: 17 August 2023 / Accepted: 24 August 2023 / Published: 28 August 2023
(This article belongs to the Section Wearables)

Abstract

The nine-axis inertial measurement unit (IMU)-based three-dimensional (3D) orientation estimation is a fundamental part of inertial motion capture. Recently, owing to the successful utilization of deep learning in various applications, orientation estimation neural networks (NNs) trained on large datasets of nine-axis IMU signals and reference orientation data have been developed. During training, the limited amount of training data is a critical issue in the development of powerful networks. Data augmentation, which increases the amount of training data, is a key approach for addressing the data shortage problem and thus for improving estimation performance. However, to the best of our knowledge, no studies have analyzed the effects of data augmentation techniques on the estimation performance of orientation estimation networks using IMU sensors. This paper selects three data augmentation techniques for IMU-based orientation estimation NNs, i.e., augmentation by virtual rotation, bias addition, and noise addition (hereafter referred to as rotation, bias, and noise, respectively). It then analyzes the effects of these techniques on the estimation accuracy of recurrent neural networks, for a total of seven combinations (i.e., rotation only, bias only, noise only, rotation and bias, rotation and noise, bias and noise, and all three combined). The evaluation results show that, among the seven augmentation cases, the four cases including 'rotation' (i.e., rotation only, rotation and bias, rotation and noise, and all three combined) occupy the top four. Therefore, it may be concluded that the augmentation effect of rotation is overwhelming compared to those of bias and noise. By applying rotation augmentation, the performance of the NN can be significantly improved. The analysis of the effects of the data augmentation techniques presented in this paper may provide insights for developing robust IMU-based orientation estimation networks.

1. Introduction

For decades, inertial motion capture has been widely used in various fields to determine the precise three-dimensional (3D) orientation of a moving object without in-the-lab constraints, such as in aerospace [1,2,3], robotics [4], and ambulatory human motion tracking [5,6]. In particular, 3D orientation estimation technology using a nine-axis inertial measurement unit (IMU), which consists of a 3-axis accelerometer, 3-axis gyroscope, and 3-axis magnetometer, is a key technology for inertial motion capture systems.
Conventionally, the nine-axis IMU-based 3D orientation can be estimated through the strap-down integration of the gyroscope signal and the reference directions provided by the accelerometer and magnetometer signals (i.e., the direction of gravitational acceleration and the direction of the Earth's local magnetic field, respectively). However, each reference direction can be obtained only under specific conditions (i.e., a static state for the vertical direction and a magnetically homogeneous environment for the horizontal direction). Meanwhile, the results of strap-down integration are highly sensitive to gyroscope bias. To address these limitations, a wide variety of sensor fusion algorithms, such as Kalman filters and complementary filters, have been proposed over the decades to estimate the 3D orientation by fusing the three sensor signals [7,8,9,10].
Recently, with the rapid development of computer hardware, deep learning has been utilized in various applications and has achieved remarkable results. In particular, deep learning methods are widely used in machine vision and natural language processing to perform image classification, segmentation, text generation, and machine translation tasks [11,12,13,14].
In the last several years, influenced by the successful utilization of deep learning in various applications, methods using deep learning for orientation estimation have been proposed [15,16,17]. Specifically, instead of conventional filter algorithms, alternative approaches have been developed by training neural networks (NNs) end-to-end on large datasets including raw nine-axis IMU signals and ground-truth orientation data [18,19,20,21,22,23]. These studies achieved promising results by showing that NNs can estimate orientation more accurately than conventional filters under various conditions.
In deep learning, various factors such as parameter tuning, network architecture, and the optimizer affect the performance of a network, so state-of-the-art training algorithms or architectures are often required to improve performance. However, the limited amount of training data is a major obstacle to developing robust networks. A robust network, particularly a deep network with a large number of parameters, requires abundant training data with diverse characteristics. In general, however, collecting large and varied training data is a costly and time-consuming process. Therefore, an efficient approach is needed to obtain diverse training data without an additional data collection process.
Data augmentation, which increases the limited amount of training data based on domain knowledge, is a key approach to addressing the data shortage problem. It also helps prevent overfitting and improves the generalizability of the trained network. This approach artificially transforms the original data, while preserving its essential characteristics, to increase the amount of data. It is commonly used for text and image data and has been shown to improve network performance [24,25,26,27]. Accordingly, the importance of data augmentation in orientation estimation networks has been increasingly emphasized.
To effectively utilize data augmentation in an IMU data-based orientation estimation network, it is important to investigate the effect of augmentation techniques on network performance. Weber et al. [21] used data augmentation techniques to increase the amount of IMU data for orientation estimation, but did not explicitly investigate the effect of data augmentation on the performance of neural network-based orientation estimation. Some studies have applied data augmentation techniques to inertial data and evaluated their effects [28,29,30]. To train a convolutional neural network (CNN) for person identification using inertial data, Tran and Choi [28] introduced data augmentation techniques and investigated their effect on identification performance. Li et al. [29] proposed a data augmentation technique for IMU data to improve the performance of a CNN for cattle behavior classification and analyzed its effects. In [30], data augmentation for inertial data was used in a long short-term memory (LSTM) network for driver behavior classification, and its effect was examined. These studies applied data augmentation to inertial sensor data for recognition and classification tasks in various fields and evaluated its effects. However, to our knowledge, no study has evaluated the effect of data augmentation on a 3D orientation estimation neural network.
To the best of our knowledge, previous studies on orientation estimation using deep learning have focused on developing new and powerful network architectures for this task. That is, the effects of data augmentation techniques on estimation performance in NN-based 3D orientation estimation using IMU signals have not yet been studied. Thus, the main contribution of this study is to provide insight into the effect of data augmentation on a 3D orientation estimation network. To this end, a comprehensive experiment was constructed to evaluate the effect of each augmentation technique using a large dataset collected under diverse conditions. To address the data scarcity problem and improve robustness, we present three data augmentation techniques for NN-based 3D orientation estimation (i.e., rotation augmentation, bias augmentation, and noise augmentation) and analyze the effect of each technique on estimation performance. We trained seven models, one per augmentation strategy, to evaluate their effects on estimation performance.
The remainder of this paper is organized as follows. In Section 2, we introduce the architecture of the network for 3D orientation estimation, the training algorithm, and the three data augmentation techniques applied to the training data. In Section 3, the experimental process for acquiring training and test data and the training scenario for analyzing the effects of each data augmentation technique are explained. In Section 4, the estimation performance of the models trained with each augmentation technique is compared, and the effects of the augmentation techniques are analyzed. Finally, Section 5 concludes the paper.

2. Materials and Methods

2.1. Three-Dimensional Orientation Estimation Neural Network

To investigate the effects of the data augmentation techniques on the orientation estimation network, we first selected the NN architecture. The NN for 3D orientation estimation is a recurrent neural network (RNN), an architecture well suited to the regression and analysis of time-series data. We utilized the RNN model proposed in [23], which is an extension of the RNN model introduced in [21]; that is, Section 2.1 summarizes the model proposed in [23].
The RNN architecture is as follows. The input vector of the network is a 9-dimensional vector x, which is the 9-axis raw signal of IMU as follows:
$\mathbf{x} = [\,{}^{S}\mathbf{y}_A^T \;\; {}^{S}\mathbf{y}_G^T \;\; {}^{S}\mathbf{y}_M^T\,]^T$  (1)
Here, $\mathbf{y}_A$, $\mathbf{y}_G$, and $\mathbf{y}_M$ are the three-dimensional signal vectors of the accelerometer, gyroscope, and magnetometer, respectively. The superscript $S$ indicates that the corresponding vector is expressed in the sensor coordinate system, and the superscript $T$ denotes the transpose. At each sampling time, the input vector is transformed into a 300-dimensional state vector by a two-layer gated recurrent unit (GRU) with 300 neurons per layer. The GRU, a type of RNN, was developed to deal with the long-term dependency problem of RNNs [31]. Finally, for dimensionality reduction, the 300-dimensional state vector is transformed into a 4-dimensional vector through a linear layer (equivalent to a 1-D convolution with kernel size 1). To ensure that the output of the network is a unit quaternion representing a 3D orientation, the 4-dimensional vector is normalized inside the network. The overall architecture of the RNN is visualized in Figure 1.
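For concreteness, this architecture can be sketched in PyTorch (the framework underlying the fastai implementation mentioned below); the layer sizes follow the text, but the class and variable names are illustrative and this is not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrientationRNN(nn.Module):
    """Sketch of the orientation estimator described above: a two-layer GRU with
    300 neurons per layer, a linear output layer, and unit-quaternion normalization."""
    def __init__(self, input_dim=9, hidden_dim=300, out_dim=4):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, num_layers=2, batch_first=True)
        self.linear = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, h0=None):
        states, hn = self.gru(x, h0)                   # (batch, seq, 300) state vectors
        q = F.normalize(self.linear(states), dim=-1)   # unit quaternions (batch, seq, 4)
        return q, hn                                   # hn is carried over in truncated BPTT

model = OrientationRNN()
x = torch.randn(8, 200, 9)   # a batch of 9-axis IMU sequences, sequence length 200
q, hn = model(x)             # q: (8, 200, 4) unit quaternions
```

The explicit normalization inside the network guarantees a valid unit quaternion at every time step regardless of the raw linear-layer output.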
In the training process, the NN minimizes the error between the estimated quaternion and the reference quaternion. At an arbitrary sampling time $t$, the error quaternion $q_e(t)$ between the quaternion estimated by the network, $\hat{q}(t)$, and the reference orientation quaternion $q_{SI}^{ref}(t)$, which represents the sensor coordinate system $S$ with respect to the inertial coordinate system $I$, can be expressed as follows:
$q_e(t) = q_{SI}^{ref}(t) \otimes \hat{q}(t)^{-1} = [\,q_w \;\; q_x \;\; q_y \;\; q_z\,]^T$  (2)
Here, $\otimes$ denotes the quaternion product, $q_w$ is the scalar part of the quaternion, and $[\,q_x \;\; q_y \;\; q_z\,]^T$ is the vector part. Given $q_e(t)$ at an arbitrary sampling time, the angle between the two quaternions (i.e., the error) is determined by the scalar part of $q_e(t)$ as follows [32]:
$\theta_e(t) = 2 \arccos(q_w)$  (3)
Using the scalar value $\theta_e(t)$ as the error term, the loss function for training was set to the mean squared error (MSE).
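A minimal NumPy sketch of this loss, assuming w-first unit quaternions; taking the absolute value of the scalar part (a common safeguard against the quaternion double cover, not stated explicitly in the text) is the only addition:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of w-first quaternions along the last axis."""
    pw, px, py, pz = np.moveaxis(p, -1, 0)
    qw, qx, qy, qz = np.moveaxis(q, -1, 0)
    return np.stack([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ], axis=-1)

def quat_conj(q):
    """Conjugate (= inverse for unit quaternions)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def orientation_loss(q_ref, q_est):
    """MSE over the error angle theta_e = 2*arccos(|q_w|) of q_ref ⊗ q_est^{-1}."""
    q_err = quat_mul(q_ref, quat_conj(q_est))
    w = np.clip(np.abs(q_err[..., 0]), -1.0, 1.0)   # clip guards arccos domain
    theta = 2.0 * np.arccos(w)
    return np.mean(theta**2)
```

For identical quaternions the error angle is zero; a 90° relative rotation yields an error of π/2 rad per sample before squaring.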
The following algorithms were used for network training. If an RNN backpropagates through a long time-series sequence (i.e., a large number of samples) to update its weights, vanishing or exploding gradients occur. Therefore, truncated backpropagation through time [33] was used to address this problem. This algorithm splits a long sequence into mini-batches of short sequences and carries the last hidden state over between mini-batches. The optimization algorithm used for training was Ranger [34], which combines two optimization algorithms: RAdam and Lookahead. We used two algorithms for tuning the learning rate: the one-cycle learning rate policy [35] for fast convergence of the loss function, and the learning rate finder [36] for selecting the optimal maximum learning rate. Network implementation and training were performed in the Google Colab environment using the fastai v2 API [37] based on PyTorch.
We set three hyperparameters for training: the sequence length for truncated backpropagation, set to 200; the number of epochs, set to 300; and the batch size, set to 64. The learning rate was determined through the learning rate tuning algorithms described above.
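The truncated-BPTT loop can be sketched as follows; for brevity, plain Adam and a plain MSE on the raw output stand in for Ranger, the one-cycle schedule, and the error-angle loss, and the data are synthetic:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
gru = nn.GRU(9, 300, num_layers=2, batch_first=True)
head = nn.Linear(300, 4)
opt = torch.optim.Adam(list(gru.parameters()) + list(head.parameters()), lr=1e-3)

seq_len = 200                                  # truncation length from the text
data = torch.randn(64, 600, 9)                 # batch of 64 long sequences
target = torch.randn(64, 600, 4)

h = None
for start in range(0, data.size(1), seq_len):
    x = data[:, start:start + seq_len]
    y = target[:, start:start + seq_len]
    states, h = gru(x, h)                      # reuse hidden state across chunks
    loss = ((head(states) - y) ** 2).mean()    # placeholder for the error-angle MSE
    opt.zero_grad()
    loss.backward()
    opt.step()
    h = h.detach()                             # keep the state, cut the gradient graph
```

The `detach()` call is what makes the backpropagation "truncated": the hidden state is preserved between mini-batches, but gradients never flow across the chunk boundary.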

2.2. Data Augmentation Techniques

To improve the estimation performance of the 3D orientation estimation model and its robustness in various environments, we introduced three augmentation techniques that can be applied to IMU-based orientation data. The augmentation technique transforms the original data to generate virtual data with characteristics identical to those of the original data.
The first data augmentation technique used to increase the size of a dataset is rotation augmentation. In this approach, augmentation is performed by virtually rotating the nine-axis IMU signals (i.e., $\mathbf{y}_A$, $\mathbf{y}_G$, and $\mathbf{y}_M$), which form the input vector of the network, and the reference orientation $q_{SI}^{ref}$, which is used as the target value. For the virtual rotation, we randomly generated a unit quaternion $q_{SS'}$ representing the relative orientation of the virtual sensor coordinate system $S'$ with respect to the actual sensor coordinate system $S$. Each 3-axis sensor signal is rotated through this relative orientation quaternion and expressed with respect to the virtual sensor coordinate system $S'$ as follows:
${}^{S'}\mathbf{y} = q_{SS'}^{-1} \otimes {}^{S}\mathbf{y} \otimes q_{SS'}$  (4)
Using (4), all three signal vectors expressed in the actual sensor coordinate system are re-expressed in the virtual sensor coordinate system. Therefore, the input vector generated through rotation augmentation is $[\,{}^{S'}\mathbf{y}_A^T \;\; {}^{S'}\mathbf{y}_G^T \;\; {}^{S'}\mathbf{y}_M^T\,]^T$. In addition, through the virtual rotation, the reference orientation quaternion is transformed into the reference quaternion representing the virtual sensor coordinate system with respect to the inertial coordinate system as follows:
$q_{S'I}^{ref} = q_{SI}^{ref} \otimes q_{SS'}$  (5)
An arbitrary unit quaternion for the virtual rotation was randomly generated for each experimental trial; therefore, each trial has a different virtual rotation. Rotation augmentation is equivalent to tilting the original sensor orientation by an arbitrary constant angle; that is, it has the effect of attaching the sensor to the rigid body at the same location but in a different orientation. Unlike the other augmentation techniques, rotation augmentation transforms the target data as well as the input data. Therefore, when data are transformed through rotation augmentation, virtual data representing a motion different from the original are obtained.
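The virtual-rotation step can be sketched as follows, assuming w-first quaternions, the Hamilton product, and per-trial signals of shape (N, 3); all function names are illustrative:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of w-first quaternions along the last axis."""
    pw, px, py, pz = np.moveaxis(p, -1, 0)
    qw, qx, qy, qz = np.moveaxis(q, -1, 0)
    return np.stack([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ], axis=-1)

def quat_conj(q):
    """Conjugate (= inverse for unit quaternions)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def random_unit_quaternion(rng):
    q = rng.normal(size=4)
    return q / np.linalg.norm(q)

def rotate_vectors(v, q):
    """Re-express (N, 3) signals in the virtual frame: q^{-1} ⊗ [0, v] ⊗ q."""
    vq = np.concatenate([np.zeros((len(v), 1)), v], axis=1)  # pure quaternion [0, v]
    out = quat_mul(quat_mul(quat_conj(q)[None], vq), q[None])
    return out[:, 1:]

def rotation_augment(acc, gyr, mag, q_ref, rng):
    """Generate one virtually rotated copy of a trial (signals and reference)."""
    q_ss = random_unit_quaternion(rng)        # virtual frame S' w.r.t. sensor frame S
    return (rotate_vectors(acc, q_ss),
            rotate_vectors(gyr, q_ss),
            rotate_vectors(mag, q_ss),
            quat_mul(q_ref, q_ss[None]))      # reference rotated by the same quaternion
```

Because the same random quaternion is applied to all three signals and the reference, the rotated trial remains physically consistent while appearing as a new motion.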
The second data augmentation technique is bias augmentation, which adds an arbitrary bias to the gyroscope signal. The basic process of estimating 3D orientation is strap-down integration of the gyroscope signal. However, when the gyroscope signal is integrated, its bias is integrated as well, causing unbounded orientation drift errors. Therefore, to achieve robust performance against gyroscope bias, data augmentation was performed by adding an arbitrary bias to the gyroscope signal. A randomly generated three-dimensional constant vector $\mathbf{b}$, corresponding to the arbitrary bias, is added to the gyroscope signal as follows:
${}^{S}\mathbf{y}_{G,biased} = {}^{S}\mathbf{y}_G + \mathbf{b}$  (6)
To apply the virtual gyroscope bias differently to all the experimental data, a constant bias vector was randomly generated for each trial. In bias augmentation, the only difference between the original data and the virtually generated data is the gyroscope signal.
The measurement noise in each sensor signal is another factor that increases the estimation error. Therefore, the last augmentation technique is noise augmentation, which adds virtual measurement noise to the sensor signals. Assuming that the noise of each sensor signal is white Gaussian noise, virtual Gaussian noise is added to the accelerometer, gyroscope, and magnetometer signals as follows:
$\mathbf{y}_{A,noise} = \mathbf{y}_A + \mathbf{n}_A$  (7)
$\mathbf{y}_{G,noise} = \mathbf{y}_G + \mathbf{n}_G$  (8)
$\mathbf{y}_{M,noise} = \mathbf{y}_M + \mathbf{n}_M$  (9)
Here, $\mathbf{n}_A$, $\mathbf{n}_G$, and $\mathbf{n}_M$ are the virtual triaxial Gaussian noises of the accelerometer, gyroscope, and magnetometer, respectively, with zero mean and arbitrary standard deviations. Because the noise levels (i.e., the standard deviations of the noise) of the accelerometer, gyroscope, and magnetometer differ, we generated virtual Gaussian noise according to the noise level of each sensor. As with bias augmentation, the standard deviation of the virtual noise was randomly generated for each trial.
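Bias and noise augmentation can be sketched as follows; the default bias standard deviation (0.5 deg/s) and noise levels (0.02 m/s², 0.15 deg/s, 0.004 a.u.) anticipate the values measured in Section 3.2, and the function names are illustrative:

```python
import numpy as np

def bias_augment(gyr, rng, bias_std=0.5):
    """Add one random constant bias vector (deg/s) to the whole gyroscope trial."""
    b = rng.normal(0.0, bias_std, size=3)     # per-trial constant, one value per axis
    return gyr + b

def noise_augment(acc, gyr, mag, rng, noise_levels=(0.02, 0.15, 0.004)):
    """Add zero-mean white Gaussian noise; the per-axis noise std is itself drawn
    per trial from a zero-mean Gaussian whose std is the sensor's noise level."""
    out = []
    for sig, level in zip((acc, gyr, mag), noise_levels):
        std = np.abs(rng.normal(0.0, level, size=3))        # per-trial noise std
        out.append(sig + rng.normal(0.0, std, size=sig.shape))
    return tuple(out)
```

Note the key distinction: the bias is a single constant offset for the entire trial, whereas the noise is redrawn at every sample.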
These three data augmentation techniques are applied to the training dataset, and we then investigate the effect of each technique on the 3D orientation estimation performance of the RNN.

3. Experiment and Training Scenario

3.1. Experiment

An experiment was conducted to acquire a large dataset including the nine-axis IMU signal and reference orientation data. Figure 2 shows the experimental environment. A nine-axis IMU module, the MTw (Xsens Technologies B.V., Enschede, The Netherlands), consisting of an accelerometer, a gyroscope, and a magnetometer, and an optical camera system, the OptiTrack Flex 13 (NaturalPoint, Corvallis, OR, USA), were used for the experiment. Both systems were sampled at 100 Hz for data acquisition. The IMU sensor was attached to a rigid triangular ruler. The local coordinate system of the rigid body can be constructed from the three markers attached to its vertices, and this coordinate system served as the reference orientation of the sensor. However, the relative orientation between the actual coordinate system of the sensor and the local coordinate system of the rigid body obtained by the optical camera may increase the error when evaluating the accuracy of the orientation estimation. To align the two coordinate systems, the quaternion-based local frame alignment method proposed in [38] was used.
The experiment was conducted by randomly shaking the sensor-attached rigid body by hand. Each trial lasted approximately three minutes, with static periods of 20 s at the beginning and 10 s at the end; that is, each trial contains approximately 18,000 samples. To train the RNN model on various types of motion, the experiments followed several criteria to ensure that the dataset included a wide range of motion characteristics. The trials can be divided according to two criteria. The first limits the motion of the sensor and consists of the following three conditions:
  • Rotation: only rotation is performed while maintaining the position of the sensor as much as possible.
  • Translation: only translation is performed while maintaining the orientation of the sensor as much as possible.
  • Combined: rotation and translation are performed randomly.
The second criterion is the speed of motion. The experiment was conducted according to these criteria by dividing it into fast and slow conditions.
To train and verify the RNN model, a dataset containing a large amount of experimental data was required. Therefore, the three-minute trial was repeated under each condition, yielding a dataset of 123 trials. The 123 trials were divided into a training dataset for network training and a test dataset for verifying the trained model: 31 trials were used for training, and the remaining 92 were used for testing.

3.2. Training Scenario

We evaluated the gyroscope bias and noise of the experimental dataset to apply appropriate levels of bias and noise augmentation. The bias and noise were measured during the static periods of all trials. The mean magnitude of the gyroscope bias over all trials was 0.28 deg/s. For bias augmentation, the constant bias vector $\mathbf{b}$ was randomly generated from a Gaussian distribution with zero mean and a standard deviation of 0.5 deg/s so that the network would be robust to larger gyroscope biases. Over all trials, the mean magnitudes of the standard deviations of the accelerometer, gyroscope, and magnetometer noise (i.e., the noise levels) were 0.02 m/s², 0.15 deg/s, and 0.004 a.u. (arbitrary units), respectively. Virtual noise was added to each sensor signal according to that sensor's noise level: for each sensor, we randomly generated a three-dimensional constant vector from a Gaussian distribution with zero mean and a standard deviation equal to the sensor's noise level, and this vector was then used as the standard deviation of the virtual Gaussian noises $\mathbf{n}_A$, $\mathbf{n}_G$, and $\mathbf{n}_M$.
To analyze the effects of the three augmentation techniques on network-based 3D orientation estimation in various aspects, the RNN was trained with a training dataset where the three augmentation techniques were applied individually or in combination. The training dataset, in which data augmentation was applied, included an original dataset and an augmented dataset. Therefore, the size of the training dataset was doubled compared with that of the original dataset. Seven models were trained with the training dataset created using various combinations of the three data augmentation techniques, as follows (see Figure 3):
  • Rotation: An RNN model trained with a training dataset in which only rotation augmentation was applied.
  • Bias: An RNN model trained with a training dataset in which only bias augmentation was applied.
  • Noise: An RNN model trained with a training dataset in which only noise augmentation was applied.
  • Rotation and Bias: An RNN model trained with a training dataset in which both rotation and bias augmentation were simultaneously applied.
  • Rotation and Noise: An RNN model trained with a training dataset in which both rotation and noise augmentation were simultaneously applied.
  • Bias and Noise: An RNN model trained with a training dataset in which bias and noise augmentation were simultaneously applied.
  • All: An RNN model trained with a training dataset in which all three augmentations were simultaneously applied.
In the training process of each model, all the training algorithms were used identically, and only the training dataset was set differently. In addition, the number of epochs, which refers to the number of cycles through the entire training dataset, was set to 300. Figure 4 shows the overall flowchart of the network training and verification for the analysis of the effects of data augmentation. Owing to random factors, such as weight initialization, the performance of the model may be different even when training is performed with the same training parameters. Therefore, the training process of each model was repeated five times, and the mean of the five average root mean square error (RMSE) values over all the test data was used for performance comparison.

4. Results and Discussion

To analyze the effects of the data augmentation technique on the orientation estimation performance, each RNN model trained with the seven training datasets was evaluated with a test dataset that was not experienced during the training process. The performance of each model was compared and analyzed using the mean of the RMSEs over all test data. The 3D orientation can be divided into an attitude representing the inclination angle for the gravitational direction and a heading representing an azimuth angle for the direction of the magnetic field of the Earth. Because the two components can be estimated independently, their estimation performance was evaluated independently using the method introduced in [32].
Table 1 shows the orientation estimation performance of the seven network models trained with different augmented training datasets for the test dataset. To quantitatively evaluate the improvement in estimation performance according to each data augmentation technique, the model trained using the augmented training dataset was compared to a model trained using only the original dataset. The estimation performance improvement rate is the average of the performance improvement rates of the attitude and heading angles. The estimation performance of the network model trained with the training dataset containing the augmented data was significantly improved from a minimum of 11.4% to a maximum of 35.2% compared to the RNN model trained using only the original dataset. In addition, the improvement in estimation performance through data augmentation showed an effect on both the attitude and heading angles. That is, all augmentation techniques improved the performance of the orientation-estimation RNN.
We evaluated the effect of each augmentation technique on the estimation performance. When comparing the three models trained on a training dataset with only a single augmentation technique applied, the average improvement rate of the model trained using rotation augmentation was the highest at 30.4%, and the lowest improvement was 11.4%, which was that of the model trained using noise augmentation. In addition, when comparing the three RNN models that simultaneously applied the two augmentation techniques, the improvement in the estimation performance of the two RNN models (28.5% and 27.0%) trained with the rotation augmentation technique was superior to that of the model trained using bias and noise augmentation (21.2%). The improvement rate of the RNN model trained using all three augmentation techniques was 35.2%, showing the best estimation performance among all seven network models. These results indicate that none of the three augmentation techniques adversely affected network training. In addition, these results confirm that training the neural network by applying rotational augmentation has the greatest effect on the improvement of the 3D orientation estimation performance of the network.
One of the reasons for increasing the IMU and reference orientation data through the data augmentation technique is to ensure robust performance against gyroscope bias or sensor measurement noise. Therefore, the three augmentation techniques were individually applied to the test dataset to evaluate the effect of each technique on the three augmentation situations.
Table 2 shows the estimation performance of each RNN model when three augmentation techniques were applied to the test data: (a) when virtual gyroscope bias was added to the test data, (b) when virtual noise was added to the test data, (c) when the test data were virtually rotated, (d) when unseen data from [39] which are openly available were applied.
In the virtual bias or noise-applied test dataset (see (a) and (b) in Table 2), even if virtual gyroscope bias or measurement noise is added to the original test data, the performance was the same as the estimated result over the original test dataset (see Table 1). That is, the estimation performance of the network did not degrade because of bias or noise. In addition, note that the estimation performance of the model trained with bias or noise augmentation did not improve over the model trained without bias or noise augmentation on the virtual bias or noise-applied test dataset. In the case of mathematical modelling-based filter algorithms, gyroscope bias and measurement noise have a significant effect on performance degradation. However, the above results show that in the case of NNs, even if the network is trained with only the gyroscope bias and noise included in the original data, it can sufficiently maintain a robust performance against larger bias and noise.
In the virtually rotated test dataset (see (c) in Table 2), the estimation performance of all models was significantly degraded. In particular, the original model showed the highest error, exceeding 30° for both attitude and heading. The models trained without rotation augmentation performed at a level where orientation estimation was effectively impossible, with attitude errors above 25° and heading errors above 31°. Even the models trained with rotation augmentation degraded significantly, with estimation errors above 10°. The reasons for this degradation on virtually rotated data are as follows. To obtain the training and test datasets, all experiments included a static state for the initial 20 s and the last 10 s. During the static state, the sensor was placed on a table with its z-axis pointing upward. The models trained without rotation augmentation were therefore trained on biased data in which the z-axis of the sensor pointed upward whenever the sensor was static. In the virtually rotated test data, however, the sensor assumes an arbitrary orientation even in the static state, so the estimation performance of models trained on the biased experimental data drops significantly; in other words, the training process leads to overfitting. The models trained with rotation augmentation also showed degraded performance because half of their training dataset was still original data.
With regard to the results shown in (d) in Table 2, an openly available dataset from [39] was used in order to examine the tendency of the effect of data augmentation on unseen data. In terms of estimation accuracy, the case for the unseen dataset produced worse results than the other cases shown in (a)–(c) in Table 2. However, since this paper deals not with estimation performance but instead with the effects of data augmentation on the estimation performance, an in-depth discussion of estimation performance itself is out of the scope of this paper.
Most importantly, the evaluation results in Table 1 and Table 2 show that, among a total of seven augmentation cases (see Figure 3), four cases including ‘rotation’ (i.e., rotation only, rotation and bias, rotation and noise, and rotation and bias and noise) occupy the top four. Therefore, it may be concluded that the augmentation effect of rotation is overwhelming compared to those of bias and noise. Furthermore, it can be observed that, among the four cases including ‘rotation’, the case of applying ‘rotation and bias and noise’ shows superior performance over the other three cases. This indicates that, no matter how overwhelming the effect of rotation is, augmentation by adding bias and noise is (even a little) better than augmentation only by rotation.
To analyze the effect of rotation augmentation in more detail, we trained the RNN while gradually increasing the size of the training dataset, applying rotation augmentation to the training data multiple times and evaluating the model performance as a function of the number of augmentations. Rotation augmentation was applied to the training dataset incrementally up to nine times; the dataset with nine rotation augmentations was ten times the size of the original training dataset. Each trained model was evaluated on the same virtually rotated test data as listed in (c) in Table 2. Figure 5 shows the estimation performance according to the number of rotation augmentations applied to the training dataset and the improvement rate relative to the model trained on the original dataset. As the number of rotation augmentations increased, the estimation performance improved further. The first application of rotation augmentation produced the largest gain in improvement rate (59.67%). As the number of augmentations grew, the marginal improvement decreased, falling below 1% from the seventh application onward. These results indicate that the performance gains obtainable through data augmentation in the orientation estimation RNN are bounded. The model trained with rotation augmentation applied nine times achieved average RMSEs of 5.01° for attitude and 7.86° for heading, outperforming all the models in Table 1, which were evaluated on the original test data. These results indicate that proper use of the rotation augmentation technique is crucial to the estimation performance of a 3D orientation estimation RNN.
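The ‘Avg. Improvement’ values in the tables appear to be the mean of the per-component (attitude and heading) RMSE reduction rates rather than the reduction of the mean RMSE; a quick check against (c) in Table 2 reproduces the 59.67% quoted above for the rotation-only model. The helper below is a hypothetical reconstruction of that computation, not code from the paper.

```python
def avg_improvement(orig, aug):
    """Mean of the per-component RMSE reduction rates, in percent.

    orig and aug are (attitude, heading) RMSE pairs for the baseline and
    the augmented model, respectively.
    """
    return 100.0 * sum((o - a) / o for o, a in zip(orig, aug)) / len(orig)


# Table 2(c): original vs. rotation-only model on virtually rotated test data.
rate = avg_improvement(orig=(30.43, 38.57), aug=(13.00, 14.63))
print(round(rate, 2))  # 59.67, matching the 59.67% quoted in the text
```

Averaging the per-component rates weights attitude and heading equally even though their absolute RMSEs differ, which is consistent with the tabulated percentages.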

5. Conclusions

This study analyzed the effects of three data augmentation techniques on nine-axis IMU-based 3D orientation estimation performance in an RNN. The three techniques are rotation augmentation, which virtually rotates the IMU signal and reference orientation; bias augmentation, which adds an arbitrary gyroscope bias; and noise augmentation, which adds virtual measurement noise to each sensor. To investigate the effect of each augmentation technique on estimation performance, seven training datasets were created by combining the three techniques, and the RNN model proposed in [23] was trained on each. The validation results showed that, among the three techniques, rotation augmentation had the greatest effect on improving the estimation performance of the orientation estimation RNN. In addition, applying rotation augmentation can significantly improve the performance of the neural network.
As the main contribution of this study, we quantitatively investigated the improvement in the estimation accuracy of network-based 3D orientation estimation achieved through data augmentation. To the best of our knowledge, the effects of data augmentation techniques on the estimation performance of IMU-based orientation estimation networks had not previously been studied. In this regard, the analysis presented in this paper can provide insights for developing robust IMU-based orientation estimation networks. In future work, we aim to develop an IMU-based human motion tracking system built on NN-based orientation estimation, and to investigate the difference between conventional filter-based estimation and NN-based estimation in terms of motion tracking accuracy.

Author Contributions

Conceptualization, J.K.L. and J.S.C.; methodology, J.S.C.; validation, J.K.L. and J.S.C.; formal analysis, J.S.C.; investigation, J.S.C.; data curation, J.S.C.; writing—original draft preparation, J.S.C.; writing—review and editing, J.K.L.; supervision, J.K.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dong, X.; Gao, Y.; Guo, J.; Zuo, S.; Xiang, J.; Li, D.; Tu, Z. An integrated UWB-IMU-vision framework for autonomous approaching and landing of UAVs. Aerospace 2022, 9, 797. [Google Scholar] [CrossRef]
  2. Chen, P.; Dang, Y.; Liang, R.; Zhu, W.; He, X. Real-time object tracking on a drone with multi-inertial sensing data. IEEE Trans. Intell. Transp. Syst. 2017, 19, 131–139. [Google Scholar] [CrossRef]
  3. Araguás, G.; Paz, C.; Gaydou, D.; Paina, G.P. Quaternion-based orientation estimation fusing a camera and inertial sensors for a hovering UAV. J. Intell. Robot. Syst. 2015, 77, 37–53. [Google Scholar] [CrossRef]
  4. Li, S.; Jiang, J.; Ruppel, P.; Liang, H.; Ma, X.; Hendrich, N.; Sun, F.; Zhang, J. A Mobile Robot Hand-arm Teleoperation System by Vision and IMU. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 10900–10906. [Google Scholar]
  5. Han, Y.C.; Wong, K.I.; Murray, I. Gait phase detection for normal and abnormal gaits using IMU. IEEE Sens. J. 2019, 19, 3439–3448. [Google Scholar] [CrossRef]
  6. Lee, C.J.; Lee, J.K. Wearable IMMU-based relative position estimation between body segments via time-varying segment-to-joint vectors. Sensors 2022, 22, 2149. [Google Scholar] [CrossRef] [PubMed]
  7. Madgwick, S.O.H.; Harrison, A.J.L.; Vaidyanathan, R. Estimation of IMU and MARG Orientation Using a Gradient Descent Algorithm. In Proceedings of the 2011 IEEE International Conference on Rehabilitation Robotics, Zurich, Switzerland, 29 June–1 July 2011; pp. 1–7. [Google Scholar]
  8. Lee, J.K. A parallel attitude-heading Kalman filter without state-augmentation of model-based disturbance components. IEEE Trans. Instrum. Meas. 2019, 68, 2668–2670. [Google Scholar] [CrossRef]
  9. Sabatini, A.M. Quaternion-based extended Kalman filter for determining orientation by inertial and magnetic sensing. IEEE Trans. Biomed. Eng. 2006, 53, 1346–1356. [Google Scholar] [CrossRef] [PubMed]
  10. Valenti, R.G.; Dryanovski, I.; Xiao, J. A linear Kalman filter for MARG orientation estimation using the algebraic quaternion algorithm. IEEE Trans. Instrum. Meas. 2015, 65, 467–481. [Google Scholar] [CrossRef]
  11. Chan, T.H.; Jia, K.; Gao, S.; Lu, J.; Zeng, Z.; Ma, Y. PCANet: A simple deep learning baseline for image classification? IEEE Trans. Image Process. 2015, 24, 5017–5032. [Google Scholar] [CrossRef]
  12. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef]
  13. Zhou, C.; Sun, C.; Liu, Z.; Lau, F. A C-LSTM neural network for text classification. arXiv 2015, arXiv:1511.08630. [Google Scholar]
  14. Singh, S.P.; Kumar, A.; Darbari, H.; Singh, L.; Rastogi, A.; Jain, S. Machine translation using deep learning: An overview. In Proceedings of the 2017 International Conference on Computer, Communications and Electronics (Comptelix), Jaipur, India, 1–2 July 2017; pp. 162–167. [Google Scholar]
  15. Li, R.; Fu, C.; Yi, W.; Yi, X. Calib-Net: Calibrating the low-cost IMU via deep convolutional neural network. Front. Robot. AI 2022, 8, 772583. [Google Scholar] [CrossRef] [PubMed]
  16. Jiang, C.; Chen, S.; Chen, Y.; Zhang, B.; Feng, Z.; Zhou, H.; Bo, Y. A MEMS IMU de-noising method using long short term memory recurrent neural networks (LSTM-RNN). Sensors 2018, 18, 3470. [Google Scholar] [CrossRef]
  17. Chiang, K.W.; Chang, H.W.; Li, C.Y.; Huang, Y.W. An artificial neural network embedded position and orientation determination algorithm for low cost MEMS INS/GPS integrated sensors. Sensors 2009, 9, 2586–2610. [Google Scholar] [CrossRef]
  18. Sun, S.; Melamed, D.; Kitani, K. IDOL: Inertial deep orientation-estimation and localization. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 2–9 February 2021; pp. 6128–6137. [Google Scholar]
  19. Narkhede, P.; Walambe, R.; Poddar, S.; Kotecha, K. Incremental learning of LSTM framework for sensor fusion in attitude estimation. PeerJ Comput. Sci. 2021, 7, e662. [Google Scholar] [CrossRef]
  20. Esfahani, M.A.; Wang, H.; Wu, K.; Yuan, S. OriNet: Robust 3-D orientation estimation with a single particular IMU. IEEE Robot. Autom. Lett. 2019, 5, 399–406. [Google Scholar] [CrossRef]
  21. Weber, D.; Gühmann, C.; Seel, T. RIANN—A robust neural network outperforms attitude estimation filters. AI 2021, 2, 444–463. [Google Scholar] [CrossRef]
  22. Kim, W.Y.; Seo, H.I.; Seo, D.H. Nine-axis IMU-based extended inertial odometry neural network. Expert Syst. Appl. 2021, 178, 115075. [Google Scholar] [CrossRef]
  23. Choi, J.S.; Lee, J.K. Recurrent neural network for nine-axis IMU-based orientation estimation: 3D orientation estimation performance in disturbed conditions. J. Inst. Contr. Robot. Syst. 2022, 18, 123–493. (In Korean) [Google Scholar]
  24. Perez, L.; Wang, J. The effectiveness of data augmentation in image classification using deep learning. arXiv 2017, arXiv:1712.04621. [Google Scholar]
  25. Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 1–48. [Google Scholar] [CrossRef]
  26. Feng, S.Y.; Gangal, V.; Wei, J.; Chandar, S.; Vosoughi, S.; Mitamura, T.; Hovy, E. A survey of data augmentation approaches for NLP. arXiv 2021, arXiv:2105.03075. [Google Scholar]
  27. Wei, J.; Zou, K. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv 2019, arXiv:1901.11196. [Google Scholar]
  28. Tran, L.; Choi, D. Data augmentation for inertial sensor-based gait deep neural network. IEEE Access 2020, 8, 12364–12378. [Google Scholar] [CrossRef]
  29. Li, C.; Tokgoz, K.K.; Fukawa, M.; Bartels, J.; Ohashi, T.; Takeda, K.I.; Ito, H. Data augmentation for inertial sensor data in CNNs for cattle behavior classification. IEEE Sens. Lett. 2021, 5, 1–4. [Google Scholar] [CrossRef]
  30. Jaafer, A.; Nilsson, G.; Como, G. Data augmentation of IMU signals and evaluation via a semi-supervised classification of driving behavior. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020; pp. 1–6. [Google Scholar]
  31. Cho, K.; Van Merriënboer, B.; Bahdanau, D.; Bengio, Y. On the properties of neural machine translation: Encoder-decoder approaches. arXiv 2014, arXiv:1409.1259. [Google Scholar]
  32. Laidig, D.; Caruso, M.; Cereatti, A.; Seel, T. BROAD—A Benchmark for Robust Inertial Orientation Estimation. Data 2021, 6, 72. [Google Scholar] [CrossRef]
  33. Jaeger, H. A Tutorial on Training Recurrent Neural Networks, Covering BPPT, RTRL, EKF and the “Echo State Network” Approach; GMD Report; German National Research Center for Information Technology: St. Augustin, Germany, 2002. [Google Scholar]
  34. Wright, L.; Demeure, N. Ranger21: A synergistic deep learning optimizer. arXiv 2021, arXiv:2106.13731. [Google Scholar]
  35. Smith, L.N.; Topin, N. Super-convergence: Very fast training of neural networks using large learning rates. arXiv 2018, arXiv:1708.07120. [Google Scholar]
  36. Smith, L.N. Cyclical learning rates for training neural networks. In Proceedings of the 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA, 24–31 March 2017; pp. 464–472. [Google Scholar]
  37. Howard, J.; Gugger, S. Fastai: A layered API for deep learning. Information 2020, 11, 108. [Google Scholar] [CrossRef]
  38. Lee, J.K.; Jung, W.C. Quaternion-based local frame alignment between an inertial measurement unit and a motion capture system. Sensors 2018, 18, 4003. [Google Scholar] [CrossRef] [PubMed]
  39. Szczęsna, A.; Skurowski, P.; Pruszowski, P.; Pęszor, D.; Paszkuta, M.; Wojciechowski, K. Reference Data Set for Accuracy Evaluation of Orientation Estimation Algorithms for Inertial Motion Capture Systems. In Computer Vision and Graphics; Chmielewski, L.J., Datta, A., Kozera, R., Wojciechowski, K., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2016; pp. 509–520. [Google Scholar]
Figure 1. Architecture of the recurrent neural network for 3D orientation estimation.
Figure 2. Experimental setup.
Figure 3. Various data augmentation schemes for training data.
Figure 4. The overall flowchart of network training and evaluation.
Figure 5. Estimation performance (left: mean of RMSEs for attitude and heading, right: rate of averaged improvement) of virtually rotated test data according to the number of rotation augmentations.
Table 1. Estimation performance (mean of RMSEs) for each RNN model over all the test data (unit: °).

Model                 Attitude   Heading   Avg. Improvement
Original              9.27       12.11     -
Rotation              6.59       8.26      30.4%
Bias                  7.32       9.49      21.3%
Noise                 7.98       11.03     11.4%
Rotation and Bias     6.63       8.65      28.5%
Rotation and Noise    6.60       9.05      27.0%
Bias and Noise        7.35       9.47      21.2%
All                   5.99       7.86      35.2%
Table 2. Estimation performance (mean of RMSEs) for each RNN model over three augmented test datasets (unit: °).

(a) Results for the test dataset with applied virtual gyroscope bias

Model                 Attitude   Heading   Avg. Improvement
Original              9.27       12.09     -
Rotation              6.60       8.25      30.3%
Bias                  7.33       9.47      21.3%
Noise                 7.98       11.02     11.4%
Rotation and Bias     6.63       8.64      28.5%
Rotation and Noise    6.60       9.04      27.0%
Bias and Noise        7.37       9.45      21.2%
All                   6.00       7.85      35.2%

(b) Results for the test dataset with applied virtual noise

Model                 Attitude   Heading   Avg. Improvement
Original              9.27       12.11     -
Rotation              6.59       8.26      30.4%
Bias                  7.32       9.49      21.3%
Noise                 7.98       11.03     11.4%
Rotation and Bias     6.63       8.65      28.5%
Rotation and Noise    6.60       9.05      27.0%
Bias and Noise        7.36       9.47      21.2%
All                   5.99       7.85      35.2%

(c) Results for the test dataset with applied virtual rotation

Model                 Attitude   Heading   Avg. Improvement
Original              30.43      38.57     -
Rotation              13.00      14.63     59.7%
Bias                  25.98      33.79     13.5%
Noise                 27.24      35.99     8.6%
Rotation and Bias     12.36      15.77     59.3%
Rotation and Noise    11.58      14.31     62.3%
Bias and Noise        25.81      31.95     16.2%
All                   11.60      14.45     62.2%

(d) Results for the dataset from [39]

Model                 Attitude   Heading   Avg. Improvement
Original              31.49      54.77     -
Rotation              26.40      31.24     29.6%
Bias                  30.90      45.16     9.71%
Noise                 33.72      50.82     0.05%
Rotation and Bias     24.77      32.69     30.8%
Rotation and Noise    23.74      34.71     30.6%
Bias and Noise        34.36      49.01     0.69%
All                   24.63      31.28     32.3%