Trajectory reconstruction for pipeline robots depends on strongly temporally correlated data. The gating mechanisms of LSTM accurately capture both long- and short-term temporal dependencies: they alleviate the gradient problems of traditional RNNs for modeling long sequences, adaptively suppress IMU noise, preserve historical trends during odometer slip, and accommodate non-stationary data caused by abrupt changes in pipe geometry, making LSTM particularly suitable for temporal modeling. Moreover, the inclusion of geometric constraints further enhances reconstruction accuracy. The following section provides a detailed description of the proposed GC-LSTM trajectory reconstruction method.
3.2. Data Processing Model Based on GC-LSTM
To fully leverage the complementary advantages of the pipeline geometric constraint model and the LSTM-based velocity prediction model, this study employs an Extended Kalman Filter (EKF) framework. The initial state and covariance of the EKF are configured as follows. The initial state is derived from sensor position and velocity data; when external references are unavailable, it is set to the zero vector, and the initial covariance matrix is assigned a relatively large value to account for uncertainty. The process noise covariance Q is designed from the acceleration measurement noise spectral density and is obtained by discretizing the constant-acceleration model, as shown in Equation (6). When no spectral-density data are available, it is estimated from the variance of acceleration samples. The observation noise covariance R is determined by the position measurement accuracy; if unknown, it is estimated online via the sample covariance of the innovation sequence and updated with smoothing to ensure numerical stability. To improve long-term accuracy, the acceleration bias is treated as an optional extended state and assumed to follow a random walk.
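For illustration, the sketch below (not the implementation used in this work) shows one common discretization of the constant-acceleration process noise from a spectral density q_a, together with a smoothed online update of the observation noise from the innovation sequence; the function names, the two-state-per-axis layout, and the smoothing factor are illustrative assumptions.

```python
import numpy as np

def process_noise_single_axis(dt, q_a):
    """One common discretization of the process noise for a
    [position, velocity] state driven by white acceleration noise
    with spectral density q_a (cf. Equation (6))."""
    return q_a * np.array([[dt**3 / 3.0, dt**2 / 2.0],
                           [dt**2 / 2.0, dt]])

def adapt_observation_noise(R_prev, innovation, alpha=0.05):
    """Smoothed online estimate of R from the sample covariance of
    the innovation sequence; alpha is an illustrative smoothing factor."""
    nu = np.atleast_1d(np.asarray(innovation, dtype=float))
    return (1.0 - alpha) * R_prev + alpha * np.outer(nu, nu)
```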
To extract attitude constraints from IMU data, this paper employs a sliding window technique. Let the sliding window size be $L$, corresponding to the time set $\{k-L+1,\dots,k\}$. The quaternion estimates obtained via the EKF are converted into a right-multiplicative rotation matrix, which is then transformed into the roll angle $\phi$, pitch angle $\theta$, and yaw angle $\psi$. Based on the changes in the Euler angles within the sliding window, the following attitude constraint rules are established.
If the pitch angle within the window satisfies $|\theta| < \varepsilon$, the robot is considered to be in a horizontal pipe section with a pitch angle of 0; if $|\theta - 90^{\circ}| < \varepsilon$, the robot is considered to be in a vertical pipe section with a pitch angle of 90 degrees; if the changes in pitch and yaw within the window satisfy $|\Delta\theta| < \varepsilon$ and $|\Delta\psi| < \varepsilon$, the robot is considered to be in a straight pipe section where both pitch and yaw angles should remain constant; if the yaw angle changes monotonically over $M$ consecutive samples with $|\Delta\psi| > \varepsilon$, the robot is considered to be passing through a standard elbow, and the yaw angle should increase by the bend angle of the elbow.
Here, $\varepsilon$ is the attitude angle threshold, and $M$ is determined by the sampling time and the robot's velocity.
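The rules above can be summarized, for illustration only, by the following sketch; the threshold value, the per-window range test, and the returned labels are assumptions rather than the exact logic of the proposed method.

```python
import numpy as np

def classify_pipe_segment(pitch_win, yaw_win, eps_deg=2.0):
    """Classify the pipe segment from windowed pitch/yaw angles (degrees).
    eps_deg plays the role of the attitude-angle threshold; its value
    and the per-window range test are illustrative."""
    pitch = np.asarray(pitch_win, dtype=float)
    yaw = np.asarray(yaw_win, dtype=float)
    if np.all(np.abs(pitch) < eps_deg):
        return "horizontal"   # constrain pitch to 0
    if np.all(np.abs(np.abs(pitch) - 90.0) < eps_deg):
        return "vertical"     # constrain pitch to +/-90 degrees
    if np.ptp(pitch) < eps_deg and np.ptp(yaw) < eps_deg:
        return "straight"     # hold pitch and yaw constant
    if np.ptp(yaw) > eps_deg:
        return "elbow"        # yaw should follow the elbow bend angle
    return "unclassified"
```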
By incorporating the aforementioned attitude constraints as supplementary measurement information into the state estimation algorithm, IMU attitude drift can be directly suppressed, attitude estimation errors corrected, and position integration errors indirectly mitigated. This further enhances trajectory spatial accuracy, accommodates complex pipeline geometries, reduces dynamic scene uncertainties, and ultimately improves trajectory reconstruction accuracy.
The attitude constraint module uses a sliding window to interpret pipeline geometry, generating targeted constraints that suppress IMU integration drift and noise and thereby yielding cleaner attitude and acceleration data. The LSTM module takes historical motion sequences as input to learn odometer velocity patterns, predicting velocity information to compensate for measurement gaps during odometer slippage or failure. Finally, the constraint-purified IMU data and the LSTM-predicted odometer velocity undergo multi-source fusion; combined with the state estimates output by the EKF, they jointly drive the trajectory generation module to obtain the precise operational trajectory of the pipeline robot. The established GC-LSTM network is shown in Figure 5.
Taking the speed-error sequence of the past $L$ moments as the input and the speed error at the current moment as the output, the LSTM network learns the error characteristics that arise when the odometer slips. Here $v$ denotes the speed, $e_k$ denotes the speed error at the current moment, and $e_{k-1},\dots,e_{k-L}$ denote the error values at the historical moments. When slipping is detected, the speed error is predicted and compensated using the speed errors of the preceding moments and the trained LSTM model; the prediction and compensation process during odometer slip is shown in Figure 6.
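For illustration, a minimal sketch of this compensation step is given below; the model interface, the sign convention of the error, and the array shapes are assumptions.

```python
import numpy as np

def compensate_slip(odom_speed, recent_errors, lstm_model):
    """Predict the current speed error from the previous L errors and add
    it to the odometer reading during detected slip. A Keras-style
    predict() interface and an additive error convention are assumed."""
    x = np.asarray(recent_errors, dtype=np.float32).reshape(1, -1, 1)  # (batch, L, 1)
    e_hat = lstm_model.predict(x, verbose=0).item()                    # predicted speed error
    return odom_speed + e_hat
```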
In the LSTM network, predicting the velocity error at the current moment requires the errors of the previous $L$ moments as the input sequence, where $L$ is the sequence length. The relevant parameters are established as in Equation (7), where $E$ is the original error sequence, $X$ is the set of vector samples constructed from $E$, $X_{\mathrm{train}}$ is the training set, and $X_{\mathrm{test}}$ is the test set.
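A minimal sketch of how such samples could be constructed is shown below; the sequence length of 15 matches the network description in this section, while the chronological split ratio is an illustrative assumption.

```python
import numpy as np

def build_samples(error_seq, seq_len=15, train_ratio=0.8):
    """Turn the error sequence into (window of seq_len errors, next error)
    pairs and split them chronologically into training and test sets.
    The split ratio is an illustrative choice."""
    e = np.asarray(error_seq, dtype=np.float32)
    X = np.stack([e[i:i + seq_len] for i in range(len(e) - seq_len)])
    y = e[seq_len:]
    n_train = int(train_ratio * len(X))
    return (X[:n_train], y[:n_train]), (X[n_train:], y[n_train:])
```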
The velocity prediction of the LSTM model is introduced as a compensating quantity in the measurement update, and the pipeline feature model is then fused within the EKF framework to correct the attitude. To balance the uncertainty of the feature model against that of the LSTM model, the fused velocity is defined as in Equation (8).
where $v_{\mathrm{EKF}}$ is the velocity predicted from the pipeline attitude features within the EKF framework, $v_{\mathrm{odo}}$ is the raw odometer velocity, $\hat{e}$ is the velocity error predicted by the LSTM, and $\alpha$ is the adaptive weight determined by the model covariances.
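One plausible form of such a covariance-weighted fusion is sketched below; the convex-combination structure and the weight definition are assumptions, and the exact expression is given by Equation (8).

```python
def fuse_velocity(v_ekf, v_odo, e_hat, var_ekf, var_lstm):
    """Covariance-weighted blend of the feature-model velocity and the
    LSTM-compensated odometer velocity; the weight gives more trust to
    the lower-variance source (assumed form, cf. Equation (8))."""
    alpha = var_lstm / (var_ekf + var_lstm)
    return alpha * v_ekf + (1.0 - alpha) * (v_odo + e_hat)
```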
On the basis of the fused velocity, the extended observation vector and its covariance matrix are shown in Equation (9).
where $\mathbf{z}_{\mathrm{IMU}}$ denotes the attitude angle and acceleration measurements of the IMU, $\mathbf{R}_{\mathrm{IMU}}$ is the corresponding measurement noise covariance, and $r_v$ is the noise covariance of the fused velocity observation.
The compensated velocity is used as an observation in the EKF measurement update, together with the corrections from the pipeline feature model, and the final output state is given in Equation (10).
where $H$ is the observation matrix.
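For illustration, a standard EKF measurement update with the extended observation is sketched below; the stacking order, the treatment of $H$, and the matrix shapes are assumptions consistent with, but not taken from, Equations (9) and (10).

```python
import numpy as np

def ekf_update(x, P, z_imu, R_imu, v_fused, r_v, H):
    """EKF measurement update with the extended observation
    [IMU attitude/acceleration; fused velocity]; stacking order and
    matrix shapes are assumptions (cf. Equations (9) and (10))."""
    z = np.append(np.asarray(z_imu, dtype=float), v_fused)  # extended observation vector
    R = np.zeros((len(z), len(z)))
    R[:-1, :-1] = R_imu                                      # IMU measurement noise
    R[-1, -1] = r_v                                          # fused-velocity noise
    y = z - H @ x                                            # innovation
    S = H @ P @ H.T + R                                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                           # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```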
During the operation of the pipeline robot, the above fusion process is executed in real time to correct the velocity and attitude online. The fused velocity and attitude information is finally utilized to obtain the pipeline trajectory data by integration as shown in Equation (11).
where $q$ is the output pose quaternion, $R(q)$ is the rotation matrix obtained from the quaternion transformation, and $\mathbf{u}$ is the forward unit vector of the robot body.
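For illustration, this integration step can be sketched as follows, assuming the position is advanced along the body forward axis rotated into the world frame by $R(q)$; the quaternion convention and the choice of forward axis are assumptions.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def integrate_trajectory(quats, speeds, dt, u_body=np.array([1.0, 0.0, 0.0])):
    """Accumulate positions by moving the fused speed along the body
    forward axis rotated into the world frame; u_body is an assumed
    forward unit vector."""
    p = np.zeros(3)
    trajectory = [p.copy()]
    for q, v in zip(quats, speeds):
        p = p + quat_to_rot(q) @ u_body * v * dt
        trajectory.append(p.copy())
    return np.array(trajectory)
```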
An LSTM network was employed to predict velocity errors. The network architecture features an input layer receiving a sequence of 15 historical velocity errors. An LSTM layer with 15 hidden units captures temporal dependencies, followed by a fully connected layer with Rectified Linear Unit (ReLU) activation functions. The final prediction is obtained through a linear output layer. The network was trained using the RMSprop optimizer with an initial learning rate of 0.05 and a batch size of 1000, employing mean squared error as the loss function. An early stopping mechanism was implemented to prevent overfitting. All input and output data underwent Z-score normalization based on statistics from the training set.
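For reference, a minimal Keras sketch matching this description is given below; the width of the fully connected layer, the early-stopping patience, and the epoch budget are not specified in the text and are assumed here. Predictions from the returned model are in normalized units and must be rescaled using the returned target statistics.

```python
import tensorflow as tf

def build_and_train(X_train, y_train, X_val, y_val):
    """LSTM(15 hidden units) -> fully connected ReLU layer -> linear output,
    trained with RMSprop (lr = 0.05), MSE loss, batch size 1000, and early
    stopping. Dense width, patience, and epoch budget are assumptions."""
    # Z-score normalization using training-set statistics (inputs and targets)
    mu_x, sd_x = X_train.mean(), X_train.std()
    mu_y, sd_y = y_train.mean(), y_train.std()
    nx = lambda a: (a - mu_x) / sd_x
    ny = lambda a: (a - mu_y) / sd_y

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(15),                      # inputs: (batch, 15, 1) error windows
        tf.keras.layers.Dense(15, activation="relu"),  # fully connected layer (width assumed)
        tf.keras.layers.Dense(1),                      # linear output layer
    ])
    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.05),
                  loss="mse")
    stop = tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
    model.fit(nx(X_train)[..., None], ny(y_train),
              validation_data=(nx(X_val)[..., None], ny(y_val)),
              epochs=200, batch_size=1000, callbacks=[stop], verbose=0)
    return model, (mu_x, sd_x, mu_y, sd_y)
```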