Article

Study on Centroid Height Prediction of Non-Rigid Vehicle Based on Deep Learning Combined Model

School of Mechanical Engineering and Automation, Wuhan Textile University, Wuhan 430200, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(18), 5692; https://doi.org/10.3390/s25185692
Submission received: 16 July 2025 / Revised: 25 August 2025 / Accepted: 8 September 2025 / Published: 12 September 2025
(This article belongs to the Topic Vehicle Dynamics and Control, 2nd Edition)

Abstract

The height of the center of gravity (ZCG) is a critical parameter for evaluating vehicle safety and performance. Systematic errors arise in ZCG measurement via the tilt-table test method due to unlocked suspension systems and variable sprung mass conditions, which compromise accuracy. To address this limitation, a CNN–LSTM–Attention model integrating convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and an attention mechanism is proposed. The CNN extracts spatial correlations among vehicle load transfer, suspension stiffness, and tilt angles. The LSTM captures temporal dependencies in tilt angle sequences, while the attention mechanism amplifies critical load-transfer features near the 0° region. Simulations of vehicles with unlocked suspension and variable sprung mass were conducted in Adams using tilt-table protocols. The CNN–LSTM–Attention model was trained on simulation data and validated with real-world tilt-test data under identical suspension conditions. Results demonstrate that the CNN–LSTM–Attention model achieves at least a 6.9% improvement in computational speed and at least a 0.1% reduction in prediction error compared to CNN, CNN-LSTM, and Transformer baselines, and exhibits valid predictive capability for ZCG at a 0° tilt angle. This approach provides a robust solution for ZCG measurement via the tilt-table test method, enhancing practical accuracy in vehicle dynamics parameter quantification.

1. Introduction

ZCG is a critical parameter for evaluating vehicle performance and safety, with its accuracy directly impacting engineering design decisions [1]. Current ZCG measurement methods include the axle-lift method, stabilized pendulum method, and tilt-table test [2]. While suitable for small vehicles, the axle-lift method and stabilized pendulum method exhibit limitations when applied to large vehicles such as semi-trailers or full trailers. In the conventional tilt-table method, the vehicle with locked suspension is mounted on a platform, and load transfer variations are recorded at left/right tilt angles of 6–12°. The ZCG at 0° is then approximated by arithmetic averaging of ZCG values within this angular range, introducing systematic errors due to angle-dependent extrapolation. Suspension lock procedures present operational complexities and safety risks, motivating research into unlocked-suspension measurement approaches [3]. When measuring ZCG without locking the suspension, tilt-induced deformation occurs, violating the rigid-body assumption essential for conventional methods [4]. Consequently, suspension deformation induces coordinate shifts in the longitudinal position of the center of gravity (XCG), the lateral position of the center of gravity (YCG), and ZCG dimensions. Furthermore, a coupling effect between YCG and ZCG makes it challenging to measure either parameter independently with high precision. Aftermarket functional components retrofitted to factory vehicles create variable sprung mass conditions, introducing additional uncertainties. Component heterogeneity and positional variability amplify suspension deformation magnitudes, thereby exacerbating fluctuations in XCG, YCG, and ZCG coordinates. Collectively, three factors compromise ZCG measurement accuracy:
(1)
angle-dependent extrapolation via arithmetic averaging within 6–12° tilt intervals,
(2)
YCG-ZCG coupling under unlocked suspension impeding independent measurement,
(3)
variable sprung mass exacerbating deformation-induced coordinate shifts.
Conventional approaches to these issues include time-domain analysis [5], frequency-domain analysis [6], and wavelet transforms [7]. Recent studies have integrated conventional techniques with machine learning algorithms, such as combining fast Fourier transform (FFT) with support vector machines (SVMs) [8], short-time Fourier transform (STFT) with CNN [9], or wavelet transforms with LSTM [10]. As a pivotal branch of machine learning, deep learning (DL) has been widely applied across multiple domains [11] due to its strengths in feature extraction, time-series modeling, robustness, and generalization capability. For instance, Hwang et al. implemented a CNN–LSTM–Attention model for monitoring human fatigue [12]. Wang Xingfen et al. proposed a CNN–LSTM–Attention-based approach to improve the accuracy of iron ore futures price prediction [13]. Yang Yong et al. developed an intelligent diagnostic approach using a CNN–LSTM–Attention model, which enhances the accuracy and reliability of track circuit fault diagnosis [14]. Xia K. et al. developed a CNN–BiLSTM–Attention-based model for predicting the aging status of automotive wiring harnesses [15]. Chen Xing et al. developed a CNN–LSTM–Attention-based model for predicting monthly domestic water demand, effectively forecasting water consumption patterns [16]. She Chengxi et al. developed an intelligent fault diagnosis and early warning method for production lines using a CNN–LSTM–Attention model, enabling effective fault prediction through data mining [17]. Zhu Anfeng et al. developed a condition monitoring and health evaluation system for wind turbines using a CNN–LSTM–Attention model [18].
Given that the tilt-table method does not lock the suspension system and that variations in sprung mass can affect the measurement of the ZCG, this study proposes a CNN–LSTM–Attention model—integrating CNN, LSTM, and attention mechanisms—to predict the 0° ZCG of a typical two-axle vehicle, leveraging the advantages of deep learning and its successful application cases.

2. Tilt-Table

2.1. Tilt-Table Method

As illustrated in Figure 1a, the tilt-table method is based on a simplified vehicle model assuming non-independent suspension, wherein the suspension and tires remain undeformed during tilting and fluid movement does not affect the measurement. The tested vehicle is mounted on the tilt-table and tilted to the left and to the right up to 12°. Load transfer data are collected at each tilt angle, and the arithmetic average of the ZCG values between 6° and 12° is taken as the estimated ZCG at 0°. With this approach, the maximum measurement error of ZCG can reach 0.31% [19].
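The rigid-body moment balance underlying the tilt-table method can be sketched as follows. This is a minimal illustration assuming the center of gravity lies on the vehicle centerline (YCG = 0) and measuring height above the table surface; it is not the exact formulation of GB/T 12538-2023, and all numeric values are synthetic.

```python
import math

def zcg_from_tilt(f_low, f_high, track, weight, theta_deg):
    """Estimate CG height above the table from wheel loads at tilt angle theta.

    Simplified rigid-body moment balance about the table centerline
    (assumes YCG = 0):  (f_low - f_high) * track / 2 = weight * h * sin(theta)
    """
    theta = math.radians(theta_deg)
    return (f_low - f_high) * track / (2.0 * weight * math.sin(theta))

# Synthetic check: a CG placed at h = 0.60 m produces a load split that
# the formula inverts back to 0.60 m.
W, T, h, theta = 10000.0, 1.60, 0.60, 10.0          # N, m, m, degrees
delta = 2.0 * W * h * math.sin(math.radians(theta)) / T   # f_low - f_high
f_low = (W * math.cos(math.radians(theta)) + delta) / 2.0
f_high = f_low - delta
print(round(zcg_from_tilt(f_low, f_high, T, W, theta), 3))  # → 0.6
```

The systematic error discussed in Section 2.2 arises precisely because this balance ignores suspension deformation: once the suspension is unlocked, YCG shifts with tilt angle and the YCG = 0 assumption no longer holds.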

2.2. Unlocked Suspension Systems

As illustrated in Figure 1b, the tilt-table method assumes the vehicle to be a rigid body; however, this assumption is invalid when the suspension is not locked. With increasing tilt angle, the suspension deformation becomes more pronounced, resulting in a significant change in YCG. As shown in Table 1, the maximum error introduced in YCG due to this effect can reach up to 0.65%.
Under unlocked suspension conditions, the measurement error in ZCG is further amplified by YCG variations through interdependent deformation effects.

2.3. Variable Sprung Mass

With increasing functional requirements, retrofitting aftermarket customized components to factory vehicles creates variable sprung mass conditions. This alters the weight distribution borne by the suspension system. Given the component heterogeneity and positional variability, the center-of-gravity position of the sprung mass shifts, consequently modifying the overall vehicle center-of-gravity coordinates. During tilt-table testing, unlocked suspension states (Section 2.2) combined with variable sprung mass dynamics cause the vehicle center-of-gravity position coordinates to vary with tilt angles, thereby amplifying measurement errors in ZCG.
Conventional approaches, constrained by rigid-body assumptions and static physical modeling of vehicles, struggle to resolve the aforementioned issues. In contrast, deep learning leverages end-to-end automatic feature extraction mechanisms through multi-layer nonlinear network architectures, directly processing raw sensor data (such as wheel load time-series signals, body tilt angles) to extract multi-level and multi-scale features of suspension dynamics during tilt-table testing. This methodology not only reveals correlations among tilt angles, suspension deformations, and wheel load variations under variable sprung mass conditions but also captures spatiotemporal dependencies in suspension response sequences. Consequently, it offers a novel solution for ZCG measurement during tilt-table testing with unlocked suspension states and variable sprung mass configurations.

3. ZCG Prediction Model with Unlocked Suspension States and Variable Sprung Mass Configurations

3.1. CNN Model

Owing to its exceptional feature extraction capabilities, the CNN model has gained widespread recognition in deep learning applications [20,21,22]. Through the synergistic interaction of convolutional operations and nonlinear activation functions, this architecture enables the extraction and representation of load transfer characteristics during vehicle roll maneuvers.
As illustrated in Figure 2, a CNN primarily consists of convolutional layers, pooling layers, and fully connected layers. By alternately stacking these layers, CNNs are capable of progressively extracting informative features from raw sequence data [23]. For instance, when input data are presented, feature extraction is performed by the convolutional layer according to Equation (1).
h_i = f(W_i ∗ X + b_i)    (1)
where h_i denotes the feature map output from layer i; f(·) represents the activation function; ∗ denotes the convolution operation; W_i is the convolution kernel weight matrix; and b_i is the bias vector.
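For a single 2 × 2 kernel sliding over a 2 × 2 load matrix (stride 1, no padding), Equation (1) reduces to one weighted sum followed by the activation, which is consistent with the (2, 2) → (1, 1) shapes in Table 2. A minimal NumPy sketch, where the kernel weights and wheel loads are illustrative values rather than learned parameters:

```python
import numpy as np

def conv2x2_relu(x, w, b):
    """Eq. (1): h = f(W * x + b) for one 2x2 kernel on a 2x2 input.

    With equal input and kernel sizes there is a single valid position,
    so the convolution collapses to one weighted sum (a 1x1 feature map).
    """
    z = np.sum(w * x) + b          # cross-correlation at the only valid position
    return np.maximum(z, 0.0)      # ReLU activation f(.)

# Hypothetical wheel-load matrix [[Ffl, Ffr], [Frl, Frr]] (N) and a kernel
# that responds to left/right load difference.
x = np.array([[3159.3, 3145.1], [4345.7, 4331.3]])
w = np.array([[0.001, -0.001], [0.001, -0.001]])   # illustrative weights
print(round(float(conv2x2_relu(x, w, b=0.0)), 4))  # → 0.0286
```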
CNNs are employed to extract features of wheel load transfer during vehicle roll on a tilt-table test platform, under conditions of unlocked suspension and variable sprung mass. However, given the temporal characteristics inherent in wheel load transfer during the roll process, CNN and LSTM are integrated to enhance prediction performance.

3.2. LSTM Model

LSTM effectively integrates state information across sequential positions. Its unique architecture mitigates the vanishing gradient problem commonly encountered in recurrent neural network training [24]. As illustrated in Figure 3, LSTM regulates historical information through gating structures: the input gate i t , forget gate f t , and output gate O t . These mechanisms enable LSTM to capture long-term dependencies during wheel load transfer while enhancing feature extraction capabilities [25].
The forget gate f t is computed as shown in Equation (2):
f_t = σ(W_f · [H_{t−1}, X_t] + b_f)    (2)
where σ denotes the sigmoid activation function; W_f is the forget-gate weight matrix; H_{t−1} is the hidden state at time t−1; X_t is the input vector at time t; and b_f is the bias vector.
The input gate regulates information flow into the cell state C t through Equations (3)–(5):
i_t = σ(W_i · [H_{t−1}, X_t] + b_i)    (3)
C̃_t = tanh(W_c · [H_{t−1}, X_t] + b_c)    (4)
C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t    (5)
Here, i_t is the input gate; W_i is the input-gate weight matrix; b_i and b_c are the bias vectors of the input gate and the candidate cell state, respectively; C̃_t is the candidate cell state at time t; tanh is the hyperbolic tangent activation; ⊙ denotes element-wise multiplication; and C_t represents the updated cell state.
The output gate governs the final output through Equations (6) and (7):
O_t = σ(W_o · [H_{t−1}, X_t] + b_o)    (6)
H_t = O_t ⊙ tanh(C_t)    (7)
where O_t is the output gate; W_o is the output-gate weight matrix; b_o is the bias vector; and H_t is the hidden state output at time t.
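Equations (2)–(7) describe a single LSTM step, which can be sketched directly in NumPy. The weights below are randomly initialized for illustration; in the actual model they are learned during training.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o):
    """One LSTM step implementing Eqs. (2)-(7).

    Each weight matrix multiplies the concatenation [H_{t-1}, X_t].
    """
    z = np.concatenate([h_prev, x_t])          # [H_{t-1}, X_t]
    f_t = sigmoid(W_f @ z + b_f)               # Eq. (2) forget gate
    i_t = sigmoid(W_i @ z + b_i)               # Eq. (3) input gate
    c_bar = np.tanh(W_c @ z + b_c)             # Eq. (4) candidate cell state
    c_t = f_t * c_prev + i_t * c_bar           # Eq. (5) cell update
    o_t = sigmoid(W_o @ z + b_o)               # Eq. (6) output gate
    h_t = o_t * np.tanh(c_t)                   # Eq. (7) hidden state
    return h_t, c_t

# Tiny smoke test with 2 hidden units and a 3-dim input.
rng = np.random.default_rng(0)
H, D = 2, 3
Ws = [rng.standard_normal((H, H + D)) * 0.1 for _ in range(4)]
bs = [np.zeros(H) for _ in range(4)]
h, c = np.zeros(H), np.zeros(H)
h, c = lstm_step(rng.standard_normal(D), h, c, *Ws, *bs)
print(h.shape, c.shape)  # → (2,) (2,)
```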
The CNN-LSTM model is capable of extracting local features and capturing roll sequence dependencies. However, in processing extended sequences, it is not sufficient to accentuate critical patterns such as minimal load transfer near 0° roll angle and suspension stiffness–roll angle interdependencies. To enhance focus adaptability, an attention mechanism dynamically weights salient features, thereby improving model prediction accuracy.

3.3. Attention Mechanism

The attention mechanism is a computational model that simulates selective focus on critical information in human cognition. Its core principle dynamically weights input components by computing inter-element correlations, thereby enhancing the model’s focus on salient features [26]. The computational procedure is defined by Equations (8)–(10):
E_t = tanh(H_t)    (8)
a_t = softmax(w_a^T · E_t)    (9)
A_t = H_t · a_t^T    (10)
where H_t is the LSTM output matrix at time t; E_t denotes the tanh-transformed feature representation; w_a^T represents the transposed weight vector; a_t is the softmax-normalized attention vector; and A_t is the attention-weighted output at t [27].
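Equations (8)–(10) amount to scoring each timestep, normalizing the scores with softmax, and reweighting the hidden states. A NumPy sketch over the 120-timestep, 64-unit LSTM output used later in Table 2 (the scoring vector w_a is random here; in the model it is learned):

```python
import numpy as np

def attention(H, w_a):
    """Eqs. (8)-(10): score, normalize, and reweight LSTM outputs.

    H:   (T, d) matrix of LSTM hidden states over T timesteps.
    w_a: (d,) learnable scoring vector.
    """
    E = np.tanh(H)                          # Eq. (8) tanh-transformed features
    scores = E @ w_a                        # w_a^T E_t for each timestep
    exp = np.exp(scores - scores.max())     # numerically stable softmax
    a = exp / exp.sum()                     # Eq. (9) attention weights
    return a @ H, a                         # Eq. (10) weighted output

rng = np.random.default_rng(1)
H = rng.standard_normal((120, 64))          # 120 timesteps, 64 hidden units
out, a = attention(H, rng.standard_normal(64))
print(out.shape, round(float(a.sum()), 6))  # → (64,) 1.0
```

The weights a_t are non-negative and sum to one, so timesteps near the 0° region can receive large weights without destabilizing the output scale.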

3.4. Construction of the CNN–LSTM–Attention Model

In this study, the input for each sample is a four-channel spatiotemporal tensor with dimensions of 120 × 4 × 2 × 2, as denoted by Equations (11)–(16).
I^(i) = [I^(i,1), I^(i,2), …, I^(i,120)] ∈ ℝ^(120×4×2×2)    (11)
I^(i,t) = [M^(i,t), M_LR^(i,t), M_Front^(i,t), M_Diag^(i,t)]    (12)
M^(i,t) = [F_fl, F_fr; F_rl, F_rr] ∈ ℝ^(2×2)    (13)
M_LR^(i,t) = [F_fl − F_fr, F_fl − F_fr; F_rl − F_rr, F_rl − F_rr]    (14)
M_Front^(i,t) = [F_fl + F_fr, F_fl + F_fr; F_rl + F_rr, F_rl + F_rr]    (15)
M_Diag^(i,t) = [F_fl + F_rr, F_fl + F_rr; F_rl + F_fr, F_rl + F_fr]    (16)
(Matrix rows are separated by semicolons; in Equations (14)–(16) each derived value is repeated across both columns to preserve the 2 × 2 shape.)
where the four distinct matrices M ( i , t ) , M L R ( i , t ) , M F r o n t ( i , t ) , and M D i a g ( i , t ) correspond to the raw load matrix, left/right load difference matrix, front/rear axle load matrix, and diagonal coupling load matrix of the vehicle wheels, respectively. The variables F f l , F f r , F r l , and F r r denote the vertical forces (normal to the roll table) acting on the front-left, front-right, rear-left, and rear-right wheels, respectively.
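The channel construction of Equations (11)–(16) can be sketched as follows; the constant-load sequence used at the end is synthetic and exists only to verify the (120, 4, 2, 2) shape stated in the text.

```python
import numpy as np

def build_input(F):
    """Build the (T, 4, 2, 2) tensor of Eqs. (11)-(16).

    F: (T, 4) array of wheel loads [Ffl, Ffr, Frl, Frr] per timestep.
    Channels: raw loads M, left/right difference M_LR, front/rear axle
    sum M_Front, and diagonal coupling M_Diag.  Derived channels repeat
    each row value across both columns to keep the 2x2 shape.
    """
    ffl, ffr, frl, frr = F[:, 0], F[:, 1], F[:, 2], F[:, 3]
    X = np.empty((F.shape[0], 4, 2, 2))
    X[:, 0, 0, :] = np.stack([ffl, ffr], axis=1)   # M: raw load matrix
    X[:, 0, 1, :] = np.stack([frl, frr], axis=1)
    X[:, 1, 0, :] = (ffl - ffr)[:, None]           # M_LR: left/right difference
    X[:, 1, 1, :] = (frl - frr)[:, None]
    X[:, 2, 0, :] = (ffl + ffr)[:, None]           # M_Front: axle sums
    X[:, 2, 1, :] = (frl + frr)[:, None]
    X[:, 3, 0, :] = (ffl + frr)[:, None]           # M_Diag: diagonal coupling
    X[:, 3, 1, :] = (frl + ffr)[:, None]
    return X

# Synthetic constant loads over 120 timesteps, just to check shapes.
loads = np.tile([3159.3, 3145.1, 4345.7, 4331.3], (120, 1))
X = build_input(loads)
print(X.shape)  # → (120, 4, 2, 2)
```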
Although CNNs effectively extract load transfer features during vehicle roll via convolutional and pooling layers, they exhibit temporal insensitivity when processing roll sequence data. Conversely, LSTMs excel at capturing long-term dependencies in roll sequences, addressing the challenge of modeling large roll-angle interval dependencies. Feeding CNN-extracted features into LSTM enables more effective analysis of periodic variation patterns among load transfer, suspension stiffness, and roll angle throughout the roll process. Within nonparametric modeling frameworks, the CNN-LSTM architecture demonstrates nonlinear fitting capabilities; however, its hyperparameter optimization often relies on experience-driven trial-and-error, directly impacting training efficacy. Thus, integrating an attention mechanism enhances extraction of sequence-specific features, enabling precise focus on critical information to improve predictive performance. Accordingly, the CNN–LSTM–Attention predictive model illustrated in Figure 4 is proposed, and its structure is shown in Table 2.

3.5. Prediction Workflow of the CNN–LSTM–Attention Model

Under the test conditions of unlocked suspension and variable sprung mass on a tilt-table platform, ZCG is predicted. As illustrated in Figure 5, wheel load parameters—including raw wheel loads, lateral load differences, front/rear axle loads, and cross-axle coupling metrics—undergo preprocessing to ensure data quality and format compatibility. The dataset is partitioned into training and testing subsets, with the former used for model training and the latter for predictive performance validation. Prior to deployment, model parameters are initialized; subsequent training optimizes them through loss minimization, improving prediction accuracy. Convergence status is monitored: training terminates upon convergence and otherwise continues iterating, after which the optimized parameters are archived. The trained model then performs reverse temporal prediction (from 12° down to 0° roll angle) on the test data to generate ZCG estimates.

4. Experimental Analysis

4.1. Dataset Construction

As depicted in Figure 6, a stock vehicle theoretical model with unlocked suspension is established in Adams using the tilt-table method. Suspension stiffness is maintained constant while variable sprung mass conditions are simulated by adjusting component masses. Left/right roll-angle data (0–12°) from the theoretical model comprise the training set for predictive modeling.
As tabulated in Table 3, the test set contains left/right roll-angle data (0–12°) measured on a manufacturer’s tilt-table platform, featuring unlocked suspension and variable sprung mass configurations.

4.2. Parameter Configuration and Evaluation Metrics

The computational environment comprised the Windows 11 OS, an Intel Core i5-14600KF CPU, an NVIDIA GeForce RTX 3080 Ti GPU, 32 GB RAM, Python 3.11.10, and the PyTorch 2.5.1 framework. Model configurations utilized ReLU activation, the Adam optimizer (learning rate = 0.001), early stopping with a 15-epoch patience, 500 training iterations, and 120 samples per batch.
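The early-stopping rule described above (15-epoch patience) can be sketched framework-agnostically; this helper is an illustration of the stated configuration, not the authors' implementation, and the loss sequence below is synthetic.

```python
class EarlyStopping:
    """Stop training when the monitored loss fails to improve for `patience` epochs."""

    def __init__(self, patience=15, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, loss):
        """Record one epoch's loss; return True when training should stop."""
        if loss < self.best - self.min_delta:
            self.best = loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Synthetic loss that improves through epoch 4, then plateaus at 0.2:
# the last improvement is at epoch 4, so 15 non-improving epochs end at epoch 19.
stopper = EarlyStopping(patience=15)
losses = [1.0 / (e + 1) if e < 5 else 0.2 for e in range(500)]
stopped_at = next(e for e, l in enumerate(losses) if stopper.step(l))
print(stopped_at)  # → 19
```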
To evaluate the performance of the proposed model in predicting the ZCG during tilt-table testing with an unlocked suspension and variable sprung mass, four commonly used metrics were adopted: mean absolute error (MAE), mean squared error (MSE), mean absolute percentage error (MAPE), and root mean squared error (RMSE), as defined in Equations (17)–(20) [28]. Lower values of these metrics indicate higher prediction accuracy of the ZCG.
MAE = (1/m) ∑_{i=1}^{m} |y_i − ỹ_i|    (17)
MSE = (1/m) ∑_{i=1}^{m} (y_i − ỹ_i)²    (18)
MAPE = (1/m) ∑_{i=1}^{m} |(y_i − ỹ_i)/y_i| × 100%    (19)
RMSE = √((1/m) ∑_{i=1}^{m} (y_i − ỹ_i)²)    (20)
where m is the sample size of vehicle roll-angle data; y_i denotes the ground-truth ZCG; and ỹ_i denotes the predicted value.
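Equations (17)–(20) can be computed directly from paired ground-truth and predicted values; a short sketch with hypothetical ZCG values (in mm):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, MAPE (%), and RMSE of Eqs. (17)-(20)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))                  # Eq. (17)
    mse = np.mean(err ** 2)                     # Eq. (18)
    mape = np.mean(np.abs(err / y_true)) * 100  # Eq. (19)
    rmse = np.sqrt(mse)                         # Eq. (20)
    return mae, mse, mape, rmse

# Hypothetical ground-truth vs. predicted ZCG values.
mae, mse, mape, rmse = regression_metrics([600.0, 610.0], [601.0, 608.0])
print(mae, mse, round(rmse, 4))  # → 1.5 2.5 1.5811
```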

4.3. Model Prediction and Error Analysis

To validate the ZCG prediction performance of the CNN–LSTM–Attention model under tilt-table testing conditions with unlocked suspension and variable sprung mass, comparative evaluations were conducted against CNN, CNN-LSTM, and Transformer models. All models utilized identical training and testing datasets.
As quantified by the MAE, MSE, MAPE, and RMSE metrics in Table 4, the CNN–LSTM–Attention model demonstrates superior fitting performance. Across all six prediction trials, this model consistently outperforms its counterparts in all error metrics, indicating optimal predictive capability and implementation feasibility. The CNN-LSTM model exhibits competitive performance with high feasibility, though marginally inferior to CNN–LSTM–Attention in metric evaluations. Conversely, the Transformer model yields moderate performance metrics, signifying intermediate feasibility levels. The CNN model registers the highest error values across all metrics, reflecting the least desirable prediction accuracy and minimal feasibility.
Figure 7 demonstrates decreasing loss trends across all models with increasing epochs, indicating effective optimization. Convergence initiates around epoch 100, where both CNN–LSTM–Attention and Transformer models exhibit faster convergence rates and lower loss values, signifying enhanced fitting capabilities. Post-epoch 100, the CNN–LSTM–Attention model maintains superior stability and minimal loss. The CNN-LSTM model shows slower convergence with marginally inferior performance. In contrast, the standalone CNN model displays the highest initial loss and slowest convergence, indicating its weakest performance in ZCG prediction.
Figure 8 compares predicted versus ground-truth ZCG at 0° roll angle under tilt-table testing with unlocked suspension and variable sprung mass. Larger fluctuation amplitudes in the curves correspond to lower prediction accuracy. Among the six prediction groups, (1) the standalone CNN model yields the poorest performance; (2) the Transformer shows marginal improvement but significant residuals; (3) CNN-LSTM surpasses both; and (4) CNN–LSTM–Attention achieves optimal performance with minimal curve fluctuations in both the near-0° (low roll-angle) and near-12° (high roll-angle) regions. Its predicted ZCG values align most closely with the ground-truth data, confirming the superiority of CNN–LSTM–Attention over the benchmark models.
As evidenced in Table 5, the CNN–LSTM–Attention model demonstrates superior prediction accuracy and reduced computational time for ZCG estimation compared to CNN, CNN-LSTM, and Transformer benchmarks.
Table 6 compares 0° ZCG prediction errors between six CNN–LSTM–Attention implementations and the arithmetic mean method using 6–12° bilateral roll data. Under tilt-table testing with unlocked suspension and variable sprung mass, the CNN–LSTM–Attention model achieves consistently lower errors than the angular averaging approach.

5. Conclusions

To mitigate the influence of unlocked suspension and variable sprung mass on ZCG measurements in tilt-table testing, a CNN–LSTM–Attention prediction model is proposed. Specifically, (1) CNN extracts fine-grained features among load transfer, suspension stiffness, and roll angle during unlocked suspension articulation; (2) LSTM processes temporal dependencies in roll-angle sequences; and (3) the attention mechanism dynamically weights features to accommodate variable sprung mass conditions. Experimental validation on production vehicles demonstrates superior ZCG prediction accuracy compared to benchmark models, with predictions exhibiting the closest alignment to ground-truth values. This deep learning framework provides a novel solution for vehicle center-of-mass positioning via tilt-table testing, utilizing the CNN–LSTM–Attention model to calculate ZCG coordinates. The trained model extracts intrinsic relationships among ZCG position, suspension elastic deformation, wheel-load transfer, and roll angle. It enables accurate ZCG prediction under unlocked suspension deformation and variable sprung mass scenarios, demonstrating significant practical utility in vehicle statics and dynamics applications.

Author Contributions

Conceptualization, G.P. and Z.X.; methodology, G.P.; software, G.P.; validation, G.P. and Z.C.; formal analysis, Z.C.; investigation, Z.C.; resources, G.P.; data curation, G.P.; writing—original draft preparation, G.P.; writing—review and editing, G.P.; visualization, G.P.; supervision, Z.X.; project administration, Z.X.; funding acquisition, P.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China, grant No. 52401019.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Due to confidentiality requirements concerning the tested vehicles, the data presented in this study are available on request from the corresponding author. Further research on these data will be conducted in the future.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
XCG — the longitudinal position of the center of gravity
YCG — the lateral position of the center of gravity
ZCG — the height of the center of gravity
CNNs — convolutional neural networks
LSTMs — long short-term memory networks

References

  1. Wang, B.; Yang, C.; Liu, Y.; Cai, P.F. Analysis of Influencing Factors and Data Processing Methods for the Center of Gravity Position in Two-Axis Vehicles. Automot. Appl. Technol. 2022, 105–108. (In Chinese) [Google Scholar] [CrossRef]
  2. GB/T 12538-2023; Determination of Center of Gravity Position for Road Vehicles. State Administration for Market Regulation: Beijing, China, 2023.
  3. Zhou, Z.M.; Min, L.F.; Li, Y.; Yu, C.Y. Research on Measurement Methods for the Center of Gravity Position of Light Trucks. Spec. Purp. Veh. 2023, 11, 84–89. (In Chinese) [Google Scholar] [CrossRef]
  4. Zhang, Y.J.; Wang, W.Y.; Wang, T.; Yuan, L.K.; Zhang, T.Y.; Wang, L.M. Simulation and Experimental Study on the Ride Comfort of Vehicles with Rigid-Flexible Coupling. Automot. Technol. 2014, 5, 20–25. (In Chinese) [Google Scholar] [CrossRef]
  5. Wu, W.X.; Zhang, Z.Y.; Xiao, X.S.; You, W.; Jin, T. Parameter Optimization Design of CLLC Resonant Converters Based on Time-Domain Analysis. Power Syst. Prot. Control 2023, 51, 139–151. (In Chinese) [Google Scholar] [CrossRef]
  6. Li, Y.C.; Gao, Z.Q.; Zhou, X.S.; Guo, S.C. Improved Linear Active Disturbance Rejection Control of Wind Power Converters Combining Active Damping. Electron. Meas. Technol. 2022, 45, 1–9. (In Chinese) [Google Scholar] [CrossRef]
  7. Liu, Y.F.; Sun, R.X. Denoising of Automotive Pressure Sensor Signals Using Wavelet Transform Under Impact Excitation Calibration. Shanxi Electron. Technol. 2024, 6, 35–37. (In Chinese) [Google Scholar] [CrossRef]
  8. Dai, Y.; Wang, J.G.; Cao, G.W.; Zhang, J.X.; Jia, B. Vibration Signal Processing and State Classification for Monitoring Drilling Processes. J. Vibroeng. Test. Diagn. 2022, 42, 89–95, 196–197. (In Chinese) [Google Scholar] [CrossRef]
  9. Xie, M.; Meng, Q.S.; Li, B.; Lu, J.N.; Li, Y.Q.; Yand, Z.Y. Roller Fault Diagnosis Method Combining Short-Time Fourier Transform and Convolutional Neural Networks. J. Eng. Des. 2024, 31, 565–574. (In Chinese) [Google Scholar] [CrossRef]
  10. Wang, T.Y.; Chen, H.; Wang, G.; Wu, N. EEG Sleep Staging Model Using Wavelet Transform and Bidirectional Long Short-Term Memory Networks. J. Xi’an Jiaotong Univ. 2022, 56, 104–111. [Google Scholar] [CrossRef]
  11. Yang, W.; Du, X.F.; Zhang, Y.; Gao, Y. A Review on Deep Learning-Based Vehicle Object Detection Algorithms. Automot. Appl. Technol. 2022, 47, 24–26. (In Chinese) [Google Scholar] [CrossRef]
  12. Hwang, S.; Kwon, N.; Lee, D.; Kim, J.; Yang, S.; Youn, I.; Moon, H.-J.; Sung, J.-K.; Han, S. A Multimodal Fatigue Detection System Using sEMG and IMU Signals with a Hybrid CNN-LSTM-Attention Model. Sensors 2025, 25, 3309. [Google Scholar] [CrossRef]
  13. Wang, X.F.; Zhang, Y.B. Research on Prediction Accuracy of Iron Ore Futures Prices under Different Time Windows—Based on CNN-LSTM-Attention Model Analysis. Price Theory Pract. 2022, 11, 142–145. (In Chinese) [Google Scholar] [CrossRef]
  14. Yang, Y.; Ke, T.; Hu, Q.Z.; Zhang, Z.M. A Fault Diagnosis Method for ZPW-2000A Track Circuits Based on CNN-LSTM-Attention. J. Railw. Sci. Eng. 2025, 22, 2380–2392. (In Chinese) [Google Scholar] [CrossRef]
  15. Xia, K.; Zhu, Q.; Yuan, Q.; Wang, J. Prediction of Automotive Wire Harness Aging Based on CNN-biLSTM-Attention. Sensors 2025, 25, 2910. [Google Scholar] [CrossRef] [PubMed]
  16. Chen, X.; Shen, Z.H.; Xu, Q.; Cai, J. Monthly Domestic Water Demand Forecasting Based on CNN-LSTM-Attention Model. J. China Three Gorges Univ. (Nat. Sci.) 2024, 46, 1–6. (In Chinese) [Google Scholar] [CrossRef]
  17. She, C.X.; Zhang, C.P.; Zhao, P.Y.; Wang, Q.Y. Fault Diagnosis and Early Warning for Production Lines Based on CNN-LSTM-Attention. J. Syst. Sci. Math. 2025, 1–18. Available online: https://kns.cnki.net/kcms/detail/11.2019.O1.20250318.1120.034.html (accessed on 7 September 2025).
  18. Zhu, A.F.; Zhao, Q.C.; Zhou, L.; Yang, T.L.; Yang, X.B. Condition Monitoring and Health Assessment of Wind Turbine Units Based on CNN-LSTM-Attention. J. Vibroeng. Test. Diagn. 2025, 45, 256–263, 409. (In Chinese) [Google Scholar] [CrossRef]
  19. Jia, X.D.; Xiang, F. Verification Method for Measuring the Height of a Vehicle’s Center of Gravity Using a Tilting Platform. Equip. Manuf. Technol. 2022, 4, 95–98. (In Chinese) [Google Scholar] [CrossRef]
  20. Sun, B.; Ju, Q.Q.; Sang, Q.B. Image Dehazing Algorithm Combining FC-DenseNet and WGAN. J. Comput. Sci. Explor. 2020, 14, 1380–1388. (In Chinese) [Google Scholar] [CrossRef]
  21. Lin, K.Z.; Bai, J.X.; Li, H.T.; Li, A. Small Sample Facial Expression Recognition by Fusing Different Models under Deep Learning. J. Comput. Sci. Explor. 2020, 14, 482–492. (In Chinese) [Google Scholar] [CrossRef]
  22. Jaf, S.; Calder, C. Deep Learning for Natural Language Parsing. IEEE Access 2019, 7, 131363–131373. [Google Scholar] [CrossRef]
  23. Jin, T.T.; Yan, C.L.; Chen, C.H.; Yang, Z.J.; Tian, H.L.; Wang, S.Y. Light Neural Network with Fewer Parameters Based on CNN for Fault Diagnosis of Rotating Machinery. Measurement 2021, 181, 109639. [Google Scholar] [CrossRef]
  24. Zhu, A.F.; Zhao, Q.C.; Yang, T.L.; Zhou, L.; Zeng, B. Condition Monitoring of Wind Turbine Based on Deep Learning Networks and Kernel Principal Component Analysis. Comput. Electr. Eng. 2023, 105, 108538. [Google Scholar] [CrossRef]
  25. Wang, M.; Tian, D.P. Application of an Improved Particle Swarm Optimization Algorithm for Optimizing the CNN-LSTM-Attention Model in Safety Production Accident Prediction. J. Saf. Environ. 2025, 25, 1829–1837. (In Chinese) [Google Scholar] [CrossRef]
  26. Tao, J.; Zhou, H.; Fan, W. Efficient and High-Precision Method of Calculating Maximum Singularity-Free Space in Stewart Platform Based on K-Means Clustering and CNN-LSTM-Attention Model. Actuators 2025, 14, 74. [Google Scholar] [CrossRef]
  27. Liu, Q.; Zhang, Y. Dynamic Prediction for Pollutant Emissions of Coal-Fired Power Plant Based on CNN-LSTM-Attention. J. Phys. Conf. Ser. 2024, 2868, 012014. [Google Scholar] [CrossRef]
  28. Wang, Z.; Song, Y.; Pang, L.; Li, S.; Sun, G. Attention-Enhanced CNN-LSTM Model for Exercise Oxygen Consumption Prediction with Multi-Source Temporal Features. Sensors 2025, 25, 4062. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Tilt-table method.
Figure 2. Structure of the CNN model.
Figure 3. Structure of the LSTM model.
Figure 4. The CNN–LSTM–Attention model.
Figure 5. Model prediction process.
Figure 6. Vehicle theoretical model.
Figure 7. Comparison of training loss curves of four models.
Figure 8. Comparison of prediction results of different models. (a–f) Comparison of each model's predictions with the true values for the first through sixth test groups, respectively.
Table 1. Variation of the lateral centroid position YCG with tilt angle for the left and right tilts.
| Angle (°) | Left YCG (mm) | Right YCG (mm) | Average YCG (mm) | Error (%) |
|---|---|---|---|---|
| 0 | 2.32 | 2.32 | 2.32 | 0 |
| 2 | 25.31 | −20.66 | 2.33 | 0.22 |
| 4 | 48.50 | −43.85 | 2.33 | 0.22 |
| 6 | 71.98 | −67.33 | 2.33 | 0.22 |
| 8 | 95.83 | −91.17 | 2.33 | 0.43 |
| 10 | 120.13 | −115.46 | 2.34 | 0.65 |
| 12 | 144.96 | −140.30 | 2.33 | 0.43 |
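As a worked check of the averaging in Table 1, the lateral centroid position at each angle is the mean of the left- and right-tilt values, and the error is measured against the 0° reference of 2.32 mm. A minimal sketch (variable names are illustrative, not from the paper):

```python
# Mean of the left/right-tilt lateral centroid positions (mm) and its
# relative error against the 0-degree reference value.
left_ycg = [25.31, 48.50, 71.98, 95.83, 120.13, 144.96]
right_ycg = [-20.66, -43.85, -67.33, -91.17, -115.46, -140.30]
ycg_ref = 2.32  # YCG at 0 deg tilt (mm)

for left, right in zip(left_ycg, right_ycg):
    avg = (left + right) / 2                     # e.g. (25.31 - 20.66)/2 = 2.325
    err = abs(avg - ycg_ref) / ycg_ref * 100     # percent error vs. the 0 deg value
    print(f"avg = {avg:.2f} mm, error = {err:.2f}%")
```

For the 2° row this gives 2.325 mm and about 0.22%, matching the tabulated values after rounding.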
Table 2. Structure of the CNN–LSTM–Attention model.
| Layer | Output Shape | Core Configuration |
|---|---|---|
| Input Layer | (120, 4, 2, 2) | 120 timesteps, 4 channels, 2 × 2 matrices |
| Convolutional Layer 1 | (120, 32, 1, 1) | Kernel size: 2 × 2, 32 filters, Stride: 1, Activation: ReLU |
| Pooling Layer 1 | (120, 32, 1, 1) | Max pooling |
| Convolutional Layer 2 | (120, 64, 1, 1) | Kernel size: 2 × 2, 64 filters, Stride: 1, Activation: ReLU |
| Pooling Layer 2 | (120, 64, 1, 1) | Max pooling |
| Flatten Layer | (120, 64) | Converting spatial features to vectors |
| LSTM Layer 1 | (120, 128) | Unidirectional LSTM, Hidden units: 128 |
| LSTM Layer 2 | (120, 64) | Unidirectional LSTM, Hidden units: 64 |
| Attention Layer | (64) | Content-based attention mechanism |
| Fully Connected Layer | (64) | Activation: ReLU |
| Output Layer | (1) | Regression prediction |
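The architecture in Table 2 can be sketched in PyTorch as follows. This is an illustrative reconstruction, not the authors' code: layer sizes follow the table where they are reproducible, and where the table is ambiguous (a 2 × 2 kernel applied to an already 1 × 1 feature map in Convolutional Layer 2) a 1 × 1 convolution is substituted; the content-based attention is realized as a learned softmax weighting over the 120 timesteps.

```python
import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    """Sketch of the CNN-LSTM-Attention model of Table 2 (hyperparameters assumed)."""
    def __init__(self):
        super().__init__()
        # Per-timestep CNN: 4-channel 2x2 wheel-load matrix -> 64-dim feature vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=2), nn.ReLU(),   # -> (32, 1, 1)
            nn.Conv2d(32, 64, kernel_size=1), nn.ReLU(),  # -> (64, 1, 1); 1x1 stands in for the table's 2x2
            nn.Flatten(),                                  # -> (64,)
        )
        self.lstm1 = nn.LSTM(64, 128, batch_first=True)
        self.lstm2 = nn.LSTM(128, 64, batch_first=True)
        self.attn_score = nn.Linear(64, 1)  # content-based attention scores over the 120 timesteps
        self.head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):                   # x: (batch, 120, 4, 2, 2)
        b, t = x.shape[:2]
        f = self.cnn(x.reshape(b * t, 4, 2, 2)).reshape(b, t, 64)
        h, _ = self.lstm1(f)
        h, _ = self.lstm2(h)                # (batch, 120, 64)
        w = torch.softmax(self.attn_score(h), dim=1)  # (batch, 120, 1) attention weights
        context = (w * h).sum(dim=1)        # attention-weighted summary, (batch, 64)
        return self.head(context)           # ZCG regression output, (batch, 1)

model = CNNLSTMAttention()
out = model(torch.randn(8, 120, 4, 2, 2))
print(out.shape)
```

Running the sketch on a random batch of eight sequences yields an output of shape (8, 1), one predicted ZCG per sequence.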
Table 3. Partial test set data.
| Angle (°) | Ffl (N) | Ffr (N) | Frl (N) | Frr (N) |
|---|---|---|---|---|
| 0 | 3159.30 | 3145.10 | 4345.73 | 4331.30 |
| 0.1 | 3162.70 | 3141.67 | 4349.39 | 4327.64 |
| 0.2 | 3166.08 | 3138.20 | 4353.02 | 4323.96 |
| … | … | … | … | … |
| 11.8 | 3426.17 | 2611.60 | 4594.19 | 3722.98 |
| 11.9 | 3427.22 | 2606.10 | 4594.68 | 3716.42 |
| 12 | 3428.25 | 2600.59 | 4595.14 | 3709.84 |
Table 4. Average evaluation metrics of each model over the six experiment groups.
| Model | MAE | MSE | MAPE | RMSE |
|---|---|---|---|---|
| CNN | 0.0937 | 0.0087 | 0.0965 | 0.0627 |
| CNN-LSTM | 0.0572 | 0.0048 | 0.0241 | 0.0210 |
| CNN–LSTM–Attention | 0.0274 | 0.0029 | 0.0216 | 0.0156 |
| Transformer | 0.0624 | 0.0059 | 0.0442 | 0.0311 |
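The four metrics in Table 4 can be computed from predicted and true ZCG sequences as below. This is a minimal sketch with the standard definitions; the sample arrays are illustrative, not the paper's data (note the table reports MAPE as a fraction, not a percentage).

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, MAPE (as a fraction) and RMSE, as reported in Table 4."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    mape = np.mean(np.abs(err / y_true))  # assumes y_true contains no zeros
    rmse = np.sqrt(mse)
    return mae, mse, mape, rmse

# Illustrative values only
mae, mse, mape, rmse = regression_metrics([439.44, 440.0, 441.2], [442.64, 441.1, 440.8])
print(f"MAE={mae:.4f}  MSE={mse:.4f}  MAPE={mape:.4f}  RMSE={rmse:.4f}")
```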
Table 5. Comparative prediction performance and computational efficiency of different models at 0° roll angle.
| Model | ZCG (mm) | Elapsed Time (s) |
|---|---|---|
| Actual value | 439.44 | 0 |
| CNN | 451.15 | 101.47 |
| CNN-LSTM | 447.75 | 99.52 |
| CNN–LSTM–Attention | 442.64 | 92.69 |
| Transformer | 448.26 | 104.17 |
Table 6. Errors of the CNN–LSTM–Attention predictions and of the 6–12° arithmetic-mean results relative to the true 0° center-of-mass height ZCG, for the six test groups.
| Group | CNN–LSTM–Attention (%) | Arithmetic Mean of 6–12° on Both Sides (%) |
|---|---|---|
| 1 | −0.16 | −0.31 |
| 2 | −0.19 | −0.29 |
| 3 | 0.27 | 0.32 |
| 4 | −0.30 | −0.31 |
| 5 | 0.25 | 0.30 |
| 6 | 0.21 | 0.31 |

Share and Cite

MDPI and ACS Style

Pang, G.; Xiao, Z.; Cai, Z.; Wang, P. Study on Centroid Height Prediction of Non-Rigid Vehicle Based on Deep Learning Combined Model. Sensors 2025, 25, 5692. https://doi.org/10.3390/s25185692
