A Vehicle Velocity Prediction Method with Kinematic Segment Recognition

Abstract: Accurate vehicle velocity prediction is of great significance in vehicle energy distribution and road traffic management. In light of the high time variability of vehicle velocity and the limitations of single-model prediction, a velocity prediction method based on K-means-QPSO-LSTM with kinematic segment recognition is proposed in this paper. Firstly, the K-means algorithm was used to cluster samples with similar characteristics, extracting kinematic segment samples from typical driving conditions, calculating their feature parameters, and applying principal component analysis to the feature parameters to achieve a dimensionality-reducing transformation of the information. Then, vehicle velocity prediction sub-neural-network models based on long short-term memory (LSTM) optimized with the QPSO algorithm were trained on the datasets of different driving conditions. Furthermore, kinematic segment recognition and traditional vehicle velocity prediction were integrated to form an adaptive vehicle velocity prediction method based on driving condition identification. Finally, the current driving condition type was identified and updated in real time during vehicle velocity prediction, and the corresponding sub-LSTM model was used for the prediction. The simulation experiment demonstrated a significant enhancement in both the speed and accuracy of prediction with the proposed method. The proposed hybrid method has the potential to improve the accuracy and reliability of vehicle velocity prediction, making it applicable in fields such as autonomous driving, traffic management, and energy management strategies for hybrid electric vehicles.


Introduction
With the continuous development of intelligent transportation and growing travel demand, the real-time prediction of road information has become increasingly important. Neural networks (NNs), which learn autonomously, provide a new solution for vehicle velocity prediction. Combining NNs and machine learning (ML) in a vehicle's energy management strategy makes it possible to predict road condition information in real time, which can significantly improve the efficiency of the energy management strategy, improve the engine operating interval, and optimize the energy distribution of the vehicle to reduce the incidence of failure. If the driving state of the vehicle (such as speed and demand torque) can be predicted in advance, a global optimization algorithm can be used to bring vehicle performance to the optimal state within a certain time range [1].
At present, the prediction methods for future vehicle velocity mainly include methods based on an exponential function and methods based on driving data [2,3]. The different methods of vehicle velocity prediction are shown in Figure 1.
The prediction method based on the exponential function represents the future driving information in the finite time domain in the form of an exponential change. When a constant exponential coefficient is adopted, the prediction accuracy of vehicle velocity is low; to improve it, the exponential coefficient needs to change with the driving conditions [4]. Sun et al. [5] studied the prediction accuracy and hybrid electric vehicle (HEV) fuel consumption of the exponential function prediction model. However, this method relies on the assumption that future driving information changes exponentially, and if this assumption is violated, its wide application is hindered.
Prediction methods based on driving data mainly include the Markov chain (MC) model [6-8] and the NN model [9-11]. The principle is to collect a large amount of historical driving condition data, establish a corresponding prediction model, and combine the current and historical driving information to predict the vehicle velocity. The MC prediction model predicts the future vehicle velocity from the velocity at the previous moment, which suits fixed driving routes; it is therefore well matched to urban bus scenarios with their repetitive routes [12,13]. Xie et al. [14] proposed a vehicle velocity prediction method based on the backpropagation neural network (BP-NN); using both the acceleration sequence and the velocity sequence as inputs to the BP-NN was shown to be effective, while the MC predictor used as the benchmark had lower accuracy. Lin et al. [15] combined a BP-NN and a radial basis function neural network (RBF-NN) to predict the vehicle velocity and its future error simultaneously, improving the prediction accuracy, and applied the scheme to a fuel cell vehicle energy management strategy, reducing hydrogen consumption by 17.07%. Zhou et al. [16], by studying the multivariate correlation between the preview velocity sequence, the acceleration sequence, and the short-term future velocity sequence and training an NN model, found that it could be applied to future velocity prediction over different prediction horizons. To address the large cumulative error of the deep neural network (DNN) model, a tolerance sequence correction method was used to transform the setting mechanism, which significantly reduced the impact of prediction error on the system. Liu et al. [17] established a short-term driving state prediction model by combining stochastic prediction and machine learning. Wang et al. [18] used the global positioning system to obtain drivers' historical driving data, predicted and identified driving routes, and used dynamic programming (DP) to achieve optimal battery state adjustment of vehicle charging consumption. Wu et al. [19] combined traffic signals and passenger numbers with an energy management strategy based on deep reinforcement learning and trained it on a large number of driving cycles generated by traffic simulations. An NN-based travel model was proposed to help manage the energy distribution of PHEVs, and the simulation results verified that it could improve the prediction accuracy and contribute to fuel economy [20]. Nevertheless, the predictive performance of the NN model is easily affected by its fixed network structure, learning rate, and input.
In addition, the evaluation metrics and the amount of training data collected also affect the accuracy of the model.
Ma et al. [21] applied the long short-term memory (LSTM) network to the traffic field for the first time and verified the advantage of LSTM in prediction accuracy by comparing it with many traditional prediction methods. In addition, convolutional neural networks (CNNs) have been used in road network traffic flow prediction and combined with deep neural networks (DNNs) to capture the temporal and spatial characteristics of traffic flow [22-24].
In the above vehicle velocity prediction methods, most researchers have built a single vehicle velocity prediction model to predict the future velocity sequence. However, parameters such as the number of neurons, the learning rate, the batch size, the diffusion factor, the number of hidden layers, and the number of epochs in a vehicle velocity prediction model have wide value ranges, and the same parameter yields different prediction accuracy under different cycle conditions. Therefore, a single predictive model is not sufficient to describe all driving processes. In order to build a more accurate model, the prediction model needs to be decomposed into multiple sub-prediction models corresponding to different parameter intervals. Moreover, the driving condition type of a vehicle is a complex variable determined by many factors and has a significant impact on vehicle performance. Correctly distinguishing the driving condition types helps to decompose the unified model into a series of sub-neural-network models corresponding to different driving condition types, improving the accuracy of the prediction model.
Based on the above analysis, the same driving conditions often show similar velocity fluctuation trends. In order to further improve the prediction accuracy of vehicle velocity, a hybrid model combining clustered data with an LSTM-NN is proposed. Firstly, kinematic segment samples under typical working conditions were extracted and their feature parameters were calculated. Principal component analysis was used to reduce the dimensionality of the multi-dimensional feature parameters, and samples with similar characteristics were clustered by the K-means algorithm. The clustering results can effectively express the historical correlation and trend correlation of vehicle velocity under various working conditions. Then, the LSTM model was trained using the training data obtained by clustering; the number of neurons, the number of training iterations, and the learning rate of the LSTM-NN were optimized by the quantum particle swarm optimization (QPSO) algorithm, and the K-means-QPSO-LSTM prediction model was constructed. Finally, the China Heavy-duty commercial vehicle Test Cycle-Tractor Trailer (CHTC-TT) was used as the test cycle. The hybrid prediction model based on K-means-QPSO-LSTM was compared with the radial basis function (RBF)-NN and CNN-LSTM models commonly used in vehicle velocity prediction.

Data Preprocessing
In this section, kinematic segment samples under typical driving conditions were extracted and their characteristic parameters were calculated. In order to reduce the amount of calculation and avoid information overlap among the multi-dimensional characteristic parameters, principal component analysis was used to reduce their dimensionality.

Kinematic Fragment Feature Parameter Extraction
The kinematic segment refers to the driving velocity information between two adjacent stopping points [25]. There are many different kinematic segments within the same driving condition, and many similar kinematic segments across different driving conditions. In the short time domain, the kinematic segment reflects the driving characteristics of the vehicle well, so it is taken as the reference unit for velocity prediction. The extracted kinematic segments are clustered according to their driving characteristic parameters and used as samples to train the neural network velocity prediction sub-models, because a cluster composed of kinematic segments with similar characteristics can capture and express the characteristics and rules of a class of working conditions more accurately. The current driving condition is identified from the past driving information, and the corresponding sub-neural network is used to predict the vehicle velocity.
The sample acquisition process is shown in Figure 2. The kinematic segments of typical driving conditions are extracted, and a multi-feature kinematic segment clustering method based on principal component analysis is proposed. The multi-feature parameters of the kinematic segments were reduced in dimensionality by principal component analysis, and the kinematic segments with similar feature parameters were clustered by the K-means algorithm to obtain the training samples of the velocity prediction sub-models under three working conditions, in preparation for the subsequent velocity prediction.
In the relevant literature, the number of characteristic parameters of vehicle driving conditions is generally set to 10-18 [26]. Considering the calculation amount and accuracy comprehensively, the number of characteristic parameters of the kinematic segment is set to 10:

P = {v_ave, v_max, σ, η_a, η_d, a_max, a_min, a_a_ave, a_d_ave, S}

where v_ave is the average velocity, v_max is the maximum velocity, σ is the standard deviation of velocity, η_a = t_a/T and η_d = t_d/T are the ratios of the acceleration time t_a and the deceleration time t_d of each kinematic segment to the segment time T, a_max is the maximum acceleration, a_min is the minimum acceleration, a_a_ave is the average acceleration, a_d_ave is the average deceleration, and S is the mileage of the kinematic segment.
A program was written in MATLAB 2018a to extract the characteristic parameters of 317 kinematic segments under 15 typical driving conditions, and some of the results are shown in Table 1.
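As an illustration of this extraction step (a Python sketch, not the authors' MATLAB program; the 0.5 km/h stop threshold and the ±0.1 (km/h)/s acceleration/deceleration thresholds are assumed values), the following splits a 1 Hz velocity trace into kinematic segments and computes the 10 characteristic parameters:

```python
import numpy as np

def extract_segments(v, stop_thresh=0.5):
    """Split a 1 Hz velocity trace (km/h) into kinematic segments,
    i.e. the driving between two adjacent stopping points."""
    moving = v > stop_thresh
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i
        elif not m and start is not None:
            segments.append(v[start:i])
            start = None
    if start is not None:
        segments.append(v[start:])
    return segments

def segment_features(seg, dt=1.0):
    """The 10 characteristic parameters of one kinematic segment."""
    a = np.diff(seg) / dt                      # acceleration, (km/h)/s
    T = len(seg) * dt                          # segment time
    t_a = np.sum(a > 0.1) * dt                 # acceleration time
    t_d = np.sum(a < -0.1) * dt                # deceleration time
    acc, dec = a[a > 0.1], a[a < -0.1]
    return np.array([
        seg.mean(),                            # v_ave
        seg.max(),                             # v_max
        seg.std(),                             # sigma
        t_a / T,                               # eta_a
        t_d / T,                               # eta_d
        a.max() if len(a) else 0.0,            # a_max
        a.min() if len(a) else 0.0,            # a_min
        acc.mean() if len(acc) else 0.0,       # a_a_ave
        dec.mean() if len(dec) else 0.0,       # a_d_ave
        np.sum(seg) * dt / 3600.0,             # S, mileage in km
    ])
```

One feature vector is produced per kinematic segment, analogous to the rows of Table 1.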

Principal Component Analysis
If the 10 feature parameters are used directly for clustering, the dimensionality increases the calculation amount. In addition, the large number of features and the linear correlation and information overlap among parameters affect the clustering results [27]. Therefore, principal component analysis is adopted to reduce the dimensionality of the multi-feature parameters and improve the speed and accuracy of clustering the multi-feature samples.
Principal component analysis converts correlated initial variables into uncorrelated synthetic variables through an orthogonal transformation and ensures that the synthetic variables retain as much information as possible about the initial variables [28]. The key to the transformation is to obtain, through the eigenvectors, low-dimensional scores that describe the original multi-dimensional characteristic information. The specific implementation steps are as follows:
1. The initial variable matrix X is constructed from the characteristic parameter information of the kinematic segments above, where the number of characteristic parameters is l = 10 and the number of samples is n = 317.
2. The normalized matrix Z is obtained using the standardization formula z_ij = (x_ij − x̄_j)/s_j, where x_ij is the element in row i and column j of the initial variable matrix (i = 1, 2, ..., n; j = 1, 2, ..., l), and x̄_j and s_j² are the mean and variance of column j, respectively.
3. The correlation coefficient matrix R is calculated from the correlations of the elements of Z, with elements r_ij = (1/(n − 1)) Σ_{k=1}^{n} z_ki z_kj. The eigenvalues and eigenvectors of R are then obtained by solving the characteristic equation |R − λI| = 0.

4. The principal component contribution rate is calculated from the eigenvalues. An eigenvalue measures how much of the information contained in the sample data a principal component covers: the larger the eigenvalue, the more information is covered when it is used to describe the original data. Principal components can therefore be selected according to eigenvalue size, the contribution rate represents the degree to which each principal component retains the initial information, and the cumulative contribution rate determines the number of principal components used to express the features of the original data. The contribution rate of the kth principal component is

α_k = λ_k / Σ_{j=1}^{l} λ_j

and the cumulative contribution rate of the first g principal components is

α(g) = Σ_{k=1}^{g} α_k

where λ_k is the eigenvalue of the kth principal component (k = 1, 2, ..., g) and g is the number of principal components. The score of the ith sample on the kth principal component is

f_k(i) = Σ_{j=1}^{l} V_kj z_ij

where V_kj is the jth coefficient of the kth eigenvector; stacking these scores for all samples gives the principal component score matrix. Following these steps, the eigenvalues and their contribution rates were calculated from the correlation coefficient matrix of the standardized samples, arranged in descending order of eigenvalue, and the cumulative contribution rates were summed row by row starting from the largest eigenvalue. The results are shown in Table 2. The eigenvalues of the first three rows in the table were extracted, and the cumulative contribution rate of these three principal components reached 78.3817%; that is, the information of the initial variables can be well described by these three principal components.
Table 3 shows the eigenvectors corresponding to the three selected eigenvalues, namely the principal component loadings. The principal component loading matrix reflects the correlation coefficients between the three principal components f_1, f_2, f_3 and the 10 feature parameters. The original feature parameters can be transformed according to the eigenvectors to obtain the scores that describe the sample information under each principal component. The table shows that the first principal component mainly reflects five characteristic parameters: the average velocity, maximum velocity, standard deviation of velocity, maximum acceleration, and average acceleration. The second principal component mainly reflects one characteristic parameter, the acceleration time ratio, while the third principal component mainly reflects four characteristic parameters: the deceleration time ratio, maximum deceleration, average deceleration, and total mileage. The three principal components essentially cover the information described by the 10 feature parameters, so they can replace the 10 feature parameters in describing the kinematic segments, completing the dimensionality reduction. Table 4 shows the principal component scores of some of the samples.
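The four steps above can be sketched with NumPy (an illustrative implementation; the cumulative-contribution threshold of 0.78 mirrors the roughly 78.38% retained in Table 2 but is an adjustable assumption):

```python
import numpy as np

def pca_reduce(X, cum_thresh=0.78):
    """X: n samples x l features. Standardize, eigendecompose the
    correlation matrix, keep principal components until the cumulative
    contribution rate exceeds cum_thresh, and return the scores."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # normalized matrix Z
    R = (Z.T @ Z) / (len(X) - 1)                       # correlation matrix R
    eigval, eigvec = np.linalg.eigh(R)                 # |R - lambda*I| = 0
    order = np.argsort(eigval)[::-1]                   # descending eigenvalues
    eigval, eigvec = eigval[order], eigvec[:, order]
    contrib = eigval / eigval.sum()                    # contribution rates
    g = int(np.searchsorted(np.cumsum(contrib), cum_thresh)) + 1
    scores = Z @ eigvec[:, :g]                         # principal component scores
    return scores, contrib[:g]
```

Applied to the 317 × 10 feature matrix, this returns the 3-D score matrix used for clustering below.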

Kinematic Fragment Clustering
The establishment of the neural network vehicle velocity prediction sub-models requires samples for the different driving conditions. Therefore, the collected kinematic segments need to be classified after the characteristic parameters of the 317 kinematic segments are reduced from ten dimensions to three. Unlike general classification, kinematic segments have multi-dimensional characteristics, and the classification criteria cannot be determined in advance; clustering is therefore applicable here. A clustering algorithm automatically forms categories based on the features of the input samples and groups kinematic segments with similar features together. Commonly used clustering methods include partitioning, hierarchical, density-based, grid-based, and model-based methods. In this paper, the K-means algorithm was used to cluster the kinematic segment samples. The process of the K-means algorithm is as follows [29]. Input: the number of clusters k and a dataset D containing n objects. Output: the set of k clusters and the class number of each object.

1. According to the randomness principle, select k objects from the dataset as the centers of k clusters, called the initial cluster centers.
2. Calculate the distance from every other object to each initial cluster center and assign each object to the cluster with the shortest Euclidean distance, obtaining k initial clusters. Distance characterizes the similarity between objects, describing the real distance between two objects in space:

d(x_i, ξ_k) = sqrt( Σ_j (x_ij − ξ_kj)² )

3. Move the cluster centers: the mean of the data objects in each initial cluster is calculated, and the centroid of each cluster is taken as the new cluster center.
4. Calculate the distance from all data objects to the new cluster centers and reclassify the data objects according to the minimum distance principle. Repeat steps 2-4; the algorithm ends when the objective function value E reaches its minimum and no longer changes.
The cluster center is calculated as ξ_k = (1/n_k) Σ_{x∈C_k} x, and the objective function is E = Σ_k Σ_{x∈C_k} ||x − ξ_k||², where ξ_k is the cluster center (k = 1, 2, 3), C_k is the kth cluster, and n_k is the number of samples in it. Table 5 shows the cluster centers after classification. By clustering the three-dimensional principal component information of the samples, the number of classes for the K-means algorithm was set to three. The cluster corresponding to the first cluster center ξ_1 represents driving condition type 3, with a long driving distance and high average and maximum velocities. The cluster corresponding to the second cluster center ξ_2 represents working condition type 1, with a short driving distance and low average and maximum velocities, while the cluster corresponding to the third cluster center ξ_3 represents working condition type 2, with a medium driving distance and medium average and maximum velocities. The three types of training samples obtained by the K-means clustering analysis are shown in Figure 3.
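A minimal NumPy version of these steps (an illustrative sketch; K-means is sensitive to the random initialization, so several restarts with different seeds are advisable in practice):

```python
import numpy as np

def kmeans(data, k=3, max_iter=100, seed=0):
    """Plain K-means on the 3-D principal-component scores.
    Returns the cluster centers xi, the label of each sample, and E."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]   # step 1: random init
    for _ in range(max_iter):
        # step 2: assign each object to the nearest center (Euclidean distance)
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # step 3: move each center to the centroid of its cluster
        new_centers = np.array([data[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        # step 4: stop when the centers (and hence E) no longer change
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    E = float(sum(np.sum((data[labels == j] - centers[j]) ** 2) for j in range(k)))
    return centers, labels, E
```

The returned labels split the kinematic segments into the three working condition types used to train the sub-models.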

QPSO Model
The quantum particle swarm optimization (QPSO) algorithm is a particle swarm optimization (PSO) algorithm improved with quantum behavior. It uses a wave function to describe the state of particles in quantum space and obtains the probability density of a particle appearing at a certain position by solving the Schrödinger equation; the Monte Carlo stochastic simulation method is then used to obtain the particle position equation. In quantum space, particles satisfy an aggregation state with completely different properties, and the search can cover the whole feasible solution space. Therefore, the global search performance of the QPSO algorithm is far better than that of the standard PSO algorithm [30-32]. The motion state equations of QPSO can be expressed as follows [33].
The local attractor of particle i in dimension j is

p_ij(t) = φ P_ij(t) + (1 − φ) G_j(t)

where φ ∈ (0, 1] is a random number, P_ij(t) is the individual best position of particle i at time t, G_j(t) is the global best position of the population at time t, and j represents the dimension. In the QPSO algorithm, the Monte Carlo strategy is used to randomly generate new points. The position update of particle i in iteration t + 1 can be expressed as

Y_ij(t + 1) = p_ij(t) ± (L_ij(t)/2) ln(1/u)

where u ∈ (0, 1] is produced by a uniform distribution, Y_ij(t + 1) represents the new position of the particle, and L_ij(t) represents the distance between the current position of the particle and the average best position of the individuals:

L_ij(t) = 2β |E_best,j − Y_ij(t)|
E_best = (1/N) Σ_{i=1}^{N} P_i(t)
β = m − (m − n) T_ite / T_ite,max

where Y_ij(t) represents the position from the last iteration of the particle, β is the expansion coefficient, E_best represents the average optimal position of the particles, N represents the total number of particles, m and n are the expansion coefficient parameters, and T_ite,max and T_ite represent the maximum and current iteration numbers of the algorithm, respectively.
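The update equations can be sketched as follows (an illustrative minimal QPSO minimizer; the search bounds, particle count, and the linear decrease of the expansion coefficient β from m to n are assumptions consistent with the description above):

```python
import numpy as np

def qpso(fitness, dim, n_particles=20, iters=100, lb=-5.0, ub=5.0,
         m=1.0, n=0.5, seed=0):
    """Minimal QPSO: each particle is redrawn around an attractor between
    its personal best P and the global best G, with contraction beta."""
    rng = np.random.default_rng(seed)
    Y = rng.uniform(lb, ub, (n_particles, dim))       # positions
    P = Y.copy()                                      # personal bests
    f_p = np.array([fitness(y) for y in Y])
    G = P[f_p.argmin()].copy()                        # global best
    for t in range(iters):
        beta = m - (m - n) * t / iters                # expansion coefficient
        E_best = P.mean(axis=0)                       # mean best position
        phi = rng.uniform(0.0, 1.0, (n_particles, dim))
        p = phi * P + (1 - phi) * G                   # local attractor
        u = rng.uniform(1e-12, 1.0, (n_particles, dim))
        L = 2.0 * beta * np.abs(E_best - Y)
        sign = np.where(rng.random((n_particles, dim)) < 0.5, 1.0, -1.0)
        Y = np.clip(p + sign * (L / 2.0) * np.log(1.0 / u), lb, ub)
        f_y = np.array([fitness(y) for y in Y])
        better = f_y < f_p
        P[better], f_p[better] = Y[better], f_y[better]
        G = P[f_p.argmin()].copy()
    return G, float(f_p.min())
```

On a simple convex test function this sketch converges quickly, which is the behavior exploited later when tuning the LSTM hyperparameters.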

LSTM Model
Neural networks have made breakthroughs in the fields of image and natural language processing. Among them, recurrent neural networks (RNNs) are widely used in the processing and prediction of time series because they can learn correlations between earlier and later data. As an important variant of the RNN, the LSTM uses three "gates" in its structural design to control the hidden layer state, which retains useful information at every moment. In many tasks, the LSTM performs better than the classical RNN [33-36].
The LSTM cell diagram is shown in Figure 4. In the diagram, x = [x_1, x_2, x_3, ..., x_{t+1}], where t is the time step and x_t is the input at time t; h = [h_1, h_2, h_3, ..., h_{t+1}], where h_t is the corresponding output at time t; and c = [c_1, c_2, c_3, ..., c_{t+1}], where c_t is the corresponding state of the memory cell at time t. The cell state is the memory unit of the LSTM, carrying all previous state information [34]. Compared with the RNN unit structure, the LSTM unit structure is more complex and has more parameters.
The forget gate determines what information to discard from the cell state: it reads h_{t−1} and x_t and uses the sigmoid function to determine the proportion of the cell state forgotten at each moment:

f_t = σ(M_f · [h_{t−1}, x_t] + b_f)

where M_f and b_f are the weight matrix and bias vector of this stage, respectively.

As for the input gate and select memory stage, selecting memory involves selectively memorizing the input and screening the important information into the current state. In this stage, h_{t−1} and x_t are read, and the proportion of the new memory c̃_t written into the memory cell state c_t is determined by the sigmoid function:

i_t = σ(M_i · [h_{t−1}, x_t] + b_i)
c̃_t = tanh(M_c · [h_{t−1}, x_t] + b_c)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t

where M_i and b_i are the weight matrix and bias vector of the sigmoid layer, respectively, M_c and b_c are the weight matrix and bias vector of the tanh layer, tanh is the hyperbolic tangent function, and ⊙ denotes element-wise multiplication.

The output gate calculates the proportion of the cell state c_t output through the sigmoid function and then multiplies it by the activated cell state to obtain the final output:

o_t = σ(M_O · [h_{t−1}, x_t] + b_O)
h_t = o_t ⊙ tanh(c_t)

where M_O and b_O are the weight matrix and bias vector of the output gate, respectively.
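A single forward step of an LSTM cell with the three gates can be sketched in NumPy (illustrative; the weight matrices M_f, M_i, M_c, M_O act on the concatenation of h_{t−1} and x_t as in the notation above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W):
    """One LSTM step. W holds the weight matrices Mf, Mi, Mc, Mo
    (each d_h x (d_h + d_x)) and the bias vectors bf, bi, bc, bo."""
    z = np.concatenate([h_prev, x_t])           # [h_{t-1}, x_t]
    f_t = sigmoid(W["Mf"] @ z + W["bf"])        # forget gate
    i_t = sigmoid(W["Mi"] @ z + W["bi"])        # input gate
    c_tilde = np.tanh(W["Mc"] @ z + W["bc"])    # candidate memory
    c_t = f_t * c_prev + i_t * c_tilde          # new cell state
    o_t = sigmoid(W["Mo"] @ z + W["bo"])        # output gate
    h_t = o_t * np.tanh(c_t)                    # new hidden state
    return h_t, c_t
```

Iterating this step over a velocity sequence produces the hidden states from which the predicted future velocities are read out.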


Prediction Model Based on K-Means-QPSO-LSTM
In recent years, LSTM neural network prediction models based on QPSO have been widely used in various disciplines. In terms of PM2.5 concentration prediction, the prediction model combining QPSO with LSTM can accurately predict the PM2.5 concentration after training [37]. In terms of the usage prediction of shared bicycles, the prediction model combining QPSO with LSTM could predict the number of bicycles needed per hour in the following day [38]. A forecast model combining QPSO with LSTM could predict the freight volume over a future period (hour, day, or month) [39], and each of the above models showed good predictive power when verified.
Therefore, the prediction accuracy can be significantly improved by optimizing the LSTM hyperparameters. When a LSTM neural network is used for model training, the hyperparameters (the numbers of hidden layer neurons L_a and L_b, the number of iterations K, and the learning rate I_r) need to be determined. The selection of hyperparameter values has a great impact on the performance and training effect of the LSTM neural network, and an appropriate selection can improve the accuracy of the results. In order to avoid determining appropriate hyperparameters through a large number of experiments, this paper used the QPSO algorithm to optimize L_a, L_b, K, and I_r of the LSTM neural network and constructed a QPSO-LSTM prediction model.
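In such a scheme, each particle position must be decoded into a valid hyperparameter set. A minimal sketch, assuming the search ranges reported in the parameter settings of this paper (integer counts for neurons and iterations, continuous learning rate):

```python
# assumed search ranges (taken from the QPSO settings reported in this paper)
RANGES = {
    "L_a": (1, 200),     # hidden neurons, first layer (integer)
    "L_b": (1, 200),     # hidden neurons, second layer (integer)
    "K":   (1, 100),     # LSTM training iterations (integer)
    "I_r": (1e-4, 1e-2), # learning rate (continuous)
}

def decode_particle(position):
    """Map a raw 4-dimensional particle position to valid LSTM hyperparameters."""
    decoded = {}
    for value, (name, (lo, hi)) in zip(position, RANGES.items()):
        v = min(max(value, lo), hi)  # clip into the search range
        decoded[name] = v if name == "I_r" else int(round(v))
    return decoded

print(decode_particle([75.4, 310.0, 89.6, 0.0094]))
# → {'L_a': 75, 'L_b': 200, 'K': 90, 'I_r': 0.0094}
```

Out-of-range components (such as 310.0 above) are clipped rather than rejected, so every particle always maps to a trainable configuration.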
The process is shown in Figure 5, where L_a1 and L_b1 represent the numbers of hidden layer neurons of the first sub-LSTM model, and K_1 and I_r1 its number of iterations and learning rate. The overall procedure for the algorithm design and optimization is as follows:
1. Data processing. The data processing steps have been detailed in Section 2 and will not be elaborated in this section. Three types of driving condition datasets were obtained through data processing. For the first type of dataset, because there was a lot of data, the first 90% of the dataset was used as the training set and the last 10% as the test set. For the second and third types of datasets, the first 70% of the driving condition data was used as the training set and the last 30% as the test set. The training set was used to train the sub-LSTM model and optimize its parameters, while the test set was used to evaluate the model's performance.
2. Initialize the particle swarm parameters, including the particle swarm size, particle dimension, number of iterations, and particle positions (hidden layer neuron numbers (L_ai, L_bi), training number (K_i), and learning rate (I_ri), i = 1, 2, 3).
3. Update the position of each particle and calculate its fitness value. The fitness value of each particle is a function of the mean square error (MSE) on the training set.
4. Update the individual and population historical optimal fitness values and positions of each particle, as well as the other parameters.
5. Check the end condition: the maximum number of iterations is reached, or the difference in fitness value between two iterations falls below a minimum threshold. If not, return to step 3 to continue the iteration.
6. If the conditions are met, the global optimal values are output to the sub-LSTM network, achieving the purpose of optimizing the hyperparameters.
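The iteration loop described in the steps above can be sketched as follows. In QPSO, particles carry positions only; each new position is sampled around an attractor built from the personal and global bests using the mean-best point. This is a minimal sketch under common QPSO conventions (the contraction-expansion schedule is an assumption), with a toy quadratic standing in for the training-set MSE fitness:

```python
import numpy as np

def qpso_minimize(fitness, bounds, n_particles=10, n_iter=100, seed=0):
    """Quantum-behaved PSO sketch: sample each position around an attractor
    using the mean of the personal bests (no velocity term)."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    pbest = x.copy()
    pbest_fit = np.array([fitness(p) for p in x])
    for t in range(n_iter):
        beta = 1.0 - 0.5 * t / n_iter            # contraction-expansion coefficient
        gbest = pbest[np.argmin(pbest_fit)]      # global best position
        mbest = pbest.mean(axis=0)               # mean of personal bests
        phi = rng.uniform(size=x.shape)
        u = rng.uniform(1e-12, 1.0, size=x.shape)
        attractor = phi * pbest + (1.0 - phi) * gbest
        sign = np.where(rng.uniform(size=x.shape) < 0.5, 1.0, -1.0)
        x = attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        x = np.clip(x, lo, hi)                   # keep inside search ranges
        fit = np.array([fitness(p) for p in x])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
    return pbest[np.argmin(pbest_fit)], float(pbest_fit.min())

# usage: minimize a toy quadratic standing in for the training-set MSE
best, best_fit = qpso_minimize(lambda p: float(np.sum((p - 3.0) ** 2)),
                               bounds=[(-10, 10), (-10, 10)])
```

In the paper's setting, the fitness call would train a sub-LSTM with the decoded hyperparameters and return its training-set MSE.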

Training Setting
The three types of kinematic fragment samples in Figure 3 were used to train the sub-neural networks. In order to quantitatively and accurately evaluate the prediction effect of the different methods, the root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R^2) were adopted as evaluation indices in this paper, which are commonly expressed as follows:

RMSE = sqrt( (1/Q) Σ_{ε=1}^{Q} (v_true(ε) − v_pre(ε))^2 )
MAE = (1/Q) Σ_{ε=1}^{Q} |v_true(ε) − v_pre(ε)|
R^2 = 1 − Σ_{ε=1}^{Q} (v_true(ε) − v_pre(ε))^2 / Σ_{ε=1}^{Q} (v_true(ε) − v̄_true)^2

where Q is the number of test samples; v_true and v_pre are the true velocity and predicted velocity at time ε, respectively; and v̄_true is the average of the true vehicle velocity in the test set. The smaller the values of RMSE and MAE, the lower the prediction error, indicating a higher performance of the vehicle velocity prediction model. R^2 serves as an indicator of the level of model fitting, with a value closer to 1 implying a better fit to the data and higher prediction accuracy.
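The three indices can be computed directly from their definitions; a minimal sketch with a small synthetic test set:

```python
import math

def evaluate(v_true, v_pre):
    """RMSE, MAE, and R^2 as defined above (velocities in km/h)."""
    q = len(v_true)
    sq_err = [(t - p) ** 2 for t, p in zip(v_true, v_pre)]
    abs_err = [abs(t - p) for t, p in zip(v_true, v_pre)]
    mean_true = sum(v_true) / q
    ss_tot = sum((t - mean_true) ** 2 for t in v_true)  # total sum of squares
    rmse = math.sqrt(sum(sq_err) / q)
    mae = sum(abs_err) / q
    r2 = 1.0 - sum(sq_err) / ss_tot
    return rmse, mae, r2

rmse, mae, r2 = evaluate([10.0, 20.0, 30.0, 40.0], [11.0, 19.0, 31.0, 39.0])
# → rmse = 1.0, mae = 1.0, r2 = 0.992
```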

Parameter Setting and Training
In order to obtain an accurate neural network vehicle velocity prediction model, the established sub-LSTM neural networks should accurately approximate the training samples. Therefore, in this section, the hyperparameters of the LSTM neural networks were optimized by the QPSO algorithm to establish the different sub-LSTM network models. Test samples were selected to verify the degree of approximation of the LSTM prediction results, and the sub-LSTM neural network with the highest degree of approximation was determined as the final vehicle velocity prediction model.
When the training dataset was the first type of driving condition, the parameters of the QPSO algorithm were set as follows: the number of particles was 10, the number of iterations was 100, the search range for the number of hidden layer neurons was [1, 200], the range for the number of LSTM iterations was [1, 100], and the range for the learning rate was [0.0001, 0.01].
The changes in particle fitness with the increase in iterations of the QPSO-LSTM 1 model are shown in Figure 6. The particle fitness finally stabilized at 0.006814. After QPSO optimization of the LSTM 1 model hyperparameters, the final results are shown in Figure 7 and Table 6: the numbers of hidden layer neurons were L_a1 = 75 and L_b1 = 116, the optimal number of iterations of LSTM 1 was 90, and the optimal learning rate of LSTM 1 was 0.0094. The changes in particle fitness with the increase in iterations of the QPSO-LSTM 2 model are shown in Figure 8. The particle fitness finally stabilized at 0.002972. After QPSO optimization of the LSTM 2 model hyperparameters, the final results are shown in Figure 9 and Table 6: the numbers of hidden layer neurons were L_a2 = 61 and L_b2 = 80, the optimal number of iterations of LSTM 2 was 99, and the optimal learning rate of LSTM 2 was 0.00977. The changes in particle fitness with the increase in iterations of the QPSO-LSTM 3 model are shown in Figure 10. The particle fitness finally stabilized at 0.00321. After QPSO optimization of the LSTM 3 model hyperparameters, the final results are shown in Figure 11 and Table 6: the numbers of hidden layer neurons were L_a3 = 85 and L_b3 = 95, the optimal number of iterations of LSTM 3 was 92, and the optimal learning rate of LSTM 3 was 0.0093.
In general, LSTM hyperparameters are mostly set based on experience, so the hyperparameters in a LSTM prediction model are not necessarily optimal. To evaluate the effect of optimization, the vehicle velocity was predicted both by a single LSTM model, which takes the historical velocity data as input with empirically set hyperparameters, and by the optimized models (PSO-LSTM and QPSO-LSTM), in which the optimized hyperparameters were updated into the LSTM model to improve its accuracy. Finally, the vehicle velocity prediction results of the single model and the optimized models were compared.
From Figures 12-14, it can be seen that when the vehicle velocity was steady (i.e., there was no sudden increase or decrease in acceleration), the predicted vehicle velocity was more accurate, including that of the QPSO-LSTM method. In summary, the vehicle velocity prediction performance of the QPSO-LSTM algorithm was better than that of the single LSTM algorithm and the PSO-LSTM algorithm. When the prediction horizon was 2 s, the trend and magnitude of the predicted vehicle velocity were almost consistent with the actual vehicle velocity curve; only at the turning points of the vehicle velocity did the prediction error increase. As shown in Table 7, for different prediction horizon lengths, the three prediction algorithms behaved consistently, and the prediction accuracy indicators were better for shorter prediction horizons. For example, with the single LSTM prediction algorithm, as the prediction time horizon increased, the RMSE became larger, ranging from 1.9662 km/h (type 1, 2 s), 1.8531 km/h (type 2, 2 s), and 2.8681 km/h (type 3, 2 s) to 9.6329 km/h (type 1, 10 s), 8.1622 km/h (type 2, 10 s), and 8.9326 km/h (type 3, 10 s). The MAE also became larger, ranging from 1.5097 km/h (type 1, 2 s), 1.288 km/h (type 2, 2 s), and 2.0725 km/h (type 3, 2 s) to 7.0982 km/h (type 1, 10 s), 5.7281 km/h (type 2, 10 s), and 5.7073 km/h (type 3, 10 s). The R^2 became smaller, and the fit of the prediction model became worse, ranging from 0.9723 (type 1, 2 s), 0.98785 (type 2, 2 s), and 0.99134 (type 3, 2 s) to 0.32817 (type 1, 10 s), 0.78341 (type 2, 10 s), and 0.87092 (type 3, 10 s). The same pattern was observed in both the PSO-LSTM and QPSO-LSTM prediction algorithms. In summary, based on the prediction accuracy indicators RMSE, MAE, and R^2, a prediction time horizon of 5 s was considered more appropriate, and the vehicle velocity prediction model based on QPSO-LSTM was superior to the LSTM and PSO-LSTM prediction algorithms.

Simulation Results Based on K-Means-QPSO-LSTM
To verify the effectiveness of the K-means-QPSO-LSTM vehicle velocity prediction model proposed in this paper, a cyclic test scenario, the China Heavy-duty commercial vehicle Test Cycle-Tractor Trailer (CHTC-TT), which consists of three types of driving conditions, was used to verify the prediction effect. The test cycle set is shown in Figure 15. Figure 16 shows the results of classifying the current driving condition type based on the past 10 s of vehicle velocity. The blue line represents the vehicle velocity, and the red line represents the type of driving condition. The samples used for clustering were the kinematic segments between two idle points, which usually have a longer duration. Since the time horizon for collecting velocity information was 10 s, the same kinematic segment may be divided into different types, especially in time horizons near sudden acceleration or deceleration, where the fluctuation of the identified driving condition type is obvious. A simulation-based validation of the vehicle velocity prediction method based on kinematic segment identification in the typical cycle condition CHTC-TT is shown in Figure 17, where the prediction results of the different vehicle velocity prediction methods are compared.
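The online identification step can be sketched as follows: compute feature parameters from the past 10 s of velocity, project them with the stored PCA transform, and assign the nearest K-means cluster center. The feature set, PCA transform, and cluster centers below are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

def features(v, dt=1.0):
    """Illustrative feature parameters from a 10 s velocity window (km/h)."""
    a = np.diff(v) / dt  # acceleration between consecutive samples
    return np.array([v.mean(), v.std(), v.max(),
                     a.max(), a.min(),
                     np.mean(v < 0.5)])  # idle-time ratio

def identify_condition(v_window, pca_mean, pca_components, centers):
    """Project the window features into principal-component space and pick
    the nearest cluster center (returns the 1-based driving condition type)."""
    z = (features(v_window) - pca_mean) @ pca_components.T
    dists = np.linalg.norm(centers - z, axis=1)
    return int(np.argmin(dists)) + 1

# placeholder PCA transform (keeps the first 3 features) and cluster centers
pca_mean = np.zeros(6)
pca_components = np.eye(3, 6)
centers = np.array([[15.0, 5.0, 25.0],    # type 1: low velocity, long idling
                    [45.0, 8.0, 60.0],    # type 2: intermediate
                    [75.0, 6.0, 85.0]])   # type 3: high velocity
v = np.array([70, 72, 74, 75, 76, 77, 78, 78, 79, 80], dtype=float)
print(identify_condition(v, pca_mean, pca_components, centers))  # → 3
```

In the full method, the returned type selects which sub-QPSO-LSTM model predicts the next horizon, and the identification is re-run as the window slides forward.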
As shown in Figure 17, compared with the vehicle velocity prediction method based on kinematic segment recognition, the prediction accuracy of the vehicle velocity methods with LSTM, RBF-LSTM, and CNN-LSTM was not ideal under driving conditions such as idling, emergency acceleration, and emergency deceleration. Unlike the vehicle velocity prediction without kinematic segment recognition, the vehicle velocity prediction model based on K-means-QPSO-LSTM with driving condition type recognition, although unable to predict sudden uncertainty in future driving information, could accurately identify the current driving condition based on the past driving feature parameters and update the driving condition type in real-time. This effectively solves the problem of fluctuating driving conditions. For the vehicle velocity prediction model with driving condition type recognition, the training samples of the sub-QPSO-LSTM were obtained by classifying the feature parameters and are therefore more representative, so the prediction accuracy was better even during stable driving periods. In summary, the vehicle velocity prediction method based on K-means-QPSO-LSTM with kinematic segment recognition improved the prediction accuracy in both changing and stable driving conditions.
Table 8 shows different prediction results of four vehicle velocity prediction models under the input of the CHTC-TT test cycle.It can be seen that the vehicle velocity prediction method based on kinematic segment recognition with K-means-QPSO-LSTM had the best prediction performance.

Conclusions
In this paper, the kinematic segments of typical driving conditions were taken as samples, a clustering method for multi-dimensional parameter samples based on principal component analysis was proposed, and QPSO-LSTM vehicle velocity prediction sub-models were established for the three types of samples. In the process of velocity prediction, the category of the current driving condition was determined by extracting the driving information of the past period based on K-means, the QPSO-LSTM velocity prediction sub-model of that category was used to predict the velocity, and an adaptive velocity prediction method based on kinematic segment recognition was thus realized.

• Kinematic segments of typical driving cycles were extracted, and ten characteristic parameters, including the average velocity, mean square error of velocity, and maximum velocity, were calculated. In order to reduce computation and avoid information overlap, the principal component analysis method was used to reduce the dimension of the eigenvector composed of the feature parameters. Each principal component is a parameter related to all ten features. Three principal components are used to describe the ten feature parameters of each kinematic segment, realizing the transformation of the feature vector from ten dimensions to three.
• According to the three principal component scores of each kinematic segment sample, the K-means algorithm was used to cluster the samples, and sample sets under three typical working conditions were obtained. These represent, respectively, type 3 (high velocity, long driving mileage), type 1 (low velocity, long idling time, short driving mileage), and type 2 (characteristics between the two).
• The vehicle velocity prediction sub-models based on LSTM were trained with the three kinds of kinematic fragment samples, respectively, and the optimal LSTM hyperparameters were found by iterative correction with the QPSO algorithm. Combining kinematic segment recognition with the vehicle velocity prediction models, a vehicle velocity prediction method based on K-means-QPSO-LSTM was obtained. This method adaptively switches the vehicle velocity prediction sub-model in real-time according to the actual driving information. Compared with the traditional neural network-based vehicle velocity prediction method, the prediction accuracy was improved.
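The extraction pipeline in the first two bullets (ten features, PCA to three components, K-means into three condition types) can be sketched with scikit-learn. The feature matrix here is synthetic; the paper's ten feature parameters and fitted values are not reproduced:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# synthetic stand-in: 60 kinematic segments x 10 feature parameters,
# drawn around three different means so three clusters exist to find
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(20, 10))
               for m in (0.0, 5.0, 10.0)])

# standardize, reduce 10 features to 3 principal components, then cluster
Z = StandardScaler().fit_transform(X)
scores = PCA(n_components=3).fit_transform(Z)   # 3-D score vector per segment
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
```

Standardizing before PCA keeps features with large numeric ranges (e.g. maximum velocity) from dominating the components; the fitted scaler, PCA, and cluster centers would then be reused for online identification.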
The model proposed in this paper provides a theoretical basis for energy management strategies of hybrid electric vehicles based on vehicle velocity prediction. However, there are still some limitations. How to strike a balance between prediction accuracy and model complexity is always a difficult problem: the proposed vehicle velocity prediction model has clear advantages in terms of prediction accuracy, but the time taken to train the deep model is also of concern. In the future, we will consider placing the model in a parallel framework such as Apache Spark to improve its efficiency. In addition, the weather factor was not considered in this paper. In future work, we will add more influencing factors and use a spatio-temporal deep learning neural network model to further improve the prediction accuracy of the vehicle velocity prediction model.

Figure 3 .
Figure 3. The training sample. (a) The first type of driving condition sample; (b) the second type of driving condition sample; (c) the third type of driving condition sample.

Figure 5.
Figure 5. Structure of the vehicle velocity prediction model.

Figure 6.
Figure 6. The fitness curve of QPSO with iteration times for LSTM 1.

Figure 7 .
Figure 7. The change in the QPSO hyperparameters with iteration times for LSTM 1. (a) L_a1 changes with the number of iterations, (b) L_b1 changes with the number of iterations, (c) K_1 changes with the number of iterations, (d) I_r1 changes with the number of iterations.

Figure 8 .
Figure 8. The fitness curve of QPSO with iteration times for LSTM 2.

Figure 9 .
Figure 9. The change in the QPSO hyperparameters with iteration times for LSTM 2. (a) L_a2 changes with the number of iterations, (b) L_b2 changes with the number of iterations, (c) K_2 changes with the number of iterations, (d) I_r2 changes with the number of iterations.

Figure 10 .
Figure 10. The fitness curve of QPSO with iteration times for LSTM 3.

Figure 11 .
Figure 11. The change in the QPSO hyperparameters with iteration times for LSTM 3. (a) L_a3 changes with the number of iterations, (b) L_b3 changes with the number of iterations, (c) K_3 changes with the number of iterations, (d) I_r3 changes with the number of iterations.

Appl. Sci. 2024, 14, x FOR PEER REVIEW

Figure 12 .
Figure 12. Prediction results of driving condition type 1 by different methods with 2 s time horizon.

Figure 13 .
Figure 13. Prediction results of driving condition type 2 by different methods with 2 s time horizon.

Figure 14.
Figure 14. Prediction results of driving condition type 3 by different methods with 2 s time horizon.

Figure 16 .
Figure 16. Online driving condition type identification.

Figure 17 .
Figure 17. Comparison of the prediction results of different methods.

Table 1 .
Characteristic parameters of the kinematic segments.

Table 2 .
Characteristics and contribution rates of the principal components.

Table 3 .
Principal component load matrix.

Table 4 .
Score data for the three principal components.

Table 5 .
The cluster center.

Table 6 .
LSTM hyperparameters obtained by different optimization methods with a 2 s time horizon.

Table 7 .
The evaluation indices of the results obtained by the different methods with different time horizons.

Table 8 .
The evaluation indicators of the predicted results.