Article

Reinforcement Learning-Based Approach to Reduce Velocity Error in Car-Following for Autonomous Connected Vehicles

by
Abu Tayab
1,*,
Yanwen Li
1,* and
Ahmad Syed
2
1
School of Mechanical Engineering, Yanshan University, Qinhuangdao 066004, China
2
Department of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China
*
Authors to whom correspondence should be addressed.
Machines 2024, 12(12), 861; https://doi.org/10.3390/machines12120861
Submission received: 13 November 2024 / Revised: 21 November 2024 / Accepted: 24 November 2024 / Published: 27 November 2024
(This article belongs to the Section Vehicle Engineering)

Abstract

This paper suggests an adaptive car-following strategy for autonomous connected vehicles (ACVs) that integrates a robust controller with an extended disturbance estimator (EDE) and reinforcement learning (RL) to improve performance in dynamic traffic environments. Traditional car-following methods struggle to handle external disturbances and uncertainties in vehicle dynamics. The suggested method addresses this by dynamically adjusting the EDE gain using RL, enabling the system to continuously optimize its control strategy in real time. Simulations were conducted in two scenarios, a single following vehicle and two following vehicles, each tracking a leading vehicle. Results showed significant improvements in velocity tracking, with the RL-based control method reducing velocity error by over 50% compared to conventional approaches. The technique also led to smoother acceleration control, enhancing stability and driving comfort. Quantitative metrics, such as total reward, velocity error, and acceleration magnitude, indicate that the suggested EDE-RL-based strategy provides a robust and adaptable solution for autonomous vehicle control. These findings indicate that RL, combined with robust control, can improve the performance and safety of ACV systems, making it suitable for broader applications in autonomous vehicle platooning and complex traffic scenarios, including vehicle-to-vehicle (V2V) communication.

1. Introduction

The autonomous connected vehicle (ACV) is poised to transform highway transport and represents an unavoidable trend for the future [1,2]. Car-following is a fundamental traffic situation, and various control methods have been applied to it [3,4]. In complex transportation environments, however, car-following performance based purely on classical control theory is limited. Compared with traditional control-theoretic methods, robust controllers have demonstrated successful outcomes in the car-following scenario for ACVs.
Due to external disturbances and parameter uncertainties, car-following systems often suffer a loss of control performance. Although the influence of disturbances can be compensated when their values are known accurately, measuring these values directly is typically challenging. To address this issue, researchers have developed various anti-disturbance methods [5], commonly grouped under the label of robust controllers. Such a controller uses the disturbance estimate to counteract the effect of the disturbance, while a conventional controller maintains stability of the closed-loop system, ultimately achieving the intended objective [6].
The cornerstone of a robust controller is the disturbance estimator. The conventional nonlinear disturbance estimator was presented first, and a new type of nonlinear disturbance estimator was later suggested for estimating the disturbance [7]; its estimate approaches the real disturbance value asymptotically. A nonlinear gain was then introduced into the nonlinear disturbance estimator, and the corresponding study reports a significant performance improvement due to this gain. It is further noted that when the disturbance's first derivative is non-zero [8], a bounded estimation error remains in the nonlinear disturbance estimator; by choosing the right gain, this error can be reduced to a negligible level.
In addition to the traditional nonlinear disturbance estimator, high-order disturbance estimators have also received attention. An integral term is used to develop a higher-order disturbance estimator under the assumption that the higher-order derivative of the disturbance is zero. Estimators incorporating multiple integral terms were developed to account for measurement noise and internal disturbances in the system analysis. Subsequently, the higher-order linear extended state estimator was extended with a nonlinear formulation, and the initial first-order nonlinear disturbance estimator was likewise expanded to a higher-order version [9,10]; the precision of disturbance estimation can be improved through such enhancements. Moreover, certain sliding-mode disturbance estimators improve estimation precision by incorporating higher-order terms [11,12].
Nevertheless, the disturbance estimator's coefficients are typically set by manual tuning based on past experience. These coefficients cannot be modified dynamically according to real-time traffic conditions, so this traditional estimator-based approach may suffer degraded performance in complex traffic settings. Concurrently, there is growing interest in artificial intelligence techniques such as neural networks [13] and reinforcement learning [14,15,16]. Vehicle research has incorporated reinforcement learning algorithms to enhance the training process; these algorithms maximize cumulative rewards through interactions with the environment, ultimately improving the optimal policies generated in response to environmental changes [17,18].
The problem mentioned above is addressed by introducing reinforcement learning into the car-following scenario within the robust controller framework. This research presents a car-following approach for ACVs and makes the following significant contributions:
  • A car-following approach is theoretically derived using the Lyapunov method within a robust controller framework. An extended disturbance estimator (EDE) is introduced into the car-following scenario to enhance car-following performance.
  • The EDE gain of the suggested strategy is adaptively tuned through RL rather than being manually set based on experience, as in the traditional approach.
  • The simulation outcomes for two different scenarios indicate a significant improvement in the car-following performance of both the first and second following vehicles.

2. Related Works

The development of car-following strategies for ACVs has garnered substantial research interest in recent years. Numerous methodologies have been explored to enhance vehicle following performance under varying traffic conditions. A previous article [5] introduces a self-supervised reinforcement learning algorithm designed to mitigate disturbances in vehicle control during car-following situations. This algorithm incorporates an extended disturbance observer (EDO) into a robust controller framework, allowing for the dynamic adjustment of EDO gains through reinforcement learning. As a result, the algorithm can provide adaptive and stable responses to disturbances. Simulations demonstrate notable enhancements in performance, underscoring the algorithm's robust nature and its potential to improve ACV systems across different traffic scenarios. Another paper [19] presents an RL-based CF control strategy for multivehicle scenarios using V2V communication data. It employs the twin-delayed deep deterministic policy gradient (TD3) algorithm and integrates a sequence-to-sequence (seq2seq) predictive module to handle dynamic traffic conditions. Validated with the highD dataset and traffic perturbation scenarios, the suggested strategy shows superior performance in convergence speed, safety, efficiency, comfort, and fuel consumption compared to standard RL algorithms and model predictive controllers (MPCs).
The researcher in [20] presents a coordinated control model for intelligent vehicles, combining hierarchical time series prediction with deep reinforcement learning for lane change and car-following. It employs Long Short-Term Memory (LSTM) for vehicle state prediction, a Deep Q-Network with Double Q-Learning (D3QN) for lane-change decisions, a Genetic Algorithm with LSTM and Backpropagation (GA-LSTM-BP) for trajectory planning, and Deep Deterministic Policy Gradient (DDPG) for velocity control. Simulations show a 17.58% increase in driving velocity, improving efficiency, safety, and stability. Ref. [21] introduces an Expert Demonstration Reinforcement Learning (EDRL) approach to address challenges in deep reinforcement learning (DRL) for car-following control with traffic oscillations. By incorporating expert demonstrations, the method stabilizes training, accelerates learning, and enhances performance. The approach involves offline pre-training with prior traffic knowledge and online training using behavioral cloning from expert demonstrations. Experiments show that EDRL significantly improves training stability, learning speed, and rewards compared to baseline algorithms. The study conducted by [22] introduces a novel data-driven traffic signal control method using deep reinforcement learning (DRL) and deep learning techniques. The approach employs a compressed representation of traffic states to enhance signal control flexibility and efficiency. Simulations demonstrate that the suggested method outperforms existing benchmarks in vehicle speed, queue length, wait time, and traffic density, showing improved convergence and robustness. The study conducted in [23] presents a reinforcement learning-based control method (RL-controller) for high-speed path following in autonomous vehicles. The RL controller addresses the difficulties of accurate dynamic modeling and the strong coupling between longitudinal and lateral controls. It utilizes a deep soft actor-critic method and integrates lane curvature information to improve learning and control performance. The approach outperforms traditional methods like imitation learning and model-based optimal control in terms of driving speed and compound error. Another paper [24] investigates the use of deep reinforcement learning (DRL) for controlling autonomous vehicles (AVs) during pedestrian crossings, addressing challenges in managing speed and decision-making with varying road users and conditions. It presents a DRL model that integrates continuous and discrete action spaces, enhancing the vehicle’s ability to adapt to dynamic environments. The model incorporates a hybrid approach for real-time acceleration control and vehicle-to-pedestrian (V2P) communication via light signals.
Ref. [25] presents a compliant optimization approach for transportation signals and vehicle speed using a multi-agent deep reinforcement learning framework called COTV-MADRL. The system includes two types of agents, a Light-agent for controlling traffic signals and a Vehicle-agent for managing vehicle speeds, aimed at reducing unnecessary stops at intersections and enhancing overall traffic efficiency. Through a combination of macro- and micro-control strategies and the support of cooperative vehicle infrastructure systems, the model improves traffic flow and driving comfort. Experiments conducted on 108 signalized intersections using real-world data demonstrate the superior performance of COTV-MADRL compared to conventional and other DRL-based methods. This study [26] introduces a car-following (CF) model that integrates both longitudinal and lateral control using a three-degree-of-freedom vehicle dynamics model and reinforcement learning. Two models, DDPG (Deep Deterministic Policy Gradient) and MADDPG (Multi-Agent DDPG), are developed and trained using 100 CF segments from the OpenACC database. The models are validated against observed data, showing that vehicles controlled by these models perform better in terms of safety, comfort, and traffic flow efficiency compared to human-driven vehicles. The MADDPG model, in particular, shows superior performance with higher road traffic flow efficiency and reduced lateral offsets. The related works demonstrate significant advancements in car-following strategies and control systems for autonomous connected vehicles. Another paper [27] presents a predictive model-based and control-aware communication strategy for Cooperative Adaptive Cruise Control (CACC), combining Event-Triggered Communication (ETC) with Model-Based Communication (MBC) to reduce communication usage by 82% while maintaining performance (less than 1% speed deviation). It addresses inefficiencies in traditional periodic strategies, ensuring string stability and preventing Zeno behavior. However, gaps remain in integrating MBC with ETC, embedding control dynamics in MBC models, and enhancing scalability for large networks, highlighting areas for future research. The study in [28] proposed a Safety Reinforced-CACC strategy to enhance platoon safety during V2V communication failures. By integrating a dual-branch control strategy with a smooth transition algorithm, the system switches to sensor-based ACC when communication fails, reducing aggressive acceleration and braking. Simulations show improved safety under poor communication. Future work includes developing advanced transition algorithms, exploring complex communication environments, and designing diverse scenarios to ensure societal trust in the system.
These studies highlight the effectiveness of various reinforcement learning techniques and optimization methods in improving vehicle safety, comfort, traffic flow, and efficiency across different scenarios. Each approach contributes to the ongoing development of more sophisticated and adaptable systems, providing valuable insights and benchmarks for future research in intelligent transportation. In contrast to these studies, our paper introduces three main innovations that are not extensively represented in the literature: (1) the integration of a robust controller with an EDE for real-time disturbance rejection in car-following tasks; (2) dynamic adjustment of the EDE gain using reinforcement learning to optimize vehicle stability and control performance; and (3) an adaptive control strategy designed to address real-time disturbances specifically in car-following scenarios. Table 1 summarizes the most recent research results.

3. Vehicle Model and Problem Definition

The ACV model is described in detail, including the integration of longitudinal and lateral control systems. Following this, the problem formulation addresses the challenges in optimizing car-following strategies to enhance vehicle performance and safety in varying traffic conditions. The formulation sets the stage for developing and evaluating the suggested car-following strategy within the context of the extended disturbance estimator and reinforcement learning framework.

3.1. Vehicle Model

This research examines ACVs utilizing V2V communication technology while excluding pedestrians from consideration. The leading vehicle travels in a single lane and is not influenced by the vehicles following it. Its motion is defined as $x_0 = v_0 t + x_0(0)$, where $x_0(0)$, $x_0$, and $v_0$ represent the initial position, position, and velocity, respectively. As per [29], the model for an autonomous-driving car is given as follows:
$$\dot{x}_i = v_i, \qquad \dot{v}_i = a_i, \qquad \dot{a}_i = -\frac{a_i}{k} + u_i + \omega_i$$
The variables $x_i$, $v_i$, $a_i$, $u_i$, and $\omega_i$ represent the position, velocity, acceleration, control torque, and external disturbance of the i-th following vehicle, as functions of time t; for instance, $x_i = x_i(t)$, $v_i = v_i(t)$, and $a_i = a_i(t)$. Although t is not written explicitly in Equation (1), the time dependency is implicit. Meanwhile, k stands for the dynamics parameter of the autonomous driving car; it acts as a time constant that limits the rate of change in longitudinal acceleration.
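For illustration, the longitudinal model in Equation (1) can be simulated directly. The following Python sketch (not part of the original Simulink implementation; the disturbance value and step size are assumptions) advances one vehicle's state with a forward-Euler step:

```python
# Minimal sketch of the third-order longitudinal model in Eq. (1), integrated with
# forward Euler. The disturbance omega and the step size are illustrative assumptions.
K = 0.9          # vehicle dynamics parameter k (value taken from Table 3)
DT = 0.2         # integration step in seconds (Section 6 uses a 0.2 s time step)

def step(x, v, a, u, omega=0.0, dt=DT):
    """Advance position x, velocity v, and acceleration a of one vehicle by dt."""
    x_next = x + dt * v                      # x_dot = v
    v_next = v + dt * a                      # v_dot = a
    a_next = a + dt * (-a / K + u + omega)   # a_dot = -a/k + u + omega
    return x_next, v_next, a_next
```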

3.2. Problem Definition

With the method model described by Equation (1), this research aims to develop a car-following approach ui for the i-th following ACV that enhances car-following capabilities solely through V2V communication between neighboring vehicles.

4. Extended Disturbance Estimator Design for Intelligent Connected Vehicle

This section presents the incorporation of the extended disturbance estimator in the car-following control scheme. The purpose is to estimate the impact of the “equivalent disturbance” on the ACV’s car-following system. Subsequently, the estimator gains are adjusted through reinforcement learning.

4.1. Extended Disturbance Estimator Design

Let $y_i = x_{i+1} - x_i - x_{veh} - x_{safe}$ denote the spacing error, where $x_{safe}$ is the safe distance, $x_{veh}$ is the length of the ego-vehicle, and $x_{i+1} - x_i$ is the actual spacing between the two neighboring vehicles. Let $y_v = v_d - v_i$ denote the velocity error between the desired velocity $v_d$ and the ego-vehicle velocity. A sliding surface is then selected as follows:
$$s_i = y_i + c_1 y_v = y_i + c_1 \left( v_d - v_i \right)$$
This formulation ensures that both position and velocity errors are considered in the control law, allowing for responsive adjustments that maintain safe following distances. The positive controller gain c1 balances the position and velocity terms to enhance stability.
The si function’s derivative can be obtained through the following steps:
$$\dot{s}_i = \dot{y}_i + c_1 \dot{y}_v$$
Substituting Equation (1) and the definition of $y_i$ into Equation (3), we obtain the following result:
$$\dot{s}_i = v_{i+1} - v_i + c_1 \left( a_0 - a_i \right) = v_{i+1} - v_i + c_1 a_0 + c_1 k \dot{a}_i - c_1 k u_i - c_1 k \omega_i$$
Define $d_i = c_1 a_0 + c_1 k \dot{a}_i - c_1 k \omega_i$ as an "equivalent disturbance"; Equation (4) can then be represented as
$$\dot{s}_i = v_{i+1} - v_i - c_1 k u_i + d_i$$
Then, Equation (5) can be reformulated as the following:
$$d_i = \dot{s}_i - v_{i+1} + v_i + c_1 k u_i$$
By utilizing Equation (6), it is possible to indirectly determine $d_i$ by measuring the previous values of $u_i$, $v_{i+1}$, $v_i$, and $a_i$ and calculating $\dot{s}_i$. Defining $z_1 = d_i$ and $z_2 = \dot{d}_i$, the "equivalent disturbance" can be rewritten as
$$\dot{z}_1 = z_2, \qquad \dot{z}_2 = \ddot{d}_i$$
Additionally, Equation (7) can be expressed in the following formula through the definition of A = [0, 1; 0, 0], B = [0, 1]T, and z = [z1, z2]T.
$$\dot{z} = Az + B\ddot{d}_i$$
Hence, the disturbance governed by system (8) can be obtained through Equation (6). By setting C = [1, 0], the output equation is expressed as follows:
$$d_i = Cz$$
Given the observability of the pair (A, C), the disturbance estimator for an autonomous driving vehicle can be formulated based on the guidelines provided in [30] by establishing d ^ i = C z ^ .
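As a quick sanity check (not part of the paper's derivation), the observability of the pair (A, C) used above can be verified numerically; the sketch below builds the observability matrix and checks its rank:

```python
# Observability check for the pair (A, C) of Eqs. (8)-(9).
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

# Observability matrix O = [C; C A]; full rank (= 2) means (A, C) is observable,
# so the EDE gains L can be chosen to place the estimator poles.
O = np.vstack([C, C @ A])
print(np.linalg.matrix_rank(O) == 2)   # True
```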
Theorem 1.
Suppose that the second derivative of the equivalent disturbance $d_i$ is bounded and satisfies $|\ddot{d}_i| \le \delta_1$ for the system. The boundedness of $\ddot{d}_i$ is reasonable in typical driving scenarios, as physical disturbances (such as drag or road gradient) generally vary gradually. The parameter $\delta_1$ is chosen based on expected disturbance levels, ensuring robustness to realistic variations in traffic and road conditions. Considering Equations (8) and (9), the EDE can be designed in the following form:
$$\hat{z} = p + L s_i, \qquad \dot{p} = (A - LC)p + (A - LC)L s_i - L\left( v_{i+1} - v_i - c_1 k u_i \right)$$
Therefore, the error in estimating the disturbance, denoted as $\tilde{z} = z - \hat{z}$, can be driven to a region near the origin through the selection of suitable gains $L = [l_1, l_2]^T$ and an auxiliary variable $p$.
Equation (10) can be rewritten by substituting the second line into the derivative of the first line.
$$\dot{\hat{z}} = (A - LC)\hat{z} + L\left( \dot{s}_i - v_{i+1} + v_i + c_1 k u_i \right)$$
Substituting Equations (6) and (9) and $\hat{d}_i = C\hat{z}$ into Equation (11) results in
$$\dot{\hat{z}} = A\hat{z} + LC\left( z - \hat{z} \right)$$
Subtracting Equation (12) from Equation (8) results in
$$\dot{\tilde{z}} = (A - LC)\tilde{z} + B\ddot{d}_i$$
Selecting suitable gains makes $\bar{A} = A - LC$ Hurwitz, owing to the observability of the pair (A, C). Therefore, for any positive definite matrix $Q > 0$, there exists a unique positive definite matrix $P > 0$ such that
$$P\bar{A} + \bar{A}^T P = -Q$$
Select a Lyapunov function.
$$V_1 = \tilde{z}^T P \tilde{z}$$
Taking the time derivative of Equation (15) and using Equations (13) and (14) yields
$$\dot{V}_1 = \tilde{z}^T P \dot{\tilde{z}} + \dot{\tilde{z}}^T P \tilde{z} = \tilde{z}^T \left( P\bar{A} + \bar{A}^T P \right) \tilde{z} + 2\tilde{z}^T P B \ddot{d}_i = -\tilde{z}^T Q \tilde{z} + 2\tilde{z}^T P B \ddot{d}_i$$
Consider λp as the largest eigenvalue of matrix P and λq as the smallest eigenvalue of matrix Q, resulting in
$$\dot{V}_1 \le -\lambda_q \|\tilde{z}\|_2^2 + 2\|\tilde{z}\|_2 \|P\|_F \|B\|_2 |\ddot{d}_i| \le -\lambda_q \|\tilde{z}\|_2^2 + 2\lambda_p \|\tilde{z}\|_2 \delta_1$$
Here, |·|, ‖·‖₂, and ‖·‖_F denote the absolute value of a scalar, the 2-norm of a vector, and the Frobenius norm of a matrix, respectively. It follows that $\dot{V}_1 < 0$ whenever $\|\tilde{z}\|_2 > 2\lambda_p\delta_1/\lambda_q$. The decrease of $V_1$ therefore drives the system trajectory into the region where $\|\tilde{z}\|_2 \le 2\lambda_p\delta_1/\lambda_q$. Consequently, by selecting suitable gains $L = [l_1, l_2]^T$, the estimation error ultimately converges to a bounded neighborhood of the origin.
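For concreteness, the gain-selection step behind Theorem 1 can be checked numerically. The sketch below (the numeric values of L, Q, and δ1 are illustrative assumptions, not values from the paper) verifies that Ā = A − LC is Hurwitz, solves the Lyapunov equation (14) for P, and evaluates the ultimate bound 2λpδ1/λq on the estimation error:

```python
# Numerical check of Theorem 1: Hurwitz test, Lyapunov equation (14), and error bound.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[2.0], [1.0]])     # example EDE gains l1, l2 (assumed)
Q = np.eye(2)                    # any positive definite Q
delta1 = 0.5                     # assumed bound on the second derivative of d_i

A_bar = A - L @ C
assert np.all(np.linalg.eigvals(A_bar).real < 0)        # A_bar is Hurwitz

# Solve P A_bar + A_bar^T P = -Q; SciPy solves a X + X a^T = q, so pass a = A_bar^T.
P = solve_continuous_lyapunov(A_bar.T, -Q)
lam_p = np.max(np.linalg.eigvalsh(P))                    # largest eigenvalue of P
lam_q = np.min(np.linalg.eigvalsh(Q))                    # smallest eigenvalue of Q
print("ultimate estimation-error bound:", 2 * lam_p * delta1 / lam_q)
```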
Statement 1.
Traditional EDE relies on manual adjustment of gains, which are determined through experience. However, because disturbances in transportation environments change constantly, a fixed EDE gain cannot meet the demands of complex transportation scenarios, ultimately decreasing the precision of the disturbance estimate.

4.2. Estimator Gain Adjusted by Reinforcement Learning

The EDE gain for the suggested car-following system is adjusted using reinforcement learning. This method allows for maximizing returns through trial and error to achieve an ideal approach. DDPG is a common reinforcement learning algorithm that is well-suited for continuous action spaces. Therefore, the DDPG algorithm from Algorithm 1 is utilized in this study to adjust EDE gains. Two car-following scenarios are selected for ease of reinforcement learning application, as depicted in Figure 1.
Algorithm 1: DDPG Algorithm
 1. 
Initialize:
  • Critic network $Q(s, a \mid \theta^{Q})$ and actor $\mu(s \mid \theta^{\mu})$ with random weights $\theta^{Q}$ and $\theta^{\mu}$.
  • Target networks $Q'$ and $\mu'$ with weights $\theta^{Q'} \leftarrow \theta^{Q}$ and $\theta^{\mu'} \leftarrow \theta^{\mu}$.
  • Replay buffer R.
 2. 
For episode = 1 to M do:
  • Initialize a random process N for action exploration.
  • Receive initial observation state s1.
 3. 
For time step t = 1 to T do:
  • Select action $a_t = \mu(s_t \mid \theta^{\mu}) + \mathcal{N}_t$ according to the current policy and exploration noise.
  • Execute action at and observe reward rt and new state st+1.
  • Store transition (st, at, rt, st+1) in replay buffer R.
 4. 
Sample a random minibatch of N transitions (si, ai, ri, si+1) from R.
 5. 
Set target value for each sampled transition:
$$y_i = r_i + \gamma\, Q'\!\left( s_{i+1}, \mu'(s_{i+1} \mid \theta^{\mu'}) \mid \theta^{Q'} \right)$$
 6. 
Update critic by minimizing the loss:
$$L = \frac{1}{N} \sum_i \left( y_i - Q(s_i, a_i \mid \theta^{Q}) \right)^2$$
 7. 
Update the actor policy using the sampled policy gradient:
$$\nabla_{\theta^{\mu}} J \approx \frac{1}{N} \sum_i \nabla_a Q(s, a \mid \theta^{Q})\big|_{s = s_i,\, a = \mu(s_i)}\, \nabla_{\theta^{\mu}} \mu(s \mid \theta^{\mu})\big|_{s_i}$$
 8. 
Update target networks:
$$\theta^{Q'} \leftarrow \tau \theta^{Q} + (1 - \tau)\theta^{Q'}$$
$$\theta^{\mu'} \leftarrow \tau \theta^{\mu} + (1 - \tau)\theta^{\mu'}$$
 9. 
End for
 10. 
End for
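For readers who prefer code, the core update of Algorithm 1 (steps 4–8) can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation; the network sizes are assumptions, while the discount factor, soft-update coefficient, and learning rates follow Table 2. The action here is the EDE gain supplied to the controller.

```python
# Minimal sketch of one DDPG minibatch update (steps 4-8 of Algorithm 1).
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM = 4, 1        # Scenario 1 state {y_i, dv_i, v_i, a_i}; action = EDE gain
GAMMA, TAU = 0.98, 0.002            # discount factor and soft-update coefficient (Table 2)

def mlp(in_dim, out_dim):
    # Small fully connected network; the hidden sizes are illustrative assumptions.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, out_dim))

actor, actor_target = mlp(STATE_DIM, ACTION_DIM), mlp(STATE_DIM, ACTION_DIM)
critic, critic_target = mlp(STATE_DIM + ACTION_DIM, 1), mlp(STATE_DIM + ACTION_DIM, 1)
actor_target.load_state_dict(actor.state_dict())
critic_target.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=2e-4)     # learning rates from Table 2
critic_opt = torch.optim.Adam(critic.parameters(), lr=5e-4)

def ddpg_update(s, a, r, s_next):
    """One minibatch update; s, a, r, s_next are tensors sampled from the replay buffer."""
    with torch.no_grad():                                       # target value y_i (step 5)
        q_next = critic_target(torch.cat([s_next, actor_target(s_next)], dim=1))
        y = r + GAMMA * q_next
    critic_loss = F.mse_loss(critic(torch.cat([s, a], dim=1)), y)   # critic loss (step 6)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()    # policy gradient (step 7)
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    for net, tgt in ((critic, critic_target), (actor, actor_target)):   # soft updates (step 8)
        for p, p_t in zip(net.parameters(), tgt.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)
```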
In Scenario 1, there is a single leading vehicle and a single following vehicle. The two vehicles exchange information regarding speed, position, and other relevant data through V2V communication. The vehicle highlighted in the blue box in Figure 1 implements the car-following system outlined in this research, where the EDE gain is continuously updated in real time using reinforcement learning.
The Markov decision process is formulated first. The action selected by DDPG is the EDE gain of the following vehicle. The state space is then chosen as
$$\{\, y_i,\ \Delta v_i,\ v_i,\ a_i \,\}$$
The relative displacement between the following vehicle and the leading vehicle is calculated as $y_i = x_i - x_0 - x_{veh} - x_{safe}$, where $x_{safe}$ represents the safe space and $x_{veh}$ the vehicle length. The velocity error between the following vehicle and the leading vehicle is $\Delta v_i = v_i - v_0$, with $v_i$ and $a_i$ representing the velocity and acceleration of the following vehicle.
The reward function is selected as follows:
$$r = r_{\Delta v_i} + r_{jerk_i}$$
$$r_{\Delta v_i} = -\omega_1 \frac{|\Delta v_i|}{v_{max}}$$
$$r_{jerk} = -\omega_2 \frac{\left( a_i^k - a_i^{k-1} \right)^2}{a_{max}\,\Delta T}$$
In the above equations, $a_i^k$ represents the acceleration of the i-th following vehicle at the k-th time step. Additionally, $v_{max}$, $a_{max}$, and $\Delta T$ denote the maximum velocity, the maximum acceleration, and the time step, respectively. The absolute value of a variable is indicated by |·|, and $\omega_1$ and $\omega_2$ are positive weighting coefficients.
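A minimal sketch of this Scenario 1 reward (Equations (19)–(21)) is given below; amax is taken from Table 3 and the time step from Section 6, while vmax and the weights ω1, ω2 are illustrative assumptions, since the paper does not list their values:

```python
# Reward of Eqs. (19)-(21) for a single following vehicle (Scenario 1).
V_MAX, A_MAX, DT = 30.0, 2.2, 0.2   # v_max assumed; a_max from Table 3; 0.2 s step from Section 6
W1, W2 = 1.0, 0.5                   # positive weights omega_1, omega_2 (assumed)

def reward(dv, a_k, a_k_prev):
    """Penalize velocity error (Eq. (20)) and acceleration jumps, i.e., jerk (Eq. (21))."""
    r_dv = -W1 * abs(dv) / V_MAX
    r_jerk = -W2 * (a_k - a_k_prev) ** 2 / (A_MAX * DT)
    return r_dv + r_jerk            # Eq. (19)
```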
In Scenario 2, there is a front vehicle and two following vehicles. As depicted in Figure 1, the car-following approach presented in this study is utilized by the following vehicles 1 and 2, with real-time updates of EDE gain through reinforcement learning.
We formulate a Markov decision process for Scenario 2, in which the action of the DDPG agent simultaneously provides the EDE gains of the two following vehicles. The state space is defined as follows:
$$\{\, y_i,\ \Delta v_i,\ y_{i+1},\ \Delta v_{i+1},\ v_i,\ v_{i+1} \,\}$$
The relative displacement between following vehicle 1 and the leading vehicle is given by $y_i = x_i - x_0 - x_{veh} - x_{safe}$, and the corresponding velocity error is $\Delta v_i = v_i - v_0$. The relative displacement between following vehicle 2 and the leading vehicle is $y_{i+1} = x_{i+1} - x_0 - 2(x_{veh} + x_{safe})$, with velocity error $\Delta v_{i+1} = v_{i+1} - v_0$. The velocities of following vehicles 1 and 2 are denoted by $v_i$ and $v_{i+1}$, respectively.
The reward function in Scenario 2 is the sum of Equation (19) over following vehicles 1 and 2:
$$r = \sum_{i=1}^{2} \left( r_{\Delta v_i} + r_{jerk_i} \right)$$
Statement 2.
In contrast to the conventional EDE method, which relies on experience to adjust gains, the gains of the EDE in this research are fine-tuned using RL. RL involves continuous trial and error during training to achieve the strategy with the highest cumulative reward, ensuring accurate, adaptive disturbance estimation in complex traffic situations. Consequently, through reinforcement learning, the EDE gains can be optimized for various state spaces, enhancing the precision of the disturbance estimate.
Statement 3.
For more than three vehicles, the EDE gain will be adjusted by linking Scenarios 1 and 2. For instance, if there are four ACVs, Scenario 1 will be applied to vehicles 1 and 2, with vehicle 2 controlled by the suggested method, while Scenario 2 will be applied to vehicles 2, 3, and 4, with vehicles 3 and 4 controlled by the suggested method. Because of possible communication delays, the suggested car-following structure only considers V2V communication between neighboring vehicles, making it suitable for small platoons.

5. Car-Following Strategy Using Extended Disturbance Estimator

In this section, a car-following approach using the extended disturbance estimator is suggested by integrating the EDE with a traditional sliding-mode control technique, as follows:
Theorem 2.
By designing the distributed controller below with appropriate parameters, the trajectory of the tracking error of system (1) is guaranteed to converge to a neighborhood of the origin.
$$u_i = \frac{1}{c_1 k} \left( v_{i+1} - v_i + c_2 s_i + c_3\,\mathrm{sign}(s_i) + \hat{d}_i \right)$$
The control torque, velocity, and vehicle dynamics parameter are represented by $u_i$, $v_i$, and $k$, respectively. The disturbance estimate $\hat{d}_i$ is defined by Equation (10), while $s_i$ is the sliding surface of Equation (2). The controller gains are $c_1$, $c_2$, and $c_3$.
 Proof. 
Substituting Equation (24) into Equation (5), we obtain:
$$\dot{s}_i = -c_2 s_i - c_3\,\mathrm{sign}(s_i) + \tilde{d}_i$$
Select the Lyapunov function $V_2$ as
$$V_2 = \frac{1}{2} s_i^2 + V_1$$
Differentiating $V_2$ and applying the bound of Theorem 1 results in
$$\dot{V}_2 = s_i \dot{s}_i + \dot{V}_1 \le s_i \dot{s}_i - \lambda_q \|\tilde{z}\|_2^2 + 2\lambda_p \|\tilde{z}\|_2 \delta_1$$
Substituting Equation (25) into Equation (27) yields:
$$\dot{V}_2 \le -c_2 s_i^2 - c_3 |s_i| + s_i \tilde{d}_i - \lambda_q \|\tilde{z}\|_2^2 + 2\lambda_p \|\tilde{z}\|_2 \delta_1 \le -c_2 s_i^2 - c_3 |s_i| + \|\tilde{z}\|_2 |s_i| - \lambda_q \|\tilde{z}\|_2^2 + 2\lambda_p \|\tilde{z}\|_2 \delta_1$$
By Theorem 1, there exists a positive constant $\delta_2 > 0$ such that $\|\tilde{z}\|_2 \le \delta_2$. Equation (28) can then be restated as
$$\dot{V}_2 \le -c_2 \left( |s_i| - \frac{1}{2c_2} \|\tilde{z}\|_2 \right)^2 - c_3 |s_i| + \frac{1}{4c_2} \|\tilde{z}\|_2^2 - \lambda_q \|\tilde{z}\|_2^2 + 2\lambda_p \|\tilde{z}\|_2 \delta_1 \le -c_3 |s_i| - \left( \lambda_q - \frac{1}{4c_2} \right) \|\tilde{z}\|_2^2 + 2\lambda_p \|\tilde{z}\|_2 \delta_1 \le -c_3 |s_i| - \left( \lambda_q - \frac{1}{4c_2} \right) \|\tilde{z}\|_2^2 + 2\lambda_p \delta_1 \delta_2$$
Define $\alpha = \min\{c_3, (4c_2\lambda_q - 1)/(4c_2)\}$ and $\beta = 2\lambda_p\delta_1\delta_2$. Equation (29) then implies the following:
$$\dot{V}_2 \le -c_3 \cdot \frac{1}{2} s_i^2 - \frac{4c_2\lambda_q - 1}{4c_2} \|\tilde{z}\|_2^2 + 2\lambda_p \delta_1 \delta_2 \le -\alpha V_2 + \beta$$
Multiplying both sides of Equation (30) by $e^{\alpha t}$, we obtain
$$e^{\alpha t} \left( \dot{V}_2 + \alpha V_2 \right) \le e^{\alpha t} \beta$$
Integrating Equation (31) and using the initial value $V_2(0)$ of $V_2(t)$, the following result is obtained:
$$V_2(t) \le \left( V_2(0) - \beta/\alpha \right) e^{-\alpha t} + \beta/\alpha$$
According to Equation (32), $V_2(t)$ is bounded in terms of α, β, and $V_2(0)$. As a result, the trajectory of the closed-loop system ultimately converges within a bounded region. □
Statement 4.
The boundary of the region is determined by the ratio β/α, which in turn is influenced by the parameters $c_2$, $c_3$, $\lambda_p$, $\lambda_q$, $\delta_1$, and $\delta_2$. The precision of the suggested distributed controller is affected by both the disturbance characteristics and the estimation error of the EDE, given that $|\ddot{d}_i| \le \delta_1$ and $\|\tilde{z}\|_2 \le \delta_2$. According to Theorem 1 and Statement 2, the EDE gains $L = [l_1, l_2]^T$ are adapted by RL in this study to enhance the precision of disturbance estimation, thereby improving the accuracy of the distributed controller. This is because the EDE can compensate for the "equivalent disturbance" to a large extent, significantly enhancing the disturbance rejection capability of the distributed controller in complex transportation scenarios.
Statement 5.
To prevent chattering, the function $\mathrm{sign}(x)$ is replaced with the saturation function $\mathrm{sat}(x) = \mathrm{sign}(x) \cdot \min\{|x|/\varsigma_1, 1\}$, where $0 < \varsigma_1 < 1$. The car-following controller suggested in Equation (24) can therefore be expressed as follows:
$$u_i = \frac{1}{c_1 k} \left( v_{i+1} - v_i + c_2 s_i + c_3\,\mathrm{sat}(s_i) + \hat{d}_i \right)$$
In Equation (33), the disturbance estimate $\hat{d}_i$ of Equation (10) can be written componentwise as follows:
$$\hat{d}_i = l_1 s_i + p_1, \qquad \dot{p}_1 = -l_1 \hat{d}_i - l_1 \left( v_{i+1} - v_i - c_1 k u_i \right) + \hat{z}_2$$
$$\hat{z}_2 = l_2 s_i + p_2, \qquad \dot{p}_2 = -l_2 \hat{d}_i - l_2 \left( v_{i+1} - v_i - c_1 k u_i \right)$$
Additionally, without disturbance compensation, Equation (33) reduces to the traditional car-following controller:
$$u_i = \frac{1}{c_1 k} \left( v_{i+1} - v_i + c_2 s_i + c_3\,\mathrm{sat}(s_i) \right)$$
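To make the control law concrete, the following Python sketch (an illustration, not the authors' Simulink implementation) discretizes the controller of Equation (33) together with the componentwise EDE of Equations (34) and (35) using a forward-Euler step; the gains follow Table 3, l1 would normally be supplied by the trained RL agent, and the use of the previous control input in the estimator update is a discretization choice assumed here:

```python
# Sketch of the EDE-compensated sliding-mode car-following controller, Eqs. (33)-(35).
import numpy as np

C1, C2, C3, SIGMA1, K = 2.5, 1.0, 0.15, 0.02, 0.9   # controller parameters (Table 3)
X_SAFE, X_VEH, DT = 6.0, 4.0, 0.2                   # safe space, car length, time step (Section 6)

def sat(x, eps=SIGMA1):
    """Saturation sat(x) = sign(x) * min(|x|/eps, 1), replacing sign(x) (Statement 5)."""
    return np.sign(x) * min(abs(x) / eps, 1.0)

def sliding_surface(x_lead, x_ego, v_d, v_ego):
    """s_i = y_i + c1 (v_d - v_i), with y_i the spacing error of Section 4.1."""
    y_i = x_lead - x_ego - X_VEH - X_SAFE
    return y_i + C1 * (v_d - v_ego)

class EDEController:
    def __init__(self, l1, l2=0.01):
        self.l1, self.l2 = l1, l2    # l1 tuned online by RL, l2 fixed (Section 6)
        self.p1 = self.p2 = 0.0      # auxiliary estimator states p1, p2
        self.u_prev = 0.0            # previous control input, reused in the estimator update

    def control(self, s_i, v_lead, v_ego):
        d_hat = self.l1 * s_i + self.p1                 # Eq. (34): disturbance estimate
        z2_hat = self.l2 * s_i + self.p2                # Eq. (35)
        phi = v_lead - v_ego - C1 * K * self.u_prev
        self.p1 += DT * (-self.l1 * d_hat - self.l1 * phi + z2_hat)
        self.p2 += DT * (-self.l2 * d_hat - self.l2 * phi)
        u = (v_lead - v_ego + C2 * s_i + C3 * sat(s_i) + d_hat) / (C1 * K)   # Eq. (33)
        self.u_prev = u
        return u
```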

6. Simulations Results

To evaluate the efficiency of the suggested CF strategy, simulations were conducted on a single-lane setup without accounting for lane changes. The simulation was performed using a Python–Simulink integration, where Python 3.12 handled the RL training phase and Simulink (MATLAB 2023a) modeled the vehicle dynamics. This combined setup allowed us to leverage Python's reinforcement learning libraries, particularly for implementing the DDPG algorithm, while using Simulink's vehicle modeling capabilities to capture real-time dynamics.
The Simulink model represents a simplified traffic scenario with a leading vehicle and one or two following vehicles. The leading vehicle's velocity profile is divided into six segments with varying speeds to simulate realistic changes in traffic conditions. The parameters for safe space and car length are defined as xsafe = 6 m and xveh = 4 m, respectively, and the initial conditions for the following vehicles are set as yi = xsafe + xveh and vi(0) = v0(0). The RL-based control strategy was trained with the DDPG algorithm. During training, Python continuously adjusted the EDE gain of the following vehicle, learning to optimize the control strategy under simulated traffic disturbances. Each episode comprised a sequence of time steps in which the agent selected actions (EDE gains) based on the observed state space; a replay buffer stored past experiences, and minibatch sampling was used to update the critic and actor networks, enabling stable learning and avoiding overfitting to recent experiences. Training convergence was assessed by monitoring the reward per episode and was reached after approximately 10,000 episodes, when the average reward stabilized. Figure 2 shows the training convergence of the RL agent over 10,000 episodes: the average reward per episode gradually stabilizes as the agent learns effective policies for minimizing velocity error and managing external disturbances. This convergence demonstrates the model's ability to generalize across different traffic scenarios and ensures that the RL agent's control decisions are reliable and effective under varying conditions.
In the simulation, Gaussian noise is used as measurement noise, and the simulation time step is set to 0.2 s. The parameter l1 is adjusted through reinforcement learning, while l2 is fixed at 0.01. The parameters of DDPG are detailed in Table 2, noting that their variations are typically minimal. Controller parameters are provided in Table 3.
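Since the exact leading-vehicle profile is not tabulated in the paper, the sketch below shows one way such a six-segment, piecewise-constant velocity profile could be encoded for the simulation; the segment boundaries and speeds are purely illustrative assumptions:

```python
# Illustrative six-segment leading-vehicle velocity profile (values assumed, not from the paper).
SEGMENTS = [(0, 60, 15.0), (60, 120, 20.0), (120, 180, 12.0),
            (180, 240, 18.0), (240, 300, 10.0), (300, 360, 16.0)]   # (t_start, t_end, v0) in s, m/s

def leading_velocity(t):
    """Return the leading vehicle's velocity v_0(t) for the piecewise-constant profile."""
    for t_start, t_end, v0 in SEGMENTS:
        if t_start <= t < t_end:
            return v0
    return SEGMENTS[-1][2]   # hold the last segment's speed after the profile ends
```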
The effectiveness of the suggested CF method was validated across two scenarios: a single following vehicle and two following vehicles; each scenario was evaluated over three distinct time intervals (0–50 s, 100–150 s, and 300–350 s). Reward values and velocity errors were analyzed to assess performance. The convergence in training reward indicates that the RL agent learned effective policies for minimizing tracking errors and managing external disturbances. By achieving stable reward values, the RL-based controller demonstrated consistency and robustness, making it applicable to real-world car-following tasks.

6.1. Single Following Vehicle in Scenario 1

The velocity comparison between the leading and following vehicles was conducted for three distinct time intervals: 0–50 s, 100–150 s, and 300–350 s. As shown in Figure 3, during the first interval (0–50 s), the following vehicle using RL-based control closely follows the velocity of the leading vehicle, with minimal overshoot. The following vehicle’s velocity quickly converges with the leading vehicle’s changes, demonstrating smooth adaptation with reduced error.
In the second interval (100–150 s), Figure 4 illustrates that the suggested RL-based control continues to outperform conventional methods, maintaining stable velocity tracking despite fluctuating velocities in the leading vehicle. By 300–350 s, as shown in Figure 5, the suggested control strategy still demonstrates excellent performance, with the following vehicle maintaining a smaller velocity error and smoother velocity transitions than the conventional car-following method. Table 4 summarizes the velocity error for both the conventional and RL-based car-following methods. The RL-based control reduces velocity error by over 60% in the 0–50 s interval and continues to perform significantly better across the other intervals.

6.2. Two Following Vehicles in Scenario 2

In Scenario 2, the two following vehicles adapt their velocities based on the leading vehicle’s dynamics. Figure 6 shows the velocity profiles for the leading vehicle and the two following vehicles during the 0–50 s interval. The first following vehicle exhibits a quicker response, closely tracking the leading vehicle, while the second following vehicle demonstrates a more delayed response but still adapts effectively with minimal overshoot.
As depicted in Figure 7 and Figure 8, during the 100–150 s and 300–350 s intervals, both following vehicles continue to show stable velocity tracking. The second following vehicle's delay is more pronounced, but its tracking accuracy improves over time. The RL-based control successfully reduces the response time for both vehicles, allowing them to maintain safe following distances with smoother velocity transitions. Table 5 summarizes the velocity errors for both following vehicles in Scenario 2. The first vehicle consistently achieves lower velocity errors, while the second vehicle, despite having a delayed response, gradually reduces its tracking error over time.
The results from both scenarios demonstrate the superiority of the suggested car-following strategy using the EDE combined with RL. In Scenario 1, the following vehicle exhibited significantly reduced velocity error and quicker adaptation to the leading vehicle’s changes, as evidenced by Figure 3, Figure 4 and Figure 5. In Scenario 2, both following vehicles demonstrated stable and smooth velocity tracking, with the first vehicle reacting faster and the second vehicle showing a delayed but effective response, as seen in Figure 6, Figure 7 and Figure 8. The performance metrics summarized in Table 4 further highlight the effectiveness of the suggested method, reducing velocity error by over 50% compared to the conventional CF method. These results showcase the robustness and adaptability of the suggested strategy in dynamic traffic conditions, offering improved tracking accuracy and passenger comfort.

6.3. Comparative Analysis with Benchmark Models

To further evaluate the effectiveness of the suggested EDE-RL model, we conducted a comparative analysis with three benchmark models: a classic Adaptive Cruise Control (ACC) model, CACC, and a reinforcement learning-based control model using DDPG without the EDE. The ACC model, a widely recognized baseline in car-following research, represents a traditional approach to vehicle longitudinal control. The CACC model, which leverages V2V communication for collaborative control, provides a more advanced benchmark, enabling vehicles to exchange information to improve tracking accuracy and stability. In contrast, the DDPG-based RL model without EDE allows us to isolate and assess the specific contribution of the EDE in enhancing disturbance rejection.
Three primary performance metrics were evaluated across all models: velocity tracking error, acceleration smoothness (jerk), and disturbance rejection. The velocity tracking error metric captures the average deviation from the leading vehicle's speed, reflecting each model's accuracy in maintaining consistent following behavior. Acceleration smoothness, measured in terms of jerk, indicates how smoothly the model manages speed adjustments, contributing to passenger comfort by minimizing abrupt accelerations or decelerations. Finally, disturbance rejection assesses each model's resilience to external disturbances, a critical factor in dynamic traffic environments where unexpected changes in traffic flow can impact vehicle stability. This comparative analysis provides insight into the strengths of the suggested EDE-RL model, particularly in reducing tracking errors, improving acceleration smoothness, and enhancing disturbance-handling capability, thereby validating its robustness and suitability for real-world applications in ACVs. Table 6 and Table 7 summarize the comparative results over the 0–50 s interval for Scenarios 1 and 2, respectively. In addition to the ACC and RL-based models, a CACC model was included as a benchmark, leveraging V2V communication for collaborative control. Figure 9 and Figure 10 show the velocity tracking performance of the suggested EDE-RL model compared to the CACC and other benchmark models. The EDE-RL model achieves superior tracking accuracy and smoother transitions, highlighting its robustness and adaptability to dynamic traffic conditions.
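Because the paper reports these metrics but does not give explicit formulas, the sketch below shows plausible definitions for this kind of comparison; in particular, the disturbance-rejection measure (the percentage of disturbed samples whose velocity error stays within a tolerance) is an assumption:

```python
# Assumed metric definitions for velocity tracking error, jerk, and disturbance rejection.
import numpy as np

def velocity_tracking_error(v_follow, v_lead):
    """Mean absolute deviation from the leading vehicle's speed, in m/s."""
    return float(np.mean(np.abs(np.asarray(v_follow) - np.asarray(v_lead))))

def mean_jerk(accel, dt=0.2):
    """Mean absolute jerk in m/s^3, computed from a sampled acceleration trace."""
    a = np.asarray(accel)
    return float(np.mean(np.abs(np.diff(a) / dt)))

def disturbance_rejection(velocity_errors_under_disturbance, tolerance=0.5):
    """Assumed: percentage of disturbed samples whose velocity error stays below the tolerance."""
    e = np.asarray(velocity_errors_under_disturbance)
    return 100.0 * float(np.mean(np.abs(e) < tolerance))
```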
The comparative results demonstrate that the suggested EDE-RL model significantly outperforms the ACC model, the CACC model, and the RL-based model without EDE across all evaluated metrics. The EDE-RL model achieved the lowest velocity tracking error, indicating its superior ability to maintain consistent following behavior. In addition, the EDE-RL model exhibited the smoothest acceleration profiles, reducing jerk and contributing to enhanced passenger comfort and driving stability. Moreover, the EDE-RL model demonstrated remarkable disturbance rejection capabilities, effectively handling a higher percentage of external disturbances compared to the other models. These results highlight the robustness of the EDE-RL approach in addressing dynamic traffic conditions, ensuring improved tracking accuracy, smoother acceleration transitions, and greater resilience to disturbances, thereby validating its suitability for real-world ACV applications.

7. Conclusions

In this work, we suggested an advanced car-following strategy for ACVs that integrates an EDE with RL to address the challenges posed by dynamic traffic conditions and external disturbances. The suggested approach adapts the EDE gain in real time, allowing the system to compensate for uncertainties in vehicle dynamics and disturbances, which conventional car-following strategies often struggle to manage. By incorporating RL, the system can continuously learn and optimize its control behavior based on environmental changes, making it highly adaptive and responsive. Through extensive simulations in two scenarios, a single following vehicle and multiple following vehicles, the suggested method consistently outperformed conventional car-following strategies. In Scenario 1, with a single following vehicle, the RL-based control system demonstrated superior performance, maintaining a significantly lower velocity error and faster convergence to the leading vehicle's speed changes across various time intervals. Scenario 2, involving two following vehicles, showed that both vehicles maintained stable tracking of the leading vehicle, with the first following vehicle exhibiting faster responses and the second vehicle adapting effectively, despite a slight delay.
Quantitative analysis further confirmed the efficiency of the suggested scheme. The total velocity error was reduced by over 50%, and the total reward, which reflects control efficiency, improved significantly compared to conventional methods. These improvements translate into more accurate tracking of the leading vehicle, smoother transitions between speed changes, and greater overall stability of the vehicle platoon. The reduced acceleration magnitude also points to smoother driving behavior, improving passenger comfort and safety. Overall, the suggested CF (EDE-RL) method offers a robust and adaptive solution for car-following tasks in autonomous driving, enhancing both performance and safety in dynamic and uncertain traffic environments. Its ability to dynamically adjust to real-time changes makes it well-suited for real-world deployment in Intelligent Connected Vehicle systems. Future work could focus on extending this framework to larger vehicle platoons and incorporating more complex scenarios, such as lane changes and varying road conditions. Additionally, investigating the method’s scalability in more congested environments and exploring the integration with other vehicle systems, such as V2V communication, could further enhance its applicability to next-generation autonomous transportation systems.
The comparative analysis with benchmark models, including a conventional ACC model, CACC, and an RL-based control model without EDE, further confirmed the effectiveness of the suggested EDE-RL approach. In both single- and multi-vehicle scenarios, the EDE-RL model consistently outperformed these benchmarks in reducing velocity tracking error, smoothing acceleration transitions, and enhancing disturbance rejection capabilities. These improvements validate the robustness and adaptability of the suggested model over traditional car-following strategies, making it highly suitable for deployment in dynamic and unpredictable traffic environments.

Author Contributions

Conceptualization, A.T.; writing—original draft preparation, Y.L.; supervision, A.S.; software, A.T., Y.L. and A.S.; validation and formal analysis, Y.L.; writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to express thanks for the financial support from the National Natural Science Foundation of China (project 51775474) and (FPHP: 19221909D).

Data Availability Statement

Data are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Omeiza, D.; Webb, H.; Jirotka, M.; Kunze, L. Explanations in autonomous driving: A survey. IEEE Trans. Intell. Transp. Syst. 2021, 23, 10142–10162. [Google Scholar] [CrossRef]
  2. Yang, D.; Jiang, K.; Zhao, D.; Yu, C.; Cao, Z.; Xie, S.; Xiao, Z.; Jiao, X.; Wang, S.; Zhang, K. Intelligent and connected vehicles: Current status and future perspectives. Sci. China Technol. Sci. 2018, 61, 1446–1471. [Google Scholar] [CrossRef]
  3. Zhang, Y.; Xu, M.; Qin, Y.; Dong, M.; Gao, L.; Hashemi, E. MILE: Multiobjective integrated model predictive adaptive cruise control for intelligent vehicle. IEEE Trans. Ind. Inform. 2022, 19, 8539–8548. [Google Scholar] [CrossRef]
  4. Guo, J.; Luo, Y.; Li, K. Integrated adaptive dynamic surface car-following control for nonholonomic autonomous electric vehicles. Sci. China Technol. Sci. 2017, 60, 1221–1230. [Google Scholar] [CrossRef]
  5. Li, M.; Li, Z.; Wang, S.; Wang, B. Anti-disturbance self-supervised reinforcement learning for perturbed car-following system. IEEE Trans. Veh. Technol. 2023, 72, 11318–11331. [Google Scholar] [CrossRef]
  6. Zhou, Y.; Ahn, S. Robust local and string stability for a decentralized car following control strategy for connected automated vehicles. Transp. Res. Part B Methodol. 2019, 125, 175–196. [Google Scholar] [CrossRef]
  7. Wang, X.; Dai, B.; Miao, Q.; Nie, Y.; Shang, E. An Efficient Deep Reinforcement Learning-based Car-following Method via Rule-constrained Data Augmentation. IEEE Trans. Intell. Veh. 2024, 1–13. [Google Scholar] [CrossRef]
  8. Chen, J.; Shuai, Z.; Zhang, H.; Zhao, W. Path following control of autonomous four-wheel-independent-drive electric vehicles via second-order sliding mode and nonlinear disturbance observer techniques. IEEE Trans. Ind. Electron. 2020, 68, 2460–2469. [Google Scholar] [CrossRef]
  9. Chen, Q.; Zhou, Y.; Ahn, S.; Xia, J.; Li, S.; Li, S. Robustly string stable longitudinal control for vehicle platoons under communication failures: A generalized extended state observer-based control approach. IEEE Trans. Intell. Veh. 2022, 8, 159–171. [Google Scholar] [CrossRef]
  10. Zhao, K.; Jia, N.; She, J.; Dai, W.; Zhou, R.; Liu, W.; Li, X. Robust model-free super-twisting sliding-mode control method based on extended sliding-mode disturbance observer for PMSM drive system. Control. Eng. Pr. 2023, 139, 105657. [Google Scholar] [CrossRef]
  11. Yan, R.; Yang, D.; Wijaya, B.; Yu, C. Feedforward compensation-based finite-time traffic flow controller for intelligent connected vehicle subject to sudden velocity changes of leading vehicle. IEEE Trans. Intell. Transp. Syst. 2019, 21, 3357–3365. [Google Scholar] [CrossRef]
  12. Yan, R.; Yang, D.; Huang, J.; Jiang, K.; Jiao, X. Distributed car-following control for intelligent connected vehicle using improved super-twisting compensator subject to sudden velocity changes of leading vehicle. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6689–6698. [Google Scholar] [CrossRef]
  13. Mou, J.; Zhang, W.; Wu, C.; Guo, Q. Adaptive control of flapping-wing micro aerial vehicle with coupled dynamics and unknown model parameters. Appl. Sci. 2022, 12, 9104. [Google Scholar] [CrossRef]
  14. Liu, X.; Amour, B.S.; Jaekel, A. A reinforcement learning-based congestion control approach for V2V communication in VANET. Appl. Sci. 2023, 13, 3640. [Google Scholar] [CrossRef]
  15. Mu, C.; Wang, K.; Ma, S.; Chong, Z.; Ni, Z. Adaptive composite frequency control of power systems using reinforcement learning. CAAI Trans. Intell. Technol. 2022, 7, 671–684. [Google Scholar] [CrossRef]
  16. Wang, X.; Wang, S.; Liang, X.; Zhao, D.; Huang, J.; Xu, X.; Dai, B.; Miao, Q. Deep reinforcement learning: A survey. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 5064–5078. [Google Scholar] [CrossRef]
  17. Zhu, M.; Wang, Y.; Pu, Z.; Hu, J.; Wang, X.; Ke, R. Safe, efficient, and comfortable velocity control based on reinforcement learning for autonomous driving. Transp. Res. Part C Emerg. Technol. 2020, 117, 102662. [Google Scholar] [CrossRef]
  18. ElSamadisy, O.; Shi, T.; Smirnov, I.; Abdulhai, B. Safe, efficient, and comfortable reinforcement-learning-based car-following for AVs with an analytic safety guarantee and dynamic target speed. Transp. Res. Rec. J. Transp. Res. Board 2023, 2678, 643–661. [Google Scholar] [CrossRef]
  19. Wang, T.; Qu, D.; Wang, K.; Dai, S. Deep Reinforcement Learning Car-Following Control Based on Multivehicle Motion Prediction. Electronics 2024, 13, 1133. [Google Scholar] [CrossRef]
  20. Zhang, K.; Pu, T.; Zhang, Q.; Nie, Z. Coordinated Decision Control of Lane-Change and Car-Following for Intelligent Vehicle Based on Time Series Prediction and Deep Reinforcement Learning. Sensors 2024, 24, 403. [Google Scholar] [CrossRef]
  21. Li, M.; Li, Z.; Cao, Z. Enhancing Car-Following Performance in Traffic Oscillations Using Expert Demonstration Reinforcement Learning. IEEE Trans. Intell. Transp. Syst. 2024, 25, 7751–7766. [Google Scholar] [CrossRef]
  22. Shi, Y.; Wang, Z.; LaClair, T.J.; Wang, C.; Shao, Y.; Yuan, J. A novel deep reinforcement learning approach to traffic signal control with connected vehicles. Appl. Sci. 2023, 13, 2750. [Google Scholar] [CrossRef]
  23. Liu, J.; Cui, Y.; Duan, J.; Jiang, Z.; Pan, Z.; Xu, K.; Li, H. Reinforcement learning-based high-speed path following control for autonomous vehicles. IEEE Trans. Veh. Technol. 2024, 73, 7603–7615. [Google Scholar] [CrossRef]
  24. Brunoud, A.; Lombard, A.; Gaud, N.; Abbas-Turki, A. Application of Hybrid Deep Reinforcement Learning for Managing Connected Cars at Pedestrian Crossings: Challenges and Research Directions. Futur. Transp. 2024, 4, 579–590. [Google Scholar] [CrossRef]
  25. Huang, H.; Hu, Z.; Li, M.; Lu, Z.; Wen, X. Cooperative Optimization of Traffic Signals and Vehicle Speed Using a Novel Multi-agent Deep Reinforcement Learning. IEEE Trans. Veh. Technol. 2024, 73, 7785–7798. [Google Scholar] [CrossRef]
  26. Qin, P.; Tan, H.; Li, H.; Wen, X. Deep Reinforcement Learning Car-Following Model Considering Longitudinal and Lateral Control. Sustainability 2022, 14, 16705. [Google Scholar] [CrossRef]
  27. Razzaghpour, M.; Valiente, R.; Zaman, M.; Fallah, Y.P. Predictive model-based and control-aware communication strategies for cooperative adaptive cruise control. IEEE Open J. Intell. Transp. Syst. 2023, 4, 232–243. [Google Scholar] [CrossRef]
  28. Liu, Y.; Wang, W. A safety reinforced cooperative adaptive cruise control strategy accounting for dynamic vehicle-to-vehicle communication failure. Sensors 2021, 21, 6158. [Google Scholar] [CrossRef]
  29. Liu, H.; Jiang, R. Improving comfort level in traffic flow of CACC vehicles at lane drop on two-lane highways. Phys. A Stat. Mech. Its Appl. 2021, 575, 126055. [Google Scholar] [CrossRef]
  30. Liu, H.; Jiang, R. Attitude stabilization of flexible spacecrafts via extended disturbance observer based controller. Acta Astronaut. 2017, 133, 73–80. [Google Scholar]
Figure 1. Two scenarios of CF solution with EDE adapted by reinforcement learning.
Figure 2. Training convergence of the RL agent over 10,000 episodes, showing the average reward per episode stabilizing as the agent learns optimal policies for minimizing velocity error and managing external disturbances.
Figure 3. Velocity comparison (0–50 s) for Scenario 1.
Figure 4. Velocity comparison (100–150 s) for Scenario 1.
Figure 5. Velocity comparison (300–350 s) for Scenario 1.
Figure 6. Velocity comparison (0–50 s) for Scenario 2.
Figure 7. Velocity comparison (100–150 s) for Scenario 2.
Figure 8. Velocity comparison (300–350 s) for Scenario 2.
Figure 9. Velocity tracking comparison for Scenario 1, demonstrating the proposed EDE-RL model's superior performance.
Figure 10. Velocity tracking comparison for Scenario 2, highlighting the improved tracking capabilities of the EDE-RL model for multiple vehicles.
Table 1. Provides a detailed summary of the existing works.
References | Algorithms | Performance Metrics | Objective
[5] | Anti-Disturbance Self-supervised RL | Control efficiency and driving comfort | Anti-disturbance policy for car-following, mitigating exogenous and endogenous disturbances
[19] | Twin-Delayed Deep Deterministic Policy Gradient (TD3) | Control efficiency and smoothness | Car-following policy using RL to reduce disturbances
[20] | Deep Deterministic Policy Gradient (DDPG) | Increased driving velocity and lane stability | Hierarchical model for lane-change and car-following
[21] | Expert Demonstration RL (EDRL) | Enhanced reward performance and training speed | Stabilize and accelerate DRL training for car-following control in traffic oscillations
[22] | Deep Reinforcement Learning (DRL) | Improved speed, queue length, and wait time | Traffic signal control using compressed traffic states
[23] | Deep Soft Actor–Critic | Improved speed and path-following accuracy | Control method for high-speed path following
[24] | Hybrid DRL | Model adaptability to varying road conditions | Scalable control model for vehicle speed at pedestrian crossings
[25] | Cooperative Optimization Multi-agent DRL (COTV-MADRL) | Reduced travel time and pollution | Coordinated control for traffic signals and vehicle speed to improve flow and efficiency
[26] | DDPG | Improved safety, comfort, and traffic flow | Car-following model with longitudinal and lateral control for adaptive cruise control
Table 2. Parameters of DDPG.
Parameter | Configuration
Learning rate (actor) | 0.0002
Learning rate (critic) | 0.0005
Replay buffer size | 750,000
Batch size | 64
Soft update coefficient (τ) | 0.002
Discount factor (γ) | 0.98
Table 3. Parameters of CF.
Parameter | Configuration
c1 | 2.5
c2 | 1.0
c3 | 0.15
ς1 | 0.02
k | 0.9
amax | 2.2
Table 4. Summary of the velocity error for both the conventional and RL-based car-following methods in Scenario 1.
Time Interval (s) | Velocity Error (Conventional) (m/s) | Velocity Error (Suggested RL) (m/s)
0–50 | 0.95 | 0.35
100–150 | 1.25 | 0.5
300–350 | 0.80 | 0.30
Table 5. Summary of the velocity errors for the two following vehicles under the suggested RL-based car-following method in Scenario 2.
Time Interval (s) | Velocity Error (Following Vehicle 1) (m/s) | Velocity Error (Following Vehicle 2) (m/s)
0–50 | 0.40 | 0.85
100–150 | 0.55 | 1.10
300–350 | 0.45 | 0.95
Table 6. Comparison of performance metrics for Scenario 1.
Metric | ACC Model | CACC Model | RL Model (no EDE) | Suggested EDE-RL Model
Velocity tracking error (m/s) | 0.95 | 0.70 | 0.75 | 0.35
Acceleration jerk (m/s³) | 2.5 | 1.8 | 2.0 | 1.2
Disturbance rejection (%) | 50 | 65 | 75 | 90
Table 7. Comparison of performance metrics for Scenario 2.
Metric | ACC Model | CACC Model | RL Model (no EDE) | Suggested EDE-RL Model
Velocity tracking error (m/s) | 1.50 | 0.80 | 0.90 | 0.85
Acceleration jerk (m/s³) | 2.8 | 2.0 | 2.3 | 1.3
Disturbance rejection (%) | 45 | 68 | 70 | 88
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
