1. Introduction
The increasing severity of traffic congestion and vehicle collisions in recent years has placed substantial pressure on modern transportation systems. To address these challenges, intelligent transportation systems (ITS) have been proposed, offering a unified framework that integrates autonomous driving technologies, vehicle-to-everything (V2X) wireless communications, and advanced computing into traffic management. This integration facilitates seamless vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication, thereby enabling the formation of vehicle platoons that maintain desired inter-vehicle spacing and travel velocities.
Compared to traditional single-vehicle control strategies, the adoption of platoon-based control extends the optimization objective to the entire vehicle group. This holistic coordination allows for significantly reduced inter-vehicle gaps, leading to improved traffic throughput, enhanced longitudinal stability, and notable gains in overall energy efficiency for multi-vehicle systems [1,2,3].
Distributed model predictive control (DMPC) has emerged as a promising control strategy due to its capability to explicitly handle system constraints and address multi-objective optimization problems in a systematic manner. By leveraging future system predictions, DMPC enables comprehensive performance optimization over a receding horizon. As a result, it has been extensively investigated and applied in a wide range of domains, particularly in the cooperative control of vehicle platoons. Zheng et al. [
4] established a four-element modeling framework for vehicle platoon control, comprising node dynamics, information flow topology, platoon geometric configuration, and distributed controllers. Concurrently, a distributed model predictive controller suitable for nonlinear heterogeneous platoons was designed. Zhang et al. [
5] proposed a self-organizing cooperative control strategy, where merging vehicles form local platoons via a dynamic critical interval algorithm, and a DMPC framework coordinates their integration into the main formation. However, this approach still suffers from a high computational burden due to nonlinear modeling and DMPC optimization, relies on idealized assumptions such as homogeneity and perfect communication, and lacks sufficient validation under realistic conditions. Similarly, Hu et al. [
6] proposed a distributed economic MPC strategy with switched feedback gains to improve fuel efficiency in vehicle platoons, and validated its stability and performance through theoretical analysis and simulations. However, the proposed EMPC strategy exhibits limitations in computational efficiency for real-time implementation, relies on idealized communication assumptions, and lacks comprehensive experimental validation under realistic disturbances and heterogeneous vehicle dynamics. Qiang et al. [
7] proposed a DMPC scheme for heterogeneous vehicle platoons with inter-vehicle distance constraints. By adopting a parallel optimization framework with alternating updates between even- and odd-indexed vehicles and a tailored terminal inequality constraint, their approach overcomes key limitations of conventional methods, such as infeasibility in the absence of leader information, difficulty in maintaining spacing constraints, and limited adaptability to varying leader speeds. Pi et al. [
8] further developed a DMPC-based energy-efficient strategy for electric platoons and integrated braking force distribution with energy recovery, which improved stability, tracking accuracy, and energy efficiency. Cen et al. [
9] proposed a DMPC scheme for nonlinear platoon systems that employs a fused state estimator to mitigate missing state information caused by communication delays. However, the proposed framework relies on a linearized vehicle model and lacks a rigorous theoretical analysis of closed-loop and string stability under the combined effects of nonlinear dynamics, communication delays, and state estimation. Xu et al. [
10] tackled critical challenges in highway multi-platoon cooperative control, namely poor system architecture scalability, low control accuracy under non-ideal communication conditions, and inadequate leader vehicle control adaptability in mixed traffic environments. Their solution comprises a system architecture combining static management with dynamic switching capability, alongside a delay-compensated DMPC method and a robust MPC strategy for the leader vehicle under disturbances from human-driven vehicles. Zhao et al. [11] and Zhao et al. [12] presented systematic cloud-based model predictive control schemes with novel delay compensation mechanisms, rigorously guaranteeing both asymptotic and string stability for vehicle platoons under heterogeneous communication delays, thereby enhancing tracking accuracy, fuel efficiency, and practical applicability. Wen et al. [13] proposed a novel linear control law for connected and automated vehicle platoons that incorporates a vehicle speed prediction model to effectively mitigate the detrimental effects of time-varying communication delays, thereby enhancing platoon stability and safety. While the aforementioned studies successfully employed DMPC to achieve effective control of vehicle platoon systems in their respective scenarios, the computational burden of solving the DMPC optimization problem becomes significant when complex vehicle dynamics with coupled longitudinal-lateral nonlinearities are considered. Consequently, there is a pressing need to investigate computationally efficient solution algorithms to ensure the real-time performance of cooperative platoon control.
The implementation of DMPC requires executing model predictions and solving receding horizon optimization problems at every sampling instant, which imposes a substantial computational burden on vehicle platoon systems. To address this issue, increasing attention has recently been directed toward event-triggered control mechanisms, which aim to reduce unnecessary computations by updating control actions only when certain state-dependent conditions are met. This paradigm offers the potential to alleviate resource consumption while maintaining satisfactory control performance. Zhao et al. [
14] proposed an adaptive event-triggered safety control approach that dynamically adjusts the event-triggering threshold based on abrupt changes in the preceding vehicle’s behavior to optimize communication bandwidth utilization. By integrating an adaptive cost function and a controller gain optimization algorithm, the method ensures the strict string stability of vehicle platoons under limited bandwidth conditions, thereby enhancing both safety and control performance of the platoon. Luo et al. [
15] proposed an event-triggered tube-based DMPC with a shrinking horizon and terminal region to ensure robustness under disturbances and delays. The triggering rule based on control plan variations reduces communication, but may not activate under large errors, posing safety risks. Han [
16] proposed a bandwidth-aware scheduling strategy integrated with event-triggered DMPC to alleviate communication and computational load by dynamically adjusting triggering thresholds and optimization frequency. However, the proposed mechanisms lack practical implementation details for bandwidth state estimation, exhibit high parameter dependency, and insufficiently address robustness concerns under communication delays, packet losses, and potential event storms. Similarly, Selvaraj et al. [
17] developed a memory-based event-triggered platooning control scheme that utilizes only position measurements and dynamic thresholds to improve communication efficiency while guaranteeing closed-loop stability through Lyapunov-based linear matrix inequality (LMI) formulations. Chen et al. [
18] proposed an asynchronous self-triggered stochastic distributed MPC scheme that ensures probabilistic constraint satisfaction and quadratic stability for vehicle platoons under stochastic disturbances, while substantially reducing communication load in vehicular ad hoc networks. Du et al. [
19] proposed a hierarchical dynamic event-triggered control protocol that guarantees zero-error tracking for heterogeneous vehicle platoons with actuator uncertainties and a nonzero-input leader, substantially reducing communication costs. However, the analysis and design are conducted under an ideal communication network assumption, leaving the system’s stability and performance vulnerable to practical network-induced imperfections such as communication delays and packet dropouts. Wang et al. [
20] proposed a novel periodic event-triggered fault detection filter for vehicle platoons, which effectively saves communication resources while ensuring asymptotic stability and H∞ performance of the residual system under actuator faults and external disturbances. Although event-triggered mechanisms can effectively alleviate computational and communication burdens, their effectiveness may deteriorate in certain application scenarios.
A core advantage of the alternating direction method of multipliers (ADMM) in solving quadratic programming (QP) problems lies in its powerful decomposition capability. The algorithm decomposes complex problems into computationally tractable subproblems, allowing for parallel processing in large-scale applications. It is particularly effective at decoupling and handling complex constraints while maintaining high memory efficiency and ease of parallelization. Consequently, ADMM serves as a powerful framework for large-scale or structured quadratic programming. Li [
21] proposed a heuristic adaptive ADMM approach for solving MPC problems, effectively enhancing the convergence rate of the algorithm. However, the adaptive parameters still require manual tuning, and improper settings may increase the convergence time. Tang [
22] extended Li’s heuristic method to a multi-UAV simulation platform, demonstrating its potential and effectiveness for multi-agent systems and providing a novel approach for multi-agent coordinated control. Mallick et al. [
23] developed a DMPC strategy based on a switching ADMM framework for piecewise affine (PWA) systems. This approach addresses the inherent nonconvex optimization problem by decomposing it into a sequence of convex subproblems. It concurrently coordinates the consensus of coupled states among subsystems, significantly reducing the computational burden while simultaneously ensuring closed-loop stability and recursive feasibility. Feng [
24] addressed longitudinal-lateral coupling and tire nonlinearities using Koopman operator theory. They employed the ADMM to solve the resulting QP problem, achieving longitudinal tracking for vehicle platoons under complex driving conditions. Concurrently, Bai et al. [
25] proposed a parallel DMPC framework based on ADMM, which effectively addresses the optimization problem with coupled safety constraints in cooperative control for connected automated vehicle (CAV) platooning, achieving both computational efficiency and robust safety guarantees. However, the framework lacks a thorough analysis of its convergence rate and computational scalability for large-scale platoons under real-world communication delays and dynamic topologies. More broadly, within the aforementioned studies utilizing ADMM, investigations into its convergence properties remain limited. Furthermore, the design of adaptive parameters is often heavily reliant on domain expertise, and suboptimal parameter selection may prevent the algorithm from reaching its best performance.
The main contributions are summarized as follows.
To overcome the excessive computation time associated with solving the core QP problem within traditional DMPC, which may result in delayed control actions and potential collisions, we design a residual-feedback-based adaptive ADMM algorithm. This algorithm dynamically adjusts the penalty parameter based on primal and dual residuals observed during the iterative process, thereby significantly enhancing computational efficiency within the control cycle.
To further reduce the computational load of the control algorithm and conserve valuable onboard computing resources, an event-triggered mechanism is introduced. This mechanism departs from conventional time-triggered, fixed-period computation paradigms. Instead, the DMPC optimization is initiated only when the system state satisfies predefined triggering conditions.
3. Adaptive ADMM-Based Event-Triggered Distributed Model Predictive Control
3.1. Prediction Model
The predictive model estimates future system states over a finite prediction horizon, utilizing the current state and historical data. For the convenience of controller design, we further construct the augmented variable
,
, Equation (5) can be obtained as follows:
where
,
,
. Here,
m = 1 and
n = 3 denote the dimensions of the control input and the state vector, respectively.
directly reflects the tracking error with respect to the designated reference vehicle, since
contains the error state
. Hence, minimizing
in the cost function is equivalent to minimizing the tracking error.
In the DMPC scheme, the prediction horizon and control horizon are typically denoted as
and
, respectively. Assuming that the state variable of vehicle
at time step
is represented by
, the following prediction equation can be derived based on Equation (8):
where
,
,
,
.
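The prediction equation above follows the standard lifted form in which the stacked predicted states depend linearly on the initial state and the stacked inputs. A minimal numpy sketch of how such prediction matrices can be assembled (the function name and the convention of holding the input constant beyond the control horizon are illustrative assumptions, not the paper's exact matrices):

```python
import numpy as np

def build_prediction_matrices(A, B, Np, Nc):
    """Stack the dynamics x_{k+1} = A x_k + B u_k over a prediction
    horizon Np with control horizon Nc, so that the predicted state
    sequence satisfies X = Phi @ x0 + Gamma @ U (inputs are held at
    their last value for steps beyond Nc)."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(Np)])
    Gamma = np.zeros((Np * n, Nc * m))
    for i in range(Np):          # row block i predicts x_{i+1}
        for j in range(i + 1):   # contribution of input u_j
            col = min(j, Nc - 1) # inputs beyond Nc reuse the last column
            Gamma[i*n:(i+1)*n, col*m:(col+1)*m] += \
                np.linalg.matrix_power(A, i - j) @ B
    return Phi, Gamma
```

Building these matrices once per sampling instant lets the cost function in the next subsection be written directly over the stacked input vector.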
3.2. Design of the Cost Function
Building upon the previously described vehicle platoon model, a cost function is defined for each vehicle by incorporating its current state as well as the states of neighboring and leading vehicles. To ensure feasible and stable control performance while avoiding infeasibility issues, a slack variable
is introduced, which guarantees the existence of a feasible solution at each control step. Then, the objective function is designed as follows:
where
represents the weight matrix associated with the tracking performance,
represents the weighting matrix penalizing the variation in control inputs, and
denotes the desired reference trajectory.
The formulated cost function can be transformed into a standard QP problem for optimization. However, traditional QP solvers such as interior-point methods and active-set methods often fail to satisfy the real-time computational requirements of large-scale, multi-controller systems like vehicle platoons. Therefore, a more efficient optimization approach is required to enable fast and accurate tracking of the leader vehicle by multiple follower vehicles.
3.3. Residual-Based Adaptive ADMM Algorithm
The ADMM is a robust optimization framework that decomposes centralized control problems into smaller subproblems, enabling parallel solutions. It is particularly effective for distributed convex optimization, progressively converging to a globally optimal (or acceptably suboptimal) solution through iterative updates. ADMM has demonstrated strong performance in solving large-scale constrained optimization problems. It is typically applied to problems formulated in the following standard form:
where
and
represent convex functions,
and
are the variables to be optimized, and
denotes the equality constraints to be satisfied.
By introducing the dual variable
, the augmented Lagrangian function corresponding to Equation (12) is constructed as follows:
In a typical ADMM framework, the iterative process involves successive updates of the primal variables
and
, the dual variable
and the penalty parameter
. To reduce computational complexity and facilitate implementation, the dual variable is often replaced by its scaled form,
, leading to the following update rules.
During the iterative process, the two primal variables are updated alternately, which contributes to improved computational efficiency and faster convergence.
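The scaled-form updates can be illustrated on a small box-constrained QP (a minimal numpy sketch; the problem instance and function name are illustrative, not the platoon formulation):

```python
import numpy as np

def admm_scaled(P, q, lo, hi, rho=1.0, iters=300):
    """Scaled-form ADMM for min 0.5 x'Px + q'x subject to lo <= x <= hi,
    using the splitting f(x) = 0.5 x'Px + q'x, g(z) = indicator of the
    box, and the consensus constraint x = z."""
    n = len(q)
    z = np.zeros(n)
    u = np.zeros(n)                      # scaled dual variable
    K = P + rho * np.eye(n)              # x-update system matrix
    for _ in range(iters):
        x = np.linalg.solve(K, -q + rho * (z - u))  # x-minimization
        z = np.clip(x + u, lo, hi)                  # z-update: projection
        u = u + x - z                               # scaled dual update
    return z
```

Note that each update uses the most recent value of the other primal variable, which is the alternating structure described above.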
Existing adaptive ADMM algorithms typically employ heuristic ratio-based rules to adjust the penalty parameter adaptively at each iteration, based on intermediate solutions. However, as a generalization of the standard ADMM, the performance of such ratio-adaptive ADMM methods is highly sensitive to the proper selection of tuning parameters, which are usually specified through prior calibration by expert users. Improper selection of these adaptive parameters by non-expert users may significantly degrade the convergence speed. To address this limitation, this section proposes a residual-ratio-based adaptive ADMM method, which dynamically adjusts the penalty parameter according to the relative magnitudes of the primal and dual residuals. This enables automatic parameter tuning and accelerates convergence, without requiring any user-defined parameters.
In distributed optimization problems, the primal residual represents the violation of the consensus constraints, while the dual residual reflects the convergence behavior of the dual variables. Traditional ADMM employs a fixed penalty parameter, which may lead to an imbalance between these two types of residuals. Specifically, a large penalty parameter tends to reduce the primal residual but significantly increases the dual residual, potentially causing oscillations, whereas a small penalty parameter results in a large primal residual and a small dual residual. The stopping condition of the algorithm depends on both residuals, so throughout the convergence process it is crucial to keep both sufficiently small in order to accelerate convergence and improve overall efficiency.
The algorithm is considered to have converged when both the primal and dual residuals satisfy the following conditions:
where
,
.
Accordingly, a dynamic balancing criterion is established as follows:
To enhance algorithm robustness, the penalty parameter
is adaptively adjusted to maintain the primal and dual residuals within the same order of magnitude. Based on the dynamic balancing criterion, a normalized residual ratio is introduced. Specifically, the normalized residual ratio between the primal and dual residuals is defined, and the corresponding expression can be formulated as follows:
where
,
denote the initial primal and dual residuals, respectively.
The adaptive penalty parameter update rule, based on the geometric mean principle, is formulated as follows:
The geometric mean principle ensures that the proposed adaptive rule maintains a balance between global search and local convergence. Specifically, when the primal residual dominates, the penalty parameter is increased to enforce constraint satisfaction, while it is decreased to mitigate oscillations when the dual residual dominates.
Meanwhile, to ensure the continuity of the Lagrange multiplier
, the dual variables are scaled synchronously as follows:
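The combined penalty adaptation and synchronous dual rescaling can be sketched as follows (assuming numpy; the exact forms of Equations (16) and (17) are not reproduced here, so the normalized ratio, the square-root geometric-mean step, and the bound mu are assumptions of this sketch):

```python
import numpy as np

def adapt_rho(rho, r_prim, r_dual, u, r0_prim, r0_dual, mu=10.0):
    """Residual-ratio-based penalty adaptation (a sketch of the idea in
    Eqs. (16)-(17), whose exact form is assumed here). The primal and
    dual residuals are normalized by their initial values; rho is then
    scaled by the square root of their ratio (geometric-mean balancing),
    and the scaled dual variable u is rescaled so that the underlying
    multiplier lambda = rho * u stays continuous."""
    norm_prim = r_prim / max(r0_prim, 1e-12)
    norm_dual = r_dual / max(r0_dual, 1e-12)
    scale = np.sqrt(norm_prim / max(norm_dual, 1e-12))
    scale = float(np.clip(scale, 1.0 / mu, mu))  # limit per-step change
    return rho * scale, u / scale
```

With this rule a dominant primal residual raises the penalty and a dominant dual residual lowers it, keeping both residuals at the same order of magnitude as required by the balancing criterion above.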
The cost function described in Equation (9) is reformulated into a QP form, which is expressed as follows:
where
,
,
,
,
.
Based on the derived standard QP formulation for the longitudinal tracking control problem of the vehicle platoon, dual variables are introduced to reformulate the problem into the canonical ADMM form. The resulting ADMM formulation is given as follows:
where
, and
is the indicator function, defined as
.
Based on the Lagrange multiplier method and the preceding ADMM formulation, the augmented Lagrangian function is given as follows:
Based on Equation (11), the scaled dual variable
is introduced. Then, by combining the first-order optimality conditions for the primal variable
and the auxiliary variable
, the iterative update rules are derived as follows:
To enhance the convergence rate and computational efficiency of the ADMM algorithm, a relaxation parameter
is introduced. Additionally, the solution from the previous iteration is leveraged in the update process to accelerate subsequent iterations. The modified update equations are given as follows:
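A sketch of a relaxed, warm-started ADMM solver for a box-constrained QP (assuming numpy; the relaxation value, the problem instance, and the function name are illustrative, not the paper's exact update (23)):

```python
import numpy as np

def admm_qp_relaxed(P, q, lo, hi, rho=1.0, alpha=1.6, iters=300,
                    z0=None, u0=None):
    """Box-constrained QP solved by scaled ADMM with over-relaxation
    (relaxation parameter alpha in (1, 2)) and optional warm starting
    from a previous solution -- a sketch of the acceleration ideas
    described above."""
    n = len(q)
    z = np.zeros(n) if z0 is None else np.asarray(z0, float).copy()
    u = np.zeros(n) if u0 is None else np.asarray(u0, float).copy()
    K = P + rho * np.eye(n)                     # factor once per solve
    for _ in range(iters):
        x = np.linalg.solve(K, -q + rho * (z - u))
        x_hat = alpha * x + (1.0 - alpha) * z   # over-relaxed point
        z = np.clip(x_hat + u, lo, hi)          # projection onto the box
        u = u + x_hat - z                       # scaled dual ascent
    return z, u
```

In a receding-horizon setting, passing the previous solution as `z0`/`u0` implements the warm start: successive problems differ only slightly, so far fewer iterations are needed per sampling instant.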
The computational complexity of the proposed ADMM-MPC algorithm is dominated by the primal-variable update, which involves solving a linear system. Let the decision variable dimension be . Each iteration of ADMM requires operations, and the total complexity is , where is the number of iterations. Compared to classical active-set methods (ASM) and interior-point methods (IPM), ADMM exhibits a similar per-iteration cost but typically converges in fewer iterations in practice, particularly when warm-start strategies are employed.
Extending to the distributed case with vehicles, each vehicle solves its local QP of dimension at every iteration. The total computational complexity of the multi-vehicle system is therefore .
3.4. Design of Event-Triggered Mechanism
In the development of cooperative control systems for vehicle platoons, the traditional time-triggered mechanism was initially applicable in the early stages when in-vehicle networks were relatively underdeveloped. However, with the evolution of intelligent transportation systems towards large-scale and dynamic deployments, the limitations of time-triggered mechanisms have become more apparent. Specifically, their rigid resource utilization leads to unnecessary computation and communication even when the platoon is in a steady cruising state or the tracking error has converged within an acceptable tolerance, thereby resulting in wasted computational resources.
In contrast, the event-triggered mechanism effectively addresses the trade-off between limited computational resources and control timeliness by transitioning from a time-driven to a state-driven paradigm. It has demonstrated significant advantages in reducing resource consumption, lowering computation frequency, and maintaining system stability. Specifically, within the framework of DMPC, where the controller performs receding horizon optimization to generate the optimal control sequence, the triggering condition is defined as follows:
where
denotes the control input at the time step
, and
represents the most recent triggering instant. The position and velocity components of the output are selected to formulate the triggering condition:
The triggering threshold regulates the triggering frequency. When the threshold is set to zero, the event-triggered mechanism reduces to the conventional time-triggered scheme; as the threshold increases, the triggering frequency decreases accordingly.
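The dual-state check can be sketched as follows (the threshold form of condition (24) is an assumption of this sketch; a non-strict comparison is used so that zero thresholds recover the time-triggered case):

```python
def should_trigger(y, y_last, eps_pos, eps_vel):
    """Dual-state triggering check: re-solve the DMPC problem only when
    the position or velocity deviation since the last triggering
    instant reaches its threshold. With both thresholds set to zero the
    check always fires, recovering the time-triggered scheme."""
    return (abs(y[0] - y_last[0]) >= eps_pos or
            abs(y[1] - y_last[1]) >= eps_vel)
```

Larger thresholds trade tracking responsiveness for fewer optimizations, which is the tuning knob discussed above.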
Following the description of the core algorithmic components, the overall workflow of the proposed event-triggered adaptive ADMM-DMPC framework is illustrated in
Figure 2.
The system accepts a reference trajectory as input, while the event-triggered mechanism determines when local optimization should be executed. Each vehicle formulates and solves its local QP using the a-ADMM algorithm in coordination with its neighbors. This workflow highlights the closed-loop integration from trajectory planning to distributed control execution.
To further clarify the real-time operation of the event-triggered DMPC scheme, the detailed algorithmic steps at each sampling instant are summarized as follows:
1. Check the event-triggering condition (24). If the triggering condition is satisfied, each vehicle initiates the distributed ADMM-based DMPC optimization; otherwise, the vehicles directly reuse the previously computed optimal control sequence, apply the control input corresponding to the current sampling instant, and proceed to step 8.
2. For each vehicle, formulate the local quadratic programming problem by expressing the objective function in the form of (10) according to its own state-space model and the coupling constraints imposed by the communication topology.
3. Reformulate (10) into the decomposed form suitable for ADMM optimization (cf. Equation (20)).
4. Initialize the ADMM solver with the optimal solutions obtained at the previous triggering instant (warm start).
5. Update the primal and dual variables iteratively according to the distributed ADMM updating rules, as given in (23).
6. Adaptively update the penalty parameter based on the normalized residual ratio (16) and the update rule (17), so as to dynamically balance the primal and dual residuals and enhance convergence speed.
7. Check the termination criteria (14). If satisfied, terminate the iteration and transmit the control input corresponding to the current sampling instant from each vehicle's optimized sequence to the platoon system; otherwise, continue the distributed iteration until the maximum number of iterations is reached.
8. Advance to the next sampling instant and repeat step 1.
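The control flow of one sampling instant can be sketched as follows (a structural sketch only; `solve_local_qp` is a hypothetical callback standing in for the ADMM-based local optimization, and the threshold form of the trigger is an assumption):

```python
import numpy as np

def sampling_instant(x, u_seq, k, k_last, x_ref, eps, solve_local_qp):
    """Structural sketch of one sampling instant of the event-triggered
    scheme: trigger check, optional re-optimization, and reuse of the
    previously computed control sequence otherwise."""
    # Trigger check on position/velocity deviation (assumed form)
    if np.any(np.abs(x[:2] - x_ref[:2]) >= eps):
        u_seq = solve_local_qp(x)    # rebuild and solve the local QP
        k_last = k                   # record the triggering instant
    # Apply the input at offset k - k_last within the stored sequence
    idx = min(k - k_last, len(u_seq) - 1)
    return u_seq[idx], u_seq, k_last
```

Between triggering instants, later entries of the stored optimal sequence are applied in open loop, which is what saves computation relative to the time-triggered scheme.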
6. Conclusions
This paper addresses the critical challenges of limited onboard computational resources and stringent real-time control requirements in DMPC-based vehicle platoon systems by presenting an event-triggered adaptive ADMM-DMPC framework. A longitudinal vehicle dynamics model and a communication topology are established to facilitate the implementation of the proposed control strategy under the DMPC scheme. Specifically, a residual-based adaptive ADMM algorithm is developed, which dynamically adjusts the penalty parameter based on residual scaling. This significantly accelerates the solution of quadratic programming subproblems in DMPC, thereby ensuring real-time performance. Moreover, the proposed adaptive ADMM method enhances tracking accuracy and improves overall control quality. A dual-state event-triggering mechanism is designed, where optimization is triggered only when position or velocity states exceed predefined thresholds. This substantially reduces unnecessary computations and alleviates computational burden during the control process. The proposed strategy is thoroughly validated through numerical simulations.
To validate the effectiveness of the proposed method, we established a simulation platform within the Matlab/Simulink environment and conducted system simulations. Compared with the velocity-based triggering strategy in [16], the event-triggered mechanism reduces the average error, maximum error, and triggering frequency by 17.5%, 26.6%, and 20.7%, respectively. The ADMM-based distributed optimization further demonstrates improved computational efficiency, with reductions in average computation time of 30.4% and 62.8%, and reductions in maximum computation time of 65.8% and 38.1%, relative to [21,26]. In terms of tracking accuracy, the proposed approach achieves average and maximum errors of 0.033 m and 0.321 m, corresponding to improvements of up to 32.7% and 38.7% over existing methods. These results indicate that the method significantly reduces both the computational time per optimization step and the total number of optimizations performed, effectively mitigating computational demands on vehicle-side processors and markedly improving the real-time performance of cooperative vehicle platooning.
For future work, it is of interest to deploy the proposed algorithm on cloud computing platforms, which would further conserve onboard computational resources. Furthermore, this study does not consider coupled lateral and longitudinal control, and it is promising to extend the proposed algorithm to the combined lateral and longitudinal control of vehicle platoons.