Article

Cooperative Cruise Control for Intelligent Connected Vehicles: A Bargaining Game Approach

by Miguel F. Arevalo-Castiblanco 1,2, Jaime Pachon 2, Duvan Tellez-Castro 2 and Eduardo Mojica-Nava 2,*

1 Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
2 Department of Electrical and Electronics Engineering, Universidad Nacional de Colombia, Bogota 111321, Colombia
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(15), 11898; https://doi.org/10.3390/su151511898
Submission received: 5 July 2023 / Revised: 26 July 2023 / Accepted: 1 August 2023 / Published: 2 August 2023
(This article belongs to the Special Issue Autonomous Systems and Intelligent Transportation Systems)

Abstract

Intelligent transportation systems (ITSs) are at the forefront of advancements in transportation, offering enhanced efficiency, safety, and environmental friendliness. To enable ITSs, autonomous systems play a pivotal role, contributing to the development of autonomous driving, data-driven modeling, and multiagent control strategies that establish sustainable and coordinated traffic management. The integration of connected and automated vehicles has garnered significant attention as a potential solution for alleviating traffic congestion and improving fuel economy, achieved through global route optimization and cooperative driving. This study addresses the cooperative cruise control problem from a predictive control perspective. Online decision making is employed during the driving process, utilizing information gathered from the network. By employing bargaining games to establish an operating agreement among vehicles, we formalize a synchronization approach based on predictive control theory. These developments are then tested in an emulation environment within a hardware-in-the-loop system. The results reveal that the proposed cruise control achieves convergence toward the desired reference signal, demonstrating synchronized platoon behavior and correct bargaining outcomes. These findings underscore the effectiveness and potential of distributed model predictive control (DMPC) with bargaining games in coordinating and optimizing vehicular networks, paving the way for future research and development in this promising area.

1. Introduction

In recent decades, concerns regarding vehicle transportation have continued to escalate, and issues such as accidents, traffic congestion, and air pollution have surged, particularly with respect to environmental impacts [1,2,3]. In response to these pressing issues, cooperative networks among vehicles have gained significant traction. The concept of cooperative networks fosters synchronized and efficient traffic management, further minimizing environmental impacts [4]. These innovative solutions hold tremendous promise for mitigating the adverse effects of the expanding vehicular industry and steering transportation towards a more sustainable and environmentally conscious future [5]. In this way, Intelligent Transportation Systems (ITSs) have emerged as the leading transportation technology for advancing toward more efficient, safer, and greener future transportation [6].
Autonomous systems have been recognized as a cornerstone for enabling ITSs through technological advances such as autonomous driving, data-driven modeling, and multiagent control strategies that achieve sustainable, coordinated traffic management. Connected and Automated Vehicles (CAVs) have received great attention as a potential way to reduce traffic congestion and improve fuel economy through global route optimization and cooperative driving [7]. It has been suggested that theoretical control methods should accompany the future deployment of autonomous systems in ITSs, in particular for the control of multiple distributed heterogeneous vehicles. For this reason, cooperative control strategies can serve as an enabling ITS approach [8].
Several cooperative control approaches have recently been proposed for networked vehicles [9]. The first works focused on cruise control without employing inter-vehicle communications; the rise of wireless communications then allowed a network of vehicles to be treated as a cooperative networked control system [3]. In the context of cooperative control, the Cooperative Adaptive Cruise Control (CCAC) concept has been developed using synchronization laws based on a reference acceleration profile [10]. The CCAC technique is based on string stability, where vehicles maintain a predefined distance at a constant speed, but it might not result in an optimal response [11]. Likewise, ref. [12] presents a recent survey of experimental results on cooperative control in an intelligent connected vehicle environment, where the importance of cooperative control for urban mobility is emphasized. Active control methods for intelligent connected vehicles, and indirect control of regular vehicles through intelligent vehicles, are studied for multi-intersection coordinated heterogeneous vehicle traffic flow. On the other hand, in automatic driving systems, where cooperative perception is essential for enhanced safety and decision making, integrating cooperative cruise control with cooperative perception can lead to more intelligent and efficient autonomous driving systems. Notably, prior research on hybrid object detection and tracking for cooperative perception using 3D LiDAR [13] has demonstrated the benefits of cooperative perception through LiDAR-based object detection and tracking. In addition, hardware-in-the-loop (HIL) emulation, in which embedded systems simulate a dynamic environment, enables the validation of real-time methods when models are unavailable, making it suitable for scenarios such as autonomous vehicle networks [14]. HIL has been widely used in electrical and communication systems for validating future installations [15,16]. However, only a few works have explored the application of bargaining games in cooperative control for distributed systems, particularly in vehicle platoons and multi-agent systems, and they have not combined predictive control techniques with game theory in vehicles [17].
Within the active control strategies to be used with ITSs, Model Predictive Control (MPC) methodologies can find the optimal values in terms of energy or performance [18]. For MPC algorithms in cooperative scenarios, distributed predictive control strategies for agent synchronization have been developed using inter-agent communication and constructing cost functions involving network information [19,20]. On the other hand, an MPC involving different agents where decisions depend on others can be considered as a network game [21]. In the same way, if these agents manage a common goal, the problem can be described as a bargaining game, with a target that modifies their actions based on a disagreement between them [22,23,24]. Theoretically, the point of disagreement is defined as the minimum satisfaction expected for the negotiation. In practical implementation, cooperative cruise strategies can present several drawbacks, and only a few authors have validated these developments in highway vehicle platoons [25]. The main practical tests of these approaches are usually validated in dynamic emulation models or robot networks [26,27].
In this work, we aim to address the challenges of coordinating and synchronizing a platoon of vehicles using cooperative cruise control techniques. By applying game theory concepts and optimization algorithms, we seek to achieve better traffic management, reduced fuel consumption, and improved vehicular performance in real-world scenarios. Throughout the article, we discuss in detail the methods used to formulate and solve the cooperative bargaining game, the dynamics of the platoon vehicles, and the control actions implemented to regulate speed and maintain inter-vehicle distances. The main contribution of this paper is threefold. First, a networked vehicle control problem is formulated from a predictive perspective using a bargaining methodology, which, as far as we know, has not been done before. In recent works, only some results have been based on the theory of bargaining games as a cooperative control problem in distributed systems, without considering vehicle platoon emulation [28]. Other works have emulated multiagent systems without applying predictive control techniques or game theory to vehicles [29], including recently in [30]. The identified research gap lies at the application level of cooperative cruise control, where the control strategy must solve a distributed optimization problem according to the states of the agents, the disagreement point, and the cost functions predefined for energy expenditure in simulation and emulation. A distributed bargaining methodology is a technique that allows control algorithms to be managed efficiently without increasing the computational expense incurred by the network. Second, bargaining game theory is incorporated into the cooperative control problem in simulation. This development allows it to be compared with centralized and decentralized predictive control algorithms for symmetric and non-symmetric cases, observing the response in comparison with conventional methods and validating its efficiency. Finally, the procedures are validated in emulation with an HIL system for application. For the initial testing of vehicular applications, the emulation of dynamic systems in HIL enables the validation of real-time methods in situations where the models are unavailable (e.g., an autonomous vehicle network).
The rest of the paper is organized as follows: Section 2 presents the cooperative cruise control and bargaining games as a tool for the background of Distributed Model Predictive Control (DMPC). Section 3 contextualizes the bargaining problem to solve the cooperative cruise problem. In Section 4, we show the study case application and its simulation. Section 5 shows the emulation of the system in HIL, and finally, Section 6 sets out the conclusions of the work.

2. Background

This section presents the basic foundations of the cooperative cruise control theory and DMPC as a bargaining game.

2.1. Cooperative Cruise Control

The cooperative cruise control problem has been extensively studied in recent years. Recent developments have focused on wireless Vehicle-to-Vehicle (V2V) communication, which has grown commercially. This wireless communication led to the definition of the Grand Cooperative Driving Challenge (GCDC) to manage a platoon of vehicles equipped with this technology. The main objective of cooperative cruise control is to synchronize the vehicles on the road with the traffic profile given by an established reference. The traffic profile is commonly constituted by the inter-vehicle distance and the speed on a highway, reducing the transit time on the highway and the fuel consumption. Communication is usually established with the predecessor vehicle, as in the string-stability setting.
A simple description of the CACC setting (for longitudinal dynamics) is considered. In this case, each vehicle is modeled through its physical and mechanical parameters. The dynamics proposed in this case are linear, as follows:
$$\dot{p}(t) = v(t), \qquad \dot{v}(t) = a_1 p(t) + a_2 v(t) + b\left(\tau(t) + f(p,v)\right), \tag{1}$$
where the variables are the speed $v$ and the position $p$, respectively. The parameters $a_1$ and $a_2$ are transmission parameters, $b$ is related to the transmission efficiency, and $\tau$ has the dimension of acceleration, or of force when multiplied by the vehicle mass. The term $f(p,v)$ is associated with an input uncertainty. These parameters can be included in the dynamics owing to the approximation by an invertible, stable, time-invariant model without the presence of uncertainties [31]. This approach considerably reduces the complexity of the model without losing performance, and it has been used as an approximation in previous works to validate this type of problem [32].
The cooperative cruise control system is based on a simple setting for longitudinal dynamics, where each vehicle is modeled with its physical and mechanical parameters. The system dynamics are represented by a linear affine model, which includes transmission parameters, transmission efficiency, and acceleration. The controller’s goal is to regulate the speed of each vehicle and maintain a safe distance from its neighbors. Wireless communication enables real-time decision making and control actions based on network information and traffic profiles. The cooperative platoon formation exemplified in Figure 1 shows a group of vehicles synchronously positioned on the road while maintaining a predetermined inter-vehicle distance. This strategy aims to optimize traffic flow, minimize gaps between vehicles, and improve fuel efficiency through coordinated speed regulation. In the graphic representation of the vehicle platoon in Figure 1, the distance between two vehicles is defined as $d(t) = p_1(t) - p_2(t)$, that is, the difference between the positions of vehicles 1 and 2, denoted $p_1$ and $p_2$, respectively.
Let the states of the system be $x(t) = [d(t), v(t)]^\top$, where the distance is the position minus a predefined spacing, and let the control input be $u(t) = \tau(t)$ with a possible input-matched uncertainty $f(p,v)$. The system can then be written as a linear affine continuous model of the form
$$\dot{x}(t) = A x(t) + b\left(u(t) + f(x)\right), \tag{2}$$
with
$$A = \begin{bmatrix} 0 & 1 \\ a_1 & a_2 \end{bmatrix}, \qquad b = \begin{bmatrix} 0 \\ b \end{bmatrix}. \tag{3}$$
This model considers the acceleration of neighboring vehicles at the $k$-th time instant. For the implementation of the control prediction, it is necessary to use the discrete dynamics of the system. Therefore, (2) is modified as
$$x(k+1) = A_k x(k) + b_k\left(u(k) + f(x_k)\right), \tag{4}$$
with $A_k = e^{AT}$ and $b_k = A^{-1}(A_k - I)\,b$, where $T$ is the sampling period.
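To make the discretization step concrete, the following minimal Python sketch (an illustrative example of ours; the paper's own experiments use MATLAB) builds $A_k$ and $b_k$ from the continuous pair in (3) via the matrix exponential. The coefficient values correspond to agent $A_0$ in Table 1, and the 0.1 s sample time matches the one used later in the HIL section.

```python
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, b, T):
    """Zero-order-hold discretization: A_k = e^{AT}, b_k = A^{-1}(A_k - I) b."""
    Ak = expm(A * T)
    bk = np.linalg.solve(A, (Ak - np.eye(A.shape[0])) @ b)
    return Ak, bk

# Coefficients of agent A_0 in Table 1: a_1 = -0.25, a_2 = -0.5, b = 1
A = np.array([[0.0, 1.0],
              [-0.25, -0.5]])
b = np.array([[0.0],
              [1.0]])

Ak, bk = discretize_zoh(A, b, T=0.1)  # 0.1 s sample time, as in the HIL section
print(Ak)
print(bk)
```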
Note that onboard sensors measure the distance and speed used by the controller, and the parameters of each vehicle can differ in the heterogeneous case. Likewise, it is important to consider that, under the approximations of cooperative cruise control theory, the considered model relates the position, inter-vehicular distance, and speed of the vehicles in the network. In this particular application, the input for each agent is a constant acceleration over each time instant. To ensure the robustness of the algorithm, accurate state estimation and localization are vital. State estimation techniques, such as the consensus Kalman filter for sideslip angle estimation, the fusion of IMU and GNSS with heading alignment, and automated vehicle sideslip angle estimation considering signal measurement characteristics, play a crucial role in enhancing the algorithm’s robustness against localization and velocity errors. By incorporating advanced state estimation methods, the cooperative cruise control system can better handle uncertainties and errors in sensor measurements, leading to more reliable platoon formation and control actions. Additionally, integrating fault detection and correction methods would further enhance the algorithm’s safety and dependability.
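As a hedged illustration of the state estimation mentioned above (the paper does not specify a particular filter implementation), a standard linear Kalman filter over the discrete model (4) could be sketched as follows; the discrete matrices, noise covariances, and full-state measurement matrix are assumed values chosen only for the example.

```python
import numpy as np

def kalman_step(x_est, P, u, z, Ak, bk, C, Q, R):
    """One predict/update cycle of a linear Kalman filter for the discrete
    model x(k+1) = Ak x(k) + bk u(k), with measurement z(k) = C x(k) + noise."""
    # Prediction using the nominal discrete dynamics
    x_pred = Ak @ x_est + (bk * u).flatten()
    P_pred = Ak @ P @ Ak.T + Q
    # Correction with the distance/speed measurement
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(x_est.size) - K @ C) @ P_pred
    return x_new, P_new

# Illustrative usage with assumed noise covariances and a full-state measurement
Ak = np.array([[1.0, 0.1], [-0.02, 0.95]])
bk = np.array([[0.0], [0.1]])
C = np.eye(2)
x, P = np.array([2.0, 1.0]), np.eye(2)
x, P = kalman_step(x, P, u=0.3, z=np.array([1.9, 1.05]),
                   Ak=Ak, bk=bk, C=C, Q=0.01 * np.eye(2), R=0.05 * np.eye(2))
```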

2.2. Distributed Model Predictive Control with Bargaining Games

This section introduces the basic concepts of MPC from a bargaining game perspective. The control problem is contextualized as the negotiation method for solving a distributed optimization problem. The block diagram representing the framework is presented in Figure 2, where it is observed that each vehicle, through its dynamics, enters a bargaining algorithm along a prediction horizon. In this cooperative cruise control process, a network of vehicles equipped with V2V communication technology communicates wirelessly. Each vehicle in the network is considered an agent and possesses a discrete dynamic model, along with its control action $u_i^*(k)$ and a disagreement point for negotiation $\beta_i$. This information is fed into a negotiation process that combines local cost functions with a global objective. The core of the negotiation lies in solving an optimization problem based on predictive control, aiming to determine the optimal control actions for each vehicle at every instant of time. During the negotiation process, the vehicles exchange their states and control actions, seeking an agreement that aligns with Nash’s notion of achieving the best collective benefit. In case a consensus cannot be reached, the disagreement point comes into play, defining the minimum satisfaction expected for the negotiation. The algorithm’s distributed nature allows for efficient decision making without the need for iterative solutions, thereby reducing computational overhead and making it highly suitable for applications involving vehicle platoons. The convergence of this cooperative control approach enhances synchronization, minimizes travel time, and reduces fuel consumption, leading to improved efficiency and performance of the overall vehicular network. This block diagram is used for non-symmetric bargaining cases. For symmetric cases, the characteristics of each vehicle are the same. All cases are subject to regulatory aspects or physical restrictions of the vehicles.
Definition 1.
(Bargaining Game). A bargaining game is mathematically defined as the tuple $G = (N, \Lambda_i, \psi_i)$, $i \in N$.
In this case, $N$ is the number of vehicles involved in the process, $\Lambda_i$ is the decision space of the control law, and $\psi_i$ is defined as the local cost function of each vehicle. It is assumed that the vehicles are in a negotiating position to achieve a common objective, following Nash’s notions [33]. In the game, if it is impossible to reach an agreement, the disagreement point is used for the bargain between vehicles [34].
Assuming the dynamics of each vehicle as in (2) and with its discrete representation (4), the particular objective is to achieve energy-level optimization in each vehicle’s operation. For this optimization problem, a locally distributed cost function is defined as
$$L_i(\tilde{x}_i(k), \tilde{u}_i(k)) = \sum_{t=0}^{N_p} \psi_i(\tilde{x}_i(k), \tilde{u}_i(k)), \tag{5}$$
with $\tilde{x}_i(k) = [x_i(k), \ldots, x_i(k+N_p)]$ as the representation of the states of vehicle $i$ built along the prediction horizon, and likewise $\tilde{u}_i(k) = [u_i(k), \ldots, u_i(k+N_u), \ldots, u_i(k+N_p)]$ considering the control horizon $N_u$ with $N_u \le N_p$. Each cost function $\psi_i$ is defined as
$$\psi_i(\tilde{u}(k)) = \tilde{u}(k)^\top Q_{uu}^i \tilde{u}(k) + 2\, x_i(k)^\top Q_{xu}^i \tilde{u}(k) + x_i(k)^\top Q_{xx}^i x_i(k), \tag{6}$$
which is positive definite and convex, where $Q_{uu}^i$, $Q_{xu}^i$, and $Q_{xx}^i$ are positive definite weighting matrices, i.e., $Q_{uu}^i \succ 0$. Since this cost function takes a conventional quadratic form, it is not the main contribution of the paper, which lies instead in its use within the predictive control algorithm.
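As a minimal sketch of how the quadratic local cost (6) can be evaluated over a stacked control sequence, assuming illustrative weighting matrices and horizon length (these values are not taken from the paper):

```python
import numpy as np

def local_cost(u_seq, x_i, Quu, Qxu, Qxx):
    """Quadratic local cost (6): u' Quu u + 2 x' Qxu u + x' Qxx x."""
    u = np.asarray(u_seq, dtype=float)
    return float(u @ Quu @ u + 2.0 * x_i @ Qxu @ u + x_i @ Qxx @ x_i)

Np = 5                       # prediction horizon (illustrative value)
Quu = np.eye(Np)             # positive definite weighting matrices (assumed)
Qxu = np.zeros((2, Np))
Qxx = np.eye(2)
x0 = np.array([2.0, 1.0])    # initial state of agent A_0 in Table 1
print(local_cost(0.1 * np.ones(Np), x0, Quu, Qxu, Qxx))
```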
For the control problem formulation, it is important to define a decision space $\Lambda = \Lambda_1 \times \dots \times \Lambda_N$ for the whole system according to the physical operating conditions. An MPC problem with communication between agents can be interpreted as a negotiation and hence cast as a bargaining game. For the analysis and solution of this type of game, Nash proposed an axiomatic methodology [35], which has been used in continuous and static systems [34].
A continuous representation of a bargaining game is the tuple $(S, \beta_d)$, where $S$ is the game decision space, a non-empty closed subset of $\mathbb{R}^N$, and $\beta_d \in \operatorname{int}(S)$ is the disagreement point of the interaction. For implementation purposes, it is important to consider the discrete dynamics of the game, which is then defined as $\{(\Gamma(k), \beta_d(k))\}_{k=0}^{\infty}$, with $\Gamma(k)$ a closed non-empty subset of $\mathbb{R}^N$ that contains the profit function values of each vehicle. The values of the states $x_i$, the set $\Gamma$, and the disagreement point $\beta$ vary dynamically.
The value of the disagreement point evolves as
$$\beta_i(k+1) = \begin{cases} \beta_i(k) - \mu\left(\beta_i(k) - \psi_i(\tilde{u}(k))\right) & \text{if } \beta_i(k) \ge \psi_i(\tilde{u}(k)), \\ \beta_i(k) + \left(\psi_i(\tilde{u}(k)) - \beta_i(k)\right) & \text{if } \beta_i(k) < \psi_i(\tilde{u}(k)), \end{cases} \tag{7}$$
with $0 \le \mu \le 1$ an adjustment constant according to the axioms of the negotiation process stated in the work of John Nash [36]. In this case, if a vehicle decides to cooperate on the road, the disagreement point is reduced by a factor $\mu\left(\beta_i(k) - \psi_i(\tilde{u}(k))\right)$; otherwise, it is increased by $\psi_i(\tilde{u}(k)) - \beta_i(k)$.
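A minimal sketch of the disagreement-point update (7), with $\mu$ chosen arbitrarily for illustration:

```python
def update_disagreement(beta_i, psi_i, mu=0.5):
    """Disagreement-point update (7): relax beta_i toward the attained cost when
    bargaining succeeds (beta_i >= psi_i), otherwise raise it to that cost."""
    if beta_i >= psi_i:
        return beta_i - mu * (beta_i - psi_i)
    return beta_i + (psi_i - beta_i)  # i.e., beta_i becomes psi_i

# Example: a cooperating vehicle with beta_i = 10 and current cost psi_i = 6
print(update_disagreement(10.0, 6.0))  # 8.0 for mu = 0.5
```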
Definition 2.
(Utopia Point). $\zeta_i$ is defined as the utopia point available for vehicle $i$ as $\zeta_i(\Gamma) = \max\{\psi_i : (\psi_i)_{i \in N} \in \Gamma\}$, $i \in N$.
In this case, $\Theta$ is defined as the union of the cost functions $\psi_i$ of the game, so that the discrete game can be interpreted as $\{\Theta, \beta(k)\}_{k=0}^{\infty}$. Notice that the analysis of a bargaining game can be carried out symmetrically for a game with similar characteristics between its players, or non-symmetrically for a game where these characteristics differ, e.g., synchronization of oscillator systems with homogeneous characteristics versus control of mechanical systems with heterogeneous physical characteristics. For the solution of a bargaining game, a non-symmetric centralized scenario is proposed based on [37] as
$$\max_{\tilde{u}(k)} \; \sum_{i=1}^{N} \lambda_i \log\left(\beta_i(k) - \psi_i(\tilde{u}_i(k))\right), \quad \text{s.t.} \quad \beta_i(k) > \psi_i(\tilde{u}(k)), \;\; \tilde{u}(k) \in \Lambda, \tag{8}$$
with $\lambda_i$ a weight variable, usually defined as $\lambda_i = \frac{1}{N}$, where $N$ is the number of vehicles involved in the process. However, for a distributed control analysis, the solution of the optimization problem is proposed as
$$\max_{\tilde{u}_i(k)} \; \sum_{r=1}^{N} \log\left(\beta_r(k) - \kappa_r(\tilde{u}_i(k), \tilde{u}_{-i}(k))\right), \quad \text{s.t.} \quad \beta_r(k) > \kappa_r(\tilde{u}_i(k), \tilde{u}_{-i}(k)), \;\; \tilde{u}_i(k) \in \Lambda_i, \tag{9}$$
with $\kappa_r(\tilde{u}_i(k), \tilde{u}_{-i}(k))$ a distributed cost function, usually defined as quadratic, and $\tilde{u}_{-i}(k)$ the set of control actions of the remaining vehicles, excluding agent $i$.
The optimization problem (9) differs from problem (8) in that it considers $\tilde{u}_{-i}(k)$ fixed and optimizes only as a function of $\tilde{u}_i(k)$; this means that the optimization does not involve the decisions of the entire network cooperatively. The solution to this problem then arises as a negotiation model that depends on the context given by cooperative cruise control theory. This methodology does not use iterative solutions, as is common in other distributed optimization methods [38], which reduces the computational cost in operation, with great benefits in high-impact applications such as vehicle platoons. The bargaining methodology solves a distributed control problem by solving only one local optimization with the information collected from the neighbors and achieving an agreement based on the Nash bargaining concept through the defined disagreement point. In summary, the objective is to apply a distributed control methodology to a vehicle network on a highway. Based on the communication of their states, a negotiation can be interpreted as the solution to optimization problem (9). It is important to note that the communication and the transmitted parameters need to be available at all times; a lack of information can hinder the ability to reach an optimal or efficient agreement.
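The following sketch illustrates how the local problem (9) could be solved numerically for one agent with an off-the-shelf solver, holding the neighbors' sequences fixed; the quadratic cost functions, horizon length, and bounds are placeholders of ours, not the paper's actual implementation (which uses MATLAB's fmincon, as described in Section 4).

```python
import numpy as np
from scipy.optimize import minimize

def solve_local_bargain(u_i0, u_others, beta, kappa_fns, u_bounds):
    """Solve problem (9) for one agent: maximize sum_r log(beta_r - kappa_r(u_i, u_-i))
    over u_i, with the neighbors' sequences u_-i held fixed."""
    eps = 1e-9

    def neg_objective(u_i):
        vals = np.array([beta[r] - kappa_fns[r](u_i, u_others) for r in range(len(beta))])
        if np.any(vals <= eps):          # outside the feasible region of (9)
            return 1e12
        return -np.sum(np.log(vals))

    cons = [{"type": "ineq",
             "fun": (lambda u_i, r=r: beta[r] - kappa_fns[r](u_i, u_others) - eps)}
            for r in range(len(beta))]

    res = minimize(neg_objective, u_i0, method="SLSQP", bounds=u_bounds, constraints=cons)
    return res.x

# Illustrative two-vehicle example with quadratic costs (assumed, not from the paper)
Np = 3
kappas = [lambda u, uo: float(u @ u + 0.1 * uo @ uo),
          lambda u, uo: float(0.5 * u @ u + uo @ uo)]
u_star = solve_local_bargain(np.zeros(Np), np.ones(Np),
                             beta=[5.0, 5.0], kappa_fns=kappas,
                             u_bounds=[(-3.0, 3.0)] * Np)
print(u_star)
```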

3. Cooperative Cruise Control as a Bargaining Game

Considering the definition of a DMPC as a bargaining game presented in Section 2, the cooperative cruise control problem is contextualized. In this scenario, the vehicles on a highway synchronize their dynamics from a reference model in vehicle distance and speed. The global cost function of the DMPC must be made up of two terms: one term associated with the tracking error in the distance between vehicles and the other term with the speed of each one during its transit on the road.
The cooperative cruise control model is taken from (4), where the matrices $A_k$, $b_k$ are obtained from the dynamic model and (3). In this case, the output is defined as each vehicle’s speed, and the control action $u(k)$ is the vehicle’s acceleration. An operative constraint of 32.6 m/s is defined according to comfort parameters [39].
Initially considering each vehicle independently, the local MPC problem is formulated as
$$\begin{aligned} \min_{\tilde{u}(k)} \;\; & J(\tilde{u}, x(k)) \\ \text{s.t.} \;\; & x_i(k+N_p+1) = \bar{A}_i x_i(k+1) + \bar{b}_i u_i(k+1), \\ & y_i(k+N_p+1) = C_i x_i(k+1) + d_i u_i(k+1), \\ & x_i \in X, \quad u_i \in \Lambda, \end{aligned} \tag{10}$$
with $N_u < \alpha < N_p - 1$; $\Lambda$ is the decision space of the control law, and $\bar{A}_i$ and $\bar{b}_i$ are the state matrix and vector resulting from the prediction along $N_p$. The characteristics of the software and hardware determine any possible delay in the communication for control operations.
Let $\kappa_i(\tilde{u}_i(k), \tilde{u}_{-i}(k))$ be the global cost function of each vehicle, defined as
$$\kappa_i(\tilde{u}_i(k), \tilde{u}_{-i}(k)) = \lambda \left| \tilde{v}_r(k) - \tilde{y}_v(\tilde{u}_i(k), \tilde{u}_{-i}(k)) \right| + [\tilde{u}_i(k), \tilde{u}_{-i}(k)]^\top \bar{H}_i\, [\tilde{u}_i(k), \tilde{u}_{-i}(k)] + 2 \bar{F}_i\, [\tilde{u}_i(k), \tilde{u}_{-i}(k)], \tag{11}$$
with $\bar{H}_i$ and $\bar{F}_i$ matrices obtained from $Q_{uu}$ and $Q_{xu}$, respectively. The input and state restrictions are time-independent and may differ for each vehicle. Therefore, the bargaining game for cooperative cruise control is defined as $G_{CCC} = \{N, \{\tilde{u}_i(k), \tilde{u}_{-i}(k)\}, \Lambda_i\}$, $i \in N$. Each vehicle has the same objective at the control level: minimize the synchronization error so as to maintain the inter-vehicle distance and the speed in a steady state. The solution of this game with discrete characteristics of the form $\{\Theta, \beta(k)\}_{k=0}^{\infty}$ is given by Algorithm 1.
Algorithm 1: Distributed bargaining algorithm
Algorithm 1 methodologically states that each vehicle sends its dynamic information to its neighbors as long as the synchronization error $e_{ij}$ is greater than a given constant $\Delta$. The algorithm begins by initializing each vehicle’s control action and disagreement point. It then enters a loop where, at each iteration, the vehicles exchange their states and current disagreement points with the others in the network. Using this shared information, each vehicle independently solves an optimization problem (as described in Problem (9)), seeking to optimize its local cost function while considering the others’ actions and decisions. Upon solving the optimization problem, each vehicle applies the first control action of the solution and communicates its modification of the disagreement point based on the negotiation process. The loop continues until the error between the vehicles’ control actions falls below a defined threshold, indicating convergence. Through this distributed approach, the algorithm efficiently finds an agreement on the optimal control actions, fostering synchronization and enhancing overall performance in cooperative cruise control scenarios. With this information, it is possible to solve the optimization problem (9) in each agent and subsequently modify the values of the disagreement point until convergence of the network synchronization is achieved. Finally, the modification of the disagreement point is sent back to the neighbors. This process is one of the main contributions of this work, since it summarizes the control methodology used in simulation and, beyond that, in the implementation of a cooperative cruise control problem. A sketch of this loop is given below.
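Since Algorithm 1 appears only as an image in the original layout, the following Python-style sketch restates the loop described in the paragraph above under our own naming conventions (solve_local, step, update_beta, cost, and neighbors are assumed agent callbacks, in the spirit of the helpers sketched in Section 2); it is a paraphrase, not the authors' exact pseudocode.

```python
import numpy as np

def distributed_bargaining(agents, x0, beta0, delta=1e-3, max_steps=200):
    """Paraphrase of Algorithm 1: each agent exchanges states and disagreement
    points, solves its local problem (9), applies the first control move, and
    updates its disagreement point until the synchronization error is below delta."""
    x, beta = dict(x0), dict(beta0)
    u = {i: np.zeros(agents[i]["Np"]) for i in agents}

    for _ in range(max_steps):
        # 1. Broadcast states, control sequences, and disagreement points to neighbors
        shared = {i: (x[i], u[i], beta[i]) for i in agents}

        for i, ag in agents.items():
            # 2. Solve the local bargaining problem (9) with the neighbors' data fixed
            u[i] = ag["solve_local"](u[i], shared, beta)
            # 3. Apply only the first control action (receding-horizon step)
            x[i] = ag["step"](x[i], u[i][0])
            # 4. Update the disagreement point according to (7)
            beta[i] = ag["update_beta"](beta[i], ag["cost"](u[i], shared))

        # 5. Stop when every pairwise synchronization error e_ij is below delta
        errs = [np.linalg.norm(x[i] - x[j]) for i in agents for j in agents[i]["neighbors"]]
        if errs and max(errs) < delta:
            break
    return x, u, beta
```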
The most important features of Algorithm 1 are summarized as follows:
  • Distributed and Decentralized Approach: Our algorithm adopts a distributed approach, allowing each vehicle in the platoon to communicate wirelessly through vehicle-to-vehicle (V2V) communication. This decentralized nature enables real-time decision making and control actions without the need for centralized coordination or external infrastructure. By distributing the control process across the platoon, the algorithm enhances scalability, flexibility, and adaptability to different traffic conditions and road scenarios.
  • Optimization and Predictive Control: The algorithm formulates the cooperative cruise control problem as an optimization problem based on predictive control. By considering local cost functions and a global cost function, the algorithm can efficiently optimize the control actions for each vehicle at every instant of time. This optimization approach ensures that the platoon maintains a synchronized formation while minimizing inter-vehicle distances and controlling the vehicles’ speeds to match the desired traffic profile. As a result, traffic congestion is reduced, and fuel consumption is optimized, leading to significant energy savings and reduced greenhouse gas emissions.
  • Robustness and Adaptability: The algorithm incorporates considerations for uncertainties and input fluctuations by including an input uncertainty term, denoted as f ( p , v ) . This robustness ensures that the cooperative platoon remains stable and functional even in the presence of external disturbances or unexpected events. Moreover, the algorithm can handle heterogeneous cases, where different vehicles may have distinct parameters and mechanical characteristics, making it versatile for real-world applications.
  • Wireless Communication and Connectivity: One of the algorithm’s strengths is its reliance on wireless V2V communication, which allows seamless information exchange among vehicles in the platoon. This real-time connectivity ensures quick response times and coordinated actions, improving safety, avoiding collisions, and enhancing overall traffic management.
The bargaining game result is mathematically defined as the tuple $\xi(\Theta, \beta(k)) = \{\psi_1, \psi_2, \ldots, \psi_N\}$, composed of the profit of each vehicle. If there is no cooperation on the highway, the disagreement point replaces the corresponding value in the tuple.
Proposition 1.
The proposed solution $\xi(\Theta, \beta(k))$ of a discrete bargaining game $\{(\Theta, \beta(k))\}_{k=0}^{\infty}$ at time step $k$ is unique and depends on optimization problem (10) and on $\Theta$, which must be convex.
Proof. 
It follows by [28]. □
Notice that we consider two cases: one in which the vehicles have similar characteristics, corresponding to a symmetric game, and another in which the vehicles have dissimilar characteristics, corresponding to a non-symmetric game. It is important to note that non-symmetric bargaining games are the ones most commonly found in real-life applications. For the solution of the bargaining game, the following lemma is proposed, based on Algorithm 1.
Lemma 1.
Consider a cooperative cruise control problem as a bargaining game $\{(\Theta, \beta(k))\}_{k=0}^{\infty}$; then, the solution $\xi(\Theta, \beta(k))$ is the Nash bargaining solution at time $k$ computed by Algorithm 1.
Proof. 
According to the definition of the game and satisfying its axiomatic analysis ([34], Section 2), the solution of the cooperative cruise problem as a game $\{(\Theta, \beta(k))\}_{k=0}^{\infty}$ is the Nash bargaining solution for every time step $k$ obtained through the negotiation problem. □
Explicitly, if required, this theory allows a complete structuring of the network through the transmission of the utility functions or system inputs to benefit the solution of each local optimization problem. These algorithms can make decisions separately, so their implementation does not need an iterative process. That decision-making process considerably reduces the computational burden that Lagrange multiplier-based solution methods can present.

4. Simulation Results

For the aforementioned methodology, an application field is proposed based on the problem of an autonomous network of vehicles, which is of increasing relevance nowadays, where each vehicle must follow the same patterns (position and speed). The most well-known technique for this problem is cooperative adaptive cruise control, an extension of adaptive cruise control, which works at the platoon level with onboard sensors. In this case, each agent is modeled as a linear second-order system such as (4). For the experiment, the leading vehicle defines an acceleration profile that all agents must follow while keeping a fixed distance between each other. In terms of synchronization, this profile means $x_i - x_j \to 0$. It is important to highlight four aspects of this methodology: the vehicle dynamics, the distributed controller, the transmitted information, and the communication graph topology [40].
A numerical simulation is performed to validate the proposed control laws. Figure 3 shows the simulation digraph, with agent 0 acting as the leader node. The formation control idea in the platoon starts from the graph representing the communications; the intercommunicating vehicles track a speed and an inter-vehicular distance set by the reference vehicle.
Table 1 presents the simulation parameters used, noting that they are used only for simulation, not for control design. Table 1 establishes the dynamics and control of each vehicle in the network. The player coefficients represent the transmission parameters ($a_1$ and $a_2$) and the transmission efficiency ($b$), which are essential for modeling the vehicle’s behavior. Additionally, the table includes the initial conditions for each vehicle, consisting of the speed ($v$) and inter-vehicle distance ($d_i$) at the beginning of the cooperative cruise control process. These initial conditions serve as the starting point for the distributed bargaining algorithm, where each vehicle uses this information to optimize its control action and synchronize with the others in the network. The coefficients and initial conditions in Table 1 are inputs for solving the optimization problem, enabling vehicles to achieve energy-level optimization and ensure smooth and efficient operation during cooperative cruise control. Both simulation and implementation show the results of symmetric and non-symmetric games. MATLAB R2018a and its fmincon optimization solver are used for the optimization problems. The cost functions and decision variables are packed along the prediction horizon using Kronecker-like structures for all the procedures; a sketch of this construction is given below. External parameters of communication or interaction between the agents are not considered for simulation purposes.
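The "Kronecker-like" packing along the prediction horizon can be illustrated as follows; this is one common construction of the stacked prediction matrices for the discrete model (4) and of block-diagonal horizon weights, under our own naming, and not necessarily the exact structure used by the authors.

```python
import numpy as np

def prediction_matrices(Ak, bk, Np):
    """Stack x(k+1) = Ak x + bk u over Np steps so that
    x_tilde = A_bar x(k) + B_bar u_tilde (a standard MPC construction)."""
    n, m = Ak.shape[0], bk.shape[1]
    A_bar = np.vstack([np.linalg.matrix_power(Ak, j + 1) for j in range(Np)])
    B_bar = np.zeros((Np * n, Np * m))
    for row in range(Np):
        for col in range(row + 1):
            B_bar[row * n:(row + 1) * n, col * m:(col + 1) * m] = \
                np.linalg.matrix_power(Ak, row - col) @ bk
    return A_bar, B_bar

# Horizon weights built with Kronecker products, e.g. Quu = kron(I_Np, R)
Np, R = 5, np.array([[1.0]])
Quu = np.kron(np.eye(Np), R)
A_bar, B_bar = prediction_matrices(np.array([[1.0, 0.1], [-0.02, 0.95]]),
                                   np.array([[0.0], [0.1]]), Np)
```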

4.1. Symmetric Game

For the simulation of the system, symmetric and non-symmetric cases are presented. In the symmetric case, the global cost function is defined as in (11), grouped according to the theory, and the local cost function is defined as in (6).
The following figures show the agents’ response when the solution to the distributed optimization problem is obtained. For simulation purposes, the dynamics of the reference agent are defined with the parameters $a_1 = 1$ and $a_2 = b_1 = 1$. Figure 4a shows the network response with the bargaining model, with a convergence time of approximately 60 s, a low synchronization rate for this type of application.
We compare a centralized and a decentralized model in a predictive control problem, as reflected in Figure 4a–c. Generally, a centralized problem is solved for each agent without any information being sent. In the decentralized case, the information is sent and, with it, a single optimization problem per agent is solved. The figures demonstrate that both the centralized and decentralized systems exhibit faster response times. Additionally, synchronization is achieved through the distributed communication of information, thereby optimizing the processes within the network. It is important to highlight that the distributed controller achieves synchronization in 40 s, unlike the decentralized and centralized controllers with a time of 50 s; this is justified by the complexity of the system. These factors may require additional time for data analysis and secure decision making to ensure optimal performance.
Similarly, Figure 5 and Figure 6 show the evolution of the cost function and of the control action values, respectively, where it is evident that they achieve synchronization based on the Nash equilibrium, even when the value is increasing. Although a variation in the control signal is observed, derived from the analysis at each instant of the time horizon, these variations are very short; given their magnitude at the application level, they tend to be imperceptible.

4.2. Non-Symmetric Game

The same case of cooperative cruise control is used for the game with non-symmetric characteristics. However, the parameters of each agent are taken from Table 1. The cost function used is the same as in the symmetric case (6). Figure 7a–c show the output response of each system under the negotiation model compared with a centralized and decentralized predictive control methodology, where convergence is observed equally when handling the information in a distributed way, with a similar qualitative response in comparison with symmetric cases, validating the behavior in both cases.
In this simulation, vehicles with heterogeneous characteristics engage in negotiation to achieve a common objective, such as optimizing fuel consumption or minimizing travel time. The non-symmetric nature of the game implies that each vehicle has distinct parameters and constraints, resulting in varied decision-making processes. Figure 7a–c illustrate how the distributed bargaining algorithm facilitates negotiation among the vehicles, with some vehicles potentially prioritizing their individual benefits over the collective objective. As a consequence, the simulation outcome reveals fluctuations in the distributed response, showing slower convergence and agreement compared to the symmetric case. The visualization of the non-symmetric game’s results enables a better understanding of the challenges and complexities involved in cooperative cruise control when dealing with vehicles with diverse characteristics, and it can help in refining strategies to achieve more balanced and efficient cooperative behavior.
In this case, the disagreement point is shown, since its value may vary between symmetric and non-symmetric games (even in symmetric cases with different initial conditions). The response of the disagreement point is shown in Figure 8, where the Nash agreement achieved through the consensus of this value is evidenced by the regulation of its value.
Finally, the cost function response is presented in Figure 9, where it is possible to validate the Nash equilibrium through the convergence of these values in all agents, as well as the application of the control action for each one in Figure 10. An oscillatory response is observed due to the fact that when working with a network of vehicles, the interaction between them can be complex. Among the strategies to combat this complexity is the simplification of the models, as in (1). The actions of one vehicle can affect other nearby vehicles, which can lead to non-linear effects and complicate the predictive control response. The negotiation process becomes more intricate as vehicles with different priorities and preferences seek a compromise between individual objectives and the platoon’s global objective. Additionally, wireless communication delays and sensor noise can influence the perceived distance to neighboring vehicles, leading to temporary control action variations. Furthermore, the algorithm used for distributed bargaining may require iterations to converge, resulting in fluctuations during this convergence process.

5. Implementation Results

Basic experiments are performed to apply the developed algorithms with real-time simulation in HIL. Validation is made through the temporal response of the developed algorithms and physical considerations. We use National Instruments (Austin, TX, USA) NI CompactRIO controllers connected through an Ethernet network. Four controllers of two types are used for the development: one NI 9045 CompactRIO controller and three NI 9063 CompactRIO controllers. A photo of the modules inside the DESYNC laboratory at Universidad Nacional de Colombia is shown in Figure 11. Similarly, Figure 12 shows how each controller is placed within each rack: two NI 9063 controllers and their two power supplies are observed in one cabinet, while the remaining NI 9063 and the NI 9045 controller are shown in the other cabinet next to their power supplies and the Ethernet communication switch. The communication graph is defined in Figure 13.
Similarly, for the dynamic models implemented, the same dynamics (1) are used with a Tustin discretization of the system matrices and a sample time of 0.1 s. The parameters for this case are listed in Table 2.
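A minimal sketch of the Tustin (bilinear) discretization of (1) with the 0.1 s sample time, using the coefficients of agent $A_0$ from Table 2; the emulation itself runs in LabVIEW/Simulink, so this Python snippet is only illustrative.

```python
import numpy as np
from scipy.signal import cont2discrete

# Coefficients of agent A_0 in Table 2: a_1 = -0.25, a_2 = -0.5, b = 1
A = np.array([[0.0, 1.0], [-0.25, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])   # output: vehicle speed
D = np.array([[0.0]])

# Tustin / bilinear transform with the 0.1 s sample time used in the HIL setup
Ad, Bd, Cd, Dd, dt = cont2discrete((A, B, C, D), dt=0.1, method="bilinear")
print(Ad)
print(Bd)
```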
For the implementation, communication is established between LabVIEW and Simulink, where the dynamics of the controllers are emulated. For the response of the symmetric game, the same parameters as in the simulation case are used ($a_1 = 1$ and $a_2 = b_1 = 1$). Figure 14 shows the response of the controller implemented in a symmetric game. In emulation, the system initially presents some fluctuations, but the response also achieves an adequate bargain. In all these cases, the fluctuations arise because the dynamics and control are embedded in each module separately and require communication between them, so the response does not maintain the same synchronization interval as in simulation, where the communication has no delays.
Similarly, the response of the agents’ cost function is observed in Figure 15, which maintains similarity with the response of the simulation case and also reaches a correct bargain, as does the application of the control action in Figure 16. The response of the signals takes high values while the communication is synchronizing, after which they synchronize their values.
The response of the controller for non-symmetric cases is also validated, according to the parameters of Table 2. Figure 17 shows the response of the agents’ output in this case, where it is evident that, as in the previous cases, the system achieves a correct bargain in a distributed scenario. An oscillation in the response can likewise be observed, especially in some agents, because they are the agents farthest from the reference in the communication graph; however, as the transient passes, synchronization is achieved.
Figure 18, Figure 19 and Figure 20 show the response of the point of disagreement, the cost function, and the control action in the non-symmetrical case of emulation; the response, as in the previous cases, shows a fluctuation in their behavior at the first iterations, followed by a correct bargain in both cases.
In the same way, it is important to validate cases where a correct bargain is not achieved. To that end, a multi-agent scenario of mechanical systems based on [41] is considered, whose underlying nonlinearity hinders the bargaining algorithm, as observed in Figure 21. It is verified that, in those scenarios, synchronization of all the systems is not completely achieved when the value of the disagreement point diverges.

6. Conclusions

In conclusion, this paper investigated the application of Distributed Model Predictive Control (DMPC) with bargaining games for a vehicular network, considering both symmetric and non-symmetric cases in simulation and emulation. One of the main strengths lies in its successful application of predictive control to achieve output synchronization of both symmetric and non-symmetric agents with linear dynamics. The proposed approach extends this methodology to dynamic cases, even considering a discrete perspective for practical implementation. This approach dynamically incorporates the analysis of the disagreement point for negotiation between agents, enabling its application in both symmetric and non-symmetric games. Furthermore, in real-world implementation, through hardware-in-the-loop emulation of dynamic systems, the paper successfully synchronizes the agents’ dynamics, leading to an agreement. The results demonstrated that the proposed approach achieved convergence, albeit with slightly slower dynamics and occasional fluctuations observed in the non-symmetric cases during implementation. Nevertheless, despite these challenges, the system still managed to converge toward the desired reference signal. These findings highlight the efficacy and potential of DMPC with bargaining games for coordinating and optimizing vehicular networks, paving the way for future research and development in this area. Further refinements and optimizations can address the observed limitations, ultimately leading to more robust and efficient cooperative driving solutions for enhanced traffic management, reduced environmental impacts, and improved overall transportation system performance. Looking ahead, future work could strengthen this theory by incorporating security parameters into cost functions and considering unstable models in an open-loop configuration. Additionally, the inclusion of heterogeneity parameters could enhance the response in non-symmetric game implementations, further expanding the practical applicability of the proposed approach.

Author Contributions

Conceptualization, M.F.A.-C., E.M.-N. and D.T.-C.; methodology, M.F.A.-C. and D.T.-C.; software, M.F.A.-C. and J.P.; validation, D.T.-C. and E.M.-N.; formal analysis, M.F.A.-C.; investigation, M.F.A.-C., D.T.-C.; resources, E.M.-N.; data curation, J.P.; writing—original draft preparation, M.F.A.-C., J.P. and E.M.-N.; writing—review and editing, D.T.-C. and E.M.-N.; visualization, M.F.A.-C.; supervision, D.T.-C. and E.M.-N.; project administration, E.M.-N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CAV     Connected and Automated Vehicle
CCAC    Cooperative Adaptive Cruise Control
DMPC    Distributed Model Predictive Control
HIL     Hardware-in-the-Loop
ITS     Intelligent Transportation System
MPC     Model Predictive Control
V2V     Vehicle-to-Vehicle

References

1. Ballinger, B.; Stringer, M.; Schmeda-Lopez, D.R.; Kefford, B.; Parkinson, B.; Greig, C.; Smart, S. The vulnerability of electric vehicle deployment to critical mineral supply. Appl. Energy 2019, 255, 113844.
2. Jia, D.; Lu, K.; Wang, J.; Zhang, X.; Shen, X. A Survey on Platoon-Based Vehicular Cyber-Physical Systems. IEEE Commun. Surv. Tutor. 2016, 18, 263–284.
3. van Arem, B.; van Driel, C.J.G.; Visser, R. The Impact of Cooperative Adaptive Cruise Control on Traffic Flow Characteristics. IEEE Trans. Intell. Transp. Syst. 2006, 7, 429–436.
4. Lee, J.; Park, B.B.; Malakorn, K.; So, J.J. Sustainability assessments of cooperative vehicle intersection control at an urban corridor. Transp. Res. Part C Emerg. Technol. 2013, 32, 193–206.
5. Kovačić, M.; Mutavdžija, M.; Buntak, K. New Paradigm of Sustainable Urban Mobility: Electric and Autonomous Vehicles: A Review and Bibliometric Analysis. Sustainability 2022, 14, 9525.
6. Kaffash, S.; Nguyen, A.T.; Zhu, J. Big data algorithms and applications in intelligent transportation system: A review and bibliometric analysis. Int. J. Prod. Econ. 2021, 231, 107868.
7. Eskandarian, A.; Wu, C.; Sun, C. Research Advances and Challenges of Autonomous and Connected Ground Vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 22, 683–711.
8. Mcdonald, A.; McGehee, D.; Chrysler, S.; Angell, L.; Askelson, N.; Seppelt, B. National Survey Identifying Gaps in Consumer Knowledge of Advanced Vehicle Safety Systems. Transp. Res. Rec. J. Transp. Res. Board 2016, 2559.
9. Li, Z.; Duan, Z. Cooperative Control of Multi-Agent Systems: A Consensus Region Approach; CRC Press: Boca Raton, FL, USA, 2017.
10. Shladover, S.E.; Su, D.; Lu, X.Y. Impacts of cooperative adaptive cruise control on freeway traffic flow. Transp. Res. Rec. 2012, 2324, 63–70.
11. Zohdy, I.H.; Rakha, H.A. Intersection management via vehicle connectivity: The intersection cooperative adaptive cruise control system concept. J. Intell. Transp. Syst. 2016, 20, 17–32.
12. Zhang, L.; Wang, Y.; Zhu, H. Theory and Experiment of Cooperative Control at Multi-Intersections in Intelligent Connected Vehicle Environment: Review and Perspectives. Sustainability 2022, 14, 1542.
13. Meng, Z.; Xia, X.; Xu, R.; Liu, W.; Ma, J. HYDRO-3D: Hybrid Object Detection and Tracking for Cooperative Perception Using 3D LiDAR. IEEE Trans. Intell. Veh. 2023.
14. Isermann, R.; Schaffnit, J.; Sinsel, S. Hardware-in-the-loop simulation for the design and testing of engine-control systems. Control Eng. Pract. 1999, 7, 643–653.
15. Zhang, Y.; Xiong, R.; He, H.; Shen, W. Lithium-ion battery pack state of charge and state of energy estimation algorithms using a hardware-in-the-loop validation. IEEE Trans. Power Electron. 2016, 32, 4421–4431.
16. Maniatopoulos, M.; Lagos, D.; Kotsampopoulos, P.; Hatziargyriou, N. Combined control and power hardware in-the-loop simulation for testing smart grid control algorithms. IET Gener. Transm. Distrib. 2017, 11, 3009–3018.
17. Wei, W.; Wu, Q.; Wu, J.; Du, B.; Shen, J.; Li, T. Multi-agent deep reinforcement learning for traffic signal control with Nash Equilibrium. In Proceedings of the 2021 IEEE 23rd Int Conf on High Performance Computing & Communications; 7th Int Conf on Data Science & Systems; 19th Int Conf on Smart City; 7th Int Conf on Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys), Haikou, China, 20–22 December 2021; pp. 1435–1442.
18. Yang, H.; Rakha, H.; Ala, M.V. Eco-cooperative adaptive cruise control at signalized intersections considering queue effects. IEEE Trans. Intell. Transp. Syst. 2016, 18, 1575–1585.
19. Farina, M.; Scattolini, R. Distributed predictive control: A non-cooperative algorithm with neighbor-to-neighbor communication for linear systems. Automatica 2012, 48, 1088–1096.
20. Trodden, P.A.; Maestre, J.M. Distributed predictive control with minimization of mutual disturbances. Automatica 2017, 77, 31–43.
21. Grammatico, S. Proximal Dynamics in Multiagent Network Games. IEEE Trans. Control. Netw. Syst. 2018, 5, 1707–1716.
22. Valencia, F.; López, J.D.; Patino, J.A.; Espinosa, J.J. Bargaining game based distributed MPC. In Distributed Model Predictive Control Made Easy; Springer: Berlin/Heidelberg, Germany, 2014; pp. 41–56.
23. Oszczypała, M.; Ziółkowski, J.; Małachowski, J.; Lęgas, A. Nash Equilibrium and Stackelberg Approach for Traffic Flow Optimization in Road Transportation Networks—A Case Study of Warsaw. Appl. Sci. 2023, 13, 3085.
24. Dixit, V.V.; Denant-Boemont, L. Is equilibrium in transport pure Nash, mixed or Stochastic? Transp. Res. Part C Emerg. Technol. 2014, 48, 301–310.
25. Chu, H.; Guo, L.; Gao, B.; Chen, H.; Bian, N.; Zhou, J. Predictive cruise control using high-definition map and real vehicle implementation. IEEE Trans. Veh. Technol. 2018, 67, 11377–11389.
26. Lin, Y.; Wu, C.; Eskandarian, A. Integrating odometry and inter-vehicular communication for adaptive cruise control with target detection loss. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1848–1853.
27. Rayamajhi, A.; Biron, Z.A.; Merco, R.; Pisu, P.; Westall, J.M.; Martin, J. The impact of dedicated short range communication on cooperative adaptive cruise control. In Proceedings of the 2018 IEEE International Conference on Communications (ICC), Kansas City, MO, USA, 20–24 May 2018; pp. 1–7.
28. Valencia, F.; Patiño, J.; López, J.D.; Espinosa, J. Game Theory Based Distributed Model Predictive Control for a Hydro-Power Valley Control. IFAC Proc. Vol. 2013, 46, 538–544.
29. Nguyen, T.L.; Guillo-Sansano, E.; Syed, M.H.; Nguyen, V.H.; Blair, S.M.; Reguera, L.; Tran, Q.T.; Caire, R.; Burt, G.M.; Gavriluta, C.; et al. Multi-agent system with plug and play feature for distributed secondary control in microgrid—Controller and power hardware-in-the-loop implementation. Energies 2018, 11, 3253.
30. Khalifa, H.A.E.W.; Kumar, P. Multi-objective optimisation for solving cooperative continuous static games using Karush-Kuhn-Tucker conditions. Int. J. Oper. Res. 2023, 46, 133–147.
31. Filho, C.M.; Wolf, D.F.; Grassi, V.; Osório, F.S. Longitudinal and lateral control for autonomous ground vehicles. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium Proceedings, Dearborn, MI, USA, 8–11 June 2014; pp. 588–593.
32. Baldi, S.; Frasca, P. Adaptive synchronization of unknown heterogeneous agents: An adaptive virtual model reference approach. J. Frankl. Inst. 2019, 356, 935–955.
33. Nash, J.F., Jr. The Bargaining Problem. Econometrica 1950, 18, 155–162.
34. Peters, H. Axiomatic Bargaining Game Theory; Theory and Decision Library C Series; Springer: Berlin/Heidelberg, Germany, 1992.
35. Nash, J.F. Equilibrium points in n-person games. Proc. Natl. Acad. Sci. USA 1950, 36, 48–49.
36. Peters, H.; Van Damme, E. Characterizing the Nash and Raiffa bargaining solutions by disagreement point axioms. Math. Oper. Res. 1991, 16, 447–461.
37. Börgers, T.; Sarin, R. Learning through reinforcement and replicator dynamics. J. Econ. Theory 1997, 77, 1–14.
38. Zou, Y.; Su, X.; Li, S.; Niu, Y.; Li, D. Event-triggered distributed predictive control for asynchronous coordination of multi-agent systems. Automatica 2019, 99, 92–98.
39. Zoccali, P.; Loprencipe, G.; Lupascu, R.C. Acceleration measurements inside vehicles: Passengers’ comfort mapping on railways. Measurement 2018, 129, 489–498.
40. Baldi, S.; Rosa, M.R.; Frasca, P.; Kosmatopoulos, E.B. Platooning merging maneuvers in the presence of parametric uncertainty. IFAC-PapersOnLine 2018, 51, 148–153.
41. Arevalo-Castiblanco, M.F.; Tellez-Castro, D.; Sofrony, J.; Mojica-Nava, E. Adaptive synchronization of heterogeneous multi-agent systems: A free observer approach. Syst. Control Lett. 2020, 146, 104804.
Figure 1. Cooperative platoon formation for cooperative cruise control ($v_i$: vehicle velocity, $d_i$: distance between vehicles).
Figure 2. Model predictive control with bargaining games.
Figure 3. Study case communication graph.
Figure 4. Symmetric bargaining simulation results.
Figure 5. Symmetric bargaining cost function.
Figure 6. Symmetric bargaining control action.
Figure 7. Non-symmetric bargaining simulation results.
Figure 8. Disagreement point of non-symmetric bargaining simulation result.
Figure 9. Cost function of non-symmetric bargaining simulation result.
Figure 10. Control action of non-symmetric bargaining simulation result.
Figure 11. DESYNC laboratory implementation modules.
Figure 12. Block diagram for HIL implementation.
Figure 13. Communication graph used in emulation case.
Figure 14. Output synchronization of bargaining game theory implementation in the symmetric case.
Figure 15. Cost function of bargaining game theory implementation in a symmetric case.
Figure 16. Control action of bargaining game theory implementation in a symmetric case.
Figure 17. Output synchronization of bargaining game theory implementation in a non-symmetric case.
Figure 18. Disagreement point of bargaining game theory implementation in a non-symmetric case.
Figure 19. Cost function of bargaining game theory implementation in a non-symmetric case.
Figure 20. Control action of bargaining game theory implementation in a non-symmetric case.
Figure 21. Output synchronization of bargaining game theory implementation in a non-bargaining case.
Table 1. Player coefficients and initial conditions.

Agent   a_1     a_2     b_1     x_0
A_0     −0.25   −0.5    1       [2, 1]
A_1     −1.25   1       0.5     [1, 2.1]
A_2     −0.5    2.5     0.75    [1, 0.2]
A_3     −0.75   2       1.5     [3, 2.3]
A_4     −1.5    2.5     1       [3, 0.6]
A_5     −1      2       1       [2, 0.5]
A_6     −0.75   1       0.5     [1, 0.4]
Table 2. Agent coefficients and initial conditions for the emulation case.

Agent   a_1     a_2     b_1     x_0
A_0     −0.25   −0.5    1       [1, 2]
A_1     −1.25   1       0.5     [1, 4]
A_2     −0.5    2.5     0.75    [1, 2]
A_3     −0.75   2       1.5     [1, 4]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
