Article

A Game Theory-Based Approach for Modeling Autonomous Vehicle Behavior in Congested, Urban Lane-Changing Scenarios

1 Chair ITS-Sustainable Transport Logistics 4.0, Johannes Kepler University, 4040 Linz, Austria
2 Department of Communications Technology, Ural Federal University, 620078 Yekaterinburg, Russia
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(4), 1523; https://doi.org/10.3390/s21041523
Received: 25 January 2021 / Revised: 16 February 2021 / Accepted: 18 February 2021 / Published: 22 February 2021
(This article belongs to the Special Issue Sensors Technologies for Intelligent Transportation Systems)

Abstract
Autonomous vehicles are expected to display human-like behavior, at least to the extent that their decisions can be intuitively understood by other road users. If this is not the case, the coexistence of manual and autonomous vehicles in a mixed environment might affect road user interactions negatively and might jeopardize road safety. To this end, it is highly important to design algorithms that are capable of analyzing human decision-making processes and of reproducing them. In this context, lane-change maneuvers have been studied extensively. However, not all potential scenarios have been considered, since most works have focused on highway rather than urban scenarios. We contribute to the field of research by investigating a particular urban traffic scenario in which an autonomous vehicle needs to determine the level of cooperation of the vehicles in the adjacent lane in order to proceed with a lane change. To this end, we present a game theory-based decision-making model for lane changing in congested urban intersections. The model takes as input driving-related parameters of the vehicles in the intersection before they come to a complete stop. We validated the model by relying on the 3DCoAutoSim simulator. We compared the prediction model outcomes with actual participant decisions, i.e., whether they allowed the autonomous vehicle to drive in front of them. The results are promising, with the prediction accuracy being 100% in all of the cases in which the participants allowed the lane change and 83.3% in the other cases. The false predictions were due to delays in resuming driving after the traffic light turned green.

1. Introduction

At this point in time, numerous studies have proven the effectiveness of autonomous vehicles in dealing with challenges such as road safety, fuel consumption, sustainability, etc. [1,2]. Considering the scientific achievements so far, it is difficult to imagine a future where autonomous vehicles will not be used in transportation. However, before introducing self-driving cars on real roads on a large scale, many problems need to be addressed and solved [3].
In the near future, we can expect autonomous and manually driven cars to coexist on most roads. Drivers operating conventional vehicles might expect human-like behavior from autonomous vehicles, which could cause situations of uncertainty and mistrust [4] if the expectations are not fulfilled, and this could ultimately threaten road safety [5]. This is particularly important in certain complex scenarios such as lane-changing maneuvers, in which cooperation with other road users is required. To address this issue, it is vital to design algorithms that are able to analyze human decision-making processes and interaction patterns as well as able to implement models to predict the action of drivers. Such algorithms can be designed by relying on game theory [6] or mathematical models to analyze interactions among several players that make decisions.
A brief description of game theory is presented in the following paragraph. Game theory is frequently used as a core decision-making layer of autonomous vehicle algorithms [3,7,8]. The participants in the game compete with each other to protect their interests, each utilizing a strategy based on the current outcome of the game and their knowledge. For lane-changing scenarios, dynamic noncooperative games are most commonly used. The game itself is a mathematical object that is strictly defined by the following:
  • a fixed number of players N = {1, 2, 3, …, n};
  • a set of strategies S_1, S_2, S_3, …, S_n, one for each player;
  • specific outcomes or payoffs for every possible combination of strategies (s_1, s_2, s_3, …, s_n) ∈ S_1 × S_2 × S_3 × ⋯ × S_n (where s_i is the current strategy of player i).
The payoff function of player i is defined on the set of all outcomes of the game, U_i : S_1 × S_2 × S_3 × ⋯ × S_n → ℝ. A strategy s_i* ∈ S_i is optimal if, for every s_i ∈ S_i with s_i ≠ s_i* and every combination (s_1, …, s_{i−1}, s_{i+1}, …, s_n), the inequality U_i(s_1, …, s_{i−1}, s_i*, s_{i+1}, …, s_n) ≥ U_i(s_1, …, s_{i−1}, s_i, s_{i+1}, …, s_n) holds.
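To make these definitions concrete, the following sketch stores a finite two-player game as a payoff table and checks the optimality condition as a best response. The strategy names and payoff values are illustrative only, not taken from the paper:

```python
# Hypothetical 2-player game: EGO chooses to merge or stay; FV yields or blocks.
S_EGO = ["merge", "stay"]
S_FV = ["yield", "block"]

# Payoff table for EGO, U_EGO(s_ego, s_fv); values are illustrative only.
U_EGO = {("merge", "yield"): 3, ("merge", "block"): -1,
         ("stay", "yield"): 1, ("stay", "block"): 0}

def is_optimal(payoff, own_strategies, s_star, s_other):
    """Check the optimality condition: U(s*, s_-i) >= U(s, s_-i) for all s != s*."""
    return all(payoff[(s_star, s_other)] >= payoff[(s, s_other)]
               for s in own_strategies if s != s_star)

def best_response(payoff, own_strategies, s_other):
    """The strategy that maximizes the player's payoff against a fixed opponent."""
    return max(own_strategies, key=lambda s: payoff[(s, s_other)])
```

For example, `best_response(U_EGO, S_EGO, "yield")` returns `"merge"`, which `is_optimal` confirms satisfies the inequality above against the fixed opponent strategy.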
In this work, we propose a decision-making model based on a dynamic noncooperative game to investigate lane changing in the urban scenario of a congested intersection, which is achieved by the implementation of a model that predicts the decision-making process of drivers. To this end, we assume that the game’s result can be predicted if each vehicle maximizes the payoff of the interaction. The proposed scenario consists of a two-lane road with traffic at an intersection regulated by traffic lights. The left lane is designated for driving through the intersection as well as left turns. The right lane is designated only for driving straight through the intersection. In the proposed scenario the traffic flow in the left lane is much slower than in the right lane, since vehicles in the left lane need to decelerate when a vehicle turns left. The vehicles in our scenario are described as follows:
  • an autonomous vehicle (EGO) (in white) that performs the lane-change maneuver and is controlled by the game theory-based strategy S;
  • the following vehicle (FV) in green; and
  • the leading vehicle in front of the FV, represented by LEAD (in red).
We implement the following use case, as illustrated in Figure 1, which consists of the following series of actions:
  • When the traffic light is red, EGO enters the road from the right lane and stops behind the last vehicle in the queue.
  • EGO needs to perform a lane change to turn left at the intersection. Due to the traffic stopped in the left lane, EGO interacts with FV by activating the turn signal to indicate its intention to merge in front.
  • When the traffic light turns green, FV decides whether to wait and allow EGO to merge in front. This FV decision is made based on the perceived information related to the current EGO acceleration and previously conveyed information by the EGO vehicle regarding its merging intention.
According to the related literature, lane-change algorithms are usually divided into the following three layers [9]:
  • strategic layer, related to route planning;
  • tactical layer, related to decision-making processes; and
  • operational layer, linked to control tasks.
The proposed model focuses on the decision-making process in the tactical layer. This paper is organized as follows. In Section 2, literature relevant to lane changing in both urban and highway scenarios is reviewed. In Section 3, we introduce the proposed decision-making model in detail, including the mathematical definition of the payoff functions. In Section 4, the simulation environment and scenario for model validation are described. The game solution and the results obtained from the validation tests are presented in Section 5. Conclusions and future directions of the presented research are provided in Section 6.

2. Related Work

Decision making in autonomous vehicles (AVs) has been addressed in several works; for example, the authors in [10] developed an approach to mimic human behavior. A variety of driving styles were considered to study driving safety, ride comfort, and travel efficiency as utility functions. Two noncooperative games were implemented using Nash and Stackelberg equilibrium and later evaluated. The authors concluded that the developed algorithms were capable of performing the proper decisions under different driving situations. In [11], the authors combined game theory with reverse reinforcement learning to create an algorithm for predicting driver behavior. To this end, they relied on dynamic noncooperative game concepts and the idea of social value orientation for game formulation. The Next Generation Simulation (NGSIM) data set for highway 101 was then used to obtain the reward function.
A work based on differential games for a fully automated lane-changing and car-following control system (LCCS) was presented in [12]. The controlled vehicles made decisions in order to minimize the predicted costs that resulted from undesirable situations. The authors evaluated the discrete and continuous control variables, such as lane-change decisions and accelerations in a simulated scenario and concluded that the approach delivered optimal lane-change decisions and accelerations for both noncooperative and cooperative controllers.
A further example for lane-changing decision-making based on game theory can be found in [13]. The authors built a two-player nonzero-sum noncooperative game model under incomplete information for mandatory lane changing. Based on Harsanyi transformation [14], they transformed the model into a game that contained imperfect information in order to cover both traditional and connected environments. In order to validate the models, the authors used the NGSIM, I-80-F data set. Their model’s accuracy was 88% and 82% for their three- and two-strategy merging event models, respectively, and 77% and 61% for their three- and two-strategy models in non-merging events.
A similar approach that relied on Harsanyi transformation and the NGSIM data set described a dynamic noncooperative game [15]. Two game models for connected and for traditional environments were implemented utilizing complete and incomplete information, respectively. The game with incomplete information was then transformed to a game of imperfect information. In order to build the payoff functions, the authors used acceleration as a set of players’ strategies. In line with this, a further dynamic noncooperative game approach was implemented in [16] using acceleration as a set of strategies. Parameters such as safety and space between vehicles were in this case used to build payoff functions. The authors also incorporated driver’s aggressiveness into their model as a key factor.
As the works described above illustrate, lane changing has been addressed in a number of studies, the majority of which focus on highway lane-changing scenarios.
One of the first lane-change models that focused on decision-making in an urban scenario was designed by Gipps [17]. This model suggested a connected structure of decisions for the driver to follow when making a decision for lane changing. It was designed for urban traffic situations and therefore considered the influence of traffic lights, queues, and heavy duty vehicles. Although this rule-based approach was developed in the 1980s, it is still an efficient and applicable decision-making model.
Additional studies for urban scenarios were presented in [18,19]. Lane changing was studied by adopting fundamental triangular diagrams to obtain the optimal longitudinal position for autonomous vehicles in an urban intersection. The authors assumed that a lane change could not be performed when the destination lane was occupied. In real-life scenarios, however, we argue that interaction and cooperation with drivers in the destination lane might enable the maneuver. Therefore, we address exactly this situation and propose accordingly a decision-making model for a lane change in an urban intersection with dense traffic. To this end, we adhered to game theory and adopted a similar interaction process for our decision-making algorithm to the one defined in [20], in which a communication scheme with request and response messages was presented. In our work, we considered this interaction to be a competition between two drivers for a place in the traffic queue, where players make decisions based on space and safety criteria.
As previously mentioned, most of the lane-change studies performed so far are based on highway scenarios, in which the speeds of all vehicles are assumed to exceed a lower limit of about 30 km/h (depending on the country). Consequently, the reviewed models are not directly applicable to more specific scenarios such as congested urban intersections.
We contribute to the research in this main field by filling in the gap in the literature regarding mandatory lane change in intersections with dense traffic in urban scenarios. To this end, we study if, contrary to what the authors in [19] assumed, a lane change can still be performed when the destination lane is occupied. We argue that, in real-life scenarios, interaction and cooperation with surrounding traffic might enable the maneuver, and we propose accordingly a decision-making model based on game theory for lane change. To this end, we adapted the model presented in [16] to implement a dynamic noncooperative game that used acceleration as part of the player set of strategies. We redesigned the payoff functions and pertinent speed and acceleration parameters for a lane-change urban scenario and produced a decision-making model based on two-player nonzero-sum noncooperative dynamic game for lane changing.
A detailed description of the proposed decision-making model is presented in the next section.

3. The Proposed Decision-Making Model

The authors in [21] categorized several game-theory-based models into five groups, namely empirical game-theory, classic Nash equilibrium, incomplete-information game, sequential game, and evolutionary game models. Based on this classification, our proposed decision-making model contributes to the sequential group of game-theory models.
In our work, we assume that autonomous vehicles can request and obtain information from nearby cars such as current position, speed, and acceleration during the red phase of a traffic light in real time. This can be accomplished by establishing a communication from vehicle to vehicle (V2V) and from vehicle to everything (V2X), or by obtaining data from sensors such as depth cameras, lidar, sonar, etc. [9]. It is important to mention that, for the sake of simplicity, the delays related to obtaining information via sensors or communication are not considered in this work. In this section, we focus on game formulation through a mathematical definition of its payoff functions and the outcome.

3.1. Game Formulation

We propose a decision-making model relying on a dynamic noncooperative game model, as presented in [16]. We aim to investigate lane changing in an urban scenario with dense traffic approaching an intersection. We assume that the game’s result can be predicted if each vehicle maximizes the payoff of the interaction.
To illustrate our approach, we adhere to the scenario introduced in Section 1 (Figure 1). The strategies that can be selected by the EGO and FV vehicles are limited to two, as described in the following lines: The strategy set available for EGO includes
  • to change lanes, strategy A_1; and
  • not to change lanes, strategy A_2.
The FV’s strategy set consists of the following options:
  • accepting EGO’s lane change, strategy B_1; and
  • rejecting EGO’s lane change, strategy B_2.
The EGO outcome is denoted by P_{xy} and its chosen strategy by y; the FV outcome is denoted by Q_{xy} and its chosen strategy by x.
The description of the game is represented in its game theory normal-form and extensive-form in Table 1 and Figure 2, respectively. The flow of the game is as follows: the EGO vehicle chooses the strategy first and, then, the FV reacts to it, each trying to maximize the outcome of the interaction.
The outcome or total payoff can be defined as a set of payoff functions, such as the payoffs for time consumption, safety, and position. Each payoff depends on different parameters, including the original position of the vehicles, the gap size with respect to LEAD, driving style, cooperation degree, and urgency. The result can be predicted by defining the outcome as follows (see Figure 2): P_{11} > P_{21} > P_{12} > P_{22} and Q_{11} > Q_{21} > Q_{12} > Q_{22}.
The FV will always choose strategy B_1 because, for either EGO strategy, its payoff is then the largest. The EGO will always choose strategy A_1 because P_{11} > P_{12}. Therefore, the players will always select strategies A_1 and B_1 in order to obtain the P_{11} and Q_{11} payoffs.
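This backward-induction reasoning can be sketched in a few lines, using illustrative payoff values that respect the orderings P_{11} > P_{21} > P_{12} > P_{22} and Q_{11} > Q_{21} > Q_{12} > Q_{22}:

```python
# Illustrative payoffs only; index x is FV's strategy (B_x), y is EGO's (A_y).
P = {(1, 1): 4, (2, 1): 3, (1, 2): 2, (2, 2): 1}  # EGO outcomes P[x, y]
Q = {(1, 1): 4, (2, 1): 3, (1, 2): 2, (2, 2): 1}  # FV outcomes Q[x, y]

def solve_sequential(P, Q):
    """Backward induction: FV (second mover) best-responds to each EGO
    strategy y; EGO (first mover) anticipates this and maximizes P."""
    fv_reply = {y: max((1, 2), key=lambda x: Q[(x, y)]) for y in (1, 2)}
    y_star = max((1, 2), key=lambda y: P[(fv_reply[y], y)])
    return y_star, fv_reply[y_star]  # (EGO strategy A_y, FV strategy B_x)
```

With any payoffs obeying the stated orderings, the procedure returns the pair (A_1, B_1), matching the argument in the text.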
Thus, by formalizing the payoffs as mathematical equations, it is possible to model the decision-making process of drivers and to predict the result of the game. In this paper, we propose a decision-making model that adopts the same approach. However, instead of the strategies A_1, A_2, B_1, and B_2, the strategy set of our proposed model is defined as acceleration ranges, as proposed in [13,15,16].
The acceleration range is defined as [0, F(a_LEAD)]. The lowest acceleration cannot be negative, since our game starts when EGO stops in the adjacent lane between FV and LEAD and the players can select their strategy (acceleration) only once. The upper limit is constrained by the acceleration and position of LEAD, as these affect the possible acceleration strategies and payoff criteria of both EGO and FV. It is important to point out that LEAD is not a player of the game because it does not interact with FV and EGO. The proposed game, including the set of strategies and the outcome values, is depicted in its normal-form representation in Table 2.

3.2. The Payoff and Penalty Functions

The FV’s outcome is defined as the product of the penalty and the mean of its payoff functions, namely FV’s safety and space payoffs (the latter reflecting the desire to keep the current gap with LEAD). The EGO’s outcome is the product of its safety payoff and the penalty. The difference between the EGO’s and FV’s safety payoffs is that the EGO’s safety payoff is based on the LEAD and FV vehicles, while the FV’s safety payoff is based on the EGO vehicle.
The penalty function is implemented in order to include the impact of a driver’s driving performance in the decision-making model. It uses the average acceleration and speed driving patterns observed at the intersections preceding the last one to determine the driving performance and is then able to penalize drivers that deviate from these values. We based our model on the work in [16] and adapted it by redesigning the initial payoff functions to suit our scenario, as described in detail in the following subsections.

3.2.1. Safety Payoff Function of FV

The safety payoff function of FV, U_safety^FV, is defined by Equation (1) as the difference between the original safety function at time t_0, SF_{t_0}^FV, and the safety function at time t_i, SF_{t_i}^FV:
U_safety^FV = SF_{t_0}^FV − SF_{t_i}^FV    (1)
The FV’s safety function SF^FV(t) depends on the gap between the EGO and the FV and is denoted by Equation (2). In order to map the function to the range [−1, 1], 0.5 is subtracted from the cumulative distribution function and the result is multiplied by 2.
SF^FV(t) = 2 (F_cdf(G_{EGO,FV}) − 0.5)    (2)
where F_cdf (see Equation (3)) is a cumulative normal distribution function whose variable x and whose mean and standard deviation values have been adjusted to match the gap function G(t).
As illustrated in Figure 3, the mean value μ and standard deviation σ were adapted such that SF(t) equals −1 if G_{v1,v2}(t) is less than or equal to zero and equals 1 if G_{v1,v2}(t) is greater than the defined maximum gap of 4 m.
F_cdf(x) = (1 / (σ √(2π))) ∫_{−∞}^{x} exp(−(u − μ)² / (2σ²)) du    (3)
where σ = 0.6, μ = 2, and x = G_{v1,v2}(t).
As denoted in Equation (4), the function G_{v1,v2}(t) returns the distance between two vehicles at time t. To obtain this distance, the mean length of the two vehicles is subtracted from the difference of their longitudinal positions, where x(t) is defined as the center of the vehicle.
G_{v1,v2}(t) = x(t)_{v1} − x(t)_{v2} − (l_{v1} + l_{v2}) / 2    (4)
where x(t)_{vi} denotes the longitudinal position and l_{vi} the length of vehicle v_i.
The longitudinal position x(t) is defined based on uniform acceleration in Equation (5), where the value of a depends on the selected strategy.
x(t) = x_0 + V_0 t + a t² / 2    (5)
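Equations (2)-(5) can be sketched in a few lines of Python; the function names are ours, and the 5 m vehicle length follows the example in Section 3.2.3:

```python
import math

def normal_cdf(x, mu=2.0, sigma=0.6):
    """Cumulative normal distribution with the Equation (3) parameters."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def gap(x_front, x_rear, l_front=5.0, l_rear=5.0):
    """Equation (4): bumper-to-bumper gap from vehicle centers and lengths."""
    return x_front - x_rear - (l_front + l_rear) / 2.0

def position(x0, v0, a, t):
    """Equation (5): longitudinal position under uniform acceleration."""
    return x0 + v0 * t + a * t**2 / 2.0

def safety_fv(x_ego, x_fv):
    """Equation (2): FV safety function SF_FV(t), scaled to [-1, 1]."""
    return 2.0 * (normal_cdf(gap(x_ego, x_fv)) - 0.5)
```

With these parameters, `safety_fv` evaluates to roughly −1 at a zero gap and roughly +1 at the 4 m maximum gap, matching Figure 3.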

3.2.2. Space Payoff Function of the FV

Similar to the safety payoff, the space payoff for FV is defined as the difference between the values of the space function Sp^FV(t) at time t_0 and time t_i (Equation (6)).
U_space^FV = Sp_{t_0}^FV − Sp_{t_i}^FV    (6)
The normalized probability density function denoted by Equation (7) is utilized to obtain the FV’s space payoff function. Similar to the FV’s safety payoff function, the parameter x was adapted to the gap function G_{v1,v2}(t) denoted in Equation (4). Furthermore, σ and μ are adjusted such that Sp^FV(t) equals 1 when the gap between FV and LEAD equals 5 m and equals −1 when the gap is greater than 10 m or less than 0, as illustrated in Figure 4.
Sp^FV(t) = 2 (exp(−0.5 ((x − μ) / σ)²) − 0.5)    (7)
where σ = 5/3, μ = 5, and x = G_{LEAD,FV}(t).
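A minimal sketch of the space function of Equation (7), under the stated parameters σ = 5/3 and μ = 5 (the function name is ours):

```python
import math

def space_fv(gap_lead_fv, mu=5.0, sigma=5.0 / 3.0):
    """Equation (7): FV space function Sp_FV(t), a normalized Gaussian bump
    rescaled to [-1, 1] that peaks when the gap to LEAD is 5 m."""
    return 2.0 * (math.exp(-0.5 * ((gap_lead_fv - mu) / sigma) ** 2) - 0.5)
```

The function equals exactly 1 at a 5 m gap and approaches −1 at gaps of 0 m or 10 m, as described for Figure 4.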

3.2.3. Safety Payoff Function of EGO

The EGO’s safety payoff function is similar to the FV’s safety payoff function, as denoted by Equation (8). The only difference is that the EGO’s safety payoff function depends on both FV and LEAD.
U_safety^EGO = SF_{t_0}^EGO − SF_{t_i}^EGO    (8)
The EGO’s safety payoff function SF^EGO(t) is based on Equation (3) and is defined as the product of two cumulative distribution functions (Equation (9)). The first is a function of G_{EGO,FV}(t), and the second is a function of G_{LEAD,EGO}(t). To illustrate the relationship between the EGO’s safety payoff function and the vehicle’s position, we present an example in which we use FV’s longitudinal position as the origin (0 m) and set the length of all vehicles to 5 m.
At the same time, three different longitudinal positions of the LEAD car are chosen to demonstrate situations in which the distance between LEAD and FV changes. Figure 5 shows the dependency between the EGO’s safety function and the position of the EGO vehicle.
SF^EGO(t) = 2 (F_cdf(G_{EGO,FV}(t)) × F_cdf(G_{LEAD,EGO}(t)) − 0.5)    (9)

3.2.4. Penalty Function of the Vehicles

The original penalty function described in [16] was directly implemented in our model, as it suited the considered scenario; we describe it in this section. The penalty function is defined to consider the driving patterns of the vehicles that interact with EGO in the decision-making model. The function takes as input the FV acceleration and speed values acquired before stopping at the intersection. It calculates the outcome based on the deviation between these values and the FV and EGO parameter values selected in the decision-making process. The penalty function consists of the speed penalty P_V and the acceleration penalty P_a, as shown in Equation (10).
U_penalty = exp(−(P_V + P_a))    (10)
The speed and acceleration penalty functions are obtained as the squared deviation between the values acquired before the stop at the final intersection and the values selected as a result of the decision to allow EGO to merge in front, as denoted by Equations (11) and (12). The w_s and w_a coefficients are used to weight the impact of these functions on the total penalty.
P_V = (V_0 + a t − V_a)² w_s    (11)
where w_s is the speed penalty coefficient and V_a is the speed acquired prior to stopping at the last intersection.
Figure 6 displays the dependency of the penalty function on the selected acceleration strategy: the greater the deviation from the desired option, the smaller the payoff value.
P_a = t² (a − a_a)² w_a    (12)
where w_a is the acceleration penalty coefficient and a_a is the acceleration acquired prior to stopping at the last intersection.
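Equations (10)-(12) can be sketched as follows; the weight values w_s and w_a are illustrative placeholders, since the coefficients are not reported here:

```python
import math

def penalty(v0, a, t, v_acq, a_acq, w_s=0.05, w_a=0.05):
    """Equations (10)-(12): penalize deviation of the selected strategy from
    the speed/acceleration patterns acquired before the last intersection.
    The weights w_s and w_a are illustrative placeholders."""
    p_v = (v0 + a * t - v_acq) ** 2 * w_s  # Equation (11): speed penalty
    p_a = t ** 2 * (a - a_acq) ** 2 * w_a  # Equation (12): acceleration penalty
    return math.exp(-(p_v + p_a))          # Equation (10): value in (0, 1]
```

When the selected strategy reproduces the acquired speed and acceleration exactly, the penalty factor equals 1 and leaves the payoff unchanged; any deviation shrinks it toward 0.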

3.2.5. The Outcome of the Vehicles

The outcomes of both players are given by Equations (13) and (14).
U_EGO = U_safety^EGO × U_penalty    (13)
The outcome of FV is based on the mean value of its safety and space payoff functions. In the proposed model, we assume that the FV’s space payoff has the same impact on the driver as the safety payoff. Therefore, in Equation (14), we use the mean value between these two payoff functions.
U_FV = ((U_safety^FV + U_space^FV) / 2) × U_penalty    (14)
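A minimal sketch combining Equations (13) and (14) (function names ours):

```python
def outcome_ego(u_safety_ego, u_penalty):
    """Equation (13): EGO's total outcome."""
    return u_safety_ego * u_penalty

def outcome_fv(u_safety_fv, u_space_fv, u_penalty):
    """Equation (14): FV's total outcome; safety and space payoffs are
    averaged because they are assumed to carry equal weight."""
    return (u_safety_fv + u_space_fv) / 2.0 * u_penalty
```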

4. Data Acquisition and Use Case Definition for Model Validation

In order to validate the proposed decision-making model, we designed a simulation environment that replicates the scenario described in Section 1. To this end, we used the simulator for cooperative advanced driver assistance systems (ADAS) and automated vehicles, 3DCoAutoSim, which was developed to test intelligent transportation systems (ITS)-related applications and connects the game engine Unity 3D [22] with Simulation of Urban MObility (SUMO) [23] and ROS [24,25,26,27,28,29,30]. Details of the simulation framework are presented in the following subsection.
The participants manually controlled the player vehicle (Player, denoted until now with FV). They were asked to drive from the origin to destination within a given time interval while following traffic rules. In order to acquire the necessary data regarding driving patterns, the main route included several intersections. At the last intersection, the traffic lights were controlled by a specific program so that they remained red until Player and EGO were both located at the positions defined in the proposed decision-making model (Figure 7(1)).
When Player stopped at the final intersection and entered the queue of traffic waiting for the traffic lights to turn green, an event was triggered for EGO to drive to the left lane in which Player was located (Figure 7(2)).
When EGO stopped and activated the left turn signal to perform the lane change, a further event was triggered to apply the proposed decision-making model (Figure 7(3)).
The last step of the simulation included the interaction between Player and EGO, during which the participants in the driving test needed to decide whether they accepted or rejected EGO’s request to change lanes and to merge. Figure 8 illustrates the validation process flow.

4.1. 3DCoAutoSim Description

As previously mentioned, to perform the driving experiments needed to validate the proposed model, we relied on the 3DCoAutoSim framework, a driver-centric simulator in which the player controls a vehicle that drives through created or imported scenarios. The simulator is used to test different use cases and applications regarding the feasibility of automation and communication features in vehicles. It was developed using Unity 3D, a cross-platform game engine for developing 3D games with simulated physics [22].
The Player vehicle is controlled using the CSL Elite Wheel Base and pedals from Fanatec, which communicates with Unity3D through an external intrinsic input module of the game engine, as illustrated in Figure 9.
To simulate and visualize traffic, the modular components of 3DCoAutoSim are linked with SUMO to communicate through the traffic control interface (TraCI), which allows external applications to access a running road traffic simulation and to obtain the values of the simulated objects to manipulate their behavior online. Unity uses a C# library of TraCI to access the SUMO parameters and to consequently generate traffic.

4.2. SUMO Description

In the model validation scenario, SUMO controlled all the traffic lights and vehicles in the scenario except the Player vehicle. In order to generate the SUMO road and traffic network simulation, a section of the city of Vienna was imported from OpenStreetMap (OSM) [31] into SUMO. The NETCONVERT [32] and POLYCONVERT [33] commands were then applied to the OSM data to generate the road network as illustrated in Figure 10. To generate the traffic demand, a set of random trips [34] with different vehicle types (vType elements) were also added to the network (Figure 10). The process to generate and visualize the traffic in SUMO is presented in Figure 11.
In addition, we adopted TraCI to study the performance of the proposed decision-making model and the trigger models.
In order to trigger the intended behavior of the EGO vehicle in the congested traffic intersection, we developed a procedure with different behavioral levels, as illustrated in Algorithm 1. If the Player parameter RoadID matches the ID of the last intersection while waiting for the traffic light to turn green, a trigger for EGO is activated for it to merge in front of the Player on the left lane (Figure 7).
Algorithm 1 Trigger for Activating the EGO Vehicle
input: Player class, p_v; RoadID of the last intersection, r_ID;
output: event t;
Algorithm 2 shows the procedure developed to investigate the performance of the proposed decision-making model. The EGO vehicle is first verified in the algorithm. If the EGO is stopped and its left turn signal is activated to perform a lane change, an event to apply the proposed decision-making model is triggered for the Player. By applying the decision-making model, the Player decides whether to accept or reject the lane-change request from EGO.
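Since the listing of Algorithm 2 is reproduced as a figure in the original, the trigger condition it describes can be sketched as follows (class and attribute names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    # Hypothetical minimal vehicle state used by the trigger sketch.
    speed: float
    left_signal_on: bool = False

def lane_change_requested(ego: Vehicle) -> bool:
    """Trigger of Algorithm 2: apply the decision-making model once EGO has
    stopped and indicates the lane change with its left turn signal."""
    return ego.speed == 0.0 and ego.left_signal_on
```

In the simulation, this check runs against the EGO state obtained via TraCI; once it fires, the Player-side decision-making model is evaluated.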

4.3. Experimental Setup

We describe in this section the experimental setup.
Sample: eight persons took part in the model validation process through several driving tests. The group included six males and two females, with driving experience ranging from 0.5 to 31 years and an average age of 31 years. A total of 16 trips from two rounds, one without a time limit and one with a time limit, were conducted, and data were collected from each trip.
Algorithm 2 Proposed game theory-based decision-making model.
input: Player vehicle class, p_v; LEAD vehicle class, l_v; EGO vehicle class, e_v;
output: Bool res;
t1 = 0, t2 = 0;
Task definitions: participants in the experiment were required to perform the following tasks:
  • Drive for 5 min to become familiarized with the simulation platform. No data were collected.
  • Drive from the origin to destination, as described in the first paragraphs of Section 4.
  • Drive from the origin to destination, as described but with a time limitation. This task aimed to add pressure and urgency and to motivate the participant to reject a lane-change request from the EGO side.
Generated scenario: the scenario generated to validate the proposed model depicted a section of the Vienna city center (Vienna Ring). The structure and position of the buildings were imported from the Vienna Magistrate’s database [36]. The free and open-source 3D software Blender [37] was used to convert and edit the buildings into the Unity format. We additionally equipped the participant vehicle (Player, FV) with a routing navigation system (Navigator) to guide participants through the route. A timer completed the setup of the vehicle to notify drivers of the elapsed time (see Figure 9).

5. Results

5.1. Game Solution Description

Relying on the outcome of the previously defined method, this section describes the solution of the proposed approach.
Equation (15) defines a bi-level optimization problem in which EGO's optimal strategy $a^*_{EGO}$ depends on the strategy $a^*_{FV}$ selected by FV.
In our proposed decision-making model, we assumed that EGO is aware of the driving behavior of FV prior to its stopping at the intersection. Therefore, EGO can estimate and predict which strategy FV would pursue and can make a decision accordingly, such that EGO reaches its maximum game outcome.
$$a^*_{EGO} = \arg\max_{a_{EGO}} U_{EGO}(a_{EGO}, a^*_{FV}), \quad a^*_{FV} \in S^*_{FV} \tag{15}$$
where $S^*_{FV} \subseteq S_{FV}$; meanwhile, for every $a^*_{FV} \in S^*_{FV}$ and $a_{FV} \in S_{FV}$, the pair $(a_{EGO}, a^*_{FV})$ satisfies the inequality $U_{FV}(a_{EGO}, a^*_{FV}) \geq U_{FV}(a_{EGO}, a_{FV})$.
To simplify solving the task, we used discrete acceleration values with a step of 0.1 m/s². The game can therefore be solved by first obtaining FV's set of optimal strategies $S^*_{FV}$ for each of EGO's strategies $a_{EGO}$ and then using the obtained set $S^*_{FV}$ to find EGO's optimal strategy $a^*_{EGO}$. The extensive form of the proposed game is illustrated in Figure 12.
The result of the game can therefore be implemented in the proposed decision-making model as follows: if $a^*_{EGO} > a^*_{FV}$, the lane change can be performed; otherwise, a collision might occur.
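Under the discretization above, this bi-level search can be sketched as a brute-force enumeration over the acceleration grid. The payoff functions `U_FV` and `U_EGO` and the 2.0 m/s² upper bound below are illustrative stand-ins, not the paper's calibrated payoffs:

```python
# Discretized strategy grid: accelerations from 0 to F(LEAD) in 0.1 m/s^2
# steps; the upper bound of 2.0 m/s^2 is an assumed example value.
GRID = [round(0.1 * k, 1) for k in range(21)]

def U_FV(a_ego, a_fv, gap=5.0):
    # Illustrative FV payoff: rewards keeping the gap, penalizes hard
    # acceleration and being undercut by EGO. NOT the paper's payoff.
    return gap - 0.5 * a_fv ** 2 - max(0.0, a_ego - a_fv)

def U_EGO(a_ego, a_fv):
    # Illustrative EGO payoff: rewards out-accelerating FV, minus a comfort cost.
    return (a_ego - a_fv) - 0.1 * a_ego ** 2

# Lower level: FV's best response a*_FV to every EGO strategy a_EGO.
best_response = {a_ego: max(GRID, key=lambda a_fv: U_FV(a_ego, a_fv))
                 for a_ego in GRID}

# Upper level: EGO maximizes its payoff given FV's anticipated best response.
a_ego_star = max(GRID, key=lambda a_ego: U_EGO(a_ego, best_response[a_ego]))
a_fv_star = best_response[a_ego_star]

# Decision rule from above: perform the lane change only if a*_EGO > a*_FV.
print(a_ego_star, a_fv_star, a_ego_star > a_fv_star)  # prints: 2.0 1.0 True
```

With 21 grid points per player, the full enumeration is only 441 payoff evaluations, so the game can be solved online at every decision instant.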

5.2. Driving Experiment Prediction Results

Table 3 shows the input values for the proposed model collected during the driving experiment performed in the simulator to validate the decision-making model proposed in this paper. In tests 1 and 12, the model predictions did not match the drivers' actions. In 11 of the 16 cases in which FV needed to decide whether to accept or reject the lane-change request by EGO, FV rejected the request and EGO did not perform the lane-change maneuver. In the other 5 cases, FV enabled the lane change. The proposed decision-making model correctly predicted all 5 of these outcomes and 9 of the 11 outcomes in which FV rejected the request. Table 4 shows these results. A delay on FV's side in resuming driving after the traffic light turned green (see the acceleration a and speed v values close to zero in tests 1 and 12 in Table 3) affected the safety payoff function, causing the model to predict acceptance of the lane-change maneuver in these cases. Equations (16) and (17) were used to estimate the acquired acceleration and speed values of the Player prior to stopping at the last intersection.
$$a_a = \mathrm{mean}(a > 0.01) \tag{16}$$
$$v_a = \mathrm{mean}(v > 0.3) \tag{17}$$
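Equations (16) and (17) average only the samples recorded while the Player was actually moving, discarding the near-zero values logged while standing still. A minimal sketch (the trace values are made up for illustration):

```python
def acquired_values(accels, speeds):
    """Mean acceleration over samples with a > 0.01 m/s^2 and mean speed
    over samples with v > 0.3 m/s, per Equations (16) and (17)."""
    moving_a = [a for a in accels if a > 0.01]
    moving_v = [v for v in speeds if v > 0.3]
    a_a = sum(moving_a) / max(1, len(moving_a))
    v_a = sum(moving_v) / max(1, len(moving_v))
    return a_a, v_a

# Hypothetical trace: the vehicle accelerates, cruises, then stops at the light.
accels = [0.0, 1.2, 1.6, 0.8, 0.0, 0.0]
speeds = [0.0, 4.0, 9.0, 12.0, 0.2, 0.0]
a_a, v_a = acquired_values(accels, speeds)
print(round(a_a, 2), round(v_a, 2))  # prints: 1.2 8.33
```

Filtering out the standstill samples is what makes $a_a$ and $v_a$ characterize the Player's driving style on approach rather than the queueing phase.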
The data presented in Table 3 show that, in the considered scenario, the gap size had a significant impact on the driver's final decision: the bigger the gap between FV and LEAD, the higher the probability that the driver accepted the lane-change request from the EGO vehicle. At the same time, some exceptions could also be observed in tests 1 and 12; they were related to the time it took the drivers to resume driving after the traffic light turned green. Speed and acceleration were also determining factors in the final decision. The effect can be noticed when comparing test 4 with test 13: the gap in test 4 is bigger (5.31 m) than in test 13 (3.79 m), but the driver in test 4 nevertheless rejected the lane-change request from the EGO car while the driver in test 13 accepted it. We argue that this behavior was due to the fact that the acceleration and speed values right before the prediction was calculated were higher in test 4 than in test 13, even though the acquired speed and acceleration values of the Player prior to stopping at the last intersection were similar.
Summarizing the results presented in Tables 3 and 4, we conclude that the proposed decision-making model accurately predicts drivers' decisions regarding whether they allow the autonomous vehicle to drive in front of them.

6. Conclusions and Future Work

In this work, we presented a dynamic noncooperative decision-making model based on a two-player nonzero-sum game. The goal was to provide autonomous vehicles with the capability to analyze human decision-making processes and to act accordingly. The ultimate goal was to facilitate the coexistence of manual and autonomous vehicles in a mixed environment and thereby augment road safety.
The results from the validation of the proposed game theory-based decision-making model are promising: the prediction accuracy was 100% in all of the cases in which the participants allowed the lane change and 83.3% in the other cases, the latter being affected by the drivers' reaction time to the traffic light changing from red to green. The driver's final decision to accept a lane-change request from the EGO vehicle was influenced by the gap size between vehicles and by the speed and acceleration of the Player vehicle.
We conclude that our approach can be used as a tactical-level algorithm for making lane-changing decisions in autonomous vehicles at congested urban intersections. The reliability and prediction accuracy can be increased by evaluating the decision-making process during the entire lane-change action with a fixed "n" millisecond time step. Future work will include this idea and will extend the current approach to a game theory-based model with incomplete information using quantal response equilibrium. This will make it possible to address scenarios in which the EGO vehicle is not capable of obtaining all of the required information from the vehicles in the target lane. In addition, the driver's reaction time will be considered in the payoff functions, and the proposed model will be generalized such that it can be applied to both uncrowded and crowded urban scenarios.

Author Contributions

Conceptualization, N.S. and C.O.-M.; methodology, N.S. and C.O.-M.; investigation, N.S.; resources, N.S. and C.O.-M.; data curation, N.S.; writing—original draft preparation, N.S. and Y.L.; writing—review and editing, W.M.-A., A.V., and C.O.-M.; visualization, N.S.; supervision, C.O.-M.; project administration, C.O.-M.; funding acquisition, C.O.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Austrian Ministry for Climate Action, Environment, Energy, Mobility, Innovation, and Technology (BMK) Endowed Professorship for Sustainable Transport Logistics 4.0., IAV France S.A.S.U., IAV GmbH, Austrian Post AG, and the UAS Technikum Wien. It was additionally supported by the Zero Emission Roll-Out—Cold Chain Distribution_877493.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lai, F.; Carsten, O.; Tate, F. How much benefit does Intelligent Speed Adaptation deliver?—An analysis of its potential contribution to safety and environment. Accid. Anal. Prev. 2012, 48, 63–72. [Google Scholar] [CrossRef]
  2. Ahmed, S.S.; Sonduru Pantangi, S.; Eker, U.; Fountas, G.; Still, S.E.; Anastasopoulos, P.C. Analysis of Safety Benefits and Security Concerns from the Use of Autonomous Vehicles: A Grouped Random Parameters Bivariate Probit Approach with Heterogeneity in Means. Anal. Methods Accid. Res. 2020, 28, 100134. [Google Scholar] [CrossRef]
  3. Kauffmann, N.; Winkler, F.; Naujoks, F.; Vollrath, M. “What Makes a Cooperative Driver?” Identifying parameters of implicit and explicit forms of communication in a lane change scenario. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 1031–1042. [Google Scholar] [CrossRef]
  4. Olaverri-Monreal, C. Promoting trust in self-driving vehicles. Nat. Electron. 2020, 3, 292–294. [Google Scholar] [CrossRef]
  5. Olaverri-Monreal, C. Road safety: Human factors aspects of intelligent vehicle technologies. In Smart Cities, Green Technologies, and Intelligent Transport Systems; Springer: London, UK, 2017; pp. 318–332. [Google Scholar]
  6. Hollander, Y.; Prashker, J.N. The applicability of non-cooperative game theory in transport analysis. Transportation 2006, 33, 481–496. [Google Scholar] [CrossRef]
  7. Banjanovic-Mehmedovic, L.; Halilovic, E.; Bosankic, I.; Kantardzic, M.; Kasapovic, S. Autonomous Vehicle-to-Vehicle (V2V) Decision Making in Roundabout using Game Theory. Int. J. Adv. Comput. Sci. Appl. 2016, 7. [Google Scholar] [CrossRef]
  8. Fox, C.W.; Camara, F.; Markkula, G.; Romano, R.A.; Madigan, R.; Merat, N. When should the chicken cross the road? Game theory for autonomous vehicle-human interactions. In Proceedings of the 4th International Conference on Vehicle Technology and Intelligent Transport Systems–VEHITS 2018, Madeira, Portugal, 16–18 March 2018; pp. 429–431. [Google Scholar] [CrossRef]
  9. Laurgeau, C. Intelligent Vehicle Potential and Benefits; Springer: London, UK, 2012; pp. 1537–1551. [Google Scholar] [CrossRef]
  10. Hang, P.; Lv, C.; Xing, Y.; Huang, C.; Hu, Z. Human-Like Decision Making for Autonomous Driving: A Noncooperative Game Theoretic Approach. IEEE Trans. Intell. Transp. Syst. 2020. [Google Scholar] [CrossRef]
  11. Schwarting, W.; Pierson, A.; Alonso-Mora, J.; Karaman, S.; Rus, D. Social behavior for autonomous vehicles. Proc. Natl. Acad. Sci. USA 2019, 116, 24972–24978. [Google Scholar] [CrossRef] [PubMed]
  12. Wang, M.; Hoogendoorn, S.P.; Daamen, W.; van Arem, B.; Happee, R. Game theoretic approach for predictive lane-changing and car-following control. Transp. Res. Part C Emerg. Technol. 2015, 58, 73–92. [Google Scholar] [CrossRef]
  13. Ali, Y.; Zheng, Z.; Haque, M.M.; Wang, M. A game theory-based approach for modelling mandatory lane-changing behaviour in a connected environment. Transp. Res. Part C Emerg. Technol. 2019, 106, 220–242. [Google Scholar] [CrossRef]
  14. Harsanyi, J.C. Games with Incomplete Information Played by “Bayesian” Players, I–III Part I. The Basic Model. Manag. Sci. 1967, 14, 159–182. [Google Scholar] [CrossRef]
  15. Talebpour, A.; Mahmassani, H.S.; Hamdar, S.H. Modeling lane-changing behavior in a connected environment: A game theory approach. Transp. Res. Part C Emerg. Technol. 2015, 59, 216–232. [Google Scholar] [CrossRef]
  16. Yu, H.; Tseng, H.E.; Langari, R. A human-like game theory-based controller for automatic lane changing. Transp. Res. Part C Emerg. Technol. 2018, 88, 140–158. [Google Scholar] [CrossRef]
  17. Gipps, P.G. A model for the structure of lane-changing decisions. Transp. Res. Part B Methodol. 1986, 20, 403–414. [Google Scholar] [CrossRef]
  18. Cao, P.; Hu, Y.; Miwa, T.; Wakita, Y.; Morikawa, T.; Liu, X. An optimal mandatory lane change decision model for autonomous vehicles in urban arterials. J. Intell. Transp. Syst. Technol. Plan. Oper. 2017, 21, 271–284. [Google Scholar] [CrossRef]
  19. Cao, P.; Xu, Z.; Fan, Q.; Liu, X. Analysing driving efficiency of mandatory lane change decision for autonomous vehicles. IET Intell. Transp. Syst. 2019, 13, 506–514. [Google Scholar] [CrossRef]
  20. Sun, D.; Elefteriadou, L. A driver behavior-based lane-changing model for urban arterial streets. Transp. Sci. 2014, 48, 184–205. [Google Scholar] [CrossRef]
  21. Ji, A.; Levinson, D. A review of game theory models of lane changing. Transp. A Transp. Sci. 2020, 16, 1628–1647. [Google Scholar] [CrossRef]
  22. Haas, J. A History of the Unity Game Engine—An Interactive Qualifying Project. 2014. Available online: https://web.wpi.edu/Pubs/E-project/Available/E-project-030614-143124/unrestricted/Haas_IQP_Final.pdf (accessed on 10 January 2021).
  23. Behrisch, M.; Bieker, L.; Erdmann, J.; Krajzewicz, D. SUMO–simulation of urban mobility: An overview. In Proceedings of the SIMUL 2011, The Third International Conference on Advances in System Simulation, Barcelona, Spain, 23–29 October 2011. [Google Scholar]
  24. Olaverri-Monreal, C.; Errea-Moreno, J.; Díaz-Álvarez, A. Implementation and Evaluation of a Traffic Light Assistance System Based on V2I Communication in a Simulation Framework. J. Adv. Transp. 2018, 2018. [Google Scholar] [CrossRef]
  25. Olaverri-Monreal, C.; Errea-Moreno, J.; Díaz-Álvarez, A.; Biurrun-Quel, C.; Serrano-Arriezu, L.; Kuba, M. Connection of the SUMO Microscopic Traffic Simulator and the Unity 3D Game Engine to Evaluate V2X Communication-Based Systems. Sensors 2018, 18, 4399. [Google Scholar] [CrossRef]
  26. Biurrun-Quel, C.; Serrano-Arriezu, L.; Olaverri-Monreal, C. Microscopic driver-centric simulator: Linking Unity3d and SUMO. In World Conference on Information Systems and Technologies; Springer: London, UK, 2017; pp. 851–860. [Google Scholar]
  27. Hussein, A.; Diaz-Alvarez, A.; Armingol, J.M.; Olaverri-Monreal, C. 3DCoAutoSim: Simulator for Cooperative ADAS and Automated Vehicles. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 3014–3019. [Google Scholar] [CrossRef]
  28. Artal-Villa, L.; Hussein, A.; Olaverri-Monreal, C. Extension of the 3DCoAutoSim to simulate vehicle and pedestrian interaction based on SUMO and unity 3D. In IEEE Intelligent Vehicles Symposium; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2019; Volume 2019, pp. 885–890. [Google Scholar] [CrossRef]
  29. Hussein, A.; García, F.; Olaverri-Monreal, C. Ros and unity based framework for intelligent vehicles control and simulation. In Proceedings of the 2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES), Madrid, Spain, 12–14 September 2018; pp. 1–6. [Google Scholar]
  30. Liu, Y.; Novotny, G.; Smirnov, N.; Morales-Alvarez, W.; Olaverri-Monreal, C. Mobile Delivery Robots: Mixed Reality-Based Simulation Relying on ROS and Unity 3D. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 15–20. [Google Scholar] [CrossRef]
  31. OpenStreetMap. Available online: https://www.openstreetmap.org/#map=7/47.714/13.349 (accessed on 14 October 2020).
  32. Netconvert-SUMO Documentation. Available online: https://sumo.dlr.de/docs/netconvert.html (accessed on 24 September 2020).
  33. Polyconvert-SUMO Documentation. Available online: https://sumo.dlr.de/docs/polyconvert.html (accessed on 8 October 2020).
  34. Tools/Trip-SUMO Documentation. Available online: https://sumo.dlr.de/docs/Tools/Trip.html (accessed on 7 June 2020).
  35. Validi, A.; Ludwig, T.; Hussein, A.; Olaverri-Monreal, C. Examining the Impact on Road Safety of Different Penetration Rates of Vehicle-to-Vehicle Communication and Adaptive Cruise Control. IEEE Intell. Transp. Syst. Mag. 2018, 10, 24–34. [Google Scholar] [CrossRef]
  36. Geodatenviewer der Stadtvermessung Wien. Available online: https://www.wien.gv.at/ma41datenviewer/public/start.aspx (accessed on 6 November 2020).
  37. Blender.org. Available online: https://www.blender.org (accessed on 28 December 2020).
Figure 1. Illustration of the proposed lane-change scenario and the relevant players. The autonomous vehicle (EGO) (in white) performs a lane change to occupy the position in front of the following vehicle (FV) in green. The leading vehicle (LEAD, in red) is located in front of the FV.
Figure 2. Dynamic noncooperative game description in its extensive-form representation. EGO and FV denote the players; A 1 and A 2 , and B 1 and B 2 denote their respective sets of strategies. Pxy and Qxy denote the outcome of the game, where x is the FV’s selected strategy and y is the strategy selected by EGO.
Figure 3. FV’s safety function represented by safety in the form of scored points and the headway or gap between vehicles in meters.
Figure 4. FV’s space function depending on the cooperation level regarding willingness to let EGO merge (scored points) and the headway or gap between FV and LEAD in meters.
Figure 5. Dependencies between the EGO’s safety payoff function and the position of EGO ( x E G O ). Note that x L E A D is the longitudinal position of LEAD, which determines the distance between FV and LEAD.
Figure 6. Penalty function, with a a = 1.5 m / s 2 , V = 0 m / s , t = 3 s , V a = 5 m / s , w a = 500 , and w d = 500 .
Figure 7. Model validation scenario: 1. The player’s vehicle (FV) approaches the target intersection. 2. The player’s vehicle enters the queue to wait for the traffic light to turn green. EGO is activated to move to the left lane; 3. EGO vehicle is located at the position defined in the proposed decision-making model.
Figure 8. Flow chart describing the validation process.
Figure 9. The 3DCoAutoSim simulation platform during a driving test and a screenshot of the scenario setup and in-vehicle systems.
Figure 10. Trip origin and destination on the generated Simulation of Urban MObility (SUMO) road network to perform the model validation with a section of the generated SUMO simulation with different vehicle types at the destination.
Figure 11. SUMO simulation steps (adapted from [35]).
Figure 12. Dynamic noncooperative game description in its extensive-form representation. EGO and FV denote the players; a 0 and a m denote their respective sets of strategies. Pxy and Qxy denote the outcome of the game, where x is the FV’s selected strategy and y is the strategy selected by EGO.
Table 1. Dynamic noncooperative game description in its normal-form representation. EGO and FV denote the players; A 1 and A 2 , and B 1 and B 2 denote their respective sets of strategies. Pxy and Qxy denote the outcome of the game, where x is the FV’s selected strategy and y is the strategy selected by EGO.
                        EGO vehicle
Action                  A1 (Change lane)    A2 (Do not change lane)
FV  B1 (Accept)         (P11, Q11)          (P12, Q12)
    B2 (Decline)        (P21, Q21)          (P22, Q22)
Table 2. Game description in its normal-form representation. EGO and FV denote the players. a E G O and a F V denote the sets of strategies for the EGO and FV, respectively. P a F V , a E G O and Q a F V , a E G O denote the outcome values of the game with respect to the players.
Action                        EGO vehicle: 0 ≤ a_EGO ≤ F(LEAD)
FV: 0 ≤ a_FV ≤ F(LEAD)        (P_{a_FV, a_EGO}, Q_{a_FV, a_EGO})
Table 3. Input values regarding the driving performance data used to predict cooperation for an EGO merging maneuver. a_a and v_a denote the acquired acceleration and speed values of the Player prior to stopping at the last intersection, a and v are the acceleration and speed values at an instant of time right before being used to calculate the prediction, gap is the existing gap between the Player and LEAD, action denotes the player's decision, and predicted is the model prediction value.
test   a_a (m/s²)   v_a (m/s)   a (m/s²)   v (m/s)   gap (m)   action   predicted
1      0.84         8.61        0.01       0.07      7.07      reject   accept
2      1.48         12.01       0.04       0.95      3.50      reject   reject
3      0.80         6.84        0.00       0.00      7.72      accept   accept
4      1.17         8.98        0.02       0.56      5.31      reject   reject
5      1.34         9.65        0.00       0.00      7.07      accept   accept
6      1.42         11.53       0.02       0.27      2.92      reject   reject
7      1.22         10.62       0.03       0.34      1.36      reject   reject
8      2.11         12.89       0.00       0.21      3.11      reject   reject
9      1.08         10.33       0.02       0.17      5.17      accept   accept
10     1.44         12.03       0.03       0.59      3.25      reject   reject
11     1.03         7.79        0.2        0.27      4.76      reject   reject
12     1.03         8.77        0.00       0.01      4.41      reject   accept
13     1.16         8.41        0.00       0.00      3.79      accept   accept
14     1.80         11.29       0.03       0.48      1.93      reject   reject
15     1.24         10.81       0.00       0.00      7.53      accept   accept
16     2.08         14.28       0.02       1.14      5.21      reject   reject
Table 4. Confusion matrix of the validation test.
n = 16          Predicted: NO   Predicted: YES   Total
Actual: NO      9               2                11
Actual: YES     0               5                5
Total           9               7                16
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Smirnov, N.; Liu, Y.; Validi, A.; Morales-Alvarez, W.; Olaverri-Monreal, C. A Game Theory-Based Approach for Modeling Autonomous Vehicle Behavior in Congested, Urban Lane-Changing Scenarios. Sensors 2021, 21, 1523. https://doi.org/10.3390/s21041523

