Article

Personalized Shared Control for Automated Vehicles Considering Driving Capability and Styles

1
College of Automotive Engineering, the National Key Laboratory of Automotive Chassis Integration and Bionics, Jilin University, Changchun 130025, China
2
College of Intelligence and Computing, Tianjin University, Tianjin 300354, China
3
Automotive Data Center, CATARC, Tianjin 300000, China
*
Author to whom correspondence should be addressed.
Sensors 2024, 24(24), 7904; https://doi.org/10.3390/s24247904
Submission received: 26 October 2024 / Revised: 5 December 2024 / Accepted: 6 December 2024 / Published: 11 December 2024

Abstract:
Shared control has become a key technology framework and trend, given its advantages in overcoming the safety and comfort shortcomings of automated vehicles. Understanding human drivers’ driving capabilities and styles is the key to improving system performance, in particular the acceptance of shared control vehicles by human drivers and the vehicles’ adaptation to them. In this research, personalized shared control considering drivers’ main human factors is proposed. A simulated scenario generation method for human factors was established. Drivers’ driving capabilities were defined and evaluated to improve the rationality of the driving authority allocation. Drivers’ driving styles were analyzed, characterized, and evaluated in a field test for the intention-aware personalized automated subsystem. A personalized shared control framework is proposed based on the driving capabilities and styles, and its evaluation criteria were established, including driving safety, comfort, and workload. The personalized shared control system was evaluated on a human-in-the-loop simulation platform and in a field test based on an automated vehicle. The results show that the proposed system achieves better performance across different driving capabilities, styles, and complex scenarios than driving by human drivers or automated systems alone.

1. Introduction

Automated vehicles have become a dominant trend in improving traffic safety and transportation efficiency, given their advantages in reducing drivers’ driving loads and predicting traffic situations [1,2]. However, some obstacles still exist in enhancing the safety and adaptability of fully automated vehicles. Social dilemmas, represented by the trolley problem and driver complacency, are problems that human society and ethics present to artificial intelligence theory in automated vehicles [3]. The technical limitations of automated vehicles, such as poor perception accuracy and decision-making ability, restrict the transition process as well [4,5]. Therefore, the use of highly automated vehicles with a human in the loop, which is called shared control, is likely to last for a long time and has been attracting increasing attention. Shared control can be defined as a safe, efficient, friendly, and stable driving mode formed by overcoming the decision-making conflict between the driver, with social attributes, and the automated system, with logical attributes. The human–machine cooperation mechanism constitutes the theoretical foundation of the shared control system, and the human factors are required to be evaluated in detail.
As the key technique for overcoming the human–machine conflict in shared control, the driving authority allocation strategy (DAAS) needs to be analyzed in depth. The DAAS can be defined as the weight distribution method between the driver and the automated system, with the aim of achieving a safe, efficient, and stable system configuration [6]. The DAAS consists of a switched type and a shared type, and the latter is subdivided into direct and indirect shared control [7]. With the development of by-wire and intelligent sensing techniques, the basic functionality of the DAAS has been improved, and indirect shared control has developed rapidly by virtue of its framework advantage, signal identification, velocity optimization, and H∞ control theories [8,9]. In recent years, in order to improve system safety, adaptability, and driver acceptability, the DAAS has gradually focused on the cognitive mechanism of human factor attributes, the cognitive capability of complex scenarios, and switching smoothness [10,11]. Drivers’ driving skills are modeled and evaluated based on the optimal driver preview model, the overtaking model, and the accident assessment model, and drivers’ learning processes and levels are revealed, mainly based on typical physical models [12,13]. Studies of drivers’ driving statuses focus on fatigue, sleepiness, distraction, and mood, which are identified accurately based on biological signals, vehicle states, and driving actions such as eye and head movements. Image detection and machine learning theories are used to analyze drivers’ take-over and reaction abilities, and they reveal the inducement and externalization of the driving status [14,15]. Drivers’ driving styles are classified and identified based on physical modeling or a data-driven approach, and the technical details are discussed in the next section.
It can be seen that key human factors are analyzed and evaluated in a decoupled way, and few studies were conducted on the comprehensive representation of driving skills, statuses, and styles. The rationality of the DAAS still needs to be improved due to the lack of comprehensive human factors reflecting drivers’ time-varying abilities for vehicle control.
As the main component of a human-like automated driving pattern, a personalized driving pattern is produced by a driver’s driving style [16]. Understanding the drivers’ driving styles that make shared control more human-like is the key to improving system performance, in particular, driver adaptation and acceptance [17]. Four human-like degrees, namely, none, low, medium, and high levels, were researched, and the medium degree received more attention based on its advantages of high computational efficiency and distinct personalized results [18]. Classification and identification methods were mainly developed to obtain up to six typical driving styles [19,20]. The classification mode of driving styles is the primary task used to reflect the intrinsic mechanism of the human-like driving pattern and is mainly affected by scenarios, modal datasets, and characteristic parameter sets [21]. Using no more than five types of driving styles yields optimal computational efficiency, but such schemes are mainly used for fuel economy rather than drivers’ acceptability [22]. Subjective and objective classification methods were researched individually by online or offline methods; the average accuracy was about 85% for data-driven algorithms and about 80% for questionnaire-based methods [23]. The identification process of driving styles takes the moment-based array or time-series-based high-dimensional data as the model inputs, and the identification method is mainly based on machine learning and system modeling [19,24]. Data-driven methods have advantages in processing human factor data with high-order nonlinearity and uncertainty, whereas the framework of the system model has low-order nonlinearity and certain physical interpretations. Considering the model’s complexity, related methods are mainly conducted offline to obtain analysis results [25].
Furthermore, few studies focused on scenario generation methods specific to human factors, which are the main factors restricting the evaluation accuracy of human factors as well. Therefore, a combination of classification and identification processes for driving styles needs to be developed in depth, and its evaluation method combining online and offline methods and simulated scenarios for the personalized system needs to be established to improve evaluation accuracy.
The decision-making process is required to generate efficient and stable driving tasks in complex scenarios and becomes the core-level component in shared control [26]. The DAAS and driving styles constitute internal constraints and determine the system features together with external ones, such as scenarios or vehicle states [27]. Thus, the system features of the decision-making process consist of self-learning, high-order nonlinearity, high data dependency, and personality, and the key challenge is how to deal with the uncertainties and improve driver acceptance [28]. Two typical methods, physical modeling-based and data-driven, are researched for decision-making logic. The physical modeling-based methods, such as the car-following model, the Pipes and Forbes models, and the overtaking maneuver model, aim to reveal the decision-making mechanisms of the human–vehicle closed-loop system [29]. The model accuracy is improved with the gradual refinement of driving skill sub-models, such as the driver preview model, the unified driver model, or the driver control model [30]. The physical modeling-based decision-making models have clear physical meaning and can realize real-time and robust decision-making tasks. However, these models adapt poorly to scenarios with high uncertainty due to the deterministic internal model structure and the neglect of data support. Data-driven methods, such as multi-criteria, Bayesian network, or deep learning decision-making, are trained and operated based on datasets for the online decision-making of automated vehicles [31,32]. Artificial intelligence and related theories, such as game theory, the Markov decision process (MDP), or deep learning, are used to examine high-order nonlinearity and personality characteristics occurring in the decision-making process based on large-scale datasets [33].
However, few studies combine intention-aware uncertainty and personalized factors into the decision-making process in shared control, which affects the acceptance and adaptation of shared control to a certain extent. As a result, the data-driven decision-making paradigm needs to be developed in depth, with the driver’s personality merged into it.
Motivated by the above-mentioned observations, a personalized shared control framework considering a person’s driving capability and style is proposed. The contributions are as follows.
(1) Drivers’ driving capability is defined and evaluated to improve the rationality of the DAAS. Driving styles combining classification and identification processes are analyzed, characterized, and evaluated in both online and offline ways. The simulated scenario generation method for human factors is established.
(2) The personalized framework of shared control for the automated vehicle is proposed, considering drivers’ driving capabilities and style, and its evaluation criteria are established considering driving safety, comfort, and workload. The intention-aware decision-making logic is proposed based on the mixed observable Markov decision process (MOMDP).

2. Analysis and Evaluation of Drivers’ Driving Capability and Styles

2.1. Simulated Scenario Generation Methods for Human Factors

The system’s simulation and scenario generation method are significant in improving the rationality and accuracy of evaluation methods and in revealing the internal mechanism of human factors, in particular driving capability and styles. In order to stimulate human factors to the maximum extent, the single-source-based simulation and the random vehicle–road field (RVRF)-based scenario generation method are proposed for drivers’ driving capability and styles, respectively.
The single source-based simulation consists of signal features, physical mapping, and signal dimension and can be designed as the velocity function of background vehicles based on the signal periodicity, mutability, and spectral characteristics, such as the aperiodic mutation signal and periodic gradient signal as follows:
v L = A p sin ( ω t + ϕ ) + Γ
where vL is the velocity of the background vehicle, Ap is the velocity amplitude, ω is the signal frequency, φ is the initial velocity phase, and Γ is the velocity offset.
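As a concrete illustration, the periodic velocity function above can be sampled directly; a minimal Python sketch (all parameter values are illustrative, not taken from the experiments):

```python
import math

def background_velocity(t, A_p=2.0, omega=0.5, phi=0.0, gamma=15.0):
    """Background-vehicle velocity v_L = A_p*sin(omega*t + phi) + Gamma.
    Amplitude, frequency, phase, and offset values are illustrative."""
    return A_p * math.sin(omega * t + phi) + gamma

# Sample the profile over 10 s at 10 Hz.
profile = [background_velocity(0.1 * k) for k in range(101)]
```

An aperiodic mutation signal can be produced the same way by switching the offset Γ stepwise at chosen instants.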
Micro-scenarios reflect the physical mechanism and random effects of vehicle–road coupling and reveal the general pattern of microscopic traffic scenarios. Therefore, they are suitable for the evaluation of driving capability, reflecting drivers’ time-varying features. The RVRF can be defined as a strong, random, and steady field formed by vehicle–road coupling in the simulation environment, and the system framework of the RVRF is shown in Figure 1.
The field strength Evef of the RVRF consists of the field strength Ekef of the kinetic energy field composed of moving objects, the field strength Epef of the potential energy field composed of stationary objects, and the field strength Eif of the intention field composed of drivers’ uncertainty. The three components, generated by the transportation participant P at the spatial point (xq, yq), can be expressed as follows:
E v e f = E k e f + E p e f + E i f
E k e f = G R P M P | r P q | k 1 r P q | r P q | e [ k 2 v P , l o n cos ( θ P ) ]
E p e f = G R P M P | r P q | k 1 r P q | r P q |
E i f = G R P M P | r P q | k 1 r P q | r P q | e [ k 2 v P , l o n cos ( θ P ) ] Φ D
where (G, k1, k2) are constants, vP,lon is the longitudinal velocity of P, rPq is the distance vector at (xq, yq), ΦD is the intention factor, MP is the equivalent mass, and RP is the road field index.
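The three field components share the same base term G·RP·MP/|rPq|^k1; a scalar Python sketch (dropping the direction vector rPq/|rPq| for brevity, with all numeric inputs illustrative):

```python
import math

def field_magnitudes(G, k1, k2, M_P, R_P, r, v_lon, theta, Phi_D):
    """Scalar magnitudes of the RVRF components at distance r from
    participant P. The common potential term is scaled by a velocity
    exponential for E_kef and additionally by Phi_D for E_if."""
    e_pef = G * R_P * M_P / (r ** k1)                       # potential field
    e_kef = e_pef * math.exp(k2 * v_lon * math.cos(theta))  # kinetic field
    e_if = e_kef * Phi_D                                    # intention field
    e_vef = e_kef + e_pef + e_if                            # total strength
    return e_kef, e_pef, e_if, e_vef
```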
The RVRF map Ω at the specific moment can be obtained and expressed as follows after slicing the natural scenario data:
Ω = { E S v e f , ζ ( x ζ , y ζ ) , ζ = 1 , , n }
where ESvef,ζ is the field strength of the vehicle–road field at (xζ, yζ). Thus, the process of spatial situation clustering is that of extracting the statistical characteristics of the field map and clustering the field map.
Considering the spatial variation feature of the field strength and its neighborhood correlation, the mean neighborhood two-dimensional histogram is proposed to extract the statistical characteristics of the field strength. The mean field strength of the N × N neighborhood centered at (xζ, yζ) is:
$$G_E(x_\zeta, y_\zeta) = \frac{1}{N \times N} \sum_{i=-\frac{N-1}{2}}^{\frac{N-1}{2}} \sum_{j=-\frac{N-1}{2}}^{\frac{N-1}{2}} ES_{vef,\zeta}(x_\zeta + i, y_\zeta + j), \quad x_\zeta \in [1, Q_x],\ y_\zeta \in [1, Q_y]$$
where GE(xζ, yζ) is the average field strength at (xζ, yζ), and Qx and Qy are field map boundaries. The probability of (ESvef,ζ(xζ, yζ) = Em, GE(xζ, yζ) = En) can be expressed as follows:
$$H_{2D\_mean}(E_m, E_n) = P\big(ES_{vef,\zeta}(x_\zeta, y_\zeta) = E_m,\ G_E(x_\zeta, y_\zeta) = E_n\big) = \frac{N\big(ES_{vef,\zeta}(x_\zeta, y_\zeta) = E_m,\ G_E(x_\zeta, y_\zeta) = E_n\big)}{Q_x \times Q_y}$$
where H2D_mean(Em, En) is the mean neighborhood histogram. The similarity index SI is the sum of the minimum values of H2D_mean, and the spatial situation database DS is as follows, where w is the situation number, and u is the field map number.
D S = { D S , i = Ω j , i = 1 , w , j = 1 , u }
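The neighborhood-mean histogram and the similarity index SI can be sketched in pure Python; the grid size, bin count, and wrap-around boundary handling below are illustrative assumptions:

```python
def mean_neighborhood_histogram(field, N=3, bins=4, e_max=1.0):
    """Joint histogram H_2D_mean of (quantized field strength, quantized
    N-by-N neighborhood mean G_E), normalized by the map size Qx*Qy.
    Boundary cells wrap around, an assumption made for simplicity."""
    Qx, Qy = len(field), len(field[0])
    half = (N - 1) // 2
    hist = [[0.0] * bins for _ in range(bins)]

    def qbin(v):  # quantize a strength value into a histogram bin
        return min(bins - 1, int(v / e_max * bins))

    for x in range(Qx):
        for y in range(Qy):
            mean = sum(field[(x + i) % Qx][(y + j) % Qy]
                       for i in range(-half, half + 1)
                       for j in range(-half, half + 1)) / (N * N)
            hist[qbin(field[x][y])][qbin(mean)] += 1.0 / (Qx * Qy)
    return hist

def similarity(h1, h2):
    """SI: sum of element-wise minima of two normalized histograms."""
    return sum(min(a, b) for r1, r2 in zip(h1, h2) for a, b in zip(r1, r2))
```

Identical maps give SI = 1 and maps whose strengths fall in disjoint bins give SI = 0, which is the basis for clustering field maps into the situation database DS.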
The spatial position of the ego vehicle obeys the probability distribution P(St = (Stx, Sty)), and its set ξ is as follows:
$$\xi = \{\xi_i = (x_{S_{tx}}, y_{S_{ty}}) \sim P(x_{S_{tx}}, y_{S_{ty}}),\ i = 1, \dots, w\}$$
where Stx and Sty are the indices of the ego vehicle position, and ξ is the completely observable set with high density, fragmentation, and strong randomness in the naturalistic scenario data. Therefore, the growing neural gas (GNG) algorithm [34] is proposed to extract the vehicle’s continuous topology in DS, as follows:
G N G = { χ , ε }
where χ and ε are the node set and the boundary set, the submanifold consists of {χ[i], i = 1, …, G}, and its eigenvector is ev[i] = [χ[i]Stx, χ[i]Sty]T ∈ Rg. ξ is taken as the input, and the learning process of χ is that of searching for the minimum offset as follows, where De is the Euclidean distance between St and ev.
$$\chi^{*} = \arg\min_{\chi} \sum_{i=1}^{G} \int D_{e}\big((x_{S_{tx}}, y_{S_{ty}}), ev[i]\big)\, P(S_t)\, dx$$
The velocity of the ego vehicle obeys the specific motion pattern in typical DS, and its spatio-temporal variation features can be generalized by Gaussian process regression (GPR) as follows, where f(X) is the function of the mean value η(X) and the covariance ϑ(X, X′).
y = f ( X ) + l , f ~ G P ( η , ϑ ) , l ~ N ( 0 , σ l 2 )
The generation process of the vehicle motion pattern is that of the position searching and the related velocity mapping MP(x), and MP:(Stx, Sty)T → (vtx, vty)T can be expressed as follows:
v M P ~ M P ( x ) = N ( v ¯ M P , Σ v M P ) , S t T ξ
The RVRF model Hru can be abstracted as the probabilistic model consisting of the mutually exclusive region Re, vehicle–road topology {XRe, εRe}, and the state transition matrix Tm as follows:
$$H_{ru} = \{R_e, \{\chi_{Re}, \varepsilon_{Re}\}, MP_{Re}, T_m\}$$
$$R_e = \{R_{e,i},\ i = 1, \dots, \varpi,\ R_{e,i} \cap R_{e,j} = \varnothing,\ i \ne j\}$$
$$T_m = P(R_{e,i} \mid R_{e,j}, ru)$$
where the subscript ru indicates traffic rules, and the spatial probability distribution P(St+Δt|Re,t = Re[i]) at the (t + Δt) moment can be expressed as follows:
$$P(S_{t+\Delta t} \mid R_{e,t} = R_e[i]) = \sum_{S_t} P(S_{t+\Delta t} \mid S_t, R_e[i])\, P(S_t \mid R_e[i]) = \sum_{S_t} \sum_{v_{MP}} \underbrace{P(S_{t+\Delta t} \mid S_t, v_{MP})\, P(v_{MP} \mid S_t)}_{\text{velocity integral}}\, \underbrace{P(S_t \mid R_e[i])}_{\text{observed value}}$$
The observation set can be estimated by predicting the vehicle’s spatial distribution, and the maximum likelihood function uR,j can be expressed as follows, where Mj is the observed value number corresponding to ψtt[j] = 1.
u R , j [ i ] = M j / M , j { 1 , , ϖ }

2.2. The Mechanism Analysis and Evaluation Method for Driving Capability

In order to improve the rationality of the DAAS, coupling relationships among human factors are analyzed and shown in Figure 2. The comprehensive human factor consisting of the driving style, skill, and status is proposed as the driver’s driving capability, which can be defined as the driver’s maneuverability to the vehicle with time-varying nonlinear dynamic features.
The mechanism analysis and evaluation framework of the driver’s driving capability is established and shown in Figure 3. The characteristics and mechanism of the driving capability are analyzed offline, and its evaluation method is conducted based on the online Gaussian mixture model (GMM) and introduced in detail in the DAAS in Section 3.
The Hammerstein identification process is proposed as the driving capability identification model (DCIM), given that it conforms to the time-varying, high-order nonlinear, and dynamic features of driving capability, as shown in Figure 4. As the set of longitudinal and lateral identification models (LnDCIM and LtDCIM), the DCIM consists of static nonlinear and dynamic linear elements in series.
The static nonlinear and the dynamic linear elements can be expressed as follows:
$$M_H(z^{-1})\, C_p(k) = N_H(z^{-1})\, z^{-d}\, S(k)$$
$$M_H(z^{-1}) = 1 + m_{H,1} z^{-1} + \cdots + m_{H,q} z^{-q}, \qquad N_H(z^{-1}) = n_{H,1} z^{-1} + \cdots + n_{H,n} z^{-n}$$
where Cp(k) consists of the pedal signal Pd(k) and the steering angle signal Sw(k), S(k) is the set {Sln(k), Slt(k)} in the static nonlinear element of DCIM, q and n are orders in the dynamic linear element, and d is the input delay order defined as the integer multiple of the sampling time.
As the key data to reveal the intrinsic attributes of driving capability, the parameters of the DCIM model need to be decoupled to avoid data oversaturation in the regression fitting. The principal component analysis (PCA) method [35] is adopted to decouple and reduce the key parameter dimension in the DCIM. The dataset Pr with ζ-dimension parameters and U-dimension observation variables can be expressed as follows:
P r = [ p r , 11 p r , 12 p r , 1 U p r , 21 p r , 22 p r , 2 U p r , ς 1 p r , ς 2 p r , ς U ] = [ p r , 1 , p r , 2 , , p r , U ]
The contribution rate of the principal components is defined as the ratio of the sum of the eigenvalues of the first ψ principal components to the sum of all eigenvalues, and the cumulative contribution rate Qψ is as follows:
Q ψ = i = 1 ψ λ i / i = 1 U λ i
The ψ corresponding to Qψ ≥ 85% is taken as the independent component number, and the principal component matrix PL with ψ × U dimensions can be obtained. The driving capability needs to be classified as the following model set based on the objective and subjective methods.
Dcap = {Excellent, Strong, Medium, Weak, Poor}
The objective classification method consists of particle clustering and the mapping process from the clustering results to Dcap, and the subjective method is based on scale analysis. The element in the clustering set OP,ru that has the largest intersection with a specific element in the scale analysis set SP,ij is taken as the driving capability element of the same type.
$$CL_P = \{CL_{P,ry},\ r = 1, \dots, 5\}$$
$$CL_{P,ry} = \arg\max_{y}\, |O_{P,ru} \cap S_{P,ij}|, \quad r = 1, \dots, 5$$
The driving capability evaluation equation (DCEE) is as follows:
C L P = P ˜ L ρ
where P ˜ L is the subset of PL, and ρ is the regression coefficient.
$$\tilde{P}_L = \begin{bmatrix} p_{l,11} & p_{l,12} & \cdots & p_{l,1u} \\ p_{l,21} & p_{l,22} & \cdots & p_{l,2u} \\ \vdots & \vdots & & \vdots \\ p_{l,q1} & p_{l,q2} & \cdots & p_{l,qu} \end{bmatrix}^{T}; \quad \rho = \begin{bmatrix} \rho_1 \\ \rho_2 \\ \vdots \\ \rho_m \end{bmatrix}; \quad CL_P = \begin{bmatrix} CL_{P,1} \\ CL_{P,2} \\ \vdots \\ CL_{P,u} \end{bmatrix}; \quad CL_{P,i} \in \{1,2,3,4,5\},\ i = 1, \dots, u;\ u \le U$$
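The component-count rule above (the smallest ψ with Qψ ≥ 85%) reduces to a cumulative sum over sorted eigenvalues; a short sketch with an illustrative eigenvalue list:

```python
def principal_component_count(eigenvalues, threshold=0.85):
    """Smallest psi with Q_psi = sum(lambda_1..lambda_psi)/sum(all
    lambda) >= threshold; eigenvalues are assumed sorted descending."""
    total = sum(eigenvalues)
    running = 0.0
    for psi, lam in enumerate(eigenvalues, start=1):
        running += lam
        if running / total >= threshold:
            return psi
    return len(eigenvalues)

# Illustrative eigenvalues of the DCIM parameter covariance matrix.
psi = principal_component_count([5.0, 2.5, 1.0, 0.5])
```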

2.3. The Characterization and Evaluation Method for Driving Styles

The characterization and evaluation framework of drivers’ driving styles is shown in Figure 5. Accounting for classification applicability and computational complexity, the driving styles can be labeled as the offline database DSty and are proposed to be classified into three types as follows:
DSty = {Steady type, General type, Radical type}
The driving style classification model (DSCM) combines the subjective and objective methods to approximate the truth value of the driving styles. Particle swarm clustering (PSC) is proposed as the objective classification method, and features of drivers’ driving styles are extracted as the physical property and model inputs, which can be expressed as follows:
$$a_{\omega} = \left[ \int_{f_0}^{f_0 + F} PSD(f)\, df \right]^{\frac{1}{2}}$$
where aω is the root mean square of acceleration, f0 and F are the initial frequency and the integrating frequency range, and PSD(f) is the power spectral density of acceleration of the ego vehicle. The driver reaction time Ts represents the driver’s sensitivity to the stimuli of the current situation and can be expressed as follows:
$$T_s = T_0, \quad \text{when } \Delta v_{fir} = \frac{v_{ego}(T_0 + n_p t_{sp}) - v_{ego}(T_0)}{n_p t_{sp}} \ge \Delta v_0$$
where tsp is the sampling period, np is the sampling number, vego is the velocity of the ego vehicle, Δv0 is the velocity threshold, and T0 is the instant that vego achieves Δv0 for the first time. The time headway Tf represents the approaching degree of the ego vehicle relative to the surrounding situation and is as follows:
$$[T_{f,ln} \quad T_{f,lt}] = [D_{avg}/V_{avg} \quad D_{ins}/V_{ins}]$$
where Davg and Dins are the average relative distance and the instant relative distance when the ego vehicle steers, and Vavg and Vins are the corresponding relative velocities.
Each element in DSty can be modeled as the DSCM, which can be designed as the function consisting of the mean and variance of the aω, Ts, and Tf as follows:
D S C M i = ( a ω M i , a ω V i , T s M i , T s V i , T f M i , T f V i ) ~ D s t y , i = 1 , 2 , 3
where variables with subscript M denote means and those with subscript V denote variances. The DSCM consists of the PSC and the mapping process between clustering centers and DSty. The number of clustering centers is 3, and the mapping relation is as follows:
D A g , i = ω 1 C a ω M i C T s M i C T f M i + ω 2 ( C a ω V i C T s V i C T f V i ) , i = 1 , 2 , 3
where [CaωM, CTsM, CTfM, CaωV, CTsV, CTfV] is the clustering center of the DSCM, ω1 and ω2 are the weight coefficients, and Dag is the radicalization factor. The clustering center with the maximum of Dag is that of the radical type, and that with the minimum is that of the steady type.
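The cluster-to-style mapping can be sketched as follows; this uses one plausible reading of the radicalization-factor formula (a ratio form in which larger mean acceleration and shorter reaction time and headway raise Dag; the extracted equation is ambiguous), and the weights are illustrative:

```python
def radicalization_factor(center, w1=0.6, w2=0.4):
    """D_ag for one clustering center
    (C_awM, C_TsM, C_TfM, C_awV, C_TsV, C_TfV). Ratio reading of the
    formula; weights w1, w2 are illustrative assumptions."""
    c_awm, c_tsm, c_tfm, c_awv, c_tsv, c_tfv = center
    return w1 * c_awm / (c_tsm * c_tfm) + w2 * c_awv / (c_tsv * c_tfv)

def label_styles(centers):
    """Map three clustering centers to styles: the maximum D_ag is the
    radical type, the minimum is the steady type, the rest is general."""
    d = [radicalization_factor(c) for c in centers]
    labels = ["general"] * len(centers)
    labels[d.index(max(d))] = "radical"
    labels[d.index(min(d))] = "steady"
    return labels
```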
The questionnaire method [36] is proposed as the subjective classification method and consists of five comfort degrees when respondents are passengers or drivers, respectively. The Cronbach α is used as the scale of confidence to verify the reliability and stability of the results as follows:
α = K K 1 ( 1 i = 1 K σ i 2 σ T o t a l 2 )
where K is the question number, σi and σtotal are standard deviations of the score of the ith question and the total score of all the questions, respectively. The questionnaire for driving styles is shown in Table 1.
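The reliability check can be reproduced with the standard Cronbach’s α formula; a pure-Python sketch using population variances:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a K-question scale. `scores` is a list of
    per-respondent lists of question scores; alpha = K/(K-1) *
    (1 - sum(item variances) / variance of total scores)."""
    K = len(scores[0])
    item_vars = [pvariance([resp[i] for resp in scores]) for i in range(K)]
    total_var = pvariance([sum(resp) for resp in scores])
    return K / (K - 1) * (1 - sum(item_vars) / total_var)
```

Values close to 1 indicate internally consistent questionnaire results; negatively correlated items drive α below zero.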
The traffic situation assessment will be introduced in detail in the intention-aware MOMDP framework in Section 3. Drivers’ operating signals, all states of the ego vehicle, and their relative states to the surrounding traffic are continuous time series and affect the states at the next moment. Therefore, the multi-dimension Gaussian hidden Markov process (MGHMP) with a set of hidden states qt and a corresponding set of κ possible observations is proposed as the driving style evaluation model (DSEM) as follows:
π = { π i }
π i = P ( q 1 = i ) , 1 i N
where π is the initial state distribution, and the state transition probability matrix A is as follows:
a i j = P ( q t + 1 = j | q t = i ) , 1 i , j N
The set of observable sequences O = {Vi, i = 1, 2,…, κ}, where V is the possible observation. The observation probability matrix of the jth state is B = {bj(O)} and bj(O) is as follows:
b j ( O ) = k = 1 M c j k N ( O | μ j k , Σ j k ) , 1 j N
+ b j ( O ) d O = 1 , 1 j N
where cjk is the kth mixed-weight coefficient in the jth state, and M is the Gaussian mixture number. N(O|μjk, Σjk) is the Gaussian probability density function with mean μ and covariance Σ. cjk is as follows:
k = 1 M c j k = 1 , c j k 0 , 1 j N , 1 k M
DSEM can be defined as a tuple λ with N states.
λ = ( π , T r , c , μ , Σ )
Each DSEM is calculated as the logarithmic likelihood as follows, and the maximum likelihood in DSEM is mapped to the corresponding driving style.
L o g l i k ( θ ) = ln [ P ( O | λ ) ]
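The maximum-likelihood style decision can be sketched with a scaled forward pass; this simplified version assumes scalar observations and single-Gaussian emissions (the paper’s DSEM uses multi-dimensional Gaussian mixtures), and all model parameters below are illustrative:

```python
import math

def gauss(x, mu, var):
    """1-D Gaussian density, a single-component stand-in for b_j(O)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def hmm_log_likelihood(obs, pi, A, mu, var):
    """Scaled forward algorithm returning ln P(O | lambda) for an HMM
    with initial distribution pi, transition matrix A, and Gaussian
    emissions (mu[i], var[i]) for each of the N hidden states."""
    N = len(pi)
    alpha = [pi[i] * gauss(obs[0], mu[i], var[i]) for i in range(N)]
    log_lik = 0.0
    for t in range(len(obs)):
        if t > 0:
            alpha = [gauss(obs[t], mu[j], var[j]) *
                     sum(alpha[i] * A[i][j] for i in range(N))
                     for j in range(N)]
        scale = sum(alpha)
        log_lik += math.log(scale)
        alpha = [a / scale for a in alpha]  # normalize to avoid underflow
    return log_lik

def classify_style(obs, models):
    """Pick the style whose model lambda maximizes the log-likelihood."""
    scores = {name: hmm_log_likelihood(obs, *m) for name, m in models.items()}
    return max(scores, key=scores.get)
```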

3. Personalized Shared Control Strategy for the Automated Vehicle

Personalized shared control aims at improving driving safety and comfort and minimizing the driving load, which should be the optimal driving control match between the driver and the autonomous system. The personalized paradigm and its evaluation criteria are established for the proposed shared control system.

3.1. The Framework of the Personalized Shared Control

The strong coupling between the shared control and human–vehicle–scenario system is determined by the system function of the shared control. Therefore, the mechanism analysis of the holonomic system X with the human–controller–vehicle–scenario system is the development infrastructure of shared control, which can be expressed as follows:
X = { E , V , D , M , f ( E , V , D , M ) }
where E is the stimulating system, D is the driver system, V is the vehicle system, M is the shared control system, which consists of the personalized system Hs and driving authority allocation system Das, and f(E, V, D, M) is the coupling function.
D and M comprehend the current driving situation of E and adjust the operation mode applied to V based on their status feedback, forming a cooperative mode by overcoming their inconsistent understanding of E. The framework of the personalized shared control, consisting of the system, strategy, and data configurations, is developed based on the system features of X and large-scale data acquisition and training methods, as shown in Figure 6. The Hs outputs personalized operation signals based on the Dsty type; the operations and statuses from both D and Hs are sent to Das, and the driving authority is obtained based on the DAAS. The closed loop X is formed when the states of V are changed and sent to E. The M outputs the online result of the Dsty type, the real-time result of Dcap, and the MOMDP-based decision-making result, which are based on the offline raw dataset, the offline database, and the online system states.

3.2. The DAAS and Evaluation Criteria for Shared Control

The Das arbitrates the driving authority between D and Hs and outputs the desired control signals to V. The Das can be expressed as follows:
D a s = f { D f , E f ( O E f , T E f ) , Λ ( D f , E f ) , Λ ( D , H s ) }
where Df is the human factor in D, Ef is the situation factor in E, which is affected by the observability factor OEf and the key situation factor TEf, and Λ is the coupling effect function.
The Das in indirect shared control has advantages in overcoming the structural conflict of human–machine operation and improving human comfort, and it is proposed as the DAAS in M as follows:
C = { δ c , P a , c , P b , c } = ϑ C d + ( 1 ϑ ) C m
where the control signal C consists of the steering signal δc, driving signal Pa,c, and braking signal Pb,c; Cd and Cm are the operation signals of D and Hs, respectively; and ϑ is the allocation coefficient, which determines the driving authority. The online evaluation result of driving capability is taken as ϑ, given that it reflects the maneuverability of D over the vehicle. The mapping relation from Dcap = {1, 2, 3, 4, 5} to ϑ ∈ [0, 1] is ϑ = 0.25Dcap − 0.25.
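The allocation step reduces to the linear map for ϑ followed by a component-wise blend of the two control vectors; a minimal sketch (the (steering, accelerator, brake) signal ordering is an assumption):

```python
def authority(d_cap):
    """Allocation coefficient theta = 0.25*D_cap - 0.25, mapping the
    capability grade D_cap in {1,...,5} onto [0, 1]."""
    return 0.25 * d_cap - 0.25

def blend_control(theta, c_driver, c_machine):
    """Indirect shared control C = theta*C_d + (1 - theta)*C_m, applied
    component-wise to (steering, accelerator, brake) signals."""
    return tuple(theta * d + (1 - theta) * m
                 for d, m in zip(c_driver, c_machine))
```

A driver graded Dcap = 5 (ϑ = 1) retains full authority, while Dcap = 1 (ϑ = 0) hands control entirely to the automated subsystem Hs.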
The DAAS is based on the GMM with an offline classification database of driving capability. The GMM consists of multiple single-Gaussian probability distribution models, and their probability density can be described by the Gaussian density function g(x, μ, Σ) as follows:
$$g(x; \mu, \Sigma) = \frac{1}{\sqrt{(2\pi)^d |\Sigma|}} \exp\left[-\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\right]$$
where x is the random vector, μ is the mean vector, Σ is the covariance matrix, and d is the dimension of the random vector. Multidimensional Gaussian probability density functions p(x) need to be superimposed in the GMM for multidimensional state inputs as follows:
$$p(x) = \sum_{j=1}^{M} \alpha_j g_j(\theta_j), \quad \text{where } \sum_{j=1}^{M} \alpha_j = 1$$
where xj, μj, and Σj are the jth random vector, mean vector, and covariance matrix, respectively, and αj is the weight coefficient. The likelihood function of N-dimensional sample XG is as follows:
l ( X G | Θ ) = i = 1 N log j = 1 M α j g j ( θ j )
where θj = (xj, μj, Σj), Θ = (θ1,…, θM). The EM algorithm is proposed to train (α, μ, Σ) and consists of the E and M steps.
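The E and M steps can be sketched for a one-dimensional mixture; scalar observations, two components, and the min/max initialization are simplifying assumptions relative to the multidimensional GMM used in the DAAS:

```python
import math

def em_gmm_1d(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture. The E-step computes
    responsibilities; the M-step re-estimates (alpha_j, mu_j, sigma2_j)."""
    n, k = len(data), 2
    mus = [min(data), max(data)]     # crude two-component initialization
    sig2 = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[j] * math.exp(-(x - mus[j]) ** 2 / (2 * sig2[j]))
                 / math.sqrt(2 * math.pi * sig2[j]) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: update weights, means, and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            sig2[j] = max(1e-6, sum(r[j] * (x - mus[j]) ** 2
                                    for r, x in zip(resp, data)) / nj)
    return w, mus, sig2
```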
Accurate and objective evaluation criteria need to be established to assess the performance of the shared control system. The system performance index of M is as follows:
$$\Xi_i = \varepsilon_1 \eta_{ds} + \varepsilon_2 \eta_{cw} + \varepsilon_3 \eta_{dc}, \quad i = 1, \dots, n, \quad \text{where } \varepsilon_1 + \varepsilon_2 + \varepsilon_3 = 1$$
where Ξ is the composite index of shared control, and the subscript i is the number of E. Ξ consists of the safety index ηds, the driving load index ηcw, and the comfort index ηdc, and ε1, ε2, and ε3 are the corresponding weight coefficients. In order to reveal the system’s performance, the values of ε1 and ε3 should be higher than ε2 and can be set as {ε1, ε2, ε3} = {0.4, 0.2, 0.4}. The ηds, ηcw, and ηdc in the longitudinal and lateral scenarios are as follows:
$$\begin{bmatrix} \eta_{ds} \\ \eta_{cw} \\ \eta_{dc} \end{bmatrix} = \begin{bmatrix} \left( \int_0^T D_{cap,ln}\, dt \right)^{-1} \\ \int_0^T \left[ (P_{ac} + \dot{P}_{ac}) + (P_{br} + \dot{P}_{br}) \right] dt \\ N(a_{\omega M}, h_{a_{\omega M},j}) + N(T_{sM}, h_{T_{sM},j}) + N(T_{fM}, h_{T_{fM},j}) \end{bmatrix}$$
$$\begin{bmatrix} \eta_{ds} \\ \eta_{cw} \\ \eta_{dc} \end{bmatrix} = \begin{bmatrix} \left( \int_0^T D_{cap,lt}\, dt \right)^{-1} \\ \int_0^T (S_w + \dot{S}_w)\, dt \\ N(\ell_{max}, h_{\ell_{max},j}) + N(T_{sM}, h_{T_{sM},j}) + N(T_{fM}, h_{T_{fM},j}) \end{bmatrix}$$
where Dcap,ln and Dcap,lt are the longitudinal and lateral driving capabilities, Pac and Pbr are the positions of the gas and brake pedals, and Sw is the angle of the steering wheel. The variables with the subscript h are the statistics of the driving features in the case that ϑ = 1, and N is the normalization operator, which represents the deviation degree between the statistics and the specified value of the driver. ℓmax is the maximum curvature, and the subscript j = 1, 2, 3 represents three modes: the shared control, human, and automated driving modes.
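Once the three sub-indices are computed, the composite index is a plain weighted sum; a one-line sketch with the weight set from the text:

```python
def composite_index(eta_ds, eta_cw, eta_dc, eps=(0.4, 0.2, 0.4)):
    """Composite shared-control index Xi = e1*eta_ds + e2*eta_cw +
    e3*eta_dc, with safety and comfort weighted above workload."""
    assert abs(sum(eps) - 1.0) < 1e-9, "weights must sum to 1"
    return eps[0] * eta_ds + eps[1] * eta_cw + eps[2] * eta_dc
```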

3.3. The Personalized Decision-Making Logic in the Intention-Aware MOMDP Framework

The personalized decision-making logic is developed based on the MOMDP to improve the acceptance and adaptation of shared control, and an intention-aware assessment method based on the MGHMP is proposed to handle the uncertainty in complex scenarios. Vehicles' motion intentions in micro-traffic scenarios can be regarded as the set of longitudinal and lateral relative intentions in spatial evolution, and their reaction behaviors can be regarded as the system response to the states s0 of the background vehicles and their relative states ds0 to the ego vehicle. The coupling mechanism of motion intentions in micro-traffic scenarios is shown in Figure 7. The driving motion intention model (DMIM) would expand without bound as the number of coupling regions increases. Therefore, the reactive motion intention model (RMIM) is proposed for adjacent background vehicles to improve model universality and keep the model scale controllable.
The motion intention of background vehicles can be expressed as follows:
$$I_m = \{I_R, I_D\} = \{f_R(s_0, ds_0),\, f_D(s_0)\}$$
where IR is the motion intention set based on the RMIM, and ID is that based on the DMIM. The RMIM is the function of states s0 and ds0, and the DMIM is the function of s0 as follows:
$$I_R = \{I_{FA}, I_{HT}, I_{NM}, I_{CI}\}, \quad I_D = \{I_{LE}, I_{RI}, I_{FO}\}$$
where {IFA, IHT, INM, ICI} are the intentions of bearing off, hesitation, maintaining, and approaching, and {ILE, IRI, IFO} are those of turning left, turning right, and keeping straight.
The observation sequences consisting of states s0 and ds0 are continuous time series and affect the states at the next moment. Therefore, the MGHMP, with a set of hidden states qt and a corresponding set of κ possible observations, is proposed as the online traffic situation identification model (TSIM) for both the DMIM and RMIM, as shown in Figure 8. The derivation process of the TSIM is similar to that of the DSEM.
The MOMDP-based personalized decision-making process can be expressed as a tuple as follows:
{S, A, T, Z, O, R, γ}
where S is the state space, A is the action space, T(s′, s, a) = Pr(s′|s, a): S × A × S is the transition function, Z is the observation space, and O(z, s′, a) = Pr(z|s′, a): S × A × Z is the observation function. R(s, a): S × A is the reward function, and γ ∈ [0, 1] is the discount factor. The uncertainty factor of the MOMDP is the motion intention of vehicles, and the belief b ∈ B with the Bayes-rule-based update b′ = τ(b, a, z) is proposed for this uncertainty, where B is the belief set and τ is the belief-updating function, with action a and observation z, as follows:
$$b'(s') = \eta\, O(s', a, z) \sum_{s \in S} T(s, a, s')\, b(s)$$
where η is the normalization coefficient as follows:
$$\eta = \left( \sum_{s' \in S} O(s', a, z) \sum_{s \in S} T(s, a, s')\, b(s) \right)^{-1}$$
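A minimal discrete-state sketch of the Bayes update b′ = τ(b, a, z) over hidden intention states; the tabulated matrices T[a][s][s′] and O[a][s′][z] in the test are hypothetical stand-ins for the learned transition and observation models.

```python
import numpy as np

def belief_update(b, a, z, T, O):
    """b'(s') = eta * O(s', a, z) * sum_s T(s, a, s') b(s), eta normalizing."""
    pred = T[a].T @ b              # predicted belief: sum_s T(s, a, s') b(s)
    unnorm = O[a][:, z] * pred     # weight by observation likelihood
    return unnorm / unnorm.sum()   # apply normalization coefficient eta
```

The returned vector always sums to one, so repeated calls can track the intention belief online as actions are taken and observations arrive.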
The MOMDP-based personalized decision-making process aims at searching for the strategy π* that maximizes R as follows, where π is the mapping strategy with the specified action a = π(b), and b0 is the initial belief.
$$\pi^* = \arg\max_{\pi} \mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^t R(s_t, \pi(b_t)) \,\middle|\, b_0, \pi \right]$$
Abundant information for the decision-making process should be contained in S, given the Markov property, and can be expressed as follows:
$$S = \{X_s, Y_s\}, \quad X_s = [x, y, \theta, V_x, V_y, a_x, a_y, Yaw], \quad Y_s = I_m$$
where [x, y, θ] is the vehicle position and [Vx, Vy, ax, ay, Yaw] are the longitudinal and lateral velocities and accelerations and the yaw velocity, respectively. The current state set s = [sego, st1, st2,…, stN], where sego is the state of the ego vehicle, and the others are those of the background vehicles. Therefore, the mixed observable MDP is established, consisting of the fully observable state set Xs, the inference-based state set Ys, and the action space A as follows:
$$A = \{a_{lon}, a_{lat}\}; \quad a_{lon} = [a_{lon,a}, a_{lon,d}, a_{lon,c}]; \quad a_{lat} = [a_{lat,a}, a_{lat,d}, a_{lat,c}]$$
where [alon, alat] are the longitudinal and lateral actions, and the subscripts [a, d, c] denote acceleration, deceleration, and cruise control, respectively. The observation space z = [zego, zt1, zt2,…, ztN], where zego is the observation of the ego vehicle, and the others are those of the background vehicles. The state transition model T(s′, s, a) = Pr(s′|s, a) predicts the dynamic randomness of the system affected by the ego vehicle and background vehicles given the current sego and aego as follows:
$$\Pr(s' \mid s, a) = \Pr(s'_{ego} \mid s_{ego}, a_{ego}) \prod_{i=1}^{N} \Pr(s'_i \mid s_i)$$
where aego is the current action of the ego vehicle, Pr(s′ego|sego, aego) is the transition probability from sego and aego to those of the next moment as per Equation (61), and Pr(s′i|si) is the transition probability of the background vehicles as per Equation (62).
$$\begin{bmatrix} x' \\ y' \\ V'_x \\ a'_x \\ \theta' \\ V'_y \\ a'_y \\ Yaw' \end{bmatrix} = \begin{bmatrix} x \\ y \\ V_x \\ a_x \\ \theta \\ V_y \\ a_y \\ Yaw \end{bmatrix} + \begin{bmatrix} V_x \Delta t + \frac{1}{2} a_{lon} \Delta t^2 \\ V_y \Delta t + \frac{1}{2} a_{lat} \Delta t^2 \\ a_{lon} \Delta t \\ a_{lon} \\ 0 \\ a_{lat} \Delta t \\ a_{lat} \\ 0 \end{bmatrix}$$
$$\Pr(s'_i \mid s_i) = \sum_{a_i} \Pr(s'_i \mid s_i, a_i)\, \Pr(a_i \mid s_i)$$
where Pr(s′i|si, ai) and Pr(ai|si) are computed as follows:
$$\Pr(s'_i \mid s_i, a_i) = \Pr(x'_i \mid x_i, a_i)\, \Pr(I'_{m,i} \mid x_i, x'_i, I_{m,i}, a_i)$$
where Pr(x′i|xi, ai) can be expressed as Equation (59), and Pr(ai|si) can be expressed as follows, where Im is assumed to be maintained at the current moment and to change with the sampled input data at the next moment.
$$\Pr(a_i \mid s_i) = \sum_{x'_{ego}} \Pr(a_i \mid x'_{ego}, x_i, I_{m,i})\, \Pr(x'_{ego} \mid x_i, I_{m,i})$$
where Pr(x′ego|xi, Im,i) can be obtained based on Equation (61), and Pr(ai|x′ego, xi, Im,i) is computed as follows, based on the deterministic model.
$$a_i = \begin{cases} a_{i,low}, & \text{if } a_{i,comft} < a_{i,low} \\[4pt] \dfrac{a_{i,comft} + a_{i,low}}{2}, & \text{if } a_{i,comft} \geq a_{i,low} \end{cases}$$
where ai,low and ai,comft are the lower-limit and comfort-based values of acceleration, respectively. The observation model describes the data sequence obtained in the data acquisition and measurement process as follows:
$$\Pr(z \mid a, s') = \Pr(z_{ego} \mid s'_{ego}) \prod_{i=1}^{N} \Pr(z_i \mid s'_i), \quad \text{where}\ \Pr(z_{ego} \mid s'_{ego}) \sim \mathcal{N}(z_{ego} \mid x'_{ego}, \Sigma_{z_{ego}})$$
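The transition model above can be sketched in two parts: the constant-acceleration update of the ego state per Equation (61), and the deterministic acceleration choice for background vehicles. The state ordering [x, y, Vx, ax, θ, Vy, ay, Yaw] follows the text; the numeric values in the test are illustrative.

```python
import numpy as np

def propagate_ego(state, a_lon, a_lat, dt):
    """Constant-acceleration update of [x, y, Vx, ax, theta, Vy, ay, Yaw]."""
    x, y, Vx, ax, theta, Vy, ay, yaw = state
    inc = np.array([Vx * dt + 0.5 * a_lon * dt**2,   # longitudinal position
                    Vy * dt + 0.5 * a_lat * dt**2,   # lateral position
                    a_lon * dt,                      # longitudinal velocity
                    a_lon,                           # longitudinal acceleration
                    0.0,                             # heading unchanged
                    a_lat * dt,                      # lateral velocity
                    a_lat,                           # lateral acceleration
                    0.0])                            # yaw rate unchanged
    return state + inc

def background_accel(a_comft, a_low):
    """Deterministic model: clamp to a_low, otherwise take the midpoint."""
    return a_low if a_comft < a_low else 0.5 * (a_comft + a_low)
```

Sampling actions for each background vehicle and propagating all states with these two functions yields one step of the factored transition Pr(s′|s, a).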
The reward function R is required to map to the personalized factors, obey the traffic rules, account for driving safety and comfort, and encourage task completion in the shortest time, as follows:
$$R(s, a) = \mu_1 R_{saf}(s, a) + \mu_2 R_{goal}(s, a) + \mu_3 R_{law}(s, a) + \mu_4 R_{comf}(s, a)$$
where [Rsaf, Rgoal, Rlaw, Rcomf] are the safety, time, traffic rules, and comfort rewards, and [μ1, μ2, μ3, μ4] are the weight coefficients. The policy selection pseudocode is shown in Figure 9.
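As a minimal sketch of the weighted reward R(s, a), the four term functions and the weights μ1..μ4 below are hypothetical placeholders for the paper's personalized mapping, not its actual implementation.

```python
def reward(s, a, terms, mu):
    """R(s,a) = mu1*R_saf + mu2*R_goal + mu3*R_law + mu4*R_comf."""
    return (mu[0] * terms['saf'](s, a) + mu[1] * terms['goal'](s, a)
            + mu[2] * terms['law'](s, a) + mu[3] * terms['comf'](s, a))
```

Personalization then amounts to tuning the weight vector μ (and the term functions themselves) to the identified driving style.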

4. Experiment Platform and Scenario Generation

The evaluation process for personalized shared control aims to verify the shared control performance, the decision-making logic, and the key human factors. The rationality and validity of the personalized shared control for automated vehicles were evaluated on the human-in-the-loop simulation platform and on the field test platform based on an automated vehicle with the related testing scenarios.

4.1. The Real-Time Human-in-the-Loop Simulation Platform

The core of the personalized simulation platform is the driver and the shared controller of the automated system, and the main components of the simulation platform are the by-wire assembly and the driving simulator. The human-in-the-loop co-simulation platform with a real-time concurrent pattern is shown in Figure 10; it consists of the virtual simulation environment, the driving and real-time simulators, and the real-time controller. The shared control runs in real time on a dSPACE MicroAutoBox II (dSPACE GmbH, Paderborn, Germany), and the driver operates a G29 driving simulator in a dynamic virtual simulation environment built in Panosim 8.1. The vehicle dynamics model and traffic scenarios run in the dSPACE simulator, and the real-time controller receives the driver's operation and the system states and outputs the control signals as feedback.

4.2. The Automated Vehicle Platform for Shared Control

The driver and the on-board controller are the key components in the field test. The equipment of the field test platform for shared control consists of the on-board sensors, the on-board controller, the by-wire actuators, and the V2V communication equipment, as shown in Figure 11. The field test platform consists of an ego vehicle and a background vehicle; the RT3000 IMU measures the vehicle states, and the RT-Range provides the V2V communication equipment. The on-board controller, a MicroAutoBox II, runs the shared control in real time. A position sensor on the braking pedal and an angle sensor on the steering wheel acquire the driver's braking and steering signals, respectively. The steering robot, electronic throttle, and iBooster are used as the actuators for the shared control based on their by-wire performance, while the actuator signals are decoupled from those of the driver and the shared control.

4.3. Scenario Generation Results of Stimuli and RVRF for Human Factors

A naturalistic driving scenario database covering about 30,000 km and 66,600 scenario sections was established to provide the training and test sets for the RVRF. A total of 30 drivers, aged 25 to 40 years old with more than 10 years of driving experience and a driving frequency of more than 14 h per week, were selected, as shown in Table 2. Typical driving scenarios were collected based on the road topology, and typical driving behaviors were collected during the large-scale data acquisition process. The Ωs under time slices of forthright sections and their H2D_mean(Em, En) and SI were calculated. Scenario slices with SI > 95% were clustered as the same type, and the clustering results are shown in Table 2.
Three-lane forthright scenarios without side parking are taken as an example for the verification of the RVRF. The ego vehicle enters a low-field-strength region of the vehicle–road space with particle nature and randomness, as shown in Figure 12a. Figure 12b shows the extraction result of the topology structure in the vehicle–road space, consisting of orange nodes for the scenario samples and a black topology structure. The RMS errors of the GNG, which converged well, were also calculated.
Three-fifths of the data were taken as the training set and the remaining two-fifths as the test set to verify accuracy, and the Voronoi-based vehicle–road spatio-temporal states are shown in Figure 12. The node and boundary sets were obtained, and the position and states of the ego vehicle were determined by the RVRF, as shown in Figure 13; the probability of state transition between mutually exclusive regions is represented by an arrow vector colored by probability value. The imitative effect and volatility of the RVRF were compared with the NaSch model, with a cell length of 1.5 m, a sample period of 1 s, and a vehicle length of 4.5 m. The macroscopic traffic flow results show high consistency between the RVRF and NaSch models, as shown in Figure 14. With a specific combination of χ and ε, the RVRF can be applied to different geographical or cultural contexts.
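The NaSch comparison baseline can be reproduced with a minimal Nagel–Schreckenberg cellular automaton on a ring road; the 1.5 m cell length and 1 s update period follow the text, while v_max and the slow-down probability p here are assumed values.

```python
import random

def nasch_step(positions, velocities, road_len, v_max=5, p=0.3, rng=random):
    """One Nagel-Schreckenberg update on a ring road of road_len cells."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_pos, new_vel = list(positions), list(velocities)
    for k, i in enumerate(order):
        j = order[(k + 1) % n]                        # vehicle ahead (ring road)
        gap = (positions[j] - positions[i]) % road_len - 1
        v = min(velocities[i] + 1, v_max)             # rule 1: acceleration
        v = min(v, gap)                               # rule 2: braking (no collision)
        if v > 0 and rng.random() < p:                # rule 3: random slow-down
            v -= 1
        new_vel[i] = v
        new_pos[i] = (positions[i] + v) % road_len    # rule 4: movement
    return new_pos, new_vel
```

Iterating this step over many vehicles and aggregating flow versus density reproduces the macroscopic fundamental diagram used for the consistency comparison with the RVRF.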

5. Experiment Verification and Performance Analysis

5.1. Analysis and Evaluation Results of Driving Capability

The stimuli with typical topology structures generated from the RVRF, based on the scenario occurrence frequencies in the naturalistic driving scenario database, are proposed to reveal the mechanism of driving capability and to analyze its evaluation effectiveness, as shown in Figure 15 and Figure 16. Five drivers aged 25 to 40 years old with more than 10 years of driving experience and a driving frequency of more than 14 h per week were chosen to conduct the continuous cyclic tests, consisting of more than 6 h and 10 groups. A single test in the cyclic tests lasted from 10 s to 60 s; a cyclic test ended when the driver caused four consecutive accidents or was unable to continue driving. The S-type function was taken as the static nonlinear element, with orders q = 3, n = 3, and d = 1 in the dynamic linear element.
The results of the DCIM for driver NO.1 are taken as an example, as shown in Figure 17. Figure 17a is the identification result Aid of the longitudinal driving capability in single test NO.165 of the NO.1 cyclic test. Figure 17b is that of the lateral driving capability in single test NO.219 of the NO.1 cyclic test. Figure 17c,d are the prediction results Apr of single tests NO.166 and NO.223. The average accuracies of both Aid and Apr are above 85%, as shown in Table 3.
The longitudinal ln and lateral lt parameter configurations of the static nonlinear and dynamic linear elements are as follows:
$$H_\gamma = \{SF_\gamma, DY_\gamma\}, \quad \gamma \in \{ln, lt\}$$
where SFγ and DYγ are the parameter dimensions in the S-type function and dynamic linear element, and Hγ is the total dimension. ψ is calculated from the DCIM and is shown in Table 4.
The principal component dimensions of the longitudinal and lateral driving capabilities of the five tested drivers are {14(ln)/21(lt), 16/24, 14/22, 16/23, 15/22}, which are 1/3 to 1/4 of Hγ. The clear and reasonable classification results of the driving capabilities are obtained by combining the subjective and objective classification methods, as shown in Figure 18.
Valid classification sets from six sets of the cyclic test were used as the training set, and those of the other four sets were used to verify the DCEE. As an example, the longitudinal and lateral training sets of driver NO.1 contain 1792 and 2710 samples, and the verification sets contain 1105 and 1623 samples, respectively. The longitudinal and lateral regression results are shown in Figure 19a,b, and the effective regression results based on the DCEE are shown in Figure 20a,b.
It can be seen that both longitudinal and lateral driving capabilities exhibit nonlinear, time-varying, and gradual features, as well as volatility and randomness across several adjacent repeatability tests. These reasonable and effective results are obtained by the evaluation methods combining the DCIM and DCEE.

5.2. Analysis and Evaluation Results for Driving Styles

The single-source stimuli with an ego vehicle and a background vehicle (BV) are proposed for the analysis and evaluation of driving styles, as shown in Figure 21 and Figure 22. Based on the periodicity and mutability of the longitudinal velocity of the BV, the BV drives with preset velocity sequences, and the driver of the ego vehicle is required to maintain a certain level of tension. The scenario of an urban structured road without side parking and 64 drivers (41 males and 23 females) were chosen for data collection.
Five data groups from each driver in a typical stimuli situation were selected, and the 320 groups of data were taken as the classification sample. The objective features of the driving styles in the typical stimuli are shown in Figure 23. Significant differences exist in aωM, TsM, and TfM; the ranges of aωM and TsM are smaller, while that of TfM is bigger. The weight coefficients are ω1 = 10 and ω2 = 10³, and the classification results of the driving styles are shown in Figure 24.
The questionnaire results of the subjective classification method are shown in Table 5. It can be seen that the more radical a driver is, the steadier he tends to perceive others to be, while his passengers tend to perceive him as radical. Sine 3 and Step 3 are proposed as the stimuli for the DSEM, with Ap = 30, ω = 2π/40, φ = 0, and Γ = 45, as well as the velocity sequence 0 → 20 → 50 → 70 → 50 → 20 → 0 km/h. A total of 192 groups of data were selected for model training using the Baum–Welch method [37], and the remaining 128 groups were used for verification. The input sets for the DSEM are shown in Table 6.
The four principal elements of the DSEM, namely M, N, the training period TP, and the identification period IP, needed to be optimized, and the orthogonal experimental method is proposed with table L9(3⁴), where M = {4, 8, 12}, N = {4, 5, 6}, TP = {80, 90, 100}, and IP = {80, 85, 90}. Combining the input sets and principal elements of the DSEM, the optimal identification results are obtained, as shown in Table 7. It can be seen that the optimal input sets are 1 + 3 and 1 + 2 + 3 with the optimal principal element set {M = 12, N = 6, TP = 80, IP = 80}, and the identification accuracy is above 95%.
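The L9(3⁴) design can be sketched by pairing the standard L9 orthogonal array with the level sets above; the row-to-experiment mapping here is an assumption about how the table is applied, not the authors' exact procedure.

```python
# Standard L9 orthogonal array: 9 runs, 4 factors, 3 levels each;
# every pair of columns covers all 9 level combinations exactly once.
L9 = [(1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
      (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
      (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1)]

# Level sets for the four principal elements, as given in the text.
levels = {'M': [4, 8, 12], 'N': [4, 5, 6],
          'TP': [80, 90, 100], 'IP': [80, 85, 90]}

def experiments():
    """Map each L9 row to a concrete {M, N, TP, IP} configuration."""
    keys = ['M', 'N', 'TP', 'IP']
    return [{k: levels[k][row[j] - 1] for j, k in enumerate(keys)}
            for row in L9]
```

Running the nine configurations instead of all 3⁴ = 81 combinations preserves balanced coverage of every factor-pair while cutting the experimental cost by roughly a factor of nine.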
Based on the offline evaluation results, the online evaluation of driving styles was conducted with 50 drivers, and the naturalistic driving scenarios consisted of T-roads, roundabouts, and forthright sections with one to four lanes. The average online identification periods of the driving styles are 1345 s for the steady style, 1034 s for the general style, and 762 s for the radical style, achieving an acceptable identification period with high accuracy.

5.3. Results of Personalized Shared Control for Automated Vehicles

Personalized shared control was verified on the simulation platform and in the field test, respectively. The scenario configurations consisted of a car-following scenario on the forthright and a taking-over scenario on double lanes and involved 18 drivers (9 males and 9 females) aged from 24 to 45 years old with more than 14 driving hours per week, as shown in Figure 25 and Figure 26.
The radical male driver NO.1 and the steady male driver NO.7 are taken as examples in the car-following simulation scenario under strong and weak driving capabilities, as shown in Figure 27 and Figure 28. Under strong driving capability, the car-following effects of the human mode and the automated driving mode differ significantly, while similarities exist among the car-following sequences of the three modes, and the drivers obtained high and stable authority from the DAAS. In contrast, under weak driving capabilities, the radical driver produced frequent velocity fluctuations and even conducted emergency braking, while the steady driver tended to drive away from the BV. In this case, the drivers were allocated lower authority, and the personalized system took a larger share via the DAAS to guarantee driving safety and improve the driver's comfort.
The composite index Ξ of the NO.1 driver and the subindices are shown in Table 8. The shared control mode combines system features from both the human mode and the automated driving mode, and the smallest and optimal Ξ can be obtained under different degrees of driving capability.
The results for the NO.1 and NO.7 drivers in the taking-over simulation scenario under strong and weak driving capabilities are shown in Figure 29 and Figure 30. Results similar to those in the car-following scenario were obtained, except for frequent steering wheel angle fluctuations from the radical driver. Shared control can guarantee driving safety, improve the driver's comfort, and reduce the workload in both longitudinal and lateral scenarios.
The Ξ of the NO.1 driver and the subindices are shown in Table 9. The smallest and optimal Ξs were obtained under different degrees of driving capability, as well.
Tests for the NO.1 and NO.7 drivers in the car-following field test under strong and weak driving capabilities were conducted. The preset velocity sequence of the BV is a step stimulus of 0 → 40 km/h, and results similar to those in the car-following simulation scenarios were obtained. The Ξ of the NO.1 driver and the subindices are shown in Table 10. The smallest and optimal Ξs were obtained, similar to the simulation results.
Tests for the radical male driver NO.1 and the steady male driver NO.7 in the taking-over field test under strong and weak driving capabilities were conducted. The BV drove at a constant speed of 30 km/h, and the initial speed of the ego vehicle was 40 km/h. Results similar to those in the simulation scenario were obtained. Shared control can guarantee driving safety, improve the driver's comfort, and reduce workloads in both longitudinal and lateral scenarios. The Ξ of the NO.1 driver and the subindices are shown in Table 11, which shows the optimal performance of shared control. Furthermore, it is worth noting that although humans make moral decisions in principle, there are individual and cultural differences in moral justice [38], which should be observed during personalized shared control.

6. Conclusions

A personalized shared control considering drivers' driving capabilities and styles is proposed to improve the acceptance and adaptation of shared control by human drivers. From the theoretical and validation analyses, the following conclusions can be drawn:
The simulation scenario generation method for human factors was established. The RVRF theory reveals a wave-particle duality of macro- and micro-traffic flows, and the stimuli and the RVRF were able to elicit driving capabilities and styles to the maximum extent. Drivers' driving capability was defined and evaluated, the average accuracies of both Aid and Apr were above 85%, and its physical properties of nonlinearity, time gradient, randomness, and predictive ability were extracted and analyzed. Drivers' driving styles were analyzed, characterized, and evaluated, and their identification accuracy was higher than 95% within a short identification period.
The MOMDP-based decision-making process shows advantages in dealing with uncertain motion intentions and personalized logic. The personalized shared control system obtained excellent performance on both the human-in-the-loop simulation platform and in the field tests. The proposed system combines the randomness of human factor attributes in a single test, multi-objective optimization, and the personalized characteristics of the driver and the automated driving system. Personalized shared control achieves better performance in driving safety, comfort, and workload across different driving capability degrees and driving style types than driving by human drivers or automated systems alone.
With the advantage of deep mixing decision-making between human and automated systems, personalized shared control will achieve better driver acceptance in future automated driving tasks.

Author Contributions

Conceptualization, B.S.; Data curation, B.S. and S.Z.; Investigation, B.S. and S.Z.; Methodology, B.S.; Resources, G.W.; Supervision, G.W.; Writing—original draft, B.S.; Writing—review and editing, Y.S. and F.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded partly by the National Nature Science Foundation of China under grant 52102457, in part by the Science and technology research project of the education department of Jilin Province under grant JJKH20241266KJ, in part by the Natural Science Foundation of Jilin Province under grant 20220101213JC, in part by the Natural Science Foundation of Sichuan Province under grant 23NSFSC4461, in part by the National Nature Science Foundation of China under grant 52394261, and in part by the Science and Technology Development Project of Jilin Province under grant 202302013.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The authors do not have permission to share data.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Huang, T.; Pan, H.; Sun, W.; Gao, H. Sine resistance network-based motion planning approach for autonomous electric vehicles in dynamic environments. IEEE Trans. Transp. Electrif. 2022, 8, 2862–2873. [Google Scholar] [CrossRef]
  2. Lin, Z.; Yang, J.; Wu, C.; Chen, P. Energy-Efficient Task Offloading for Distributed Edge Computing in Vehicular Networks. IEEE Trans. Veh. Technol. 2024, 73, 14056–14061. [Google Scholar] [CrossRef]
  3. Bonnefon, J.F.; Shariff, A.; Rahwan, I. The social dilemma of autonomous vehicles. Science 2016, 352, 1573–1576. [Google Scholar] [CrossRef] [PubMed]
  4. Hakobyan, G.; Yang, B. High-performance automotive radar: A review of signal processing algorithms and modulation schemes. IEEE Signal Process. Mag. 2019, 36, 32–44. [Google Scholar] [CrossRef]
  5. Karle, P.; Geisslinger, M.; Betz, J.; Lienkamp, M. Scenario understanding and motion prediction for autonomous vehicles—Review and comparison. IEEE Trans. Intell. Transp. Syst. 2022, 23, 16962–16982. [Google Scholar] [CrossRef]
  6. Wu, J.; Zhang, J.; Tian, Y.; Li, L. A novel adaptive steering torque control approach for human–machine cooperation autonomous vehicles. IEEE Trans. Transp. Electrif. 2021, 7, 2516–2529. [Google Scholar] [CrossRef]
  7. Saito, T.; Wada, T.; Sonoda, K. Control authority transfer method for automated-to-manual driving via a shared authority mode. IEEE Trans. Intell. Veh. 2018, 3, 198–207. [Google Scholar] [CrossRef]
  8. Anderson, S.J.; Karumanchi, S.B.; Iagnemma, K.; Walker, J.M. The intelligent copilot: A constraint-based approach to shared-adaptive control of ground vehicles. IEEE Intell. Transp. Syst. Mag. 2013, 5, 45–54. [Google Scholar] [CrossRef]
  9. Soualmi, B.; Sentouh, C.; Popieul, J.C.; Debernard, S. Automation-driver cooperative driving in presence of undetected obstacles. Control Eng. Pract. 2014, 24, 106–119. [Google Scholar] [CrossRef]
  10. Huang, M.; Gao, W.; Wang, Y.; Jiang, Z.P. Data-driven shared steering control of semi-autonomous vehicles. IEEE Trans. Hum.-Mach. Syst. 2019, 49, 350–361. [Google Scholar] [CrossRef]
  11. Ghasemi, A.H.; Jayakumar, P.; Gillespie, R.B. Shared control architectures for vehicle steering. Cogn. Technol. Work 2019, 21, 699–709. [Google Scholar] [CrossRef]
  12. Rahman, M.; Chowdhury, M.; Xie, Y.; He, Y. Review of microscopic lane-changing models and future research opportunities. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1942–1956. [Google Scholar] [CrossRef]
  13. Lee, H.; Kim, H.; Choi, S. Driving skill modeling using neural networks for performance-based haptic assistance. IEEE Trans. Human-Mach. Syst. 2021, 51, 198–210. [Google Scholar] [CrossRef]
  14. Poorna, S.S.; Arsha, V.V.; Aparna, P.T.A.; Gopal, P.; Nair, G.J. Drowsiness detection for safe driving using PCA EEG signals. Progress Comput. Anal. Netw. 2018, 710, 419–428. [Google Scholar]
  15. Yang, C.; Wang, X.; Mao, S. Unsupervised drowsy driving detection with RFID. IEEE Trans. Veh. Technol. 2020, 69, 8151–8163. [Google Scholar] [CrossRef]
  16. Martinez, C.M.; Heucke, M.; Wang, F.Y.; Gao, B.; Cao, D. Driving style recognition for intelligent vehicle control and advanced driver assistance: A survey. IEEE Trans. Intell. Transp. Syst. 2018, 19, 666–676. [Google Scholar] [CrossRef]
  17. Suzdaleva, E.; Nagy, I. An online estimation of driving style using data-dependent pointer model. Transp. Res. Part C Emerg. Technol. 2018, 86, 23–36. [Google Scholar] [CrossRef]
  18. Wang, W.; Xi, J.; Chen, H. Modeling and recognizing driver behavior based on driving data: A survey. Math. Probl. Eng. 2014, 2014, 245641. [Google Scholar] [CrossRef]
  19. Li, G.; Zhu, F.; Qu, X.; Cheng, B.; Li, S.; Green, P. Driving style classification based on driving operational pictures. IEEE Access 2019, 7, 90180–90189. [Google Scholar] [CrossRef]
  20. Meiring, G.; Myburgh, H. A review of intelligent driving style analysis systems and related artificial intelligence algorithms. Sensors 2015, 15, 30653–30682. [Google Scholar] [CrossRef]
  21. Huang, J.; Chen, Y.; Peng, X.; Hu, L.; Cao, D. Study on the driving style adaptive vehicle longitudinal control strategy. IEEE/CAA J. Autom. Sin. 2020, 7, 1107–1115. [Google Scholar] [CrossRef]
  22. Gilman, E.; Keskinarkaus, A.; Tamminen, S.; Pirttikangas, S.; Röning, J.; Riekki, J. Personalised assistance for fuel-efficient driving. Transp. Res. Part C Emerg. Technol. 2015, 58, 681–705. [Google Scholar] [CrossRef]
  23. Chen, K.T.; Chen, H.Y.W. Driving style clustering using naturalistic driving data. Transp. Res. Rec. 2019, 2673, 176–188. [Google Scholar] [CrossRef]
  24. Mudgal, A.; Hallmark, S.; Carriquiry, A.; Gkritza, K. Driving behavior at a roundabout: A hierarchical Bayesian regression analysis. Transp. Res. Part D Transp. Environ. 2014, 26, 20–26. [Google Scholar] [CrossRef]
  25. Qi, G.; Du, Y.; Wu, J.; Hounsell, N.; Jia, Y. What is the appropriate temporal distance range for driving style analysis? IEEE Trans. Intell. Transp. Syst. 2016, 17, 1393–1403. [Google Scholar] [CrossRef]
  26. Markkula, G.; Romano, R.; Madigan, R.; Fox, C.W.; Giles, O.T.; Merat, N. Models of human decision-making as tools for estimating and optimizing impacts of vehicle automation. Transp. Res. Rec. 2018, 2672, 153–163. [Google Scholar] [CrossRef]
  27. Li, M. Shared control with a novel dynamic authority allocation strategy based on game theory and driving safety field. Mech. Syst. Signal Process. 2019, 124, 199–216. [Google Scholar] [CrossRef]
  28. Huang, C.; Naghdy, F.; Du, H.; Huang, H. Shared control of highly automated vehicles using steer-by-wire systems. IEEE/CAA J. Autom. Sin. 2019, 6, 410–423. [Google Scholar] [CrossRef]
  29. Wang, Y.; Zhu, X. A robust design of hybrid fuzzy controller with fuzzy decision tree for autonomous intelligent parking system. In Proceedings of the 2014 American Control Conference, Portland, OR, USA, 4–6 June 2014; pp. 5282–5287. [Google Scholar]
  30. Liu, Y.; Ozguner, U. Human driver model and driver decision making for intersection driving. In Proceedings of the 2007 IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, 13–15 June 2007; pp. 642–647. [Google Scholar]
  31. Gao, H.; Shi, G.; Wang, K.; Xie, G.; Liu, Y. Research on decision-making of autonomous vehicle following based on reinforcement learning method. IR 2019, 46, 444–452. [Google Scholar] [CrossRef]
  32. Wang, Y.; Wang, C.; Zhao, W.; Xu, C. Decision-making and planning method for autonomous vehicles based on motivation and risk assessment. IEEE Trans. Veh. Technol. 2021, 70, 107–120. [Google Scholar] [CrossRef]
  33. He, X.; Yang, H.; Hu, Z.; Lv, C. Robust lane change decision making for autonomous vehicles: An observation adversarial reinforcement learning approach. IEEE Trans. Intell. Veh. 2022, 8, 184–193. [Google Scholar] [CrossRef]
  34. Heinke, D.; Hamker, F.H. Comparing neural networks: A benchmark on growing neural gas, growing cell structures, and fuzzy ARTMAP. IEEE Trans. Neural Netw. 1998, 9, 1279–1291. [Google Scholar] [CrossRef] [PubMed]
  35. Rutledge, D.N. Comparison of principal components analysis, independent components analysis and common components analysis. J. Anal. Test. 2018, 2, 235–248. [Google Scholar] [CrossRef]
  36. Özkan, T.; Lajunen, T. What causes the differences in driving between young men and women? The effects of gender roles and sex on young drivers’ driving behaviour and self-assessment of skills. Transp. Res. Part F Traffic Psychol. Behav. 2006, 9, 269–277. [Google Scholar] [CrossRef]
  37. Lewis, M.E.; Puterman, M.L. A probabilistic analysis of bias optimality in unichain Markov decision processes. IEEE Trans. Autom. Control 2001, 46, 96–100. [Google Scholar] [CrossRef]
  38. Awad, E.; Dsouza, S.; Kim, R.; Schulz, J.; Henrich, J.; Shariff, A.; Bonnefon, J.F.; Rahwan, I. The moral machine experiment. Nature 2018, 563, 59–64. [Google Scholar] [CrossRef]
Figure 1. The system framework of RVRF.
Figure 1. The system framework of RVRF.
Sensors 24 07904 g001
Figure 2. Coupling relationships among human factors.
Figure 2. Coupling relationships among human factors.
Sensors 24 07904 g002
Figure 3. The analysis and evaluation framework of driver’s driving capability.
Figure 3. The analysis and evaluation framework of driver’s driving capability.
Sensors 24 07904 g003
Figure 4. The Hammerstein identification process based on the offline identification model of driving capability.
Figure 5. The characterization and evaluation framework of drivers’ driving styles.
Figure 6. The framework of the personalized shared control.
Figure 7. The coupling mechanism of motion intentions in micro-traffic scenarios.
Figure 8. The framework of Im = {IR, ID} = {fR(s0, ds0), fD(s0)} for the online TSIM.
Figure 9. The policy selection pseudocode.
Figure 10. Human-in-the-loop real-time co-simulation platform.
Figure 11. Field test platform for shared control.
Figure 12. Topology structure results.
Figure 13. The result of the vehicle–road spatio-temporal states in the virtual micro RVRF.
Figure 14. Comparison of traffic flow fluctuation results in the virtual macro RVRF.
Figure 15. Configuration of longitudinal stimuli.
Figure 16. Configuration of lateral stimuli.
Figure 17. The identification and prediction results of typical longitudinal and lateral driving capabilities. (a) Aid of the longitudinal driving capability. (b) Aid of the lateral driving capability. (c) Apr in single test No. 166. (d) Apr in single test No. 223.
Figure 18. Classification results of the longitudinal and lateral driving capabilities.
Figure 19. Regression results of typical longitudinal and lateral driving capabilities. The longitudinal and lateral prediction results of the 8th cyclic test of driver No. 1 are shown in (a) and (b), respectively.
Figure 20. Fitting results of the driving capability evaluation equation. (a) The longitudinal effective regression results. (b) The lateral effective regression results.
Figure 21. The configuration of the longitudinal stimuli.
Figure 22. The configuration of the lateral stimuli.
Figure 23. Extraction results of driving style features in typical longitudinal stimuli.
Figure 24. Classification results of driving styles in typical longitudinal stimuli.
Figure 25. Configuration of the car-following forthright scenario.
Figure 26. The configuration of the taking-over scenario in double lanes.
Figure 27. Simulation results of three modes corresponding to strong driving capability in the forthright.
Figure 28. Simulation results of three modes corresponding to weak driving capability in the forthright.
Figure 29. Simulation results of three modes corresponding to strong driving capability in double lanes.
Figure 30. Simulation results of three modes corresponding to weak driving capability in double lanes.
Table 1. Questionnaire for driving styles.
Questionnaire Content | 1 Point | 2 Points | 3 Points | 4 Points | 5 Points
Feeling toward the driver when riding as the passenger | Very soft | Soft | Comfortable | Relatively radical | Radical
Feeling toward the passenger when acting as the driver | Very soft | Soft | Comfortable | Relatively radical | Radical
Table 2. Clustering results of vehicle–road space situation.
Type | Main Case | Subcase | Results
Forthright | Lane number | (1) Side parking; (2) Sparseness degree of the surrounding traffic | 182
Curve road | Curve and lanes | — | 519
T-road | Angle and lanes | — | 198
Intersection | Traffic lights | — | 341
Roundabout | Curvature and driving actions | — | 447
Table 3. Identification and prediction accuracies of DCIM.
Driver No. | Average Aid | Average Apr
1 | 93.26%/95.92% | 91.31%/95.16%
2 | 91.47%/96.77% | 88.44%/95.93%
3 | 89.23%/94.81% | 85.93%/93.44%
4 | 90.37%/96.42% | 87.62%/96.21%
5 | 89.22%/93.01% | 86.94%/92.89%
Total | 90.710%/95.386% | 88.048%/94.726%
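The "Total" row of Table 3 is the arithmetic mean of the five drivers' per-column accuracies. The sketch below reproduces it; the variable names are ours, and the slash-separated pairs in each cell are simply treated as two numeric columns, as printed in the table:

```python
# Verify that the "Total" row of Table 3 is the per-column mean of the five
# drivers' Aid (identification) and Apr (prediction) accuracies.
# Each entry is the (left, right) pair of the slash-separated cell in the table.
aid = [(93.26, 95.92), (91.47, 96.77), (89.23, 94.81), (90.37, 96.42), (89.22, 93.01)]
apr = [(91.31, 95.16), (88.44, 95.93), (85.93, 93.44), (87.62, 96.21), (86.94, 92.89)]

def column_means(rows):
    """Mean of each of the two columns, rounded to 3 decimals as in the table."""
    n = len(rows)
    return tuple(round(sum(r[i] for r in rows) / n, 3) for i in range(2))

print(column_means(aid))  # (90.71, 95.386)  -> "90.710%/95.386%"
print(column_means(apr))  # (88.048, 94.726) -> "88.048%/94.726%"
```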
Table 4. Parameter dimensions of DCIM.
Dimensions | γ = ln | γ = lt
SFγ | 29 | 48
DYγ | 20 | 30
Total | 49 | 78
Table 5. Results of the questionnaire classification method.
Driving Styles | Mean (Q1/Q2) | Variance (Q1/Q2)
Radical type | 2.04/4.35 | 0.23/0.18
General type | 3.32/3.53 | 0.39/0.19
Steady type | 4.12/1.58 | 0.07/0.12
Total | 3.44/3.34 | 0.29/0.17
The total variance is 0.79, and Cronbach's α = 0.835 > 0.6.
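The reliability value quoted under Table 5 can be reproduced from the tabulated variances: with k = 2 questionnaire items, the sum of the item variances in the "Total" row (0.29 + 0.17 = 0.46) and the total-score variance of 0.79 give Cronbach's α ≈ 0.835 under the standard formula. A minimal check (variable names are ours):

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / total variance).
# Values taken from Table 5: item variances from the "Total" row (Q1, Q2),
# total-score variance from the note below the table.
k = 2                      # two questionnaire items (Q1, Q2)
item_vars = [0.29, 0.17]   # per-item variances
total_var = 0.79           # variance of the summed score

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))  # 0.835, matching the reported Cronbach's alpha
```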
Table 6. Input sets for the DSEM.
No. | Input Set | States
1 | Driver operation set | Pac, Pma
2 | State set of the ego vehicle | Vx, ax
3 | Relative states | DVln, DAln
Table 7. Identification accuracies for typical stimuli.
Stimuli | Input Set(s) to DSEM (%)
— | 1 | 2 | 1+2 | 1+3 | 2+3 | 1+2+3
Sine | 92.2 | 85.9 | 93.8 | 99.2 | 96.9 | 98.4
Step | 93.0 | 91.4 | 92.2 | 100 | 96.1 | 99.2
The input-set numbers refer to Table 6.
Table 8. Results of Ξ and the normalized subindices (car-following in a single lane).
Modes | ηds | ηcw | ηdc | Ξ
Human | 0.54 | 0.83 | 0.31 | 0.506
Automated driving | 0.16 | 0.52 | 0.36 | 0.312
Shared control | 0.18 | 0.49 | 0.32 | 0.298
Table 9. Results of Ξ and the normalized subindices (taking over in double lanes).
Modes | ηds | ηcw | ηdc | Ξ
Human | 0.53 | 0.85 | 0.32 | 0.510
Automated driving | 0.18 | 0.46 | 0.43 | 0.336
Shared control | 0.20 | 0.47 | 0.35 | 0.314
Table 10. Results of Ξ and the normalized subindices (car-following in a single lane).
Modes | ηds | ηcw | ηdc | Ξ
Human | 0.63 | 0.77 | 0.28 | 0.518
Automated driving | 0.15 | 0.39 | 0.36 | 0.282
Shared control | 0.17 | 0.36 | 0.29 | 0.256
Table 11. Results of Ξ and the normalized subindices (taking over in double lanes).
Modes | ηds | ηcw | ηdc | Ξ
Human | 0.66 | 0.75 | 0.24 | 0.510
Automated driving | 0.21 | 0.31 | 0.32 | 0.274
Shared control | 0.24 | 0.29 | 0.25 | 0.254
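Every row of Tables 8–11 is consistent with a fixed weighted sum of the three normalized subindices, Ξ = 0.4·ηds + 0.2·ηcw + 0.4·ηdc. This excerpt does not state the weights; the 0.4/0.2/0.4 split is our reconstruction, obtained by solving the tabulated rows, so treat it as inferred rather than given. A quick consistency check:

```python
# Check that every row of Tables 8-11 satisfies
#   Xi = 0.4*eta_ds + 0.2*eta_cw + 0.4*eta_dc
# The weights are inferred from the tables, not stated in the text.
W = (0.4, 0.2, 0.4)

rows = [  # (eta_ds, eta_cw, eta_dc, Xi) for each mode/scenario
    (0.54, 0.83, 0.31, 0.506), (0.16, 0.52, 0.36, 0.312), (0.18, 0.49, 0.32, 0.298),  # Table 8
    (0.53, 0.85, 0.32, 0.510), (0.18, 0.46, 0.43, 0.336), (0.20, 0.47, 0.35, 0.314),  # Table 9
    (0.63, 0.77, 0.28, 0.518), (0.15, 0.39, 0.36, 0.282), (0.17, 0.36, 0.29, 0.256),  # Table 10
    (0.66, 0.75, 0.24, 0.510), (0.21, 0.31, 0.32, 0.274), (0.24, 0.29, 0.25, 0.254),  # Table 11
]

for ds, cw, dc, xi in rows:
    predicted = W[0] * ds + W[1] * cw + W[2] * dc
    assert abs(predicted - xi) < 5e-4, (ds, cw, dc, xi, predicted)
print("all rows consistent with weights", W)
```

Since a lower Ξ is reported as better here, the 0.4/0.2/0.4 split would imply driving safety (ηds) and driving capability/workload-related subindices carry twice the weight of comfort (ηcw) in the composite score.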
Sun, B.; Shan, Y.; Wu, G.; Zhao, S.; Xie, F. Personalized Shared Control for Automated Vehicles Considering Driving Capability and Styles. Sensors 2024, 24, 7904. https://doi.org/10.3390/s24247904