Article

Prediction of Driver’s Intention of Lane Change by Augmenting Sensor Information Using Machine Learning Techniques

1 Hyundai Motor Company, Hwaseong-si 18280, Korea
2 Department of Mechanical Engineering, Korea University, Seoul 02841, Korea
3 Department of Control and Instrumentation Engineering, Korea University, Sejong 30019, Korea
* Authors to whom correspondence should be addressed.
Sensors 2017, 17(6), 1350; https://doi.org/10.3390/s17061350
Submission received: 26 April 2017 / Revised: 1 June 2017 / Accepted: 7 June 2017 / Published: 10 June 2017
(This article belongs to the Special Issue Sensors for Transportation)

Abstract:
Driver assistance systems have become a major safety feature of modern passenger vehicles. The advanced driver assistance system (ADAS) is one of the active safety systems to improve the vehicle control performance and, thus, the safety of the driver and the passengers. To use the ADAS for lane change control, rapid and correct detection of the driver’s intention is essential. This study proposes a novel preprocessing algorithm for the ADAS to improve the accuracy in classifying the driver’s intention for lane change by augmenting basic measurements from conventional on-board sensors. The information on the vehicle states and the road surface condition is augmented by using artificial neural network (ANN) models, and the augmented information is fed to a support vector machine (SVM) to detect the driver’s intention with high accuracy. The feasibility of the developed algorithm was tested through driving simulator experiments. The results show that the classification accuracy for the driver’s intention can be improved by providing an SVM model with sufficient driving information augmented by using ANN models of vehicle dynamics.

1. Introduction

As the number of vehicles increases worldwide, the traffic situation becomes increasingly complicated in terms of safety. The automotive industry has been developing various safety technologies, and driver assistance systems, such as headway distance control, automatic braking and evasive steering, have become major features of a vehicle for the safety of the driver and passengers. As an active safety system, the advanced driver assistance system (ADAS) has been developed to assist the driver for improved safety and better vehicle control. The ADAS, equipped with advanced sensors and intelligent video systems, is designed to alert the driver to potential traffic hazards or to take over control of the vehicle to avoid impending collisions and accidents. The ADAS is activated when the predetermined conditions for the driver’s operation and the state of the vehicle are met. In a conventional ADAS, a threshold is set for the driver’s control input, such as the steering wheel angle, the steering wheel angular velocity, or the pedal position. If the driver’s control input is greater than the predetermined threshold, the ADAS is activated. There can, however, be conflicting situations where the intervention of the ADAS interferes with the driver’s intended operation. Correct prediction of the driver’s intention is therefore essential to determine whether the ADAS should engage to override the driver’s control inputs [1].
For most of the driving time, the driver is required to maneuver the steering wheel, and the lane change maneuver is one of the main causes of road traffic accidents [2]. It was reported that the percentage of fatal accidents related to lane change increased from 18% in 2005 to 23.6% in 2014, while the total number of fatal crashes in the U.S. gradually decreased from 50,000 to 38,000 during the same period [3]. ADAS technologies, such as Lane Support Systems, the Lane Keeping Assistance System (LKAS) and the Lane Departure Warning System (LDWS), enable automated lane control. The lane change control of the ADAS is based on the driver’s control input and the surrounding traffic situation. With the current ADAS technologies, however, there are possibilities of an unwanted lane change against the driver’s intention, which may lead to a situation that endangers the safety of the driver’s vehicle and the surrounding vehicles.
To alleviate the risk of misjudging the driver’s intention, many studies have attempted to incorporate machine learning techniques to identify the driver’s intention for lane change control with the ADAS [4,5,6,7,8,9,10]. Machine learning has proven its utility in the estimation, classification and prediction of system behaviors. For the identification of the driver’s intention, in particular, many researchers have investigated classification techniques, such as the Hidden Markov Model (HMM), the Support Vector Machine (SVM), and the Bayesian network. Kuge et al. (2000) [4] developed an HMM-based steering behavior model for emergency lane change, normal lane change, and lane keeping. They reported that the classification accuracy of the model was higher than 98.3%. Jin et al. (2011) [5] developed an algorithm for lane change recognition using the steering wheel angle and the angular velocity as input data to an HMM model. With their method, the classification accuracies for lane change left (LCL), lane change right (LCR) and lane keeping (LK) were 84%, 88% and 94%, respectively. Tran et al. (2015) [6] investigated the performance of an HMM-based system with two different sets of inputs: one with only the driver’s control input (steering wheel angle and gas and brake pedal positions) and the other with both the driver’s control input and the vehicle states (velocity, acceleration and yaw rate). It was confirmed that with the driver’s control input and the vehicle states, the HMM model shows far superior performance in terms of classification time and accuracy. Mandalia and Salvucci (2005) [7] experimentally compared the overlapping window method with the non-overlapping window method. The accuracy of the overlapping method was about 1.2% higher than that of the non-overlapping method. Aoude et al. (2011) [8] compared SVM- and HMM-based methods in the classification of law-abiding and violating drivers.
They reported that the SVM-based method has higher accuracies than the HMM-based method in most cases. Kumar et al. (2013) [9] proposed a machine learning algorithm that combines an SVM and a Bayesian filter. The relevance vector machine (RVM) is an SVM-based Bayesian inference model for probabilistic classification. Morris and Doshi (2011) [10] introduced an RVM model that is capable of classifying the driver’s intention within 3 s before an actual lane change happens. Liu et al. (2010) [11] employed the parallel Bayesian network (PBN) to identify the driver’s lane change behavior. They reported that the PBN model can reduce the response time and error rate. Schubert et al. (2010) [12] developed a classification technique for lane change maneuvers by using camera vision and a radar sensor. In other studies, the Bayesian network was employed to classify the driver’s intention [13,14,15,16,17].
To classify the driver’s intention at a high level of accuracy, abundant information on the vehicle states should be provided to the machine learning algorithms. The studies mentioned above employed rather expensive sensors to measure various vehicle states, such as the lateral velocity, the heading angle, the side slip angle and the lateral position, to identify the driver’s intention for lane change. Those sensors, however, are impractical for use in commercial passenger vehicles. Recently, many commercial vehicles have been equipped with on-board sensors that provide basic measurements, such as the steering wheel angle, the yaw rate, the longitudinal and lateral accelerations, and the wheel speed, at an affordable cost [18]. While the on-board sensors alone cannot provide the ADAS algorithm with sufficient information on the vehicle states, the vehicle states other than the direct measurements may be estimated from the measured data by using machine learning techniques. Along with the vehicle states, the road condition, such as the friction coefficient of the road surface, is an important factor in classifying the driver’s intention for lane change, and it can also be estimated from the vehicle states measured by the on-board sensors [19,20].
While complex vehicle dynamics models with nonlinear differential equations can be used in augmenting the information on the vehicle states and the road surface condition, real-time computation of numerical integration requires great computation power [21]. For fast real-time computation, purely numerical models with better computational efficiency, such as artificial neural network (ANN), were suggested rather than physical mathematical models of vehicle dynamics [22,23]. An ANN model of vehicle dynamics is suitable for real-time information augmentation, since it only requires summation and product operations of matrices, rather than time-consuming numerical integration of nonlinear differential equations.
In this study, we propose a novel preprocessing algorithm as a practical solution to improving the accuracy of the ADAS in determining the driver’s intention for lane change by augmenting basic measurements from conventional on-board sensors. The inputs to the algorithm include the measured data from the on-board sensors and the augmented vehicle states along with the road surface condition estimated from the measured data. The vehicle states and the road surface condition are estimated by using ANN models that simulate the nonlinear dynamics of the vehicle and the interaction between the tires and the road. The ANN models trained with the data obtained from a driving simulator provide augmented information on the vehicle states and the road condition based on the limited information from the on-board sensors. The augmented information from the ANN models, along with the direct sensor measurements, is then fed to an SVM model to classify the driver’s intention for lane change. The effectiveness of the proposed algorithm was verified through driving simulator experiments, and the experimental results show that the classification accuracy for the driver’s intention can be improved by providing an SVM model with sufficient driving information augmented by using ANN models of vehicle dynamics and vehicle-road interaction.
This paper is organized as follows: Section 2 illustrates the preprocessing algorithm for the ADAS developed in this study. Section 3 describes the driving simulator experiments to evaluate the performance of the proposed algorithm, and Section 4 presents the experimental results. Finally, Section 5 contains conclusions and future work directions.

2. Classification of Driver’s Intention for Lane Change Using Augmented Sensor Information

This section describes the preprocessing algorithm for the ADAS, which detects the driver’s intention for lane change based on the augmented information on the road surface condition and the vehicle states. Figure 1 illustrates the schematic diagram of the algorithm. The augmented information is estimated from the basic measurements acquired from the on-board sensors commonly equipped on commercial passenger vehicles. The on-board sensors provide the basic measurements of the vehicle states and the driver’s control inputs: the measured vehicle states include the longitudinal and lateral accelerations, the yaw rate, and the wheel speed, while the measured driver’s control inputs include the steering wheel angle and the throttle position. Based on the sensor measurements, the algorithm estimates the road surface condition (non-slippery or slippery). The estimated condition of the road surface, along with the sensor measurements, is then used to estimate the vehicle states that cannot be measured by the on-board sensors. The vehicle states augmented by estimation include the lateral velocity, the side slip angle, the lateral tire force, the roll rate, the suspension spring compression, and the heading direction (Figure 2). The augmented information on the vehicle states is then provided to the algorithm to classify the driver’s intention for lane change. The identified driver’s intention is used to determine whether to activate the ADAS and override the driver’s control inputs for lane change.
The preprocessing algorithm for the ADAS consists of three main modules as illustrated in Figure 1: the road condition classification module, the vehicle state estimation module and the driver intention detection module. The road condition classification module determines whether the road surface condition is non-slippery or slippery by using an ANN-based pattern recognition technique. The vehicle state estimation module augments the vehicle states by using an ANN model representation of vehicle dynamics. The driver intention detection module identifies the driver’s intention for lane change by using an SVM model with the augmented information as inputs.

2.1. ANN Models for Road Condition Classification and Vehicle State Estimation

ANN-based models are used for the road condition classification module and the vehicle state estimation module of the preprocessing algorithm for the ADAS. An artificial neural network (ANN) is a computational learning approach inspired by how biological neural networks learn from experience. Since an ANN can effectively solve nonlinear problems of vehicle dynamics, an ANN model is a pertinent solution for augmenting sensor information that is otherwise insufficient to determine the driver’s intention for lane change.
The basic structure of the three-layered ANN is illustrated in Figure 3, where the network consists of an input layer, a hidden layer, and an output layer. Each node in the hidden layer and the output layer has an activation function, which defines the output of that node given its input. The type of the activation function can be chosen appropriately for the purpose of the network. In the learning phase, the weighted connections between the nodes of the network are adjusted.
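As an illustrative sketch (not the authors’ implementation), the forward pass of such a three-layered network is just two rounds of weighted sums with activations; here `tanh` stands in for the bipolar sigmoid used in the modules below, and the weights and biases are placeholders that would come from training:

```python
import math

def forward(x, W1, b1, W2, b2):
    """Forward pass of a three-layered ANN.

    x: input vector; W1, b1: hidden-layer weights and biases;
    W2, b2: output-layer weights and biases.
    """
    # Hidden layer: weighted sum followed by a bipolar sigmoid (tanh) activation.
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # Output layer: weighted sum with a linear activation.
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]
```

Because the pass involves only products and sums, it is cheap enough for the real-time use emphasized in Section 1.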

2.1.1. Road Condition Classification Module

Table 1 summarizes the friction coefficients of four road conditions: dry asphalt, gravel, wet, and snowy [24,25]. In this study, dry asphalt and gravel are grouped as the non-slippery road condition, and wet and snowy are grouped as the slippery road condition. The road condition classification module classifies the road surface conditions into these two classes: non-slippery and slippery.
The road condition classification module is designed to activate when the throttle signal is detected, since the identification of the road friction coefficient is easier during acceleration or deceleration than during constant-speed driving. The signals from the on-board sensors from the time the driver steps on the accelerator pedal to the time the driver releases it are used to determine the road condition.
The road condition classification module has the structure of the three-layered ANN with the softmax activation function in the output layer. The three-layered model was employed based on the guideline suggested by Panchal et al. (2011) [26]. The input, hidden, and output layers have six, thirty, and two nodes, respectively. At each output node, the softmax activation function yields the probability of the class represented by that node. For the nodes in the hidden layer, the bipolar sigmoid is used as the activation function. In the training phase of the module, the performance index defined by the cross-entropy is minimized, and in the testing phase, the class labels are determined by applying one-hot encoding to the output probability values. The detailed architecture of the neural network is shown in Figure 4. In the input layer, the six nodes receive the signals from the on-board sensors (the longitudinal and lateral accelerations, the yaw rate, the wheel speed, the steering wheel angle and the throttle position).
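The softmax output and the one-hot class decision described above can be sketched as follows (a minimal illustration with hypothetical logit values, not the trained module itself):

```python
import math

def softmax(z):
    # Subtract the maximum logit for numerical stability before exponentiating.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def classify(probs, labels=("non-slippery", "slippery")):
    # One-hot decision: the class with the highest probability wins.
    return labels[probs.index(max(probs))]
```

With two output nodes, the softmax reduces to a logistic choice between the non-slippery and slippery classes, and the probability values themselves serve as the confidence levels reported in Section 4.1.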

2.1.2. Vehicle State Estimation Module

The vehicle state estimation module estimates the vehicle states based on the data from the on-board sensors and the road condition classification module. This module uses the NARX (nonlinear autoregressive with exogenous input) neural network, which is a type of recurrent neural network particularly useful for time series analysis.
The NARX model is known to be an effective tool for time series prediction compared with other feedforward ANN models, since it relates the current value of a time series to the past values of the time series and the exogenous inputs [27,28].
The mathematical form of the NARX model is given as follows:
$$\hat{y}(k) = f\big[\,y(k-1),\ y(k-2);\ u_1(k-1),\ u_1(k-2),\ u_1(k-3);\ u_2(k-1),\ u_2(k-2),\ u_2(k-3);\ \ldots;\ u_5(k-1),\ u_5(k-2),\ u_5(k-3)\,\big]$$
where $u_i(k)$ ($i = 1, \ldots, 5$) and $\hat{y}(k)$ denote the inputs and the output of the model at discrete time step $k$, respectively. The filter orders for the inputs and the output are $d_u = 3$ and $d_y = 2$, respectively. $f(\cdot)$ is a nonlinear function with universal approximation capability. The nonlinear function $f(\cdot)$ of the NARX model plays an important role in modeling the nonlinear relations among the vehicle states.
There are two modes for the NARX model: the series-parallel (SP) mode and the parallel (P) mode. SP mode is mainly used for single-step or short-term prediction, since the measured values from the previous steps are inserted as the input vector for the prediction at the next step. P mode has a feedback loop structure in which the estimated output values are included in the input vector of the network, and its performance is better than that of SP mode in multi-step or mid-to-long-term prediction tasks [29,30,31]. Combinations of the two modes can be used for training and testing of the neural network [31,32,33,34,35]. For the vehicle state estimation module, P mode was used for both training and testing, since the module is designed to carry out long-term prediction.
The structure of the NARX neural network in P mode is shown in Figure 5. The NARX neural network has a multi-input single-output (MISO) structure. To yield the six vehicle states (the lateral velocity, the side slip angle, the lateral tire force, the roll rate, the suspension spring compression, and the heading direction), six separate NARX models are required. Each NARX model consists of an input layer, a hidden layer, and an output layer with 17, 10, and 1 nodes, respectively. In the hidden layer, the bipolar sigmoid function is used as the activation function, and in the output layer, a linear activation function is employed. Five measurements from the on-board sensors (the longitudinal and lateral accelerations, the yaw rate, the wheel speed, and the steering wheel angle) and the fed-back estimated output are used as inputs to the nodes in the input layer. For each of the five measurements, the values at the three most recent time steps are used ($d_u = 3$), and for the fed-back estimate, the values at one and two steps before the present time are used ($d_y = 2$). These form the total of 17 inputs ($5 \times 3 + 2$). For the hidden layer, 10 nodes were chosen after testing between five and 20 nodes. The output layer has one node that yields one of the six vehicle states.
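A minimal sketch of how the 17-element P-mode regressor can be assembled and rolled forward, with `f` standing in for the trained network (the 5-channel, d_u = 3, d_y = 2 layout is taken from the text; the function itself is a placeholder):

```python
def narx_step(f, u_hist, y_hist, du=3, dy=2):
    """One P-mode NARX prediction step.

    u_hist: past exogenous input vectors (most recent last);
    y_hist: past *estimated* outputs, fed back as inputs (P mode).
    """
    x = []
    # du past samples of each exogenous channel (5 channels -> 15 values).
    for channel in zip(*u_hist[-du:]):
        x.extend(channel)
    # dy fed-back output estimates (2 values), for 17 regressors in total.
    x.extend(y_hist[-dy:])
    return f(x)

def simulate(f, u_series, y_init, du=3, dy=2):
    # Multi-step P-mode prediction: each estimate is appended and reused,
    # which is what distinguishes P mode from SP mode.
    y_hist = list(y_init)
    for k in range(du, len(u_series)):
        y_hist.append(narx_step(f, u_series[:k], y_hist, du, dy))
    return y_hist
```

In SP mode, `y_hist` would instead hold measured outputs; the feedback of estimates shown here is what enables the long-term prediction the module requires.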

2.2. Driver Intention Detection Module

The driver intention detection module classifies the driver’s intention for lane change based on the augmented information from the on-board sensors and the vehicle state estimation module. This module employs a support vector machine (SVM) model for classification. SVM classification is known to have good generalization ability. For example, binary classification using an SVM finds the optimal hyperplane that maximizes the separation margin between two classes. When dealing with non-separable data, the SVM utilizes the feature map $\varphi$ to transform a low-dimensional input space into a feature space of a higher dimension where linear classification is more feasible, as illustrated in Figure 6.
The problem of finding the optimal hyperplane can be formulated as a constrained optimization problem by introducing the slack variables $\xi_i$ of the so-called soft-margin method, as follows:
$$\begin{aligned}
\text{Minimize} \quad & J(\omega, \xi) = \frac{1}{2}\|\omega\|^2 + C\sum_{i=1}^{N}\xi_i \\
\text{Subject to} \quad & t_i\left(\omega^{T}\Phi(x_i) + b\right) \ge 1 - \xi_i, \quad i = 1, \ldots, N \\
& \xi_i \ge 0, \quad i = 1, \ldots, N
\end{aligned}$$
By introducing the Lagrange multipliers and applying the Karush–Kuhn–Tucker (KKT) conditions, Equation (7) becomes the following Lagrange dual problem:
$$\begin{aligned}
\text{Maximize} \quad & \tilde{L}(\alpha) = \sum_{i=1}^{N}\alpha_i - \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i \alpha_j t_i t_j K(x_i, x_j) \\
\text{Subject to} \quad & \sum_{i=1}^{N}\alpha_i t_i = 0, \quad 0 \le \alpha_i \le C, \quad i = 1, \ldots, N
\end{aligned}$$
where $K(x_i, x_j)$ is the kernel function defined by the inner product of the two feature vectors $\varphi(x_i)$ and $\varphi(x_j)$. The dual problem of Equation (8) can be solved by utilizing the KKT conditions to yield the optimal values of $\alpha_i$ and $b$. With the acquired constants $\alpha_i$ and $b$, the decision function is defined as follows:
$$f(x) = \operatorname{sgn}\left(\sum_{i=1}^{N}\alpha_i t_i K(x, x_i) + b\right)$$
With the input $x$, the decision function yields a binary value, either positive or negative, to classify the two classes. This can be extended to multi-class classification, e.g., by the one-against-all method.
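The decision function can be sketched directly from the kernel expansion; the Gaussian kernel below is one of the kernels used later in this section, while the support vectors, multipliers, targets and bias are placeholders that would come from training:

```python
import math

def gaussian_kernel(x, z, gamma=1.0):
    # Gaussian (RBF) kernel: K(x, z) = exp(-gamma * ||x - z||^2).
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def svm_decide(x, support_vectors, alphas, targets, b, kernel=gaussian_kernel):
    # Sign of the kernel expansion over the support vectors.
    s = sum(a * t * kernel(x, sv)
            for a, t, sv in zip(alphas, targets, support_vectors)) + b
    return 1 if s >= 0 else -1
```

A one-against-all extension would train one such decision function per class (LCL, LCR, LK) and pick the class whose expansion gives the largest value.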
Figure 7 shows the schematic diagram of the driver intention detection module. The 11 inputs to the driver intention detection module include five measurements from the on-board sensors and six vehicle states from the vehicle state estimation module. For signal extraction, overlapping sliding windows are applied to the 11 input signals. It was reported that the overlapping sliding window has better classification ability than the non-overlapping sliding window [7]. The window size and the window slide size used in the module are 0.5 s and 0.2 s, respectively.
For feature extraction, the average and the variance of the windowed signals are calculated, and the principal component analysis (PCA) is performed on the windowed signals. The feature sets obtained from feature extraction are fed to the pre-trained SVM to classify the driver’s intention for lane change into three classes: lane change left (LCL), lane change right (LCR), and lane keeping (LK).
This is a multi-class classification problem with three classes. The one-against-all method is employed for the multi-class SVM, and K-fold cross validation, which is known to be an effective method for dealing with a small set of data, is used for the performance evaluation of the driver intention detection module. For the training kernels, the quadratic and Gaussian kernels are used for the non-slippery and slippery road conditions, respectively.
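The overlapping sliding windows and the per-window statistics described above can be sketched as follows; the window and slide sizes are expressed in samples (for instance, the 0.5 s window and 0.2 s slide would correspond to 25 and 10 samples at an assumed 50 Hz rate):

```python
def sliding_windows(signal, size, step):
    # Overlapping windows: a step smaller than the window size makes
    # consecutive windows share samples.
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def window_features(window):
    # Mean and variance of one window, computed before PCA in the module.
    n = len(window)
    mean = sum(window) / n
    var = sum((v - mean) ** 2 for v in window) / n
    return mean, var
```

Applying these two functions to each of the 11 input signals yields the per-window feature vectors that, after PCA, are fed to the SVM.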

3. Driving Simulation Experiments

The preprocessing algorithm for the ADAS described in Section 2 was implemented in a PC-based driving simulator (Figure 8). The driving simulator is controlled by PreScan software ver. 7.3 (TASS International, Rijswijk, The Netherlands), CarSim software (Mechanical Simulation Corporation, Ann Arbor, MI, USA), and Simulink (MathWorks, Natick, MA, USA). PreScan is used as a physics-based simulation platform, and CarSim is used for simulation of vehicle dynamics. The vehicle used in the simulator was front-wheel drive. The data from the driving simulator can be collected at the rate of 500 Hz, which is the maximum sampling frequency with PreScan and CarSim software.
A human subject (27-year-old male with six years of driving experience) was instructed to perform three maneuvers of lane change control: lane change left (LCL), lane change right (LCR) and lane keeping (LK). The subject was asked to drive within the speed range of 30–80 km/h on a one-way road with three lanes, each 3.5 m wide. The lanes were separated by cones placed at intervals of 50 m.
The three modules of the algorithm were trained using the data collected from the driving simulator. The driving simulator was also used to test the performance of the trained modules.

3.1. Training of Road Condition Classification Module

To train the ANN model of the road condition classification module, the driving simulation was performed under four different road surface conditions (dry asphalt, gravel, wet, and snowy) as listed in Table 1. The four road conditions are grouped into two classes (non-slippery for dry asphalt and gravel; slippery for wet and snowy). The module was trained by using a total of 120,000 data sets, with 60,000 for the non-slippery road condition and 60,000 for the slippery road condition.

3.2. Training of Vehicle State Estimation Module

The vehicle state estimation module was trained under two different road conditions (non-slippery and slippery). For each road condition, six NARX models yield six vehicle states. Thus, the vehicle state estimation module is composed of 12 sub-modules.
To train each sub-module, the input data from the on-board sensors (five measurements) and the target data from the driving simulator (six vehicle states) were used with the Levenberg–Marquardt back-propagation algorithm. It should be noted that all 12 sub-modules have the same structure and input data sets, while the target data sets are all different.
For the training of the module, we used 80,000 data sets, with 40,000 for the non-slippery road condition and 40,000 for the slippery road condition, which were sampled at 500 Hz. For the two sub-modules that estimate the heading direction in the non-slippery and slippery road conditions, the on-board sensor data, down-sampled at 50 Hz, were used for training, and the total number of training data sets was 3000 (1500 for the non-slippery road and 1500 for the slippery road).

3.3. Training of Driver Intention Detection Module

The driver intention detection module was trained using the input data from the on-board sensors and the vehicle state estimation module, and the target data of the driver’s intention of lane change obtained from a questionnaire completed by the human subject performing the driving simulation. From the windowed signals, the features (the average, the variance, and the principal components) were extracted to be used as the inputs to the SVM module. The SVM module was trained to classify the three intentions of the driver, labeled LCL, LCR and LK, by using 581 data sets for the non-slippery road and 550 data sets for the slippery road.
The classification performance of the SVM can be improved by selecting the optimal combination of the input signals, rather than using all the available input data [36,37]. We tested six combinations of input signals to the SVM, as listed in Table 2, and compared the classification abilities of the combinations. In the table, the vehicle states estimated by ANN are marked by boldface.

4. Experimental Results

The performance of the three modules of the preprocessing algorithm for the ADAS was tested through driving simulation experiments. The three modules were tested with driving simulation data different from the data used to train the modules. This section presents the experimental results of the road condition classification, the vehicle state estimation and the driver intention detection modules.

4.1. Classification of Road Condition

The road condition classification module determines the road surface condition based on the on-board sensor signals during the time when the throttle is on. Figure 9 shows the throttle position and the road surface condition estimated while the throttle is on. As shown in Figure 9, the module correctly classified the road condition (slippery) with a high level of confidence (100%, 62.7%, and 100%). Table 3 lists the results from 52 test trials. Out of the 52 trials, there were 51 correct classifications with only one misclassification (highlighted in grey in Table 3). By using the measurements from the on-board sensors, the module can identify the road surface condition with a fairly high accuracy of 98%.

4.2. Estimation of Vehicle State Parameters

The vehicle states estimated by the trained vehicle state estimation module are compared with those from the on-board sensors of the driving simulator under four different road surface conditions in Figures 10–15. As shown in these figures, the estimated lateral velocity, side slip angle, lateral tire force, roll rate, suspension spring compression and heading direction, represented by dotted lines, are close to those from the on-board sensors of the driving simulator, represented by solid lines.
The errors between the estimated vehicle states and the measured vehicle states were evaluated by the root mean square error (RMSE) and the normalized mean square error (NMSE) given as follows [38]:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}$$
$$\mathrm{NMSE} = \frac{\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2}$$
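Both error measures translate directly into code; a brief sketch following the definitions above:

```python
import math

def rmse(y, y_hat):
    # Root mean square error between measured y and estimated y_hat.
    n = len(y)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, y_hat)) / n)

def nmse(y, y_hat):
    # Squared error normalized by the total variation of the measured
    # signal around its mean, so a value near 0 means a close fit.
    y_bar = sum(y) / len(y)
    num = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    den = sum((a - y_bar) ** 2 for a in y)
    return num / den
```

Unlike the RMSE, the NMSE is dimensionless, which is why it is used below to compare errors across vehicle states with different units.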
Table 4 lists the RMSE and NMSE under the four different road surface conditions. The errors were evaluated from the driving data collected for a duration of 100 s. The results show that the NMSE values range from the order of $10^{-3}$ to $10^{-1}$ for the six vehicle states under the four road conditions. Thus, the vehicle state estimation module represents the nonlinear vehicle dynamics with a high level of accuracy.

4.3. Detection of Driver Intention

The driver’s intention estimated by the trained SVM was compared with the driver’s true intention for lane change. The SVM was trained with six feature sets with six different combinations of the input signals (Table 2). Table 5 compares the accuracy rates with the six feature sets under four road surface conditions.
The results show that the feature set with the yaw rate, the longitudinal acceleration, the steering wheel angle, the lateral velocity, the roll rate and the heading direction (Set 6) achieves the highest detection accuracy under all road surface conditions. It should be noted that its accuracy is better than that of the feature set with all the available input signals (Set 4) and that of the feature set with the on-board sensor measurements only (Set 1). Sets 4–6, which include the heading direction, show much higher accuracy rates than Sets 1–3, which do not. Thus, the heading direction is an input signal of major importance for identifying the driver’s intention for lane change.
Figure 16 shows typical lane change maneuvers on dry asphalt and the driver’s intentions classified by the driver intention detection module using the optimal feature set (Set 6). Figure 16a plots the steering wheel angle during lane change maneuvers. Figure 16b compares the driver’s true intention (solid line) and the detected intention by the module (marked by o). In Figure 16, LCR, LK, and LCL are labeled as −1, 0, and 1, respectively.
As can be seen in Figure 16, the module correctly detected the driver’s intention for lane change, while there are time delays before correct detections. The delay is mainly attributed to the update rate of detection, which is dependent on the window slide size (0.2 s). The module requires a time longer than 0.2 s to determine the driver’s intention from sufficient information on the pattern of lane change maneuver. In addition, it appears that this time delay mainly contributes to the errors listed in Table 5.
Table 6 lists the average time to correctly detect the driver’s intention from the onset of the driver’s lane change maneuver under the four road surface conditions. The average time delays for LCL and LCR range from 0.4 to 0.45 s, while those for LK range from 0.146 to 0.222 s. These results demonstrate that the trained driver intention detection module can identify the driver’s intention both accurately and quickly.
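A per-trial delay of this kind can be computed by comparing the true and detected intention traces; the snippet below is a hedged sketch on invented toy traces, with the −1/0/1 labels taken from Figure 16 and the 0.2 s update interval from the paper, while the specific trial values are made up for illustration.

```python
import numpy as np

def detection_delay(t, truth, pred, label):
    """Time between the first true occurrence of `label` (LCR = -1, LK = 0,
    LCL = 1) and the first detection update that also reports it."""
    onset = t[np.argmax(truth == label)]
    later = t[(pred == label) & (t >= onset)]
    return float(later[0] - onset) if later.size else float("inf")

# Toy trial: detection updates every 0.2 s; the true intention switches
# from LK to LCL at t = 2.0 s, and the classifier reacts two updates late.
t = np.arange(0.0, 6.0, 0.2)
truth = np.where(t < 2.0, 0, 1)
pred = np.where(t < 2.4, 0, 1)

delay_lcl = detection_delay(t, truth, pred, 1)   # about 0.4 s

# Averaging such per-trial delays per maneuver (and per road surface)
# yields a summary in the spirit of Table 6; the numbers here are invented.
trial_delays = {"LCL": [0.4, 0.5, 0.45], "LK": [0.1, 0.2, 0.15]}
avg = {k: float(np.mean(v)) for k, v in trial_delays.items()}
```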

5. Conclusions

In this study, we propose a novel method to classify the driver’s intention for lane change based on measured and estimated information on the driver’s control inputs, the vehicle states, and the road condition. Using machine learning-based estimation techniques, the road surface condition and additional vehicle states are derived from the data measured by the basic on-board sensors with which recent passenger vehicles are commonly equipped.
For the classification of the driver’s intention, an SVM-based model is employed. As inputs to the SVM model, the road surface condition and the extra vehicle states are estimated by ANN-based models trained a priori on dynamics simulation data from a driving simulator. The augmented information estimated by the ANN models (the friction coefficient of the road surface, the lateral velocity, the side slip angle, the lateral tire force, the roll rate, the spring compression, and the heading direction) is essential for capturing the dynamic situation of the driving vehicle.
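The role of the NARX-style state estimators can be sketched with a linear ARX model fitted by least squares. This is a deliberately simplified, linear stand-in for the paper’s NARX neural networks, and the first-order “lateral dynamics” below are an invented toy system, not the simulator model; the point is only the structure, in which past outputs and past exogenous inputs (here, steering) predict the next value of an unmeasured state.

```python
import numpy as np

def fit_arx(u, y, nu=2, ny=2):
    """Least-squares ARX model y[k] = f(y[k-ny..k-1], u[k-nu..k-1]), a linear
    stand-in for a NARX neural network estimator."""
    rows, targets = [], []
    for k in range(max(nu, ny), len(y)):
        rows.append(np.concatenate([y[k - ny:k], u[k - nu:k], [1.0]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return theta

def simulate_arx(theta, u, y0, nu=2, ny=2):
    """Free-running simulation: predicted outputs are fed back as regressors."""
    y = list(y0)
    for k in range(len(y0), len(u)):
        x = np.concatenate([y[k - ny:k], u[k - nu:k], [1.0]])
        y.append(float(x @ theta))
    return np.asarray(y)

# Toy first-order "lateral dynamics": v[k] = 0.9 v[k-1] + 0.1 steer[k-1]
rng = np.random.default_rng(0)
steer = rng.standard_normal(300)
v = np.zeros(300)
for k in range(1, 300):
    v[k] = 0.9 * v[k - 1] + 0.1 * steer[k - 1]

theta = fit_arx(steer, v)
v_hat = simulate_arx(theta, steer, v[:2])
rmse = float(np.sqrt(np.mean((v_hat - v) ** 2)))   # near zero for this toy system
```

In the paper’s pipeline, the estimated states produced this way (lateral velocity, roll rate, heading, and so on) are concatenated with the direct sensor measurements to form the SVM feature vector.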
The effectiveness of the proposed method was tested using driving simulation data. The results demonstrate that the driver’s intention for lane change can be detected more accurately using both measured and estimated data than using measured data alone. The simulation results also show that the classification accuracy is highest with the yaw rate, the lateral acceleration, the steering wheel angle, the lateral velocity, the roll rate, and the heading direction as inputs to the SVM module, rather than with all the available measured and estimated information as inputs. Among the estimated vehicle states, the heading direction, the lateral velocity, and the roll rate appear to play important roles in improving the classification accuracy of the SVM model. The classification accuracy with the augmented information was higher than 90% under every road surface condition.
The proposed method can serve as a preprocessing algorithm for the ADAS by detecting the driver’s intention for lane change accurately and efficiently. The developed algorithm allows expensive sensors to be replaced with estimation algorithms based on economical on-board sensors. Owing to its computational efficiency relative to numerical integration, the ANN-based vehicle dynamics model can be more effective than complex differential equation-based approaches. The proposed method is also applicable to the analysis of the driver’s driving pattern; based on such analysis, the ADAS may be adaptively activated according to the driver’s driving pattern.
In future work, we plan to implement the developed algorithm on actual vehicle systems with multiple human subjects with different characteristics. We will also investigate other advanced machine learning algorithms, such as long short-term memory (LSTM) networks, for different types of driver intentions. Another important issue in driving safety is human factors: it has been reported that about 90% of vehicle accidents are caused by human error [39]. We therefore also plan to study algorithms that differentiate the driver’s true intention from the driver’s erroneous maneuvers.

Acknowledgments

This work was supported by the Agency for Defense Development (ADD) under the contract UD140073ID. The work of Il-Hwan Kim was supported by Hyundai Motor Company.

Author Contributions

Il-Hwan Kim, Jooyoung Park, and Shinsuk Park conceived and designed the system. Il-Hwan Kim and Jae-Hwan Bong performed the experiments and analyzed the experimental data. Finally, Il-Hwan Kim wrote the paper with the help of Jooyoung Park and Shinsuk Park.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hou, H.; Jin, L.; Niu, Q.; Sun, Y.; Lu, M. Driver intention recognition method using continuous hidden Markov model. Int. J. Comput. Intell. Syst. 2011, 4, 386–393. [Google Scholar] [CrossRef]
  2. Tomar, R.S.; Verma, S.; Tomar, G.S. Prediction of lane change trajectories through neural network. In Proceedings of the 2010 International Conference on Computational Intelligence and Communication Networks, Bhopal, India, 26–28 November 2010; pp. 249–253. [Google Scholar]
  3. FARS Encyclopedia. Vehicles Involved in Single- and Two-Vehicle Fatal Crashes by Vehicle Maneuver. National Highway Traffic Safety Administration. Available online: http://www-fars.nhtsa.dot.gov/Vehicles/VehiclesAllVehicles.aspx (accessed on 15 September 2016).
  4. Kuge, N.; Yamamura, T.; Shimoyama, O.; Liu, A. A Driver Behavior Recognition Method Based on a Driver Model Framework; Delphi Automotive Systems: Gillingham, UK, March 2000. [Google Scholar]
  5. Jin, L.S.; Hou, H.J.; Jiang, Y.Y. Driver intention recognition based on continuous hidden Markov model. In Proceedings of the International Conference on Transportation, Mechanical, and Electrical Engineering (TMEE), Changchun, China, 16–18 December 2011. [Google Scholar]
  6. Tran, D.; Sheng, W.; Liu, L.; Liu, M. A Hidden Markov Model based driver intention prediction system. In Proceedings of the 2015 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), Shenyang, China, 8–12 June 2015; pp. 115–120. [Google Scholar]
  7. Mandalia, H.M.; Salvucci, D.D. Using support vector machines for lane-change detection. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Los Angeles, CA, USA, 26–30 October 2015; Volume 49, pp. 1965–1969. [Google Scholar]
  8. Aoude, G.S.; Desaraju, V.R.; Stephens, L.H.; How, J.P. Behavior classification algorithms at intersections and validation using naturalistic data. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 601–606. [Google Scholar]
  9. Kumar, P.; Perrollaz, M.; Lefevre, S.; Laugier, C. Learning-based approach for online lane change intention prediction. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Gold Coast City, Australia, 23–26 June 2013; pp. 797–802. [Google Scholar]
  10. Morris, B.; Doshi, A.; Trivedi, M. Lane change intent prediction for driver assistance: On-road design and evaluation. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 895–901. [Google Scholar]
  11. Liu, L.; Xu, G.Q.; Song, Z. Driver lane changing behavior analysis based on parallel Bayesian networks. In Proceedings of the 2010 Sixth International Conference on Natural Computation, Yantai, China, 10–12 August 2010; Volume 3, pp. 1232–1237. [Google Scholar]
  12. Schubert, R.; Schulze, K.; Wanielik, G. Situation assessment for automatic lane-change maneuvers. IEEE Trans. Intell. Transp. Syst. 2010, 11, 607–616. [Google Scholar] [CrossRef]
  13. Lefèvre, S.; Laugier, C.; Ibañez-Guzmán, J. Exploiting map information for driver intention estimation at road intersections. In Proceedings of the Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 583–588. [Google Scholar]
  14. Kumagai, T.; Akamatsu, M. Prediction of human driving behavior using dynamic Bayesian networks. IEICE Trans. Inf. Syst. 2006, 89, 857–860. [Google Scholar] [CrossRef]
  15. McCall, J.C.; Trivedi, M.M.; Wipf, D.; Rao, B. Lane change intent analysis using robust operators and sparse Bayesian learning. IEEE Trans. Intell. Transp. Syst. 2007, 8, 431–440. [Google Scholar] [CrossRef]
  16. Gindele, T.; Brechtel, S.; Dillmann, R. A probabilistic model for estimating driver behaviors and vehicle trajectories in traffic environments. In Proceedings of the 13th International IEEE Conference on Intelligent Transportation Systems (ITSC), Funchal, Portugal, 19–22 September 2010; pp. 1066–1071. [Google Scholar]
  17. Lefèvre, S.; Laugier, C.; Ibañez-Guzmán, J. Risk assessment at road intersections: Comparing intention and expectation. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium (IV), Madrid, Spain, 3–7 June 2012; pp. 165–171. [Google Scholar]
  18. Turnip, A.; Fakhrurroja, H. Estimation of the wheel-ground contact tire forces using extended Kalman filter. Int. J. Instrum. Sci. 2013, 2, 34–40. [Google Scholar]
  19. Castillo Aguilar, J.J.; Cabrera Carrillo, J.A.; Guerra Fernández, A.J.; Carabias Acosta, E. Robust road condition detection system using in-vehicle standard sensors. Sensors 2015, 15, 32056–32078. [Google Scholar] [CrossRef] [PubMed]
  20. Zareian, A.; Azadi, S.; Kazemi, R. Estimation of road friction coefficient using extended Kalman filter, recursive least square, and neural network. Proc. Inst. Mech. Eng. Part K 2016, 230, 52–68. [Google Scholar]
  21. Mikelsons, L.; Brandt, T.; Schramm, D. Real-time vehicle dynamics using equation-based reduction techniques. In IUTAM Symposium on Dynamics Modeling and Interaction Control in Virtual and Real Environments; Springer: Dordrecht, The Netherlands, 2011; pp. 91–98. [Google Scholar]
  22. Doumiati, M.; Victorino, A.; Charara, A.; Lechner, D. A method to estimate the lateral tire force and the sideslip angle of a vehicle: Experimental validation. In Proceedings of the 2010 American Control Conference, Baltimore, MD, USA, 30 June–2 July 2010; pp. 6936–6942. [Google Scholar]
  23. Guarneri, P.; Rocca, G.; Gobbi, M. A neural-network-based model for the dynamic simulation of the tire/suspension system while traversing road irregularities. IEEE Trans. Neural Netw. 2008, 19, 1549–1563. [Google Scholar] [CrossRef] [PubMed]
  24. Elvik, R.; Vaa, T.; Erke, A.; Sorensen, M. The Handbook of Road Safety Measures; Emerald Group Publishing: Bingley, UK, 2009; pp. 363–368. [Google Scholar]
  25. Engineering Dynamics Corporation. EDVAP Program Manual; Engineering Dynamics Corporation: Beaverton, OR, USA, 1994. [Google Scholar]
  26. Panchal, G.; Ganatra, A.; Kosta, Y.P.; Panchal, D. Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers. Int. J. Comput. Theory Eng. 2011, 3, 332–337. [Google Scholar] [CrossRef]
  27. Diaconescu, E. The use of NARX neural networks to predict chaotic time series. WSEAS Trans. Comput. Res. 2008, 3, 182–191. [Google Scholar]
  28. Hagan, M.T.; Demuth, H.B.; Beale, M.H.; De Jesús, O. Neural Network Design; PWS Publishing Company: Boston, MA, USA, 1996; Volume 20. [Google Scholar]
  29. Kargar, M.J.; Charsoghi, D.K. Predicting annual electricity consumption in Iran using artificial neural networks (NARX). Indian J. Sci. Res. 2014, 5, 231–242. [Google Scholar]
  30. Lin, T.N.; Giles, C.L.; Horne, B.G.; Kung, S.Y. A delay damage model selection algorithm for NARX neural networks. IEEE Trans. Signal Process. 1997, 45, 2719–2730. [Google Scholar]
  31. Jiang, C.; Song, F. Sunspot forecasting by using chaotic time-series analysis and NARX network. J. Comput. 2011, 6, 1424–1429. [Google Scholar] [CrossRef]
  32. Choudhary, I.; Assaleh, K.; AlHamaydeh, M. Nonlinear AutoRegressive eXogenous Artificial Neural Networks for predicting buckling restrained braces force. In Proceedings of the 2012 8th International Symposium on Mechatronics and Its Applications (ISMA), Sharjah, UAE, 10–12 April 2012; pp. 1–5. [Google Scholar]
  33. Xie, H.; Tang, H.; Liao, Y.H. Time series prediction based on NARX neural networks: An advanced approach. In Proceedings of the 2009 International Conference on Machine Learning and Cybernetics, Baoding, China, 12–15 July 2009; pp. 1275–1279. [Google Scholar]
  34. Abdulkadir, S.J.; Yong, S.P. Empirical analysis of parallel-NARX recurrent network for long-term chaotic financial forecasting. In Proceedings of the 2014 International Conference on Computer and Information Sciences (ICCOINS), Kuala Lumpur, Malaysia, 3–5 June 2014; pp. 1–6. [Google Scholar]
  35. Menezes, J.M.P.; Barreto, G.A. Long-term time series prediction with the NARX network: An empirical evaluation. Neurocomputing 2008, 71, 3335–3343. [Google Scholar] [CrossRef]
  36. Myung, I.J.; Pitt, M.A. Applying Occam’s razor in modeling cognition: A Bayesian approach. Psychon. Bull. Rev. 1997, 4, 79–95. [Google Scholar] [CrossRef]
  37. Berndt, H.; Emmert, J.; Dietmayer, K. Continuous driver intention recognition with hidden Markov models. In Proceedings of the 2008 11th International IEEE Conference on Intelligent Transportation Systems, Beijing, China, 12–15 October 2008; pp. 1189–1194. [Google Scholar]
  38. Ardalani-Farsa, M.; Zolfaghari, S. Chaotic time series prediction with residual analysis method using hybrid Elman–NARX neural networks. Neurocomputing 2010, 73, 2540–2553. [Google Scholar] [CrossRef]
  39. AlertDriving Magazine. Human Error Accounts for 90% of Road Accidents. Available online: http://channel.alertdriving.com/home/fleet-alertmagazine/international/human-error-accounts-90-road-accidents (accessed on 30 September 2016).
Figure 1. A schematic diagram of the system developed for driver intention classification.
Figure 2. Vehicle states measured by on-board sensors and estimated by ANN models.
Figure 3. Basic network architecture of three-layered ANN.
Figure 4. Artificial neural network model for road condition classification module.
Figure 5. NARX neural network model for estimation of vehicle state parameters (z⁻¹ is the unit time delay).
Figure 6. Use of feature map for non-separable problem.
Figure 7. Operation procedure of driver intention recognition using SVM.
Figure 8. Setup for driving simulator experiments: (a) schematic diagram of driving simulator; and (b) setup of steering wheel and pedal for obtaining driving data.
Figure 9. Classification of the road condition while the throttle is on.
Figure 10. Estimated lateral velocity depending on road surface condition: (a) Dry asphalt; (b) Gravel; (c) Wet; and (d) Snowy.
Figure 11. Estimated side slip angle depending on road surface condition: (a) Dry asphalt; (b) Gravel; (c) Wet; and (d) Snowy.
Figure 12. Estimated lateral tire force depending on road surface condition: (a) Dry asphalt; (b) Gravel; (c) Wet; and (d) Snowy.
Figure 13. Estimated roll rate depending on road surface condition: (a) Dry asphalt; (b) Gravel; (c) Wet; and (d) Snowy.
Figure 14. Estimated suspension spring compression depending on road surface condition: (a) Dry asphalt; (b) Gravel; (c) Wet; and (d) Snowy.
Figure 15. Estimated heading (yaw) depending on road surface condition: (a) Dry asphalt; (b) Gravel; (c) Wet; and (d) Snowy.
Figure 16. Lane change maneuvers and driver’s intention: (a) steering wheel angle; and (b) driving state (−1 is LCR, 0 is LK, 1 is LCL).
Table 1. Road friction coefficient and road surface condition.

Road Surface Condition    Friction Coefficient
Dry Asphalt               0.8
Gravel                    0.6
Wet                       0.4
Snowy                     0.3
Table 2. Combinations of input signals.

Set 1: Yaw rate, Longitudinal acceleration, Lateral acceleration, Steering wheel angle, Wheel speed
Set 2: Yaw rate, Longitudinal acceleration, Lateral acceleration, Steering wheel angle, Wheel speed, Lateral velocity, Roll rate
Set 3: Yaw rate, Longitudinal acceleration, Lateral acceleration, Steering wheel angle, Wheel speed, Side slip angle, Lateral tire force, Spring compression
Set 4: Yaw rate, Longitudinal acceleration, Lateral acceleration, Steering wheel angle, Wheel speed, Lateral velocity, Roll rate, Side slip angle, Lateral tire force, Spring compression, Heading
Set 5: Yaw rate, Lateral acceleration, Steering wheel angle, Lateral velocity, Roll rate, Side slip angle, Lateral tire force, Spring compression, Heading
Set 6: Yaw rate, Lateral acceleration, Steering wheel angle, Lateral velocity, Roll rate, Heading
Table 3. Result of the classification test depending on the friction coefficient (trials 1–13, with the overall rate of correct classifications).

Dry Asphalt (NS): NS 87.9%, NS 69.6%, NS 79.8%, NS 75.9%, NS 96.0%, NS 83.2%, NS 57.9%, NS 77.5%, NS 94.2%, NS 77.1%, NS 63.9%, NS 96.7%, NS 96.5% (Rate: 13/13)
Gravel (NS): NS 92.5%, NS 70.5%, NS 93.0%, NS 71.0%, NS 67.0%, NS 53.3%, NS 53.6%, NS 75.7%, NS 88.4%, NS 75.9%, NS 79.9%, NS 94.7%, NS 100% (Rate: 13/13)
Wet (S): S 58.1%, S 93.4%, S 91.5%, S 53.6%, S 82.3%, S 100%, S 87.8%, S 100%, S 92.9%, S 95.6%, S 96.2%, S 95.7%, S 65.4% (Rate: 13/13)
Snowy (S): S 100%, S 62.7%, S 100%, S 76.0%, S 80.8%, S 100%, S 72.8%, NS 56.1%, S 73.4%, S 77.0%, S 95.0%, S 100%, S 100% (Rate: 12/13)
Table 4. Range of data, RMSE and NMSE in each case.

Variable             Road Condition   Data Range    RMSE      NMSE      Order (NMSE)
Lateral Velocity     Dry asphalt      −0.6~0.6      0.0148    0.0139    10^−2
                     Gravel           −0.6~0.6      0.0229    0.0195
                     Wet              −0.6~0.6      0.0218    0.0082
                     Snowy            −0.6~0.6      0.0236    0.0077
Side Slip Angle      Dry asphalt      −0.6~0.6      0.0133    0.0123    10^−2
                     Gravel           −0.6~0.6      0.0233    0.0167
                     Wet              −0.9~0.9      0.0374    0.0142
                     Snowy            −0.9~0.9      0.0403    0.0097
Lateral Tire Force   Dry asphalt      −3000~2000    34.6      0.00089   10^−3
                     Gravel           −3000~2000    84.6      0.0066
                     Wet              −2500~2000    38.2      0.0021
                     Snowy            −2000~2000    27.5      0.0021
Roll Rate            Dry asphalt      −8~8          0.332     0.0304    10^−1
                     Gravel           −6~6          0.363     0.0475
                     Wet              −5~5          0.353     0.0774
                     Snowy            −4~4          0.304     0.1096
Spring Compression   Dry asphalt      50~85         0.708     0.0145    10^−2
                     Gravel           50~85         0.962     0.0303
                     Wet              60~80         0.594     0.0192
                     Snowy            60~80         0.698     0.0479
Heading              Dry asphalt      −15~15        1.05      0.0226    10^−2
                     Gravel           −15~15        0.962     0.0154
                     Wet              −15~15        0.551     0.008
                     Snowy            −15~15        0.545     0.0166
Table 5. Detection accuracy in four different road conditions.

       Set 1 (%)  Set 2 (%)  Set 3 (%)  Set 4 (%)  Set 5 (%)  Set 6 (%)
(a) Dry Asphalt
LCL    70.51      71.79      65.38      88.46      91.03      91.03
LK     96.30      95.06      95.68      96.91      96.91      96.91
LCR    67.14      74.29      75.71      91.43      90.00      91.43
(b) Gravel
LCL    66.15      72.31      56.92      90.77      92.30      92.30
LK     95.57      96.20      95.57      96.20      96.84      96.84
LCR    56.96      68.35      64.56      89.87      89.87      91.14
(c) Wet
LCL    54.29      67.14      60.00      92.86      92.86      92.86
LK     97.14      97.71      97.71      97.71      97.71      97.14
LCR    60.66      73.77      70.49      90.16      90.16      90.16
(d) Snowy
LCL    62.26      62.26      52.83      90.57      90.57      90.57
LK     97.84      97.84      97.84      97.30      97.30      97.30
LCR    71.43      73.21      75.00      89.29      89.29      91.07
Table 6. Average time delay for correct detection.

Driver Maneuver   Dry Asphalt   Gravel    Wet       Snowy
LCL               0.45 s        0.4 s     0.4 s     0.4 s
LK                0.15 s        0.222 s   0.182 s   0.146 s
LCR               0.45 s        0.433 s   0.433 s   0.4 s
