Article

Inattentive Driving Detection Using Body-Worn Sensors: Feasibility Study

Takuma Akiduki, Jun Nagasawa, Zhong Zhang, Yuto Omae, Toshiya Arakawa and Hirotaka Takahashi
1 Graduate School of Engineering, Toyohashi University of Technology, Toyohashi 441-8580, Japan
2 Department of Intelligent Mechanical Engineering, Hiroshima Institute of Technology, Saeki-ku, Hiroshima 731-5193, Japan
3 Department of Industrial Engineering and Management, College of Industrial Technology, Nihon University, Narashino 275-8575, Japan
4 Department of Information Technology and Media Design, Nippon Institute of Technology, Miyashiro-machi, Saitama 345-8501, Japan
5 Research Center for Space Science, Advanced Research Laboratories, Tokyo City University, Setagaya-ku, Tokyo 158-0082, Japan
* Author to whom correspondence should be addressed.
Sensors 2022, 22(1), 352; https://doi.org/10.3390/s22010352
Submission received: 29 October 2021 / Revised: 27 December 2021 / Accepted: 31 December 2021 / Published: 4 January 2022
(This article belongs to the Section Sensor Networks)

Abstract

This study aims to build a system for detecting a driver's internal state using body-worn sensors. Our system is intended to detect inattentive driving that occurs during long-term driving on a monotonous road, such as a highway. The inattentive state of a driver in this study is an absent-minded state caused by a decrease in driver vigilance levels due to fatigue or drowsiness. However, it is difficult to define these inattentive states clearly because it is difficult for drivers to recognize when they fall into an absent-minded state. To address this problem and achieve our goal, we propose a detection algorithm for inattentive driving that uses not only a heart rate sensor but also body-worn inertial sensors, which have the potential to detect driver behavior more accurately and at a much lower cost. The proposed method combines three detection models, namely body movement, drowsiness, and inattention detection, based on an anomaly detection algorithm. Furthermore, we verified the accuracy of the algorithm with experimental data from five participants measured in long-term, monotonous driving scenarios using a driving simulator. The results indicate that our approach can detect both the inattentive and drowsy states of drivers using signals from a heart rate sensor and accelerometers placed on the wrists.

1. Introduction

Inattentive driving is one of the leading causes of traffic accidents, accounting for approximately 20% of fatal accidents in Japan [1]. The inattentive state of a driver in this study is an absent-minded state caused by a decrease in driver vigilance levels due to fatigue or drowsiness. Thus, a driver assistance system for monitoring these drivers’ internal states, such as the driver’s vigilance or drowsiness, is important to prevent such accidents.
Inattention detection systems can be classified into two categories according to the types of their input signals. One category uses vehicle signals such as the speed and steering angle [2,3,4,5,6,7]. Kume et al. [3] reported that the sensitivity of detecting an inattentive state using in-vehicle sensors was 53% with a fixed threshold, and 82% with a manually set threshold. However, it is difficult to detect weak drowsiness that the driver is unaware of from vehicle signals [8].
The other category uses physiological and behavioral signals from the driver, which can directly capture changes in the driver's internal state. The major measurements include eye and facial movements [8,9,10,11,12,13,14], heart rate variability [15,16,17], and brain activity [18,19]. Algorithms based on image processing can detect eye and facial movements without physical contact; however, it is difficult to measure these movements stably throughout the day and night because image processing is affected by the level of illumination. Detection methods using an electroencephalogram (EEG) require electrodes to be attached to the driver's head, which is impractical for everyday use in a real car environment.
In these methods, it is difficult to stably measure the subtle signal changes caused by sleepiness, and detection accuracy varies greatly among individuals. Therefore, more stable measures are needed to improve detection accuracy, such as combinations of biological and behavioral indicators. As a behavioral indicator, the driver's body movements that are not directly related to driving operations, called subsidiary behaviors, have been investigated. Roge et al. observed driver behavior during long-term monotonous driving tasks on a driving simulator and suggested a relationship between the frequency of occurrence of subsidiary behaviors and the arousal level [20]. Matsuo and Abdelaziz proposed a method to estimate the sleepiness level by detecting the eye closure rate, head swaying, and frequency of subsidiary behaviors based on image processing [21]. Sunagawa et al. also proposed using physiological signals, such as electrocardiogram (ECG) or heart rate analysis, together with changes in the driver's posture, as indicators of changes in the sleepiness level [22]. However, these methods require cameras or in-vehicle seat sensors to detect driver behavior. In contrast, wearable sensors, such as wrist-worn accelerometers, have the potential to detect driver behavior more accurately and at a much lower cost [23,24,25,26].
In our previous studies, Tsubowa et al. [27] used wrist-worn sensors to measure the frequency of subsidiary behaviors associated with changes in sleepiness level, including not only yawning and head swaying but also arm and hand activities, and discussed the feasibility of driver monitoring via wrist-worn sensors. Additionally, Nagasawa et al. [28] measured the body movement of a car driver using three-axis accelerometers to detect features of an inattentive driving state. The results suggested that the variance of the acceleration data on the wrists may reflect a change in the driver's internal state; thus, drowsiness and the inattentive state of a driver may be detectable by analyzing both physiological and behavioral signals, which can be measured by devices such as a heart rate monitor or an accelerometer built into a smartwatch.
Building on our previous studies, this study presents a new detection algorithm for inattentive driving that uses not only a heart rate sensor but also body-worn inertial sensors. Our method applies an anomaly detection algorithm: unlike supervised binary classification, it is unsupervised, and only data from the normal driving state are used for model construction. Furthermore, we verified the accuracy of the algorithm using experimental data collected on a driving simulator with five participants. The driving scenario was designed to collect two types of data: one condition simulates normal driving, and the other induces a decrease in driving performance. The proposed method has the advantage of being implementable in a smartphone or a small wearable device, such as that shown in Figure 1, without requiring in-vehicle sensors or a camera.

2. Overview of the Proposed System

The overview of our proposed system for detecting inattentive driving is shown in Figure 1. It consists of three parts: data acquisition using body-worn sensors, feature extraction, and internal state detection. In addition, the detection part includes three types of detection: body movement, drowsiness, and inattention.

2.1. Detection Algorithm

Inattentive driving is defined as a state in which driving performance is low due to a decrease in the level of vigilance. Vigilance differs from drowsiness, which refers to a person's tendency to fall asleep [18]. Unlike a drowsy state, an inattentive state is difficult to judge from a driver's facial expression. In addition, it is difficult for drivers themselves to recognize the moment they fall into an inattentive state. For these reasons, measuring the actual state of a driver remains difficult. Therefore, our algorithm is built on an anomaly detection approach: the detection model is constructed using data from the normal driving state, and a decrease in vigilance is detected by the degree of dissimilarity from that normal state.
As shown in Figure 2a, the algorithm consists of three detection models (body movement, drowsiness, and inattention), each of which runs in parallel. Three independent models are constructed because each phenomenon has a different timescale. For example, changes in drowsiness occur on the order of minutes, whereas changes in body movement and decreases in vigilance occur on the order of seconds. Thus, the length of the signal used as input to each model, called the window size, is different for each model. The window moves over the sensor data at a specified interval called the sliding interval, and each feature is computed over the data in the window.
Then, features are calculated from the sensor data for each sliding interval and set as input to each model. From the detection results of each model, one state is finally determined according to the flow in Figure 2a. The role and function of each model are as follows:
  • Body Movement Detection: Based on the anomaly detection approach, this model treats wrist movements during monotonous steering operation as a non-anomaly (i.e., “no body movement”) and other large body movements, such as changing the hand position or releasing the steering wheel, as an anomaly. Features of wrist movements, such as the mean and variance of the acceleration, which are widely used in Human Activity Recognition (HAR) [29,30,31], are computed from the accelerometer data and used as input to the model. If “no body movement” is determined, the next model determines the state.
  • Drowsiness Detection: Based on the anomaly detection approach, this model treats a state with an ordinary arousal level as a non-anomaly (i.e., “no drowsiness”) and a decrease in arousal from the normal state as an anomaly. Features of heart rate variability (HRV), widely used as indicators of autonomic nervous system activity, are computed from RR interval (RRI) data and used as input to the model. The RR interval is the time between successive R waves, the positive peaks in the ECG waveform, as shown in Figure 1. If “no drowsiness” is determined, the next model determines the state.
  • Inattention Detection: Based on the anomaly detection approach, this model treats wrist movements during monotonous steering operation as a non-anomaly (i.e., “no inattention”) and wrist movements during monotonous steering operation in a state of reduced vigilance as an anomaly. The motion features described in Section 2.2.2 are computed from accelerometer data to capture fine-grained changes in wrist movement and are used as input to the model. If “no inattention” is determined, the state is defined as the “normal state”.
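The cascading decision flow in Figure 2a can be written compactly. The following is a minimal sketch under the assumption that each model exposes its $T^2$ statistic and control limit (see Section 2.3); the class and function names are ours and not part of the published implementation.

```python
from dataclasses import dataclass


@dataclass
class DetectorStub:
    """Stand-in for one MSPC detection model (see Section 2.3)."""
    control_limit: float

    def is_anomaly(self, t2: float) -> bool:
        # A window is an anomaly when its T^2 statistic exceeds the control limit.
        return t2 > self.control_limit


def classify_window(body_t2, drowsy_t2, inatt_t2, body, drowsy, inatt):
    """Cascade of Figure 2a: body movement -> drowsiness -> inattention."""
    if body.is_anomaly(body_t2):
        return "body movement"
    if drowsy.is_anomaly(drowsy_t2):
        return "drowsiness"
    if inatt.is_anomaly(inatt_t2):
        return "inattention"
    return "normal state"
```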
As shown in Figure 2b, the “anomaly” in each model is determined by measuring the distance between an input sample and the mean of the learned samples used for model construction. Each model is constructed using multivariate statistical process control (MSPC), an anomaly detection algorithm described in Section 2.3. In this study, however, as a feasibility study, we verify the drowsiness and inattention detection models separately and exclude the body movement detection model.

2.2. Feature Extraction

To detect drowsiness and inattentive driving, two types of features are extracted from the RRI data and the accelerometer data using the following procedures.

2.2.1. Heart Rate Variability Features

Heart rate variability (HRV) features are calculated according to [15,32]. The following five features in the time domain are calculated directly from the raw RRI data: the mean value of the RR intervals (meanNN) [ms], the standard deviation of the RRI (SDNN) [ms], the root mean square of successive RRI differences (RMSSD) [ms], the total power [ms²], and the number of adjacent RRIs that differ from each other by more than 50 ms (NN50). In addition, the following three features in the frequency domain are obtained from the power spectral density (PSD) of the resampled RRI data, where the PSD is calculated using an autoregressive model: the low-frequency power (LF) [ms²] (0.04–0.15 Hz), the high-frequency power (HF) [ms²] (0.15–0.4 Hz), and the LF/HF ratio [%].
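For illustration, the sketch below computes these eight features for one analysis window. It is a minimal sketch under stated assumptions: the RRIs are given in milliseconds, the resampled series is at 1 Hz (Section 3.4), the Yule-Walker AR fit uses statsmodels, the total power is approximated by the RRI variance, and the function name hrv_features is ours.

```python
import numpy as np
from statsmodels.regression.linear_model import yule_walker


def hrv_features(rri_ms, rri_1hz_ms, fs=1.0, order=10):
    """Eight HRV features for one analysis window.

    rri_ms     : raw RR intervals [ms] within the window
    rri_1hz_ms : RRI series resampled at 1 Hz [ms] within the window
    """
    diff = np.diff(rri_ms)
    feats = {
        "meanNN": np.mean(rri_ms),
        "SDNN": np.std(rri_ms, ddof=1),
        "RMSSD": np.sqrt(np.mean(diff ** 2)),
        "TotalPower": np.var(rri_ms, ddof=1),      # approximation used in this sketch
        "NN50": int(np.sum(np.abs(diff) > 50.0)),
    }
    # AR power spectral density of the resampled RRI (Yule-Walker, order 10).
    x = rri_1hz_ms - np.mean(rri_1hz_ms)
    rho, sigma = yule_walker(x, order=order, method="mle")
    f = np.linspace(0.0, fs / 2.0, 513)
    e = np.exp(-2j * np.pi * np.outer(f, np.arange(1, order + 1)) / fs)
    psd = sigma ** 2 / (fs * np.abs(1.0 - e @ rho) ** 2)
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(psd[lf_band], f[lf_band])
    hf = np.trapz(psd[hf_band], f[hf_band])
    feats.update({"LF": lf, "HF": hf, "LF/HF": lf / hf})
    return feats
```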

2.2.2. Motion Features

On monotonous roads such as highways, a driver must maintain continuous, delicate steering operations to prevent the vehicle from meandering due to road gradients or crosswinds. However, when the driver's attention is reduced due to fatigue or low arousal, there is a delay in recognizing the vehicle's deviation from the lane. This causes a delay in the driving operation, and the temporal changes in both the vehicle speed and the steering angle become small and slow in an inattentive driving state [3,33]. These disturbances in the driving operation are also expected to be reflected in the driver's limb movements. Therefore, to emphasize and detect these changes from the wrist movements, we define the time difference of the acceleration signals measured at the wrists by the following procedure:
  • Consider a subsequence X extracted by a sliding window [34] of length W [s] from the acceleration signal of the $\alpha$th axis.
  • Divide the subsequence X into $N_{\mathrm{sw}}$ sub-windows and take the difference in average amplitude between adjacent sub-windows:
    $$d_i^{\alpha} = A_{i+1}^{\alpha} - A_i^{\alpha}, \quad i = 1, 2, \ldots, N_{\mathrm{sw}} - 1, \qquad (1)$$
    where $A_i^{\alpha}$ is the average amplitude of the acceleration signal in the ith sub-window.
  • Calculate a histogram of the $d_i^{\alpha}$ and use its statistics as the motion features. In this study, we use three statistics to describe the distribution shape: variance, skewness, and kurtosis. Note that these three motion features, the variance, skewness, and kurtosis of the difference values within the subsequence, were selected experimentally as the combination that provided the best overall estimation.
By calculating the difference between the averaged values for each region, the changes in the movement of the wrists can be captured regardless of vehicle vibration or steering position.
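A minimal sketch of this computation for a single acceleration axis is shown below, assuming that the “average amplitude” $A_i^{\alpha}$ is the sample mean of each sub-window; the function name motion_features is ours.

```python
import numpy as np
from scipy.stats import skew, kurtosis


def motion_features(window, n_sw=60):
    """Variance, skewness, and kurtosis of the adjacent sub-window differences
    d_i in Equation (1) for one acceleration axis.

    window : 1-D array of acceleration samples (one subsequence X of length W)
    n_sw   : number of sub-windows N_sw
    """
    # Average amplitude A_i of each sub-window (taken here as the sample mean).
    amplitudes = np.array([seg.mean() for seg in np.array_split(window, n_sw)])
    # Differences between adjacent sub-windows, Equation (1).
    d = np.diff(amplitudes)
    # Statistics describing the shape of the distribution of d.
    return np.array([np.var(d), skew(d), kurtosis(d)])
```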
Figure 3a,b shows an example of the dynamic change obtained by dividing the Y-axis acceleration data measured on the left wrist into $N_{\mathrm{sw}} = 60$ equal sub-windows over a window length of W = 60 s. In Figure 3a, fine-grained movements associated with the steering operation can be seen, whereas in (b) the change in motion is smaller than in (a). Note that (b) corresponds to a state of inattentive driving according to the reaction time measured by the vigilance task; the reaction time (RT) in (b) is longer than in (a) (RT = 528.5 ms > 352.5 ms). These trends suggest that the variation in the difference values became smaller because the awareness of driving decreased and the change in the wrist movement became smaller. Figure 3c shows the difference values of (a) and (b) overlaid as histograms. In the normal driving of (a), the difference values are large, so the frequency distribution is wide; in the inattentive driving of (b), the change in body movement is small, and the distribution concentrates near zero.

2.3. Multivariate Statistical Process Control Model

MSPC has been used for monitoring multivariate processes in fields such as process control and medical diagnosis [15,35,36]. MSPC models the correlation among variables with principal component analysis (PCA).
Here, consider a data matrix $\mathbf{X} \in \mathbb{R}^{N \times P}$ whose ith row is the ith feature vector $\mathbf{x}_i \in \mathbb{R}^{P}$, where P and N represent the number of variables and samples, respectively. Note that each variable is assumed to be standardized. The singular value decomposition of the data matrix $\mathbf{X}$ is written as follows:
$$\mathbf{X} = \mathbf{U}\mathbf{S}\mathbf{V}^{\top} = \begin{bmatrix} \mathbf{U}_R & \mathbf{U}_0 \end{bmatrix} \begin{bmatrix} \mathbf{S}_R & \mathbf{0} \\ \mathbf{0} & \mathbf{S}_0 \end{bmatrix} \begin{bmatrix} \mathbf{V}_R & \mathbf{V}_0 \end{bmatrix}^{\top}, \qquad (2)$$
where $\mathbf{U}$ and $\mathbf{V}$ are orthogonal matrices, $R\,(\leq P)$ denotes the number of principal components to be retained in the PCA model, and $\mathbf{A}^{\top}$ denotes the transpose of $\mathbf{A}$. The diagonal matrix $\mathbf{S}$ has the singular values $s_r$ on its diagonal in decreasing order. From Equation (2), the score matrix is given by
$$\mathbf{T}_R = \mathbf{X}\mathbf{V}_R = \mathbf{U}_R \mathbf{S}_R. \qquad (3)$$
Using Equation (3), P-dimensional data can be projected onto the R-dimensional subspace. In addition, Hotelling's $T^2$ statistic is used to monitor anomalies in the subspace spanned by the principal components:
$$T^2 = \mathbf{x}^{\top} \mathbf{V}_R \mathbf{S}_R^{-2} \mathbf{V}_R^{\top} \mathbf{x}, \qquad (4)$$
where $\mathbf{x} \in \mathbb{R}^{P}$ is a newly measured sample. When the $T^2$ statistic calculated for the sample $\mathbf{x}$ exceeds the control limit, the sample $\mathbf{x}$ is determined to be an anomaly.
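The following is a minimal sketch of an MSPC model based on Equations (2)–(4), fitted on normal-state data only; the class and method names are ours, and only one principal component is retained, as in Section 4.

```python
import numpy as np


class MSPCModel:
    """PCA-based MSPC with Hotelling's T^2, fitted on normal-state data only."""

    def __init__(self, n_components=1):
        self.r = n_components

    def fit(self, X):
        # Standardize each variable with statistics of the normal-state data.
        self.mean_ = X.mean(axis=0)
        self.std_ = X.std(axis=0, ddof=1)
        Z = (X - self.mean_) / self.std_
        # Singular value decomposition, Equation (2).
        _, s, vt = np.linalg.svd(Z, full_matrices=False)
        self.V_r = vt[: self.r].T                        # retained loadings V_R
        self.S_r_inv2 = np.diag(1.0 / s[: self.r] ** 2)  # S_R^{-2}
        return self

    def t2(self, x):
        # Hotelling's T^2 for a new sample, Equation (4).
        z = (x - self.mean_) / self.std_
        y = self.V_r.T @ z
        return float(y @ self.S_r_inv2 @ y)
```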

3. Experiment

Accelerometer data and RRI data were collected through experiments using a driving simulator to build and validate the detection models in Section 2.1.

3.1. Participants

A total of five legally licensed drivers (2 males and 3 females) participated in the experiment. Their average age was 32 years old (range of 20–45 years). They had held a driver’s license for a mean period of 12.6 years (range of 2–25 years). Before collecting data, we explained the contents of the experiment. In addition, we obtained informed consent from each subject to use the collected data for research purposes.

3.2. Experimental Design and Procedures

A relatively simple and monotonous driving task was used to simulate a situation in which the driver's performance deteriorates. The driving scenario consisted of a course that simulated a two-lane highway with a figure-eight loop of approximately 30 km per lap. Participants were instructed to follow a leading vehicle while maintaining a safe distance, but not to change lanes or overtake the leading vehicle. The speed of the leading vehicle was set to 80 km/h. Figure 4a shows part of the course scene on the driving simulator, which was designed using UC-Win/Road (FORUM8 Co., Ltd., Tokyo, Japan).
The experiments were conducted in the order shown in Figure 4b while changing the traveling time and the traffic situation around the participant's vehicle. After the body-worn sensors were set up, participants drove for approximately 5 min to become familiar with the driving simulator. Then, to obtain a baseline for each participant, Conditions A and C involved driving on the course for approximately 6 min to simulate normal driving. In Condition B, each participant drove on the course for approximately 30 min to simulate a situation of inattentive driving.
To obtain more pronounced changes in sleepiness level, all experiments were conducted between 2:00 p.m. and 6:00 p.m. In addition, all participants were asked to avoid alcohol and caffeinated drinks on the day before the experiment.

3.3. Annotation of Driver State

To evaluate the drowsiness level, face image data were captured by a USB webcam installed on the dashboard, as shown in Figure 4c. The captured images were observed by three trained human referees, who rated the drowsiness level on a 6-point scale at 5 s intervals based on the participant's facial expressions and gestures, referring to Ref. [37]. The average of the values provided by the three referees was then calculated. From the results, intervals with an average value of 2.0 or more were defined as the “drowsiness state”, and the other intervals were defined as the “non-drowsiness state”.
To evaluate the inattentive level, the simple reaction time (RT) was measured by the following vigilance task. Participants were instructed to push the button installed on the steering wheel, shown in Figure 4c, as quickly as possible when they noticed that the LED light was illuminated. The LED light installed on the dashboard was programmed to turn on for 2 s at random intervals within 10 s. RTs were calculated from the onset of the LED light to the participant's button press. From the results, intervals with RTs of mean + 1 SD or more for each participant were defined as the “inattentive state”, and the other intervals were defined as the “non-inattentive state”.
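A minimal sketch of this per-participant labeling rule is given below; the function name label_inattentive is ours, and the use of the sample standard deviation is an assumption, since the paper does not specify the exact formula.

```python
import numpy as np


def label_inattentive(reaction_times_ms):
    """Label vigilance-task trials: True when RT >= mean + 1 SD (Section 3.3).

    The threshold is computed per participant from that participant's RTs.
    """
    rt = np.asarray(reaction_times_ms, dtype=float)
    threshold = rt.mean() + rt.std(ddof=1)
    return rt >= threshold, threshold
```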

3.4. Data Acquisition

We used a 6-axis inertial sensor (ATR-Promotions Inc., TSND121 [38]) to record the acceleration and gyroscope signals from the body movement of a driver while driving. Five inertial sensors were attached to the driver’s body segments, and their locations and number are shown in Figure 5. These measured signals were sampled at 100 Hz in each sensor module and transmitted to the host computer via Bluetooth. To remove high-frequency noise, all signals were filtered by a third-order Butterworth Low Pass filter with a cut-off frequency of 12.5 Hz.
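A minimal sketch of this preprocessing step with SciPy is shown below; applying the filter in a zero-phase manner with filtfilt is our choice for offline analysis and is not stated in the paper.

```python
from scipy.signal import butter, filtfilt

FS = 100.0  # inertial sensor sampling rate [Hz]

# Third-order Butterworth low-pass filter with a 12.5 Hz cut-off frequency.
b, a = butter(N=3, Wn=12.5, btype="low", fs=FS)


def lowpass(signal):
    """Remove high-frequency noise from one sensor axis (zero-phase filtering)."""
    return filtfilt(b, a, signal)
```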
In addition, to detect driver drowsiness, electrocardiogram (ECG) data were recorded synchronously with the inertial sensor data using an ECG amplifier module [38]. The ECG signal was sampled at 1 kHz and transmitted to the host computer via Bluetooth. The RR intervals (RRIs) were then extracted from the ECG signal. Since the raw RRI data were not sampled at equal intervals, they were interpolated using a spline and resampled at 1 s intervals for frequency analysis. Note that, in this study, we measured ECG data on the chests of participants; heart rate data could also be acquired using wristwatch-type devices, such as the Apple Watch [39].
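The interpolation step can be sketched as follows, assuming that the beat times are reconstructed as the cumulative sum of the RR intervals and that a cubic spline is used; the function name resample_rri is ours.

```python
import numpy as np
from scipy.interpolate import CubicSpline


def resample_rri(rri_ms):
    """Spline-interpolate irregular RRI data and resample it at 1 s intervals.

    rri_ms : successive RR intervals [ms]; beat times are taken here as the
             cumulative sum of the intervals.
    """
    beat_times = np.cumsum(rri_ms) / 1000.0                 # beat times [s]
    spline = CubicSpline(beat_times, rri_ms)
    grid = np.arange(beat_times[0], beat_times[-1], 1.0)    # 1 s grid
    return grid, spline(grid)
```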

4. Results and Discussions

As a feasibility study, the drowsiness and inattention detection models shown in Figure 2a were constructed separately by the MSPC method, excluding the body movement detection model, and their detection rates were verified. In the following, to construct each MSPC model, the dataset collected under Condition A described in Section 3.2 was used as the dataset for the normal state. The dataset collected under Condition B was then used to evaluate the detection accuracy of the constructed models. However, for participants #2 and #4, the dataset of Condition C was used for model construction instead of Condition A because their Condition A datasets had missing values due to a problem in the ECG measurement. In addition, the MSPC models were constructed and evaluated for each participant using that participant's own dataset; in other words, they are user-dependent models.
Only one principal component was adopted in all MSPC models. The control limit of the $T^2$ statistic was set so that 90% of the samples used for model construction fell below the control limit and the remaining 10% fell above it.
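One simple way to realize this 90% rule, reusing the MSPCModel sketch from Section 2.3, is to take the 90th percentile of the $T^2$ statistics of the training samples; the function name control_limit_90 is ours.

```python
import numpy as np


def control_limit_90(model, X_normal):
    """T^2 control limit such that 90% of the normal-state training samples
    fall below it (model follows the MSPCModel sketch in Section 2.3)."""
    t2_train = np.array([model.t2(x) for x in X_normal])
    return np.percentile(t2_train, 90.0)
```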

4.1. Drowsiness Detection Model

The RRI data were sliced by the sliding window, and the eight HRV features described in Section 2.2.1 were calculated within each subsequence. Here, the window size was 120 s and the sliding interval was 1 s. In addition, an autoregressive model fitted by the Yule-Walker method with an order of 10 was used to calculate the PSD of the RRI data, referring to [15]. Then, using the procedure provided in Section 2.3, a drowsiness detection model was constructed for each participant of the experiment, where P = 8 and $N = N_{\mathrm{drw}}$. Here, $N_{\mathrm{drw}}$ represents the number of subsequences measured after the vehicle started running on the specified course under Condition A. The average of $N_{\mathrm{drw}}$ over the five participants was 191 subsequences.
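The sliding-window construction can be sketched as follows, reusing the hrv_features and MSPCModel sketches from Sections 2.2.1 and 2.3; all names and the exact alignment of the raw and resampled series are assumptions of this sketch.

```python
import numpy as np

FEATURE_ORDER = ("meanNN", "SDNN", "RMSSD", "TotalPower", "NN50", "LF", "HF", "LF/HF")


def build_drowsiness_model(beat_times_s, rri_ms, grid_s, rri_1hz_ms,
                           window_s=120, step_s=1):
    """Slide a 120 s window at 1 s steps over the Condition A RRI data,
    compute the eight HRV features per window, and fit the MSPC model.

    beat_times_s, rri_ms : arrays of beat times [s] and raw RR intervals [ms]
    grid_s, rri_1hz_ms   : 1 Hz time grid [s] and resampled RRI series [ms]
    """
    rows = []
    for start in np.arange(grid_s[0], grid_s[-1] - window_s, step_s):
        raw = rri_ms[(beat_times_s >= start) & (beat_times_s < start + window_s)]
        res = rri_1hz_ms[(grid_s >= start) & (grid_s < start + window_s)]
        feats = hrv_features(raw, res)                 # sketch from Section 2.2.1
        rows.append([feats[name] for name in FEATURE_ORDER])
    X = np.asarray(rows)                               # shape (N_drw, P = 8)
    return MSPCModel(n_components=1).fit(X)            # sketch from Section 2.3
```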

4.2. Inattentive Detection Model

The accelerometer data from the wrists (S3 and S4 in Figure 5), selected with practicality in mind, were used to construct the inattentive detection model. The accelerometer data were sliced by the sliding window, and the motion features described in Section 2.2.2 (the variance, skewness, and kurtosis of the difference values within each subsequence) were calculated. Here, the window size was W = 60 s, the sliding interval was 1 s, and the number of sub-windows was $N_{\mathrm{sw}} = 60$. The posture and movement of the hands holding the steering wheel differ among individuals; however, in this study, to detect the fine-grained movement associated with the steering operation in each direction for both hands, the motion features based on Equation (1) were computed for the 3-axis acceleration data of both wrists. Furthermore, by building a model for each individual, a normal-state model reflecting each individual's driving style was constructed. Thus, using the procedure provided in Section 2.3, an inattentive detection model was constructed for each participant of the experiment, where P = 18 (2 sensors × 3 axes × 3 motion features) and $N = N_{\mathrm{atv}}$. Here, $N_{\mathrm{atv}}$ represents the number of subsequences measured after the vehicle started running on the specified course under Condition A. The average of $N_{\mathrm{atv}}$ over the five participants was 251 subsequences.
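Assembling the 18-dimensional input sample can be sketched as follows, reusing the motion_features sketch from Section 2.2.2; the function name and the array layout are assumptions of this sketch.

```python
import numpy as np


def inattention_feature_vector(left_wrist_xyz, right_wrist_xyz, n_sw=60):
    """Build the P = 18 input sample: 2 sensors x 3 axes x 3 motion features.

    left_wrist_xyz, right_wrist_xyz : arrays of shape (W * fs, 3) holding the
    3-axis acceleration of one 60 s window for each wrist sensor (S3, S4).
    """
    feats = []
    for sensor in (left_wrist_xyz, right_wrist_xyz):
        for axis in range(3):
            feats.extend(motion_features(sensor[:, axis], n_sw=n_sw))
    return np.asarray(feats)          # fed to the inattentive MSPC model
```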

4.3. Detection Results

Table 1 shows the detection results of the constructed drowsiness and inattentive detection models for each participant. The sensitivities and specificities were obtained from the Condition B datasets for each participant. The average sensitivities of the drowsiness and inattentive states, which correspond to the drowsiness and inattention detection rates, were 52% and 71%, respectively. Participant #3 was excluded from the evaluation of drowsiness detection because a problem with the captured face images prevented the evaluation of facial expressions, and consequently the drowsiness level could not be determined.
Figure 6a–e shows the detection results of the drowsiness and inattentive states for the five participants. Each panel shows the time course of the actual state defined by the method in Section 3.3 (top), the estimation result of the detection model, in which a high level is output when the $T^2$ statistic exceeds its control limit (center), and the time course of the $T^2$ statistic (bottom).

4.4. Discussion

The drowsiness detection rate obtained in this study was 52%, which is lower than the 68% reported by Abe et al. [15]. One likely reason is the missing rate of the RRI data. The average missing rate of the RRI data in this study was 17%, which is higher than in the prior research, where the missing rates were less than 2% for all participants. Thus, the results obtained in this study could be improved by detecting the RRIs more stably, suppressing the effects of background noise and motion artifacts. On the other hand, Lee et al. [40] used a recurrence plot of ECG data and a convolutional neural network (CNN) to detect drowsy driving. Iwamoto et al. [17] reported that the detection accuracy could be improved by using long short-term memory (LSTM) and raw RRI data instead of conventional HRV features. Although a sufficiently large dataset is needed to train these models, further investigation and comparison of these deep learning-based detection methods are needed in future work.
Further, the inattentive detection rate in this study was 71%; the sensitivity for three of the participants exceeded 87%, whereas that for one participant was less than 12%. This means that the control limit should be adapted to each participant. To construct an optimal detection model with the highest sensitivity, it is necessary to choose a personalized control limit for each detection model; for this purpose, receiver operating characteristic curves based on signal detection theory could be used, for example. In the inattentive detection model, the false positive rate was 42.5%. One reason for this is that when body movements of the driver occurred, the motion features fluctuated notably and were determined to be anomalies by the inattentive detection model. This suggests that the body movement detection model can capture movements not related to driving and may play a role in reducing the false positive rate. Further studies will investigate and verify a design that integrates the three detection models described in Section 2.1. On the other hand, Kume et al. [3] reported a specificity of 87% for the inattentive state (i.e., a false positive rate of 13%) with a manually set threshold using steering angle and vehicle speed data. Note that in their study, the inattentive state was defined by subjective evaluation and predicted at 3 min intervals. Moreover, we constructed and evaluated the inattentive detection model using sensor data from both wrists. However, the tendency of change in the motion features could also be affected by differences in the traffic rules of the country or region, for example, left- or right-hand drive. Thus, further investigation of invariant features that do not depend on the usage environment or the individual is needed. Another issue is how to set the control limit in consideration of individual differences.
Here, we summarize the comparison with the other approaches referred to in this paper in Table 2. An overview of the categories of the driver's internal state shown in Table 2 is summarized in Figure 7. Table 2 and Figure 7 show that studies on detecting inattentive driving in an absent-minded state are still very limited compared with drowsiness and distraction detection. Moreover, the “ground truth” column in Table 2 shows that the definition of the driver's internal state is one of the most important issues because a standard labeling method has not yet been established. Additionally, the experimental conditions, such as the scenario and platform, differ for each study, which makes it difficult to discuss and compare the results of the studies directly.

5. Conclusions

In this study, we proposed an algorithm for detecting inattentive states and drowsiness, which occur in long, monotonous driving scenarios such as highway driving, using body-worn accelerometers and a heart rate sensor. The proposed method consists of three detection models, namely body movement, drowsiness, and inattention detection, based on an anomaly detection algorithm. We also reported the evaluation results of two of these detection models: drowsiness and inattentive detection. In our approach, motion features extracted from the acceleration signals on the wrists were used to detect inattentive driving, and HRV features extracted from RRI data were used to detect drowsy driving. Each detection model was constructed on the basis of an anomaly detection algorithm using MSPC. Our approach and its verification results indicate the feasibility of driver monitoring via wrist-worn sensors. Moreover, since our method can be realized using wearable sensor devices, a driver's state can be measured without dependence on the type of vehicle. On the other hand, our algorithm could be challenging to apply in non-monotonous driving scenarios because the wrist motion features vary greatly with steering operations such as overtaking or turning left or right.
The results of this study were obtained from a limited number of participants. According to the NHTSA guidelines [41], further investigation of the effect of detecting a driver's internal state across a wide range of age groups is required. Moreover, in the experimental design of this study, the amount of data usable for building and verifying the user-dependent models is limited; that is, there are only three sessions of data for each participant. Therefore, the experimental design needs to be improved, for example by increasing the number of sessions, to construct and expand the dataset. In future work, we will integrate the detection models and expand the size of the dataset for verification, thus scaling our approach toward real-car usage.

Author Contributions

Conceptualization, T.A. (Takuma Akiduki), J.N. and H.T.; methodology, T.A. (Takuma Akiduki), J.N., Z.Z., Y.O., T.A. (Toshiya Arakawa) and H.T.; software, T.A. (Takuma Akiduki), J.N. and Y.O.; validation, T.A. (Takuma Akiduki), J.N., Z.Z., Y.O., T.A. (Toshiya Arakawa) and H.T.; investigation, T.A. (Takuma Akiduki) and J.N.; writing and original draft preparation, T.A. (Takuma Akiduki), J.N., T.A. (Toshiya Arakawa) and H.T.; writing, review and editing, T.A. (Takuma Akiduki), J.N., Z.Z., Y.O., T.A. (Toshiya Arakawa) and H.T.; supervision, T.A. (Takuma Akiduki) and Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Young Scientists [19K14924; T. Akiduki], [19K20062; Y. Omae], and JSPS Grant-in-Aid for Scientific Research (C) [20K12090; H. Takahashi].

Institutional Review Board Statement

All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Toyohashi University of Technology (protocol code 27-14 and H30-13).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data in this research are available upon request to T. Akiduki.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Japanese National Police Agency. Report of Traffic Accident Statistics. Available online: https://www.e-stat.go.jp/stat-search/files?page=1&layout=datalist&lid=000001202708 (accessed on 18 October 2021). (In Japanese).
  2. Boer, E.R.; Liu, A. Behavioral Entropy as an Index of Workload. In Proceedings of the 44th Annual Meeting of the Human Factors and Ergonomics Society, San Diego, CA, USA, 30 July–4 August 2000; Volume 44, pp. 125–128.
  3. Kume, T.; Naito, T.; Ishida, K.; Kawai, S.; Matsunaga, S.; Nishii, K.; Kitajima, H. Development of Absentminded State Detection and Resolution Methods Using Vehicle Equipments. Trans. Soc. Automot. Eng. Jpn. 2014, 45, 567–572.
  4. Saito, Y.; Itoh, M.; Inagaki, T. Driver Assistance System with a Dual Control Scheme: Effectiveness of Identifying Driver Drowsiness and Preventing Lane Departure Accidents. IEEE Trans. Human-Mach. Syst. 2016, 46, 660–671.
  5. Arefnezhad, S.; Samiee, S.; Eichberger, A.; Nahvi, A. Driver Drowsiness Detection Based on Steering Wheel Data Applying Adaptive Neuro-Fuzzy Feature Selection. Sensors 2019, 19, 943.
  6. Akhtar, Z.U.A.; Wang, H. WiFi-Based Driver's Activity Monitoring with Efficient Computation of Radio-Image Features. Sensors 2020, 20, 1381.
  7. Jeon, Y.; Kim, B.; Baek, Y. Ensemble CNN to Detect Drowsy Driving with In-Vehicle Sensor Data. Sensors 2021, 21, 2372.
  8. Omi, T. Detecting Drowsiness with the Driver Status Monitor's Visual Sensing. Denso Tech. Rev. 2016, 21, 93–102.
  9. Bergasa, L.M.; Nuevo, J.; Sotelo, M.A.; Barea, R.; Lopez, M.E. Real-Time System for Monitoring Driver Vigilance. IEEE Trans. Intell. Transp. Syst. 2006, 7, 63–77.
  10. Dinges, D.F.; Grace, R. PERCLOS: A Valid Psychophysiological Measure of Alertness as Assessed by Psychomotor Vigilance; FHWA-MCRT-98-006; Federal Highway Administration: Washington, DC, USA, 1998.
  11. Mbouna, R.O.; Kong, S.G.; Chun, M.G. Visual Analysis of Eye State and Head Pose for Driver Alertness Monitoring. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1462–1469.
  12. Naurois, C.J.; Bourdin, C.; Stratulat, A.; Diaz, E.; Vercher, J.L. Detection and Prediction of Driver Drowsiness Using Artificial Neural Network Models. Accid. Anal. Prev. 2019, 126, 95–104.
  13. Zhang, X.; Wang, X.; Yang, X.; Xue, C.; Zhu, X.; Wei, J. Driver Drowsiness Detection Using Mixed-effect Ordered Logit Model Considering Time Cumulative Effect. Anal. Methods Accid. Res. 2020, 26, 100114.
  14. Tamanani, R.; Muresan, R.; Al-Dweik, A. Estimation of Driver Vigilance Status Using Real-Time Facial Expression and Deep Learning. IEEE Sens. Lett. 2021, 5, 6000904.
  15. Abe, E.; Fujiwara, K.; Hiraoka, T.; Yamakawa, T.; Kano, M. Development of Drowsiness Detection Method by Integrating Heart Rate Variability Analysis and Multivariate Statistical Process Control. SICE J. Control Meas. Syst. Integr. 2016, 9, 10–17.
  16. Fujiwara, K.; Abe, E.; Kamata, K.; Nakayama, C.; Suzuki, Y.; Yamakawa, T.; Hiraoka, T.; Kano, M.; Sumi, Y.; Masuda, F.; et al. Heart Rate Variability-based Driver Drowsiness Detection and Its Validation with EEG. IEEE Trans. Biomed. Eng. 2019, 66, 1769–1778.
  17. Iwamoto, H.; Hori, K.; Fujiwara, K.; Kano, M. Real-driving-implementable Drowsy Driving Detection Method Using Heart Rate Variability Based on Long Short-term Memory and Autoencoder. IFAC-PapersOnLine 2021, 54, 526–531.
  18. Guo, Z.; Pan, Y.; Zhao, G.; Cao, S.; Zhang, J. Detection of Driver Vigilance Level Using EEG Signals and Driving Contexts. IEEE Trans. Reliab. 2018, 67, 370–380.
  19. Arif, S.; Arif, M.; Munawar, S.; Ayaz, Y.; Khan, M.J.; Naseer, N. EEG Spectral Comparison Between Occipital and Prefrontal Cortices for Early Detection of Driver Drowsiness. In Proceedings of the 2021 International Conference on Artificial Intelligence and Mechatronics Systems (AIMS), Jakarta, Indonesia, 28–30 April 2021; pp. 1–6.
  20. Roge, J.; Pebayle, T.; Muzet, A. Variations of the Level of Vigilance and of Behavioural Activities During Simulated Automobile Driving. Accid. Anal. Prev. 2001, 33, 181–186.
  21. Matsuo, H.; Abdelaziz, K. The Measurement, Observation and Evaluation of the Vehicle Driver Drowsiness. IEICE Trans. Inf. Syst. 2015, J98-D, 700–708.
  22. Sunagawa, M.; Shikii, S.; Nakai, W.; Mochizuki, M.; Kusukame, K.; Kitajima, H. Comprehensive Drowsiness Level Detection Model Combining Multimodal Information. IEEE Sens. J. 2020, 20, 3709–3717.
  23. Lee, B.G.; Lee, B.L.; Chung, W. Wristband-Type Driver Vigilance Monitoring System Using Smartwatch. IEEE Sens. J. 2015, 15, 5624–5633.
  24. Jiang, L.; Lin, X.; Liu, X.; Bi, C.; Xing, G. SafeDrive: Distracted Driving Behaviors Using Wrist-Worn Devices. ACM Interact. Mob. Wearable Ubiquitous Technol. 2017, 1, 144.
  25. Tanaka, R.; Akiduki, T.; Takahashi, H. Detection of Driver Workload Using Wrist-Worn Wearable Sensors: A Feasibility Study. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 1723–1730.
  26. Sun, W.; Si, Y.; Guo, M.; Li, S. Driver Distraction Recognition Using Wearable IMU Sensor Data. Sustainability 2021, 13, 1342.
  27. Tsubowa, K.; Akiduki, T.; Zhong, Z.; Takahashi, H.; Omae, Y. A Study of Effects of Driver's Sleepiness on Driver's Subsidiary Behaviors. Int. J. Innov. Comput. Inf. Control. 2021, 17, 1791–1799.
  28. Nagasawa, J.; Akiduki, T.; Zhang, Z.; Miyake, T.; Takahashi, H. A Study of Detection Method of Aimless Driving State by Using Body-Worn Sensors. In Proceedings of the JSME Annual Conference on Robotics and Mechatronics, Yokohama, Japan, 8–11 June 2016; p. 1P1-12a5.
  29. Andreas, B.; Ulf, B.; Bernt, S. A Tutorial on Human Activity Recognition Using Body-worn Inertial Sensors. ACM Comput. Surv. 2014, 46, 33.
  30. Suto, J.; Oniga, S.; Pop, P.C. Feature Analysis to Human Activity Recognition. Int. J. Comput. Commun. Control. 2017, 12, 116–130.
  31. Suto, J.; Oniga, S.; Pop, P.C. Comparison of Wrapper and Filter Feature Selection Algorithms on Human Activity Recognition. In Proceedings of the IEEE International Conference on Computers, Communications and Control 2016, Oradea, Romania, 10–14 May 2016; pp. 124–129.
  32. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Heart rate variability: Standards of measurement, physiological interpretation. Eur. Heart J. 1996, 17, 354–381.
  33. Matsunaga, S.; Naito, T.; Kato, T.; Oguri, K. Driver Condition Estimation Technology Using Vehicle Signal and Heart Rate. In Proceedings of the 2011 JSAE Annual Congress, Yokohama, Japan, 18–20 May 2011; Volume 122, pp. 13–16.
  34. Ravi, N.; Dandekar, N.; Mysore, P.; Littman, M. Activity Recognition from Accelerometer Data. AAAI 2005, 5, 1541–1546.
  35. Kano, M.; Hasebe, S.; Hashimoto, I.; Ohno, H. A New Multivariate Statistical Process Monitoring Method Using Principal Component Analysis. Comput. Chem. Eng. 2001, 25, 1103–1113.
  36. Fujiwara, K.; Miyajima, M.; Yamakawa, T.; Abe, E.; Suzuki, Y.; Sawada, Y.; Kano, M.; Maehara, T.; Ohta, K.; Sasai-Sakuma, T.; et al. Epileptic Seizure Prediction Based on Multivariate Statistical Process Control of Heart Rate Variability Features. IEEE Trans. Biomed. Eng. 2016, 63, 1321–1332.
  37. Kitajima, H.; Numata, N.; Yamamoto, K.; Goi, Y. Prediction of Automobile Driver Sleepiness (1st Report, Rating of Sleepiness Based on Facial Expression and Examination of Effective Predictor Indexes of Sleepiness). Trans. Jpn. Soc. Mech. Eng. C 1997, 63, 3059–3066.
  38. ATR-Promotions, Inc. Compact Wireless Multifunction Sensor TSND121 and Its Amplifier Module TS-EMG01. Available online: https://www.atr-p.com/products/TSND121.html (accessed on 18 October 2020). (In Japanese).
  39. Arakawa, T. A Review of Heartbeat Detection Systems for Automotive Applications. Sensors 2021, 21, 6112.
  40. Lee, H.; Lee, J.; Shin, M. Using Wearable ECG/PPG Sensors for Driver Drowsiness Detection Based on Distinguishable Pattern of Recurrence Plots. Electronics 2019, 8, 192.
  41. NHTSA. Visual-manual NHTSA Driver Distraction Guidelines for In-vehicle Electronic Devices. Fed. Regist. 2013, 78, 24817–24890.
Figure 1. The overview of the proposed system.
Figure 2. Prediction algorithm in this study: (a) flowchart of driver state discrimination; (b) schematic diagram for each detection model based on anomaly detection algorithm.
Figure 3. An example of motion features: (a,b) the difference value $d_i^{\alpha}$ corresponding to normal driving and inattentive driving, respectively; (c) the histograms of the difference values for (a,b).
Figure 4. Experimental setup using the driving simulator: (a) a course scene on driving simulator; (b) driving scenarios; (c) layout of LED, push button switch, and web camera.
Figure 5. Sensor layout for measuring the body movement of limbs, including wrists and ECG on the chest.
Figure 6. Detection results for the five participants: (a–e) correspond to Subj. 1, 2, 3, 4, and 5, respectively. In each panel, the upper two graphs show the results for the drowsiness detection model, and the lower two graphs show the results for the inattentive detection model. The ground truth (orange line), the detection result (blue line), and the time course of the $T^2$ statistic are shown in each graph.
Figure 7. An overview of the categories of the driver's internal state. Note that the categories in this graph correspond to Table 2. Among the studies referred to in this paper, studies of drowsy driving using body-worn sensors can be found in [15,17,23,40]; studies of distracted driving, in [24,25,26]; and studies of driving in an absent-minded state, in [3] and this study.
Table 1. Detection accuracy for all participants.

Subj.    Drowsiness                    Inattention
         Sensitivity   Specificity    Sensitivity   Specificity
1        0.59          0.33           0.94          0.47
2        0.06          0.86           0.73          0.75
3        -             -              0.12          0.99
4        1.00          0.26           0.87          0.17
5        0.41          0.91           0.91          0.49
Avg.     0.52          0.59           0.71          0.58
Table 2. Comparison of related studies for detecting a driver's internal state using wearable-type sensors. Note that the exception is Kume et al. [3], which is an in-vehicle sensor-based method. Additionally, n/a means that no description of the details could be found.

Study | Category | Measuring Method | Participants (Male:Female, Age) | Scenario | Platform | Ground Truth
Abe et al. (2016) [15] | Drowsiness | Wearable RRI telemetry | 27 (17:10, 20s to 40s) | Driving on a highway loop line at night for two hours | DS | Facial expression rating by human referees
Lee et al. (2019) [40] | Drowsiness | Wristwatch-type PPG and chest-belt-type ECG sensor | 6 (n/a, 20 to 35) | n/a | DS | Visual evaluation of facial and body movement
Iwamoto et al. (2021) [17] | Drowsiness | ECG with chest electrode | 25 (17:8, mean 21 ± 1.8) | A monotonous driving task in a dark room for three hours | DS | Labeled based on sleep specialist's score
Lee et al. (2015) [23] | Drowsiness | Wristwatch-type PPG and wrist-worn IMU sensors | 12 (9:3, 21 to 45) | Highway driving simulation | DS | Karolinska sleepiness scale (KSS) every 2 min
Jiang et al. (2018) [24] | Manual distraction | Wrist-worn IMU sensor (on the right wrist) | 20 (10:10, 25 to 35) | Participants perform five different hand gestures, such as smartphone use | Real | Manually labeled
Tanaka et al. (2020) [25] | Cognitive distraction | Wrist-worn IMU sensors | 7 (7:0, mean 22 ± 1.5) | A monotonous driving task with a cognitive task called the N-back task | DS | The task level, that is, N in the N-back task
Sun et al. (2021) [26] | Manual distraction | Wrist-worn IMU sensor (on the right wrist) | 20 (14:6, 21 to 35) | Participants perform four types of gestures: three manual distractions and one regular driving motion | Real | Manual labeling by a passenger
Kume et al. (2014) [3] | Drowsiness and absent-minded state | Steering wheel angle and vehicle speed | 34 (16:18, 20s to 60s) | Driving for 1.5 h on the specified highway section | Real | Subjective evaluation on a 5-point scale per 3 min
This study | Drowsiness and absent-minded state | Wrist-worn IMU sensors and ECG with chest electrode | 5 (2:3, 20 to 45) | A monotonous driving task for approximately an hour | DS | Facial expression rating and reaction time (see Section 3.3)

DS: driving simulator, Real: real vehicle platform, IMU: inertial measurement unit, PPG: photoplethysmogram.
