Search Results (20)

Search Parameters:
Authors = Alexey Kashevnik; ORCID = 0000-0001-6503-1447

14 pages, 4503 KiB  
Article
Personality Traits Estimation Based on Job Interview Video Analysis: Importance of Human Nonverbal Cues Detection
by Kenan Kassab and Alexey Kashevnik
Big Data Cogn. Comput. 2024, 8(12), 173; https://doi.org/10.3390/bdcc8120173 - 28 Nov 2024
Cited by 1 | Viewed by 2810
Abstract
In this research, we analyze non-verbal cues and their impact on estimating job performance and hireability from video interviews. We study a variety of non-verbal cues that can be extracted from video interviews, provide a framework that utilizes the extracted features, and combine them with personality traits to estimate sales abilities. Experimenting on the VPTD (Human Face Video Dataset for Personality Traits Detection) dataset, we demonstrated the importance of smiling as a valid indicator for estimating extraversion and sales abilities. We also examined the role of head movements (represented by the rotation angles roll, pitch, and yaw), since they play a crucial role in evaluating personality traits in general and extraversion and neuroticism in particular. The testing results show how these non-verbal cues can be used as assisting features in the proposed approach to provide a valid, reliable, and accurate estimation of sales abilities and job performance.

28 pages, 1866 KiB  
Article
Human Operator Mental Fatigue Assessment Based on Video: ML-Driven Approach and Its Application to HFAVD Dataset
by Walaa Othman, Batol Hamoud, Nikolay Shilov and Alexey Kashevnik
Appl. Sci. 2024, 14(22), 10510; https://doi.org/10.3390/app142210510 - 14 Nov 2024
Cited by 1 | Viewed by 1922
Abstract
The detection of the human mental fatigue state holds immense significance due to its direct impact on work efficiency, specifically in system operation control. Numerous approaches have been proposed to address the challenge of fatigue detection, aiming to identify signs of fatigue and alert the individual. This paper introduces an approach to human mental fatigue assessment based on the application of machine learning techniques to video of a working operator. For validation purposes, the approach was applied to the "Human Fatigue Assessment Based on Video Data" (HFAVD) dataset, which integrates video data with features computed by our computer vision deep learning models. The incorporated features encompass head movements represented by Euler angles (roll, pitch, and yaw), vital signs (blood pressure, heart rate, oxygen saturation, and respiratory rate), and eye and mouth states (blinking and yawning). The integration of these features eliminates the need for manual calculation or detection of these parameters, and it obviates the requirement for sensors and external devices, which are commonly employed in existing datasets. The main objective of our work is to advance research in fatigue detection, particularly in work and academic settings. For this reason, we conducted a series of experiments applying machine learning techniques to the dataset and assessing the fatigue state from the features predicted by our models. The results reveal that the random forest technique consistently achieved the highest accuracy and F1-score across all experiments, predominantly exceeding 90%. These findings suggest that random forest is a highly promising technique for this task and confirm the strong association between the predicted features used to annotate the videos and the fatigue state.

12 pages, 1001 KiB  
Article
Intelligent Human Operator Mental Fatigue Assessment Method Based on Gaze Movement Monitoring
by Alexey Kashevnik, Svetlana Kovalenko, Anton Mamonov, Batol Hamoud, Aleksandr Bulygin, Vladislav Kuznetsov, Irina Shoshina, Ivan Brak and Gleb Kiselev
Sensors 2024, 24(21), 6805; https://doi.org/10.3390/s24216805 - 23 Oct 2024
Cited by 3 | Viewed by 1852
Abstract
Modern mental fatigue detection methods rely on many evaluation parameters; for example, many researchers use subjective human evaluation or driving parameters to assess this condition. Developing a method for detecting the functional state of mental fatigue is an extremely important task. Although human operator support systems are becoming more and more widespread, at the moment there is no open-source solution that can monitor this human state in real time and with high accuracy based on eye movement monitoring. Such a method would allow the prevention of a large number of potentially hazardous situations and accidents in critical industries (nuclear stations, transport systems, and air traffic control). This paper describes a method for mental fatigue detection based on human eye movements. We based our research on an earlier-developed dataset containing eye-tracking data captured from human operators performing different tasks during the day. Within the method, we propose a technique for determining the gaze characteristics most relevant to mental fatigue detection. The method includes the following machine learning techniques for human state classification: random forest, decision tree, and multilayer perceptron. The experimental results showed that the most relevant characteristics are the average velocity within the fixation area; the average and minimum curvature of the gaze trajectory; the minimum saccade length; the percentage of fixations shorter than 150 ms; and the proportion of time spent in fixations shorter than 150 ms. The proposed method processes eye movement data in real time, with the maximum accuracy (0.85) and F1-score (0.80) reached by the random forest method.
(This article belongs to the Section Physical Sensors)
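Two of the fixation-based characteristics named in the abstract (the percentage of fixations shorter than 150 ms and the proportion of fixation time they account for) can be computed directly from a list of fixation durations. A minimal stdlib-only sketch with made-up values, not the authors' code:

```python
def fixation_features(durations_ms, threshold_ms=150):
    """Compute two gaze features from fixation durations (milliseconds):
    the share of short fixations, and the share of total fixation time
    spent in them."""
    short = [d for d in durations_ms if d < threshold_ms]
    pct_short = len(short) / len(durations_ms)   # fraction of short fixations
    time_share = sum(short) / sum(durations_ms)  # fraction of fixation time
    return pct_short, time_share

# Example with made-up fixation durations
pct, share = fixation_features([100, 120, 200, 400, 90])
print(round(pct, 2), round(share, 3))  # → 0.6 0.341
```

Such features would then feed a classifier (random forest in the paper) alongside velocity, curvature, and saccade-length characteristics.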

12 pages, 5125 KiB  
Article
Remote Heart Rate Estimation Based on Transformer with Multi-Skip Connection Decoder: Method and Evaluation in the Wild
by Walaa Othman, Alexey Kashevnik, Ammar Ali, Nikolay Shilov and Dmitry Ryumin
Sensors 2024, 24(3), 775; https://doi.org/10.3390/s24030775 - 25 Jan 2024
Cited by 11 | Viewed by 2137
Abstract
Heart rate is an essential vital sign for evaluating human health. Remote heart monitoring using cheaply available devices has become a necessity in the twenty-first century to prevent any unfortunate situation caused by the hectic pace of life. In this paper, we propose a new method based on the transformer architecture with a multi-skip connection biLSTM decoder to estimate heart rate remotely from videos. Our method is based on the skin color variation caused by changes in blood volume beneath the skin surface. The presented heart rate estimation framework consists of three main steps: (1) segmentation of the facial region of interest (ROI) based on the landmarks obtained by 3DDFA; (2) extraction of the spatial and global features; and (3) estimation of the heart rate value from the obtained features using the proposed method. This paper investigates which feature extractor better captures the change in skin color related to the heart rate, as well as the optimal number of frames needed to achieve better accuracy. Experiments were conducted using two publicly available datasets (LGI-PPGI and Vision for Vitals) and our own in-the-wild dataset (12 videos collected by four drivers). The experiments showed that our approach achieved better results than previously published methods, making it the new state of the art on these datasets.
(This article belongs to the Special Issue Biomedical Sensors for Diagnosis and Rehabilitation)

14 pages, 3622 KiB  
Article
AI-Based Approach to One-Click Chronic Subdural Hematoma Segmentation Using Computed Tomography Images
by Andrey Petrov, Alexey Kashevnik, Mikhail Haleev, Ammar Ali, Arkady Ivanov, Konstantin Samochernykh, Larisa Rozhchenko and Vasiliy Bobinov
Sensors 2024, 24(3), 721; https://doi.org/10.3390/s24030721 - 23 Jan 2024
Cited by 4 | Viewed by 2253
Abstract
This paper presents a computer vision-based approach to chronic subdural hematoma segmentation that can be performed with one click. Chronic subdural hematoma is estimated to occur in 0.002–0.02% of the general population each year, and the risk increases with age, reaching a frequency of about 0.05–0.06% in people aged 70 years and above. In our research, we developed our own dataset, which includes 53 series of CT scans collected from 21 patients with one or two hematomas. Based on the dataset, we trained two neural network models with U-Net architecture to automate the manual segmentation process. One model performed segmentation based only on the current frame, while the other additionally processed multiple adjacent images to provide context, a technique closer to the behavior of a doctor. We used 10-fold cross-validation to better estimate the developed models' efficiency, and the Dice metric for segmentation accuracy estimation, which reached 0.77. To test our approach, we used scans from five additional patients who were not part of the dataset and created a scenario in which three medical experts carried out hematoma segmentation before we ran segmentation with our best model. We developed an OsiriX DICOM Viewer plugin to integrate our solution into the segmentation process. Segmentation was more than seven times faster using the one-click approach, and the experts agreed that the segmentation quality was acceptable for clinical usage.
(This article belongs to the Special Issue Multimodal Image Analysis with Advanced Computational Intelligence)
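The Dice metric used for evaluation compares a predicted mask against the ground truth. A minimal sketch on flat binary masks (illustrative only, not the authors' implementation):

```python
def dice(pred, truth):
    """Dice coefficient for two binary masks given as flat 0/1 lists:
    2*|A∩B| / (|A| + |B|). Returns 1.0 for two empty masks by convention."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

# Toy 4-pixel masks: 2 overlapping positives out of 3 + 2 predicted/true
print(dice([1, 1, 0, 1], [1, 0, 0, 1]))  # → 0.8
```

A reported Dice of 0.77 therefore means roughly three quarters overlap between predicted and expert-drawn hematoma regions.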

20 pages, 1647 KiB  
Article
A Machine Learning-Based Correlation Analysis between Driver Behaviour and Vital Signs: Approach and Case Study
by Walaa Othman, Batol Hamoud, Alexey Kashevnik, Nikolay Shilov and Ammar Ali
Sensors 2023, 23(17), 7387; https://doi.org/10.3390/s23177387 - 24 Aug 2023
Cited by 10 | Viewed by 2736
Abstract
Driving behaviour analysis has drawn much attention in recent years due to the dramatic increase in the number of traffic accidents and casualties, and many studies show a relationship between the driving environment or behaviour and the driver's state. To the best of our knowledge, these studies mostly investigate relationships between one vital sign and the driving circumstances either inside or outside the cabin. Hence, our paper provides an analysis of the correlation between the driver state (vital signs, eye state, and head pose) and both the vehicle maneuver actions (caused by the driver) and external events (carried out by other vehicles or pedestrians), including the proximity to other vehicles. Our methodology employs several models developed in our previous work to estimate respiratory rate, heart rate, blood pressure, oxygen saturation, head pose, and eye state from in-cabin videos, and the distance to the nearest vehicle from out-cabin videos. Additionally, new models have been developed using a Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (BiLSTM) to classify the external events from out-cabin videos, as well as a Decision Tree classifier to detect the driver's maneuvers using accelerometer and gyroscope sensor data. The dataset used includes synchronized in-cabin/out-cabin videos and sensor data, allowing estimation of the driver state and proximity to other vehicles and detection of external events and driver maneuvers. The correlation matrix was then calculated between all variables to be analysed. The results indicate a weak correlation between the maneuver action and the overtaking external event on one side and the heart rate and blood pressure (systolic and diastolic) on the other. In addition, the findings suggest a correlation between the yaw angle of the head and the overtaking event, and a negative correlation between the systolic blood pressure and the distance to the nearest vehicle. Our findings align with our initial hypotheses, particularly concerning the impact of performing a maneuver or experiencing a caution-demanding event, such as overtaking, on heart rate and blood pressure due to the agitation and tension resulting from such events. These results can be the key to implementing a sophisticated safety system aimed at maintaining the driver's stable state when aggressive external events or maneuvers occur.

35 pages, 14988 KiB  
Article
OperatorEYEVP: Operator Dataset for Fatigue Detection Based on Eye Movements, Heart Rate Data, and Video Information
by Svetlana Kovalenko, Anton Mamonov, Vladislav Kuznetsov, Alexandr Bulygin, Irina Shoshina, Ivan Brak and Alexey Kashevnik
Sensors 2023, 23(13), 6197; https://doi.org/10.3390/s23136197 - 6 Jul 2023
Cited by 6 | Viewed by 5238
Abstract
Fatigue detection is extremely important in the development of different kinds of preventive systems (such as driver or operator monitoring for accident prevention). For this task, the presence of fatigue should be determined with physiological and objective behavioral indicators. To develop an effective fatigue detection model, it is important to record a dataset covering people in a fatigued state as well as in a normal state. We collected data using an eye tracker, a video camera, a stage camera, and a heart rate monitor to record different kinds of signals for analysis. For our proposed dataset, 10 participants took part in the experiment, recording data 3 times a day for 8 days. They performed different types of activity (a choice reaction time task, reading, a Landolt-ring correction test, and playing Tetris), imitating everyday tasks. Our dataset is useful for studying fatigue and finding indicators of its manifestation. We also analyzed publicly available datasets to find the best one for this task; each contains eye movement data and other types of data, and we evaluated each to determine its suitability for fatigue studies, but none fully fits the fatigue detection task. We evaluated the recorded dataset by calculating the correspondence between the eye-tracking data and choice reaction time (CRT), which indicates the presence of fatigue.

12 pages, 410 KiB  
Article
VPTD: Human Face Video Dataset for Personality Traits Detection
by Kenan Kassab, Alexey Kashevnik, Alexander Mayatin and Dmitry Zubok
Data 2023, 8(7), 113; https://doi.org/10.3390/data8070113 - 22 Jun 2023
Cited by 8 | Viewed by 5599
Abstract
In this paper, we propose a dataset for personality traits detection based on human face videos. Ground truth data have been annotated using the IPIP-50 personality test, which every participant completed. To collect the dataset, we developed a web-based platform that allows us to acquire spontaneous answers to predefined questions from the respondents. The website allows the participants to record an interactive interview in order to imitate a real-life interview. The dataset includes 38 videos (2 min on average) of people of different races, genders, and ages. In the paper, we report the Big Five personality traits calculated from the test, as well as the Big Five traits calculated by our own model, which determines this information from video analysis. We present a statistical analysis of the collected dataset, and we also applied the K-means clustering algorithm to cluster the data and report the clustering results.

8 pages, 3360 KiB  
Proceeding Paper
Lightweight 2D Map Construction of Vehicle Environments Using a Semi-Supervised Depth Estimation Approach
by Alexey Kashevnik and Ammar Ali
Eng. Proc. 2023, 33(1), 28; https://doi.org/10.3390/engproc2023033028 - 15 Jun 2023
Cited by 1 | Viewed by 1316
Abstract
This paper addresses the problem of constructing a real-time 2D map for driving scenes from a single monocular RGB image. We present a method based on three neural networks (depth estimation, 3D object detection, and semantic segmentation). We propose a depth estimation neural network architecture that is fast and accurate compared with state-of-the-art models. We designed our model to work in real time on lightweight devices (such as an NVIDIA Jetson Nano and smartphones). The model is based on an encoder–decoder architecture with a composite loss function, i.e., normal loss, VNL, gradient loss (dx, dy), and mean absolute error. Our method shows competitive results compared with state-of-the-art methods while being 30 times faster and smaller.
(This article belongs to the Proceedings of 15th International Conference “Intelligent Systems” (INTELS’22))
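The composite loss mentioned above sums several terms. A toy sketch of two of them, mean absolute error and a finite-difference gradient loss (dx, dy), on tiny 2D depth grids; the normal-loss and VNL terms, and any per-term weights, are omitted here:

```python
def mae(pred, truth):
    """Mean absolute error over a 2D grid (list of rows)."""
    n = sum(len(row) for row in pred)
    return sum(abs(p - t) for rp, rt in zip(pred, truth)
               for p, t in zip(rp, rt)) / n

def grad_loss(pred, truth):
    """Gradient loss: MAE of horizontal (dx) plus vertical (dy)
    finite differences, penalizing mismatched depth edges."""
    def dx(g): return [[r[i + 1] - r[i] for i in range(len(r) - 1)] for r in g]
    def dy(g): return [[g[j + 1][i] - g[j][i] for i in range(len(g[0]))]
                       for j in range(len(g) - 1)]
    return mae(dx(pred), dx(truth)) + mae(dy(pred), dy(truth))

# Toy 2x2 depth maps (made-up values)
pred  = [[1.0, 2.0], [3.0, 4.0]]
truth = [[1.0, 2.5], [3.0, 3.5]]
loss = mae(pred, truth) + grad_loss(pred, truth)  # unweighted sum
print(round(loss, 2))  # → 1.25
```

Gradient terms like these are a common choice in depth estimation because a plain MAE can be low even when depth discontinuities are blurred.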

16 pages, 2602 KiB  
Article
Neural Network Model Combination for Video-Based Blood Pressure Estimation: New Approach and Evaluation
by Batol Hamoud, Alexey Kashevnik, Walaa Othman and Nikolay Shilov
Sensors 2023, 23(4), 1753; https://doi.org/10.3390/s23041753 - 4 Feb 2023
Cited by 18 | Viewed by 3721
Abstract
Blood pressure is one of the most informative vital signs of health conditions. Its changes can shift a person's state from completely relaxed to extremely unpleasant, which makes blood pressure monitoring a routine procedure that almost everyone undergoes whenever something is wrong or suspicious with his/her health. The most popular and accurate ways to measure blood pressure are cuff-based, inconvenient, and expensive. On the bright side, many experimental studies show that changes in the intensities of the RGB color channels reflect variation in the blood flowing beneath the skin, which is strongly related to blood pressure; hence, we present a novel approach to blood pressure estimation based on analysis of human face video using hybrid deep learning models. We analyzed proposed approaches and methods in depth to develop combinations of state-of-the-art models, validated by their testing results on the Vision for Vitals (V4V) dataset against the performance of other available models. Additionally, we introduce a new metric to evaluate the performance of our models: Pearson's correlation coefficient between the predicted blood pressure of the subjects and their respiratory rate at each minute, computed on our own dataset of 60 videos of operators working on personal computers for almost 20 min each. Our method provides a cuff-less, fast, and comfortable way to estimate blood pressure with no equipment needed except a smartphone camera.
(This article belongs to the Section Biosensors)

11 pages, 1856 KiB  
Article
DriverSVT: Smartphone-Measured Vehicle Telemetry Data for Driver State Identification
by Walaa Othman, Alexey Kashevnik, Batol Hamoud and Nikolay Shilov
Data 2022, 7(12), 181; https://doi.org/10.3390/data7120181 - 15 Dec 2022
Cited by 7 | Viewed by 4111
Abstract
One of the key functions of driver monitoring systems is the evaluation of the driver's state, which is a key factor in improving driving safety. Currently, such systems rely heavily on deep learning, which in turn requires corresponding high-quality datasets to achieve the required level of accuracy. In this paper, we introduce a dataset that includes information about the driver's state synchronized with vehicle telemetry data. The dataset contains more than 17.56 million entries obtained from 633 drivers with the following data: driver drowsiness and distraction states, smartphone-measured vehicle speed and acceleration, data from magnetometer and gyroscope sensors, g-force, lighting level, and smartphone battery level. The proposed dataset can be used for analyzing driver behavior and detecting aggressive driving styles, which can help reduce accidents and increase safety on the roads. In addition, we applied the K-means clustering algorithm based on the 11 least-correlated features to label the data. The elbow method showed that the optimal number of clusters could be either two or three; we chose three to label the data into three main scenarios: parking and starting driving, driving in the city, and driving on highways. The clustering result was then analyzed to determine the most frequent critical actions inside the cabin in each scenario. According to our analysis, an unfastened seat belt was the most frequent critical case in the city driving scenario, while drowsiness was more frequent when driving on the highway.
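The labeling step described above is plain K-means plus the elbow method: run the clustering for several values of k and look for the k after which inertia (within-cluster sum of squared distances) stops dropping sharply. A toy 1D sketch over made-up speed values; the paper clusters 11 telemetry features, not one:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny 1D Lloyd's K-means; returns (centroids, inertia).
    Illustrative only, not the paper's pipeline."""
    rng = random.Random(seed)
    cents = rng.sample(points, k)  # init from distinct data points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: (p - cents[i]) ** 2)].append(p)
        cents = [sum(g) / len(g) if g else cents[i]
                 for i, g in enumerate(groups)]
    inertia = sum(min((p - c) ** 2 for c in cents) for p in points)
    return cents, inertia

# Made-up speeds suggesting parking / city / highway regimes
speeds = [0, 1, 2, 30, 32, 34, 90, 95, 100]
for k in (1, 2, 3, 4):
    print(k, round(kmeans(speeds, k)[1], 1))  # inertia flattens after k=3
```

On data with three natural groups like this, the inertia curve bends ("elbows") at k=3, mirroring the paper's choice of three scenarios.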

14 pages, 16189 KiB  
Article
3D Vehicle Detection and Segmentation Based on EfficientNetB3 and CenterNet Residual Blocks
by Alexey Kashevnik and Ammar Ali
Sensors 2022, 22(20), 7990; https://doi.org/10.3390/s22207990 - 20 Oct 2022
Cited by 10 | Viewed by 3901
Abstract
In this paper, we present a two-stage solution to 3D vehicle detection and segmentation. The first stage combines the EfficientNetB3 architecture with multi-parallel residual blocks (inspired by the CenterNet architecture) for 3D localization and pose estimation of vehicles in the scene. The second stage takes the output of the first stage (cropped car images) as input to train EfficientNetB3 for the image recognition task. Using predefined 3D models, we substitute each vehicle in the scene with its match, using the rotation matrix and translation vector from the first stage, to obtain the 3D detection bounding boxes and segmentation masks. We trained our models on an open-source dataset (ApolloCar3D). Our method outperforms all published solutions in terms of 6 degrees of freedom error (6 DoF err).
(This article belongs to the Special Issue Smartphone Sensors for Driver Behavior Monitoring Systems)

13 pages, 4681 KiB  
Article
DriverMVT: In-Cabin Dataset for Driver Monitoring including Video and Vehicle Telemetry Information
by Walaa Othman, Alexey Kashevnik, Ammar Ali and Nikolay Shilov
Data 2022, 7(5), 62; https://doi.org/10.3390/data7050062 - 11 May 2022
Cited by 17 | Viewed by 10784
Abstract
Developing a driver monitoring system that can assess the driver's state is a prerequisite and a key to improving road safety. With the success of deep learning, such systems can achieve high accuracy if corresponding high-quality datasets are available. In this paper, we introduce DriverMVT (Driver Monitoring dataset with Videos and Telemetry). The dataset contains information about the driver's head pose, heart rate, and in-cabin behaviour such as drowsiness and an unfastened seat belt. This dataset can be used to train and evaluate deep learning models that estimate the driver's health state, mental state, concentration level, and his/her activity in the cabin. Developing such systems that can alert the driver in case of drowsiness or distraction can reduce the number of accidents and increase safety on the road. The dataset contains 1506 videos of 9 different drivers (7 males and 2 females), with a total of 5119k frames and over 36 h of footage. In addition, we evaluated the dataset with the multi-task temporal shift convolutional attention network (MTTS-CAN) algorithm, whose mean average error on our dataset is 16.375 heartbeats per minute.
(This article belongs to the Section Information Systems and Data Management)

12 pages, 2131 KiB  
Article
Threats Detection during Human-Computer Interaction in Driver Monitoring Systems
by Alexey Kashevnik, Andrew Ponomarev, Nikolay Shilov and Andrey Chechulin
Sensors 2022, 22(6), 2380; https://doi.org/10.3390/s22062380 - 19 Mar 2022
Cited by 4 | Viewed by 2825
Abstract
This paper presents an approach and a case study for threat detection during human–computer interaction, using the example of driver–vehicle interaction. We analyzed a driver monitoring system and identified two types of users: the driver and the operator. The proposed approach detects possible threats for the driver. We present a method for threat detection during human–system interactions that generalizes potential threats as well as approaches for their detection. The originality of the method is that we frame the problem of threat detection in a holistic way: we build on the driver–ITS system analysis and generalize existing methods for driver state analysis into a threat detection method covering the identified threats. The developed reference model of the operator–computer interaction interface shows how the driver monitoring process is organized, what information can be processed automatically, and what information related to driver behavior has to be processed manually. In addition, the interface reference model includes mechanisms for monitoring operator behavior. As a case study, we present experiments that included 14 drivers, illustrating how the operator monitors and processes information from the driver monitoring system. Based on the case study, we found that when the driver monitoring system detected threats in the cabin and notified drivers about them, the number of threats decreased significantly.
(This article belongs to the Special Issue Smartphone Sensors for Driver Behavior Monitoring Systems)

12 pages, 1857 KiB  
Article
Car Tourist Trajectory Prediction Based on Bidirectional LSTM Neural Network
by Sergei Mikhailov and Alexey Kashevnik
Electronics 2021, 10(12), 1390; https://doi.org/10.3390/electronics10121390 - 9 Jun 2021
Cited by 18 | Viewed by 3125
Abstract
COVID-19 has greatly affected the tourism industry and ways of travel. According to UNWTO predictions, the number of international tourist arrivals will be growing slowly by the end of 2021. One way to keep tourists safe during travel is to use a personal car or a car-sharing service. The sensor-based information collected from the tourist's smartphone during the trip allows analysis of his/her behaviour. For this purpose, we propose to use the Internet of Things with ambient intelligence technologies, which allows information processing using surrounding devices. The paper describes a solution to car tourist trajectory prediction, which has been a demanding subject of research in recent years. We present an approach based on a bidirectional LSTM neural network model. We show the reference model of the tourist support system for car-based attraction-visiting trips. The sensor data acquisition process and the bidirectional LSTM model construction, training, and evaluation are demonstrated. We propose a system architecture that uses the tourist's smartphone for data acquisition as well as more powerful surrounding devices for information processing. The obtained results can be used for tourist trip behaviour analysis.
(This article belongs to the Special Issue Ambient Intelligence in IoT Environments)
