The automotive sector has become one of the largest investors in augmented reality (AR) and virtual reality (VR) technologies, a market expected to reach about $673 billion by 2025 [1]. In particular, the reduced time to market, the need for innovative products, the application of new technologies and the need to continually improve quality in the automotive sector have led to an increase in the use of VR/AR applications for design, manufacturing, training and component evaluation [2,3]. Another important factor is the possibility of identifying problems early in the development process without the time-consuming and expensive implementation of a physical prototype. Technical improvements over recent years in hardware, software products and display technologies make it possible not only to model the design of the different parts of an automobile, but also to simulate their functionality in interactive and realistic driving simulations. These developments demonstrate that VR/AR technologies are the place where interdisciplinary fundamental research and engineering sciences meet to design and evaluate the techniques and processes of tomorrow. Such environments are the perfect “what-if machine” for creating rich audiovisual-haptic immersions and situated experiences. They enable interactive systems that combine (real and virtual) sensory, motor and cognitive experiences and can be explored in an application- and result-oriented manner, without being limited by the technical feasibility of hardware components, the structural and space requirements of different laboratory environments, or the availability of test cars. In this way, designers, developers and customers can interact with products that are still in the development stage, using virtual rooms for physically correct visualizations and driving simulators for immersive simulations of driving dynamics. Acceptance and user experience (UX) in automated vehicles have so far primarily been examined using surveys or flat-screen-based driving simulators. These simulators often evolved into VR simulators, providing a higher degree of immersion. Therefore, in recent times, VR simulators have become a helpful tool for researchers to investigate the influence of different factors on driver response behaviours under different environmental and driving conditions [4].
Another important avenue for studying and predicting driver behaviour has opened up through the rapid progress in AI technologies. With the advent of the third AI wave around 2020, AI started leading to machines capable of learning in a way that is much more similar to how humans learn. These systems will be able to learn from human behaviour, to understand context and meaning, and to use these capacities to adapt to different application fields (in contrast to current systems, which only work well in a particular application environment). Modern AI approaches will also require fewer data samples for training, will learn and function with less supervision, and will communicate with humans in a natural, adaptive and anticipative way, leading finally towards personalized and anticipative AI assistants. The three terms associated with the third wave of AI applications are explainability, interpretability and transparency. Explainable AI (XAI) refers to artificial intelligence in which the system outcome is made transparent so that it can be interpreted and understood by a human (in contrast to current systems, where the input parameters and decision behaviour are hidden in a “black box”) [5,6]. With these novel competencies, XAI aims to promote greater understanding and trust between humans and machines [7]. Therefore, future AI models must be able to explain their decisions, make their decision behaviour transparent, and accept corrective input from the user (augmented intelligence). The feedback from the user can also be used to further train the AI.
Bringing both technologies (XAI and VR/AR simulations) together opens up new perspectives for the development of intelligent vehicles that can learn from multimodal driver data recorded in VR/AR simulations, and can also use these insights to provide novice drivers with individualized and context-sensitive assistance when a deviation from expert behaviour is detected. This enables not only optimized training environments, but also the development of Advanced Driver Assistance Systems (ADAS) that can adapt individually to continuous human driver input, explain their decision behaviour, and provide adequate feedback. This will not only lead to a higher acceptance rate for future ADAS, but also increase drivers’ trust in these systems. This is crucial, as drivers cannot be forced to make use of the system, while logistics companies seek to limit the cost of damage and unnecessary delays. This paper illustrates the approach toward such an intelligent VR-simulator platform for the design and evaluation of personalized ADAS for the truck docking process.
To explore these novel approaches, a specific driving scenario was chosen. In this scenario, truck drivers perform a rearward docking manoeuvre towards a loading bay (dock). The advantage of using this scenario lies in its limited operational design domain, which is less complex and better structured than other well-known driving scenarios in which ADAS operate, such as lane change assistance or adaptive cruise control. The rearward docking scenario, by nature, limits the number of vehicles and the speed at which they operate. Moreover, this scenario benefits from the fact that the loading bays are static and their positions are well known. This driving scenario has been explored for several years, first within the INTRALOG project targeting the development of autonomous rearward docking [8] and currently through the VISTA project [9] (see Section 1.2). In the context of the VISTA project, a VR Truck-Docking Simulator was developed, on which this paper is based and which is described in more detail in Section 2.
1.1. Background and Previous Work
Virtual reality technology has been applied in different areas of the automotive industry by various automotive manufacturers [10]. Marketing and sales are among the application areas in which manufacturers have explored the potential of VR technologies. For instance, VR environments that allow customers to configure a vehicle offer them a more immersive and enhanced experience. Additionally, the manufacturer benefits from a virtual showroom that saves resources such as space. Moreover, in the area of marketing and sales, the possibility of offering a virtual test drive has been explored. The advantage here is that automotive companies gain a novel and powerful channel to advertise their product, even before the car is released. In the area of vehicle design, VR is used to support the development of the product. VR improves the decision-making process and product quality, and it also allows rapid prototyping that can reduce costs and time-to-market.
In line with the application of VR by different automotive manufacturers, novel concepts have also been researched and developed with the support of VR platforms. In fact, VR simulators have recently been used in the development and evaluation of ADAS. A good example is the application of VR to training. Educating users to interact with automated systems is considered highly important [
11,
12].
For example, learning about ADAS functionalities from a written manual has drawbacks such as misinterpretation or forgetfulness. On the other hand, training in a real context might offer a richer and more effective learning experience. However, there are not many automated driving cars available for testing. Additionally, training drivers in a real setting is not risk-free: training drivers in some more dangerous scenarios (e.g., driver distraction) would be valuable, but such scenarios are naturally avoided. Hence, researchers have explored VR simulators to train drivers. This provides drivers with the opportunity to have a safe and risk-free interaction with ADAS, promoting the experience of different scenarios such as adaptive cruise control or an automatic take-over request [13]. VR has several benefits: (1) prototypical traffic and environmental situations can be repeated as many times as necessary; (2) concepts such as a novel Human–Machine Interaction (HMI) approach can be simulated as digital prototypes and easily integrated into the VR simulation, saving time and money; (3) intensive tests with many participants can be conducted, leading to valuable insights.
In a recent study, researchers investigated the effectiveness of a light VR-simulator to train automated vehicle drivers by comparing it against a fixed-base simulator [
13]. This light VR system, composed of a Head-Mounted Display (HMD) and a game racing wheel, proved to promote an adequate level of immersion for learning about ADAS functionalities and offered the advantage of being portable and cost-effective. In addition to reinforcing the idea that a VR-simulator is a valuable tool for training purposes in automated vehicles, the results of this study also suggest that participants preferred the light VR system in terms of usefulness, ease of use and realism.
Until now, researchers have explored VR simulators mainly to study ADAS and automotive HMI in passenger cars. However, the literature has rarely mentioned VR simulators in the context of truck driving. Due to the increasing integration of ADAS functionalities in trucks, there is a need to find new ways of developing and designing ADAS for trucks. In a recent study, a light VR-simulator was used to examine different HMI designs with the intention of assessing culture-related effects on perception and preferences among German and Japanese truck drivers [14]. In this study, the researchers highlighted that the light VR-simulator allowed the quick and efficient evaluation of HMI designs. Besides this, the researchers also suggested that the integration of eye-tracking might be valuable to better understand driving behaviour and HMI usage.
Recently, there has been an increase in the use of neural network approaches for the analysis and prediction of driver behaviour based on input data retrieved from sensors within the car and from the outside world. To predict the driving behaviour, habits and intentions of the respective user in response to the external world and internal car events, driver models have been developed [15,16]. To date, these models are static systems, tailored to the average driver and not sensitive to inter-driver and intra-personal differences, needs or preferences [17]. Therefore, each evaluation test has to be performed with several driver models representing the various driver types. Currently, the system can only be personalized at the beginning of the drive or by manual driver interaction. It is believed that a continuous, individual adaptation to personal driving habits is crucial for a good driving experience: a non-personalized system might otherwise annoy the driver with too much or irrelevant information. As a consequence, the driver might get sidetracked, lose trust and confidence in the system, or even disable it. A continuous adaptation requires the synchronized integration of data streams from different components. Only a holistic system that continuously adapts to the individual user can reduce the number of accidents and road deaths by providing user- and situation-relevant feedback. This ensures a good user experience and reduces the cognitive load for the driver. The application of neural network techniques for time-series prediction and classification of individual driver behaviour can open up new possibilities for the implementation of more flexible driver models.
Recurrent Neural Networks (RNNs) are currently used in the automotive sector [18,19], mainly for time-series prediction and classification. They can analyze data and behaviour over time to identify temporal patterns in the provided data sets. RNNs are used to track and predict the paths of moving objects (e.g., pedestrians) and thus to determine potential collisions. RNNs can also be used to predict human actions and potential future events. For example, Li et al. [18] trained an RNN with long short-term memory units to learn a human-like driver model that can predict future steering wheel angles based on road curvature, vehicle speed and previous steering movements. They argue that their approach provides more human-like steering wheel movements compared to preview-based models and can therefore increase the acceptance of autonomous vehicles, as these will behave more like human drivers.
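To make this modelling idea more concrete, the following minimal sketch (in PyTorch) shows one plausible way to set up such an LSTM-based steering predictor; the architecture, hyperparameters and synthetic data are illustrative assumptions and do not reproduce the exact model of Li et al. [18].

```python
import torch
import torch.nn as nn

class SteeringLSTM(nn.Module):
    """Minimal LSTM driver model: a window of past (road curvature, vehicle
    speed, steering angle) samples -> predicted next steering wheel angle.
    Hyperparameters are illustrative, not those used in [18]."""
    def __init__(self, n_features=3, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # regress a single steering angle

    def forward(self, x):             # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)         # out: (batch, time_steps, hidden_size)
        return self.head(out[:, -1])  # use the last time step only

# Training-loop sketch on synthetic data (to be replaced by recorded driving data)
model = SteeringLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(32, 50, 3)   # 32 sequences, 50 time steps, 3 input features
y = torch.randn(32, 1)       # target steering wheel angle at the next step
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```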
A research area that is gaining interest in the automotive community concerns the individualization of ADAS. The concept aims at adapting the ADAS assistance functions to the drivers’ preferences, skills and driving behaviour [17]. The authors suggest that individualization is a continuous process in which the ADAS assistance functions adapt based on the driver’s behaviour. In that sense, the implementation of the individualization module must follow a driver-centric approach in which it continuously integrates the input of the driver. Darwish and Steinhauer [20] recently explored an approach that uses deep reinforcement learning to personalize the driving experience, focusing on a scenario in which the adaptive cruise control function is used to keep an optimal distance to the car in front. In this work, the authors emphasize that drivers’ behaviour and preferences are volatile, meaning that they can change rapidly depending on their experience and the novelty of the situations in which drivers have to operate. Hence, the individualization of ADAS is only achievable with a data-driven approach that is capable of adapting in real time while relying on a limited amount of data. A personalized ADAS must be able to predict the driving behaviour, habits and intentions of the respective user. Additionally, external environmental data (traffic signs, pedestrians, obstacles, events) has to be made available to the system, e.g., through visual object recognition.
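As a simplified illustration of reinforcement-learning-based personalization, the sketch below uses tabular Q-learning (rather than the deep reinforcement learning employed in [20], to keep the example short) to adapt the preferred adaptive-cruise-control time gap from driver feedback; the states, actions and reward signals are assumptions.

```python
import random
from collections import defaultdict

# Tabular Q-learning sketch for personalizing the preferred following gap of
# an adaptive cruise control (ACC) function. Illustration of RL-based
# personalization only, NOT the deep RL method of [20]; all signals are assumed.

GAPS = [1.0, 1.5, 2.0, 2.5, 3.0]            # candidate time gaps in seconds (actions)
q = defaultdict(float)                      # Q[(state, action)] -> value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def choose_gap(state):
    """Epsilon-greedy selection of the time gap the ACC should hold."""
    if random.random() < epsilon:
        return random.choice(GAPS)
    return max(GAPS, key=lambda a: q[(state, a)])

def reward_from_driver(overrode, comfort_rating):
    """Reward shaped from driver feedback: a manual override (braking or
    accelerating) is penalized, a subjective comfort rating in [0, 1] rewarded."""
    return -1.0 if overrode else comfort_rating

def update(state, action, reward, next_state):
    best_next = max(q[(next_state, a)] for a in GAPS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

# Example episode with simulated feedback from a hypothetical driver
# who prefers a gap of about 2.0 s on the highway.
state = "highway"
for _ in range(100):
    gap = choose_gap(state)
    overrode = abs(gap - 2.0) > 0.75
    r = reward_from_driver(overrode, comfort_rating=1.0 - abs(gap - 2.0) / 2.0)
    update(state, gap, r, state)
print({a: round(q[(state, a)], 2) for a in GAPS})
```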
In fact, one of the greatest opportunities in exploring a VR-simulator in truck driving contexts is that multi-dimensional driver data can be recorded to inform the design and evaluation of ADAS functionalities, HMI concepts and driver models. For example, not only data about the vehicle (e.g., steering, braking, gear, truck position), but also data about the driver can be recorded. For instance, the integration of a microphone can be useful to record verbal behaviour, and hand-tracking allows the study of manual interaction with any vehicle device, including the HMI, as well as of non-verbal communication. Eye-tracking offers a multitude of possibilities, such as determining where the driver is looking or evaluating driving fatigue [21].
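As a rough sketch of what a single multimodal recording sample could contain, the following data structure groups vehicle and driver signals per simulator frame; the field names and units are assumptions, not the actual VISTA-Sim logging format.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SimFrame:
    """One logged simulator frame; fields are illustrative assumptions."""
    timestamp: float                           # seconds since scenario start
    # Vehicle state
    steering_angle: float                      # rad, current steering wheel angle
    brake: float                               # 0..1 pedal position
    throttle: float                            # 0..1 pedal position
    gear: int                                  # negative value for reverse
    truck_pose: Tuple[float, float, float]     # x, y, heading in the yard frame
    trailer_pose: Tuple[float, float, float]   # x, y, heading of the trailer
    # Driver state
    gaze_target: Optional[str] = None          # e.g., "left_mirror", "hmi_tablet"
    hand_positions: List[Tuple[float, float, float]] = field(default_factory=list)
    speech_active: bool = False                # microphone voice-activity flag
```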
1.2. Content and Goals of the VISTA Project
While performing the docking manoeuvre at the distribution centre, the driver has to steer a truck with one or even two attached trailers (a truck combination) towards the loading bay. Handling the manoeuvre properly is a challenge: the driver has constrained visibility from the cabin and the manoeuvre area is limited in space. Additionally, the manoeuvre must be performed in a place of a dynamic nature (e.g., other trucks manoeuvring, people walking), meaning that accidents can happen, resulting in significant costs due to collision damage. For that reason, the Intelligent Truck Applications in Logistics (INTRALOG) project developed and investigated the effect of an automated docking system [8]. Path-tracking and path-planning algorithms were developed for logistic vehicle combinations with one or two articulations, for both forward and rearward docking manoeuvres.
Although some of the main problems of this driving scenario could be attenuated with sensors on the trailing vehicles and an automated docking system, this would imply that all trucks need the same technology integrated, resulting in significant costs. Moreover, trailers and trucks are often owned by different companies, which complicates trailer instrumentation. Furthermore, logistics companies have difficulties attracting skilled drivers and therefore want to provide support to less-trained drivers. This calls for docking support functionality that suits drivers with varying skills and experience. Based on these observations, a novel approach is currently being explored in the VISTA project (VIsion Supported Truck docking Assistant). The VISTA project aims to develop a framework integrating a camera-based localization system to track the position of the truck and trailer(s) in real time during the docking process at a distribution centre. Based on this localization, the optimal docking path from the current position to the final unloading station is calculated, and assistive audiovisual instructions (e.g., steering recommendations) are provided either via an appropriately coloured light array located above the windshield or via an HMI module displayed on a tablet placed on top of the cockpit in the truck cabin.
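To illustrate how such a steering recommendation could, in principle, be derived from the localization data, the following simplified sketch maps the truck's lateral and heading deviation from a reference docking path to a coarse left/right/keep hint; the gains, thresholds and sign conventions are assumptions, and the actual VISTA path-planning and HMI logic (including the reversed steering geometry of rearward driving with articulated trailers) is not reproduced here.

```python
import math

def steering_hint(truck_x, truck_y, truck_heading,
                  ref_x, ref_y, ref_heading,
                  lateral_gain=1.0, heading_gain=2.0, deadband=0.05):
    """Map the deviation from a reference docking path to a coarse hint.
    Simplified illustration; gains, thresholds and sign convention are
    assumptions, not the VISTA assistant's logic.
    Returns 'steer_left', 'steer_right' or 'keep'."""
    # Signed lateral offset of the truck from the reference point,
    # measured along the left normal of the reference heading.
    dx, dy = truck_x - ref_x, truck_y - ref_y
    lateral_error = -dx * math.sin(ref_heading) + dy * math.cos(ref_heading)
    # Heading error wrapped to [-pi, pi]
    heading_error = math.atan2(math.sin(truck_heading - ref_heading),
                               math.cos(truck_heading - ref_heading))
    # Combine both errors into one corrective signal
    correction = lateral_gain * lateral_error + heading_gain * heading_error
    if correction > deadband:
        return "steer_right"
    if correction < -deadband:
        return "steer_left"
    return "keep"

# Example: truck offset 0.3 m to the right of the reference path, heading
# aligned with it -> prints 'steer_left' under the sign convention above.
print(steering_hint(0.3, 10.0, math.pi / 2, 0.0, 10.0, math.pi / 2))
```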
The target of the present research is to develop VISTA-Sim, a platform that uses a VR-simulator as an environment to investigate driver performance, train drivers, and develop and evaluate different forms of context-sensitive and personalized driver assistance feedback. The driving scenario in which this platform is studied is the docking of a truck combination at a loading bay in a distribution centre. VISTA-Sim has two main goals. First, the simulator will serve as a tool for validating the HMI and the different feedback components (such as lights or audiovisual hints, e.g., indication arrows or verbalized steering recommendations) designed for the VISTA project. This offers a driver-centred design approach that supports decisions at an early stage of the project. The second goal is to use the platform to record data from which machine-learning algorithms can learn about novice and expert behaviours, allowing the implementation of a driver model. These insights can be used to estimate individual driving behaviour or to detect deviations from expert driver performance, in order to provide context- and user-adequate feedback.
In summary, this paper proposes a novel and holistic approach to develop and evaluate personalized driver assistance using VR technology.