1. Introduction
Driving simulators have emerged as a powerful alternative to real-world experiments for studying driver behavior, vehicle dynamics, and traffic interactions. Their advantages are twofold: they ensure participant safety, and they offer unparalleled control over experimental conditions [1]. In contrast to on-road testing, simulators enable the reproduction of rare or dangerous scenarios—such as sudden merging, near-miss events, or collisions—without risk to human life [2]. Furthermore, in a simulator-based setup, researchers can precisely manipulate variables like the number and behavior of surrounding vehicles, road geometry, weather conditions, and even the time of day, thereby eliminating confounding effects that often complicate field studies [2,3,4,5]. However, a critical challenge remains in determining how closely participant behavior mimics real-world driving [6]. Drivers may behave differently if the simulation environment lacks realistic sensory cues. To address this, modern simulators rely on immersive technologies such as head-mounted VR headsets, motion platforms that simulate inertial feedback, and visually convincing physics engines that govern vehicle behavior [7,8,9]. The convergence of gaming hardware and high-fidelity simulation software has significantly reduced the cost barrier. For example, commercial racing platforms like Assetto Corsa now offer realistic vehicle physics, advanced rendering, and customization options suitable for academic use [10].
While these developments support naturalistic driving, a major limitation persists: most driving simulators are single-participant systems. In such setups, a human driver operates in a virtual world surrounded by simulated vehicles whose behaviors follow predefined, non-adaptive scripts. This restricts interaction to a unidirectional dynamic, where the human driver reacts to static or rule-based traffic, but the surrounding vehicles do not respond to the driver’s behavior in any meaningful or realistic way [11,12,13,14]. As a result, these systems fail to capture the rich, bidirectional human–human interactions observed in real-world traffic—such as mutual negotiation during lane changes, merging behavior, or implicit signaling between drivers. The literature identifies three dominant approaches to studying driver interactions: simulation models that assume driver behavior through mathematical or rule-based formulations [15,16]; conventional simulators that include only one human driver surrounded by scripted traffic [11,17,18]; and naturalistic studies conducted on real roads with human drivers [19,20,21]. While the first two approaches lack authenticity in capturing human–human interactions, the third introduces safety risks and uncontrollable environmental variables [11,21,22,23]. To overcome these limitations, networked driving simulators (Figure 1) have emerged, connecting multiple human drivers to a shared virtual environment. These systems enable dynamic, reciprocal behavior between drivers, creating opportunities to study how complex traffic interactions unfold when all participants are real people [4,11,12,24,25]. Importantly, drivers tend to behave more attentively and naturally when surrounded by other humans than by simulated traffic, and without this human presence it is difficult to draw scientifically sound conclusions about interactive driving behavior. Despite their growing relevance, very few studies have used networked simulators to collect data for building realistic models of multi-human driving interaction.
Driving simulators not only allow for the observation of behavioral responses but also provide a rich stream of vehicle telemetry, such as speed, acceleration, steering angle, brake pressure, and throttle input. These data can be supplemented with physiological measures to gain insight into drivers’ internal states. One of the most powerful tools in this context is electroencephalography (EEG), which captures brain activity with high temporal resolution and has been widely used to study attention, cognitive workload, and stress during dynamic tasks [26,27]. Traditional EEG systems, designed for clinical use, are often bulky and impractical for simulator environments, particularly those incorporating motion platforms. For such applications, dry scalp EEG systems—which require no conductive gel and can be worn comfortably—are preferable, despite their reduced signal fidelity [28,29,30]. Integrating EEG into a multi-participant simulator, however, introduces significant technical challenges related to data synchronization. Each participant’s EEG and driving data are captured using separate software platforms, such as Assetto Corsa for driving telemetry and a dedicated system for EEG signals. These must be precisely synchronized to a common time base to allow for their valid interpretation. This complexity is compounded when scaling to multiple participants, where synchronization must be ensured not just within each participant’s data but also across all participants involved in the experiment. Such alignment is crucial when studying collaborative or competitive driving tasks—like highway merging or overtaking—where researchers may wish to compare the timing of brake pressure and neural activity across individuals. In this paper, we present, for the first time, a comprehensive framework for a multi-participant networked driving simulator integrated with dry EEG, with full synchronization of behavioral and neural data streams. We introduce custom-built software that synchronizes data acquisition across multiple systems and participants, making it significantly easier for other researchers to adopt this cost-effective and scalable approach.
2. System Architecture
The experimental system consists of a set of immersive, networked driving simulators designed to study human driving behavior and cognitive activity in a shared virtual environment. Each simulator functions as an independent node within a multi-participant traffic simulation, allowing multiple participants to drive simultaneously while their driving telemetry and EEG data are recorded in real time. This section describes the core components of the system, including the hardware and software used in each simulator and the EEG apparatus.
2.1. Driving Simulator Configuration
The driving simulator used in this study is designed to mimic the physical and perceptual cues of real-world driving as closely as possible, while remaining modular and cost-effective. Each simulator (Figure 2) includes the following: a physical driving cockpit (seat and frame), a motion platform to provide inertial feedback, a steering wheel and pedal set, a visual interface (either a high-resolution curved monitor or a VR headset), the Assetto Corsa simulation software, and a dry-electrode EEG headset paired with a trigger hub. The simulators are connected via a local server running Assetto Corsa, allowing participants to interact with one another in a shared virtual scenario. While the configuration supports flexibility in hardware selection, such as different wheelbases or display types, compatibility with Assetto Corsa must be ensured. Together, these elements create an immersive and interactive driving environment where naturalistic driver behavior can be observed and recorded. The system is built with flexibility and reproducibility in mind, allowing individual components to be modified or upgraded based on experimental needs and participant preferences.
2.1.1. Driving Cockpit
The driving cockpit serves as the physical foundation of the simulator. It consists of a rigid frame and an adjustable racing seat, both sourced from Next Level Racing. The frame is built from aluminum profiles and steel reinforcements, offering the structural integrity necessary to support a motion platform and high-torque steering systems. It also includes mounting interfaces for the pedal box, steering column, and motion platform, ensuring ergonomic alignment and minimal vibration interference.
The cockpit’s adaptable design is crucial for participant comfort and realism. Seat position, tilt, and distance from the pedals can all be adjusted to accommodate drivers of different heights and postures, allowing them to assume a natural driving position. This is important not only for realism, but also for minimizing muscular strain during extended trials and reducing motion artifacts in EEG recordings. The cockpit also includes brackets and cable channels for securing wiring and peripheral devices, reducing clutter and preventing mechanical interference during operation.
2.1.2. Motion Platform
A key feature of the simulator is the integration of a motion platform between the seat and the base of the cockpit. The motion platform used in this setup is the Motion Platform V3 by Next Level Racing, which is designed for racing and flight simulators. It provides two degrees of freedom—pitch and roll—simulating longitudinal and lateral vehicle dynamics such as braking, acceleration, and cornering forces.
The motion feedback adds kinesthetic realism to the visual experience, enabling participants to feel the forces associated with driving maneuvers. This haptic feedback plays a critical role in immersing the participant and eliciting naturalistic responses to stimuli, such as adjusting steering angle or braking force in response to perceived deceleration. The platform was calibrated using the manufacturer’s Platform Manager software, with motion profiles iteratively tuned to be realistic yet non-fatiguing. The gain and scaling of the motion response were adjusted to match the physical limits of the driving scenario while accounting for participant safety and comfort.
Importantly, the motion platform must operate silently and with minimal mechanical delay to avoid introducing latency between the visual and haptic feedback, as such latency can result in motion sickness or reduced immersion. Careful integration was also required to ensure that platform vibrations did not introduce noise into the EEG signal during recording: vibration isolation was implemented by mounting and tuning steel springs between the base of the motion platform and the cockpit frame. These springs absorb residual vibrations and decouple platform motion from the EEG headset, contributing to more stable neural signal acquisition.
2.1.3. Steering Wheel and Pedals
The steering and pedal system forms the primary input interface for the driver. In our setup, the steering wheel system includes a direct-drive servo motor base and an interchangeable wheel rim, both compatible with Assetto Corsa. Direct-drive steering systems are preferred over gear- or belt-driven alternatives due to their high torque resolution, instantaneous force feedback, and absence of mechanical lag. This ensures the driver feels realistic resistance and feedback during cornering, collisions, or road texture changes, enhancing behavioral fidelity.
The pedal set includes independent modules for the clutch, brake, and accelerator, each with customizable tension and travel distance. Load-cell-based braking is used to measure actual pedal force rather than displacement, providing a more accurate representation of real-world braking behavior. This level of granularity is essential in experiments focused on fine motor control and reaction time measurements. Both the steering and pedal systems are mounted securely to the cockpit frame to eliminate flex and shifting, preserving consistent input-response characteristics across trials.
2.1.4. Visual Interface
Visual immersion is achieved through either a high-resolution curved monitor (Samsung R55 series) or a VR headset (HTC Vive Pro 2), depending on participant preference and experimental requirements. The curved monitor offers a panoramic field of view that helps approximate the peripheral vision available in a real vehicle. With a wide aspect ratio and high refresh rate, it ensures smooth rendering of the simulation environment, minimizing aliasing and screen tearing.
For a more immersive experience, the VR headset is used to place the participant directly within the 3D driving environment. The HTC Vive Pro 2 features dual displays with a combined resolution of 4896 × 2448 pixels and a refresh rate of 120 Hz, providing crisp visuals and minimal latency. Head tracking allows the simulation to dynamically adjust the driver’s point of view based on head orientation, reinforcing spatial awareness and realism.
However, VR is not suitable for all participants. Some may experience discomfort, disorientation, or motion sickness due to the sensory mismatch between the visual and vestibular systems. For this reason, the system offers the option to use either visual interface, allowing researchers to select the most appropriate display mode based on participant tolerance and experimental goals. Regardless of their choice, the simulator delivers a consistent visual experience by maintaining identical field-of-view angles, screen distances, and rendering parameters across modalities.
2.1.5. Simulation Software: Assetto Corsa
The simulation environment is powered by Assetto Corsa, a commercially available racing simulator known for its realistic vehicle physics, detailed graphics, and extensive modding capabilities. Although designed primarily for entertainment, Assetto Corsa’s open, modifiable asset architecture makes it highly suitable for academic research. Many commercial driving simulator platforms are expensive and do not support multi-driver scenarios [10]. Frutos et al. [10] evaluated multiple racing simulators for their suitability in transportation research and found that Assetto Corsa scored highly in both graphics quality and physical realism.
Its physics engine models tire friction, suspension behavior, drivetrain dynamics, and aerodynamic forces with high fidelity, creating a convincing driving experience. This realism is crucial when studying naturalistic behavior, as drivers are more likely to engage meaningfully with the environment when it behaves as expected. Additionally, Assetto Corsa supports multiple input types, customizable force feedback profiles, and telemetry output through shared memory, allowing researchers to collect real-time data on speed, throttle, braking, steering angle, gear selection, and more.
One of Assetto Corsa’s most valuable features for this project is its ability to host local multiplayer sessions via a dedicated server. This enables multiple simulators to be networked into a shared driving scenario, allowing human drivers to interact with one another in real time. Because all vehicles operate in the same virtual environment, it is possible to observe genuine driver–driver interactions, such as merging, overtaking, and cooperative maneuvers. The system supports the importing of custom-designed tracks and vehicles, allowing experiments to be tailored to specific road geometries, traffic densities, or behavioral hypotheses.
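For concreteness, the fragment below sketches how such a local session might be configured. The field names follow the server_cfg.ini template shipped with the stock Assetto Corsa dedicated server; all values shown, including the car and track names, are placeholders for illustration rather than the exact configuration used in this study.

; Illustrative server_cfg.ini fragment for a local two-driver session
; (placeholder values; adapt to the installed cars, tracks, and network)
[SERVER]
NAME=Lab Networked Driving Session
CARS=ks_toyota_gt86            ; allowed car model(s)
TRACK=custom_lab_route         ; imported custom track folder (hypothetical name)
MAX_CLIENTS=2                  ; one slot per human driver
UDP_PORT=9600
TCP_PORT=9600
HTTP_PORT=8081
REGISTER_TO_LOBBY=0            ; keep the session on the local network

Keeping REGISTER_TO_LOBBY disabled confines the session to the laboratory network, which avoids outside participants joining and removes any dependence on external lobby services.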
The combination of Assetto Corsa’s physics accuracy, real-time data access, and multiplayer networking makes it a powerful platform for cognitive and behavioral driving research. When paired with physiological data collection methods like EEG, it provides a rich multimodal dataset for investigating human performance in complex traffic scenarios.
2.2. EEG Apparatus and Configuration
To capture participants’ brain activity, the EEG setup includes the DSI-24 dry electrode headset and a trigger hub (Figure 3), both manufactured by Wearable Sensing [31,32]. The DSI-24 collects EEG signals using 21 dry electrodes arranged according to the international 10–20 system [33,34]. This electrode placement method ensures standardized coverage of key cortical regions while allowing for rapid deployment and minimal participant discomfort.
Each electrode records small variations in scalp potential due to underlying neural activity. These signals are sampled and streamed using the DSI Streamer software, which also supports synchronized data capture across multiple headsets. The trigger hub provides an external pulse to all EEG headsets at the start of each experiment, enabling the precise temporal alignment of EEG recordings across participants.
Dry-electrode EEG systems present a compelling solution for traffic research involving driving simulators, particularly in studies requiring rapid participant turnover, naturalistic behavior, and integration with immersive technologies like VR. Compared to traditional wet systems, dry EEG offers significant practical advantages, such as minimal setup time, enhanced participant comfort, and better compatibility with head-mounted displays, making it ideal for non-clinical, real-world experimental settings [35,36,37]. These strengths are especially valuable in dynamic experiments where participants are exposed to motion feedback and frequent headset reuse is needed. While dry EEG systems typically exhibit higher electrode impedance and may be more susceptible to motion artifacts and environmental noise, these challenges can be mitigated with proper headset fit and calibration, particularly for participants with dense or curly hair. The lower electrode count does reduce spatial resolution, but for applications focused on cognitive state estimation rather than fine-grained source localization, this tradeoff is acceptable. Overall, the balance of usability and operational efficiency makes dry EEG a practical and effective choice for large-scale, motion-enabled traffic simulation studies [35,36,37].
3. Methodology
3.1. Data Collection
The system supports the simultaneous recording of two parallel data streams for each participant: EEG signals and driving telemetry. EEG data can be sampled at 300 Hz using the DSI-24 headset and managed through the DSI Streamer software. In parallel, the system can capture detailed driving telemetry, including speed, steering angle, brake pressure, and throttle input, via a custom Python 3.13 script that interfaces with Assetto Corsa’s shared memory API [38]. While Assetto Corsa’s server provides updates at approximately 8 Hz, the script is designed to sample key driving variables at up to 300 Hz, aligning with the EEG sampling rate to facilitate high-resolution synchronization across modalities.
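The listing below sketches such a polling reader. It is a minimal sketch, not the exact script used in this study: it assumes the documented SPageFilePhysics layout of Assetto Corsa’s Windows shared-memory physics page (only the first eight fields are unpacked here), and field offsets should be verified against the installed game version.

# Minimal sketch of a high-rate telemetry poller for Assetto Corsa's
# shared-memory physics page (Windows only). Field order follows the
# documented SPageFilePhysics layout; verify against your AC version.
import mmap
import struct
import time

PHYSICS_TAG = "Local\\acpmf_physics"  # shared-memory name exposed by AC
FMT = "<i3f2i2f"                      # packetId, gas, brake, fuel,
                                      # gear, rpms, steerAngle, speedKmh
SIZE = struct.calcsize(FMT)

def read_physics(shm):
    shm.seek(0)
    packet_id, gas, brake, fuel, gear, rpms, steer, speed = struct.unpack(
        FMT, shm.read(SIZE))
    return {"packet_id": packet_id, "gas": gas, "brake": brake,
            "gear": gear, "rpm": rpms, "steer_deg": steer,
            "speed_kmh": speed, "pc_time": time.time()}

if __name__ == "__main__":
    # Attach to the page AC publishes while a session is running.
    shm = mmap.mmap(-1, 1024, tagname=PHYSICS_TAG)
    period = 1.0 / 300.0  # poll at ~300 Hz to match the EEG sampling rate
    rows = []
    try:
        while True:
            rows.append(read_physics(shm))
            time.sleep(period)
    except KeyboardInterrupt:
        print(f"captured {len(rows)} samples")

Because the shared-memory page updates faster than the 8 Hz server stream, polling it locally at 300 Hz yields telemetry that is dense enough to be aligned sample-by-sample with the EEG recording.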
To ensure synchronized EEG recording across multiple simulators, the system uses a trigger hub that sends a simultaneous pulse to all EEG headsets at the start of each experiment. This mechanism enables the consistent alignment of EEG data across participants, independent of individual system clocks. Driving telemetry logging, in contrast, is initiated manually on each simulator PC, which means timestamps across telemetry files are not inherently synchronized. The system architecture is designed to generate two parallel raw data streams per participant, one for EEG signals and one for driving telemetry, supporting flexible post-processing and alignment strategies.
The EEG files include time-indexed data from 21 electrodes, metadata fields, and trigger markers. The driving data files include time-series values for vehicle parameters, a local PC clock variable (pc_time), and a clock (session_time_remaining) that represents the countdown of the simulation session.
3.2. Data Processing and Synchronization
To enable meaningful cross-participant comparisons, the synchronization of these datasets is essential. The session_time_remaining variable is first converted to a forward-counting value, time_elapsed, and all time variables are reformatted into consistent units (seconds). Coordinate strings are parsed and converted into numerical values where necessary.
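For illustration, this pre-processing step might look like the sketch below, under assumed column names: session_time_remaining comes from the telemetry logs described above, while world_position and the millisecond countdown unit are hypothetical stand-ins for the actual log schema.

# Sketch of telemetry pre-processing: convert the countdown clock to a
# forward-counting time_elapsed (seconds) and parse coordinate strings.
import pandas as pd

def preprocess_telemetry(df: pd.DataFrame) -> pd.DataFrame:
    # session_time_remaining counts down; flip it into elapsed time.
    # Assumes the countdown is logged in milliseconds.
    t0 = df["session_time_remaining"].iloc[0]
    df["time_elapsed"] = (t0 - df["session_time_remaining"]) / 1000.0
    # Parse "(x, y, z)"-style coordinate strings into numeric columns.
    # "world_position" is a hypothetical column name for illustration.
    coords = df["world_position"].str.strip("()").str.split(",", expand=True)
    df[["pos_x", "pos_y", "pos_z"]] = coords.astype(float)
    return df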
Since all EEG headsets are triggered simultaneously using the trigger hub, the EEG datasets are inherently synchronized. Similarly, driving telemetry files from different simulators can be aligned using the shared session_time_remaining clock. However, to avoid introducing artifacts from EEG interpolation, synchronization is anchored to the EEG Time variable. All driving telemetry data are interpolated to match this reference timeline.
The pc_time values are synchronized by anchoring them to the EEG Time, which is calibrated across participants using a hardware trigger provided by the EEG trigger hub. Driving data streams from each simulator are synchronized using the time_elapsed variable derived from the Assetto Corsa server clock session_time_remaining, which itself is synchronized via NTP across machines. Additionally, each recording captures a reference pc_time at the start of both EEG and simulator data collection. Using the initial EEG pc_time and incremental Time values, a synthetic pc_time series is constructed. The simulator’s pc_time is then locally aligned with this synthetic EEG clock. Global synchronization is achieved by referencing all data streams to the unified EEG Time.
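A minimal sketch of this anchoring and resampling step, using hypothetical variable names for one telemetry channel, is given below; only the driving data are interpolated, so the EEG samples remain untouched.

# Sketch of the anchoring step: build a synthetic wall-clock for the EEG
# stream, then interpolate one telemetry channel onto the EEG timeline.
import numpy as np

def synchronize_channel(eeg_time, eeg_start_pc_time,
                        telem_pc_time, telem_values):
    # Synthetic pc_time for every EEG sample: the pc_time recorded at
    # EEG start plus the incremental Time values from the EEG file.
    eeg_pc_time = eeg_start_pc_time + eeg_time
    # Resample the telemetry channel onto the EEG reference timeline;
    # EEG samples are never interpolated, avoiding artifacts in the
    # neural signal.
    return np.interp(eeg_pc_time, telem_pc_time, telem_values)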
The final result is a fully synchronized dataset in which each time sample corresponds to one EEG time point, containing EEG data and aligned driving telemetry from all participants. This unified dataset allows researchers to explore high-resolution patterns in driver behavior and brain activity under both solo and interactive driving conditions. The process of data synchronization is illustrated in Figure 4.
Although individual differences exist in EEG signals and driving behavior, synchronization in this framework is based on a shared temporal reference rather than behavioral or cognitive similarity. All data streams are aligned using a unified time base derived from the EEG trigger pulse and the simulator’s session clock. This approach ensures consistent cross-participant alignment while preserving the uniqueness of each participant’s response dynamics.
This synchronization ability is the central contribution of the presented work. While each EEG signal is inherently individual, the unified temporal alignment allows for future exploration of between-participant neural relationships, such as inter-brain coupling in different driving scenarios—for example, collaborative (merging) or competitive (overtaking) tasks.
Graphical User Interface for Data Synchronization
To support the synchronization approach described above and enhance overall usability, a custom graphical user interface (GUI) called syncApp was developed using MATLAB R2024a and JavaScript, and is publicly accessible at https://www.subhradeeproy.com/software (accessed on 30 June 2025). A snapshot of this GUI is shown in Figure 5. The GUI enables users to load raw EEG and driving data, execute synchronization routines, and visualize time-aligned data streams. Researchers can inspect variable plots, validate data quality, and export synchronized datasets for further analysis. The interface is designed to lower the technical barrier for researchers interested in replicating or extending this experimental setup in their own laboratories.
While the GUI does not conduct EEG signal analysis itself, it implements the synchronization algorithm shown in Figure 4 to generate precisely time-aligned EEG and driving telemetry datasets. These synchronized outputs are compatible with widely used EEG analysis platforms such as EEGLAB [39], MNE-Python [40], and QStates [41]. By formatting the data for direct use, the GUI enables streamlined signal-level analyses—including spectral decomposition, phase-locking value computation (as demonstrated in our prior work [42]), and other cognitive state assessments. More importantly, this synchronization capability extends beyond conventional single-participant analysis. It creates new opportunities for advanced multi-participant paradigms such as hyperscanning, allowing researchers to explore inter-brain neural coupling and collective cognitive dynamics during complex driving interactions, an area rarely addressed in prior traffic simulation studies. This opens a path toward studying cognition not only at the individual level but also at the networked group level in realistic, dynamic environments.
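As one example of a downstream analysis enabled by this alignment, the sketch below computes a phase-locking value (PLV) between two drivers’ time-aligned EEG channels. It is a minimal illustration, not the exact pipeline of our prior work [42]; the alpha band (8–13 Hz) and filter order are assumptions chosen for demonstration.

# Sketch of an inter-brain PLV between two synchronized EEG channels
# (e.g., O1 of driver A and O1 of driver B) on the unified timeline.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 300.0  # Hz, DSI-24 sampling rate used in this setup

def bandpass(x, lo, hi, fs=FS, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def plv(x1, x2, lo=8.0, hi=13.0):
    # Instantaneous phases via the analytic (Hilbert) signal, then the
    # magnitude of the mean phase-difference vector: 1 = perfect locking.
    p1 = np.angle(hilbert(bandpass(x1, lo, hi)))
    p2 = np.angle(hilbert(bandpass(x2, lo, hi)))
    return np.abs(np.mean(np.exp(1j * (p1 - p2))))

Because both signals share the unified EEG time base, no further alignment is needed before the phase comparison; the PLV can be computed per scenario segment (e.g., during a merge) to probe inter-brain coupling.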
The GUI consists of two key features: data loading and synchronization (shown in Figure 5a), and the visualization of various synchronized variables for the drivers (illustrated in Figure 5b–d). The GUI is designed to handle both asynchronous data and data that have already been synchronized. During the data loading process, a toggle button indicates whether the data are raw or pre-synchronized. If pre-synchronized data are loaded, no further synchronization is required, and the data can be visualized immediately.
4. System Validation and Usability Testing
This setup is designed to be easily replicated. To achieve the best results, minimize latency, and maximize immersion, the minimum recommended specifications are an Intel Core i9 processor (or similar), an NVIDIA GeForce RTX 3060 Ti GPU (or similar), and at least 16 GB of RAM. A major consideration when designing this system is its overall usability, reliability, and resilience. Consequently, tests are conducted to validate the system design.
Since the setup uses commercially available and well-tested software (Assetto Corsa), there is high confidence in the reliability of this environment. When motion feedback is enabled, the simulation causes the participant to move within the seat. Additionally, as participants reposition and turn their heads, motion artifacts appear in the EEG data. These motion artifacts can be identified by their characteristic spread across multiple EEG channels, which distinguishes them from localized neural activity, and can then be attenuated using standard preprocessing techniques. The EEG system ensures data reliability by transmitting recordings wirelessly to a local computer over Bluetooth, while also maintaining a redundant copy in onboard storage. Across multiple trials, the EEG signal quality was verified to be excellent, with no apparent discontinuities or breaks in transmission.
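A simple flagger exploiting this cross-channel spread is sketched below; the z-score threshold and channel fraction are illustrative assumptions to be tuned per setup, not parameters validated in this study.

# Sketch of a cross-channel motion-artifact flagger: a sample is marked
# artifactual when a large-amplitude excursion appears on many channels
# at once, distinguishing movement from localized neural activity.
import numpy as np

def flag_motion_artifacts(eeg, z_thresh=5.0, channel_frac=0.5):
    # eeg: (n_channels, n_samples) array -> boolean mask over samples.
    z = np.abs((eeg - eeg.mean(axis=1, keepdims=True)) /
               eeg.std(axis=1, keepdims=True))
    # Fraction of channels exceeding the amplitude threshold per sample.
    spread = (z > z_thresh).mean(axis=0)
    return spread >= channel_frac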
Pilot tests are conducted during and after the development of the synchronization algorithm to verify that the data are synchronized properly, by computing the lag times. After interpolation of the driving data and complete synchronization with the higher-rate EEG data, the data points are exactly aligned. In addition, a general problem of data collection from real systems is the challenge of maintaining precise clock sampling due to hardware imperfections and transmission times, among other issues [43]. Before synchronization, the driving data and EEG data contain a clock drift of approximately ms and ms, respectively, after about 30 min of continual data collection. After synchronization, the latency across the two datasets is measured to be under 2 ms from repeated trials, with a mode of around ms.
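One way to compute such residual lags, sketched below under the assumption that a shared event channel (e.g., a trigger pulse or brake onset recorded in both streams) is available, is to locate the peak of the cross-correlation between the two time-aligned streams.

# Sketch of the lag check used in pilot tests: estimate the residual
# offset between two synchronized, equally sampled streams.
import numpy as np
from scipy.signal import correlate, correlation_lags

def residual_lag_ms(sig_a, sig_b, fs=300.0):
    # Cross-correlate the mean-removed streams and return the offset
    # (in ms) at which they match best; ~0 ms after synchronization.
    xc = correlate(sig_a - sig_a.mean(), sig_b - sig_b.mean(), mode="full")
    lags = correlation_lags(len(sig_a), len(sig_b), mode="full")
    return 1000.0 * lags[np.argmax(xc)] / fs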
To evaluate system usability, 15 participants tested the simulator and provided structured feedback on comfort, immersion, and control through a post-session survey, provided in Appendix A. Their input was instrumental in refining several aspects of the setup, including the tuning of motion feedback amplitude and pedal stiffness, and the determination of optimal experiment durations to minimize discomfort.
Three core usability dimensions were assessed: comfort, realism, and handling. Comfort was based on participant experiences with the VR headset, EEG headset, motion feedback, overall driving feel, and any VR-related disorientation (survey questions 1 through 6). Realism reflected user perceptions of the virtual environment, immersive quality, and the physical realism of motion and haptic feedback (questions 7 through 10). Handling captured impressions of the responsiveness of the brake, accelerator, and steering wheel (questions 11 through 13). The full list of survey questions is provided in Appendix A. The average ratings across these categories are reported in Table 1.
On average, participants rated comfort at 7.5/10, realism at 8.0/10, and handling at 8.1/10, resulting in an overall usability rating of 7.8/10 for the system. The most common sources of discomfort were motion sickness during prolonged VR use and physical strain from the EEG headset when worn for extended periods (over one hour). These findings support the inclusion of a mixed interface option, allowing participants to choose between VR and monitor displays, and highlight the importance of limiting session durations to enhance user comfort and data quality.
5. Demonstration of Synchronized Data and Sample EEG Analysis
The developed framework enables precise temporal synchronization between EEG and driving telemetry data collected from multiple participants.
Figure 6 presents an example of successfully synchronized data, showing time-aligned gas and brake pedal inputs alongside O1 electrode EEG activity for two drivers participating in a shared driving scenario. This level of synchronization allows researchers to explore how neural and behavioral responses unfold in parallel across interacting participants, supporting rich multi-modal analysis of driving behavior.
While detailed EEG analysis is not the primary focus of this study, the synchronized EEG and driving telemetry datasets produced by the system are fully compatible with standard cognitive state assessment tools. For example, Figure 7 illustrates a sample application using QStates software, where concentration levels during a sustained attention task are estimated based on alpha-band EEG activity. Both a linear estimator and a multivariate normal probability density function (MVNPDF) are applied to classify cognitive states, with lower alpha activity being associated with higher concentration and elevated activity corresponding to relaxation. Similar analyses can be extended to other frequency bands (e.g., beta, delta, theta, gamma) to support broader assessments of driver attention, workload, and fatigue during various driving tasks.
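To make the MVNPDF idea concrete, the sketch below scores alpha-band log-power features against per-state multivariate normal models. It is a simplified stand-in illustrating the same principle, not the actual QStates algorithm; band limits, the Welch segment length, and the epoch layout are assumptions.

# Sketch of a QStates-style two-state classifier: alpha-band power via
# Welch's method, scored against multivariate normal models fitted to
# labeled "concentrated" and "relaxed" calibration epochs.
import numpy as np
from scipy.signal import welch
from scipy.stats import multivariate_normal

FS = 300.0  # Hz

def alpha_log_power(epochs, lo=8.0, hi=13.0):
    # epochs: (n_epochs, n_channels, n_samples) -> (n_epochs, n_channels)
    f, pxx = welch(epochs, fs=FS, nperseg=256, axis=-1)
    band = (f >= lo) & (f <= hi)
    return np.log(pxx[..., band].mean(axis=-1))

def fit_state_model(features):
    # One multivariate normal per labeled calibration state;
    # allow_singular guards against rank-deficient covariance estimates.
    return multivariate_normal(mean=features.mean(axis=0),
                               cov=np.cov(features, rowvar=False),
                               allow_singular=True)

# Usage sketch: score = conc.logpdf(x) - relax.logpdf(x); positive values
# indicate concentration (lower alpha power), negative values relaxation.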
Beyond individual-level analysis, the framework also enables more advanced applications, such as hyperscanning, which examines inter-brain synchrony and coordinated neural activity across multiple participants. While hyperscanning has seen widespread use in social neuroscience, it remains largely unexplored in driving research due to the absence of systems capable of collecting synchronized multi-user EEG and behavioral data. By bridging this gap, the present system lays the foundation for future investigations into shared decision-making, neural coordination, and real-time interaction in dynamic traffic environments.
6. Comparison to Other Networked Simulators
To contextualize the capabilities of the developed framework, we compare it against existing networked driving simulators based on six key criteria: scalability, support for VR, EEG integration, immersion, availability, and cost. The immersion level is qualitatively assessed based on factors such as motion and haptic feedback, environmental realism (e.g., weather and lighting), physics fidelity, and visual rendering.
Table 2 summarizes this comparison. As shown, the present system is one of the few that combines high immersion, multi-participant scalability, and synchronized EEG capability, while remaining accessible and cost-effective for broader research adoption.
Unlike prior work that either involves single drivers or lacks neural measurement, our system uniquely enables synchronized cognitive and behavioral data collection from multiple interacting participants. This lays the foundation for future studies on human–human driving interactions, shared decision-making, and inter-brain neural dynamics in traffic settings.
7. Driving Scenarios and Environmental Conditions
The framework supports the simulation of a wide range of driving scenarios under dynamic conditions, such as varying weather conditions and different times of day. Vehicle handling and responsiveness were adapted using realistic physics models—for instance, reduced traction on wet roads and limited visibility during fog or nighttime conditions. These features allow researchers to examine how drivers perceive and respond to naturalistic environmental challenges.
To support targeted investigations of driver behavior under diverse conditions, we designed a flexible and modular virtual environment. A custom virtual route was developed (Figure 8) and divided into distinct zones featuring diverse traffic and road conditions. These segments were designed to support the study of a range of driving behaviors, including single-lane car-following, lane-changing maneuvers involving lateral interactions between vehicles, and responses to varying road surface quality, from rough dirt segments to smooth paved sections.
Two illustrative examples are shown in Figure 9 and Figure 10. Figure 9 illustrates a single-lane car-following scenario under three distinct environmental conditions—clear daytime, foggy daytime, and clear nighttime—captured at the same location along the route to highlight differences in visibility and ambient context. Figure 10 illustrates a two-vehicle merging scenario, where a red car is merging into the lane occupied by a white car. From the white car’s perspective, the red vehicle is visible directly ahead, while the red car’s driver can see the white car through the side mirror.
This setup demonstrates how drivers experience the same scenario from different viewpoints and interact in real-time, enabling the system to support real-time behavioral coupling between participants. When paired with EEG data, such scenarios can support future investigations into the cognitive and behavioral aspects of human–human driving interactions.
8. Limitations
While the system presented in this paper offers a robust platform for synchronized multi-participant traffic experiments, several limitations remain. First, although dry-electrode EEG systems provide ease of use and faster setup times, they can be susceptible to motion artifacts and signal noise. This may impact the fidelity of neural data, especially in motion-rich scenarios involving aggressive driving maneuvers or platform-induced vibrations.
To minimize motion-related artifacts during EEG acquisition, it is recommended that headsets be fitted snugly to ensure stable electrode contact. Environmental conditions should be optimized, such as maintaining a cool room temperature to reduce perspiration, and participants should be advised to avoid physical exertion and spicy food prior to recording. A 1–50 Hz bandpass filter, applied in the DSI Streamer software, can attenuate high-frequency noise, including 60 Hz power line interference; a minimal offline equivalent is sketched at the end of this section. Although this study does not focus on EEG analysis, the recorded EEG data are fully compatible with standard preprocessing pipelines, such as EEGLAB [39] and QStates [41], allowing future users of the framework to perform artifact rejection and advanced signal analysis as needed [39,48,49].
Second, the current system architecture does not incorporate additional physiological sensors such as eye-trackers, galvanic skin response monitors, or heart rate sensors, which could offer complementary insights into the driver’s cognitive and emotional state. While integration is technically feasible, it requires further development and synchronization infrastructure. Third, although Assetto Corsa offers excellent realism and modding capabilities, future work may benefit from transitioning to fully open-source simulation platforms for greater experimental control and transparency. Finally, scalability is currently constrained by hardware requirements and local networking limitations. While the system can support multiple participants, expanding to large-scale simulations with a dozen or more drivers may require significant infrastructure upgrades and performance optimization. Addressing these limitations will be a key focus of future work aimed at extending the platform’s capabilities and ensuring broader usability across research domains.
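As referenced above, the sketch below shows an offline equivalent of the 1–50 Hz bandpass applied in DSI Streamer, useful when re-filtering raw exports before artifact rejection. The filter order is an illustrative assumption; the 50 Hz cutoff already attenuates 60 Hz line interference.

# Offline 1-50 Hz bandpass mirroring the DSI Streamer setting.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_1_50(eeg, fs=300.0, order=4):
    # eeg: (n_channels, n_samples) -> filtered array of the same shape;
    # zero-phase filtering preserves the timing of neural events.
    sos = butter(order, [1.0 / (fs / 2), 50.0 / (fs / 2)],
                 btype="band", output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)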
9. Conclusions
This paper presents a comprehensive, modular, and cost-effective framework for conducting traffic experiments using networked, multi-participant driving simulators integrated with synchronized EEG data collection. Built upon commercially available hardware and open simulation platforms, the system replicates key physical and perceptual cues of real-world driving, enabling the observation of naturalistic human behavior in a controlled lab environment. Through the use of Assetto Corsa, motion-enabled cockpits, flexible visual interfaces, and dry-electrode EEG systems, we demonstrate a setup that is both scalable and adaptable to various experimental needs.
A key contribution of this work lies in the synchronization methodology, which enables the fine-grained temporal alignment of behavioral (driving) and neural (EEG) data both within and across participants. The system captures rich datasets that can inform future studies on driver cognition, cooperative and competitive behavior, and distributed decision-making in traffic scenarios. We also release the synchronization software, along with sample datasets, to facilitate broader adoption and collaborative development.
By demonstrating system performance, data quality, synchronization accuracy, and compatibility with EEG analysis tools, this work establishes the necessary foundation for future research that integrates behavioral and neural data in multi-participant traffic scenarios.
10. Future Work
Beyond enabling synchronized data acquisition, this platform opens the door to a wide range of novel research questions that have previously been difficult to study due to the lack of an affordable setup capable of supporting multi-driver experiments with synchronized data streams. For example, researchers can investigate how drivers cognitively and behaviorally respond to complex traffic scenarios such as near-miss incidents, merging conflicts, or cooperative maneuvers. The system also supports multi-participant synchronization, facilitating hyperscanning studies that examine inter-brain neural coupling during real-time human–human driving interactions—an area largely unexplored in traffic research. Its flexibility allows for the implementation of diverse driving tasks under varying conditions such as different times of day, different road geometries, and varying weather conditions, all while maintaining experimental control. The modular setup further enables comparisons between hardware configurations (e.g., VR headset vs. monitor) to examine how different components affect cognitive engagement. Together, these capabilities address key gaps in the literature, where most of the existing work relies on single-driver setups or lacks physiological insight. By providing a robust framework for synchronized EEG and telemetry data collection, this work lays the foundation for future cognitive and behavioral analyses in interactive driving contexts.