1. Introduction
Intersection performance is one of the most crucial components of urban transportation systems, significantly impacting mobility, safety, and environmental outcomes. Precise monitoring and performance evaluation are essential for effective traffic management. Traditional approaches to measuring intersection performance have often relied on manual observations or overly simplistic models. For instance, studies assessing how pedestrians affect conflicts between opposing-through and left-turning traffic [1] and research focusing on the Right-Turn-On-Red queuing process at signalized intersections highlight the limitations of conventional methods [
2].
On the other hand, developing and applying simulation models for intersection performance evaluation presents several challenges. Calibrating the simulation model to accurately reflect real-world conditions is one of the most demanding tasks in this process. It involves fine-tuning model parameters, such as driving behavior, vehicle attributes, and traffic flow, to match observed data, which is labor-intensive and time-consuming and requires substantial data collection and iterative adjustment. Inaccurate calibration can compromise the reliability of simulation-based evaluations, potentially producing misleading results. Numerous studies have been conducted to calibrate simulation models [
3,
4,
5]; however, issues such as small sample size and neglect of intersection-specific features persist [
3].
Cameras and image processing have served as the primary high-resolution data collection and monitoring method in recent years. Although this method provides a wider variety of data than older methods, there are concerns about its accuracy and resolution, which are critical for effective traffic management and planning. A study found that discrepancies arise between manual counting of road users and automatic counting methods based on image processing, particularly in distinguishing vehicle types and sizes under varying visibility conditions, such as nighttime [6]. Camera-based data collection and image processing performance can also be heavily degraded by environmental factors such as adverse weather or low light. Moreover, current automatic vision systems used in traffic cameras have limitations: they often require complex calibration processes and struggle to generate high-resolution data. This can hinder the ability to accurately monitor and respond to dynamic traffic conditions, analyze vehicle trajectories, and assess safety metrics such as speed and acceleration [
7]. Data security and privacy concerns also arise when detailed vehicle movement data is collected.
In recent years, researchers have sought to address these challenges through meticulous approaches and robust methodologies. Improving calibration techniques [
8] by machine learning methods and automated calibration algorithms [
9] can streamline the calibration process and enhance model accuracy. Efforts to advance data collection methods, including utilizing cutting-edge sensing technologies and data fusion techniques, can improve the availability and quality of input data. Concerns regarding high-resolution data usage can be mitigated by ensuring data privacy and security through appropriate anonymization and encryption procedures. Despite these advancements, there remains a need for new approaches and solutions to overcome persistent difficulties.
The emergence of sophisticated data collection methods, such as LiDAR sensors, alongside recent developments in simulation tools, enables the creation of new systems for data collection and traffic management. The concept of digital twin platforms has been utilized in engineering industries in recent years, and traffic and transportation science can also employ the capabilities of this concept in multiple fields. A transportation network digital twin simulates real-world traffic dynamics and the interactions among various components of the transportation system. To create a digital twin platform of a transportation network, integrating high-resolution data into the traffic simulation is a critical step. By incorporating the physical layout of roads, intersections, and infrastructure, the virtual environment closely mirrors its real-world counterpart. Accurate representation of road users, including their types, locations, and speeds, allows the digital twin to simulate traffic scenarios with high fidelity.
Researchers and planners can utilize this virtual environment to run simulations and experiments that replicate real-world scenarios. By integrating comprehensive datasets, understanding road user behavior, and employing advanced traffic simulation software, the digital twin platform becomes a powerful tool for analyzing and optimizing transportation networks. It is also a tool for monitoring and managing traffic networks with the ability to provide real-time analysis and forecasting alongside features to manage non-recurrent traffic and emergencies.
This research presents a holistic approach to developing an integrated digital twin platform that couples a microscopic traffic simulator, VISSIM, with the high-resolution data produced by LiDAR to assess intersection performance. Using the exact position information obtained by LiDAR, VISSIM builds a digital twin platform that enables intersection performance to be examined through simulation and immediately feeds the optimization approach back to the actual traffic system [
10]. Given this integration, researchers may simulate different scenarios and assess intersection effectiveness under various factors such as traffic volume, signal timing, and lane layouts. The simulation results offer insightful information on signal control system performance, delay analysis, traffic flow, and congestion patterns. Traditional simulation uses offline historical data to replicate the traffic system, whereas the digital twin uses online data. Traditional simulation cannot provide real-time, full life cycle supervision and data flow due to its single traffic model and constrained simulation scenarios [
10]. The method described in this research allows VISSIM, a microscopic traffic simulator, to be supplied with real-time data while retaining all of its benefits.
Even though LiDAR technology offers real-time, high-resolution data, there are serious drawbacks to using only raw LiDAR data for performance assessment and monitoring. First, it can be difficult to efficiently handle and analyze the enormous volume of raw data produced by LiDAR sensors. Without appropriate aggregation and modeling, the raw data lacks the contextual insights required to assess intersection performance holistically. For instance, LiDAR data by itself cannot forecast the consequences of modifications to lane designs or signal timing, nor can it replicate the dynamic interactions between road users. Moreover, each new data requirement from the LiDAR system demands custom design and complex computations to derive valuable insights. This procedure can consume substantial time and resources, which restricts the monitoring system’s flexibility and scalability.
In contrast, far greater flexibility becomes available when raw LiDAR data is integrated into a simulation environment such as VISSIM. Once the raw data is included in the simulation, researchers have access to an almost unlimited collection of performance measures and data types that are readily available within the simulation framework. Without requiring extra sensor reconfiguration or recalibration, users can examine factors including vehicle trajectories, speed profiles, signal timing effects, and congestion patterns with a few clicks. This easy access to a variety of data types simplifies the assessment procedure and improves the capacity to perform performance evaluations.
In the current study, to demonstrate the power and flexibility of the proposed platform, a VISSIM network model was calibrated using a digital twin of a real-world intersection. Vehicle trajectories from the digital twin platform were collected and compared with those generated by the VISSIM simulation. By adjusting the driving behavior parameters in the VISSIM model, discrepancies between the two sets of trajectories were minimized, and the optimal values for the driving behavior parameters were identified.
The main question of this study is as follows: Is it possible, given recent developments in data collection sensors, to create a digital twin platform of transportation networks that serves as a system for understanding and managing urban traffic? This paper is organized as follows. The foundational ideas and operations of transportation digital twin technology are discussed in the next section, with summaries of relevant research efforts. This fundamental information lays the groundwork for understanding the proposed platform’s components. Selected research efforts that appear worth further investigation are discussed in the Literature Review Section. The Methodology Section subsequently presents the suggested approach in detail, covering the overall design, the data used in the analysis, and the steps taken to process it. A thorough case study is also presented to show how the suggested approach might be used in practice. The proposed strategy is tested during the case study, and the results attained through its application are carefully examined and thoroughly explained. The outcomes of this real-world application provide useful evidence for assessing the effectiveness and efficiency of the method. An in-depth understanding of the approach’s possible benefits and drawbacks is achieved through a careful analysis of the case study’s findings.
The main contributions of this study are as follows: (1) the development of a real-time digital twin platform that integrates LiDAR-based trajectory data with the VISSIM microsimulation environment for signalized intersections; (2) the design and implementation of a closed-loop feedback mechanism for behavior calibration using genetic algorithms; and (3) the demonstration of this system on a real-world intersection, enabling synchronized trajectory replay, conflict detection, and performance evaluation in real time. To the best of our knowledge, this study is among the first to combine real-time LiDAR data, dynamic calibration, and digital twin visualization in a fully integrated framework.
2. Digital Twin
2.1. Overview
The National Aeronautics and Space Administration (NASA) defines digital twins as a thorough simulation of a real-world system or vehicle that incorporates various physics, scales, and probabilities [
11]. These simulations mimic the entire lifespan of their respective real-world counterparts by using the most precise physical models, sensor data updates, historical information from the fleet, and other pertinent aspects. Real-time monitoring, analysis, and optimization of complex systems are made possible by the idea of a digital twin. A conceptual representation of a digital twin is shown in
Figure 1.
The term “digital twin” refers to a wide variety of concepts, including the mechanical and reality twin models, and the digital sibling model [
12]. This idea has received much attention from several sectors and is used in many different businesses [
13]. Digital twin systems have been applied in the manufacturing industry for maintenance, product life cycle management, and production planning control [
14]. Congress and Puppala [
15] provided two case examples in infrastructure management to (1) illustrate a viable method for fusing unmanned aerial vehicle (UAV) data with digital twin techniques and (2) efficiently monitor infrastructure assets from the planning to the in-service stages.
2.2. Transportation Digital Twin
A virtual depiction of transportation systems that maintains a digital reproduction of actual transportation system components, including automobiles, highways, and people, is known as a transportation digital twin system [
16]. Recent studies have concentrated on digital twins for transportation. Saroj et al. [
17] built a data-driven linked corridor traffic simulation model. Dasgupta et al. [
18] developed a digital twin adaptive traffic signal control to shorten intersection delays and enhance user experience. Ivanov et al. [
19] investigated the definition and key components that make up a city’s digital twin and the existing technology utilized to build digital twins of cities. Other applications that demonstrate the use of digital twin systems in enhancing the security and mobility of current transportation systems include cooperative ramp merging [
20], cooperative driving [
21], and driver behavior modeling [
22,
23].
A thorough network dataset is necessary to create a digital twin of a transportation network. This dataset includes exact geometry information on roads, intersections, signal systems, and other significant infrastructure elements. It is indeed crucial to have a detailed grasp of different road users and their behaviors. This requires correctly classifying each object, such as cars, buses, pedestrians, bicyclists, etc., and retrieving current data on their precise positions and speeds.
When applied to assess intersection performance, a digital twin includes real-time data and offers a dynamic picture of the intersection, enabling simulation-based evaluation and optimization. Prior difficulties, such as the requirement for calibration, could be simplified by applying the digital twin idea to assess intersection performance.
3. Literature Review
This section synthesizes existing research related to the development and application of digital twin platforms in transportation. To establish a solid foundation for the proposed system, the literature is organized into three thematic areas: (1) digital twin frameworks and concepts in transportation, (2) real-time data collection technologies and sensor integration strategies, and (3) practical applications of digital twins in traffic monitoring, signal control, and simulation model calibration. These thematic areas provide insight into how digital twin technologies have evolved, their technical requirements, and their real-world applicability to dynamic urban traffic management.
3.1. Digital Twin Frameworks for Transportation
As mentioned, the digital twin concept originated in aerospace engineering, where NASA defined it as a comprehensive simulation of a real-world system that incorporates sensor updates, historical data, and physical modeling to enable real-time monitoring and optimization [
11]. In recent years, this concept has gained traction in civil and transportation engineering as a means of bridging the gap between physical infrastructure and virtual models. Digital twins in transportation are characterized by their ability to replicate and simulate road network behavior using live data streams, enabling responsive and adaptive management strategies [
16].
Ivanov et al. [
19] presented a foundational framework for digital twins of cities, identifying essential components such as cyber–physical synchronization, data modeling, and simulation feedback loops. Their work underscored the importance of interoperability between data sources and simulation environments. Irfan et al. [
16] further categorized transportation digital twin systems by function, such as traffic monitoring, safety analysis, and predictive control, and discussed their relevance in emerging mobility applications.
Liao et al. [
20] implemented a digital twin-enabled cooperative ramp merging system using vehicle-to-cloud communication. Their framework established digital replicas of vehicles in a cloud server using 4G/LTE connections, which were then used to analyze traffic conditions and provide advisory information to drivers. This setup showed that digital twins can actively mediate real-world traffic behavior by using feedback from simulations. The system was field-tested with vehicles in California, demonstrating real-time adaptability and safety enhancements during merge maneuvers.
Dasgupta et al. [
18] explored another major direction, adaptive traffic signal control, by integrating a digital twin environment with the SUMO microscopic simulator. Their control algorithm used real-time sensor inputs to adjust signal timing, and simulation results revealed a significant reduction in vehicle waiting times at intersections. This study emphasized that integrating simulation-based adaptive logic into digital twins can improve operational efficiency beyond static or pre-timed control systems.
The foundational literature confirms that digital twin platforms are not only capable of reproducing real-world traffic conditions but can also be used as decision-support systems for proactive traffic control. These studies provide the theoretical basis for coupling simulation tools with real-time data, laying the groundwork for more specialized applications.
3.2. Real-Time Data Collection and Sensor Integration
Accurate and high-resolution data is the cornerstone of any effective digital twin system. Traditional traffic monitoring systems, including inductive loops, magnetic sensors, and video detectors, have well-documented limitations. Cameras, for example, are prone to errors during adverse weather and nighttime conditions. Hoxha et al. [
6] found that camera-based vehicle classification and volume counting could deviate by as much as 9.5% compared to manual counts in low-visibility settings, leading to inaccurate assessments and compromised model calibration.
To overcome these limitations, recent efforts have focused on integrating advanced sensor technologies such as LiDAR, radar, GPS, and drone-based video into digital twin platforms. LiDAR, in particular, has emerged as a preferred technology due to its robustness, resolution, and insensitivity to lighting conditions. Gargoum and El Basyouny [
24] conducted a comprehensive review of LiDAR applications in transportation, emphasizing its superiority in extracting lane geometry, identifying road features, and tracking vehicle trajectories.
Zhao et al. [
25] demonstrated that infrastructure-based LiDAR sensors outperformed video and radar detectors under various levels of service, offering reliable traffic volume estimates even under high congestion or occlusion. The researchers highlighted the value of LiDAR in measuring precise lateral and longitudinal vehicle positions, which are essential for reconstructing detailed vehicle trajectories and calibrating simulation models.
Zhang et al. [
26] proposed a framework that combines LiDAR, UAV imagery, GPS data, and Building Information Modeling (BIM) to construct rich 3D digital twins of urban intersections. Their study used Unreal Engine 4 to visualize traffic dynamics and processed drone footage to extract behavioral patterns of road users. This multi-source integration illustrated how combining sensor modalities can compensate for the weaknesses of individual technologies while enriching the fidelity of digital twin models.
Zheng et al. [
27] introduced the CitySim dataset, derived from 1140 min of drone video footage capturing a wide variety of road environments. The dataset enabled researchers to reconstruct 3D traffic environments and simulate conflict scenarios involving cut-ins, merges, and crossings. Their comparative analysis showed that CitySim was more comprehensive than legacy datasets like NGSIM and highD, making it well-suited for use in safety-focused digital twin applications.
These studies collectively emphasize the growing importance of sensor diversity and real-time processing capabilities. They demonstrate that integrating high-resolution sensors, particularly LiDAR, into digital twin platforms significantly enhances the accuracy, scalability, and responsiveness of traffic monitoring systems.
3.3. Applications in Traffic Monitoring and Model Calibration
Digital twins offer more than just a descriptive lens into existing traffic conditions; they serve as analytical tools for simulation-based prediction, system calibration, and proactive control. One of the most promising applications of digital twins is in the calibration of microscopic traffic simulation models, which traditionally rely on aggregate statistics and time-consuming manual tuning.
Saroj et al. [
17] implemented a corridor-scale digital twin simulation using VISSIM for 15 signalized intersections in Atlanta. Their platform used real-time vehicle counts and signal state data to update the simulation every six minutes. While the model achieved near-real-time performance, it exhibited a 7 min processing delay and a dependency on historical turning movement data. These constraints made it less suitable for non-recurrent events such as crashes or emergency vehicle preemption.
Afshari et al. [
28] addressed the challenge of simulation calibration by integrating connected vehicle trajectory data with VISSIM and optimizing behavioral parameters using a genetic algorithm. Their approach reduced trajectory error by nearly 30% and demonstrated the value of individual vehicle data in tuning parameters such as desired headway, look-ahead distance, and deceleration limits. This work validated that combining digital twin data with evolutionary optimization can significantly improve the realism of traffic simulation models.
Zhang et al. [
26] used high-resolution video data to develop calibration routines for trajectory-based models. They applied object tracking techniques to infer behavioral features and aligned them with simulation outputs. Their framework helped quantify deviations between modeled and observed behaviors in lane-changing, stopping, and acceleration events, providing a feedback mechanism for iterative model improvement.
The reviewed applications reveal that digital twins can be effectively used not only for visualizing traffic flow but also for enhancing predictive accuracy, validating control strategies, and supporting infrastructure planning. As more cities adopt advanced sensing systems and real-time simulation tools, these integrated platforms are expected to play a central role in managing complex transportation networks.
4. Methodology
This study presents a novel approach to building a platform that utilizes state-of-the-art digital twin technology to collect data from the intersection and assess intersection performance. One of the key objectives of this study is to provide a clear and comprehensive explanation of this technology. To do this, the current section is divided into several subsections, each of which is devoted to meticulously outlining the numerous elements of the platform.
The following section delves into the platform’s design, illuminating how its parts are interrelated and work together to create a digital twin of an intersection. The platform’s potential uses and adaptability become clear once its fundamental structure is understood. Next, the procedure for collecting the data is carefully described. This crucial component ensures that reliable and pertinent data is used to generate an exact picture of real-world intersections inside the digital twin platform. The platform’s data processing methods are described in the subsequent section. This stage is crucial because it converts raw data into actionable insights, improving the platform’s capacity to produce insightful evaluations and suggestions. The platform’s visualization, assessment features, and use cases are examined in the last section. The immersive and interactive visual representations of intersection mobility situations made possible by digital twin technology provide stakeholders with a clear understanding of the outcomes.
4.1. Overall Architecture
Figure 2 shows a high-level conceptual illustration of the proposed platform architecture. Real-time point cloud data is collected at the intersection using a LiDAR sensor to create the platform. This data is then sent through the cloud to a server. The server’s primary role is to build the data required to form a digital twin in VISSIM 2023; this requires manually creating the intersection network in the simulation software. Using the BlueCity API by Ouster [
29], road user data, such as vehicle type, position, and speed, is extracted from the point cloud data. A data transformation is then required to convert local vehicle positions into digital twin locations so that a digital twin can be built for each vehicle. This transformation first converts the local vehicle positions into standard geographic coordinates, namely latitude and longitude. From these latitude and longitude values, a subsequent transformation is carried out to obtain VISSIM’s coordinates, represented as X and Y. A Python 3.9 script handles this procedure: given a vehicle’s local position, the script converts it into VISSIM model coordinates. Using a manually produced dataset, the code pinpoints the precise link and lane each vehicle occupies within the study area.
The Python code creates virtual representations of the vehicles in the VISSIM model by utilizing the information gathered, such as vehicle type, speed, link/lane position, and distance from the starting point, through VISSIM’s COM Programming Interface [
30]. If the vehicle already exists in the network, the algorithm simply updates its location in successive rounds to reflect its movement in real time. In the digital twin environment, vehicle motion is externally controlled and does not rely on VISSIM’s internal car-following or lane-changing models. Instead, each vehicle’s position is updated in real time using trajectory data obtained from the LiDAR sensor and processed through the BlueCity API. This enables the simulation to exactly replicate real-world traffic conditions without depending on VISSIM’s behavioral logic. These externally driven trajectories serve as the reference ground truth against which freely simulated trajectories are later compared during the calibration phase. With tenth-of-a-second update intervals, this iterative procedure produces a smooth, real-time digital twin. Both during and after the simulation, useful traffic parameters can be derived from VISSIM’s output. These variables aid in evaluating and improving transportation systems and provide valuable insights into intersection mobility. The whole process presented in the figure happens in real time and repeats every 0.1 s.
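For illustration, a minimal Python sketch of this 0.1 s update loop is shown below. The Detection record and the sample frames are illustrative stand-ins for the BlueCity output rather than the vendor's schema, and the COM calls follow the PTV VISSIM COM interface, though exact signatures may vary by version; this is a sketch under those assumptions, not the authors' implementation.

```python
# Minimal sketch of the 0.1 s create-or-move loop driving the digital twin.
from dataclasses import dataclass
import win32com.client as com

@dataclass
class Detection:
    vid: int            # BlueCity vehicle ID
    vehicle_type: int   # VISSIM vehicle type number (100 = car)
    speed_kmh: float
    link: int           # results of the geo-fencing step (Section 4.3)
    lane: int
    dist: float         # metres from the link start

vissim = com.Dispatch("Vissim.Vissim")           # attach to a VISSIM instance
vissim.LoadNet(r"C:\models\warren_lock.inpx")    # hypothetical network file
known = {}                                       # VID -> VISSIM vehicle object

def update_step(frame):
    """Create or move one VISSIM vehicle per detected road user."""
    for d in frame:
        if d.vid not in known:
            # New detection: insert a vehicle; interaction=False so the internal
            # behavior models do not interfere with externally set positions.
            known[d.vid] = vissim.Net.Vehicles.AddVehicleAtLinkPosition(
                d.vehicle_type, d.link, d.lane, d.dist, d.speed_kmh, False)
        else:
            # Existing vehicle: override its position with the latest LiDAR fix.
            known[d.vid].MoveToLinkPosition(d.link, d.lane, d.dist)
    vissim.Simulation.RunSingleStep()            # advance the twin by one 0.1 s step

# Two example frames 0.1 s apart (in the platform, frames arrive continuously).
frames = [
    [Detection(vid=7, vehicle_type=100, speed_kmh=35.0, link=1, lane=1, dist=12.4)],
    [Detection(vid=7, vehicle_type=100, speed_kmh=34.0, link=1, lane=1, dist=13.3)],
]
for frame in frames:
    update_step(frame)
```

Setting the interaction flag to False when adding a vehicle is one way to keep VISSIM's car-following logic from acting on the externally positioned vehicles, consistent with the externally controlled motion described above.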
4.2. Data Collection
As was already noted, feeding real-time data is crucial to the digital twin platform, and recent research has shown that LiDAR technology is the most reliable way to collect data [
25]. A consistent and real-time stream of road user data is essential for establishing a digital twin platform. While traditional image processing methods may need several cameras and high-performance computing (HPC) devices (e.g., Graphics Processing Units), LiDAR technology stands out because it can provide real-time vehicle data with a single sensor that covers the entire intersection area. In addition to video cameras, other commonly used detection devices include microwave radar and hybrid radar–video systems. While radar sensors are effective in detecting vehicle presence and estimating speed, especially under adverse weather conditions, they generally lack the spatial resolution required for detailed trajectory analysis. Hybrid radar–video detectors offer improved classification capabilities but still depend on camera visibility and are less suited for precise lateral positioning. In contrast, LiDAR provides rich 3D spatial data and high temporal resolution, making it ideal for real-time tracking, trajectory replay, and conflict detection in digital twin applications. Moreover, LiDAR systems can maintain high accuracy regardless of lighting or weather conditions, enabling consistent data capture across varied environments [
31].
The proposed platform employs a high-performance LiDAR sensor to collect point cloud data from the intersection. The LiDAR sensor used in this study features a 360-degree horizontal field of view, a 40-degree vertical field of view, and a 200 m range, making it suitable for monitoring an entire intersection with a single deployment [
29]. The point cloud data is sent to a server in real time. The classification of detected road users is performed by the BlueCity API [
29], which utilizes a deep learning-based model trained on large-scale annotated LiDAR datasets. The algorithm clusters point cloud data to form objects and infers the class type (e.g., car, bus, truck, pedestrian, bicyclist) based on shape features (such as length, width, height), temporal motion patterns, and reflection intensity. This process runs in real-time on the sensor’s embedded computing system, requiring no manual intervention. The class type is returned as a categorical value in the output stream. The integration ensures precise detection under varying environmental conditions, such as poor lighting, rain, or snow, without compromising accuracy. BlueCity’s processing capabilities further extend the utility of this sensor by providing actionable data in real-time. The system records key characteristics of road users, such as their unique IDs, positions, dimensions, speed, and classification type. These details, summarized in
Table 1, offer a comprehensive view of intersection dynamics. It is worth noting that values in the table are rounded to reflect realistic sensor precision. The LiDAR system offers approximately 5–10 cm horizontal resolution and sub-meter object classification accuracy, which is sufficient for traffic monitoring and modeling purposes.
The sensor’s accurate recording of each road user’s location, speed, and object type ensures a thorough picture of the traffic dynamics at the intersection. The usefulness of the platform depends heavily on the performance of real-time data processing: every 0.1 s, the server receives the information obtained by the LiDAR sensor. With this smooth data transmission, traffic management and decision-making can benefit significantly from the platform’s real-time, up-to-date insights into the flow of traffic at the intersection.
4.3. Data Processing
Data processing is essential for transforming the raw data collected from the LiDAR sensor into a form that the VISSIM COM interface can read. To achieve this data transformation, several vital steps must be taken, as depicted in
Figure 3. First, the code filters the received data according to the vehicle type. Then, the local position of each detected vehicle must be converted into a form that the VISSIM software and its COM interface can understand.
This study first aligns the reference point of the VISSIM model with the local coordinate origin of the LiDAR sensor and its global geo-coordinates (i.e., latitude and longitude). The reference point for the VISSIM network must be carefully chosen to coincide precisely with the real-world location of the LiDAR sensor for smooth integration. The LiDAR’s local coordinates are converted to real-world latitude and longitude using Equation (1).
In this equation, Lat and Lon are the geographic latitude and longitude of the object, respectively; φ is the geographic latitude of the sensor; λ is the geographic longitude of the sensor; R is the Earth’s radius (6,378,137 m); x and y are the local coordinates of an object detected by the LiDAR; and θ is the sensor rotation in radians. de and dn are the rotation-adjusted Easting and Northing offsets, and dLat and dLon are the resulting offsets in latitude and longitude, respectively.
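Written out under the standard flat-earth, small-offset approximation and consistent with the variable definitions above, this conversion can be sketched as follows (a reconstruction, not necessarily the exact published form of Equation (1)):

```latex
% Flat-earth, small-offset conversion consistent with the variables defined above.
\begin{aligned}
d_e &= x\cos\theta - y\sin\theta, &\qquad d_n &= x\sin\theta + y\cos\theta,\\
dLat &= \frac{d_n}{R}, &\qquad dLon &= \frac{d_e}{R\cos\varphi},\\
Lat &= \varphi + dLat\cdot\frac{180}{\pi}, &\qquad Lon &= \lambda + dLon\cdot\frac{180}{\pi}.
\end{aligned}
```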
In this study methodology, transforming the geographical coordinates captured by LiDAR into the VISSIM simulation format necessitates a meticulous data processing approach. The VISSIM simulation does not directly utilize geographical coordinates (latitude and longitude) or Cartesian coordinates (X, Y). Instead, it employs a unique identifier system based on link and lane IDs, along with the distance from the start of the link to precisely position road users on the network. To integrate real-world data into the VISSIM environment, the designed algorithm first translates the LiDAR-detected geographic coordinates into the VISSIM coordinate system using a geo-fencing technique. This technique involves the following:
Mapping Links and Lanes: Each link and lane within the VISSIM model is mapped against a corresponding polygon that represents the same lane-level roadway segment in the real world. This mapping is crucial to accurately trace the path of each road user.
Determining the Closest Baseline Point: The baseline of each lane is discretized into numerous closely spaced points, whose coordinates are known within VISSIM. For each vehicle detected by LiDAR, the Python-based algorithm computes the nearest baseline point from the vehicle’s converted local position.
Calculating Relative Distance: After identifying the closest point on the baseline, the distance from this point to the start of the link is computed by summing the distances between consecutive points along the baseline up to the closest point.
Illustrated in
Figure 4 is the conceptual workflow of the geo-fencing technique, where the process of identifying the correct link and lane, and computing the precise distance of the road user from the start of the link, is depicted. Upon accurately determining the link, lane, and relative distance, the road user’s position can be programmatically set within the VISSIM simulation. This allows for a realistic and dynamic representation of traffic patterns, enabling detailed analysis and optimization of traffic flows. This approach ensures that every road user’s location is defined not by arbitrary coordinates, but by meaningful and simulation-ready parameters that reflect their actual position on the virtual network. This detailed transformation process is fundamental to integrating LiDAR data into VISSIM, enabling the simulation to not only visualize but also analyze real-world traffic conditions effectively.
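A minimal Python sketch of this geo-fencing lookup is given below. The lane polygons, baseline points, and coordinate values are illustrative placeholders, and the shapely library is used here for the point-in-polygon test; the authors' actual implementation may differ.

```python
# Geo-fencing sketch: map a converted (X, Y) position to a VISSIM link, lane,
# and distance from the link start, using pre-built lane polygons and baselines.
import numpy as np
from shapely.geometry import Point, Polygon

# lane_map: (link_id, lane_id) -> bounding polygon plus an ordered list of
# closely spaced baseline points, both in VISSIM model coordinates.
lane_map = {
    (1, 1): {
        "polygon": Polygon([(0, 0), (120, 0), (120, 3.5), (0, 3.5)]),
        "baseline": np.array([[x, 1.75] for x in np.arange(0, 120.5, 0.5)]),
    },
    # ... one entry per mapped lane polygon
}

def locate(x, y):
    """Return (link, lane, distance from link start) for a vehicle at (x, y)."""
    p = Point(x, y)
    for (link, lane), geom in lane_map.items():
        if geom["polygon"].contains(p):                       # which lane polygon?
            base = geom["baseline"]
            dists = np.hypot(base[:, 0] - x, base[:, 1] - y)
            idx = int(dists.argmin())                         # closest baseline point
            # cumulative length along the baseline up to that point
            seg = np.hypot(np.diff(base[: idx + 1, 0]), np.diff(base[: idx + 1, 1]))
            return link, lane, float(seg.sum())
    return None                                               # outside all mapped lanes

print(locate(42.3, 1.6))   # e.g., (1, 1, 42.5) for the sample lane above
```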
4.4. Digital Twin Platform
With the link and lane IDs of the vehicles in the network, along with their distance from the starting point of the link, the VISSIM software is ready to create the digital twin platform. Each detected object using the BlueCity API is allocated a Vehicle Identification (VID). The algorithm stores each VID in a dataset as soon as it introduces the vehicle into the digital twin platform. This dataset is crucial for determining whether the detected vehicle is new, necessitating the creation of a new vehicle in VISSIM, or if it is an existing vehicle, in which case the software will move it within the model. By receiving data every 0.1 s, the digital twin provides a smooth run and visualization. Using VISSIM software’s data collection methods, a wide range of traffic data points can be collected from the intersection in real-time, alongside the visualization. These data can be shown in real-time or saved directly based on the preference of the user.
Table 2 summarizes some of the most important traffic data that can be collected in real time. The range of data that can be collected from the software is not limited to this table; users can define any additional data collection measures based on their preferences. Although the LiDAR sensor provides real-time vehicle trajectories and classification data, other parameters are estimated by the VISSIM simulation environment. VISSIM includes built-in models that estimate these values based on driving behavior, vehicle type, and speed profiles.
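As an example, aggregate measures can be read back from the running twin through the same COM interface. The sketch below polls a queue counter and a delay measurement; the measurement keys and attribute names follow the usual VISSIM attribute syntax but are placeholders for whatever is configured in the network file, not the specific setup used in this study.

```python
# Sketch: read aggregate performance measures from the running twin via COM.
import win32com.client as com

vissim = com.Dispatch("Vissim.Vissim")   # attach to the running VISSIM instance
queue_len = vissim.Net.QueueCounters.ItemByKey(1).AttValue("QLen(Current, Last)")
veh_delay = vissim.Net.DelayMeasurements.ItemByKey(1).AttValue("VehDelay(Current, Last, All)")
print(f"queue length: {queue_len} m, average delay: {veh_delay} s")
```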
4.5. Case Study
In this study, the intersection of Warren and Lock Streets near the NJIT campus is chosen as a case study to implement and test the suggested platform in a practical environment. To reduce the sensor’s blind spots for this investigation, a single LiDAR sensor was mounted on a traffic signal pole at an elevation and angle that maximizes visibility across the intersection. The mounting height was carefully selected to ensure that the LiDAR’s 360-degree horizontal and 40-degree vertical field of view could capture the entire intersection with minimal obstruction. Nonetheless, occlusion can still occur in cases where large vehicles, such as buses or trucks, are parked or moving slowly near the sensor. To mitigate such issues, two strategies are used. First, the elevated placement reduces the chances of long-duration occlusion by providing a top-down perspective that allows partial visibility around and above most vehicles. Second, the BlueCity platform implements temporal tracking and motion prediction algorithms that interpolate the position of partially occluded vehicles across frames based on prior trajectory information. This enables robust detection continuity even when brief occlusions occur due to vehicle proximity. In future iterations of the system, deploying multiple sensors with overlapping fields of view can further minimize occlusion risks. The LiDAR sensor is connected to a Mini PC inside the traffic cabinet, which is equipped with a 5G modem to upload the data to the cloud.
Figure 5 shows the intersection with the installed LiDAR sensor. As shown in this figure, the LiDAR sensor provides a clear view of the intersection.
Figure 6 presents a comprehensive view of the intersection, highlighting the polygons utilized for the geo-fencing method. The intersection is formed by five streets: Warren Street features one lane in each direction for both eastbound and westbound traffic. Lock Street accommodates three lanes heading southbound and two lanes northbound. Raymond Blvd consists of two lanes in each direction. Wickliff Street operates as a one-way street directed southward. Each polygon corresponds to a single lane and extends as far as the LiDAR sensor’s range. Within the intersection, individual polygons are designated for each movement, accurately capturing the trajectories of the vehicles.
4.6. Calibration
To further demonstrate the capabilities of the proposed platform beyond monitoring and evaluation, this section introduces one of its key applications: model calibration. Using the developed digital twin platform, traditional transportation network models in VISSIM can be effectively calibrated. This approach enables traffic engineers and researchers to leverage the power of high-resolution, real-world data captured through the digital twin to align their simulation models more closely with actual traffic conditions.
Calibration using a digital twin significantly simplifies and enhances the model development process. Instead of relying solely on aggregate data or assumptions, practitioners can access precise vehicle trajectories and behavioral patterns observed in the field. This allows for more accurate parameter tuning, reduces the trial-and-error typically involved in calibration, and increases confidence in simulation outcomes. As a result, engineers can test infrastructure modifications, policy changes, or new signal timing plans with higher reliability and less risk, ultimately supporting better-informed, data-driven decision-making in transportation planning and operations.
To conduct the calibration process in this study, a traditional VISSIM network model of the Warren–Lock intersection was developed. Vehicle trajectories were extracted from the digital twin platform and compared to those generated by the traditional VISSIM model. By systematically adjusting the driving behavior parameters to minimize the differences between the two sets of trajectories, the optimal values for these parameters were identified and recorded.
During the calibration process, the method presented by [
28] was used to compare vehicle trajectories. Each vehicle that appeared in the digital twin platform was replicated at the exact same location in the traditional VISSIM network and allowed to drive freely through the network. Upon exiting, the corresponding trajectory was extracted from the VISSIM simulation. Unlike the digital twin simulation, where vehicles are directly positioned based on real-world LiDAR data, the traditional VISSIM simulation uses the built-in vehicle behavior models to simulate traffic flow. During calibration, the goal is to adjust driving behavior parameters such that the simulated trajectory, generated using VISSIM’s internal logic, closely matches the observed trajectory from the digital twin. This allows tuning the internal parameters without altering the core model logic.
Using this data, a time–space diagram was generated for each vehicle, displaying two lines: one representing the trajectory from the traditional VISSIM model and the other representing the trajectory from the digital twin platform. The area between these two trajectories was calculated and used as the trajectory error metric for the simulation.
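A minimal sketch of this error metric follows, assuming each trajectory is available as time-stamped positions along the travel path; the sample arrays and variable names are illustrative.

```python
# Sketch: trajectory error as the area between two time-space curves (m*s).
import numpy as np

def trajectory_error(t_ref, x_ref, t_sim, x_sim):
    """Area between the observed (digital twin) and simulated time-space curves."""
    # resample both trajectories onto a common 0.1 s time grid
    t = np.arange(max(t_ref[0], t_sim[0]), min(t_ref[-1], t_sim[-1]), 0.1)
    x_obs = np.interp(t, t_ref, x_ref)       # digital twin positions
    x_mod = np.interp(t, t_sim, x_sim)       # freely simulated positions
    return np.trapz(np.abs(x_obs - x_mod), t)

# one (observed, simulated) trajectory pair per vehicle
pairs = [
    (np.array([0, 1, 2, 3]), np.array([0, 8, 17, 27]),     # observed
     np.array([0, 1, 2, 3]), np.array([0, 7, 15, 24])),    # simulated
]
total_error = sum(trajectory_error(*p) for p in pairs)
print(f"total trajectory error: {total_error:.1f} m*s")
```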
In this study, a Genetic Algorithm (GA) was implemented to optimize the driving behavior parameters in VISSIM. The GA encoded each solution (i.e., a full set of driving behavior parameters) as a chromosome, with individual genes corresponding to specific parameter values (e.g., look-ahead distance, desired headway, maximum deceleration). The full chromosome consisted of 10 genes, one for each parameter listed in
Table 3.
The initial population consisted of 40 chromosomes, randomly generated within the specified min–max parameter ranges. Each chromosome was evaluated using a fitness function defined as the total trajectory error between the digital twin trajectories and the simulated VISSIM trajectories, computed by integrating the area between time and space curves of corresponding vehicles. The GA used roulette wheel selection to choose parents, uniform crossover with a probability of 0.8 to combine parent genes, and Gaussian mutation with a probability of 0.1 to maintain diversity. Elitism was applied to retain the top-performing individuals across generations, and the optimization proceeded for 100 generations.
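A condensed Python sketch of this GA setup is shown below (population 40, uniform crossover at 0.8, per-gene Gaussian mutation at 0.1, roulette wheel selection, elitism, 100 generations). The parameter bounds and the evaluation function are placeholders for the ten driving behavior parameters in Table 3 and the VISSIM evaluation run; the per-gene reading of the mutation rate is an assumption.

```python
# Sketch of the GA used for driving-behavior calibration.
import numpy as np

rng = np.random.default_rng(0)
PARAM_BOUNDS = np.array([[0.0, 1.0]] * 10)        # [min, max] per gene (placeholder)
POP, GENS, P_CROSS, P_MUT, ELITE = 40, 100, 0.8, 0.1, 2

def run_vissim_and_score(chrom):
    # Placeholder for the evaluation run: apply the parameters, re-simulate, and
    # return the total trajectory error (m*s). A dummy surrogate is used here.
    return float(np.sum((chrom - 0.3) ** 2))

def roulette(pop, fit):
    weights = 1.0 / (fit + 1e-9)                  # minimization -> inverse weights
    probs = weights / weights.sum()
    i, j = rng.choice(len(pop), size=2, p=probs)
    return pop[i], pop[j]

population = [rng.uniform(PARAM_BOUNDS[:, 0], PARAM_BOUNDS[:, 1]) for _ in range(POP)]
for gen in range(GENS):
    scores = np.array([run_vissim_and_score(c) for c in population])
    order = np.argsort(scores)
    next_pop = [population[i].copy() for i in order[:ELITE]]      # elitism
    while len(next_pop) < POP:
        p1, p2 = roulette(population, scores)
        child = p1.copy()
        if rng.random() < P_CROSS:                # uniform crossover
            mask = rng.random(child.size) < 0.5
            child[mask] = p2[mask]
        mut = rng.random(child.size) < P_MUT      # per-gene Gaussian mutation
        span = PARAM_BOUNDS[:, 1] - PARAM_BOUNDS[:, 0]
        child[mut] += rng.normal(0.0, 0.1 * span[mut])
        child = np.clip(child, PARAM_BOUNDS[:, 0], PARAM_BOUNDS[:, 1])
        next_pop.append(child)
    population = next_pop

best = min(population, key=run_vissim_and_score)  # calibrated parameter set
```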
5. Results
The intersection’s digital twin was effectively generated using the VISSIM software, faithfully replicating the behavior of vehicles as observed in the real world. This test took place on 5 June 2023 and lasted approximately 75 min, as designed. During this period, the LiDAR sensor recorded approximately 1881 vehicle observations, as shown in
Table 4. The data span a weekday afternoon peak hour from 14:30 to 15:45, capturing typical urban traffic conditions near the NJIT campus. This test duration and location provide a representative sample for evaluating intersection performance in an urban setting. Moreover, the sensor collected position, speed, classification type, and vehicle dimensions at 0.1 s intervals, enabling a high-resolution understanding of traffic dynamics. During this test run, the digital twin closely emulated real-world traffic conditions, which can be observed in
Figure 7, captured as a snapshot from the simulation. This figure contrasts two snapshots: on the left, a frame captured by a camera recording from an adjacent building near the intersection; and on the right, a corresponding snapshot from the digital twin platform, both taken at the same time.
After the simulation test phase, the VISSIM program was used to extract the crucial intersection mobility characteristics. These insightful results are carefully reported in
Table 3, providing important information on crucial elements of the intersection’s functionality. Note that these data are presented as samples; a wide array of additional traffic performance metrics can be collected from the intersection, including user-defined variables depending on study needs.
The VISSIM program also recorded detailed vehicle trajectory data that was later used in the SSAM 3.0 software [
32] to assess the safety conditions at the intersection. The Surrogate Safety Assessment Model (SSAM) was created to automatically identify, categorize, and evaluate traffic conflicts found in vehicle trajectory data produced by microscopic traffic simulation models such as VISSIM. SSAM’s main objective is to help analysts and traffic engineers understand the safety consequences of proposed traffic infrastructure. Through statistical analysis, SSAM can offer useful insights into the frequency and severity of conflicts, allowing analysts to make informed judgments concerning the design and implementation of safe traffic infrastructure [
33]. The findings of the SSAM 3.0 program are shown in
Table 5, which includes useful data like the outcomes of conflict analyses, safety assessments, and other pertinent metrics. Note that the maximum time-to-collision (TTC) and maximum post-encroachment time (PET) are assumed to be 1.5 s and 5 s, respectively [
34]. These findings may be very important for evaluating the intersection’s design performance and indicating potential improvement areas to increase efficiency and safety for all road users.
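To illustrate the surrogate-safety criterion applied here, the sketch below flags a conflict for a simple leader/follower pair when the time-to-collision falls below the 1.5 s threshold; it is a simplified illustration of the TTC concept with made-up values, not a reproduction of SSAM’s algorithms.

```python
# Simplified TTC check for a leader/follower pair (illustrative, not SSAM itself).
TTC_THRESHOLD = 1.5   # seconds, as assumed in the SSAM analysis above

def time_to_collision(gap_m, follower_speed_ms, leader_speed_ms):
    """TTC is defined only when the follower is closing on the leader."""
    closing_speed = follower_speed_ms - leader_speed_ms
    return gap_m / closing_speed if closing_speed > 0 else float("inf")

ttc = time_to_collision(gap_m=8.0, follower_speed_ms=12.0, leader_speed_ms=5.0)
if ttc < TTC_THRESHOLD:
    print(f"conflict flagged: TTC = {ttc:.2f} s")   # 8 / 7 ≈ 1.14 s -> conflict
```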
The calibration process was conducted to align the traditional VISSIM model more closely with real-world conditions by adjusting the driving behavior parameters.
Figure 8 illustrates the changes in total trajectory error between the digital twin platform and the simulation model throughout the optimization process using the Genetic Algorithm. As shown in the figure, the model successfully reduced discrepancies significantly.
This figure presents the total error between the simulated trajectories and those from the digital twin, where the best fitness value is the cumulative trajectory error computed as the area between the two trajectories in the time–space domain and expressed in m·s. In each generation, the GA searches for parameter combinations that reduce this cumulative discrepancy across all vehicles, using a population size of 40 over 100 generations. Although the optimization continued throughout all generations, the trend of the best fitness values indicates that the minimum error was reached around generation 55. The initial error was approximately 198,024 m·s and was reduced to 152,561 m·s after 100 generations. This reduction of approximately 23% indicates that the calibrated simulation matches observed vehicle movement patterns much more closely, confirming the effectiveness of the calibration procedure.
Table 6 presents the optimal values of the driving behavior parameters identified through the calibration process.
While this study does not include a direct comparative analysis with other existing traffic monitoring or calibration approaches, the focus is on validating the feasibility and performance of a novel real-time LiDAR-integrated digital twin framework. The highly customized nature of the platform, particularly the integration of trajectory replay, genetic algorithm-based calibration, and live synchronization with simulation, makes conventional benchmarks less applicable. However, future work may include systematic benchmarking against alternative sensing and modeling approaches (e.g., video-based or radar-based calibration systems) to quantify relative advantages.
6. Conclusions
This paper proposes a cutting-edge real-time digital twin platform for assessing intersection performance. Utilizing advanced LiDAR sensor technology, the platform successfully recorded real-time road user data, including local position, speed, and type (e.g., auto, truck, bus, pedestrian, bicyclist). This information was then fed into a server for processing, where several processes were carried out to create a digital twin representation of the intersection within the VISSIM environment. Positive results were observed throughout the proof-of-concept case study, which tested the feasibility and applicability of the proposed integrated platform.
Compared to conventional methods, combining high-resolution data with digital twin technology has several benefits for assessing intersection performance. First, using precise data from LiDAR technology enables a thorough comprehension of intersection dynamics and more exact monitoring of performance metrics. Second, the data’s real-time nature enables continuous traffic pattern monitoring and analysis, facilitating proactive decision-making in real-world circumstances. Third, unlike conventional simulation modeling approaches, the suggested digital twin approach does not need laborious calibration operations and can itself be used to calibrate traditional simulation models. In addition, traffic engineers, city planners, and transportation organizations can significantly benefit from the knowledge obtained from digital twin-based evaluation to make well-informed decisions for enhancing overall traffic management strategies, optimizing signal timing, and improving intersection design.
The newly introduced digital twin platform, which uses LiDAR technology for data collection, excels in efficiency, accuracy, and real-time capability for intersection performance assessment. This game-changing data collection technology offers accurate and thorough traffic data, establishing the foundation for a ground-breaking method of evaluating intersection performance. Furthermore, it is anticipated that the platform’s capabilities will be further improved by the continued development and improvement of LiDAR technology, making it an essential tool for developing smart cities and enhancing transportation systems in the future. The effects of this innovative platform are extensive. It not only gives academics and professionals in transportation the capacity to evaluate intersection performance in real time, but it also has the potential for a wide range of additional uses. The authors will soon expand the platform’s capabilities to support a variety of applications, such as traffic data collection, traffic congestion prediction, and traffic safety evaluation, among others.
While standalone LiDAR systems and certain video analytics platforms are capable of extracting real-time traffic metrics, they are inherently limited to descriptive analysis of existing conditions. In contrast, the proposed digital twin platform goes beyond real-time monitoring by integrating these high-resolution data streams into a microscopic traffic simulation environment. This enables advanced capabilities such as what-if scenario testing, behavioral calibration, predictive traffic forecasting, and infrastructure or signal timing optimization, none of which are supported by conventional LiDAR software. Moreover, by aligning simulated outputs with live traffic conditions, the platform supports a closed-loop feedback mechanism that can inform adaptive traffic control and proactive planning decisions. This integration of sensing and simulation constitutes a novel contribution to the field of urban digital infrastructure.
While the current study demonstrates the feasibility and effectiveness of the digital twin platform at a typical four-leg signalized intersection, its generalizability to more complex traffic environments should be considered in future research. The system can be scaled to larger or irregularly shaped intersections by deploying multiple LiDAR sensors with overlapping coverage zones and fusing their point cloud data. Additionally, the platform’s data processing pipeline can be extended to support multimodal traffic, including bicycles, scooters, and transit vehicles, by incorporating classification refinements and behavior-specific calibration models. Handling non-stationary conditions such as incident disruptions, construction zones, or weather-induced variability would require adaptive data-driven modules that update simulation parameters in real time. These directions represent promising future enhancements to increase the robustness and transferability of the proposed digital twin system in broader urban contexts.
Despite the outstanding results shown in this study, some challenges still need to be addressed in further investigations, such as position overlap between multiple road users, road users temporarily disappearing from detection, and apparent collisions between road users. These challenges mainly arise from limitations in the accuracy of LiDAR-based road user detection, which can exhibit undesirable performance in some situations. As researchers seek to enhance detection accuracy, addressing these constraints will be a primary focus of future research. Additionally, the platform’s adaptability goes beyond data associated with vehicles. By enhancing the proposed platform, it will be possible to build digital twin representations of vulnerable road users, such as pedestrians, bicyclists, and scooter riders, to protect them. Expanding the platform’s capabilities promises to broaden its applications and significantly impact transportation research and planning.
In conclusion, this work has introduced a real-time digital twin platform for intersection performance evaluation and demonstrated its potential to transform the area of transportation research and beyond. The platform creates the foundation for revolutionary improvements in traffic management and safety by merging cutting-edge technology and foreseeing its potential uses in the future. However, further research and development are going to be needed to realize its full potential and get past current constraints.