Closing Sim2Real Gaps: A Versatile Development and Validation Platform for Autonomous Driving Stacks
Abstract
1. Introduction
1.1. Motivation and Problem Statement
1.2. Contributions
- Conceptual framing of Sim2Real gaps for ADS. We analyze and relate the Reality Gap (RG) and the Performance Gap (PG) for autonomous driving stacks.
- We propose a Methodology for Closing Reality and Performance Gaps (MCRPG), which combines observation and metric analysis with parameter fine-tuning to align behaviors across domains and to maximize ADS performance, progressively reducing both gaps across three iterative stages (Digital Twin, Parallel Execution, and Real-World).
- We design and implement an open-source, cost-effective and robust Development and Validation Platform (DVP) that integrates CARLA, a real proving ground, and a vehicle through the Robot Operating System (ROS). The platform includes a built-from-scratch ADS and a fully developed real vehicle, enabling seamless switching among three iterative stages: Digital Twin, Parallel Execution, and Real-World.
- We define a metric suite for gap reduction based on two-level evaluation: (i) Reality-Alignment via Maximum Normalized Cross-Correlation (MNCC) between different signals (poses, steering, velocity, detections), and (ii) Ego-Vehicle Performance via safety, comfort and driving efficiency metrics.
- Through empirical validation across three iterative stages—Digital Twin, Parallel Execution, and Real-World—we demonstrate a progressive convergence between simulated and actual behavior, with performance indicators becoming increasingly consistent across stages and iterations.
- Beyond the methodology, we document a modular ADS and HW platform build that other groups can replicate or extend, fostering accessible Sim2Real research.
1.3. Paper Structure
2. Related Work
Beyond the State of the Art: Our Proposal
3. Methodology for Closing Reality and Performance Gaps
1. Digital Twin Stage (DT-Stg): the ADS is tested within a fully virtual environment without real-world interaction.
2. Parallel Execution Stage (PE-Stg): the real vehicle and the DT agent in the simulator act as mirrors, performing the same movements in parallel. The ADS generates actions that are applied to the real vehicle, which then transmits its real-time position to the agent in the simulator. The simulated environment generates the scenario's objects and targets, producing observations that combine real-world dynamics with enriched virtual components for analysis and testing.
3. Real-World Stage (RW-Stg): the ADS operates fully in the real world.
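To make the data flow of the Parallel Execution stage concrete, the following is a minimal sketch of one mirroring cycle. All interfaces (`get_real_pose`, `set_agent_pose`, `get_sim_observations`, `step`, `send`) are hypothetical placeholders, not the platform's actual ROS API:

```python
def parallel_execution_step(ads, vehicle, simulator):
    """One cycle of the PE-Stg mirror loop (illustrative interfaces only)."""
    # 1. The real vehicle reports its current pose.
    pose = vehicle.get_real_pose()
    # 2. The digital-twin agent mirrors that pose inside the simulator.
    simulator.set_agent_pose(pose)
    # 3. The simulator enriches the observation with virtual objects/targets.
    obs = simulator.get_sim_observations()
    # 4. The ADS computes an action from the combined observation...
    action = ads.step(obs)
    # 5. ...which is applied back to the real vehicle.
    vehicle.send(action)
    return action
```

In the real platform these calls would be ROS topic publications/subscriptions; the sketch only fixes the ordering of the mirror loop.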
- Actions (a): represent any navigation outputs generated by the ADS. These outputs are generic and can include commands—such as linear and angular velocities, local trajectory, or other control signals—used for decision-making and navigation.
- Observations (o): represent the data extracted from the environment through various sensors. This can include information such as positional data, jerk, steering angles, lateral errors, object detections, and other relevant metrics that provide situational awareness to the ADS.
- ADS: refers to the SW that is being developed, implemented, and enhanced. The proposed MCRPG is designed for a generic architecture capable of processing observations in real-time to generate corresponding actions.
- Simulation: The simulation encapsulates the ego-vehicle, sensors, test environment, targets, and the scenario generation tools. It is a hyper-realistic environment used to test and refine the ADS in a controlled and reproducible manner, allowing for extensive experimentation without the risks and costs associated with real-world testing.
- Real-world: The real-world environment includes the ego-vehicle, sensors, test environment, and targets. Unlike the simulation, this environment involves testing the ADS in live scenarios.
- ADS’s Configuration Parameters (P): These parameters are intrinsic to the ADS and include settings like sampling rate, measurement errors, distance between waypoints, map name, max. steer angle, wheel track dist., sensor ranges, detection thresholds, and more. Tuning these parameters allows for minimizing the PG by improving system performance.
- Simulator’s Setting Parameters (S): These parameters govern the simulation environment and include factors such as synchronous mode, frame rate, map name, ego mass, torque curves, sensor ranges, sensor noises, weather conditions, agent dynamics, and more. Tuning these parameters can reduce the RG by making the simulation more representative of real-world conditions.
- Tuning Process: In this work, the tuning process employs a derivative-free coordinate-search optimization over predefined ranges of P and S. For each parameter, a small discrete set of candidate values is evaluated using the logs of actions and observations, and the value that maximizes the reality-alignment and ego-performance metrics is selected. This procedure is repeated iteratively until no further improvement is observed.
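The coordinate-search procedure described above can be sketched as follows. This is a minimal generic implementation, not the platform's actual tuning code; the `score` callback stands in for the metric evaluation over action/observation logs:

```python
def coordinate_search(params, candidates, score, max_rounds=10):
    """Derivative-free coordinate search: sweep each parameter over a small
    discrete candidate set, keep the best value found so far, and repeat
    until a full pass produces no improvement.

    params:     dict of initial parameter values (P and S together).
    candidates: dict mapping each parameter name to its candidate values.
    score:      callable evaluating a full parameter dict (higher is better).
    """
    best = dict(params)
    best_score = score(best)
    for _ in range(max_rounds):
        improved = False
        for name, values in candidates.items():
            for v in values:
                trial = dict(best)
                trial[name] = v
                s = score(trial)
                if s > best_score:
                    best, best_score = trial, s
                    improved = True
        if not improved:
            break  # converged: no parameter change helps
    return best, best_score
```

In practice each `score` call corresponds to replaying or re-running a scenario and computing the MNCC and performance metrics, which is why only small candidate sets per parameter are affordable.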
3.1. Digital Twin Stage
- P: ADS parameters (tunable).
- S: Simulator settings (tunable).
- ADS control function, governed by P.
- Simulator model, governed by S.
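The elements above form a closed loop; in symbols (notation introduced here for compactness, not taken from the original equations), one Digital Twin step can be written as:

```latex
% The ADS, governed by parameters P, maps the current observation to an
% action; the simulator, governed by settings S, maps that action to the
% next observation. Tuning adjusts P and S across iterations.
a_t = f_{P}(o_t), \qquad o_{t+1} = g_{S}(a_t)
```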
3.2. Parallel Execution Stage
3.3. Real-World Stage
3.4. Iterative Cycle
4. Our Development and Validation Platform (DVP)
4.1. ADS: An Overview of Our Autonomous Driving Stack
- Mapping: This layer is built upon OpenDRIVE, a standardized HD map format developed by the Association for Standardization of Automation and Measuring Systems (ASAM) to represent static road networks for driving simulation environments. As detailed in [20], it is divided into two main modules:
  - Map Parser: Processes the HD Map and extracts relevant information such as lane geometries and traffic light positions.
  - Map Monitor: Continuously monitors the vehicle's surroundings using its current position, map data, and planned local trajectory. It identifies relevant lanes and regulatory elements, such as the current lane, upcoming intersection lanes, crosswalk areas, traffic lights, and more.
- Localization: Combines Global Navigation Satellite System (GNSS) and wheel odometry data through an Extended Kalman Filter (EKF) to estimate the vehicle’s pose in real-time. It also publishes the transform tree that links all coordinate frames (e.g., world, map, ego-vehicle, sensors or other attached elements). This transform system enables ROS to compute frame transformations efficiently, one of the key strengths of the middleware.
- Perception: Implements sensor fusion techniques using LiDAR and camera data.
  - Camera: Uses YOLOv8 [21] for real-time object detection in the images. Bounding boxes are projected into 3D using the camera's intrinsic parameters to approximate object location.
  - LiDAR: Employs the MMDetection3D [22] framework with the PointPillars architecture for 3D object detection. The model detects and clusters dynamic objects, such as vehicles and pedestrians.
  - Object Fusion: Combines detections from both sensors using a late fusion approach to ensure accurate data association and to integrate complementary detection information. The fusion module outputs an Oriented Bounding Box (OBB) for every object, along with its 2D pose (x, y, yaw) relative to the ego-vehicle.
- Planning: This layer is divided into two main modules:
  - Global Path Planning: Computes the optimal global route using a Dijkstra-based graph algorithm, generating a topological path composed of roads and lanes. It includes a Lane Graph Planner (LGP), which builds the route graph by considering lane connections and associated travel costs, and a Lane Waypoint Planner (LWP), which discretizes the path into 3D waypoints spaced at fixed intervals for navigation. Additional details can be found in [20].
  - Decision-Making: Based on Hierarchical Interpreted Binary Petri Nets (HIBPNs), this module encodes discrete driving behaviors such as traffic light handling, pedestrian crossings, emergency stops, and other rule-based traffic interactions. Refer to [23] for further details.
- Tracking Controller: This module follows a sequence of waypoints by generating a smooth trajectory using cubic spline interpolation. Lateral control is handled through a Linear Quadratic Regulator (LQR), which computes the steering angle command sent to the Steer-By-Wire (SBW) system. Longitudinal control is performed independently: the desired speed is dynamically adjusted based on path curvature through a predefined speed profile, and the Throttle-By-Wire (TBW) regulates the throttle/brake commands to achieve the requested speed. In addition, a delay compensation module predicts future positions to mitigate the impact of actuator delays. The tracking controller also works jointly with an MPC, which introduces a smooth offset in the trajectory when obstacle avoidance is required, ensuring safe maneuvering without abrupt deviations. This approach ensures accurate and stable tracking performance, especially in urban driving scenarios. For an in-depth explanation, refer to [24].
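As a reference for how a curvature-based speed profile of the kind described above could be derived from spline waypoints, here is a minimal sketch. The function names and the limits `v_max` and `a_lat_max` are illustrative assumptions, not the stack's actual parameters:

```python
import numpy as np

def curvature(xs, ys):
    """Discrete curvature of a planar path sampled at waypoints:
    kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2) (parameterization-invariant)."""
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    denom = (dx**2 + dy**2) ** 1.5
    return np.abs(dx * ddy - dy * ddx) / np.maximum(denom, 1e-9)

def speed_profile(xs, ys, v_max=5.5, a_lat_max=2.0):
    """Cap the target speed so lateral acceleration v^2 * kappa stays
    below a_lat_max, bounded by a global speed limit v_max."""
    kappa = curvature(np.asarray(xs, float), np.asarray(ys, float))
    v_curve = np.sqrt(a_lat_max / np.maximum(kappa, 1e-9))
    return np.minimum(v_max, v_curve)
```

On a straight segment the profile saturates at `v_max`; on a circular arc of radius R it settles near `sqrt(a_lat_max * R)`, which is the usual lateral-acceleration bound.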
4.2. Simulation: CARLA as Our Framework
4.3. Real-World: TABBY EVO as Our HW Platform
- Localization (RearECU): Raspberry Pi 4 Model B (located in the rear trunk).
- Drive-By-Wire (DBW): Raspberry Pi 4 Model B (installed near the steering wheel).
- Human–Machine Interface (HMI): Jetson Xavier NX (mounted near the dashboard screens).
- Perception (RackECU): Jetson AGX Orin (mounted near the camera and LiDAR on the roof).
- Main PC: HP Omen 17-CK0000/Intel i7-11800H/32GB RAM/RTX 3070. Responsible for executing all remaining processes not handled by other units, in addition to launching and orchestrating all distributed processes.
- Secondary PC: Identical to the main PC, used for running CARLA during the PE-Stg.
5. Experiments
5.1. The Scenario: Design and Generation
1. Overtaking: While traveling at a speed below 30 km/h, the ego-vehicle encounters a stationary target vehicle in its lane. To proceed, the ego-vehicle must detect the obstacle and perform a safe overtaking maneuver.
2. Pedestrian Crossing: Continuing at a speed below 30 km/h, the ego-vehicle approaches a pedestrian crossing. Here, it must detect and respond appropriately to a pedestrian crossing the street at a speed of approximately 2.0 m/s, moving perpendicularly across the ego-vehicle's path.
5.2. The Evaluation Metrics
5.2.1. Reality Alignment Metrics
- $\bar{x}$ is the mean value of the simulated signal $x$;
- $\bar{y}$ is the mean value of the real-world signal $y$.
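A minimal sketch of how the Maximum Normalized Cross-Correlation between a simulated and a real signal could be computed (assuming equal-length, uniformly sampled logs; the normalization convention here is ours):

```python
import numpy as np

def mncc(x, y):
    """Maximum Normalized Cross-Correlation between two signals:
    mean-center both, normalize by their energies, correlate over all
    lags, and take the maximum. Returns a value in [-1, 1]; values near
    1 indicate the signals match up to a time shift."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    denom = np.sqrt(np.sum(x**2) * np.sum(y**2))
    if denom == 0.0:
        return 0.0  # at least one signal is constant; correlation undefined
    corr = np.correlate(x, y, mode="full") / denom
    return float(np.max(corr))
```

In the paper's evaluation this would be applied per signal pair (ego poses, steering, velocity, detections) between stages.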
5.2.2. Ego-Vehicle Performance Metrics
- (a) Safety Metrics
- Maximum Inverse Time-To-Collision (Max. $\mathrm{TTC}^{-1}$): We employ a geometric TTC estimator based on the corner–edge collision method introduced in [40], which handles oriented objects moving along arbitrary 2D trajectories by representing each object with OBBs, as we show in Figure 11. Under constant velocity and fixed orientation, the TTC is obtained by computing the first-contact event between each corner of object i and each edge of object j, achieved by equating their respective parametric motions:

$$\mathbf{p}_k^{\,i} + \mathbf{v}_i\, t \;=\; \mathbf{e}_m^{\,j} + \mathbf{v}_j\, t + s\, \mathbf{d}_m^{\,j}$$

where:
  - Object i denotes one object, and j denotes the other object.
  - OBB i has four corners $\mathbf{p}_k^{\,i}$ with $k \in \{1, \ldots, 4\}$.
  - OBB j has four edges indexed by $m \in \{1, \ldots, 4\}$, each defined by its endpoints $\mathbf{e}_m^{\,j}$ and $\mathbf{e}_{m+1}^{\,j}$, and its direction vector $\mathbf{d}_m^{\,j} = \mathbf{e}_{m+1}^{\,j} - \mathbf{e}_m^{\,j}$.
A pair $(t_{k,m}, s_{k,m})$ represents a physically valid collision only if $t_{k,m} \geq 0$ and $s_{k,m} \in [0, 1]$. By evaluating all corner–edge interactions, the TTC is defined as the minimum among all valid collision times:

$$\mathrm{TTC} = \min_{k,m} \left\{\, t_{k,m} \;\middle|\; t_{k,m} \geq 0,\; s_{k,m} \in [0, 1] \,\right\}$$

The associated risk metric is the maximum inverse TTC observed over the scenario time interval:

$$\text{Max. } \mathrm{TTC}^{-1} = \max_{t} \frac{1}{\mathrm{TTC}(t)}$$

- Time Exposed to Risk (TET): This metric quantifies the total duration during which the ego-vehicle operates under hazardous conditions, defined by a TTC value falling below a critical threshold $\mathrm{TTC}^{*}$. It is computed as:

$$\mathrm{TET} = \sum_{t=1}^{T} \delta_j(t)\, \Delta t, \qquad \delta_j(t) = \begin{cases} 1 & \text{if } \mathrm{TTC}_j(t) \leq \mathrm{TTC}^{*} \\ 0 & \text{otherwise} \end{cases}$$

where:
  - $\delta_j(t)$ is the switching variable indicating hazardous conditions;
  - $\mathrm{TTC}_j(t)$ is the TTC for vehicle j at time step t;
  - $\mathrm{TTC}^{*}$ is the critical TTC threshold;
  - $\Delta t$ is the simulation time step;
  - T is the total number of simulation steps.
Note: In all experiments, the critical threshold $\mathrm{TTC}^{*}$ was set to a fixed value representative of the typical low-speed urban scenarios (<30 km/h) considered in our study (see Section 5.1).
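The corner–edge first-contact computation described above reduces, per corner–edge pair, to a 2×2 linear system in the collision time t and the edge parameter s. The following is a sketch under the stated constant-velocity assumption (for brevity it only tests the corners of i against the edges of j; a full implementation, as in [40], would also test the corners of j against the edges of i):

```python
import numpy as np

def corner_edge_ttc(corners_i, v_i, corners_j, v_j):
    """Minimum valid first-contact time between the corners of OBB i and
    the edges of OBB j, assuming constant velocities and fixed orientation.
    Solves p + v_rel*t = e1 + s*d for each (corner, edge) pair and keeps
    solutions with t >= 0 and s in [0, 1]. Returns np.inf if none exist."""
    v_rel = np.asarray(v_i, float) - np.asarray(v_j, float)
    best = np.inf
    n = len(corners_j)
    for p in np.asarray(corners_i, float):
        for m in range(n):
            e1 = np.asarray(corners_j[m], float)
            d = np.asarray(corners_j[(m + 1) % n], float) - e1
            # Unknowns (t, s): columns are v_rel and -d.
            A = np.column_stack([v_rel, -d])
            if abs(np.linalg.det(A)) < 1e-12:
                continue  # relative motion parallel to this edge
            t, s = np.linalg.solve(A, e1 - p)
            if t >= 0.0 and 0.0 <= s <= 1.0:
                best = min(best, t)
    return best
```

The Max. TTC⁻¹ metric then follows by evaluating this over the logged trajectory and taking the maximum of the reciprocals.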
- (b) Comfort Metrics
- Maximum Jerk: Jerk is the time derivative of acceleration and is widely used as a measure of driving smoothness. Higher jerk values indicate abrupt changes in motion:

$$\text{Max. Jerk} = \max_{t} \left| \frac{d a_x(t)}{dt} \right|$$

where $a_x(t)$ is the longitudinal acceleration.
- Maximum Yaw Rate: The yaw rate quantifies the rate of rotation around the vehicle's vertical axis and reflects the aggressiveness of turning maneuvers:

$$\text{Max. Yaw Rate} = \max_{t} \left| \dot{\psi}(t) \right|$$

where $\psi(t)$ is the yaw angle at time t.
- (c) Driving Efficiency Metrics
- Task Completion Time: This metric records the total time required by the ego-vehicle to complete the predefined scenario, from the starting zone to the goal. It reflects the temporal efficiency of the system.
- Lateral Root Mean Square Error (Lateral RMSE): The lateral RMSE quantifies the average deviation of the ego-vehicle from the centerline of the target lane over time. It is defined as:

$$\text{Lateral RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} e_{\mathrm{lat},i}^{2}}$$

where $e_{\mathrm{lat},i}$ is the lateral distance between the ego-vehicle's position and the lane center at time step i. Lower values indicate better trajectory tracking performance.
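The comfort and efficiency metrics above are straightforward to compute from logged signals. A minimal sketch, assuming uniformly sampled logs of longitudinal acceleration, yaw angle, and lateral error (function and key names are ours):

```python
import numpy as np

def comfort_and_efficiency(ax, yaw, lat_err, dt):
    """Compute Max. Jerk, Max. Yaw Rate, and Lateral RMSE from logs.
    ax: longitudinal acceleration [m/s^2], yaw: yaw angle [rad],
    lat_err: lateral deviation from the lane center [m], dt: sample time [s]."""
    jerk = np.gradient(np.asarray(ax, float), dt)                   # d a_x / dt
    yaw_rate = np.gradient(np.unwrap(np.asarray(yaw, float)), dt)   # d psi / dt
    rmse = np.sqrt(np.mean(np.asarray(lat_err, float) ** 2))
    return {
        "max_jerk": float(np.max(np.abs(jerk))),          # [m/s^3]
        "max_yaw_rate": float(np.max(np.abs(yaw_rate))),  # [rad/s]
        "lateral_rmse": float(rmse),                      # [m]
    }
```

Note the `np.unwrap` on the yaw angle: without it, wrap-arounds at ±π would appear as spurious spikes in the yaw rate.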
5.3. The Results: Performance Comparison Across the Stages
6. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Krizsik, N.; Sipos, T. Social Perception of Autonomous Vehicles. Period. Polytech. Transp. Eng. 2023, 51, 133–139. [Google Scholar] [CrossRef]
- Zhao, J.; Zhao, W.; Deng, B.; Wang, Z.; Zhang, F.; Zheng, W.; Cao, W.; Nan, J.; Lian, Y.; Burke, A.F. Autonomous driving system: A comprehensive survey. Expert Syst. Appl. 2024, 242, 122836. [Google Scholar] [CrossRef]
- Khan, M.A.; Sayed, H.E.; Malik, S.; Zia, T.; Khan, J.; Alkaabi, N.; Ignatious, H. Level-5 Autonomous Driving—Are We There Yet? A Review of Research Literature. ACM Comput. Surv. 2022, 55, 27. [Google Scholar] [CrossRef]
- SAE J 3016; Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International, On-Road Automated Driving (ORAD) Committee: Warrendale, PA, USA, 2021. [CrossRef]
- Höfer, S.; Bekris, K.; Handa, A.; Gamboa, J.C.; Mozifian, M.; Golemo, F.; Atkeson, C.; Fox, D.; Goldberg, K.; Leonard, J.; et al. Sim2Real in Robotics and Automation: Applications and Challenges. IEEE Trans. Autom. Sci. Eng. 2021, 18, 398–400. [Google Scholar] [CrossRef]
- Revell, J.; Welch, D.; Hereford, J. Sim2real: Issues in transferring autonomous driving model from simulation to real world. In Proceedings of the SoutheastCon 2022, Mobile, AL, USA, 22–31 March 2022; pp. 296–301. [Google Scholar] [CrossRef]
- Salvato, E.; Fenu, G.; Medvet, E.; Pellegrino, F.A. Crossing the Reality Gap: A Survey on Sim-to-Real Transferability of Robot Controllers in Reinforcement Learning. IEEE Access 2021, 9, 153171–153187. [Google Scholar] [CrossRef]
- Ali, W.A.; Fanti, M.P.; Roccotelli, M.; Ranieri, L. A Review of Digital Twin Technology for Electric and Autonomous Vehicles. Appl. Sci. 2023, 13, 5871. [Google Scholar] [CrossRef]
- Gutiérrez-Moreno, R.; Barea, R.; López-Guillén, E.; Arango, F.; Revenga, P.; Bergasa, L.M. Decision Making for Autonomous Driving Stack: Shortening the Gap from Simulation to Real-World Implementations. In Proceedings of the 2024 IEEE Intelligent Vehicles Symposium (IV), Jeju Island, Republic of Korea, 2–5 June 2024; pp. 3107–3113. [Google Scholar] [CrossRef]
- Stocco, A.; Pulfer, B.; Tonella, P. Mind the Gap! A Study on the Transferability of Virtual Versus Physical-World Testing of Autonomous Driving Systems. IEEE Trans. Softw. Eng. 2023, 49, 1928–1940. [Google Scholar] [CrossRef]
- Pasios, S.; Nikolaidis, N. CARLA2Real: A tool for reducing the sim2real gap in CARLA simulator. arXiv 2024, arXiv:2410.18238. [Google Scholar] [CrossRef]
- Garcia Daza, I.; Izquierdo, R.; Martinez, L.M.; Benderius, O.; Fernández-Llorca, D. Sim-to-real transfer and reality gap modeling in model predictive control for autonomous driving. Appl. Intell. 2022, 53, 12719–12735. [Google Scholar] [CrossRef]
- Hu, X.; Li, S.; Huang, T.; Tang, B.; Huai, R.; Chen, L. How Simulation Helps Autonomous Driving: A Survey of Sim2real, Digital Twins, and Parallel Intelligence. IEEE Trans. Intell. Veh. 2024, 9, 593–612. [Google Scholar] [CrossRef]
- Chen, L.; Wu, P.; Chitta, K.; Jaeger, B.; Geiger, A.; Li, H. End-to-End Autonomous Driving: Challenges and Frontiers. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 10164–10183. [Google Scholar] [CrossRef] [PubMed]
- Bai, X.; Luo, Y.; Jiang, L.; Gupta, A.; Kaveti, P.; Singh, H.; Ostadabbas, S. Bridging the Domain Gap between Synthetic and Real-World Data for Autonomous Driving. ACM J. Auton. Transp. Syst. 2024, 1, 9. [Google Scholar] [CrossRef]
- Jain, N.; Burman, E.; Stamp, S.; Mumovic, D.; Davies, M. Cross-sectoral assessment of the performance gap using calibrated building energy performance simulation. Energy Build. 2020, 224, 110271. [Google Scholar] [CrossRef]
- Gómez-Huélamo, C.; Díaz, A.; Araluce, J.; Ortiz, M.; Gutierrez, R.; Arango, F.; Llamazares, A.; Bergasa, L. How to build and validate a safe and reliable Autonomous Driving stack? A ROS based software modular architecture baseline. In Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 4–9 June 2022; pp. 1282–1289. [Google Scholar] [CrossRef]
- Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 12–17 May 2009; p. 5. [Google Scholar]
- Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An Open Urban Driving Simulator. In Proceedings of the 1st Annual Conference on Robot Learning, Mountain View, CA, USA, 13–15 November 2017; Levine, S., Vanhoucke, V., Goldberg, K., Eds.; Proceedings of Machine Learning Research; MLResearchPress: Cambridge, MA, USA, 2017; pp. 1–16. [Google Scholar]
- Díaz, A.; Ocaña, M.; Llamazares, A.; Gómez-Huélamo, C.; Revenga, P.; Bergasa, L. HD maps: Exploiting OpenDRIVE potential for Path Planning and Map Monitoring. In Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 4–9 June 2022; pp. 1211–1217. [Google Scholar] [CrossRef]
- Jocher, G.; Qiu, J.; Chaurasia, A. Yolov8. Available online: https://github.com/ultralytics/ultralytics (accessed on 1 October 2025).
- Mmdetection3d: OpenMMLab’s Next-Generation Platform for General 3D Object Detection. Available online: https://github.com/open-mmlab/mmdetection3d (accessed on 1 October 2025).
- Huélamo, C.G.; del Egido, J.; Bergasa, L.M.; Barea, R.; Guillén, M.E.L.; Arango, J.F.; Araluce, J.; López, J. Train Here, Drive There: Simulating Real-World Use Cases with Fully-Autonomous Driving Architecture in CARLA Simulator. In Proceedings of the Workshop de Agentes Físicos, Virtual, 22–23 October 2020. [Google Scholar]
- Gutiérrez, R.; López-Guillén, E.; Bergasa, L.M.; Barea, R.; Pérez, O.; Gómez-Huélamo, C.; Arango, F.; del Egido, J.; López-Fernández, J. A Waypoint Tracking Controller for Autonomous Road Vehicles Using ROS Framework. Sensors 2020, 20, 4062. [Google Scholar] [CrossRef] [PubMed]
- Generating Maps in RoadRunner—CARLA Simulator. Available online: https://carla.readthedocs.io/en/0.9.14/tuto_M_generate_map/ (accessed on 1 October 2025).
- RoadRunner—MATLAB. Available online: https://www.mathworks.com/products/roadrunner.html (accessed on 1 October 2025).
- CARLA ScenarioRunner. Available online: https://scenario-runner.readthedocs.io/en/latest/ (accessed on 1 October 2025).
- Chen, H.; Ren, H.; Li, R.; Yang, G.; Ma, S. Generating Autonomous Driving Test Scenarios based on OpenSCENARIO. In Proceedings of the 2022 9th International Conference on Dependable Systems and Their Applications (DSA), Wulumuqi, China, 14–15 July 2022; pp. 650–658. [Google Scholar] [CrossRef]
- OSVehicle TABBY EVO Open Source Platform|Open Motors. Available online: https://www.openmotors.co/product/tabbyevo/ (accessed on 1 October 2025).
- Han, Y. Research on PID-based Automatic Vehicle Steering System. In Proceedings of the 2025 IEEE 5th International Conference on Electronic Technology, Communication and Information (ICETCI), Changchun, China, 23–25 May 2025; pp. 205–208. [Google Scholar] [CrossRef]
- Arango, J.F.; Bergasa, L.M.; Revenga, P.; Barea, R.; López-Guillén, E.; Gómez-Huélamo, C.; Araluce, J.; Gutiérrez, R. Drive-By-Wire Development Process Based on ROS for an Autonomous Electric Vehicle. Sensors 2020, 20, 6121. [Google Scholar] [CrossRef] [PubMed]
- Tradacete, M.; Sáez, Á.; Arango, J.F.; Gómez Huélamo, C.; Revenga, P.; Barea, R.; López-Guillén, E.; Bergasa, L.M. Positioning System for an Electric Autonomous Vehicle Based on the Fusion of Multi-GNSS RTK and Odometry by Using an Extended Kalman Filter. In Advances in Physical Agents; Fuentetaja Pizán, R., García Olaya, Á., Sesmero Lorente, M.P., Iglesias Martínez, J.A., Ledezma Espino, A., Eds.; Springer: Cham, Switzerland, 2019; pp. 16–30. [Google Scholar]
- Moore, T.; Stouch, D. A Generalized Extended Kalman Filter Implementation for the Robot Operating System. In Intelligent Autonomous Systems 13; Menegatti, E., Michael, N., Berns, K., Yamaguchi, H., Eds.; Springer: Cham, Switzerland, 2016; pp. 335–348. [Google Scholar]
- Neurohr, C.; Westhofen, L.; Henning, T.; de Graaff, T.; Möhlmann, E.; Böde, E. Fundamental Considerations around Scenario-Based Testing for Automated Driving. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 121–127. [Google Scholar] [CrossRef]
- Cai, J.; Yang, S.; Guang, H. A Review on Scenario Generation for Testing Autonomous Vehicles. In Proceedings of the 2024 IEEE Intelligent Vehicles Symposium (IV), Jeju Island, Republic of Korea, 2–5 June 2024; pp. 3371–3376. [Google Scholar] [CrossRef]
- Zhou, J.; Wang, L.; Wang, X. Online Adaptive Generation of Critical Boundary Scenarios for Evaluation of Autonomous Vehicles. IEEE Trans. Intell. Transp. Syst. 2023, 24, 6372–6388. [Google Scholar] [CrossRef]
- Paniego, S.; Calvo-Palomino, R.; Cañas, J. Behavior metrics: An open-source assessment tool for autonomous driving tasks. SoftwareX 2024, 26, 101702. [Google Scholar] [CrossRef]
- Su, Y.; Wang, L. Integrated Framework for Test and Evaluation of Autonomous Vehicles. J. Shanghai Jiaotong Univ. 2021, 26, 699–712. [Google Scholar] [CrossRef]
- Zhou, J.; Wang, L.; Wang, X. Scalable evaluation methods for autonomous vehicles. Expert Syst. Appl. 2024, 249, 123603. [Google Scholar] [CrossRef]
- Tøttrup, D.; Skovgaard, S.L.; Sejersen, J.l.F.; Pimentel de Figueiredo, R. A Real-Time Method for Time-to-Collision Estimation from Aerial Images. J. Imaging 2022, 8, 62. [Google Scholar] [CrossRef] [PubMed]














| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Length | 3030 mm | Max. Climb | 25% |
| Width | 1488 mm | Turn Radius | 5 m |
| Height | 1380 mm | Max. Steering Angle | ±20° |
| Weight (Batteries incl.) | 730 kg | Max. Motor Power | 29.5 kW @ 2500 rpm |
| Batteries Weight | 140 kg | Max. Motor Torque | 128 N·m @ 500 rpm |
| Wheelbase | 2360 mm | Max. Motor RPM | 5500 rpm |
| Track Width | 1315 mm | Rated Voltage | 80 Vac |
| Top Speed | 120 km/h | Max. Current | 400 A |
| Nominal Range | 100 km | Max. Power Factor | 0.97 |
| Tires | 175/55R15 | Reduction Gearbox Ratio | 5.8:1 |
| Parameters Set | Parameter | Description |
|---|---|---|
| P | LQR_v_max (m/s) | Maximum speed allowed for the tracking controller. |
| | LQR_rc_max (m) | Maximum allowed radius of curvature, inversely related to speeds while cornering. |
| | LQR_q11 (−) | Weight on lateral error; higher values keep the ego closer to the path but may increase oscillations. |
| | LQR_q22 (−) | Weight on heading error; higher values allow faster alignment and reduce straight-line oscillations. |
| | LQR_r11 (−) | Weight on steering-effort changes; higher values produce smoother/slower steering responses. |
| | MPC_w (m) | Lateral offset used by the MPC to avoid obstacles. |
| | DECISION_PC_d_max (m) | Distance used to calculate Pedestrian Crossing velocities. |
| | DECISION_PC_d_min (m) | Distance at which the ego must stop if the Pedestrian Crossing is occupied. |
| | DECISION_OT_d_max (m) | Distance at which the overtaking maneuver can start. |
| | DECISION_OT_speed (m/s) | Speed at which the overtaking starts. |
| | FUSION_min_dist (m) | Minimum matching distance for camera–LiDAR fusion. |
| | LIDAR_x_max (m) | Longitudinal cropping limit of the LiDAR point cloud in front of the ego. |
| | LIDAR_abs_y_max (m) | Lateral cropping limit of the LiDAR point cloud on both sides. |
| S | PEDESTRIAN_speed (m/s) | Pedestrian speed used in scenario definition. |
| | EGO_mass (kg) | Ego-vehicle mass in the CARLA simulator. |
| | EGO_max_steer_angle (°) | Maximum steering angle of the front wheels. |
| | EGO_dr_full_throttle (−) | Internal resistance of the engine to acceleration; low values indicate rapid acceleration (e.g., sports or electric vehicles). |
| | EGO_dr_zero_throttle (−) | Engine deceleration rate when the accelerator is released. |
| | EGO_moi (−) | Parameter affecting how quickly engine RPM increases or decreases. |
| | THROTTLE_kp (−) | Proportional gain of the throttle controller in CARLA; higher values yield a more reactive response but may cause overshoot. |
| | THROTTLE_ki (−) | Integral gain of the throttle controller; helps eliminate steady-state speed error. |
| | THROTTLE_delay (s) | Delay implemented to match real-world behavior; TABBY EVO presents a default throttle lag. |
| | BRAKE_kp (−) | Proportional gain of the brake controller. |
| | BRAKE_ki (−) | Integral gain of the brake controller. |
| | GNSS_x (m) | Longitudinal position of the GNSS relative to the ego's geometric center. |
| | CAMERA_x (m) | Longitudinal position of the Camera relative to the ego's geometric center. |
| Parameters Set | Parameter | 1st It. DT-Stg | 1st It. PE-Stg | 1st It. RW-Stg | … | 5th It. DT-Stg | 5th It. PE-Stg | 5th It. RW-Stg | … | 10th It. DT-Stg | 10th It. PE-Stg | 10th It. RW-Stg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| P | LQR_v_max (m/s) | 7.0000 | 5.5000 | 5.5000 | … | 5.5000 | 5.5000 | 5.5000 | … | 5.5000 | 5.5000 | 5.5000 |
| | LQR_rc_max (m) | 25.000 | 25.000 | 25.000 | … | 30.000 | 30.000 | 30.000 | … | 40.000 | 40.000 | 40.000 |
| | LQR_q11 (−) | 20.000 | 20.000 | 20.000 | … | 30.000 | 30.000 | 30.000 | … | 30.000 | 30.000 | 30.000 |
| | LQR_q22 (−) | 100.00 | 10.000 | 10.000 | … | 1.0000 | 1.0000 | 1.0000 | … | 0.0001 | 0.0001 | 0.0001 |
| | LQR_r11 (−) | 9000.0 | 5000.0 | 5000.0 | … | 5000.0 | 5000.0 | 5000.0 | … | 5000.0 | 5000.0 | 5000.0 |
| | MPC_w (m) | 3.5000 | 2.7500 | 2.7500 | … | 2.7500 | 2.7500 | 2.7500 | … | 2.7500 | 2.7500 | 2.7500 |
| | DECISION_PC_d_max (m) | 30.000 | 30.000 | 30.000 | … | 30.000 | 30.000 | 30.000 | … | 40.000 | 40.000 | 40.000 |
| | DECISION_PC_d_min (m) | 10.000 | 10.000 | 10.000 | … | 10.000 | 15.000 | 15.000 | … | 15.000 | 15.000 | 15.000 |
| | DECISION_OT_d_max (m) | 40.000 | 40.000 | 40.000 | … | 40.000 | 30.000 | 30.000 | … | 30.000 | 30.000 | 30.000 |
| | DECISION_OT_speed (m/s) | 7.0000 | 7.0000 | 7.0000 | … | 7.0000 | 5.0000 | 5.0000 | … | 5.0000 | 5.0000 | 5.0000 |
| | FUSION_min_dist (m) | 2.5000 | 2.5000 | 2.5000 | … | 2.5000 | 2.5000 | 4.0000 | … | 4.5000 | 4.5000 | 4.5000 |
| | LIDAR_x_max (m) | 50.000 | 50.000 | 50.000 | … | 50.000 | 50.000 | 30.000 | … | 30.000 | 30.000 | 30.000 |
| | LIDAR_abs_y_max (m) | 10.000 | 10.000 | 10.000 | … | 10.000 | 10.000 | 10.000 | … | 5.0000 | 5.0000 | 5.0000 |
| S | PEDESTRIAN_speed (m/s) | 3.0000 | 1.7500 | 1.7500 | … | 1.7500 | 1.7500 | 1.7500 | … | 1.7500 | 1.7500 | 1.7500 |
| | EGO_mass (kg) | 1000.0 | 1000.0 | 1000.0 | … | 4000.0 | 4000.0 | 4000.0 | … | 8500.0 | 8500.0 | 8500.0 |
| | EGO_max_steer_angle (°) | 25.000 | 25.000 | 25.000 | … | 25.000 | 25.000 | 25.000 | … | 20.000 | 20.000 | 20.000 |
| | EGO_dr_full_throttle (−) | 0.1500 | 0.1500 | 0.1500 | … | 0.0500 | 0.0500 | 0.0500 | … | 0.0500 | 0.0500 | 0.0500 |
| | EGO_dr_zero_throttle (−) | 0.1000 | 0.1000 | 0.1000 | … | 0.5000 | 0.5000 | 0.5000 | … | 0.5000 | 0.5000 | 0.5000 |
| | EGO_moi (−) | 1.0000 | 1.0000 | 1.0000 | … | 0.7500 | 0.7500 | 0.7500 | … | 0.7500 | 0.7500 | 0.7500 |
| | THROTTLE_kp (−) | 0.5000 | 0.5000 | 0.5000 | … | 1.2500 | 1.2500 | 1.2500 | … | 1.2500 | 1.2500 | 1.2500 |
| | THROTTLE_ki (−) | 0.0250 | 0.0250 | 0.0250 | … | 0.0050 | 0.0050 | 0.0050 | … | 0.0050 | 0.0050 | 0.0050 |
| | THROTTLE_delay (s) | - | - | - | … | 0.1500 | 0.1500 | 0.1500 | … | 0.1500 | 0.1500 | 0.1500 |
| | BRAKE_kp (−) | 1.0000 | 1.0000 | 1.0000 | … | 0.0900 | 0.0900 | 0.0900 | … | 0.0900 | 0.0900 | 0.0900 |
| | BRAKE_ki (−) | 0.0000 | 0.0000 | 0.0000 | … | 0.0005 | 0.0005 | 0.0005 | … | 0.0005 | 0.0005 | 0.0005 |
| | GNSS_x (m) | −0.600 | −0.600 | −0.600 | … | −0.600 | −0.600 | −0.600 | … | −1.000 | −1.000 | −1.000 |
| | CAMERA_x (m) | 0.3750 | 0.3750 | 0.3750 | … | 0.3750 | 0.3750 | 0.3750 | … | 0.4000 | 0.4000 | 0.4000 |
| Aspect Group | Metric | 1st It. DT-Stg | 1st It. PE-Stg | 1st It. RW-Stg | … | 5th It. DT-Stg | 5th It. PE-Stg | 5th It. RW-Stg | … | 10th It. DT-Stg | 10th It. PE-Stg | 10th It. RW-Stg | Objective |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Reality Alignment (MNCC) | Ego-Vehicle X | - | 0.8386 | 0.9861 | … | 0.9806 | 0.9904 | 0.9997 | … | 0.9970 | 0.9972 | 0.9998 | ↑ (≈1.0) and consistent across stages |
| | Ego-Vehicle Y | - | 0.9026 | 0.9798 | … | 0.9948 | 0.9973 | 0.9997 | … | 0.9991 | 0.9994 | 0.9998 | |
| | Steer | - | 0.7235 | 0.9689 | … | 0.9280 | 0.9425 | 0.9799 | … | 0.9834 | 0.8915 | 0.9604 | |
| | Velocity | - | 0.5836 | 0.9877 | … | 0.9332 | 0.9580 | 0.9855 | … | 0.9568 | 0.9655 | 0.9893 | |
| | Obj-Detection X | - | 0.9403 | 0.9687 | … | 0.8058 | 0.8886 | 0.9897 | … | 0.9847 | 0.9962 | 0.9940 | |
| | Obj-Detection Y | - | 0.6905 | 0.8784 | … | 0.6372 | 0.8622 | 0.9525 | … | 0.9671 | 0.9591 | 0.9808 | |
| Ego-Vehicle Performance | Max. TTC⁻¹ (s⁻¹) | 0.6633 | 93.441 | 8.8570 | … | 0.7347 | 33.118 | 0.5614 | … | 0.3778 | 0.3847 | 0.4147 | ↓ and consistent across stages |
| | TET (s) | 0.6947 | 6.7158 | 3.5821 | … | 1.7869 | 6.5243 | 0.7759 | … | 0.0000 | 0.0000 | 0.0000 | |
| | Max. Jerk (m/s³) | 5.0593 | 2.2039 | 2.4506 | … | 2.6059 | 4.1848 | 4.0026 | … | 2.7441 | 2.4643 | 1.8712 | |
| | Max. Yaw Rate (rad/s) | 1.6192 | 1.6637 | 1.6505 | … | 1.6009 | 1.6644 | 1.5199 | … | 1.5978 | 1.6084 | 1.5623 | |
| | Task Completion Time (s) | 83.305 | 96.323 | 94.294 | … | 100.73 | 96.976 | 99.862 | … | 107.02 | 108.02 | 108.94 | |
| | Lateral RMSE (m) | 0.8022 | 1.1165 | 1.0592 | … | 0.6685 | 0.8511 | 0.9042 | … | 0.5791 | 0.6790 | 0.7084 | |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Arango, J.F.; Gutiérrez-Moreno, R.; Revenga, P.A.; Llamazares, Á.; López-Guillén, E.; Bergasa, L.M. Closing Sim2Real Gaps: A Versatile Development and Validation Platform for Autonomous Driving Stacks. Sensors 2026, 26, 1338. https://doi.org/10.3390/s26041338

