Article

Runtime Monitoring Approach to Safeguard Behavior of Autonomous Vehicles at Traffic Lights

by Adina Aniculaesei 1,2,* and Yousri Elhajji 1,*
1 Institute for Software and Systems Engineering, TU Clausthal, 38678 Clausthal-Zellerfeld, Germany
2 Department of Computer Science and Engineering, University of Gothenburg and Chalmers University of Technology, 41256 Gothenburg, Sweden
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(12), 2366; https://doi.org/10.3390/electronics14122366
Submission received: 1 May 2025 / Revised: 28 May 2025 / Accepted: 2 June 2025 / Published: 9 June 2025
(This article belongs to the Special Issue Development and Advances in Autonomous Driving Technology)

Abstract

Accurate traffic light status detection and the appropriate response to changes in that status are crucial for autonomous driving systems (ADSs) starting from SAE Level 3 automation. The dilemma zone problem occurs during the amber phase of traffic lights, when the ADS must decide whether to stop or proceed through the intersection. This paper proposes a methodology for developing a runtime monitor that addresses the dilemma zone problem and monitors the autonomous vehicle’s behavior at traffic lights, ensuring that the ADS’s decisions align with the system’s safety requirements. This methodology yields a set of safety requirements formulated in controlled natural language, their formal specification in linear temporal logic (LTL), and the implementation of a corresponding runtime monitor. The monitor is integrated within a safety-oriented software architecture through a modular autonomous driving system pipeline, enabling real-time supervision of the ADS’s decision-making at intersections. The results show that the monitor maintained stable and fast reaction times between 40 ms and 65 ms across varying speeds (up to 13 m/s), remaining well below the 100 ms threshold required for safe autonomous operation. At speeds of 30, 50, and 70 km/h, the system ensured correct behavior with no violations of traffic light regulations. Furthermore, the monitor achieved 100% detection accuracy of the relevant traffic lights within 76 m, with high spatial precision (±0.4 m deviation). While the system performed reliably under typical conditions, it showed limitations in disambiguating adjacent, irrelevant signals at distances below 25 m, indicating opportunities for improvement in dense urban environments.

1. Introduction

Advancements in autonomous driving systems (ADSs) are reshaping the future of transportation, promising safer, more efficient, and more accessible mobility. Industry players across the automotive and technology sectors are actively working to integrate self-driving capabilities into vehicles, aiming to transform personal and commercial mobility [1]. Autonomous vehicles (AVs) aim to improve traffic safety by reducing human error and driver fatigue, leveraging technologies such as advanced driving assistance systems (ADASs) and ADSs [2].
ADAS features—such as electronic stability control (ESC), anti-lock braking systems (ABSs), and collision avoidance systems (CASs)—assist drivers in executing dynamic driving tasks (DDTs). These features fall under the broader category of active safety measures, which aim to ensure predictable and safe vehicle behavior, even with limited or no driver involvement [3].
Recent studies have begun to analyze the real-world safety performance of AVs. Abdel-Aty and Ding [2] used a matched case-control logistic regression model to compare accidents involving AVs and human-driven vehicles. Their findings indicated that ADSs were less likely to cause accidents under rainy conditions, thanks to high-performance sensors such as RADAR, which can detect obstacles up to 150 meters away in adverse weather [4], compared to the approximate 10-meter range of human perception [5]. Additionally, the integration of diverse sensors—including cameras, LiDAR, GNSS, and RADAR—enables AVs to detect pedestrians and vehicles under varying weather conditions, such as fog, rain, snow, and darkness [6,7,8,9].
Despite these advantages, AVs still face challenges. For example, Abdel-Aty and Ding [2] found that AVs had a 5.25 times higher accident rate at dawn and dusk compared to human drivers, likely due to changing lighting conditions that reduce obstacle detection accuracy, causing recognition errors, e.g., false positives or false negatives. On the other hand, AVs show a reduced risk of rear-end and sideswipe collisions—where the front of one vehicle strikes the side of another—owing to their advanced sensors, which continuously monitor vehicle surroundings and enable them to respond quickly to safety hazards. Adaptive cruise control (ACC) systems exemplify this capability by using LiDAR to maintain safe distances from other vehicles and alert drivers if those distances fall below a safety threshold [10]. In contrast, human drivers often display larger velocity differences over greater spacing ranges, which increases the likelihood of collisions [11,12]. Such accidents commonly occur when a leading vehicle brakes suddenly and the following driver lacks time to react. The ego-vehicle’s behavior before an incident plays a critical role in the likelihood of an accident [2], with ADS-involved accidents being less frequent when the vehicle is moving straight, veering off the road, or merging into traffic lanes—scenarios where AVs can leverage their rapid and precise hazard detection and mitigation [2].
To standardize the capabilities of driving automation, the SAE J3016 standard [13] defined six levels of automation, from Level 0 (no automation) to Level 5 (full automation in all environments). At SAE Level 3 and above, the ADS assumes full responsibility for the DDT, including environmental monitoring. Ensuring safe performance within predefined operating conditions is essential for maintaining system integrity and fail-operational capability.
Verification and validation (V&V) processes play a central role in ensuring that ADSs at Level 3 and above function safely in real-world conditions. These systems undergo extensive testing for compliance with international safety standards, particularly functional safety (FuSa) and the safety of the intended functionality (SOTIF), as outlined in ISO 26262 [14] and ISO 21448 [15]. ISO 26262 defines verification procedures across development levels—from unit and integration tests to field testing and full system deployment. ISO 21448 complements this by addressing risks that are caused by the functional insufficiencies of an ADS, which may occur when the ADS operates outside its intended Operational Design Domain (ODD). Ensuring SOTIF compliance for SAE Level 3 and above requires monitoring both the system and its environment to verify that operational conditions and safety requirements are met. This involves continuous monitoring of the environment through the vehicle sensors to verify environmental conditions and trigger fallback mechanisms if deviations from the ODD or safety constraints occur.
While ADSs demonstrate many strengths, certain driving scenarios remain challenging. Complex maneuvers such as lane changes, navigating heavy traffic, and executing turns at intersections continue to pose difficulties [16,17]. These scenarios require high situational awareness, defined as the ability to perceive environmental elements, assess their relevance, and anticipate potential changes [18]. For example, unprotected left turns at intersections are particularly problematic due to uncertainty in traffic priority and vehicle trajectories [19,20]. AVs are often overcautious in these settings, resulting in delays that increase the risk of rear-end or sideswipe collisions [21].

1.1. Related Work

The review of related work spans a diverse range of approaches, which can be grouped into the following three main categories: (1) Runtime Monitoring and Formal Verification of AV Behavior; (2) Traffic Light Detection and Response Mechanisms; and (3) Traffic Signal Control and Intersection Optimization.

1.1.1. Runtime Monitoring in Autonomous Driving Systems

The papers in this category focus on the development, formalization, and implementation of runtime monitors, along with the verification frameworks for autonomous vehicle behavior, often relying on temporal logics, contracts, or formal models. These approaches ensure that the system behavior aligns with safety requirements or traffic rules, either by detecting deviations during runtime or by validating compliance.
Watanabe and colleagues [22] incorporated runtime monitoring into both the development and deployment phases by formalizing system requirements as contracts written in Signal Temporal Logic (STL). These contracts guided the automatic synthesis of monitors—implemented in C++ or as S-functions in MATLAB 9.3, Release R2017b—and verified them during runtime using the Breach tool [23]. Their approach was demonstrated on a cooperative pile-up mitigation system and a false-start prevention function.
To formalize traffic laws, Harper [24] transformed rules from the UK Highway Code into SQL assertions, which were validated within a PostgreSQL/PostGIS database populated with simulation traces from CARLA. This work also defined a pipeline for feature extraction from both static and dynamic driving scenes. In a related effort, Grundt [25] introduced a validation framework built on Traffic Sequence Charts (TSCs), a visual specification language with formal semantics. Their monitor design captures both spatial view recognition and temporal evolution recognition, reflecting requirements collected from multiple stakeholders.
Another contribution comes from Balakrishnan [26], who presented a runtime monitoring framework targeting perception components in CARLA and ROS. The authors used Spatial–Temporal Quality Logic (STQL)—an extension of Timed Quality Temporal Logic (TQTL)—to define properties on object detection and tracking based on bounding boxes and class IDs. Focusing on perception system robustness, Antonante [27] developed a fault detection and diagnosis framework based on diagnostic graphs. The system supported deterministic, probabilistic, and learning-based algorithms and was evaluated in the LGSVL simulator [28], integrated with the Apollo autonomous driving stack.
Zhang [29] addressed context-aware monitoring by modeling environmental factors in a probabilistic finite automata (PFA) model, subsequently combining them with LTL-specified system properties to synthesize a runtime monitor. This monitor was embedded into the execution layer for real-time validation. A hierarchical, trigger-based online monitor aligned with Chinese road traffic law was proposed by Yu [30]. Six legal articles—one of which pertained to AV behavior at traffic signal-regulated intersections—were formalized using Metric Temporal Logic (MTL), and compliance thresholds were chosen based on empirical driving behavior.
In a broader overview, Mehdipour [31] surveyed formal methods used in autonomous driving, with particular emphasis on how traffic rules are formalized (e.g., in LTL) and applied to verification, monitoring, and control synthesis. Addressing the vulnerability of ML-based perception systems, Huang [32] used timed automata in UPPAAL to model and detect adversarial inputs that could lead an ADS into unsafe states. This approach was evaluated in a CARLA-based scenario simulating adversarial perturbations. Finally, Arfvidsson [33] formalized intersection safety as an LTL specification and decomposed it using temporal logic trees for Hamilton–Jacobi reachability analysis. This method generated safe control strategies for vehicles navigating potentially conflicting paths in a simulated T-intersection.

1.1.2. Traffic Light Detection and Response Mechanisms

This category includes works centered on perception systems, V2I communication, and decision-making in response to traffic lights. The following papers address the detection, classification, and interpretation of traffic signal data, often incorporating machine learning, sensor fusion, or simulation-based evaluation.
Wu [34] proposed a camera-based perception system that used YOLOv5 to detect and classify traffic lights. The fusion of wide-angle and long-range camera inputs with high-definition (HD) map data enabled accurate 3D localization through projection-based matching. Rahman [35] integrated YOLOv8-based detection with V2I communication, allowing vehicles to receive SPaT (Signal Phase and Timing) messages. The system, validated using the MAVS simulator, enabled real-time speed adaptation at intersections.
To estimate the distance to traffic lights, Kumar [36] fused LiDAR and camera data. After calibrating the sensors using a 3D marker, LiDAR point clouds were projected onto camera images, and pixel-to-point alignment was used to assess accuracy. Gao [37] presented a verification framework that distinguished between explicit traffic regulations and implicit norms learned from common driving practices. Each vehicle’s trajectory was validated by constructing a short-term reachable region and checking for overlaps with others in the environment.

1.1.3. Traffic Signal Control and Intersection Optimization

This category showcases works aimed at improving the efficiency and adaptability of traffic light systems through prediction, optimization, and reinforcement learning. The goal is not monitoring or verifying AV behavior per se, but to enhance infrastructure and traffic coordination to support safer and more efficient AV navigation.
Navarro-Espinoza [38] trained and evaluated various machine learning (ML) and deep learning (DL) models for traffic flow prediction. Among them, a Multilayer Perceptron achieved the best performance in terms of both accuracy and training efficiency. A hybrid approach was proposed by Ugirumurera [39], where traffic signal cycle lengths were estimated using XGBoost, and red phase durations were predicted via a neural network trained on probe vehicle data. The resulting errors—0.56 s for cycle length and 7.2 s for red phase duration—demonstrated practical viability.
Smith [40] leveraged real-time route data from connected and automated vehicles (CAVs) to improve signal priority at intersections. The method enables delay reduction for buses, which could be extended to emergency or municipal services. To reduce congestion, Maadi [41] applied reinforcement learning for adaptive signal control. Their system allowed CAVs to adjust speed based on fixed signal timing, optimizing vehicle flow and minimizing stop delays.
Dong [42] introduced an overtaking-enabled eco-approach and departure control (OEAC) strategy. Lane selection was formulated as a Markov Decision Process and solved with dynamic programming, while optimal speed trajectories were computed using Pontryagin’s Minimum Principle. For safety, yellow signals were conservatively treated as red. The strategy assumed V2I connectivity for real-time coordination.

1.2. Research Question

While the reviewed approaches make significant contributions to runtime monitoring, traffic light perception, and intelligent traffic control, they generally do not address the specific challenge posed by the dilemma zone during autonomous vehicle operation at signalized intersections. This scenario—characterized by the transition from green to amber—requires a timely and context-sensitive decision on whether the vehicle should safely proceed through the intersection or initiate a braking maneuver. Existing works often focus on rule compliance, perception accuracy, or infrastructure-level adaptations, but lack mechanisms that integrate vehicle dynamics, signal timing, and safety requirements correlated with traffic regulations to handle such transitional states in real time. This limitation highlights an important gap in the current research landscape and motivates the following research question (RQ):
RQ: 
How can we effectively address the dilemma zone problem and ensure that autonomous driving systems make the correct decisions at traffic lights?

1.3. Contributions

The main contributions of this work are fourfold. First, we conduct a comprehensive review of traffic regulations at the European level, along with international standards for autonomous vehicle safety, to establish a legally grounded basis for runtime monitoring. Second, we introduce a structured methodology for constructing runtime monitors directly from lawful traffic regulations. This methodology yields a set of safety requirements formulated in controlled natural language, their formal specification in LTL, and the implementation of a corresponding runtime monitor. Third, we develop a modular autonomous driving system (ADS) pipeline specifically designed to integrate the runtime monitor within a safety-oriented software architecture, enabling real-time supervision of the ADS’s decision-making at intersections. Finally, we evaluate the proposed approach with respect to two criteria—effectiveness in safety assurance and robustness—using simulation scenarios created in the AirSim platform, demonstrating that the monitor addresses the dilemma zone problem under near-realistic traffic light conditions.

1.4. Paper Structure

The rest of this paper is structured as follows: Section 2 introduces the method for developing a runtime monitor for the decision-making of an ADS at traffic lights. Section 3 presents the evaluation process, while Section 4 discusses the results. Finally, Section 5 summarizes the contributions of this work and provides an outlook on future research.

2. Methods

Section 2.1 introduces a motivational scenario to explain the dilemma zone problem. Section 2.2 provides an overview of the legal frameworks that describe the mandatory requirements for ADSs in detecting and responding to traffic light signals. Section 2.3 describes the method for constructing runtime monitors that ensure ADS compliance with legal regulations and standards, and Section 2.4 presents the integrated safety architecture developed to support the runtime monitoring of ADS behavior at traffic lights.

2.1. Motivation Scenario

One of the most critical decision-making challenges that AVs face is responding to dynamic traffic signals, such as determining whether to stop or proceed when a green light turns yellow. This scenario introduces the well-known dilemma zone problem [43,44], which plays a key role in assessing the decision-making safety of ADSs.
The dilemma zone problem, depicted in Figure 1, arises when a self-driving car approaches a traffic signal that changes from green to yellow, and the ADS must decide whether to stop or continue. In this situation, neither action appears clearly safer, creating a critical safety dilemma. Zhang et al. [44] identified the following two types of zones:
  • Option Zone: The vehicle is far enough from the traffic light that the ADS can confidently choose to stop safely or proceed through the intersection.
  • Dilemma Zone: The vehicle is too close to the traffic light to stop safely, but too far to clear the intersection before the light turns red, increasing the risk of entering the intersection during a red signal.
To navigate this scenario safely, the ADS must calculate the following two key distances:
  • X_C: the minimum distance from the stop line required for a safe stop;
  • X_0: the maximum distance from the stop line within which the vehicle can safely cross the intersection before the amber phase ends.
The calculation of these distances depends mainly on the speed of the vehicle. For X_C, additional factors include the driver’s reaction time (or the system latency in an AV) and the braking capabilities. For X_0, the vehicle speed and the duration of the amber phase are decisive; the decision itself follows from comparing X_0 with the vehicle’s current distance from the stop line.
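As a minimal illustration, consider a common formulation from the dilemma-zone literature [43,44], in which both distances are derived from the current speed. The sketch below follows that formulation; the default parameter values (reaction time, braking deceleration, amber duration, intersection width, and vehicle length) are illustrative assumptions rather than the values used in our implementation.

    def min_stopping_distance(v0, t_react=0.1, a_brake=4.0):
        """X_C: reaction distance plus braking distance from speed v0 (m/s).
        Defaults are illustrative assumptions, not the paper's parameters."""
        return v0 * t_react + v0 ** 2 / (2.0 * a_brake)

    def max_passing_distance(v0, t_amber=3.0, w_intersection=10.0, l_vehicle=4.5):
        """X_0: farthest distance from the stop line from which a vehicle
        holding speed v0 still clears the intersection during the amber phase."""
        return v0 * t_amber - (w_intersection + l_vehicle)

Under this formulation, a dilemma zone exists exactly for those speeds at which X_0 < X_C, i.e., for which the interval X_0 < d_TrafficLight < X_C is non-empty.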

2.2. Legal Considerations for Autonomous Vehicles at Traffic Lights

As already discussed in Section 1, one of the critical aspects of AV operation on public roads involves decision-making at signal-controlled intersections. This requires strict adherence to national and international traffic laws governing the detection of and response to traffic signals. This section explores the applicable legal frameworks in continental Europe, the United Kingdom (UK), and Ireland, with a focus on traffic light compliance. For the purpose of this work, we focus only on non-flashing traffic lights.

2.2.1. Continental Europe: Harmonization Under the Vienna Convention

In most continental European countries, traffic laws are harmonized through the Vienna Convention on Road Traffic of 1968 [45], an international treaty administered by the United Nations Economic Commission for Europe. The Convention, which entered into force on 6 June 1978, established common rules and a standardized signing system for road traffic—road signs, traffic lights, and road markings—that is legally binding on signatory countries. The phases specified in the Vienna Convention for non-flashing traffic lights and their corresponding meanings are presented in Table 1.
Several EU member states have incorporated these into national law and are currently updating legal frameworks to accommodate SAE L3+ automation. For example, Germany’s Autonomous Driving Act [46], enacted by the German Parliament in 2021, allows the operation of driverless vehicles under defined conditions and requires AVs to handle intersections lawfully. This was initially limited to designated areas approved in advance, e.g., shuttle services on company premises or at trade fairs. The law requires a technical supervisor to continuously monitor vehicle operation. The human operator is responsible for remotely intervening when necessary, either by stopping the vehicle or authorizing a specific driving maneuver.

2.2.2. United Kingdom: Domestic Law Framework

The United Kingdom, which is not a party to the Vienna Convention, regulates road traffic under domestic law. Traffic signals are governed by the Traffic Signs Regulations and General Directions (TSRGD) of 2016 [47], which defines the meaning and use of traffic lights and requires road users to comply accordingly. Table 2 presents the meanings of various traffic light signals as set out in Schedule 14, Part 1, Paragraphs 4 and 5 of the TSRGD.
AVs are expected to comply with these signal rules as any human driver would. The 2018 Act on Automated and Electric Vehicles [48] and the 2024 Act on Automated Vehicles [49] provide the basis for the regulation of AVs in the UK. These acts establish (1) liability frameworks for ADSs, (2) requirements for AV listing and insurer obligations, and (3) legal recognition of “self-driving features” under designated conditions.

2.2.3. Ireland: EU-Aligned Regulation with Vienna Convention Basis

Ireland is a signatory to the Vienna Convention and, as such, aligns its transport and safety policies with broader EU regulations. Irish traffic law is governed by the Road Traffic Acts and the regulations issued under the Road Traffic (Signs) Regulations (RTSR) of 1962, together with their subsequent amendments. The Irish Traffic Signs Manual (TSM) [50] specifies signal behaviors that are similar to the norms established by the Vienna Convention. These specifications are shown in Table 3.
Currently, in Ireland, AVs operate in controlled trials and must demonstrate the ability to detect and respond to traffic signals in a lawful and predictable manner, including handling complex signal phasing and ambiguous transitions, such as flashing amber lights or pedestrian crossings. In Ireland, AV trials must follow strict guidelines issued by the Road Safety Authority (RSA) and Transport Infrastructure Ireland (TII). In 2022, the RSA launched a roadmap for connected and automated mobility, calling for the development of AV-specific legal provisions, especially regarding behavior at intersections and signal compliance.

2.2.4. Regulations on Stop Lines at Traffic Lights

Most traffic regulations in Europe reference a “stop line” or similar road marking that indicates where vehicles must halt at intersections regulated by traffic signals. The specific legal definitions and placement rules vary slightly between jurisdictions.
The Vienna Convention on Road Signs and Signals (1968) [45], Annex 2, includes standardized road markings such as stop lines but does not specify a fixed distance between the stop line and the traffic light. The only provision made for the stop line is for it to be placed in such a way that the signal is still visible to the driver—or to the vehicle’s sensors, in the case of AVs—when the vehicle is stopped. To give some examples at the European level, we look at Germany and France. Under the German road traffic regulations (German: Straßenverkehrs-Ordnung (StVO)) [51] and the guidelines for traffic signal systems (German: Richtlinien für die Lichtsignalanlagen (RiLSA)) [52], the stop line is known as the “Haltlinie” and must be placed 1.0–2.5 m before the traffic signal post in such a way that vehicles stopped at the line have a clear view of the signal head. In France, the stop line is called “ligne d’arrêt” and must be positioned to provide optimal visibility of the signal head, generally 1–2 m before the intersection [53].
In the UK, stop lines are governed by the TSRGD of 2016 [47], which defines—in Part 2 of Schedule 14—the prescribed road markings at signalized intersections, including the transverse stop line. There is no statutory minimum or maximum distance between the stop line and the signal head in TSRGD. Nevertheless, Chapter 6 of the TSM [54] issued by the UK Department for Transport makes the recommendation to place the stop line between 1.5 and 2.5 m before the primary signal head, in order to ensure visibility of the signal while maintaining a safe buffer from the intersection.
Ireland follows both the principles of the Vienna Convention and the UK-style signage standards. The official guidelines provided in the TSM, as issued by the Irish Department of Transport, suggest that the stop line be placed at least 1 meter before the pedestrian crossing (if present), and approximately 1.5–2.0 m before the signal head, ensuring visibility of the traffic signal to the vehicles stopped at the signal.
Table 4 summarizes the requirements for stop lines at traffic lights across continental Europe, the United Kingdom, and Ireland.

2.3. Method for Constructing Runtime Monitors for ADSs Based on Regulations and Standards

The method, illustrated in Figure 2, focuses on the construction of runtime monitors with the goal of ensuring the safety of AV decision-making at traffic lights. It takes as input the standards for ADSs introduced in Section 1 and the traffic regulations discussed in Section 2.2, and it produces the implementation of the runtime monitors. The method consists of the following four steps:
  • Elicitation of safety requirements from domain-specific standards and legal regulations;
  • Reformulation of safety requirements in a structured natural language;
  • Translation of the safety requirements into a formal language amenable to runtime monitoring;
  • Design of the runtime monitors.

2.3.1. Elicitation of Safety Requirements

To ensure AV safety, it is essential to adhere to the relevant legal requirements and standards, from which the appropriate safety requirements can be derived. We use the rules of the Vienna Convention depicted in Table 1, Section 2.2 to derive the safety requirements for ADS decision-making at traffic lights.
Using the rules of the Vienna Convention in Table 1, we build the following application scenario. An AV drives towards an intersection regulated by traffic lights, and the ADS detects a change in the traffic light—specifically, the light is either red, amber, or transitioning from green to amber. Depending on the vehicle’s distance from the intersection at the moment of detection, the ADS must decide whether to initiate a stop or proceed. When the stop line is still far ahead, the ADS has enough time to decelerate and bring the vehicle to a smooth stop before the stop line. If the AV is much closer to the stop line, then it enters the dilemma zone, in which it is too close to comfortably stop but not close enough to clear the intersection before the traffic light turns red. However, if the vehicle is close enough to the stop line that it can proceed safely, then the vehicle will drive through and clear the intersection before the light changes to red.
Using the laws and regulations presented in Table 1 and the application scenario described above, three safety requirements are derived for the behavior of the ADS at a traffic light. These requirements are shown in Table 5.

2.3.2. Reformulation of Safety Requirements in Structured Natural Language

To improve readability and transparency, the safety requirements are reformulated in a structured natural language with the help of requirement templates, also called requirement patterns. A requirement template is essentially a construction plan that lays down the building blocks necessary for the formulation of requirements [56]. The reformulation of the safety requirements in structured natural language was carried out according to the following rules laid out in [56]:
  • Requirements are always described in the active form;
  • Requirements are always written as complete sentences;
  • Requirements express processes or activities with the help of process verbs;
  • Exactly one requirement is formulated for each process verb.
The process verb is accompanied by a modal verb, e.g., shall, which shows the different legal meanings that a requirement may have for various stakeholders. In [56], the following three modal verbs were considered for the formulation of requirements: shall, should, and will. All requirements formulated with “shall” are compulsory for system implementation. Requirements formulated with “should” represent a stakeholder’s wish, are not binding, and do not have to be implemented. However, their implementation increases stakeholder satisfaction, and their documentation improves communication between the development team and the system stakeholders [56]. A requirement formulated with “will” serves as preparation for a functionality, which is to be integrated in the future. The development team is obliged to consider this requirement in the system implementation, even if this functionality is not tested at first [56].
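For illustration only, a template in the style of [56] combines an optional condition, the system under consideration, a modal verb, and a process verb, e.g., “When <condition>, the system shall <process verb> <object>”. Instantiated for our application scenario, such a template could yield: “When the traffic light displays amber and the vehicle is inside the dilemma zone, the ADS shall activate emergency braking.” The authoritative formulations, however, are those given in Table 6.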
The safety requirements for an ADS at traffic lights reformulated in a structured natural language are shown in Table 6.

2.3.3. Formalization of Safety Requirements

Next, the safety requirements developed in the previous step are translated into a selected formal language. For our application scenario, the reformulated safety requirements RSR1, RSR2, and RSR3 in Table 6 are formalized in Linear Temporal Logic (LTL) [57] as the LTL formulae ϕ1, ϕ2, and ϕ3, respectively, as shown in Equations (1)–(3).
ϕ1: G(((red(t) ∨ amber(t)) ∧ (d_TrafficLight = X_C)) → X FullAutonomousDriving:ProgressiveBraking)  (1)
ϕ2: G(((green(t−1) ∧ amber(t)) ∧ (X_0 < d_TrafficLight ∧ d_TrafficLight < X_C)) → X EmergencyBraking)  (2)
ϕ3: G(((green(t−1) ∧ amber(t)) ∧ (d_TrafficLight ≤ X_0)) → X FullAutonomousDriving:Drive)  (3)
The notations used in Equations (1)–(3) are discussed in more detail in Notation 1.
Notation 1.
red, amber, and green are Boolean predicates that evaluate to true at each computation step if the traffic light displays the respective color, and to false otherwise.
The operator G is the LTL operator Globally, while the operator X stands for the LTL operator Next.
X_C represents the minimum safe stopping distance of the AV, while v_0 is the vehicle’s current speed. d_TrafficLight represents the current distance between the AV and the stop line of the traffic light.
FullAutonomousDriving and EmergencyBraking represent operational modes of the AV, each corresponding to distinct functional states of the ADS. FullAutonomousDriving corresponds to the AV’s nominal operating state, during which the ADS assumes full responsibility for the DDT without human intervention. This mode is further divided into two specific driving modes: Drive and ProgressiveBraking.
In the Drive mode, the ADS maintains a steady cruising speed consistent with traffic regulations and environmental conditions, enabling uninterrupted autonomous travel. Conversely, the ProgressiveBraking mode is characterized by the ADS initiating controlled deceleration in anticipation of upcoming events in the AV environment that have been detected with sufficient lead time. Such events may include, for example, the detection of a traffic signal transitioning to red or a gradually slowing vehicle ahead.
In contrast, EmergencyBraking denotes a safety-critical mode in which the ADS autonomously initiates a rapid deceleration maneuver in response to an event in the AV’s environment, e.g., an imminent collision risk or a change of the traffic light signal.

2.3.4. Design of the Runtime Monitor

A qualitative runtime monitor was designed and developed based on the safety requirements formalized in LTL. This software system operates alongside the ADS, continuously observing the system’s behavior in real time to detect any deviations from the specified safety requirements. It evaluates whether these requirements are being satisfied during the AV’s operation, providing a qualitative assessment of compliance with the requirements, i.e., whether a requirement is valid or not.
The monitor was implemented by translating the formalized safety requirements into source code. The translation of the formal safety requirements ϕ1, ϕ2, and ϕ3 into source code is presented in Table 7, Table 8, and Table 9, respectively. Each table is organized into two columns: the left column displays the predicates as they appear in the LTL formulae, and the right column shows their corresponding implementations in the runtime monitor’s source code. In the second column, each code snippet is prefaced with an indication of its location within the software architecture of the runtime monitor, specified in the format package.module.Class.function().
The first safety requirement, translated in Table 7, ensures that the vehicle initiates progressive braking upon detecting a red or amber traffic light at the predefined distance threshold, d_TrafficLight = X_C. This use of progressive braking enhances safety by avoiding abrupt stops and ensuring adherence to traffic regulations. The runtime monitor validates this condition using a safety requirement validator to ensure that the ADS transitions from autonomous driving mode to limited autonomous driving mode when appropriate, enabling a smooth and compliant response to traffic light changes.
The second safety requirement, translated in Table 8, addresses scenarios in which the traffic light changes from green to amber—i.e., green(t−1) ∧ amber(t)—and the vehicle is within a critical distance, as defined by X_0 < d_TrafficLight < X_C. Under this condition, the vehicle must initiate emergency braking to prevent potentially hazardous intersection crossings. The runtime monitor enforces this requirement by ensuring that the ADS transitions into a fail-safe state, overriding autonomous controls and activating the emergency braking mechanism to prioritize the safety of all road users.
The third safety requirement, translated in Table 9, applies to situations where the traffic light changes from green to amber—i.e., green(t−1) ∧ amber(t)—but the vehicle remains at a safe distance from the traffic light, defined as d_TrafficLight ≤ X_0, meaning it is close enough to the intersection to have sufficient time to cross before the light turns red. In this case, the runtime monitor validates the condition and allows the vehicle to remain in full autonomous driving mode. This ensures that the vehicle can proceed efficiently through the intersection, avoiding unnecessary stops and optimizing traffic flow, while adhering to traffic regulations and safety standards.
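As a minimal illustration of this translation scheme, the following Python sketch condenses the three checks into a single evaluation function. The identifiers are illustrative rather than the actual package.module.Class.function() locations used in the implementation, and the equality d_TrafficLight = X_C from ϕ1 is realized as a tolerance band, since an exact floating-point comparison would never fire in practice.

    from enum import Enum

    class RequestedMode(Enum):
        DRIVE = "FullAutonomousDriving:Drive"
        PROGRESSIVE_BRAKING = "FullAutonomousDriving:ProgressiveBraking"
        EMERGENCY_BRAKING = "EmergencyBraking"

    def crossed_threshold(d, x_c, tol=0.5):
        """Tolerance-band realization of the equality d_TrafficLight = X_C
        (our adaptation; the tolerance value is an assumption)."""
        return abs(d - x_c) <= tol

    def evaluate_step(red, amber, green_prev, d, x_c, x_0):
        """Evaluate the antecedents of phi_1..phi_3 at the current step and
        return the driving mode required at the next step (LTL operator X)."""
        amber_onset = green_prev and amber               # green(t-1) and amber(t)
        if (red or amber) and crossed_threshold(d, x_c):
            return RequestedMode.PROGRESSIVE_BRAKING     # phi_1
        if amber_onset and x_0 < d < x_c:
            return RequestedMode.EMERGENCY_BRAKING       # phi_2: dilemma zone
        if amber_onset and d <= x_0:
            return RequestedMode.DRIVE                   # phi_3: proceed
        return None                                      # no requirement fires

In such a design, a returned mode is compared against the mode the ADS actually requests at the next step; a mismatch constitutes a violation of the corresponding safety requirement.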

2.4. Integrated Safety Architecture for the Runtime Monitoring of ADS Behavior at Traffic Lights

To monitor the ADS’s behavior at traffic lights, we developed a safety-focused software architecture that integrates the qualitative runtime monitor previously described. This monitor continuously supervises the ADS’s decision-making at intersections, ensuring that the specified safety requirements (as defined in Section 2.3) are adhered to throughout the autonomous vehicle’s operation. The high-level integrated safety architecture, shown in Figure 3, builds upon an existing safety framework for ADSs originally introduced in [58] for parcel logistics applications. By adapting and extending that framework, we designed a structure specifically tailored to manage and guarantee safe AV behavior at traffic lights. The architectural adaptations we introduced are highlighted in grey in Figure 3.
The proposed architecture is composed of eight layers. The first six form a processing pipeline that delivers the core functionality of the ADS, including the following: (1) the sensor layer, (2) the input adapter layer, (3) the driving system layer, (4) the output selection layer, (5) the output adapter layer, and (6) the actuator layer. To specifically address safety at traffic lights, the following two additional layers were introduced: (7) the perception layer, which enhances environmental awareness, and (8) the dependability cage layer, which enforces the defined safety requirements and provides mitigation mechanisms in cases where these requirements are no longer satisfied.
Sensor layer. This layer includes all sensors on the autonomous vehicle used to collect data about the surrounding environment and traffic light states. Components such as cameras, LiDAR, inertial measurement units (IMUs), and wheel speed sensors provide the raw input for further processing along the pipeline.
Input Adapter Layer. In this layer, raw sensor data undergo initial preprocessing to be converted into a format suitable for interpretation by the vehicle’s systems. This step includes filtering, sensor fusion, and data interpretation.
Driving System Layer. This layer encapsulates the vehicle’s control logic across multiple operational modes, each realized via a distinct control system corresponding to a specific autonomy level, as detailed below:
  • Fully Autonomous Driving System: Provides the nominal functionality of the ADS;
  • Limited Autonomous Driving System: Represents a graceful degradation of the nominal behavior (cf. [59]) and is triggered as a fail-operational response to warnings issued by the runtime monitor (cf. [60]);
  • Manual Driving System: Enables remote manual control of the autonomous vehicle;
  • Emergency Braking System: Activates the emergency brakes in response to critical warnings from the runtime monitor, serving as a fail-safe mechanism (cf. [61]).
As the primary focus of this work is the development of the runtime monitor, we emulated the behavior of a fully autonomous driving system using the manual driving system.
Output Selection Layer. This layer determines the appropriate vehicle actions based on the information perceived from the AV environment, selecting suitable driving maneuvers accordingly.
Output Adapter Layer. This layer translates selected actions into low-level commands for the vehicle’s actuators, including instructions for steering, braking, and acceleration.
Actuator Layer. Responsible for the physical execution of control commands, this layer encompasses the vehicle’s actuator components—such as the steering system, throttle, and brakes—that directly control its movement.
Several modifications have been made to the processing pipeline compared to earlier versions to better align with the research objectives of this work. These include the following: (1) the integration of an additional sensor component dedicated to detecting the status of traffic lights, and (2) targeted adaptations to the manual driving system and emergency braking system to emulate autonomous vehicle behavior in traffic light scenarios via manual control. These adjustments enable the realistic testing and validation of the runtime monitor in controlled conditions, without requiring full deployment in a fully autonomous system.
Perception Layer. This layer provides a high-level semantic interpretation of the environment based on processed sensor data. It includes components responsible for detecting and classifying relevant objects, such as traffic signs, traffic lights, and other road users.
Dependability Cage Layer. This layer enforces system safety by integrating the runtime monitor and a fail-operational reaction mechanism. The runtime monitor ensures compliance with the specified safety requirements, while the mitigation component provides fallback strategies in cases where these requirements are violated. The runtime monitor is composed of the following two key components: Distance to Traffic Light Computation and Safety Requirement Verification.
Distance to Traffic Light Computation. The distance to the traffic light is computed by fusing LiDAR and camera data using the method described in [36]. The sensor fusion process involves the following steps (a minimal code sketch of the projection and depth-estimation steps is given after the list):
  • Data Acquisition: LiDAR and camera sensors, mounted on the vehicle’s roof, continuously collect spatial and visual data from the environment.
  • Calibration: The coordinate systems of the sensors are calibrated to enable accurate alignment of LiDAR and camera data.
  • Point Cloud Projection: LiDAR point clouds are projected onto the camera’s image plane using the calibration parameters, establishing spatial correspondence between the two sensor datasets.
  • Feature Extraction: Traffic lights are identified within the camera image, providing target features for mapping.
  • Data Linking: Projected LiDAR points are matched with detected traffic lights in the camera image to ensure accurate alignment between modalities.
  • Depth Estimation: LiDAR-derived depth information is used to estimate the distance to the identified traffic lights, yielding a fused, range-aware interpretation of the scene.
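As a minimal illustration of the projection and depth-estimation steps, the following sketch assumes an already calibrated 4×4 extrinsic transform T_cam_lidar and 3×3 camera intrinsics K; it is a simplified rendering of the fusion idea rather than the full calibration pipeline of [36].

    import numpy as np

    def project_lidar_to_image(points_xyz, T_cam_lidar, K):
        """Project LiDAR points (N, 3) onto the camera image plane.
        T_cam_lidar: 4x4 extrinsic calibration; K: 3x3 camera intrinsics."""
        pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
        pts_cam = (T_cam_lidar @ pts_h.T)[:3]        # LiDAR -> camera frame
        in_front = pts_cam[2] > 0.0                  # discard points behind camera
        uvw = K @ pts_cam[:, in_front]
        uv = (uvw[:2] / uvw[2]).T                    # pixel coordinates (M, 2)
        return uv, pts_cam[2, in_front]              # metric depth per point

    def distance_to_traffic_light(uv, depth, bbox):
        """Median depth of the projected points falling inside the detected
        traffic-light bounding box (x1, y1, x2, y2)."""
        x1, y1, x2, y2 = bbox
        inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
                 (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
        return float(np.median(depth[inside])) if inside.any() else None

Taking the median over the points inside the bounding box makes the estimate robust to stray LiDAR returns from the background.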
Safety Requirements Verification. The runtime monitor continuously evaluates the vehicle’s speed and its distance from the traffic light to verify compliance with the defined safety requirements. This evaluation is based on comparing the measured distance to two critical thresholds—the minimum braking distance (X_C) and the maximum amber-phase passing distance (X_0). For a visual representation of these thresholds, refer to Figure 1.
Using these parameters, the monitor determines whether one of the following conditions holds at each point in time:
  • The vehicle is sufficiently far from the traffic light, allowing the ADS to initiate a smooth braking maneuver in response to a red signal (cf. Equation (1)).
  • The vehicle is within the dilemma zone during an amber signal—i.e., it is too close to stop safely yet too far to pass before the signal turns red—requiring a transition to a fail-safe mode (cf. Equation (2)).
  • The vehicle is close enough to the intersection during the amber phase to proceed safely before the signal changes to red, and autonomous driving can continue uninterrupted (cf. Equation (3)).
This continuous monitoring process enables the runtime monitor to proactively identify safety-critical situations and trigger the appropriate system response in real time.
Fail-Operational Reaction. The fail-operational reaction mechanism is responsible for providing mitigation measures based on the results of the safety requirement verification process. If a safety requirement becomes invalid, the mechanism triggers a change in the vehicle’s driving mode as a response. The mode control component evaluates the severity of the safety violation and selects the most appropriate vehicle control system from the available options—fully autonomous, limited autonomous, manual, or emergency braking. For example, if the traffic light is red and the runtime monitor detects that the vehicle is still sufficiently far from the intersection, the fail-operational mechanism triggers a degradation of the vehicle’s driving mode from fully autonomous to limited autonomous. In contrast, if the monitor detects that the vehicle is in the dilemma zone during an amber signal, indicating it is too close to stop safely yet too far to pass before the signal turns red, the fail-operational mechanism transitions the vehicle to a fail-safe mode by initiating emergency braking. However, if the vehicle is sufficiently close to the intersection during the amber phase and has enough time to clear the intersection before the signal turns red, the fail-operational reaction mechanism instructs the vehicle to continue through the intersection without intervention.
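Reusing the RequestedMode enumeration from the sketch in Section 2.3.4, the mode-control dispatch described above reduces to a small mapping; the system names below are illustrative, not the components’ actual interfaces.

    def fail_operational_reaction(verdict):
        """Map the monitor's verdict for the current step to the control
        system that the mode control component activates."""
        if verdict is RequestedMode.PROGRESSIVE_BRAKING:
            return "LimitedAutonomousDrivingSystem"    # graceful degradation
        if verdict is RequestedMode.EMERGENCY_BRAKING:
            return "EmergencyBrakingSystem"            # fail-safe stop
        return "FullyAutonomousDrivingSystem"          # nominal operation or phi_3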

3. Evaluation

This section presents the results of the concept evaluation. First, Section 3.1 discusses the choice of criteria for the evaluation. This is followed by Section 3.2, which outlines the necessary hardware and software requirements, test environments, and configuration steps for a comprehensive evaluation. Section 3.3 describes the setup of the experiments and presents the evaluation results.

3.1. Choice of Evaluation Criteria

To assess the performance and practical relevance of the proposed runtime monitor, we initially considered a broad set of evaluation criteria encompassing various technical, legal, and human-centered aspects. These criteria were selected to provide a comprehensive perspective on the monitor’s capabilities in ensuring safe and dependable autonomous driving behavior at traffic lights. However, for the scope of this evaluation, we focus on a subset of these criteria, chosen based on their direct relevance to the core functionality of the monitor and the feasibility of conducting meaningful evaluations within the given experimental framework. The full set of considered criteria is as follows:
  • Effectiveness in Safety Assurance: This criterion assesses the runtime monitor’s capability to detect and respond to safety-critical events at traffic lights. It includes evaluating how accurately the monitor identifies violations of safety requirements—such as failure to stop at a red light or inappropriate behavior in the dilemma zone—and how effectively it ensures compliance during various traffic light transitions.
  • Robustness and Reliability: Robustness pertains to the monitor’s resilience in diverse and potentially adverse traffic scenarios, including sensor noise, environmental variability, and partial system failures. Reliability refers to the monitor’s consistent performance over time in verifying safety requirements, minimizing false positives (false alarms) and false negatives (missed detections), and maintaining dependable behavior across repeated evaluations.
  • Compliance with Ethical and Legal Standards: This criterion examines the monitor’s alignment with established legal regulations and industry standards governing AV safety. It also considers the extent to which the monitor upholds ethical principles, including transparency, accountability, and fairness, thereby supporting public trust and regulatory approval.
  • Integration with ADS architecture: This criterion evaluates the ease and effectiveness with which the runtime monitor integrates into the broader autonomous driving architecture. It includes assessing interoperability with other subsystems, compatibility with communication protocols and data exchange formats, and the ability to coordinate real-time responses to safety violations with other system components.
  • User Interface and Interaction Design: The usability of the runtime monitor is determined by the clarity and responsiveness of its user interface. This includes evaluating how effectively the system presents visual alerts, status indicators, and control mechanisms to convey critical safety information and support timely human intervention when necessary.
For the evaluation of the proposed runtime monitor, we define evaluation criteria along three axes—effectiveness in safety assurance, robustness and reliability, and integration with ADS architectures. In the following, we briefly describe these evaluation criteria together with the parameters considered for each.

3.1.1. Effectiveness in Safety Assurance

This criterion measures the monitor’s ability to accurately detect and respond to traffic light signals under varying vehicle dynamics and environmental conditions. As a quantitative evaluation criterion, it incorporates a variety of parameters that characterize both vehicle behavior—such as sensor lookup horizon, speed, acceleration, and reaction time—and environmental factors like road geometry and weather.
The sensor lookup horizon is evaluated based on the camera’s field of view (FOV) and the LiDAR’s detection range, assessing performance across different distances. Higher vehicle speeds and aggressive acceleration profiles pose challenges for the monitor in enforcing timely and appropriate responses. To evaluate this, the vehicle’s maximum speed, throttle input, and brake input are varied, and the vehicle is assessed on whether it complies with safety behaviors, such as stopping at red lights.
Reaction time is measured as the latency between detecting a traffic light signal and initiating the corresponding vehicle response. To assess the influence of road geometry, different map configurations can be used with both straight and curved roads (featuring varying curvature radii). Additionally, weather parameters such as rain or fog can be included to test how the monitor responds in more challenging scenarios.
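As a minimal illustration of how this latency can be instrumented (the measurement code itself is not reproduced here), the monitor’s reaction can be timed per cycle as follows.

    import time

    def timed_reaction_ms(handle_signal_change):
        """Milliseconds between observing a traffic light state change and
        issuing the corresponding mode request (the quantity in Figure 5)."""
        t_detect = time.monotonic()      # signal change observed
        handle_signal_change()           # monitor evaluation + mode request
        return (time.monotonic() - t_detect) * 1e3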

3.1.2. Robustness

This criterion evaluates the monitor’s ability to perform reliably under uncertain conditions and in the presence of edge cases. Key parameters considered include the distance between the ego-vehicle and traffic lights, as well as the impact of sensor noise and perception errors. The accurate identification of relevant traffic lights is critical, especially in scenarios where multiple signals are present in the field of view. To test this, the placement distance of traffic lights is varied and the monitor’s detection accuracy is assessed, including the frequency of false positives caused by irrelevant or distant signals. Additionally, noise and sensor imperfections—such as LiDAR signal degradation and camera distortion—can be introduced in future work to simulate more realistic real-world conditions and evaluate the monitor’s resilience to degraded input quality.

3.1.3. Integration with ADS Architectures

This criterion is evaluated qualitatively by assessing the extent to which the runtime monitor can be incorporated into the vehicle’s existing software architecture with minimal disruption. It examines the monitor’s interoperability with other system components, its compatibility with existing data flows and interfaces, and its ability to operate seamlessly alongside other modules within the ADS pipeline.

3.2. Evaluation Setup

The hardware setup for the evaluation includes an AV platform provided by AirSim, equipped with LiDAR, a forward-facing camera, an IMU, and wheel speed sensors. A dedicated computer infrastructure is used to support the runtime monitor and the other components of the integrated safety architecture described in Section 2.4. The system features an Intel i7-11700 CPU 4.80 GHz (Intel Corporation, Santa Clara, CA, USA), an NVIDIA RTX 3070 GPU (Nvidia Corporation, Santa Clara, CA, USA), and 16 GB of RAM to ensure sufficient computational capacity for real-time data processing.
The evaluation is conducted on a Windows 11 machine using WSL2 (WSL stands for Windows Subsystem for Linux). WSL2 runs inside a managed virtual machine that includes a full Linux kernel. The runtime monitor is implemented in Python 3.8 and integrated with ROS2 Foxy (ROS stands for Robot Operating System), utilizing the real-time object detection framework YOLOv5 (YOLO stands for You Only Look Once) for traffic light detection. Controlled traffic scenarios are developed in Unreal Engine 4 to simulate diverse environmental and traffic conditions.
The test environment consists of a simulated test track with multiple traffic light intersections, intended for initial validation and performance testing. Sensor data—including camera and LiDAR streams—are recorded alongside ground-truth annotations to support evaluation. The runtime monitor is tightly integrated with the vehicle’s control and sensing infrastructure. Data logging and synchronization mechanisms are implemented to ensure accurate time stamping and alignment between sensor input and monitor output throughout the evaluation process.

3.3. Experiments Setup and Results

In order to quantitatively assess the performance of the runtime monitor, we designed two experiments, one for each quantitative evaluation criterion. For each experiment, we defined its hypothesis/question, the parameters taken into consideration and their corresponding value intervals, as well as the number of uniquely defined scenarios and number of runs per scenario. The description of the experiment setup is followed by a presentation of the evaluation results.

3.3.1. Effectiveness in Safety Assurance

The question for the first experiment (EQ1) regarding the effectiveness of the proposed runtime monitor in safety assurance of the AV at traffic lights is formulated as follows:
EQ1: 
Can the runtime monitor reliably detect and respond to traffic lights at varying vehicle speeds, and is the reaction time within acceptable bounds for safe autonomous driving?
The parameters used in this experiment are the maximum speed with which the vehicle approaches the traffic light, the throttle input, and the brake input. These parameters were varied to simulate different vehicle behaviors. Scenarios were created by setting the vehicle to approach a red traffic light at target speeds of 30 km/h, 50 km/h, and 70 km/h. These variations enabled the evaluation of stopping distance and reaction time under different acceleration profiles.
Along with the aforementioned parameters, the sensor lookup horizon comprises the camera FOV, set to 60°, and the LiDAR range, set to 100 m based on the Velodyne VLP-16. These values represent a heuristic choice. Through various tests, we found that values over 70° for the camera FOV led to delayed traffic light detection, while values under 50° caused the traffic light to disappear from the camera image at distances of less than ~10 m. With respect to the LiDAR range, our tests showed that object classification works up to 76 m, while larger values provide no added benefit.
Three unique scenarios were defined based on the target approach speeds. Each scenario was executed multiple times, resulting in a total of 100 runs across all scenarios to ensure statistical significance. The scenarios were executed only on straight roads. Although weather parameters such as rain and fog were intended to be included, they could not be evaluated due to limitations in the simulator during the test phase.
Figure 4 illustrates a representative evaluation run for the first evaluation criterion. The top plot shows the vehicle’s speed over time in relation to the traffic light status, i.e., green, amber, and red phases. The middle plot presents the distance to the traffic light, along with the minimum braking distance and maximum passing distance thresholds used by the runtime monitor. The bottom plot indicates the requested driving mode, i.e., full autonomous driving, limited autonomous driving, and emergency braking.
The vehicle accelerated smoothly to its target speed, and the monitor correctly detected the red phase. Thus, it was able to initiate a braking maneuver well before reaching the critical minimum braking distance at the traffic light. The requested driving mode transitioned appropriately from full autonomous driving to emergency braking during the red light phase, ensuring a safe stop at the traffic light.
Figure 5 shows the vehicle speed overlaid with the monitor reaction time. The reaction time remained consistently low throughout the run, typically between 40 ms and 65 ms, even during braking and emergency mode transitions. This consistency indicates that the monitor was able to react to environmental changes in near real time without noticeable delays, which is crucial for effective intervention and safe stopping behavior.

3.3.2. Robustness

The question for the second experiment (EQ2) regarding the robustness of the proposed runtime monitor in detecting the traffic lights relevant for the ego-vehicle is formulated as follows:
EQ2: 
Can the runtime monitor accurately identify the relevant traffic light while ignoring irrelevant ones at various distances, ensuring reliable decision-making in complex environments?
The parameter used in this second experiment is the distance of the AV to the traffic lights, which was systematically varied from 1 m to 85 m. Additionally, non-relevant traffic lights (e.g., perpendicular to the vehicle’s path) were placed nearby to test for false positives.
A total of 27 distinct distance-based scenarios were created. Each scenario was run once due to the resource-intensive setup, focusing on precision in controlled conditions.
The plot in Figure 6 shows how the measured distances to the traffic light align with the actual distances across 27 test runs. The bottom plot visualizes the number of false positive detections across different distance intervals. A red dashed line at 76 m indicates the detection threshold.
False positives in this context refer to traffic lights that are correctly detected by the perception system but are not relevant for the current driving lane. As shown in Figure 7, several traffic lights located in adjacent lanes or in the opposite driving direction are also detected. These detections are not considered critical for the decision-making of the ego-vehicle but may still trigger unnecessary monitor interventions if not filtered appropriately.
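One way to suppress such detections is a geometric relevance filter that keeps only lights roughly facing the ego-vehicle and close to its lane. The sketch below is illustrative; the thresholds and input fields are assumptions, not the implemented filter:

def is_relevant(light_heading_deg: float, ego_heading_deg: float,
                lateral_offset_m: float,
                facing_tol_deg: float = 30.0, max_offset_m: float = 3.5) -> bool:
    # Normalize the heading difference to [-180, 180).
    dh = (light_heading_deg - ego_heading_deg + 180.0) % 360.0 - 180.0
    # A relevant light faces against the direction of travel (|dh| near 180°).
    faces_ego = abs(abs(dh) - 180.0) <= facing_tol_deg
    # And is mounted near the ego lane rather than an adjacent approach.
    near_lane = abs(lateral_offset_m) <= max_offset_m
    return faces_ego and near_lane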

4. Discussion of Results

This section discusses the results presented in Section 3 with respect to the two evaluation criteria introduced in that section, namely effectiveness and robustness of the runtime monitor.

4.1. Effectiveness in Safety Assurance

The results shown in Figure 4 and Figure 5 demonstrate that the runtime monitor maintains stable and fast reaction times across different driving speeds. The vehicle’s speed profile, reaching up to ~13 m/s (~47 km/h), and the corresponding reaction times, consistently ranging between 40 ms and 65 ms, confirm that the monitor reacts promptly to traffic light changes.
According to [62], autonomous driving systems should achieve reaction times below 100 ms to ensure safe operation. The monitor’s performance in all tested scenarios remains well within these bounds, providing sufficient margin for safe braking and maneuvering.
Importantly, at all tested approach speeds—30 km/h, 50 km/h, and 70 km/h—the ego-vehicle correctly responded to traffic lights without committing any violations. This confirms that the monitor not only detects traffic lights reliably but also triggers appropriate actions to comply with traffic regulations under varying speed conditions.
Overall, the system exhibits strong effectiveness, ensuring fast and reliable reactions needed for safe autonomous driving in dynamic environments.

4.2. Robustness

This evaluation assessed the system’s ability to correctly identify relevant traffic lights while ignoring irrelevant signals from adjacent lanes across varying distances (1.0 m to 85.0 m). The results show that the system achieved 100% detection accuracy for relevant traffic lights within 76 m, with no false negatives observed. Beyond 76 m, detection failed entirely; this is operationally acceptable, as a 76 m detection range substantially exceeds the minimum braking distance required at the tested speeds.
However, a critical limitation was observed regarding the handling of adjacent, irrelevant traffic lights at closer distances. Specifically, the system struggled to fully ignore non-relevant signals when they were located within 25 m, as shown by the following:
  • At 20–25 m, 1–2 irrelevant signals were incorrectly classified as relevant,
  • At 10–20 m, 2–3 irrelevant signals were misclassified,
  • Below 10 m, 3–4 irrelevant signals were consistently misclassified.
These results indicate that, while the system is robust at medium to long distances, its ability to disambiguate between relevant and irrelevant traffic lights deteriorates in close-range, dense traffic light environments.
On a positive note, the system maintained excellent spatial precision throughout the tests, with measured distances deviating by only ±0.4 m from the actual values. This confirms the reliability of depth estimation even when multiple signals are present.
Compared with previous studies, in which false positives due to signal density are a well-known challenge, these findings suggest that the runtime monitor performs reliably within practical detection ranges but requires improvement in urban scenarios with closely spaced signals. Future work could focus on enhancing spatial filtering methods or introducing context-aware detection strategies to better differentiate between relevant and irrelevant traffic lights at close distances.

5. Conclusions and Future Work

The dilemma zone problem, which arises during the amber phase of traffic lights, demands a careful assessment of whether an ADS can stop the vehicle in time or safely guide it through the intersection before the signal changes. Addressing this challenge requires a runtime monitor capable of evaluating key factors such as vehicle speed, distance to the intersection, and braking capability.
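In kinematic terms, this assessment can be made precise. The following characterization is a standard dilemma-zone formulation in our notation, consistent with the thresholds X_C and X_0 used by the monitor:

\[
  X_C = v\,t_r + \frac{v^2}{2a}, \qquad X_0 = v\,t_a,
\]
\[
  \text{dilemma zone:}\quad X_0 < d < X_C,
\]

where v is the approach speed, a the assumed maximum deceleration, t_r the system reaction time, t_a the amber duration, and d the current distance to the stop line.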
This paper presented a methodology for designing a runtime monitor grounded in legal requirements derived from European traffic regulations. Developed to assess the decision-making of the ADS at traffic lights, the monitor was integrated into a safety-oriented software architecture, establishing a solid basis for future advancements in autonomous vehicle safety.
The proposed runtime monitor was evaluated in simulation scenarios against two key criteria: effectiveness in safety assurance and robustness. The evaluation results demonstrate that the monitor effectively addresses the dilemma zone issue.
Regarding the first criterion, the monitor reliably detects traffic lights and triggers appropriate actions to ensure compliance with traffic regulations under varying speed conditions. Concerning robustness, the monitor performs reliably at medium to long distances; however, its ability to distinguish between relevant and irrelevant traffic lights diminishes in close-range, densely signaled environments. Despite this, the monitor exhibited high spatial accuracy across the test scenarios, with a maximum deviation of no more than ±0.4 m between estimated and ground-truth distances—highlighting the reliability of its depth estimation, even in complex scenes with multiple traffic lights.
Future developments will focus on improving both the external robustness and the internal self-awareness of the runtime monitoring system for autonomous driving. First, a key area is increasing the monitor’s resilience under adverse environmental conditions, such as poor lighting, rain, or fog. While the current system assumes that the transition from green to amber is reliably detected, future work will address scenarios in which this transition is missed or incorrectly perceived by the ADS, and how the monitor should respond to maintain safety.
Second, traffic complexity presents a challenge for real-world deployment. Testing under high traffic density and complex intersection layouts will help validate the monitor’s adaptability and identify edge-case failures. Enhancing situational robustness in these scenarios will be essential for generalizing the monitoring approach beyond controlled conditions.
Third, self-monitoring capabilities should be integrated into the ADS. These features would enable the system to assess internal health metrics such as sensor degradation (e.g., camera or LiDAR failure), actuator anomalies, and communication delays within control units. In line with the ISO 26262 functional safety standard [63], fallback mechanisms like a limp-home mode—which can guide the vehicle to a minimal risk condition (e.g., slowly pulling over to the shoulder or following a reduced-speed trajectory to a safe stop)—should be implemented in case of failure to ensure continued operational safety under degraded conditions.
Fourth, the technical condition of the vehicle also plays a critical role in monitoring robustness. The current algorithm assumes nominal operation without faults; however, sensor drift, actuator degradation, or communication latency could impair the correctness of ADS decisions. Future work will explore integrating runtime diagnostics and fault-monitoring interfaces into the monitor’s validation process. This adaptation would enable real-time adjustments to validation criteria based on operational status, improving fault tolerance and aligning with ISO 26262 [63] recommendations for safety mechanisms. Moreover, conducting fault injection experiments and degraded-mode tests would support certification pathways and a more rigorous robustness evaluation.
Fifth, while our system was inspired by the architecture of the AVL rooftop box [64], all experiments were conducted in a simulation-only setup without real hardware testing. Future efforts should explore hardware-in-the-loop (HIL) integration or deployment in real test vehicles to assess system robustness and validate assumptions made in the simulation.
Lastly, although this work prioritized safety and decision accuracy at traffic lights, we recognize the growing importance of energy efficiency in autonomous vehicle planning. The current monitoring logic does not explicitly optimize for energy consumption. However, integrating energy-aware criteria, such as eco-friendly stopping behavior or optimized acceleration, may enhance sustainability without compromising safety. Previous studies such as [65,66] have demonstrated how decision-making strategies at intersections can be tuned to reduce fuel or battery consumption. Future work may explore how such optimizations can be integrated into runtime monitors for a more efficient overall performance profile.
By addressing these directions, namely environmental adaptability, internal diagnostics, hardware validation, vehicle health awareness, and energy efficiency, the proposed monitoring system can evolve into a more comprehensive and fault-tolerant safety assurance component within autonomous vehicles.

Author Contributions

Conceptualization, A.A.; methodology, A.A.; software, Y.E.; validation, A.A. and Y.E.; formal analysis, Y.E.; investigation, Y.E.; data curation, Y.E.; writing—original draft preparation, Y.E.; writing—review and editing, A.A.; visualization, A.A.; supervision, A.A.; project administration, A.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the support by Open Access Publishing Fund of Clausthal University of Technology for the open access publication of this work.

Data Availability Statement

The datasets presented in this article are not readily available because the data are part of an ongoing study. Requests to access the datasets should be directed to yousri.elhajji@tu-clausthal.de.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ACC: Adaptive Cruise Control
ADAS: Advanced Driving Assistance System
ADS: Autonomous Driving System
ABS: Anti-lock Braking System
AV: Autonomous Vehicle
CAS: Collision Avoidance System
DDT: Dynamic Driving Task
ESC: Electronic Stability Control
FuSa: Functional Safety
GNSS: Global Navigation Satellite System
HIL: Hardware-in-the-Loop
LTL: Linear Temporal Logic
ODD: Operational Design Domain
ROS: Robot Operating System
RSA: Road Safety Authority
RTSR: Road Traffic (Signs) Regulations in Ireland
SOTIF: Safety Of The Intended Functionality
StVO: Strassenverkehrs-Ordnung (German road traffic regulations)
TSM: Traffic Signs Manual in UK and Ireland
TSRGD: Traffic Signs Regulations and General Directions in the UK
WSL: Windows Subsystem for Linux
YOLO: You Only Look Once

References

  1. Vijayenthiran, V. Waymo’s Self-Driving Taxis Will Cover 100 Square Miles of Phoenix. 2018. Available online: https://www.motorauthority.com/news/1116003_cadillac-xt4-will-add-super-cruise-eventually (accessed on 28 May 2025).
  2. Abdel-Aty, M.; Ding, S. A matched case-control analysis of autonomous vs. human driven vehicle accidents. Nat. Commun. 2024, 15, 4931. [Google Scholar] [CrossRef] [PubMed]
  3. Ahangarnejad, A.H.; Radmehr, A.; Ahmadian, M. A review of vehicle active safety control methods—From anti-lock brakes to semi-autonomy. J. Vib. Control 2021, 27, 1683–1712. [Google Scholar] [CrossRef]
  4. Sun, Z.; Bebis, G.; Miller, R. On-road vehicle detection: A review. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 694–711. [Google Scholar] [PubMed]
  5. Zang, S.; Ding, M.; Smith, D.; Tyler, P.; Rakotoarivelo, T.; Kaafar, M.A. The impact of adverse weather conditions on autonomous vehicles: How rain, snow, fog, and hail affect the performance of a self-driving car. IEEE Vehicular Technol. Mag. 2019, 14, 103–111. [Google Scholar] [CrossRef]
  6. Vargas, J.; Alsweiss, S.; Toker, O.; Razdan, R.; Santos, J. An Overview of Autonomous Vehicles Sensors and Their Vulnerability to Weather Conditions. Sensors 2021, 21, 5397. [Google Scholar] [CrossRef]
  7. Brummelen, J.V.; O’Brien, M.; Gruyer, D.; Najjaran, H. Autonomous vehicle perception: The technology of today and tomorrow. Transp. Res. Part C 2018, 89, 384–406. [Google Scholar] [CrossRef]
  8. Radecki, P.; Campbell, M.; Matzen, K. All weather perception: Joint data association, tracking, and classification for autonomous ground vehicles. arXiv 2016, arXiv:1605.02196. [Google Scholar]
  9. Filgueira, A.; González-Jorge, H.; Lagüela, S.; Díaz-Vilariño, L.; Arias, P. Quantifying the influence of rain in LiDAR performance. Measurement 2017, 95, 143–148. [Google Scholar] [CrossRef]
  10. Li, Y.; Wang, H.; Wang, W.; Xing, L.; Liu, S.; Wei, X. Evaluation of the Impacts of Cooperative Adaptive Cruise Control on Reducing Rear-End Collision Risks on Freeways. Accid. Anal. Prev. 2017, 98, 87–95. [Google Scholar] [CrossRef]
  11. Adewale, A.; Lee, C. Prediction of car-following behavior of autonomous vehicle and human-driven vehicle based on drivers’ memory and cooperation with lead vehicle. Transp. Res. Rec. 2023, 2678, 248–266. [Google Scholar] [CrossRef]
  12. Li, Y.; Wu, D.; Lee, J.; Yang, M.; Shi, Y. Analysis of the transition condition of rear-end collisions using time-to-collision index and vehicle trajectory data. Accid. Anal. Prev. 2020, 144, 105676. [Google Scholar] [CrossRef] [PubMed]
  13. SAE J3016:2021; Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International: Warrendale, PA, USA, 2021; Norm, Issued January 2014, Revised April 2021. [CrossRef]
  14. ISO 26262:2018; Road Vehicles—Functional Safety. ISO: Geneva, Switzerland, 2018; Norm, Issued November 2011, Revised December 2018.
  15. ISO 21448:2022; Road Vehicles—Safety of the Intended Functionality. ISO: Geneva, Switzerland, 2022; Norm, Issued January 2019, Revised June 2022.
  16. Levin, M.W.; Boyles, S.D. Intersection Auctions and Reservation-Based Control in Dynamic Traffic Assignment. Transp. Res. Rec. 2019, 2497, 35–44. [Google Scholar] [CrossRef]
  17. Haris, M.; Hou, J. Obstacle Detection and Safely Navigate the Autonomous Vehicle from Unexpected Obstacles on the Driving Lane. Sensors 2020, 20, 4719. [Google Scholar] [CrossRef] [PubMed]
  18. Endsley, M.R. Toward a theory of situation awareness in dynamic systems. Hum. Factors 1995, 37, 32–64. [Google Scholar] [CrossRef]
  19. Zhou, D.; Ma, Z.; Zhang, X.; Sun, J. Autonomous vehicles’ intended cooperative motion planning for unprotected turning at intersections. IET Intell. Transp. Syst. 2020, 16, 1058–1073. [Google Scholar] [CrossRef]
  20. Alhajyaseen, W.K.M.; Asano, M.; Nakamura, H.; Tan, D.M. Stochastic approach for modeling the effects of intersection geometry on turning vehicle paths. Transp. Res. Part C Emerg. Technol. 2013, 32, 179–192. [Google Scholar] [CrossRef]
  21. Noh, S. Decision-Making Framework for Autonomous Driving at Road Intersections: Safeguarding Against Collision, Overly Conservative Behavior, and Violation Vehicles. IEEE Trans. Ind. Electron. 2019, 66, 3275–3286. [Google Scholar] [CrossRef]
  22. Watanabe, K.; Kang, E.; Lin, C.W.; Shiraishi, S. Runtime monitoring for safety of intelligent vehicles. In Proceedings of the 55th Annual Design Automation Conference (DAC ’18), New York, NY, USA, 24–29 June 2018. [Google Scholar] [CrossRef]
  23. Donzé, A. Breach, A Toolbox for Verification and Parameter Synthesis of Hybrid Systems. In Proceedings of the Computer Aided Verification (CAV ’10); Touili, T., Cook, B., Jackson, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 167–170. [Google Scholar]
  24. Harper, C.; Chance, G.; Ghobrial, A.; Alam, S.; Pipe, T.; Eder, K. Safety Validation of Autonomous Vehicles Using Assertion-Based Oracles. Preprint. Available online: https://arxiv.org/abs/2111.04611 (accessed on 22 April 2025).
  25. Grundt, D.; Köhne, A.; Saxena, I.; Stemmer, R.; Westphal, B.; Möhlmann, E. Towards Runtime Monitoring of Complex System Requirements for Autonomous Driving Functions. Electron. Proc. Theor. Comput. Sci. 2022, 371, 53–61. [Google Scholar] [CrossRef]
  26. Balakrishnan, A.; Deshmukh, J.; Hoxha, B.; Yamaguchi, T.; Fainekos, G. PerceMon: Online Monitoring for Perception Systems. In Proceedings of the 21st International Conference on Runtime Verification (RV 2021); Feng, L., Fisman, D., Eds.; Springer: Cham, Switzerland, 2021; pp. 297–308. [Google Scholar] [CrossRef]
  27. Antonante, P.; Nilsen, H.G.; Carlone, L. Monitoring of perception systems: Deterministic, probabilistic, and learning-based fault detection and identification. Artif. Intell. Spec. Issue Risk-Aware Auton. 2023, 325, 103998. [Google Scholar] [CrossRef]
  28. Rong, G.; Shin, B.H.; Tabatabaee, H.; Lu, Q.; Lemke, S.; Možeiko, M.; Boise, E.; Uhm, G.; Gerow, M.; Mehta, S.; et al. LGSVL Simulator: A High Fidelity Simulator for Autonomous Driving. arXiv 2020, arXiv:2005.03778. [Google Scholar]
  29. Zhang, Y.; Xu, S.; Chen, H.; Aslam Bhatt, U.; Huang, M. Context-aware environment online monitoring for safety autonomous vehicle systems: An automata-theoretic approach. J. Cloud Comput. 2024, 13, 6. [Google Scholar] [CrossRef]
  30. Yu, W.; Zhao, C.; Wang, H.; Liu, J.; Ma, X.; Yang, Y.; Li, J.; Wang, W.; Hu, X.; Zhao, D. Online legal driving behavior monitoring for self-driving vehicles. Nat. Commun. 2024, 15, 408. [Google Scholar] [CrossRef] [PubMed]
  31. Mehdipour, N.; Althoff, M.; Tebbens, R.D.; Belta, C. Formal methods to comply with rules of the road in autonomous driving: State of the art and grand challenges. Automatica 2023, 152, 110692. [Google Scholar] [CrossRef]
  32. Huang, Z.; Li, B.; Du, D.; Li, Q. A Model Checking Based Approach to Detect Safety-Critical Adversarial Examples on Autonomous Driving Systems. In Proceedings of the Theoretical Aspects of Computing—ICTAC 2022; Seidl, H., Liu, Z., Pasareanu, C.S., Eds.; Springer: Cham, Switzerland, 2022; pp. 238–254. [Google Scholar] [CrossRef]
  33. Arfvidsson, K.M.; Jiang, F.J.; Johansson, K.H.; Mårtensson, J. Ensuring Safety at Intelligent Intersections: Temporal Logic Meets Reachability Analysis. In Proceedings of the 2024 IEEE Intelligent Vehicles Symposium (IV), Jeju Island, Republic of Korea, 2–5 June 2024; pp. 292–298. [Google Scholar] [CrossRef]
  34. Wu, S.; Amenta, N.; Zhou, J.; Papais, S.; Kelly, J. aUToLights: A Robust Multi-Camera Traffic Light Detection and Tracking System. In Proceedings of the 2023 20th Conference on Robots and Vision (CRV), Montreal, QC, Canada, 6–8 June 2023; pp. 89–96. [Google Scholar] [CrossRef]
  35. Rahman, M.; Islam, F.; Ball, J.E.; Goodin, C. Traffic light recognition and V2I communications of an autonomous vehicle with the traffic light for effective intersection navigation using YOLOv8 and MAVS simulation. In Proceedings of the Autonomous Systems: Sensors, Processing, and Security for Ground, Air, Sea, and Space Vehicles and Infrastructure; Dudzik, M.C., Jameson, S.M., Axenson, T.J., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2024; Volume 13052. [Google Scholar] [CrossRef]
  36. Kumar, G.A.; Lee, J.H.; Hwang, J.; Park, J.; Youn, S.H.; Kwon, S. LiDAR and Camera Fusion Approach for Object Distance Estimation in Self-Driving Vehicles. Symmetry 2020, 12, 324. [Google Scholar] [CrossRef]
  37. Gao, F.; Luo, C.; Shi, F.; Chen, X.; Gao, Z.; Zhao, R. Online Safety Verification of Autonomous Driving Decision-Making Based on Dynamic Reachability Analysis. IEEE Access 2023, 11, 93293–93309. [Google Scholar] [CrossRef]
  38. Navarro-Espinoza, A.; Lopez-Bonilla, O.; García-Guerrero, E.; Tlelo-Cuautle, E.; Lopez-Mancilla, D.; Hernández-Mejía, C.; Gonzalez, E.I. Traffic Flow Prediction for Smart Traffic Lights Using Machine Learning Algorithms. Technologies 2022, 10, 5. [Google Scholar] [CrossRef]
  39. Ugirumurera, J.; Severino, J.; Bensen, E.; Wang, Q.; Macfarlane, J. A Machine Learning Method for Predicting Traffic Signal Timing from Probe Vehicle Data. arXiv 2023. [Google Scholar] [CrossRef]
  40. Smith, S. Integration of Autonomous Vehicles with Adaptive Signal Control to Enhance Mobility; Technical Report; Carnegie Mellon University: Pittsburgh, PA, USA, 2019. [Google Scholar]
  41. Maadi, S.; Stein, S.; Hong, J.; Murray-Smith, R. Real-Time Adaptive Traffic Signal Control in a Connected and Automated Vehicle Environment: Optimisation of Signal Planning with Reinforcement Learning under Vehicle Speed Guidance. Sensors 2022, 22, 7501. [Google Scholar] [CrossRef]
  42. Dong, H.; Zhuang, W.; Wu, G.; Li, Z.; Yin, G.; Song, Z. Overtaking-Enabled Eco-Approach Control at Signalized Intersections for Connected and Automated Vehicles. Trans. Intell. Transport. Syst. 2024, 25, 4527–4539. [Google Scholar] [CrossRef]
  43. Chang, M.; Messer, C.J.; Santiago, A.J. Timing Traffic Signal Change Intervals based on Driver Behavior. Transp. Res. Rec. 1985. [Google Scholar]
  44. Zhang, Y.; Fu, C.; Hu, L. Yellow light dilemma zone researches: A review. J. Traffic Transp. Eng. (Engl. Ed.) 2014, 1, 338–352. [Google Scholar] [CrossRef]
  45. Convention on Road Signs and Signals; International Treaty; UNECE: Vienna, Austria, 1968.
  46. Bundestag. Gesetz zur Änderung des Straßenverkehrsgesetzes und des Pflichtversicherungsgesetzes—Gesetz zum autonomen Fahren. Bundesgesetzblatt Jahrgang 2021 Teil I Nr. 48, ausgegeben zu Bonn am 27. Juli 2021. 2021. National Law. Available online: https://www.bgbl.de/xaver/bgbl/start.xav?startbk=Bundesanzeiger_BGBl&start=//*%5b@attr_id=%27bgbl121s3108.pdf%27%5d#/switch/tocPane?_ts=1749380516316 (accessed on 22 April 2025).
  47. The Traffic Signs Regulations and General Directions 2016; National Law; UK Statutory Instruments: London, UK, 2016.
  48. Automated and Electric Vehicles Act 2018; National Law; UK Public General Acts: London, UK, 2018.
  49. Automated Vehicles Act 2024; National Law; UK Public General Acts: London, UK, 2024.
  50. Traffic Signs Manual—Chapter 9: Traffic Signals; National Law; Irish Department of Transport: Dublin, Ireland, 2024.
  51. Bundestag. Straßenverkehrs-Ordnung. Verordnung vom 06.03.2013 (BGBl. I S. 367), in Kraft Getreten am 01.04.2013 Zuletzt Geändert Durch Gesetz vom 11.12.2024 (BGBl. I S. 411) m.W.v. 01.01.2025. 2024. National Law. Available online: https://www.stvo.de/strassenverkehrsordnung/ (accessed on 22 April 2025).
  52. Richtlinien für Lichtsignalanlagen (RiLSA)—Lichtzeichenanlagen für den Straßenverkehr (Engl.: Guidelines for Traffic Signal Systems—Light Signals for Road Traffic). FGSV Verlag. Guideline. Available online: https://www.fgsv-verlag.de/pub/media/pdf/321.i.pdf (accessed on 22 April 2025).
  53. Versions consolidées de 2023 des 9 parties de l’Instruction Interministérielle sur la Signalisation Routière (IISR) (Engl.: 2023 Consolidated Versions of the 9 Parts of the Interministerial Instruction on Road Signage). Journal Officiel de la République Française (JORF). 2023. Instructions. Available online: https://equipementsdelaroute.cerema.fr/versions-consolidees-de-2023-des-9-parties-de-l-a528.html (accessed on 22 April 2025).
  54. Traffic Signs Manual—Chapter 6: Traffic Control; National Law; UK Department for Transport: London, UK, 2019.
  55. Road Traffic (Signs) Regulations; Statute; Irish Statutory Instruments: Dublin, Ireland, 2024.
  56. SOPHISTen. MASTeR—Schablonen für alle Fälle (Engl.: MASTeR—Requirements Patterns for All Use Cases). Brochure. 2024. Available online: https://www.sophistgroup.de/fileadmin/user_upload/Bilder_zu_Seiten/Publikationen/Wissen_for_free/MASTeR-Broschuere_Int/MASTeR_Broschuere_6-Auflage_31-07-2024_AvP_V4.pdf (accessed on 22 March 2025).
  57. Baier, C.; Katoen, J.P. Principles of Model Checking; MIT Press: Cambridge, MA, USA, 2008. [Google Scholar]
  58. Hegerhorst, T.; Vorwald, A.; Flormann, M.; Zhang, M.; Henze, R.; Rausch, A. VanAssist—Integriertes Sicherheitskonzept für Automatisierte Kleintransporter in der Paketlogistik (Engl.: VanAssist: Integrated safety concept for automated transport vans in parcel logistics). In Proceedings of the ACIMobility Summit, Braunschweig, Germany, 21–22 September 2021; pp. 1–12. [Google Scholar]
  59. Aniculaesei, A.; Grieser, J.; Rausch, A.; Rehfeldt, K.; Warnecke, T. Graceful Degradation of Decision and Control Responsibility for Autonomous Systems based on Dependability Cages. In Proceedings of the 5th International Symposium on Future Active Safety Technology Toward Zero Accidents, Blacksburg, VA, USA, 9–11 September 2019. [Google Scholar]
  60. Aniculaesei, A.; Aslam, I.; Bamal, D.; Helsch, F.; Vorwald, A.; Zhang, M.; Rausch, A. Connected Dependability Cage Approach for Safe Automated Driving. In Proceedings of the 23rd International Stuttgart Symposium; Kulzer, A.C., Reuss, H.C., Wagner, A., Eds.; Springer: Wiesbaden, Germany, 2023; pp. 3–21. [Google Scholar]
  61. Aslam, I.; Aniculaesei, A.; Buragohain, A.; Zhang, M.; Bamal, D.; Rausch, A. Runtime Safety Assurance of Autonomous Last-Mile Delivery Vehicles in Urban-like Environment. In Proceedings of the 2024 Stuttgart International Symposium. SAE International, Stuttgart, Germany, 2–3 July 2024. [Google Scholar] [CrossRef]
  62. Driveblocks. Fast Reaction Times: Why End-to-End Latency Matters in Autonomous Driving. 2022. Available online: https://www.driveblocks.ai/news/2022-11-28/fast-reaction-times-why-end-to-end-latency-matters-in-autonomous-driving (accessed on 28 May 2025).
  63. ISO 26262; Road Vehicles—Functional Safety. International Organization for Standardization: Geneva, Switzerland, 2018. Available online: https://www.iso.org/standard/68383.html (accessed on 22 March 2025).
  64. AVL List GmbH. AVL Dynamic Ground Truth System. 2024. Available online: https://www.avl.com/en/testing-solutions/automated-and-connected-mobility-testing/avl-dynamic-ground-truth-system (accessed on 22 May 2025).
  65. Wu, J.; Song, Z.; Lv, C. Deep Reinforcement Learning-Based Energy-Efficient Decision-Making for Autonomous Electric Vehicle in Dynamic Traffic Environments. IEEE Trans. Intell. Transp. Syst. 2023, 1, 875–887. [Google Scholar] [CrossRef]
  66. Meng, X.; Cassandras, C. Eco-Driving of Autonomous Vehicles for Nonstop Crossing of Signalized Intersections. IEEE Trans. Autom. Sci. Eng. 2022, 19, 320–331. [Google Scholar] [CrossRef]
Figure 1. Visual intuition of option zone and dilemma zone (cf. [44]).
Figure 2. Method for constructing runtime monitors for autonomous driving systems based on domain-specific regulations and standards.
Figure 3. High-level integrated safety architecture for runtime monitoring of autonomous vehicle behavior at traffic lights.
Figure 4. Vehicle behavior during one evaluation run, showing speed profile, distance to traffic light, and driving mode transitions.
Figure 5. Vehicle speed and monitor reaction time during evaluation run.
Figure 6. Traffic light detection analysis: (Top) Actual vs. Measured Distance per Run; (Bottom) False Positives Count Across Distance Bins.
Figure 7. Example of multiple detected traffic lights in the scene.
Table 1. Specification of phases for non-flashing traffic lights in Vienna Convention (cf. [45]).
Phase | Position | Meaning
Red | At intersection | Vehicle may not proceed beyond the stop line or enter the intersection.
Red and Amber | At intersection | The signal is about to change, but the red light rules apply.
Amber | At intersection, level crossing, swing bridge, airport, fire station, or ferry terminal | Vehicle may not pass the stop line or enter the intersection, unless it cannot safely stop when the light shows.
Green | At intersection, entrance to tunnel or bridge | Vehicle may proceed, unless it is unable to clear the intersection before the next phase change.
Table 2. Specification of signals for traffic lights in UK TSRGD (cf. [47]).
Phase | Meaning
Red | Vehicle must not proceed beyond the stop line.
Red and Amber | Impending change to green, but the same prohibition as the red signal applies.
Amber | Stop, unless too close to the stop line to do so safely.
Green | Vehicle may proceed beyond the stop line and proceed straight on, to the left, or to the right.
Table 3. Specifications of signals for traffic lights in Irish TSM (cf. [50]).
Phase | Meaning
Red | Vehicles must not proceed past the primary traffic signal or the associated stop line.
Amber | Vehicles must not pass the signal or stop line, unless they cannot safely stop in time.
Green | Vehicles may proceed with caution.
Table 4. Summary of requirements for stop lines at traffic lights in continental Europe, United Kingdom, and Ireland.
Jurisdiction | Stop Line Required | Typical Distance from Signal Head | Governing Regulation
Continental Europe | Yes | 1.0–2.5 m | Vienna Convention [45] and laws of national states, e.g., Germany [51,52] and France [53].
United Kingdom | Yes | 1.5–2.5 m | TSRGD 2016 [47] and TSM 2019 [54].
Ireland | Yes | 1.0–2.0 m | Irish TSM [50] and RTSR [55].
Table 5. Safety Requirements for ADS at traffic lights derived from laws and regulations.
Requirement ID | Requirement Text
RSR1 | The ADS recognizes a red or amber traffic light and calculates the distance to the stop line. As soon as it reaches the minimum safe braking distance, the ADS initiates a gentle braking maneuver to stop the vehicle safely without crossing the stop line.
RSR2 | The ADS recognizes when the traffic light changes from green to amber and finds itself in the dilemma zone. In this critical situation, the ADS initiates emergency braking to stop the vehicle safely and avoid crossing the stop line.
RSR3 | The ADS recognizes that the traffic light changes from green to amber and is within the maximum yellow crossing distance. In this situation, the ADS drives further and the vehicle safely crosses the intersection.
Table 6. Safety Requirements for ADSs at traffic lights reformulated in structured natural language.
Requirement ID | Requirement Text
RSR1 | The ADS shall initiate a progressive braking maneuver, if the traffic light is red or amber and the distance to the stop line is equal to the minimum safe braking distance.
RSR2 | The ADS shall initiate an emergency braking maneuver, if the traffic light changes from green to amber and the distance to the stop line is less than the minimum safe braking distance and greater than the maximum yellow passing distance.
RSR3 | The ADS shall drive further, if the traffic light changes from green to amber and the distance to the stop line is less than or equal to the maximum yellow passing distance.
Table 7. LTL formula ϕ1 translated into runtime monitor source code.
Predicate in LTL Safety Requirement | Implementation in Runtime Monitor
red(t) ∨ amber(t) | [code snippet in original]
d_TrafficLight = X_C | [code snippet in original]
FullAutonomousDriving: ProgressiveBraking | [code snippet in original]
Table 8. LTL formula ϕ2 translated into runtime monitor source code.
Predicate in LTL Safety Requirement | Implementation in Runtime Monitor
green(t−1) ∧ amber(t) | [code snippet in original]
X_0 < d_TrafficLight ∧ d_TrafficLight < X_C | [code snippet in original]
EmergencyBraking | [code snippet in original]
Table 9. LTL formula ϕ3 translated into runtime monitor source code.
Predicate in LTL Safety Requirement | Implementation in Runtime Monitor
green(t−1) ∧ amber(t) | [code snippet in original]
d_TrafficLight ≤ X_0 | [code snippet in original]
FullAutonomousDriving: Drive | [code snippet in original]
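Read together, Tables 7–9 suggest how the three LTL-derived rules can be combined into a single monitor step. The following Python sketch is our illustration, not the original source code; the function name is invented, and the equality d_TrafficLight = X_C of ϕ1 is relaxed to ≤ to remain robust under discrete sampling:

def monitor_verdict(light_now: str, light_prev: str,
                    d_traffic_light: float, x_c: float, x_0: float) -> str:
    # Phase-transition predicate shared by phi_2 and phi_3.
    green_to_amber = (light_prev == "green" and light_now == "amber")
    # phi_3 / RSR3: within the maximum amber passing distance -> drive on.
    if green_to_amber and d_traffic_light <= x_0:
        return "FullAutonomousDriving: Drive"
    # phi_2 / RSR2: inside the dilemma zone -> emergency braking.
    if green_to_amber and x_0 < d_traffic_light < x_c:
        return "EmergencyBraking"
    # phi_1 / RSR1: red or amber at the minimum safe braking distance.
    if light_now in ("red", "amber") and d_traffic_light <= x_c:
        return "FullAutonomousDriving: ProgressiveBraking"
    return "FullAutonomousDriving"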