1. Introduction
Transport networks are crucial enablers of social cohesion, economic development, and spatial integration, facilitating the circulation of people, goods, and services within and between communities. Despite the substantial institutional and financial support that transport networks typically receive, they remain inevitably exposed and vulnerable to a broad range of stressors, including natural hazards, human-induced threats and errors, and capacity reductions due to planned or unplanned infrastructure construction or maintenance activities [
1,
2]. These stressors highlight the volatility of the environments in which transport networks operate, indicating that a stable operating context cannot be assumed over time. Instead, empirical evidence shows that proactive planning and operational strategies by transport authorities are essential to mitigate disruption impacts, for example, by developing appropriate contingency and response frameworks and measures [
3,
4]. As a result, fundamental concepts from resilience and lifeline engineering have been progressively embedded into the analysis, design, and management of transport networks, as reviewed by Gu et al. [
5].
While much of the existing research has focused on physical infrastructure disruptions, such as bridge collapses or road closures, comparatively less attention has been given to disturbances affecting digital infrastructure components, despite their growing importance. Systems such as telematics platforms, automated driving technologies, data exchange services, and traffic control mechanisms are critical to the real-time operation and coordination of modern, technology-dependent transport networks.
Disruptions to these systems may result from various causes, including extreme weather, power failures, human errors, technical malfunctions, and deliberate attacks, with significant impacts on traffic flow and traveler safety. For example, in 2024, Hurricane Beryl destroyed hundreds of traffic signals in Houston, causing widespread traffic disruption [
6]. According to the Texas Department of Transportation, daily collisions at signalized intersections increased by 41% during the disruption period [
6]. Similarly, during Hurricane Helene in 2024 in South Carolina, 207 traffic signals lost power, and the Greenville Police Department recorded 39 road collisions at that time [
7]. In 2018, a bug during a software update of the traffic control system in New York City disrupted the operation of approximately 600 traffic signals [
8]. Moreover, a simulation study of Austin’s road network showed that disabling 26 traffic signals could increase total traffic delay by a factor of 4.3, corresponding to USD 0.98 million in delay costs [
9]. According to the authors, a similar effect can be achieved through the targeted disruption of only seven traffic signals. Although such deliberate actions may have been considered relatively uncommon in the past, existing analyses indicate an increasing trend in cyber threats [
10,
11], while some observers report 27 publicly documented cyberattacks between mid-2023 and mid-2024 targeting the transport and logistics sector [
12]. In this context, some countries, such as the Netherlands, are now considering replacing actuated traffic signal control systems that have proven susceptible to spoofing by Internet of Things (IoT) devices [
13].
The current paper contributes to the body of research on transport resilience a methodological framework and a readily quantifiable metric, based on the Macroscopic Fundamental Diagram (MFD), for assessing the traffic-related impact of digital infrastructure malfunctions, with a focus on traffic signal tampering. The remainder of the paper is organized as follows.
Section 2 provides a brief overview of the state of the art in the cybersecurity of digital and intelligent transport technologies, with emphasis on traffic signal tampering, and summarizes the study’s main contribution.
Section 3 presents the methodological approach and experimental design.
Section 4 and
Section 5 report the results of the basic experiment and a sensitivity analysis involving variable traffic states, respectively.
Section 6 discusses the findings, the practical exploitation potential of the proposed framework within Intelligent Transport Systems (ITS), and its generalization to real-world networks. Finally,
Section 7 concludes the study and outlines directions for future research.
3. Materials and Methods
The methodological workflow of this study is summarized in
Figure 1.
Due to the lack of real-world network data, the analysis in this study is conducted on one of the most widely used test networks in the transport-related literature, i.e., an aggregated version of the Sioux Falls network in South Dakota (USA), encoded in the Eclipse SUMO traffic simulator (version 1.12.0, German Aerospace Center, Berlin, Germany). Topologically, the network is comparable to several grid networks used in other studies (see
Table 1) and to Manhattan-style scenarios in CAV-related literature (e.g., see [
46,
47]). The existence of several segments and multiple routing options provides a sufficiently complex analytical context. Additionally, the presence of irregular intersections, including one with more than four converging links (i.e., at Node 10), improves the representation of complex infrastructure configurations and intersections that are not uncommon in real-world urban networks. Given the concerns raised in previous research regarding the rationality of the original traffic demand [48], the demand has been scaled down to 100,000 vehicle trips while preserving its spatial distribution. The uneven accumulation of demand in certain zones adds another layer of realism, reflecting the widely accepted notion of urban space hierarchy.
On the supply side, the network is encoded using two road classes, as shown in
Figure 2. Traffic lights with fixed cycle lengths (between 90 and 120 s) are installed at intersections where more than two roads converge. For experimental balance, we assume that these traffic lights are centrally coordinated by three traffic signal controllers, each operating within a designated area (Areas A, B, and C). These areas are delineated to (1) be of comparable size, (2) exhibit a regular and spatially compact shape (consistent with common practices for assigning coordination responsibilities in traffic control centers), (3) encompass road corridors with varying traffic intensity, and (4) include links from both road classes. The number and configuration of signal phases are initialized based on the principles of the dual-ring controller.
Network loading is implemented through path flows obtained from the solution to the User Equilibrium (UE) traffic assignment problem by (1) using the Path4GMNS interface (version 0.8.0, Arizona State University, Tempe, AZ, USA) of the DTALite transport modeling platform [
49] and (2) translating these flows into SUMO-compliant XML route files. Path4GMNS employs the U.S. Bureau of Public Roads (BPR) volume-delay function and supports several traffic analysis procedures, including link-based UE, path-based UE, dynamic traffic assignment, and Origin–Destination matrix estimation. We use the path-based UE to obtain granular (path-level) flow information required to generate explicit vehicle routes in SUMO. Specifically, the resulting path volumes are used to generate vehicle trips that depart with a gradually increasing rate over the network loading period (1 h). This is performed to capture a wide range of traffic states using a random power distribution function (implemented through the random.power NumPy function with a = 1.25).
The BPR shape parameters are set to their typical default values (a = 0.15, b = 4), while the per-lane capacity of the network’s roads is set to 1900 vehicles/hour for highways and 1000 vehicles/hour for urban arterials [
50]. Correspondingly, free-flow speed is limited to 90 km/h and 50 km/h, respectively.
The derived UE flows, aggregated at the street level (
Figure 2), are used as input to Webster’s [
51] method for traffic signal design (implemented in the SUMO library) to determine green times for each phase. These timings are further refined in a stepwise manner, along with intersection connections, to better utilize available network capacity and reduce traffic queues.
In addition to the network loading method described above, hereafter referred to as “fixed routing”, three additional routing behaviors are implemented. The first, termed “flexible routing”, involves the relaxation of vehicle paths derived from UE traffic assignment (i.e., removing intermediate links) to allow trips to follow the fastest routes based on prevailing traffic conditions at the time of departure. The second behavior, termed “semi-adaptive routing”, builds on the previous one by incorporating a rerouting device with network-wide coverage that provides all drivers with updated traffic information every 60 s. Drivers are assumed to switch routes with a probability of 50% based on the information received. The third behavior, termed “adaptive routing”, is a variation of the semi-adaptive case, where drivers are assumed to switch routes with a probability of 100%. It is intentionally included as an upper-bound stress scenario to test the limits of macroscopic, infrastructure-centric indicators under extreme route choice adaptation.
Based on the above modeling and traffic simulation testbed, we assume that a malicious entity gains access to existing signal controllers and tampers with the traffic signal control programs within their respective areas by introducing an “all-red” phase lasting 10 s at the beginning of each cycle. This attack behavior represents an intermediate case between the covert and overt manipulation scenarios distinguished by Ganin et al. [
35], as it introduces a detectable disruption without fully disabling signal operation. As illustrated in
Figure 3, signal tampering scenarios are spatially varied to cover each area individually, all pairwise combinations, and all areas simultaneously (A, B, C, AB, AC, BC, ABC). These scenarios are used to test the proposed framework under signal tampering conditions of increasing spatial extent and total intensity, extending progressively to the full network. Each tampering scenario is paired with the four assumed route choice behaviors, resulting in a total of 28 scenarios.
The impact of each scenario $j$ on prevailing traffic conditions is evaluated through the following metric reflecting average relative flow reduction (AFRM):

$$\mathrm{AFRM}_j = \frac{1}{k_u - k_l} \int_{k_l}^{k_u} \frac{\bar{q}_0(k) - \bar{q}_j(k)}{\bar{q}_0(k)}\, dk \qquad (1)$$

where $\bar{q}_0(k)$ and $\bar{q}_j(k)$ denote the average (per-link) flow rate in the intact and affected (according to scenario $j$) network states, respectively, i.e., $\bar{q}(k) = \frac{1}{|L|}\sum_{l \in L} q_l(k)$, when the density on each link equals $k$. $L$ represents the set of physical network links and $|L|$ their total number. This formulation captures the relative reduction in the unweighted space-mean network flow, based on existing empirical evidence by Geroliminis and Daganzo [52] and Geroliminis and Sun [53] suggesting that both weighted and unweighted averages of the main traffic parameters (e.g., flow, speed, density) can be used to generate well-defined MFDs. The macroscopic flow–density relationship is approximated using a single-regime, non-physical functional form [54], specifically, a 3-degree polynomial, as frequently performed in the literature [55,56]. The bounds $k_l$ and $k_u$ in Equation (1) are determined by considering both intact and perturbed network states and their common (i.e., overlapping) density domain, as follows [57]:

$$k_l = \max\!\left(k_{\min}^{(0)},\, k_{\min}^{(j)}\right) \qquad (2)$$

$$k_u = \min\!\left(k_{\max}^{(0)},\, k_{\max}^{(j)}\right) \qquad (3)$$
It is noted that restricting the integration bounds to the overlapping density domain is the only meaningful choice, as evaluating flow rates over heterogeneous density ranges would compromise the comparability of the resulting quantities.
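The AFRM computation described above can be sketched numerically as follows (an illustrative simplification: the degree-3 polynomial fits follow the text, while the uniform-grid averaging is our stand-in for the integral in Equation (1)):

```python
import numpy as np

def afrm(k0, q0, kj, qj, degree=3, n_grid=200):
    """Average relative flow reduction for one scenario: fit
    single-regime degree-3 polynomials to the intact (k0, q0) and
    perturbed (kj, qj) flow-density observations, then average the
    relative flow drop over the common (overlapping) density domain."""
    p0 = np.polynomial.Polynomial.fit(k0, q0, degree)
    pj = np.polynomial.Polynomial.fit(kj, qj, degree)
    # Integration bounds: overlapping density domain of the two states.
    k_lo = max(k0.min(), kj.min())
    k_hi = min(k0.max(), kj.max())
    grid = np.linspace(k_lo, k_hi, n_grid)
    rel_reduction = (p0(grid) - pj(grid)) / p0(grid)
    # Mean over a uniform grid approximates the normalized integral.
    return float(rel_reduction.mean())
```

As a sanity check, a perturbed state whose flow is a uniform 20% below the intact flow over the same density range yields an AFRM of about 0.2.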
The traffic flow measurements required for calculating AFRM are obtained from the edge data outputs of the corresponding SUMO simulation experiments. To avoid the effects of loading–unloading hysteresis [
58], only data from the network loading period are considered. Each experiment is repeated using five different random seeds.
AFRM is benchmarked against a series of disutility-based indicators, summarized in
Table 2. These indicators are quantified using trip information from the corresponding SUMO simulation experiments. Unless otherwise specified, the trip data covers the entire simulation duration, which is set to an arbitrarily large value to allow vehicles to reach their destinations.
The last entry in
Table 2 refers to a consensus ranking of the analyzed scenarios based on the Kemeny–Young method [
59], which has been used in the literature to derive consensus rankings of street criticality across multiple metrics [
60]. Due to its exponential computational complexity, we approximate it using a Monte Carlo-based approach involving ranking initialization, iterative swapping, and a selection of the ranking that minimizes total pairwise disagreements.
The derived results are evaluated using logical and statistical checks, including Pearson’s, Spearman’s, and Kendall’s tau correlation indices. Pearson identifies linear associations between AFRM and the benchmark disutility metrics; Spearman evaluates the consistency in the relative ordering of scenarios based on the obtained rankings; and Kendall’s tau assesses the level of pairwise agreement among these rankings, indicating the stability of AFRM’s prioritization across different performance metrics.
For each pair of variables, given the limited sample size, p-values are computed using permutation-based tests for the Pearson, Spearman, and Kendall correlations. Furthermore, uncertainty in the correlation estimates is assessed using an 80% bootstrap confidence interval (CI), defined by the 10th and 90th percentiles of the bootstrap distribution.
Finally, in an effort to check the stability of the integration bounds used in Equation (1), as defined in Equations (2) and (3), a sensitivity analysis is conducted under varying traffic states. The methodology and results of this analysis are presented in
Section 5.
4. Performance Evaluation
Figure 4 depicts the MFD of the tested road network across all analyzed scenarios, including the baseline. In the baseline state, the fixed routing behavior results in a higher maximum average network flow (i.e., average network capacity) of approximately 1300 vehicles/hour. Under flexible and adaptive route choice behaviors, this value is slightly reduced; however, as drivers become more adaptive, the congested regime becomes more pronounced, indicated by a higher maximum density. This suggests that, under fixed routing, the network accommodates a larger volume of traffic more productively before persistent congestion emerges. In contrast, the increased visibility of the congested regime under flexible, semi-adaptive, and adaptive behaviors indicates more uniform utilization of the road infrastructure once critical density is exceeded (i.e., increased congestion homogeneity). This is expected, as drivers with routing flexibility, access to real-time information, and varying levels of compliance tend to redistribute themselves more evenly across the network when congestion builds up.
Regarding AFRM outputs in the context of single-controller scenarios, the most impactful case involves Area C across all analyzed routing behaviors. This is largely expected, as the UE flow results (
Figure 2) show that the average number of vehicles directed toward signalized intersections in Area C is 3465.8, compared to 2933.9 in Area B and 2838.1 in Area A, indicating a significantly higher number of affected drivers.
For multi-area scenarios, the most impactful scenario under the flexible routing behavior involves tampering the controllers of Areas B and C, with the controller of Area A operating normally (Scenario BC). This can be attributed to the fact that, under flexible routing, drivers receive information at departure and tend to reroute toward Area A, which initially appears less congested. This results in increased congestion homogeneity during the transition state and more widespread queue formation across the network. However, delays are reduced more effectively than in other scenarios, as drivers manage to avoid specific bottlenecks within Areas B and C.
Conversely, under the fixed, semi-adaptive, and adaptive route choice behaviors, the most impactful scenario involves all three areas simultaneously (Scenario ABC). This is expected, as the system-wide controller malfunction affects the entire network at once, thereby affecting all users and limiting available rerouting opportunities.
A full comparison of AFRM with the benchmark disutility indicators is presented in
Table 3. As shown, under the less dynamic routing behaviors (i.e., fixed and flexible), the greatest increase in road user disutility, in terms of total, en-route, and waiting times, time losses, and reduced trip completion rates, is generally associated with the system-wide malfunction (Scenario ABC). In contrast, under the more dynamic behaviors (i.e., semi-adaptive and adaptive), the greatest disutility impacts are mostly linked to either Scenario AC or Scenario BC. This can be attributed to the ineffective diversion of road users toward areas that appear more attractive at certain time steps, or to frequent route switching by drivers, which ultimately results in higher overall system costs.
Interestingly, certain single-area tampering scenarios (e.g., Scenarios A and B) occasionally result in improved road user disutility. While this may appear counterintuitive at first, it is not entirely unexpected, as reducing inflow rates along key traffic corridors constitutes a well-known traffic management strategy [
61,
62].
For example, tampering only Area B’s traffic signal controller facilitates the desynchronization of traffic queues, particularly along highly congested corridors (e.g., the one through Nodes 11–10–16–18), and reduces interference at key intersections, such as those at Nodes 10 and 15. Furthermore, as noted in
Section 2.1, similar results have been reported by Feng et al. [
37] in the context of ineffective signal attacks.
Another notable observation from
Table 3, based on AERTTI and ATTTI, is that the smallest increases in travel time are associated with flexible routing, while the largest increases often correspond to adaptive behavior. This suggests that when drivers receive information only at trip departure, rather than continuously switching routes as they approach their destinations, average travel times are lower, and the impacts of signal control malfunctions are more limited. This pattern aligns with the concept of rational inattention, whereby individuals choose to ignore information when the cost of processing it outweighs the expected benefit [
63]. While the information received by drivers at discrete moments during the simulation period reflects valid conditions at those times (i.e., based on dynamic shortest-path computations), this does not guarantee that the network will reach an equilibrium state if all users act on it simultaneously in an identical manner. Therefore, partially or completely disregarding this non-individualized information can be interpreted as a form of rational inattention. The results would have been markedly different if a Dynamic User Equilibrium (DUE) approach (e.g., Logit-based) had been applied, i.e., through SUMO’s duaIterate tool. However, this (1) lies outside the scope of this study and (2) may not be representative of user responses to short-notice disruptions.
Figure 5 presents the correlation between AFRM and the travel disutility metrics across all tampering scenarios and routing behaviors, based on Kendall’s tau, Pearson’s, and Spearman’s coefficients. Under fixed routing, AFRM shows a consistently strong and positive relationship with nearly all disutility metrics. Notably, Pearson’s and Spearman’s coefficients exceed 0.85 for AERTTI, AWTTI, and ATLI, and surpass 0.95 for ATCRD, resulting in an average correlation of 0.83 across all indices and metrics. The corresponding
p-values are consistently low (i.e., below 0.1 in all cases). Under flexible routing, the level of correlation generally decreases. For example, Pearson’s coefficients between AFRM and AERTTI, and between AFRM and ATLI, as well as the Kendall’s tau coefficient between AFRM and ATTTI, fall below 0.60, leading to a reduced average correlation of 0.74. In addition, in five cases, the corresponding
p-values exceed 0.1. In contrast, under semi-adaptive routing, an upward trend is observed. The Pearson coefficients between AFRM and AERTTI, and between AFRM and ATLI, return to values above 0.80, and the average correlation across all indices increases to 0.77. The number of cases in which the
p-values exceed 0.1 drops to two. Finally, under adaptive routing, correlations exhibit greater variability, as reflected in their relatively wider range. Although Pearson’s coefficients remain high for all disutility metrics, the corresponding Kendall’s tau values for AERTTI, AWTTI, and ATLI are substantially lower (each equal to 0.43). As a result, the average correlation decreases to 0.71, and the number of cases with
p-values exceeding 0.1 increases to six.
Overall,
Figure 5 shows that AFRM exhibits a strong correlation with the adopted disutility indicators under fixed routing. This correlation is reduced when user behavior is determined only at trip departure, increases again with moderate route switching, and decreases again when routes are adjusted too frequently. This pattern can be explained as follows. Under fixed routing, both AFRM and the disutility indicators consistently reflect the underlying traffic degradation, i.e., speed reductions align with travel time increases, as total vehicle mileage is predefined. When routing decisions are made only at departure, this often creates positive benefits for road users. However, flow redistribution is not fully synchronized with emerging congestion patterns, resulting in a relatively less consistent relationship between AFRM and the disutility indicators. Semi-adaptive routing partially realigns traffic demand and user behavior with evolving network conditions, thereby reinforcing the connection between the production-based and disutility indicators. Finally, fully adaptive routing introduces oscillatory dynamics and increased competition, which tend to decouple user- and infrastructure-centric benefits.
Figure 6 illustrates the relationship between AFRM-based scenario rankings and the Kemeny–Young (KY) aggregated rankings for each routing behavior, along with the corresponding Spearman correlation coefficients. Under fixed and flexible routing, AFRM accurately ranks the least critical scenarios (A, B, AB), with only minor discrepancies among the most critical ones, resulting in a high Spearman correlation of 0.89. Under semi-adaptive routing, discrepancies appear for scenarios of both moderate and high criticality, reducing the correlation to 0.82. Under adaptive routing, AFRM performs less consistently, showing noticeable discrepancies across scenarios of low, moderate, and high criticality, which further reduces the correlation to 0.71. The slightly advantageous aggregated ranking of flexible routing compared to semi-adaptive routing can be attributed to the more frequent dominance of the respective Kendall’s tau coefficients (see
Figure 5).
The above results confirm the usefulness of AFRM as a practical indicator for assessing digital infrastructure disruptions, particularly in contexts where drivers are unlikely to engage in aggressive route switching. Such conditions are common in real-world networks, where route choice adaptation is constrained by heterogeneous information availability, individual preferences, and subjective perceptions of travel conditions.
5. Sensitivity Analysis
Although computing AFRM over a wide range of traffic states to which a network can be exposed is the most rational and representative approach, this section further examines the sensitivity of the proposed metric to the adopted integration bounds.
The literature proposes several elegant approaches for distinguishing traffic states in road networks by employing either statistical, percolation theory-, or MFD-based approaches, such as the framework by Lee and Lee [
45], as discussed in
Section 2. However, these methodologies typically address sustained network loading conditions (e.g., based on daily demand profiles), which is not the case in the current study.
To address this challenge, following Tesone et al. [
64], we distinguish between (1) saturation onset, representing a transition phase during which the network enters a higher vehicle accumulation regime, and (2) congestion onset, after which vehicle accumulation reaches a significantly higher level. This distinction cannot be made based on traffic flow rate, due to its dual nature (similar values can occur in different states), but can instead be supported by traffic speed and density [
45].
By closely inspecting the shape of the MFDs in
Figure 4, it is evident that traffic density increases rapidly when the network is sparsely loaded, up to a point beyond which the rate of increase slows due to the gradual accumulation of a larger number of vehicles that need to be served. This point does not necessarily correspond to a high average (i.e., network-wide) density, due to the constraints imposed by traffic heterogeneity. Subsequently, approximately in the middle of the branch following this point, a noticeable increase in the dispersion of the flow–density pairs becomes apparent. According to Geroliminis and Sun [
53], this feature may indicate the onset of congestion, as increased scattering has been empirically observed from that point onward.
In this context, let $T$ denote the network loading period and $k(t)$ the network-average traffic density (vehicles/km) at time $t \in T$, obtained by spatial aggregation over all network links and temporal averaging within each simulation interval. Let $\{k(t_1), k(t_2), \dots, k(t_n)\}$ also be the ordered sequence of observed density values, with $t_1 < t_2 < \dots < t_n$. The saturation onset is defined as the time instant at which the temporal evolution of network density exhibits a structural change in its growth rate. To identify this transition, the density trajectory is approximated by a two-segment piecewise linear model of the form:

$$k(t) = \begin{cases} \beta_0 + \beta_1 t + \varepsilon_1(t), & t \le t_s \\ \gamma_0 + \gamma_1 t + \varepsilon_2(t), & t > t_s \end{cases} \qquad (4)$$

where $t_s$ denotes the saturation onset time, $\beta_0$, $\beta_1$, $\gamma_0$, and $\gamma_1$ the set of regression coefficients, and $\varepsilon_1(t)$ and $\varepsilon_2(t)$ the zero-mean error terms. The optimal breakpoint $t_s^{*}$ is estimated by minimizing the total sum of squared residuals of the two ordinary least squares regressions over all admissible breakpoints $t_b$, subject to a minimum number of observations per segment, $n_{\min}$, i.e.,

$$t_s^{*} = \arg\min_{t_b}\left[\mathrm{SSR}_1(t_b) + \mathrm{SSR}_2(t_b)\right] \qquad (5)$$

where $\mathrm{SSR}_1(t_b)$ and $\mathrm{SSR}_2(t_b)$ denote the residual sums of squares of the segments before and after $t_b$. The corresponding saturation onset density is then defined as $k_s = k(t_s^{*})$.
Following the identification of this point, let $K_{\mathrm{post}} = \{k(t_i) : t_i > t_s^{*}\}$ denote the set of density observations recorded onwards. A characteristic congestion level is identified as the empirical median of this post-saturation density distribution, i.e., $k_{\mathrm{med}} = \mathrm{median}(K_{\mathrm{post}})$.
Subsequently, the congestion onset time, $t_c$, is approximated as the first observed time instant whose density is closest to this characteristic level, i.e.,

$$t_c = \arg\min_{t_i}\left|k(t_i) - k_{\mathrm{med}}\right| \qquad (6)$$

The congestion onset density is set to the value observed at $t_c$, i.e., $k_c = k(t_c)$.
Figure 7 presents the evolution of traffic density throughout the network loading period, along with the identified saturation and congestion onset points. As shown, saturation onset is identified in most cases at densities close to 30 vehicles/km, while congestion onset occurs at around 45 vehicles/km.
On that basis, AFRM is re-computed for each scenario $j$ using the following bounds:

$$k_l = \max\!\left(k_{\min}^{(0)},\, k_{\min}^{(j)}\right), \quad k_u = \min\!\left(k_c^{(0)},\, k_c^{(j)}\right) \qquad (7)$$

$$k_l = \max\!\left(k_s^{(0)},\, k_s^{(j)}\right), \quad k_u = \min\!\left(k_{\max}^{(0)},\, k_{\max}^{(j)}\right) \qquad (8)$$

$$k_l = \max\!\left(k_s^{(0)},\, k_s^{(j)}\right), \quad k_u = \min\!\left(k_c^{(0)},\, k_c^{(j)}\right) \qquad (9)$$

where $k_s$ and $k_c$ denote the saturation and congestion onset densities identified above for the intact (0) and perturbed ($j$) states.
Equations (7)–(9) imply the removal of the congested regime, the removal of the free-flow regime, as well as the symmetrical removal of both. In this way, the current sensitivity analysis adopts a balanced approach, maintaining a common reference point (the saturation regime) while progressively neglecting unfavorable traffic conditions, highly favorable conditions, and both simultaneously. It should be noted that, under the adopted experimental setup, vehicles enter the network and complete their trips under a mixture of traffic conditions, covering both loading and unloading periods. Consequently, computing the AFRM solely based on extreme conditions, such as pure free-flow or fully congested regimes, would be biased and non-comparable with the disutility indicators, as defined and computed in
Section 3 and
Section 4, respectively.
The updated AFRM values are re-evaluated using (1) all routing behaviors, (2) the average of all correlation coefficients computed in
Section 4 (Pearson’s, Spearman’s, Kendall’s tau) across all disutility metrics and tampering scenarios, (3) their standard deviation, serving as a proxy for uncertainty, as well as (4) the elasticity of average correlation reduction (in relation to the reduction in the considered density range).
Table 4 summarizes the results of the current sensitivity analysis. For ease of comparison, the table also includes the average values and the standard deviations corresponding to the basic experiment (
Section 4).
The AFRM should be interpreted as sufficiently stable, given that (1) the average correlation across all indices and scenarios did not fall below 0.56, (2) the density range used for the AFRM computation was reduced by 19% to 77%, while (3) the computation of the disutility indicators remained completely unmodified. Therefore, although the outcomes already conform to common acceptability criteria (e.g., all elasticity values are below 1.0), the actual stability can be considered even higher. Moreover, in certain cases, the correlation between the AFRM and the various disutility metrics was enhanced, without substantial differences in the level of uncertainty, as expressed through standard deviation.
6. Discussion and Recommendations
6.1. Reflection on Experimental Results
The derived results underscore the potential of macroscopic traffic analysis tools, i.e., the Macroscopic Fundamental Diagram (MFD), to extend beyond their traditional application in traffic management and control to the assessment of disruptions in the digital infrastructure of road networks. The proposed AFRM demonstrated consistency in capturing the effects of such disruptions when compared to conventional disutility-based performance metrics (e.g., total travel time, en-route delay, and waiting time), indicating its usefulness as a proxy for network-wide degradation.
However, the accuracy and representativeness of the AFRM are maximized under routing behaviors that are not overly dynamic or adaptive. Specifically, when road users do not substantially deviate from baseline (established) route choices, or deviate in a rational and moderate manner, AFRM aligns more closely with user-experienced disutility. This observation is consistent with the foundational theory of MFDs. As noted by Geroliminis et al. [
65], macroscopic relationships between vehicle accumulation and network production remain valid when traffic control strategies do not induce large-scale rerouting or abrupt behavioral shifts. Such conditions are more representative of real-world behavior, where route choices are typically influenced by limited information, individual preferences, and heterogeneous perceptions of congestion.
Further support for this observation is provided by several other studies in the literature. For instance, Zockaie et al. [
58] showed that under emergency conditions, excessively frequent route switching may artificially increase average network flow that is not necessarily accompanied by an increase in network output flow (trip completion rate). Similarly, Knoop et al. [
66] concluded that the relationship between traffic production and vehicle accumulation remains valid as long as routing is not substantially modified. Finally, the data-driven study by Mitsakis et al. [
67] on a high-intensity storm in Athens in 2013, which bears similarities to the signal tampering case, found that the extreme impacts recorded were a product of both traffic deterioration and user detours.
Other experimental results of this study, namely, the reduced disutility observed when drivers were not exposed to excessive, non-individualized en-route information, are consistent with recent applications of Rational Inattention Theory in traffic modeling [
62]. This finding implies that, under digital infrastructure disruption (e.g., signal attacks), traffic managers should rely on optimization tools to design targeted rerouting strategies. In this context, they need to determine which users to inform, the spatial scope of the advisory (e.g., corridor-level vs. area-wide), the guidance form (e.g., route constraints, recommended detours, or probabilistic advice), and the timing of the intervention (e.g., onset, duration, and update frequency), so as to minimize unintended travel time deterioration.
Furthermore, the more pronounced drop in network production in areas of higher traffic intensity (see
Table 3) is consistent with findings reported in the literature [
9,
35]. This provides an additional validation layer for the proposed AFRM and suggests that traffic managers should maintain heightened vigilance in high-traffic areas.
Overall, the analysis highlights both the potential and the boundaries of macroscopic metrics, such as the AFRM, in reflecting the system-level impacts of digital disruptions. A further discussion of its exploitation potential and incorporation into ITS and traffic management protocols is provided next.
6.2. Exploitation Potential
Regarding the incorporation of the proposed framework and metric into ITS and traffic management processes and protocols, we see two main directions, namely traffic planning and traffic operations.
In the context of traffic planning, the framework can support digital vulnerability analyses by identifying traffic control infrastructures, or groups thereof, whose malfunction leads to disproportionate network-wide performance degradation. This includes both ranking individual digital assets according to their criticality and assessing compound disruption scenarios involving multiple controllers or control zones. In this sense, the framework provides a system-wide analog to traditional critical link or node analyses applied to physical infrastructure.
Moreover, the computational efficiency and limited data requirements of AFRM enable its use in scenario-based stress testing and contingency planning. Traffic authorities can apply the framework in microscopic or mesoscopic simulation environments to explore representative combinations of control configurations and digital disruption intensities, without relying on trip-level information. The resulting rankings can then support the assessment and definition of predefined response strategies (i.e., a library of “what-if” measures), such as fallback signal timing plans, safe-mode operation, or controller isolation policies, which can be activated when digital malfunctions are suspected.
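The scenario-based ranking step described above can be sketched in a few lines; the scenario labels and degradation scores below are hypothetical placeholders standing in for simulation outputs (e.g., AFRM values per disruption scenario):

```python
def rank_disruption_scenarios(scenarios):
    """Rank simulated digital-disruption scenarios by network impact.

    `scenarios` maps a scenario label (e.g., a controller or control-zone
    id; names here are illustrative) to a degradation score obtained from
    a simulation run. Returns labels sorted from most to least critical,
    as a basis for a library of predefined "what-if" response measures.
    """
    return sorted(scenarios, key=scenarios.get, reverse=True)

# Hypothetical scores from three stress-test runs.
runs = {"controller_A": 0.12, "zone_centre": 0.31, "controller_B": 0.05}
print(rank_disruption_scenarios(runs))
# ['zone_centre', 'controller_A', 'controller_B']
```

The resulting ordering can then be mapped to predefined responses, such as fallback signal timing plans or controller isolation policies.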
It is clarified that by “traffic planning”, we do not refer to long-term disruptions of transport infrastructure and its control logic, as these changes introduce new behaviors, path flows, and network structure, hence compromising the comparability of the respective network productions. The same holds true for permanent mobility measures, such as extended access regulation and pricing schemes, as well as the construction of new transport infrastructure. On that basis, traffic planning should be interpreted as (1) the estimation of short-term impacts of disruptions in traffic control or the deterioration (not complete blockage) of physical infrastructure (e.g., due to extreme weather and temporary work zones) and (2) the preparation of traffic management strategies. In this sense, the proposed framework is intended to support operational and tactical decision-making, rather than long-term strategic planning.
With respect to traffic operations, the framework can serve as a system-level monitoring and decision-support tool within traffic management centers. Since traffic flow and occupancy measurements are routinely collected within many urban networks, AFRM can be computed continuously, or quasi-continuously, and visualized alongside other traffic performance indicators (e.g., travel times, level of service, etc.) or disseminated to navigation service providers through centralized data exchange platforms.
Given that macroscopic flow–density relationships are largely determined by the underlying road infrastructure and its control logic [
39,
52,
68], persistent deviations in the AFRM or in the observed macroscopic traffic state may indicate abnormal operating conditions that are not attributable to demand fluctuations alone. In this role, the AFRM can function as an infrastructure-centric indicator of network degradation, complementing intersection-level diagnostics.
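A minimal sketch of such persistence-based screening is given below; the threshold and window length are illustrative assumptions, not calibrated values. An alert is raised only when the metric stays elevated for several consecutive intervals, filtering out short-lived demand fluctuations:

```python
class DegradationMonitor:
    """Flag persistent (rather than transient) deviations in a
    periodically computed degradation metric (e.g., an AFRM-like value
    assumed to lie in [0, 1]; parameters are illustrative)."""

    def __init__(self, threshold=0.15, persistence=3):
        self.threshold = threshold      # metric level considered abnormal
        self.persistence = persistence  # consecutive intervals required
        self._streak = 0

    def update(self, value):
        # Count consecutive exceedances; reset on any normal reading.
        self._streak = self._streak + 1 if value > self.threshold else 0
        return self._streak >= self.persistence

mon = DegradationMonitor()
readings = [0.05, 0.20, 0.04, 0.18, 0.22, 0.25]
print([mon.update(v) for v in readings])
# [False, False, False, False, False, True]
```

Only the final reading triggers an alert, after three consecutive exceedances, which is the behavior sought when distinguishing benign anomalies from events warranting escalation.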
If combined with traffic signal logs and controller status data, in the context of the existing data fusion methods reviewed in
Section 2 [
31,
32], AFRM may further contribute to cross-layer situational awareness. In this setting, the metric may not necessarily replace existing cybersecurity intrusion detection systems but can act complementarily by providing network-wide evidence. Such impact-aware monitoring can help distinguish benign anomalies from events that warrant escalation and coordinated response between traffic operators and cybersecurity teams.
This network-level perspective can also support operator prioritization and attention triage, allowing traffic managers to focus on control zones or network partitions exhibiting greater performance loss, rather than reacting to isolated disturbances. In addition, the evolution of the AFRM can inform when to activate or deactivate mitigation measures, such as reverting to fixed-time control, suppressing adaptive features, or applying escalating strategies when disturbances are not resolved.
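The activation/deactivation logic can be sketched as a simple hysteresis rule, so that a measure switched on at a high metric level is released only once the metric falls clearly below it, avoiding oscillation between states; both thresholds are illustrative assumptions:

```python
def mitigation_state(values, on_thr=0.2, off_thr=0.1):
    """Hysteresis logic for activating/deactivating a fallback measure
    (e.g., reverting to fixed-time control) from successive metric
    values. Activation at `on_thr`; release only once the metric drops
    back below the lower `off_thr`. Thresholds are illustrative."""
    active, states = False, []
    for v in values:
        if not active and v >= on_thr:
            active = True
        elif active and v < off_thr:
            active = False
        states.append(active)
    return states

print(mitigation_state([0.05, 0.25, 0.15, 0.08, 0.12]))
# [False, True, True, False, False]
```

Note that the measure stays active at 0.15 (between the two thresholds) and, once released, is not reactivated at 0.12, which is the intended anti-oscillation behavior.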
Finally, the historical records of AFRM may support post-incident investigation and learning, aiding in reconstructing the temporal evolution and spatial footprint of digital disruptions. This may enable the evidence-based refinement of any existing response strategies, protocols, and measures.
6.3. Generalization to Real-World Networks
A limitation of the present study is the use of a stylized, grid-like road network. Although (1) this approach is common in several related studies, (2) steps were taken to enhance network realism (e.g., by introducing signal control programs and rationalizing travel demand), and (3) the network served as a useful testbed for initial exploratory analysis and hypothesis testing, generalization remains a necessary step. This section provides a preliminary assessment of the applicability of the proposed framework and the results that should be expected in more complex, real-world networks.
First, regarding the multimodal nature of real-world networks, we expect that the coexistence of public transport modes (e.g., buses, trams, trolleybuses), active mobility (e.g., micromobility devices, cyclists, pedestrians), and vehicular traffic will exacerbate the impacts of traffic signal tampering scenarios. This is anticipated due to both intensified vehicular and cross-modal interactions and the fact that traffic signalization regulates the right of way for multiple user groups, including pedestrians, beyond vehicular traffic.
Second, with respect to travel demand, in cases of higher traffic intensity, the impacts of signal tampering attacks are also expected to be more pronounced. Nevertheless, the employed Sioux Falls scenario already represents a highly loaded system; even after demand rationalization (see
Section 3), the average volume-to-capacity (V/C) ratio remains at 1.4. This suggests that the simulated conditions closely resemble the peak-demand case. Furthermore, according to existing evidence, network performance exhibits relatively greater marginal sensitivity to signal attack patterns than to the level of traffic demand [
34]. Additionally, regarding aggregate demand variability, no methodological modification appears necessary at present, as previous studies show that MFDs are more sensitive to infrastructure characteristics (and the associated traffic control logic) than to traffic demand [
39,
51,
67]. However, in cases of substantial differentiation in mobility patterns (e.g., at the spatial level), it is advisable for traffic managers to maintain and use the appropriate baseline (reference) MFDs.
Third, highly complex network topologies may increase drivers’ rerouting opportunities, potentially weakening the association between user- and infrastructure-centric effects. In practice, however, a high degree of alternative routing is also expected to influence established route choices, thereby moderating the impact of network topology on AFRM. Moreover, the presence of turn restrictions, one-way road infrastructure, and capacity limitations on connecting streets, which are common in real-world networks, is expected to further strengthen the explanatory power of AFRM.
Fourth, with respect to more complex signal zoning for vehicular traffic, it is not straightforward to determine whether the resulting impacts would be greater or not, as this depends heavily on the underlying road infrastructure. For example, when exclusive signal phases for left- and right-turn movements are supported by dedicated turn (pocket) lanes, spillback effects under signal tampering scenarios are expected to be mitigated. In the absence of such configurations, the resulting impacts are likely to be much more severe.
Finally, the large scale of real-world urban networks has clear implications for the application of the proposed framework. Large-scale networks are not a priori expected to exhibit a single homogeneous MFD. In such cases, traffic managers should apply network partitioning based on traffic homogeneity and spatial compactness criteria. Several existing methodological advancements can support this process [
69,
70,
71,
72], drawing on simulation-based as well as data-driven approaches with complete or incomplete data. The AFRM should then be computed within each partition, also accounting for the boundary effects of signal malfunctions.
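Assuming partition labels have already been obtained (e.g., by one of the clustering methods cited above), the per-partition computation can be sketched as follows; detector names, flows, and the flow-shortfall score are hypothetical stand-ins for a partition-level AFRM, not the paper's exact formulation:

```python
def partitioned_metric(detector_partition, detector_flows, baseline_flows):
    """Illustrative per-partition degradation score for a large network
    split into homogeneous regions.

    For each partition, the score is the relative drop of aggregate
    observed detector flow versus the partition's aggregate baseline
    flow; a stand-in for a partition-level AFRM.
    """
    totals = {}
    for det, part in detector_partition.items():
        loss, base = totals.setdefault(part, [0.0, 0.0])
        totals[part][0] = loss + (baseline_flows[det] - detector_flows[det])
        totals[part][1] = base + baseline_flows[det]
    return {p: loss / total for p, (loss, total) in totals.items()}

# Hypothetical detectors assigned to two partitions.
parts = {"d1": "centre", "d2": "centre", "d3": "ring"}
obs = {"d1": 400, "d2": 500, "d3": 950}   # flows under disruption [veh/h]
base = {"d1": 500, "d2": 500, "d3": 1000}  # baseline flows [veh/h]
print(partitioned_metric(parts, obs, base))
# {'centre': 0.1, 'ring': 0.05}
```

Scores computed per partition in this way also make it easier to localize the spatial footprint of a disruption and to account for boundary effects between adjacent regions.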
7. Conclusions and Future Work
This paper proposed and tested a framework for analyzing disruptions in digital road transport infrastructure using the Macroscopic Fundamental Diagram (MFD). The proposed AFRM offers a practical and quantifiable measure of network-wide performance loss. A key advantage of the metric lies in its operational applicability, as it can be computed using either traffic simulation with relatively short loading periods or standard field data, such as measurements from inductive loop detectors, without requiring detailed trajectory- or trip-level information. This reduces the complexity of evaluation and monitoring processes and positions the metric as a useful tool for enhancing traffic managers’ preparedness and real-time response capabilities. These capabilities are particularly relevant given the increasing reliance of modern transport systems on their digital infrastructure components, their growing interdependence with other critical infrastructures such as energy production and distribution systems, and the rising exposure to cyber-related hazards.
The conducted simulation experiments, focusing on multi-scale signal tampering scenarios and varying routing behaviors, showed that AFRM exhibits an acceptable correlation with conventional disutility performance metrics, particularly when road users do not engage in excessively aggressive route switching. While these experiments provided a useful basis for deriving several meaningful insights, we believe that more can be done to enhance the findings’ generalizability, including testing the framework across additional network topologies, behavioral assumptions, and real-world operational conditions.
Looking ahead, a first extension of the current study involves applying the proposed framework to larger, multimodal, and more complex networks. In this context, future work may also examine the behavior of the proposed measure across individual network partitions under varying conditions and scenario settings. These may include heterogeneous signal tampering scenarios, such as the insertion of red phases, removal of selected phases, shuffling of phase durations, or combinations thereof. Further extensions could consider (1) variable tampering intensities, for example, in terms of disruption duration, (2) different levels of accuracy in driver information systems, including cases where these systems are simultaneously manipulated, and/or (3) the presence of adaptive signal control programs and dynamically varying disruptions.
Another promising research direction may be to incorporate the proposed metric into intrusion detection algorithms, building on existing work [
31,
32]. These algorithms can then be tested in a traffic simulation environment to evaluate their ability to distinguish between recurring traffic disturbances (potentially accompanied by signal adaptations) and deliberate signal manipulations.
A third direction is to formulate the proposed metric using both pMFD and oMFD (Output MFD) approaches [
38]. In this setting, the pMFD-based metric (i.e., the AFRM) can reflect the perspective of infrastructure and traffic managers, while the oMFD-based formulation may serve as a benchmark metric, reflecting user-level effects. This dual approach would facilitate a more comprehensive analysis of the stability of the AFRM across multiple traffic states, since it enables these states to be isolated more effectively. In a similar vein, it would be of interest to explore the use of proxies for trip completion rate, such as the one proposed by Lu et al. [
41], in the context of traffic signal or other digital infrastructure disruptions. If successfully implemented, such proxies could elevate the user-centric approach to the level of a direct evaluation metric. A key challenge, though, involves the robust estimation of critical parameters, such as the increase in average trip length under disrupted operating conditions.
Finally, it would be of interest to explore the behavior of the proposed framework under a system-of-systems approach, involving the complete disruption of traffic signal control due to events such as widespread power grid failures. Recent incidents, such as the blackout of 28 April 2025, which affected Spain, Portugal, and southern France, provide a relevant reference point. However, such extreme scenarios are expected to push the limits of current simulation and modeling capabilities. Therefore, the co-utilization of surrogate modeling approaches is advisable.