
Comparison at Scale of Traffic Signal Cycle Split Failure Identification from High-Resolution Controller and Connected Vehicle Trajectory Data

by Enrique D. Saldivar-Carranza 1,*, Saumabha Gayen 1, Howell Li 1,2 and Darcy M. Bullock 1

1 Joint Transportation Research Program, Lyles School of Civil Engineering, Purdue University, West Lafayette, IN 47907, USA
2 LSM Analytics LLC, West Lafayette, IN 47901, USA
* Author to whom correspondence should be addressed.
Future Transp. 2024, 4(1), 236-256; https://doi.org/10.3390/futuretransp4010012
Submission received: 25 December 2023 / Revised: 29 January 2024 / Accepted: 19 February 2024 / Published: 1 March 2024

Abstract

Split failures have been a conventional method to estimate overcapacity at signalized intersections. Currently, split failures are estimated from high-resolution (HR) traffic signal controller event data by evaluating occupancy at the stop bar. Recently, a technique that uses high-fidelity connected vehicle (CV) trajectory data to estimate split failures has been developed and has been adopted by some agencies. This paper compares cycle-by-cycle split failure estimations from both techniques for 42 signalized intersections across central Indiana. CV trajectories were assigned to a cycle based on their arrival characteristics. Then, HR and CV data were used to determine whether each cycle split fails. Finally, agreements and discrepancies were quantified and evaluated. The results obtained after analyzing over 35,000 cycles showed that both techniques produce similar overall split failure estimations. The HR and the CV methods identified 4% and 3% of all cycles as split failing, respectively. However, only 23% of all cycles determined as split failing with the HR approach were also identified as split failing with CV data. Similarly, only 30% of all cycles determined as split failing with the CV approach were also identified as split failing with the HR approach. This indicates significant discrepancies on a cycle-by-cycle basis. Using CV data to identify split failing cycles produces more conservative results and is based on the entire experience of traversing vehicles. If data are available, the authors recommend the CV approach when allocating limited agency resources for operational improvement activities.

1. Introduction

Traffic signals have been estimated to contribute up to 10% of all traffic delays on the National Highway System [1]. With over 400,000 traffic signals in operation across the United States, it is important for agencies to monitor operations to identify locations where mobility could be improved [2,3,4,5].
Various traffic signal performance measures have been developed in the last two decades that enable the assessment of progression and capacity utilization at intersections [6,7,8]. A popular metric used to evaluate intersection congestion is the occurrence of split failures, also known as cycle failures. A split failure event occurs when a signal cycle does not provide enough green time to serve vehicles waiting on a particular movement [9]. Since split failures can be estimated from different data sources [6,7], it is crucial to evaluate the differences between these approaches so that practitioners can effectively interpret results from either one.

1.1. Literature Review

The state-of-the-practice for systematic evaluation of signal performance is the use of automated traffic signal performance measures (ATSPMs) [10,11,12]. ATSPMs use high-resolution (HR) (tenth-of-a-second) traffic signal controller event data (i.e., changes in detector states and signal outputs) to provide visualizations and tools. The wide adoption of ATSPMs has resulted in acknowledged performance metrics such as arrivals on green (AOG), delay estimations, travel times, arrival profiles, queue lengths, green occupancy ratio (GOR), red occupancy ratio (ROR), and split failures [2,6,13,14,15,16].
Freije et al. developed an ATSPM methodology that combines stop bar detectors’ GOR (i.e., the percent of time a detector is occupied during green) and the first N seconds of ROR (i.e., the percent of time a detector is occupied during red) with the type of phase termination to estimate whether a cycle split-failed for a given movement [9]. Split failures are arguably among the most useful signal performance measures as they indicate the locations where a movement or approach is at overcapacity and motorists would most likely complain [17,18]. As such, various studies have utilized split failures estimated from HR data to identify intersections experiencing challenges where retiming or maintenance activities could improve operations [19,20,21].
Over the last few years, crowdsourced connected vehicle (CV) trajectory data have emerged as a new dataset capable of producing and expanding existing traffic signal and arterial performance measures [7,22,23,24,25,26,27,28,29,30,31,32,33]. With over 500 billion records generated each month in the United States, CV data enable the development of highly scalable techniques since no detection or communication equipment is required. Another advantage of using CV data for traffic signal performance estimations is that practitioners have access to entire vehicle trajectories and are not constrained to limited detection zones [7].
Saldivar-Carranza et al. developed a technique that solely uses CV trajectories to determine whether a vehicle experiences a split failure based on its number of stops during its approach to an intersection [7]. This technique has been scaled to identify intersections with signal retiming opportunities where performance improvements were accomplished after timing adjustments [7] and has been adopted by the industry [7,34].

1.2. Motivation

There have been studies contrasting HR- and CV-based traffic signal performance measures. Waddell et al. concluded that estimations of AOG and trajectory stops from both data sources are similar, with a mean average percent error of 8.5%. Additionally, it was indicated that while CV data have better spatial detail, HR data have a frequency advantage [22]. Remias et al. determined that both HR and CV data can be used to identify corridor coordination issues and make offset adjustments [35]. Saldivar-Carranza et al. compared HR- and CV-based AOG estimations at 52 intersections in Utah. It was concluded that both techniques produce closely correlated results when queues are short and undersaturated conditions exist; however, significant discrepancies were observed when vehicles modulated their arrival speeds or when large queues were present. The authors recommended the use of CV data as they generate AOG estimations that are resilient to different traffic conditions [36].
Gayen et al. developed the required concepts to contrast cycle split failure estimations from HR and CV data. A preliminary analysis was carried out by comparing estimations for a single movement at a signalized intersection [37]. Even though the results provided valuable insights on estimation agreements and discrepancies, the analysis only covered a small sample, and the authors acknowledged the need to expand the study to more traffic signals with a wider distribution of traffic conditions to generate higher confidence in the derived conclusions.

Objective

The objective of this paper is to provide a scaled comparison of traffic signal cycle split failure identifications from HR and CV data. The findings of this analysis can provide valuable information on how to interpret results using either approach, which is particularly important as the industry moves towards a hybrid blend of HR- and CV-based traffic signal performance measures.

2. Datasets

HR and CV data from 22 May 2023 to 26 May 2023 were used in this study. This section describes each dataset and explains how split failures were identified from each.

2.1. High-Resolution Controller Events

Raw HR (ATSPM) traffic signal controller event data were provided by the Indiana Department of Transportation (INDOT). The data have a tenth-of-a-second resolution, and each event is tagged with the timestamp of its occurrence. The dataset contains the following relevant information:
  • Signal outputs: start and end time for the green, yellow, and red phases. Additionally, it provides information on the green phase termination type (i.e., gap-out, max-out, or force-off).
  • Detector characteristics: location, length, and detector type (i.e., count or presence).
  • Detector states: whether a detector is on or off.
Further details on the HR dataset, including a discussion of vehicle detection, data acquisition equipment, and communication infrastructure, are presented in [6].

Split Failure Identification

The occurrence of a split failure on a lane was identified using HR data by evaluating stop bar detector occupancy and phase termination type [9]. First, GOR is calculated as:
$$\mathrm{GOR} = \frac{O_g}{g} \quad (1)$$
where g is the duration of the green interval and O_g is the total detector occupancy time during green. Then, occupancy during the first 5 s of the red phase (ROR5) is calculated as:
$$\mathrm{ROR5} = \frac{O_r}{5} \quad (2)$$
where O_r is the total detector occupancy time during the first 5 s of the red interval. If GOR ≥ 80%, ROR5 ≥ 80%, and the phase terminated by max-out or force-off, then the lane being evaluated is said to have split failed for the cycle that started at the same time as g.
After occupancy has been evaluated at the lane level, occupancy at the movement (i.e., lane group) level can be assessed to identify if a movement split fails. An intersection movement with n lanes is assigned the GOR_l and ROR5_l values of lane l when lane l is the only lane identified as split failing. If more than one lane, or no lane, is identified as split failing, then the GOR_l and ROR5_l values that:
$$\text{Maximize } f(\mathrm{ROR5}, \mathrm{GOR}) = \mathrm{ROR5} + \frac{g}{5}\,\mathrm{GOR} \quad \text{subject to } (\mathrm{ROR5}, \mathrm{GOR}) \in \{(\mathrm{ROR5}_l, \mathrm{GOR}_l) : l = 1, \ldots, n\} \quad (3)$$
are assigned to the movement. Equation (3) ensures that the movement keeps the highest normalized occupancy ratios from the analyzed lanes. Once GOR_l and ROR5_l values have been assigned to the movement, the same lane-level criterion is applied to determine whether the movement split fails.
It is important to note that detector configuration and occupancy thresholds can significantly affect HR-based split failure estimations. Emtenan and Day [38] stated that accurate results can be obtained for detection zones of different lengths if the occupancy threshold is adjusted accordingly. However, such tasks affect scalability as each detection zone must be evaluated individually and can be time-intensive, especially for agencies that manage thousands of intersections. In this study, the standard 80% occupancy threshold was used to evaluate all detection zones.
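To make the criterion above concrete, the following Python sketch (hypothetical data structure and function names; only the 80% thresholds, the 5 s red window, and Equations (1)–(3) come from the text) evaluates whether a lane, and then a movement, split fails from HR occupancy summaries.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LaneCycle:
    """Occupancy summary of one stop bar detector (lane) for one cycle."""
    occ_green_s: float   # O_g: detector occupancy time during green (s)
    green_s: float       # g: green interval duration (s)
    occ_red5_s: float    # O_r: detector occupancy time during first 5 s of red (s)
    termination: str     # 'gap-out', 'max-out', or 'force-off'

def lane_split_fails(lane: LaneCycle, threshold: float = 0.80) -> bool:
    """Lane-level criterion: GOR >= 80%, ROR5 >= 80%, and max-out or force-off."""
    gor = lane.occ_green_s / lane.green_s   # Equation (1)
    ror5 = lane.occ_red5_s / 5.0            # Equation (2)
    return (gor >= threshold and ror5 >= threshold
            and lane.termination in ("max-out", "force-off"))

def movement_split_fails(lanes: List[LaneCycle], threshold: float = 0.80) -> bool:
    """Movement-level criterion.

    If exactly one lane split fails, its ratios represent the movement.
    Otherwise, the lane maximizing f = ROR5 + (g/5)*GOR (Equation (3)) is
    selected, and the lane-level criterion is applied to the selected ratios.
    """
    failing = [ln for ln in lanes if lane_split_fails(ln, threshold)]
    if len(failing) == 1:
        chosen = failing[0]
    else:
        chosen = max(lanes, key=lambda ln: ln.occ_red5_s / 5.0
                     + (ln.green_s / 5.0) * (ln.occ_green_s / ln.green_s))
    return lane_split_fails(chosen, threshold)
```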

2.2. Connected Vehicle Trajectories

CV trajectory data, with an estimated penetration rate of 4.5% [39], were obtained from a third-party vendor. The dataset consisted of a set of waypoints for entire (i.e., from on to off) vehicle trips. The data had a reporting interval of 3 s and a spatial accuracy of 3 m (~10 ft.). Every waypoint contained the following information: GPS location, timestamp, speed, heading, and an anonymous unique trajectory identifier. By linking individual waypoints with the same trajectory identifier and sorting them by timestamp, a complete chronological vehicle journey can be obtained. The dataset did not include any information from the signal controller or roadside units, such as signal phase and timing (SPaT) or map data (MAP) messages [40,41].
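As a brief illustration of how a chronological journey can be assembled from the raw records, the sketch below (hypothetical field names) groups waypoints by their anonymous trajectory identifier and sorts each group by timestamp.

```python
from collections import defaultdict

def build_trajectories(waypoints):
    """Group CV waypoints into chronological journeys.

    Each waypoint is assumed to be a dict with at least 'trajectory_id',
    'timestamp', 'lat', 'lon', 'speed', and 'heading' fields.
    """
    grouped = defaultdict(list)
    for wp in waypoints:
        grouped[wp["trajectory_id"]].append(wp)
    # Sorting each group by timestamp recovers the complete vehicle journey.
    return {tid: sorted(wps, key=lambda w: w["timestamp"])
            for tid, wps in grouped.items()}
```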
Further details on the CV dataset, including a discussion of acquisition, storage, data access, best practices, and costs, are presented in [7].

Split Failure Identification

A vehicle trajectory was categorized as having experienced a split failure if it stopped two times or more during its approach to an intersection [7]. The first stop corresponds to the vehicle's arrival at the back of the queue, and the second and subsequent stops correspond to failed attempts by the intersection to discharge its waiting vehicles.
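A minimal sketch of this per-trajectory criterion is shown below. The two-or-more-stops rule is from [7]; the speed threshold used to detect a stop and all function and field names are illustrative assumptions.

```python
def count_stops(trajectory, stop_speed_mph=3.0):
    """Count discrete stops along a trajectory approaching the intersection.

    A stop is counted when the reported speed drops below stop_speed_mph after
    having been above it (the threshold value is an illustrative assumption).
    """
    stops, stopped = 0, False
    for wp in trajectory:
        if wp["speed"] < stop_speed_mph:
            if not stopped:
                stops += 1
                stopped = True
        else:
            stopped = False
    return stops

def trajectory_split_fails(trajectory):
    """CV criterion from [7]: two or more stops indicate a split failure."""
    return count_stops(trajectory) >= 2
```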
Since this CV-based approach identifies whether individual trajectory samples experience a split failure, performance results are usually provided as the percentage of sampled vehicles that experience a split failure over a period of time [7]. However, the HR technique identifies whether a cycle split fails for a lane or a movement. To enable a comparison of split failure estimations between data sources, a technique to identify whether a cycle split fails from CV data is provided and discussed in the Methodology section.
CV-based performance analysis must be performed at the movement (i.e., lane group) level because the current spatial accuracy of the data is not high enough to distinguish between lanes that execute the same movement. Therefore, in this study, all HR-based split failure estimations were also performed at the movement level.

3. Methodology

This section presents the techniques used to compare cycle split failure estimations from HR and CV data by evaluating the signalized intersection shown in Figure 1.
At this location, all movements operate under protected-only timing. The position of the stop bar detection zones is presented in Figure 1a. The westbound-through (WBT), westbound-left (WBL), northbound-left (NBL), eastbound-through (EBT), eastbound-left (EBL), and southbound-left (SBL) movements all had stop bar detection and only the northbound-through (NBT) and southbound-through (SBT) movements did not. This intersection was chosen because split failures are known to occur at different movements. For example, callout i in Figure 1b points to a vehicle waiting in the queue during red. Then, callout ii in Figure 1c points to the same vehicle waiting on red again after the green phase failed to discharge the waiting queue, representing the occurrence of a split failure for the WBT movement.

3.1. Cycle Split Failure Identification Agreement

The HR technique uses a cycle-based interval to determine whether a particular movement split fails. In contrast, the CV technique provides the number of sampled vehicles that experience a split failure for a given time period and movement. Therefore, CV sampled trajectories need to be assigned to specific cycles to compare the estimations from both datasets. The technique used to assign trajectories to cycles is discussed later in this section. Once trajectories have been assigned, the level of split failure agreement or disagreement can be assessed.
Following the methodology presented by Gayen et al. in [37], each movement signal cycle was assigned to one of four categories depending on whether the HR and CV techniques identify the occurrence of a split failure. In the categorization system, true (i.e., a split failure is identified) and false (i.e., a split failure is not identified) were denoted with “T” and “F” after “HR” and “CV” to indicate whether a split failure had been estimated by the respective data source. The four categories were:
  • HRT-CVT: the split failure criteria were met for both techniques. This is an instance of agreement.
  • HRF-CVF: the split failure criteria were not met for either technique. This is an instance of agreement.
  • HRT-CVF: the split failure criterion was met for the HR technique, but not for the CV approach. This is an instance of disagreement.
  • HRF-CVT: the split failure criterion was not met for the HR technique, but it was met for the CV approach. This is an instance of disagreement.
If a cycle did not have a sampled trajectory or stop bar presence detection data, that cycle was excluded. The four categories were tabulated to analyze the split failure estimations from both datasets.
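The tabulation can be sketched as follows (hypothetical field names); cycles missing either data source are skipped, as described above.

```python
from collections import Counter

def agreement_matrix(cycles):
    """Tabulate HR/CV split failure agreement categories.

    Each cycle is assumed to be a dict with:
      'hr_sf': True/False, or None if no stop bar presence detection data
      'cv_sf': True/False, or None if no sampled trajectory was assigned
    Cycles missing either data source are excluded, as described above.
    """
    counts = Counter()
    for c in cycles:
        if c["hr_sf"] is None or c["cv_sf"] is None:
            continue
        key = ("HRT" if c["hr_sf"] else "HRF") + "-" + ("CVT" if c["cv_sf"] else "CVF")
        counts[key] += 1
    return counts
```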
The rest of this subsection provides a comparison of split failure estimations for the intersection in Figure 1, first at the movement level for a 15-min period, then at the intersection level for the same period, and finally at the intersection level by time-of-day (TOD).

3.1.1. Agreement at the Movement Level

Figure 2 shows split failure estimations from each dataset for seven cycles (c1 to c7) that occurred within a 15-min interval. Figure 2a provides an HR-based ROR5 versus GOR graph with red lines at the 80% thresholds [9]. Every marker represents a signal cycle. Any cycle that lies within the top-right quadrant and had a force-off or max-out termination status was categorized as split failing. In this case, cycles c2, c5, and c7 were estimated to have split failed. This classification is sensitive to the GOR and ROR5 thresholds, as c1 and c6 also lie close to the top-right quadrant.
Figure 2b shows a TOD Purdue Probe Diagram (PPD) [7], which is a time–space diagram where sampled vehicle trajectories are plotted in reference to their distance to the far side of the intersection and are color-coded based on their number of stops. The phase output is also shown (callout i), and cycle divisions are indicated with vertical black lines (callout ii). The cycles start and end at the beginning-of-green (BOG) because the HR split failure criterion first evaluates GOR and then ROR5. The sampled vehicle trajectories were assigned to cycles as follows:
  • If the vehicle stops, it is assigned to the cycle in which it last stopped. This is because a stop represents an arrival at the back of the queue; if that queue carried over from the previous cycle, the leftover vehicles would affect the ROR5 value that may trigger a split failure identification in the cycle of the last stop. For example, callout iii points to the time when a once-stopping trajectory stops; since this occurs within c5, it is assigned to c5. Callout iv points to the time when a twice-stopping trajectory last stops; since this occurs within c2, it is assigned to c2.
  • If the vehicle does not stop, it is assigned to the ongoing cycle when it enters the stop bar detection zone.
It is important to acknowledge that there are several ways in which CV trajectories can be assigned to signal cycles. All trajectories, regardless of their number of stops, could be assigned to the ongoing cycle once they enter the detection zone [37]. Another approach, which maximizes the number of cycles with allocated trajectories, would be to perform a cycle assignment every time a trajectory stops after the first time. The assignment approach used in this paper provides a simple, conservative technique that aims to match split failure estimations from both datasets.
If a cycle is assigned at least one trajectory that stops more than once, then that cycle is categorized as split failing by the CV technique. In this case, cycles c2 and c7 were estimated to have split failed. This method does not rely on preset thresholds as it only depends on the experience of each individual vehicle that approaches the intersection.
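The assignment rule and the resulting cycle-level CV classification can be sketched as follows, assuming cycles are bounded by beginning-of-green timestamps as described above; the stop-detection speed threshold and all names are illustrative assumptions.

```python
import bisect

def stop_times(trajectory, stop_speed_mph=3.0):
    """Timestamps at which the vehicle comes to a stop (same illustrative stop
    definition as in the earlier per-trajectory sketch)."""
    times, stopped = [], False
    for wp in trajectory:
        if wp["speed"] < stop_speed_mph:
            if not stopped:
                times.append(wp["timestamp"])
                stopped = True
        else:
            stopped = False
    return times

def assign_trajectory_to_cycle(trajectory, cycle_bog_times, detection_entry_time):
    """Assign a sampled trajectory to a signal cycle.

    cycle_bog_times: sorted beginning-of-green timestamps; cycle k spans
    [cycle_bog_times[k], cycle_bog_times[k + 1]).
    detection_entry_time: time the vehicle enters the stop bar detection zone
    (used only when the vehicle never stops).
    Returns the index of the assigned cycle, or None if outside the window.
    """
    stops = stop_times(trajectory)
    anchor = stops[-1] if stops else detection_entry_time  # last stop, else zone entry
    k = bisect.bisect_right(cycle_bog_times, anchor) - 1
    return k if 0 <= k < len(cycle_bog_times) - 1 else None

def cv_cycle_split_fails(assigned_trajectories):
    """A cycle is categorized as split failing by the CV technique if any
    trajectory assigned to it stopped two or more times."""
    return any(len(stop_times(t)) >= 2 for t in assigned_trajectories)
```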
Table 1 shows the corresponding agreement matrix for the cycles analyzed in Figure 2. Two of the seven cycles (c1 and c4) did not contain any CV trajectories, leaving five cycles for comparison. Out of these five cycles, two were categorized as HRT-CVT, two were categorized as HRF-CVF, and one was categorized as HRT-CVF. The HR data indicate that 60% of cycles split failed and the CV data indicate that 40% of cycles split failed.
In general, the HRF-CVT disagreement category is expected to occur less often than HRT-CVF, since a vehicle stopping twice is a good indication of congestion that would usually lead to high GOR and ROR5 values. However, HRF-CVT may still occur, especially if vehicles at the front of the queue run the red light or turn right on red, or if the GOR and ROR5 thresholds are inadequate for the stop bar detection zone [38].

3.1.2. Agreement at the Intersection Level

Figure 3 shows split failure evaluations from each dataset for all relevant movements during the same analysis period as Figure 2. Figure 3a provides ROR5 versus GOR graphs. Since the SBT and NBT movements do not have stop bar detection (Figure 1a), no GOR and ROR5 calculations were possible. This represents a significant limitation because the coordinated through movements, which usually serve the largest demand at the intersection, often do not have stop bar detection. If these movements are split failing, the discussed HR-based technique cannot detect it.
Figure 3b shows TOD PPDs. No trajectories were sampled for the EBL and SBL movements. This is likely to occur for movements with low volumes since the independent probability of a vehicle being connected and providing its location is the market penetration rate (MPR) of ~4.5%. Ideally, samples for all movements would be available, but if a movement with low demand does not have sampled trajectories, it is likely that the particular movement does not suffer from congestion. In contrast, the coordinated NBT movement showed various trajectories that experienced split failures (Figure 3b), information that was not available from the HR analysis (Figure 3a).
Table 2 shows the agreement matrix for all the cycles analyzed in Figure 3. In total, 12 cycles had both presence detection and trajectory data. Out of these 12 cycles, 10 (83%) agreed and 2 (17%) disagreed. Both HR and CV data indicated that 33% of the cycles split failed.

3.1.3. Agreement at the Intersection Level by Time-of-Day

It is of interest to provide graphical tools that allow at-a-glance identification of congestion challenges. Figure 3 provides a detailed comparison of split failure estimations for all relevant movements during a 15-min period. However, the same visualization cannot be used to evaluate an entire day because the TOD information of when each cycle occurs would be lost. For this reason, the ratio of cycles within each 15-min period that were identified as split failing for each movement is provided as a heatmap. The cycle split failure ratio of movement m (sf_m) that has n_m cycles in a 15-min period is calculated as:
$$sf_m = \frac{1}{n_m} \sum_{j=1}^{n_m} \varphi\!\left(c_j^m\right) \quad (4)$$
where φ is an indicator function that denotes whether the j-th cycle of movement m (c_j^m) is identified as split failing. That is:
$$\varphi\!\left(c_j^m\right) = \begin{cases} 0, & \text{if cycle } j \text{ of movement } m \text{ does not split fail} \\ 1, & \text{if cycle } j \text{ of movement } m \text{ split fails} \end{cases} \quad (5)$$
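A direct translation of Equations (4) and (5) might look like the following sketch, where each cycle's indicator value is simply a boolean flag.

```python
def movement_split_failure_ratio(cycle_flags):
    """Equations (4) and (5): fraction of a movement's cycles within a 15-min
    period identified as split failing.

    cycle_flags: iterable of booleans, one per cycle of movement m, where True
    corresponds to the indicator function evaluating to 1.
    """
    flags = list(cycle_flags)
    return sum(flags) / len(flags) if flags else None
```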
Figure 4 shows the sf_m estimations from each dataset for all relevant movements from 06:00 to 22:00 hrs. on 24 May 2023. Figure 4a provides the results from HR data, where callout i points to the same results as Figure 3a. Figure 4b shows the results from CV data, where callout ii points to the same results as Figure 3b. Table 3 provides the agreement matrix for all cycles with both presence detection and trajectory data analyzed in Figure 4.
A single day of analysis (Figure 4) may not be enough to reliably identify patterns. Using Equations (4) and (5) to evaluate the same TOD 15-min period over several days, a more robust comparison can be accomplished. Figure 5 provides the sf_m estimations from each dataset from 06:00 to 22:00 hrs. from 22 May 2023 to 26 May 2023. A qualitative comparison shows how HR sf_m estimations are usually higher than those obtained from CV data. This is particularly clear when comparing the TOD periods indicated by callout i. It is also noteworthy how the significant sf_m values occurring on the NBT movement (callout ii) were missed by the HR-based technique due to the lack of stop bar presence detectors.
Table 4 shows the agreement matrix for all the cycles analyzed in Figure 5. In total, 2348 cycles had both presence detection and trajectory data. Out of these 2348 cycles, 32 (1%) were categorized as HRT-CVT and 2154 (92%) as HRF-CVF. Further, 162 cycles (7%) disagreed. The HR data indicated that 6% of cycles split failed and the CV data indicated that 3% of cycles split failed.
The Results section extends the analysis to over 40 intersections and provides further insights into split failure estimations.

4. Results

This section presents the results of a scaled cycle-by-cycle comparison of split failure estimations. Additionally, the effect that the number of CV trajectories sampled per cycle has on split failure identification agreement was evaluated.

4.1. Study Locations and Analysis Period

HR- and CV-based cycle split failure estimations were calculated for all through movements that had stop bar presence detection at 42 signalized intersections in central Indiana from 06:00 to 22:00 hrs. from 22 May 2023 to 26 May 2023 (Figure 6). All intersections were managed by INDOT and operated under various conditions (i.e., different volumes, geometries, etc.). Both HR and CV data were available at these locations.

4.2. Scaled Split Failure Identification Agreement Evaluation

It is difficult to visualize the ratio of cycles that split failed by movement in every 15-min period, such as in Figure 4 and Figure 5, for 42 intersections at once. Instead, the ratio of cycles that split failed across all analyzed movements of an intersection during each 15-min period is provided. From Equations (4) and (5), the cycle split failure ratio of intersection i (sf_i) that has n_i analyzed movements is calculated as:
$$sf_i = \left(\sum_{m=1}^{n_i} n_m\right)^{-1} \sum_{m=1}^{n_i} \sum_{j=1}^{n_m} \varphi\!\left(c_j^m\right) \quad (6)$$
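Equation (6) simply pools the indicator values across all analyzed movements of the intersection, as in the following sketch (hypothetical names), which reuses the boolean-flag convention of the movement-level sketch above.

```python
def intersection_split_failure_ratio(movement_cycle_flags):
    """Equation (6): fraction of all analyzed cycles at an intersection,
    pooled across its analyzed movements, identified as split failing.

    movement_cycle_flags: dict mapping each analyzed movement to a list of
    booleans (one per cycle in the 15-min period), as in the movement-level
    sketch above.
    """
    total = sum(len(flags) for flags in movement_cycle_flags.values())
    failing = sum(sum(flags) for flags in movement_cycle_flags.values())
    return failing / total if total else None
```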
Figure 7 and Figure 8 show the sf_i estimations from the HR and CV datasets, respectively, for the evaluated intersections (Figure 6) during the analysis period. A qualitative comparison shows how HR sf_i values (Figure 7) are usually higher than those obtained from CV data (Figure 8), likely due to some signals serving large traffic volumes, resulting in high GORs and ROR5s, without split failing. This is particularly clear when comparing the TOD periods highlighted by callout i. It is important to note that there were no CV sf_i estimations for some intersections because they only had stop bar presence detection on their side streets, where no trajectory samples were available.
The TOD period of signal 5394 highlighted by callout ii is of particular interest because the HR technique (Figure 7) indicated saturated conditions while the CV technique (Figure 8) did not display the occurrence of split failures. A deeper analysis of this case for only one day of data is presented in Figure 9. Figure 9a shows ROR5 versus GOR graphs for the evaluated movements. Figure 9b provides TOD PPDs. From the TOD PPDs, it appears that the intersection was only approaching saturation, since only 5 out of the 451 sampled trajectories experienced split failures. Under these conditions, GOR and ROR5 values were high regardless of whether the signal provided enough green time to serve waiting vehicles. For this reason, the HR technique significantly overestimated the number of cycles that split failed.
Table 5 shows the aggregated agreement matrix for all the cycles analyzed in Figure 7 and Figure 8. In total, 35,218 cycles had both presence detection and trajectory data. Only 351 cycles (1%) were identified as HRT-CVT. As expected, there were more cycles identified as HRT-CVF (1202, 3%) than HRF-CVT (813, 2%) since once a trajectory was identified as having stopped twice it was likely that GOR and ROR5 values were high. The overall split failure estimations were similar, with the HR technique estimating that 4% of the cycles split failed while the CV data indicated that 3% of the cycles split failed.
Another useful agreement metric is the percentage (P) of cycles identified as split failing by one technique that were also identified as split failing by the other technique. These values were calculated as:
$$P(\mathrm{CVT} \mid \mathrm{HRT}) = 100 \times \frac{\mathrm{HRT\_CVT}}{\mathrm{HRT\_CVT} + \mathrm{HRT\_CVF}} \quad (7)$$
$$P(\mathrm{HRT} \mid \mathrm{CVT}) = 100 \times \frac{\mathrm{HRT\_CVT}}{\mathrm{HRT\_CVT} + \mathrm{HRF\_CVT}} \quad (8)$$
where P(CVT | HRT) denotes the percentage of cycles identified as split failing with the CV technique given that the HR technique identified them as split failing, and P(HRT | CVT) denotes the percentage of cycles identified as split failing with the HR technique given that the CV technique identified them as split failing. In Equations (7) and (8), the dashes in the split failure identification categories (as defined in Section 3.1) are replaced with underscores to avoid confusion with minus signs.
From Equations (7) and (8), and Table 5, the P(CVT | HRT) and P(HRT | CVT) values for the analyzed intersections and time period were 23% and 30%, respectively. This indicates that approximately one out of every four cycles identified as split failing with HR data was also identified with CV data, and one out of every three cycles identified as split failing with CV data was also identified with HR data.
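Equations (7) and (8) can be computed directly from the agreement counts, as in the following sketch (hypothetical names, reusing the category keys from the earlier tabulation sketch).

```python
def conditional_agreement(counts):
    """Equations (7) and (8) applied to an agreement matrix such as the output
    of the agreement_matrix sketch above.

    Returns (P(CVT | HRT), P(HRT | CVT)) as percentages, or None when the
    corresponding denominator is zero.
    """
    hrt_cvt = counts.get("HRT-CVT", 0)
    hrt_cvf = counts.get("HRT-CVF", 0)
    hrf_cvt = counts.get("HRF-CVT", 0)
    p_cvt_given_hrt = 100 * hrt_cvt / (hrt_cvt + hrt_cvf) if hrt_cvt + hrt_cvf else None
    p_hrt_given_cvt = 100 * hrt_cvt / (hrt_cvt + hrf_cvt) if hrt_cvt + hrf_cvt else None
    return p_cvt_given_hrt, p_hrt_given_cvt
```

Applied to the Table 5 counts (HRT-CVT = 351, HRT-CVF = 1202, HRF-CVT = 813), this sketch yields approximately 23% and 30%, consistent with the values reported above.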

4.3. Effects of Sampled Trajectories by Cycle

The effect that the number of sampled trajectories assigned to each cycle has on split failure identification agreement is shown in Figure 10. Agreement results for cycles that had only one sampled trajectory (n: 28,418) are provided in Figure 10a, for cycles with two sampled trajectories (n: 5383) in Figure 10b, for cycles with three sampled trajectories (n: 1130) in Figure 10c, and for cycles with four sampled trajectories (n: 228) in Figure 10d.
The category with the largest difference between groups was HRF-CVF, which decreased from representing 93.9% of all cycles in Figure 10a to 88.6% in Figure 10d. This is expected, as a higher number of trajectory samples per cycle would be obtained at cycles with higher volumes.
The category with the second largest difference between groups was HRF-CVT, which increased from representing 2.0% of all cycles in Figure 10a to 5.7% in Figure 10d. This change is likely to occur in scenarios where a movement split fails intermittently; such events are not captured by the HR data but are identified when enough CV trajectories are sampled.
The category with the smallest difference between groups was HRT-CVT. Contrary to what initial intuition might suggest, this category did not significantly increase across groups. This is because the MPR of the CV trajectory data was still ~4.5% for every group. Moreover, a cycle identified as split failing with both datasets provides certainty on the operational conditions. If an intersection movement is congested, it is likely that any sampled trajectory would experience a split failure, and increasing the number of samples per cycle would not change the ratio of cycles identified as split failing.
Table 6 shows a disaggregated tabulation of split failure identification by dataset. As the number of trajectories sampled per cycle increased, the proportion of split failures identified (CVT and HRT) also increased, and the proportion of cycles not identified as split failing (CVF and HRF) decreased. This is because, as with the changes in the proportion of HRF-CVF in Figure 10, higher volumes are likely to provide more trajectory samples per cycle.

5. Discussion

From the scaled comparison of split failure estimations shown in Figure 7 and Figure 8, it can be stated that the HR technique can overestimate split failures. If an intersection constantly serves traffic, the HR technique may indicate the occurrence of split failures even if queues are effectively discharged. This is more likely if platoons are clipped at the end-of-green (EOG) due to late arrivals or because of vehicles coming from the side streets. Accuracy improvements for HR split failure estimations can be accomplished by modifying the GOR and ROR5 thresholds according to the arrival pattern and each stop bar detection zone [38]. Another key consideration is that coordinated through movements, which usually serve the highest demands at an intersection, sometimes lack stop bar presence detection, making it impossible to estimate split failures for those movements with HR data. This is usually not a limitation when using the CV technique.
It is possible that a cycle that split fails is identified with HR data but not from CV data due to the current low and varying CV MPR. However, availability of sampled vehicle trajectories is expected for most cases where congestion leads to split failures, especially when evaluating several days at a time. It is more probable that side streets with low traffic volumes lack sampled trajectories, such as the case presented in Figure 3, which is not a major limitation when assessing congestion.
Since the MPR of HR data is virtually 100% where available, and the managing agency usually owns the data collection, communication pipelines, and storage, the HR technique is believed to be the best approach to perform cycle-by-cycle signal control. However, as the CV technique bases its split failure estimations on the complete experience of sampled traversing vehicles, needs no preset thresholds, and usually has data for major movements, it provides the most benefits when identifying intersections for retiming, maintenance, and upgrade activities.
A limitation of the study is that each sampled trajectory was only assigned to one traffic signal cycle. Future research will focus on the development of techniques to extract congestion estimations for as many cycles as possible from the same CV trajectory. This would reduce the negative effects of low CV MPRs and would provide more insights on the different queue dynamics when congestion does not allow for complete discharges, which is an inherent characteristic of split failures that is hard to capture with HR data.
Another limitation of the study is that event data were solely extracted from inductive loop detectors. HR split failure estimations derived from other types of detectors, such as radar, should also be compared to those derived from CV data.

6. Conclusions

This study provided a scaled comparison of cycle split failure estimations from HR and CV data. Over 35,000 cycles were evaluated across 42 intersections in central Indiana. The following results were obtained:
  • Overall split failure estimations were similar, with the HR and CV techniques identifying 4% and 3% of cycles as split failing, respectively.
  • Approximately one out of every four cycles identified as split failing with the HR data was also identified as split failing with the CV data.
  • Approximately one out of every three cycles identified as split failing with the CV data was also identified as split failing with the HR data.
The main reason for the discrepancies was the preset occupancy ratio thresholds used by the HR technique, which did not provide optimal results for every stop bar detection zone or for all traffic conditions. Since the identification of split failing cycles using CV data produced more conservative results based on the entire experience of traversing vehicles, if data are available, its use is recommended for the identification of locations that require retiming, maintenance, or upgrade activities. On the other hand, because of the virtually complete MPR of HR data, its use is recommended for cycle-by-cycle signal control.

Author Contributions

Conceptualization, E.D.S.-C., S.G., H.L. and D.M.B.; methodology, E.D.S.-C., S.G., H.L. and D.M.B.; software, E.D.S.-C. and H.L.; validation, E.D.S.-C.; formal analysis, E.D.S.-C.; investigation, E.D.S.-C., S.G., H.L. and D.M.B.; resources, H.L. and D.M.B.; writing—original draft preparation, E.D.S.-C.; writing—review and editing, S.G., H.L. and D.M.B.; visualization, E.D.S.-C.; supervision, D.M.B.; project administration, D.M.B.; funding acquisition, D.M.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Joint Transportation Research Program and Pooled Fund Study (TPF-5(519)) led by the Indiana Department of Transportation (INDOT) and supported by the state transportation agencies of California, Connecticut, Georgia, Minnesota, Mississippi, North Carolina, Ohio, Pennsylvania, Texas, and Utah, and the Federal Highway Administration (FHWA) Operations Technical Services Team. The contents of this paper reflect the views of the authors, who are responsible for the facts and the accuracy of the data presented herein, and do not necessarily reflect the official views or policies of the sponsoring organizations. These contents do not constitute a standard, specification, or regulation.

Data Availability Statement

The aggregated datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

From 22 May 2023 to 26 May 2023, high-resolution traffic signal controller event data and connected vehicle trajectory data used in this study were provided by the Indiana Department of Transportation (INDOT) and Wejo Data Services, Inc., respectively. Map data are copyright of OpenStreetMap contributors and are available from https://www.openstreetmap.org (accessed on 30 November 2023).

Conflicts of Interest

H.L. is a co-founder of LSM Analytics LLC, a company that aims to further develop and commercialize some of the technologies/methods described in this manuscript.

References

  1. ITE; NOCoE. 2019 Traffic Signal Benchmarking and State of the Practice Report. 2020. Available online: https://transportationops.org/trafficsignals/benchmarkingreport (accessed on 30 November 2023).
  2. National Academies of Sciences, Engineering, and Medicine. Performance-Based Management of Traffic Signals; The National Academies Press: Washington, DC, USA, 2020. [Google Scholar] [CrossRef]
  3. Sunkari, S. The Benefits of Retiming Traffic Signals. Inst. Transp. Engineers. ITE J. 2004, 74, 26–29. [Google Scholar]
  4. National Transportation Operations Coalition. 2012 National Traffic Signal Report Card. 2012. Available online: https://transportationops.org/publications/2012-national-traffic-signal-report-card (accessed on 30 November 2023).
  5. Koonce, P.; Rodegerdts, L. Traffic Signal Timing Manual; Federal Highway Administration: Washington, DC, USA, 2008.
  6. Day, C.M.; Bullock, D.M.; Li, H.; Remias, S.M.; Hainen, A.M.; Freije, R.S.; Stevens, A.L.; Sturdevant, J.R.; Brennan, T.M. Performance Measures for Traffic Signal Systems: An Outcome-Oriented Approach; Purdue University: West Lafayette, IN, USA, 2014. [Google Scholar] [CrossRef]
  7. Saldivar-Carranza, E.D.; Li, H.; Mathew, J.K.; Desai, J.; Platte, T.; Gayen, S.; Sturdevant, J.; Taylor, M.; Fisher, C.; Bullock, D.M. Next Generation Traffic Signal Performance Measures: Leveraging Connected Vehicle Data; Purdue University: West Lafayette, IN, USA, 2023. [Google Scholar] [CrossRef]
  8. Leitner, D.; Meleby, P.; Miao, L. Recent advances in traffic signal performance evaluation. J. Traffic Transp. Eng. Engl. Ed. 2022, 9, 507–531. [Google Scholar] [CrossRef]
  9. Freije, R.S.; Hainen, A.M.; Stevens, A.L.; Li, H.; Smith, W.B.; Summers, H.; Day, C.M.; Sturdevant, J.R.; Bullock, D.M. Graphical Performance Measures for Practitioners to Triage Split Failure Trouble Calls. Transp. Res. Rec. J. Transp. Res. Board 2014, 2439, 27–40. [Google Scholar] [CrossRef]
  10. FHWA. Every Day Counts: An Innovation Partnership with States; FHWA: Washington, DC, USA, 2019.
  11. Lattimer, C. Automated Traffic Signals Performance Measures. 2020. Available online: https://ops.fhwa.dot.gov/publications/fhwahop20002/fhwahop20002.pdf (accessed on 24 October 2022).
  12. Liu, H.X.; Ma, W.; Hu, H.; Wu, X.; Yu, G. SMART-SIGNAL: Systematic Monitoring of Arterial Road Traffic Signals. In Proceedings of the 2008 11th International IEEE Conference on Intelligent Transportation Systems, Beijing, China, 12–15 October 2008; pp. 1061–1066. [Google Scholar] [CrossRef]
  13. Liu, H.X.; Wu, X.; Ma, W.; Hu, H. Real-time queue length estimation for congested signalized intersections. Transp. Res. Part C Emerg. Technol. 2009, 17, 412–427. [Google Scholar] [CrossRef]
  14. Wu, X.; Liu, H.X. Using high-resolution event-based data for traffic modeling and control: An overview. Transp. Res. Part C Emerg. Technol. 2014, 42, 28–43. [Google Scholar] [CrossRef]
  15. Liu, H.X.; Ma, W. A virtual vehicle probe model for time-dependent travel time estimation on signalized arterials. Transp. Res. Part C Emerg. Technol. 2009, 17, 11–26. [Google Scholar] [CrossRef]
  16. Vigos, G.; Papageorgiou, M.; Wang, Y. Real-time estimation of vehicle-count within signalized links. Transp. Res. Part C Emerg. Technol. 2008, 16, 18–35. [Google Scholar] [CrossRef]
  17. Schultz, G.G.; Macfarlane, G.S.; Wang, B.; McCuen, S. Evaluating the Quality of Signal Operations Using Signal Performance Measures. 2020. Available online: https://rosap.ntl.bts.gov/view/dot/54639/dot_54639_DS1.pdf (accessed on 5 December 2023).
  18. Denney, R.W.; Head, L.; Spencer, K. Signal Timing under Saturated Conditions; Federal Highway Administration: Washington, DC, USA, 2008.
  19. Mahajan, D.; Banerjee, T.; Rangarajan, A.; Agarwal, N.; Dilmore, J.; Posadas, E.; Ranka, S. Analyzing Traffic Signal Performance Measures to Automatically Classify Signalized Intersections. In Proceedings of the VEHITS 2019—5th International Conference on Vehicle Technology and Intelligent Transport Systems; SciTePress: Heraklion, Crete, Greece, 2019; pp. 138–147. [Google Scholar] [CrossRef]
  20. Wang, B.; Schultz, G.G.; Macfarlane, G.S.; McCuen, S. Evaluating Signal Systems Using Automated Traffic Signal Performance Measures. Future Transp. 2022, 2, 659–674. [Google Scholar] [CrossRef]
  21. Wu, X.; Liu, H.X.; Gettman, D. Identification of oversaturated intersections using high-resolution traffic signal data. Transp. Res. Part C Emerg. Technol. 2010, 18, 626–638. [Google Scholar] [CrossRef]
  22. Waddell, J.M.; Remias, S.M.; Kirsch, J.N. Characterizing Traffic-Signal Performance and Corridor Reliability Using Crowd-Sourced Probe Vehicle Trajectories. J. Transp. Eng. Part A Syst. 2020, 146, 04020053. [Google Scholar] [CrossRef]
  23. Zhao, Y.; Zheng, J.; Wong, W.; Wang, X.; Meng, Y.; Liu, H.X. Estimation of Queue Lengths, Probe Vehicle Penetration Rates, and Traffic Volumes at Signalized Intersections using Probe Vehicle Trajectories. Transp. Res. Rec. J. Transp. Res. Board 2019, 2673, 660–670. [Google Scholar] [CrossRef]
  24. Khadka, S.; Li, P.T.; Wang, Q. Developing Novel Performance Measures for Traffic Congestion Management and Operational Planning Based on Connected Vehicle Data. J. Urban Plan. Dev. 2022, 148, 04022016. [Google Scholar] [CrossRef]
  25. Mahmud, S.; Day, C.M. Evaluation of Arterial Signal Coordination with Commercial Connected Vehicle Data: Empirical Traffic Flow Visualization and Performance Measurement. J. Transp. Technol. 2023, 13, 327–352. [Google Scholar] [CrossRef]
  26. Argote, J.; Christofa, E.; Xuan, Y.; Skabardonis, A. Estimation of measures of effectiveness based on Connected Vehicle data. In Proceedings of the 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 1767–1772. [Google Scholar] [CrossRef]
  27. Wolf, J.C.; Ma, J.; Cisco, B.; Neill, J.; Moen, B.; Jarecki, C. Deriving Signal Performance Metrics from Large-Scale Connected Vehicle System Deployment. Transp. Res. Rec. J. Transp. Res. Board 2019, 2673, 36–46. [Google Scholar] [CrossRef]
  28. Waddell, J.M.; Remias, S.M.; Kirsch, J.N.; Young, S.E. Scalable and Actionable Performance Measures for Traffic Signal Systems using Probe Vehicle Trajectory Data. Transp. Res. Rec. J. Transp. Res. Board 2020, 2674, 304–316. [Google Scholar] [CrossRef]
  29. Christofa, E.; Argote, J.; Skabardonis, A. Arterial Queue Spillback Detection and Signal Control Based on Connected Vehicle Technology. Transp. Res. Rec. J. Transp. Res. Board 2013, 2366, 61–70. [Google Scholar] [CrossRef]
  30. Fourati, W.; Friedrich, B. Trajectory-Based Measurement of Signalized Intersection Capacity. Transp. Res. Rec. J. Transp. Res. Board 2019, 2673, 370–380. [Google Scholar] [CrossRef]
  31. Feng, Y.; Head, K.L.; Khoshmagham, S.; Zamanipour, M. A real-time adaptive signal control in a connected vehicle environment. Transp. Res. Part C Emerg. Technol. 2015, 55, 460–473. [Google Scholar] [CrossRef]
  32. Patire, A.D.; Wright, M.; Prodhomme, B.; Bayen, A.M. How much GPS data do we need? Transp. Res. Part C Emerg. Technol. 2015, 58, 325–342. [Google Scholar] [CrossRef]
  33. Wang, A. Measuring the Quality of Arterial Traffic Signal Timing—A Trajectory-Based Methodology; University of Nevada: Reno, NV, USA, 2020. [Google Scholar]
  34. INRIX. INRIX Documentation: Metrics. INRIX IQ Documentation. Available online: https://docs.inrix.com/signals/metrics/ (accessed on 5 December 2023).
  35. Remias, S.M.; Day, C.M.; Waddel, J.M.; Kirsch, J.N.; Trepanier, T. Evaluating the Performance of Coordinated Signal Timing: Comparison of Common Data Types with Automated Vehicle Location Data. Transp. Res. Rec. J. Transp. Res. Board 2018, 2672, 128–142. [Google Scholar] [CrossRef]
  36. Saldivar-Carranza, E.; Li, H.; Gayen, S.; Taylor, M.; Sturdevant, J.; Bullock, D. Comparison of Arrivals on Green Estimations from Vehicle Detection and Connected Vehicle Data. Transp. Res. Rec. J. Transp. Res. Board 2023, 2677, 328–342. [Google Scholar] [CrossRef]
  37. Gayen, S.; Saldivar-Carranza, E.D.; Bullock, D.M. Comparison of Estimated Cycle Split Failures from High-Resolution Controller Event and Connected Vehicle Trajectory Data. J. Transp. Technol. 2023, 13, 689–707. [Google Scholar] [CrossRef]
  38. Emtenan, A.M.T.; Day, C.M. Impact of Detector Configuration on Performance Measurement and Signal Operations. Transp. Res. Rec. J. Transp. Res. Board 2020, 2674, 300–313. [Google Scholar] [CrossRef]
  39. Sakhare, R.S.; Hunter, M.; Mukai, J.; Li, H.; Bullock, D.M. Truck and Passenger Car Connected Vehicle Penetration on Indiana Roadways. J. Transp. Technol. 2022, 12, 578–599. [Google Scholar] [CrossRef]
  40. SAE International. J2735D: Dedicated Short Range Communications (DSRC) Message Set Dictionary™; SAE International: Warrendale, PA, USA, 2016. [Google Scholar]
  41. Abernethy, B.; Andrews, S.; Pruitt, G. Signal Phase and Timing (SPaT) Applications, Communications Requirements, Communications Technology Potential Solutions, Issues and Recommendations; Federal Highway Administration: McLean, VA, USA, 2012.
Figure 1. Intersection at US-421 and 116th St. detector configuration and split failure occurrence.
Figure 2. HR and CV WBT split failure evaluation from 17:00 to 17:15 hrs. on 24 May 2023.
Figure 3. HR and CV split failure evaluation for relevant movements from 17:00 to 17:15 hrs. on 24 May 2023.
Figure 4. HR and CV split failure evaluation for relevant movements from 06:00 to 22:00 hrs. on 24 May 2023.
Figure 5. HR and CV split failure evaluation for relevant movements from 06:00 to 22:00 hrs. from 22 May 2023 to 26 May 2023.
Figure 6. Analyzed intersections (n: 42) (map data: OpenStreetMap).
Figure 7. Percentage of through movement cycles with presence stop bar detection that split failed according to HR data from 06:00 to 22:00 hrs. from 22 May 2023 to 26 May 2023.
Figure 8. Percentage of through movement cycles with presence stop bar detection that split failed according to CV data from 06:00 to 22:00 hrs. from 22 May 2023 to 26 May 2023.
Figure 9. Split failure evaluation for callout ii in Figure 7 and Figure 8 from 11:00 to 18:00 hrs. on 24 May 2023.
Figure 10. Effects of the number of sampled trajectories per cycle on agreement results.
Table 1. HR and CV agreement matrix for cycles analyzed in Figure 2.

      | HRT                | HRF            | Total
CVT   | 2 (40%) c2, c7     | 0 (0%)         | 2 (40%) c2, c7
CVF   | 1 (20%) c5         | 2 (40%) c3, c6 | 3 (60%) c3, c5, c6
Total | 3 (60%) c2, c5, c7 | 2 (40%) c3, c6 | 5 (100%) c2, c3, c5, c6, c7
Table 2. HR and CV agreement matrix for cycles analyzed in Figure 3.

      | HRT     | HRF     | Total
CVT   | 3 (25%) | 1 (8%)  | 4 (33%)
CVF   | 1 (8%)  | 7 (58%) | 8 (67%)
Total | 4 (33%) | 8 (67%) | 12 (100%)
Table 3. HR and CV agreement matrix for cycles analyzed in Figure 4.

      | HRT     | HRF       | Total
CVT   | 10 (2%) | 17 (3%)   | 27 (5%)
CVF   | 29 (6%) | 465 (89%) | 494 (95%)
Total | 39 (7%) | 482 (93%) | 521 (100%)
Table 4. HR and CV agreement matrix for cycles analyzed in Figure 5.

      | HRT      | HRF        | Total
CVT   | 32 (1%)  | 47 (2%)    | 79 (3%)
CVF   | 115 (5%) | 2154 (92%) | 2269 (97%)
Total | 147 (6%) | 2201 (94%) | 2348 (100%)
Table 5. HR and CV agreement matrix for cycles analyzed in Figure 7 and Figure 8.

      | HRT       | HRF          | Total
CVT   | 351 (1%)  | 813 (2%)     | 1164 (3%)
CVF   | 1202 (3%) | 32,852 (93%) | 34,054 (97%)
Total | 1553 (4%) | 33,665 (96%) | 35,218 (100%)
Table 6. Effects of the number of sampled trajectories on the identification of split failing cycles by technique.

No. of Trajectories Sampled per Cycle | Total Number of Cycles | CVT      | CVF          | HRT       | HRF
1                                     | 28,418 (100%)          | 830 (3%) | 27,588 (97%) | 1172 (4%) | 27,246 (96%)
2                                     | 5383 (100%)            | 248 (5%) | 5135 (95%)   | 295 (5%)  | 5088 (95%)
3                                     | 1130 (100%)            | 65 (6%)  | 1065 (94%)   | 69 (6%)   | 1061 (94%)
4                                     | 228 (100%)             | 14 (6%)  | 214 (94%)    | 13 (6%)   | 215 (94%)
