Article

Vehicle Delay Prediction at Urban Roundabouts: Comparing Historical, Operational, and Demand-Based Features

1 Department of Industrial Engineering, College of Engineering, University of Business and Technology, Jeddah 23435, Saudi Arabia
2 Department of Industrial Engineering and Systems, Zagazig University, Zagazig 44519, Egypt
Vehicles 2026, 8(3), 64; https://doi.org/10.3390/vehicles8030064
Submission received: 9 February 2026 / Revised: 10 March 2026 / Accepted: 17 March 2026 / Published: 18 March 2026

Abstract

Accurate short-term traffic delay prediction is essential for effective intersection management and real-time traffic control. Although deep learning models have shown strong predictive capabilities in traffic forecasting, the influence of input feature configuration on prediction performance remains insufficiently understood. This study investigates how different feature groups affect short-term delay prediction at an urban roundabout using high-resolution, approach-level traffic data collected at one-minute intervals. Five feature scenarios are evaluated, ranging from temporal indicators only (S0) to a comprehensive feature set combining historical delay, operational traffic indicators, demand measurements, and temporal context (S4). Two recurrent neural network architectures, Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM), are examined under two forecasting horizons (1-min and 5-min ahead). To ensure robustness, each configuration is trained through repeated runs and evaluated using statistical significance analysis. Results show that the temporal-only baseline produces the largest prediction errors (MAE ≈ 22.5 s), while scenarios incorporating operational traffic indicators significantly improve prediction accuracy. The full feature configuration (S4) achieves the best performance for the 1-min horizon, reaching MAE values of 17.24 s and 17.22 s for GRU and LSTM, respectively. For the 5-min horizon, prediction errors increase and performance differences between feature scenarios become smaller. Additional experiments across multiple approaches confirm the general consistency of the proposed framework, while hyperparameter sensitivity analysis indicates limited dependence on model capacity. Overall, the findings highlight the importance of operational traffic indicators—particularly queue dynamics and stop patterns—for reliable short-term delay forecasting and provide practical guidance for designing efficient real-time traffic prediction systems.

1. Introduction

Urban traffic congestion has emerged as one of the most pressing challenges facing cities worldwide as rapid urbanization continues to intensify. According to recent projections, approximately 68% of the global population is expected to reside in urban areas by 2050, placing unprecedented strain on existing transportation infrastructure [1]. The resulting traffic delays impose substantial economic costs, reduce quality of life, increase fuel consumption and emissions, and undermine the efficiency of urban transportation systems. Addressing these challenges requires innovative approaches to traffic management that leverage advances in data collection, computational capabilities, and predictive modeling. Improving urban traffic efficiency is also closely aligned with the United Nations Sustainable Development Goal (SDG) 11, which aims to make cities inclusive, safe, resilient, and sustainable. Intelligent traffic management systems that reduce congestion, travel delays, and vehicle emissions contribute directly to more sustainable urban mobility and improved quality of life in rapidly growing cities.
In recent years, the proliferation of connected vehicles, roadside sensors, and floating car data has created new opportunities for real-time traffic monitoring and prediction. These data sources enable the collection of high-resolution traffic measurements at fine temporal and spatial scales, providing detailed insights into traffic conditions at individual intersections. Such granular data are particularly valuable for intersection-level traffic management, where delays are often concentrated and where effective intervention can yield substantial system-wide benefits. Accurate short-term prediction of traffic delay at intersections is therefore essential for supporting adaptive intersection control, dynamic route guidance, and real-time traveler information systems.
The development of deep learning methods has opened new avenues for traffic flow and delay prediction, overcoming many limitations of traditional statistical and shallow machine learning approaches. Recurrent neural networks, particularly Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRU) [2,3], have demonstrated strong performance in modeling sequential traffic data due to their ability to capture long-term temporal dependencies [4,5,6]. These architectures have been successfully applied to various traffic prediction tasks, including traffic flow forecasting [7,8,9,10] and speed prediction [11]. More recently, attention mechanisms and graph-based models have been proposed to further enhance prediction accuracy by explicitly modeling spatial dependencies and dynamic relationships in traffic networks [12,13].
Despite these advances in model architecture, a critical aspect that has received comparatively less attention is the role of input feature selection in shaping prediction performance. Many existing studies focus primarily on developing more complex model architectures, while the role of input feature selection is often treated as a secondary consideration. However, the choice of input features—including which traffic measurements to include, how to engineer derived features, and which temporal lags to consider—can be significantly influential, particularly when data availability or computational resources are constrained. Understanding which types of features contribute most to prediction accuracy can inform more efficient data collection strategies, reduce model complexity, and improve interpretability. This gap is particularly important at intersections, where operational traffic dynamics such as queue formation and stop behavior strongly influence short-term delay patterns.
At intersections and roundabouts, traffic delays arise from the complex interplay of demand, capacity, signal timing, and queue dynamics. Conventional traffic engineering approaches have long recognized the importance of queue length, vehicle stops, and traffic intensity as key indicators of delay [14,15]. However, the relative importance of different feature groups—such as historical delay values, operational state measurements (queue length, stops), efficiency indicators (free-flow travel time ratios), and demand characteristics (traffic volume)—remains an open question in the context of deep learning-based prediction. Different feature groups may capture complementary or redundant information, and their relative contributions may vary depending on the prediction horizon, traffic conditions, and model architecture.
This study addresses this gap by conducting a systematic investigation of how different input feature configurations affect short-term traffic delay prediction at an urban intersection. Unlike prior studies that primarily focus on developing new prediction architectures, this work isolates the contribution of different feature categories under a controlled experimental framework. Therefore, rather than introducing a novel model architecture, we adopt a controlled experimental design in which the model architecture (GRU and LSTM) and hyperparameters are held constant across multiple scenarios, while the input feature set is varied systematically. This approach allows us to isolate the effect of input features on prediction performance and to draw conclusions about the relative importance of different traffic measurements.
We define five feature scenarios based on distinct groups of traffic characteristics: a temporal baseline (S0), historical delay features (S1), operational state features (S2), combined historical and operational features (S3), and a full feature configuration that additionally incorporates demand measurements (S4). By comparing prediction performance across scenarios that include different combinations of these feature groups, we aim to quantify their individual and joint contributions to delay prediction accuracy. The analysis is conducted using high-resolution approach-level data collected at one-minute intervals from a roundabout in Jeddah, Saudi Arabia, providing insights relevant to similar urban intersection environments.
The findings of this study have practical implications for the design and deployment of traffic prediction systems. If certain feature groups are found to contribute little to prediction accuracy, they can be deprioritized in data collection or excluded from models to reduce computational burden. Conversely, identifying highly informative feature groups can guide investment in sensor technologies and data collection infrastructure. Moreover, understanding feature importance can aid model interpretability, helping traffic engineers and operators understand which traffic conditions drive delay predictions and supporting more informed decision-making.
The remainder of this paper is organized as follows. Section 2 reviews related work on traffic prediction, delay modeling, and feature importance. Section 3 describes the study site, data collection, feature engineering, and experimental design. Section 4 presents the results of the controlled experiments. Section 5 discusses the implications of the findings and study limitations. Section 6 concludes the paper.

2. Related Work

2.1. Traffic Prediction Using Machine Learning and Deep Learning

Traffic prediction has been an active area of research for several decades, with methods evolving from traditional statistical approaches to sophisticated deep learning architectures. Early work relied on time series models such as autoregressive integrated moving average (ARIMA) and its variants, which capture linear temporal dependencies but struggle with the nonlinear and dynamic nature of traffic flow, particularly during congested conditions or in the presence of incidents [16].
Deep learning methods have transformed traffic prediction by enabling automatic feature extraction from raw data and by providing powerful tools for modeling complex spatiotemporal patterns. Recurrent neural networks and their variants—particularly LSTM and GRU—have become dominant approaches for traffic flow and speed prediction due to their ability to maintain memory of past states [2,3,4,5,6,7,8]. Studies have demonstrated that LSTM and GRU models consistently outperform traditional methods across diverse datasets and traffic scenarios [16,17].
Recent research has explored hybrid architectures that combine multiple deep learning components. Convolutional neural networks (CNNs) are often integrated with RNNs to capture spatial dependencies in traffic networks, with CNN layers extracting spatial features and RNN layers modeling temporal dynamics [17,18]. Graph neural networks (GNNs) have been proposed to model traffic flow and assignment on road networks with irregular topologies, treating road segments as nodes and connections as edges, enabling richer spatial dependency modeling [13]. Attention mechanisms have been incorporated to dynamically weight the importance of different time steps or spatial locations [12].
While much of the literature focuses on traffic flow or speed prediction on highways and road networks, intersection-level delay prediction has received considerably less attention. Intersections exhibit unique traffic dynamics due to traffic operations, stop-and-go behavior, and queue formation and dissipation. Roundabouts present distinct operational characteristics compared to signalized intersections, with continuous flow subject to gap acceptance and circulating traffic priority. Recent studies have highlighted the complexity of roundabout design and traffic behavior, emphasizing the need for accurate performance evaluation tools [19,20,21,22,23,24]. To the best of the author’s knowledge, relatively few studies have systematically compared the contribution of different traffic feature groups for intersection-level delay prediction.
Moreover, most existing prediction studies address network-level traffic flow or speed. In contrast, intersection-level delay prediction presents unique challenges due to queue dynamics, stop-and-go behavior, and localized congestion effects.

2.2. Traffic Delay and Congestion Indicators

The concept of delay is central to traffic engineering and has been extensively studied in the context of intersection capacity and level of service analysis. The Highway Capacity Manual defines control delay as the additional travel time experienced by vehicles due to traffic control devices and traffic interactions, encompassing deceleration delay, stopped delay, and acceleration delay [14]. Delay is influenced by multiple factors, including traffic volume, signal timing, geometric design, and driver behavior.
Queue length is recognized as a fundamental indicator of congestion and delay at intersections. When vehicle arrivals exceed the departure rate during red phases, queues form and persist until the green phase allows discharge. Queue length measurements can be obtained from various sources including loop detectors, video analytics, and probe vehicle data [25,26]. The relationship between queue length and delay is well-established in traffic theory, with longer queues generally corresponding to higher average delay per vehicle, though the relationship is nonlinear and depends on signal timing.
The number of stops experienced by vehicles is another important measure of traffic efficiency and delay. Stop-and-go driving not only increases travel time but also leads to higher fuel consumption, emissions, and driver discomfort. Advanced traffic control systems aim to minimize the number of stops through signal coordination and adaptive timing [27]. In the context of prediction, the number of stops reflects the intensity of congestion and the degree of signal interference experienced by approaching vehicles.
Traffic intensity and efficiency ratios provide normalized measures of congestion that account for the relationship between actual travel time and free-flow travel time. The ratio of actual travel time to free-flow travel time is commonly used in traffic assignment models and can be related to volume-capacity ratios. These indicators are useful for comparing congestion levels across different road segments and time periods, and for identifying bottlenecks in traffic networks.

2.3. Feature Importance and Input Selection

Feature selection and feature engineering are critical steps in developing effective machine learning models, yet they are often treated as preprocessing tasks rather than central research questions. In traffic prediction, the choice of input features can significantly impact model performance, but systematic studies comparing the contribution of different feature groups are relatively scarce. Most research focuses on including as many potentially relevant features as possible, or relies on domain knowledge to select a standard feature set without empirical validation [28,29,30,31].
Several approaches have been proposed for assessing feature importance in machine learning models. Filter methods evaluate features based on statistical properties such as correlation with the target variable, mutual information, or variance [32]. Wrapper methods assess feature subsets by training and evaluating models with different feature combinations, though this can be computationally expensive for large feature spaces. Embedded methods incorporate feature selection into the model training process, as in regularized regression or tree-based models that provide feature importance scores.
In the context of deep learning, attention mechanisms have been used to identify which input time steps or spatial locations contribute most to predictions [12]. Model-agnostic interpretation methods such as SHAP (SHapley Additive exPlanations) can quantify the marginal contribution of each feature to model predictions; however, they provide instance-level or feature-level importance scores rather than assessing the predictive value of entire feature groups under controlled experimental conditions. The present study therefore adopts a scenario-based approach that holds model architecture constant while systematically varying input feature groups, providing more operationally interpretable insights for practitioners.
Some studies have examined the importance of temporal features, such as time of day and day of week, for traffic prediction. These features capture recurring patterns in travel demand, such as morning and evening peak periods and weekday versus weekend traffic. However, the relative importance of temporal features compared to other feature groups—such as real-time traffic measurements or historical delay values—is not well understood [30,31].
A few studies have investigated the contribution of external factors such as weather conditions, special events, and incidents to traffic prediction. While these factors can cause significant deviations from typical traffic patterns, their impact may be localized in time and space. Incorporating such features requires additional data collection and may not always improve prediction accuracy, particularly for short-term forecasts where current traffic state provides strong signals [30,31,33].
Despite these efforts, there remains a lack of systematic studies that isolate and compare the contribution of different traffic feature groups to delay prediction at intersections. Most existing research evaluates models using fixed feature sets or focuses primarily on comparing different model architectures. Consequently, the relative contribution of distinct feature categories—such as historical delay, operational state variables, demand characteristics, and temporal context—remains insufficiently understood. To address this limitation, the present study adopts a controlled experimental framework in which the model architecture is held constant while the input feature groups are systematically varied. This design enables a clearer assessment of how different traffic measurements influence short-term delay prediction performance.
The following section describes the study site, dataset, and the experimental framework used to evaluate these feature configurations.

3. Methodology

3.1. Study Site and Data Collection

3.1.1. Intersection Description

This study focuses on a major urban roundabout located in Jeddah, Saudi Arabia, which consists of four approaches (Eastbound, Northbound, Westbound, and Southbound). The roundabout represents a typical high-demand urban traffic facility subject to recurrent congestion during peak periods.
To ensure methodological clarity and controlled analysis, the present study concentrates on a single approach (Approach 2), which exhibits the highest congestion levels and the largest delay variability during the observation period.
To verify the general applicability of the proposed modeling framework, the best-performing configuration was additionally evaluated on the remaining three approaches. Only summary performance statistics are reported for these approaches in order to maintain a focused methodological analysis.

3.1.2. Data Source and Period

Traffic data were obtained from the TomTom Junction Analytics platform (TomTom International BV, Amsterdam, The Netherlands), which provides high-resolution, approach-level traffic performance indicators derived from floating car data.
  • Study period: March 2025 (full month)
  • Temporal resolution: 1-min intervals
  • Total observations: 44,611 records for the selected approach
The study focuses on a single month (March 2025) to maintain consistent traffic conditions and avoid seasonal variability, enabling a controlled comparison of feature configurations. Moreover, the high temporal resolution enables short-term traffic delay prediction suitable for real-time traffic management applications.

3.2. Data Description and Variables

Table 1 summarizes the traffic variables available for each minute.
To further examine the relationships between key traffic variables, a pairwise correlation analysis was conducted between delay, queue length, and traffic volume. The results are illustrated in Table 2.
The results show a moderate positive correlation between queue length and traffic volume (r = 0.61), indicating that queue dynamics reflect variations in traffic demand. In contrast, the correlations between delay and the other variables are weaker (r ≈ 0.24–0.25), suggesting that delay is influenced by multiple operational factors beyond traffic demand alone. These findings support the interpretation that queue-related variables provide valuable information about congestion dynamics in intersection environments.
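As an illustration of how such pairwise coefficients are obtained, the Pearson correlation between two minute-level series can be computed directly from its definition. The sketch below is illustrative Python only (function name and data are hypothetical, not the authors' MATLAB implementation):

```python
def pearson(x, y):
    # Pearson correlation coefficient between two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5
```

Applied to the minute-level queue-length and volume series, this yields a single coefficient such as the r = 0.61 reported above.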
The delay (D) variable is selected as the primary prediction target because it directly reflects congestion severity and operational efficiency at intersections. Unlike travel time, which may include variations unrelated to intersection performance, delay isolates the additional time experienced by vehicles relative to free-flow conditions. As a result, delay provides a more informative indicator for evaluating traffic control performance and short-term congestion dynamics. Furthermore, delay is commonly used in intelligent transportation systems (ITS) to evaluate intersection performance and guide adaptive intersection control strategies.
Although the predictive models in this study focus primarily on Approach 2, descriptive statistics for all roundabout approaches are presented in Table 3. Providing statistics for all approaches allows a quick comparison of the traffic characteristics across the intersection and helps contextualize the prediction results discussed later in the paper.
The statistics reveal noticeable differences in traffic demand and operational conditions among the approaches. In particular, Approach 2 exhibits relatively high queue lengths and substantial delay variability compared with the other approaches, indicating pronounced congestion dynamics suitable for evaluating delay prediction models.
Approach 1, in contrast, shows significantly lower traffic volumes and shorter queues, indicating more stable traffic conditions. Such characteristics typically result in smoother traffic flow and lower variability in delay, which can make prediction tasks comparatively easier. Meanwhile, Approaches 3 and 4 present higher traffic demand levels and larger queue variations, reflecting more complex congestion patterns.
For these reasons, Approach 2 was selected as the primary modeling case in this study. It provides a balanced traffic environment where congestion dynamics are sufficiently pronounced to evaluate prediction models while still maintaining adequate data consistency for training deep learning models. In addition, the similarity of key traffic indicators across several approaches supports the general applicability of the proposed prediction framework beyond a single approach.

3.3. Feature Engineering

Rather than using raw variables directly, the input features are organized into conceptually meaningful groups to analyze their relative contribution to prediction performance. This grouping allows a structured investigation of how different sources of traffic information, including temporal persistence, operational state variables, and demand indicators, contribute to prediction accuracy.

Feature Groups

Group A: Historical Delay Features
These features capture short-term temporal persistence in traffic conditions:
  • D_{t-1}: delay at the previous minute;
  • D_{t-5}: delay five minutes earlier;
  • UD_t: usual (historical baseline) delay.
Group B: Traffic Efficiency Indicators
This group reflects vehicle movement efficiency and delay intensity:
  • ST_t: average number of stops;
  • travel efficiency ratio: (TT_t - D_t) / TT_t;
  • delay intensity: D_t / TT_t.
Group C: Queue-Related Measurements
Queue dynamics are represented using:
  • QL_t: current queue length;
  • queue growth rate: QL_t - QL_{t-1};
  • normalized queue length: QL_t / max(QL).
Group D: Demand-Related Features
Traffic demand characteristics are described by:
  • V_t: traffic volume;
  • volume variation: V_t - V_{t-5}.
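The derived quantities above are simple arithmetic transformations of the raw minute-level variables. A minimal Python sketch is shown below for illustration (the field names TT, D, QL, and V are assumptions, and the study's own pipeline was implemented in MATLAB):

```python
def derive_features(rec, prev, prev5, max_ql):
    # rec: current minute record; prev: previous minute; prev5: five minutes earlier
    # Assumed fields: TT = travel time, D = delay, QL = queue length, V = volume
    return {
        "efficiency": (rec["TT"] - rec["D"]) / rec["TT"],  # travel efficiency ratio
        "intensity": rec["D"] / rec["TT"],                 # delay intensity
        "queue_growth": rec["QL"] - prev["QL"],            # queue growth rate
        "queue_norm": rec["QL"] / max_ql,                  # normalized queue length
        "volume_var": rec["V"] - prev5["V"],               # volume variation
    }
```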

3.4. Temporal Context Features

To capture periodic traffic patterns, temporal features are included in all scenarios. Temporal context variables include hour-of-day and day-of-week encoded using sine and cosine transformations to capture cyclic temporal patterns. In addition, a binary peak-period indicator was introduced to represent typical congestion periods, including the morning peak (07:00–09:00) and evening peak (16:00–19:00).
These features allow the model to learn daily and weekly traffic regularities without explicitly relying on calendar variables.
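A hedged sketch of this encoding, assuming hour-of-day values in [0, 24), weekday indices 0-6, and the peak windows stated above (the weekday origin is an assumption for illustration):

```python
import math

def temporal_features(hour, weekday):
    # hour in [0, 24); weekday in {0, ..., 6}
    return {
        "hour_sin": math.sin(2 * math.pi * hour / 24),
        "hour_cos": math.cos(2 * math.pi * hour / 24),
        "dow_sin": math.sin(2 * math.pi * weekday / 7),
        "dow_cos": math.cos(2 * math.pi * weekday / 7),
        # binary peak indicator: morning 07:00-09:00, evening 16:00-19:00
        "is_peak": 1 if (7 <= hour < 9) or (16 <= hour < 19) else 0,
    }
```

The sine/cosine pair ensures that 23:00 and 00:00 are encoded as neighbors rather than opposite extremes, which a raw hour value would not capture.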

3.5. Prediction Target

The prediction task is formulated as short-term delay forecasting. Two prediction horizons are examined: one minute ahead and five minutes ahead.
D̂_{t+h}
where h represents the prediction horizon (h = 1 or h = 5 min).
This short prediction horizon is operationally meaningful for real-time traffic monitoring and adaptive control strategies.

3.6. Input Feature Scenarios

To systematically assess the impact of different input configurations, five feature scenarios are designed as shown in Table 4.
This design enables controlled comparison between temporal, historical, operational, and demand-driven information.

3.7. Recurrent Neural Network Models

3.7.1. Recurrent Neural Network Architecture

To model temporal dependencies in traffic delay dynamics, recurrent neural networks are adopted. Specifically, both Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) architectures are evaluated. These architectures are widely used in traffic prediction because of their ability to capture sequential patterns in time-series data.
To ensure a fair comparison between architectures, both models follow the same overall network configuration, differing only in the recurrent layer type.
The network structure is defined as follows:
  • Input layer (scenario-dependent feature dimension);
  • Recurrent layer (GRU or LSTM) with 32 hidden units;
  • Dropout layer (rate = 0.1) to reduce overfitting;
  • Fully connected layer with 16 neurons and ReLU activation;
  • Output layer with a single neuron for delay prediction.
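The behavior of the recurrent layer can be illustrated with the standard GRU update equations, here reduced to scalar weights for clarity. This is a didactic sketch of a single GRU step using one common gating convention, not the study's 32-unit MATLAB network:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def gru_cell(x, h_prev, w):
    # w: dict of scalar weights for the update (z), reset (r), and candidate gates
    z = sigmoid(w["Wz"] * x + w["Uz"] * h_prev)              # update gate
    r = sigmoid(w["Wr"] * x + w["Ur"] * h_prev)              # reset gate
    h_tilde = math.tanh(w["Wh"] * x + w["Uh"] * r * h_prev)  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde                  # new hidden state
```

The LSTM differs mainly in maintaining a separate cell state with input, forget, and output gates; both architectures otherwise occupy the same slot in the network described above.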

3.7.2. Training Configuration

The selected network configuration represents a moderate-complexity architecture commonly adopted in short-term traffic prediction studies. To isolate the effect of input features, all hyperparameters were kept constant across scenarios and model architectures:
  • Lookback window: 10 time steps (10 min);
  • Optimizer: Adam;
  • Learning rate: 0.001;
  • Batch size: 64;
  • Loss function: Mean Squared Error (MSE);
  • Early stopping based on validation loss.
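With a lookback of 10 steps and horizon h, supervised samples are formed by sliding a window over the chronological series. A minimal sketch, assuming a plain list of per-minute values (the study's pipeline was implemented in MATLAB):

```python
def make_windows(series, lookback=10, horizon=1):
    # series: chronological per-minute values (targets, or feature vectors)
    X, y = [], []
    for t in range(lookback - 1, len(series) - horizon):
        X.append(series[t - lookback + 1:t + 1])  # last `lookback` observations
        y.append(series[t + horizon])             # target h steps ahead
    return X, y
```

Note that a longer horizon shortens the usable sample count slightly, since the last h observations cannot serve as inputs.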

3.8. Data Preparation

3.8.1. Cleaning and Filtering

Data preprocessing was conducted to ensure the reliability of the training dataset. Missing observations were removed using a complete-case strategy. In addition, records with extreme delay values greater than 300 s were excluded, as such values typically correspond to abnormal traffic conditions or measurement noise. These extreme observations represented less than 0.5% of the dataset and their removal did not materially affect the overall data volume. After these steps, only valid operational records were retained for model training and evaluation.
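A minimal sketch of this complete-case and extreme-value filtering, assuming each record is a dictionary with a delay field D (field names are hypothetical):

```python
def clean_records(records, max_delay=300.0):
    # Complete-case removal plus exclusion of extreme delays (> 300 s)
    kept = []
    for r in records:
        if any(v is None for v in r.values()):
            continue  # drop records with any missing observation
        if r["D"] > max_delay:
            continue  # drop abnormal/noisy extreme delays
        kept.append(r)
    return kept
```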

3.8.2. Feature Scaling

All numerical features were standardized using z-score normalization. To prevent information leakage and ensure reproducibility in real-time deployment scenarios, the scaling parameters were computed from the training set only and then applied unchanged to the validation and test sets, ensuring that no information from future observations is used during model training.
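The leakage-free scaling procedure can be sketched as follows, assuming univariate lists for brevity (the same logic applies per feature):

```python
def fit_scaler(train_values):
    # Mean and standard deviation computed on the training set only
    n = len(train_values)
    mean = sum(train_values) / n
    std = (sum((v - mean) ** 2 for v in train_values) / n) ** 0.5
    return mean, (std if std > 0 else 1.0)  # guard against constant features

def z_transform(values, mean, std):
    # Applied identically to training, validation, and test data
    return [(v - mean) / std for v in values]
```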

3.8.3. Train–Validation–Test Split

A chronological split was applied to preserve temporal integrity:
  • Training set: 75%;
  • Validation set: 10%;
  • Test set: 15%.
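The chronological 75/10/15 split reduces to computing two index boundaries over the time-ordered samples; a sketch assuming n ordered records:

```python
def chrono_split(n, train_frac=0.75, val_frac=0.10):
    # Chronological split: no shuffling, so test data is strictly later in time
    i = int(n * train_frac)
    j = i + int(n * val_frac)
    return (0, i), (i, j), (j, n)  # (start, end) index pairs per subset
```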

3.9. Evaluation Metrics

Prediction performance is assessed using:
  • Mean Absolute Error (MAE):
    MAE = (1/n) Σ |y − ŷ|
  • Root Mean Squared Error (RMSE):
    RMSE = √( (1/n) Σ (y − ŷ)² )
MAE provides an interpretable measure in seconds, while RMSE penalizes large errors associated with congestion peaks.
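Both metrics follow directly from their definitions; a minimal sketch, assuming paired lists of observed and predicted delays in seconds:

```python
def mae(y, y_hat):
    # Mean absolute error, in the same units as the target (seconds)
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

def rmse(y, y_hat):
    # Root mean squared error; squaring penalizes large congestion-peak errors
    return (sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y)) ** 0.5
```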

3.10. Visualization and Analysis Framework

Model performance is analyzed through a multi-level framework that includes:
  • Aggregate MAE and RMSE comparison across feature scenarios (S0–S4) to evaluate the contribution of temporal, historical, operational, and demand-related feature groups.
  • Relative improvement with respect to the historical baseline scenario (S1) to quantify the practical benefit of adding operational and demand-related inputs.
  • Comparison across recurrent architectures (GRU and LSTM) to examine whether the observed feature effects remain consistent under different model structures.
  • Comparison across prediction horizons (1-min and 5-min ahead forecasts) to assess how the value of different feature groups changes with forecasting range.
  • Repeated-run analysis (10 runs) using mean and standard deviation of performance metrics to evaluate result stability under stochastic training effects.
  • Statistical significance testing of pairwise scenario differences to determine whether observed performance gains reflect genuine improvements rather than random variation.
  • Time-window comparison of observed and predicted delay for representative scenarios to provide a qualitative view of temporal tracking behavior.
  • Residual analysis to examine prediction error distributions and identify systematic under- or over-estimation patterns.
  • Summary evaluation across the remaining three approaches using the best-performing configuration to assess the general consistency of the proposed framework beyond the main analyzed approach.
  • A hyperparameter sensitivity analysis to verify that the main findings are not highly dependent on a specific hidden-layer size choice.
This multi-level framework enables quantitative comparison, statistical validation, and practical interpretation of model behavior under different feature configurations, model architectures, forecasting horizons, and traffic conditions. All experiments were implemented using MATLAB R2025b (MathWorks, Natick, MA, USA).
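The repeated-run significance testing can be illustrated with a paired t-statistic over the 10 per-run MAE values of two scenarios. This is a simplified stdlib sketch (the exact test variant used in the study is not specified here); the resulting statistic would be compared against a t-distribution with n − 1 degrees of freedom:

```python
import statistics

def paired_t(errors_a, errors_b):
    # Paired t-statistic over matched per-run MAE values of two scenarios
    d = [a - b for a, b in zip(errors_a, errors_b)]
    n = len(d)
    mean_d = statistics.mean(d)
    sd = statistics.stdev(d)  # sample standard deviation (n - 1 denominator)
    return mean_d / (sd / n ** 0.5)
```

Pairing by run matches scenarios trained under the same random conditions, which removes between-run variance from the comparison.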

4. Results

4.1. Overall Performance Comparison

Table 5 and Table 6 summarize the prediction performance of all feature scenarios for both the GRU and LSTM models under the two forecasting horizons considered in this study (1-min and 5-min ahead prediction, respectively). The reported values represent the mean and standard deviation obtained from 10 independent training runs, allowing assessment of both prediction accuracy and result stability.
The results presented in Table 5 and Table 6 reveal a clear hierarchy among the tested feature configurations. As shown in Table 5, which corresponds to the 1-min forecasting horizon, the temporal-only baseline scenario (S0) consistently produces the largest prediction errors for both models, with MAE values exceeding 22 s. The relatively small standard deviations observed across runs indicate stable model behavior despite stochastic training effects.
Introducing historical delay information (S1) improves prediction accuracy compared with S0. However, the improvement remains limited, with MAE values around 20–21 s. This suggests that relying solely on short-term temporal persistence does not fully capture the rapid dynamics of congestion formation at urban intersections.
A substantial improvement emerges when operational traffic indicators are incorporated. As shown in Table 5, scenarios S2 and S3 reduce the MAE to approximately 17.3 s for both GRU and LSTM models. These results confirm that operational indicators such as queue length, stop frequency, and efficiency ratios provide highly informative signals about the current traffic state.
The best overall performance for the 1-min horizon is achieved by the full feature configuration (S4). The GRU model achieves a mean MAE of 17.24 ± 0.07 s, while the LSTM model achieves 17.22 ± 0.11 s. Compared with the historical baseline scenario (S1), this corresponds to an improvement of approximately 17% in prediction accuracy.
Another important observation is that the performance differences between scenarios S2, S3, and S4 remain relatively small, and their standard deviations largely overlap. This indicates that operational traffic variables already capture a large portion of the predictive information, while the additional demand-related variables in S4 provide only marginal improvement.
For the 5-min forecasting horizon, the results in Table 6 show that the overall prediction errors increase and the differences between feature scenarios become smaller. For example, the MAE for the GRU model decreases only from 22.52 s in S0 to 21.84 s in S4. This narrowing performance gap indicates that the predictive advantage of richer feature sets diminishes as the forecasting horizon becomes longer.
Comparing Table 5 and Table 6 therefore suggests that richer feature configurations are particularly beneficial for very short-term predictions, while their relative advantage decreases for longer forecasting horizons.
Although both GRU and LSTM architectures were evaluated, the performance differences between the two models remain consistently small across all scenarios and horizons. This indicates that both recurrent architectures are capable of effectively modeling the temporal dynamics of traffic delay.
Given its simpler structure and lower computational complexity, the GRU architecture may therefore be more suitable for real-time deployment scenarios where computational efficiency is important.
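The efficiency argument can be made concrete by comparing gate parameter counts. Under the textbook formulations (three gates for the GRU, four for the LSTM, with one input weight matrix, one recurrent weight matrix, and one bias vector per gate; actual library implementations may differ slightly, e.g., with extra bias vectors), a sketch of the counts is:

```python
def gru_params(input_dim, hidden):
    # 3 gates (update, reset, candidate), each with input, recurrent, and bias weights
    return 3 * (input_dim * hidden + hidden * hidden + hidden)

def lstm_params(input_dim, hidden):
    # 4 gates (input, forget, output, cell candidate)
    return 4 * (input_dim * hidden + hidden * hidden + hidden)

# Example: 16 input features (as in the full scenario S4), 32 hidden units
g, l = gru_params(16, 32), lstm_params(16, 32)
print(g, l, g / l)  # the GRU layer uses 25% fewer parameters than the LSTM layer
```

With near-identical accuracy, this fixed 3:4 parameter ratio translates directly into lower memory and computation per time step for the GRU.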
To verify whether the observed differences between feature scenarios are statistically meaningful, the MAE values obtained from the 10 repeated runs were compared using paired t-tests, as detailed in the following subsection.

4.2. Statistical Significance Analysis

To further verify whether the observed improvements between feature scenarios are statistically significant, paired t-tests were conducted on the MAE values obtained from the 10 independent training runs for each model, scenario, and forecasting horizon; Table 7 summarizes the results.
The results indicate that for the 1-min forecasting horizon, the full feature configuration (S4) significantly outperforms both the temporal baseline (S0) and the historical baseline (S1) for both GRU and LSTM models (p < 0.001). This confirms that incorporating operational and demand-related traffic indicators leads to a statistically meaningful improvement in prediction accuracy.
For the 5-min forecasting horizon, S4 remains significantly better than S0 for both models. However, the differences between S4 and the intermediate feature configurations (S1–S3) are generally not statistically significant. This suggests that the relative advantage of richer feature sets decreases as the prediction horizon increases, which is consistent with the narrowing performance differences observed in Table 6.
Overall, the statistical analysis supports the conclusion that richer feature representations provide the greatest benefit for very short-term traffic delay prediction, while their relative contribution becomes less pronounced for longer forecasting horizons.
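The paired t-test used here can be illustrated with a short sketch. The per-run MAE lists below are hypothetical (the actual per-run values underlying Table 7 are not reproduced in the paper); the statistic is the mean per-run difference divided by its standard error:

```python
import math
import statistics

def paired_t(sample_a, sample_b):
    """Paired t statistic on per-run differences; degrees of freedom = n - 1."""
    diffs = [a - b for a, b in zip(sample_a, sample_b)]
    n = len(diffs)
    t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t_stat, n - 1

# Hypothetical per-run MAE values (s) for scenarios S1 and S4, 10 runs each
mae_s1 = [20.9, 20.8, 20.9, 20.8, 20.9, 20.9, 20.8, 20.9, 20.8, 20.9]
mae_s4 = [17.2, 17.3, 17.2, 17.2, 17.3, 17.2, 17.2, 17.3, 17.2, 17.2]
t, df = paired_t(mae_s1, mae_s4)
# With df = 9, |t| > 4.78 corresponds to p < 0.001 (two-sided)
print(t, df)
```

Pairing runs by index controls for shared training conditions, so even small mean differences can be significant when the run-to-run spread is low, which is exactly the regime of Tables 5 and 6.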

4.3. Temporal Prediction Behavior

To better illustrate the qualitative prediction behavior of the models, Figure 1 and Figure 2 present time-series comparisons between the observed delay and the predicted delay produced by the models for a representative half-day time window.
Figure 1 shows the predictions obtained using the GRU model for the baseline scenario (S1) and the full feature configuration (S4). It can be observed that both scenarios are able to capture the general evolution of delay throughout the day. However, the predictions produced by S4 follow the observed delay pattern more closely than those produced by S1. In particular, S4 better captures the magnitude of congestion peaks and the timing of rapid delay changes.
In contrast, the baseline scenario S1 tends to produce smoother predictions and often underestimates high delay values during peak congestion periods. This behavior reflects the limited information content of historical delay features alone.
A similar pattern can be observed for LSTM. The full feature configuration (S4) consistently tracks the observed delay more accurately than the historical baseline (S1). However, the difference between S1 and S4 appears slightly less pronounced for LSTM than for GRU, which is consistent with the small performance gap observed in the quantitative metrics.
Overall, the time-series analysis confirms that incorporating operational and demand-related features improves the model’s ability to follow rapid traffic dynamics.

4.4. Residual Distribution Analysis

To further analyze the prediction errors, Figure 3 and Figure 4 present the residual distributions for scenarios S1 and S4 for both GRU and LSTM models. The residuals are defined as the difference between the observed delay and the predicted delay.
The histograms in Figure 3 show that, for the GRU model, the residual distribution of S4 is more concentrated around zero than that of S1. This indicates that the full feature configuration produces more stable predictions and reduces the magnitude of large prediction errors.
A similar pattern is observed for the LSTM model, as shown in Figure 4.
In both models, the S4 residual distribution exhibits a narrower spread and fewer extreme errors compared with the baseline scenario S1. This confirms that the richer feature set helps the model better capture sudden changes in traffic conditions.
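The spread comparison can be quantified with a small sketch. The residuals below are hypothetical (a residual is observed minus predicted delay); the summary reports the residual standard deviation and the fraction of extreme errors beyond an assumed 30 s threshold:

```python
import statistics

def residual_summary(observed, predicted, extreme=30.0):
    """Std dev of residuals and fraction exceeding an extreme-error threshold."""
    residuals = [o - p for o, p in zip(observed, predicted)]
    spread = statistics.pstdev(residuals)
    frac_extreme = sum(abs(r) > extreme for r in residuals) / len(residuals)
    return spread, frac_extreme

# Toy half-day slice of observed delays (s) with a 90 s congestion peak
obs = [20, 35, 90, 40, 25, 60]
pred_s1 = [30, 30, 45, 35, 30, 40]   # smoother baseline: badly misses the peak
pred_s4 = [22, 34, 80, 42, 26, 55]   # full feature set: tracks the peak closely
print(residual_summary(obs, pred_s1))
print(residual_summary(obs, pred_s4))
```

A narrower spread and a lower extreme-error fraction for the richer feature set mirror the qualitative pattern reported in Figures 3 and 4.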

4.5. Cross-Approach Generalization

To further evaluate the generalizability of the proposed framework, the best-performing configuration (S4) was also tested on the three remaining roundabout approaches, complementing the main experiments conducted on the most congested approach (Approach 2).
Table 8 summarizes the prediction accuracy obtained for each approach using the S4 configuration.
The results show noticeable differences in prediction difficulty among the approaches. Approach 1 achieves the lowest prediction error, with an MAE of 8.17 s, while the other approaches exhibit MAE values between approximately 16.6 and 17.4 s.
This difference is primarily related to the underlying traffic characteristics of each approach. As shown in the dataset summary statistics, Approach 1 exhibits significantly lower queue lengths and traffic volumes compared with the other approaches. Consequently, the traffic conditions at this approach are more stable and easier for the models to predict.
It is also important to note that the preprocessing step removed observations with delay values greater than 300 s. However, these extreme values represented less than 0.5% of the dataset for all approaches. Therefore, their removal had a negligible impact on the dataset size and does not explain the performance differences between approaches.
Overall, these results indicate that the proposed prediction framework remains applicable across different approaches, while the prediction difficulty is influenced by the underlying traffic demand and congestion variability.

4.6. Hyperparameter Sensitivity Analysis

To evaluate the robustness of the model with respect to hyperparameter selection, a sensitivity analysis was conducted by varying the number of hidden units in the GRU network (16, 32, and 64 units) while keeping all other training parameters fixed.
Table 9 presents the prediction accuracy and training time obtained for each configuration.
The results show only minor variations in prediction accuracy across the tested configurations. The best performance is obtained with 32 hidden units (MAE ≈ 17.19 s), while configurations with 16 and 64 units produce MAE values of approximately 17.28 s and 17.39 s, respectively.
These small differences indicate that the proposed framework is relatively insensitive to moderate variations in model capacity. In other words, the prediction performance is primarily driven by the selected feature configuration rather than extensive hyperparameter tuning.
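The sensitivity sweep follows a simple grid pattern, sketched below. `train_gru` is a hypothetical stand-in that returns the measured MAE and training time for each width directly from Table 9, rather than actually training a network:

```python
def train_gru(hidden_units):
    """Hypothetical stand-in: (MAE in s, training time in s) per Table 9."""
    measured = {16: (17.28, 49.55), 32: (17.19, 62.94), 64: (17.39, 60.28)}
    return measured[hidden_units]

def sweep(widths=(16, 32, 64)):
    """Evaluate each candidate width and select the one with the lowest MAE."""
    results = {h: train_gru(h) for h in widths}
    best = min(results, key=lambda h: results[h][0])
    return results, best

results, best = sweep()
print(best, results[best])  # 32 hidden units give the lowest MAE
```

Because the MAE varies by only about 0.2 s across a 4x range of widths, the sweep confirms that feature configuration, not model capacity, drives performance here.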

5. Discussion

5.1. Operational Insights

The experimental results provide useful insights into the relative importance of different feature groups for short-term intersection delay prediction. In particular, the strong performance of scenarios incorporating operational variables highlights the critical role of real-time traffic state indicators.
Queue length and the number of vehicle stops emerge as particularly informative predictors. Queue length directly reflects the imbalance between demand and available capacity, while stop frequency captures stop-and-go traffic behavior associated with congestion. Together, these operational indicators provide an effective representation of the current traffic state.
Interestingly, the improvement obtained by combining historical and operational features remains relatively small. Although the combined scenario achieves the lowest prediction error, the difference compared with operational-only features is marginal. This suggests that operational measurements implicitly capture short-term traffic history, since queue formation inherently reflects congestion accumulation over preceding minutes.
The limited contribution of demand-related variables also indicates that congestion effects are more effectively represented through observable operational indicators than through raw demand measurements alone. Queue dynamics integrate the combined effects of demand and capacity, making them a more direct signal of traffic conditions.

5.2. Practical Implications and Limitations

From a practical perspective, the results suggest that effective short-term delay prediction can be achieved using a relatively compact set of operational features. This is advantageous for real-time deployment because it reduces both data requirements and computational complexity. Traffic monitoring systems may therefore benefit from prioritizing technologies capable of accurately measuring queue length and vehicle stops.
However, several limitations should be acknowledged. First, the analysis focuses primarily on one approach at a single roundabout, which may limit the generalizability of the findings to other intersection configurations. Although cross-approach experiments provide supporting evidence, further validation across multiple intersections would strengthen the conclusions.
Second, the experiments employ a fixed model configuration in order to isolate the effect of input features. Future research could examine whether similar feature importance patterns hold across alternative architectures or training strategies.
Finally, the study focuses on one-step-ahead prediction. Longer prediction horizons may exhibit different sensitivities to input feature configurations, particularly with respect to demand-related variables. Extending the analysis to multi-step forecasting represents an important direction for future research.

6. Conclusions

This study investigated the impact of input feature configurations on short-term traffic delay prediction at an urban intersection using a controlled experimental framework. Five feature scenarios were evaluated using two recurrent neural network architectures (GRU and LSTM) and high-resolution, approach-level traffic data collected at one-minute intervals from an urban roundabout in Jeddah, Saudi Arabia. The analysis considered two forecasting horizons (1-min and 5-min ahead) and employed repeated training runs to ensure the robustness of the results.
The experimental findings provide several important insights regarding the role of different traffic features in intersection-level delay prediction. First, the temporal-only baseline scenario (S0) consistently produced the largest prediction errors for both models and forecasting horizons, confirming that temporal indicators alone are insufficient to capture short-term variations in intersection congestion. Introducing historical delay information (S1) improved prediction accuracy, but the improvement remained limited.
A substantial performance improvement emerged when operational traffic indicators were introduced. Scenarios incorporating queue-related and efficiency variables (S2–S4) significantly reduced prediction errors, highlighting the critical importance of real-time traffic state measurements. These results indicate that variables such as queue length, stop frequency, and efficiency ratios provide highly informative signals about current congestion conditions.
Among the tested configurations, the full feature scenario (S4) generally achieved the best performance for the one-minute forecasting horizon for both the GRU and LSTM models. However, the performance differences between scenarios S2, S3, and S4 remained relatively small, suggesting that operational indicators already capture most of the predictive information required for short-term delay forecasting. In contrast, demand-related variables such as traffic volume provided only marginal additional benefits when queue dynamics were already included.
For the longer forecasting horizon (5 min), the overall prediction errors increased, and the differences between feature scenarios became noticeably smaller. This result indicates that the predictive advantage of richer feature sets diminishes as the forecasting horizon increases, which is consistent with the growing uncertainty associated with longer-term traffic prediction.
The comparison between the GRU and LSTM architectures revealed very similar prediction performance across all scenarios and horizons. This suggests that, for the considered dataset and prediction task, the selection of informative input features plays a more influential role than the specific recurrent architecture used.
Additional experiments across the remaining roundabout approaches demonstrated that the proposed framework remains applicable under different traffic conditions. However, the prediction difficulty varied across approaches, largely due to differences in traffic demand and queue characteristics. Approaches with lower traffic volumes and shorter queues exhibited more stable traffic patterns and consequently lower prediction errors.
A hyperparameter sensitivity analysis further showed that moderate variations in the number of hidden units produced only minor changes in prediction accuracy. This indicates that the overall findings are relatively robust with respect to reasonable changes in model capacity.
Overall, the results indicate that input feature configuration plays an important role in short-term intersection delay prediction for the considered dataset and prediction task. The findings further suggest that effective delay forecasting can be achieved using a relatively compact set of real-time operational measurements, without requiring extensive historical data or complex feature engineering.
Future research could extend this work by evaluating the proposed framework across multiple intersections and network configurations, investigating multi-step forecasting horizons, incorporating external factors such as weather and incidents, and exploring interpretable learning methods that provide deeper insights into the relationship between traffic dynamics and prediction performance.

Funding

This research received no external funding.

Data Availability Statement

The data supporting the findings of this study are not publicly available due to licensing and access restrictions. The data were accessed through a private commercial account.

Conflicts of Interest

The author declares no conflicts of interest.

Figure 1. Delay prediction comparison using GRU for scenarios S1 and S4 over a representative half-day period.
Figure 2. Delay prediction comparison using LSTM for scenarios S1 and S4 over the same half-day period.
Figure 3. Residual distribution comparison between scenarios S1 and S4 using the GRU model.
Figure 4. Residual distribution comparison between scenarios S1 and S4 using the LSTM model.
Table 1. Traffic variables used in this study.
Variable | Abbreviation | Description | Unit
Travel time | TT | Observed travel time | seconds
Delay | D | Difference between observed and free-flow travel time | seconds
Usual delay | UD | Expected delay based on historical traffic patterns | seconds
Stops | ST | Average number of stops per vehicle | count
Queue length | QL | Estimated queue length | meters
Traffic volume | V | Estimated hourly traffic demand | veh/hour
Table 4. Description of the proposed input feature scenarios.
Scenario | Feature Groups Included | Description
S0 | Temporal only | Temporal baseline
S1 | Group A + Temporal | Historical baseline
S2 | Groups B + C + Temporal | Operational state model
S3 | Groups A + B + C + Temporal | Historical + operational features
S4 | Groups A + B + C + D + Temporal | Full feature set
Table 2. Pairwise correlation between key traffic variables.
Variable | Delay | Queue Length | Volume
Delay | 1 | 0.25 | 0.24
Queue Length | 0.25 | 1 | 0.61
Volume | 0.24 | 0.61 | 1
Table 3. Descriptive statistics of traffic variables across the roundabout approaches.
Approach | Delay Mean (s) | Travel Time Mean (s) | Queue Mean (m) | Volume Mean (veh/h) | Stops Mean
Approach 1 | 38.08 | 55.05 | 126.83 | 424.68 | 2.22
Approach 2 | 29.81 | 47.54 | 382.44 | 2325.94 | 1.79
Approach 3 | 35.52 | 61.18 | 369.62 | 1102.25 | 3.00
Approach 4 | 31.00 | 56.26 | 253.24 | 1690.87 | 2.87
Table 5. Prediction performance across feature scenarios for the 1-min forecasting horizon (mean ± standard deviation over 10 runs).
Model | Scenario | MAE (s) Mean | MAE Std | RMSE (s) Mean | RMSE Std
GRU | S0 | 22.454 | 0.115 | 30.196 | 0.061
GRU | S1 | 20.867 | 0.046 | 28.807 | 0.036
GRU | S2 | 17.380 | 0.056 | 25.433 | 0.045
GRU | S3 | 17.331 | 0.049 | 25.440 | 0.056
GRU | S4 | 17.240 | 0.071 | 25.346 | 0.034
LSTM | S0 | 22.669 | 0.176 | 30.294 | 0.057
LSTM | S1 | 20.971 | 0.123 | 28.892 | 0.047
LSTM | S2 | 17.346 | 0.103 | 25.493 | 0.089
LSTM | S3 | 17.322 | 0.065 | 25.473 | 0.035
LSTM | S4 | 17.221 | 0.106 | 25.359 | 0.065
Table 6. Prediction performance across feature scenarios for the 5-min forecasting horizon (mean ± standard deviation over 10 runs).
Model | Scenario | MAE (s) Mean | MAE Std | RMSE (s) Mean | RMSE Std
GRU | S0 | 22.518 | 0.143 | 30.208 | 0.065
GRU | S1 | 21.918 | 0.146 | 29.607 | 0.049
GRU | S2 | 21.925 | 0.084 | 29.442 | 0.040
GRU | S3 | 21.948 | 0.109 | 29.511 | 0.074
GRU | S4 | 21.843 | 0.114 | 29.412 | 0.079
LSTM | S0 | 22.745 | 0.172 | 30.280 | 0.094
LSTM | S1 | 21.925 | 0.188 | 29.678 | 0.092
LSTM | S2 | 22.033 | 0.137 | 29.609 | 0.081
LSTM | S3 | 22.034 | 0.205 | 29.586 | 0.085
LSTM | S4 | 22.009 | 0.186 | 29.544 | 0.101
Table 7. Paired t-test results for selected scenario comparisons based on MAE across 10 runs.
Model | Horizon (min) | Comparison | p-Value | Significant
GRU | 1 | S4 vs. S1 | 9.12 × 10⁻¹⁶ | Yes
GRU | 1 | S4 vs. S0 | 2.51 × 10⁻¹⁵ | Yes
LSTM | 1 | S4 vs. S1 | 2.05 × 10⁻¹³ | Yes
LSTM | 1 | S4 vs. S0 | 2.09 × 10⁻¹⁴ | Yes
GRU | 5 | S4 vs. S1 | 0.285 | No
GRU | 5 | S4 vs. S0 | 5.75 × 10⁻⁷ | Yes
LSTM | 5 | S4 vs. S1 | 0.391 | No
LSTM | 5 | S4 vs. S0 | 1.87 × 10⁻⁵ | Yes
Table 8. Prediction performance of the best configuration (S4) across different roundabout approaches.
Approach | Samples | Features | MAE (s) | RMSE (s)
Approach 1 | 44,467 | 16 | 8.17 | 14.25
Approach 3 | 44,611 | 16 | 16.63 | 27.47
Approach 4 | 44,614 | 16 | 16.83 | 27.30
Approach 2 | 44,615 | 16 | 17.39 | 25.47
Table 9. Sensitivity analysis of the GRU model with respect to the number of hidden units.
Hidden Units | Features | MAE (s) | RMSE (s) | Training Time (s)
16 | 16 | 17.28 | 25.33 | 49.55
32 | 16 | 17.19 | 25.39 | 62.94
64 | 16 | 17.39 | 25.52 | 60.28
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Atef, S. Vehicle Delay Prediction at Urban Roundabouts: Comparing Historical, Operational, and Demand-Based Features. Vehicles 2026, 8, 64. https://doi.org/10.3390/vehicles8030064
