Article

Federated Learning Frameworks for Intelligent Transportation Systems: A Comparative Adaptation Analysis

by
Mario Steven Vela Romo
1,
Carolina Tripp-Barba
2,
Nathaly Orozco Garzón
3,*,
Pablo Barbecho
4,
Xavier Calderón Hinojosa
1 and
Luis Urquiza-Aguiar
1
1
Departamento de Electrónica, Telecomunicaciones y Redes de Información, Facultad de Ingeniería Eléctrica y Electrónica, Escuela Politécnica Nacional, Quito 170408, Ecuador
2
Facultad de Informática Mazatlán, Universidad Autónoma de Sinaloa, Mazatlán 82017, Mexico
3
ETEL Research Group, Faculty of Engineering and Applied Sciences, Networking and Telecommunications Engineering, Universidad de Las Américas (UDLA), Quito 170503, Ecuador
4
Department of Electrical Engineering, Electronics and Telecommunications, Universidad de Cuenca, Cuenca 010203, Ecuador
*
Author to whom correspondence should be addressed.
Smart Cities 2026, 9(1), 12; https://doi.org/10.3390/smartcities9010012
Submission received: 22 October 2025 / Revised: 23 December 2025 / Accepted: 1 January 2026 / Published: 16 January 2026
(This article belongs to the Special Issue Big Data and AI Services for Sustainable Smart Cities)

Highlights

What are the main findings?
  • From a review of 39 Intelligent Transportation System (ITS) studies, we identified 15 applications using federated learning (FL) frameworks, grouped into three architecture families (privacy-focused, pipeline-preserving, and advanced-infrastructure/resource-coordinated). We then selected three representative frameworks for detailed comparison.
  • Our qualitative, architecture-level comparative analysis indicates that hierarchical, edge-assisted FL deployments (with intermediate aggregation at roadside units (RSUs) or cloudlets) are conceptually better aligned with scalability, latency, and stability requirements in ITS deployments than client–server baselines.
  • Digital Twin (DT) + Hierarchical Federated Learning (HFL) (DT+HFL) and Transfer Learning with Convolutional Neural Networks (TFL-CNN) emerged as complementary reference frameworks, combining simulation-based hierarchy and practical edge-level coordination.
  • We instantiated the adaptation methodology on four representative ITS applications—covering traffic prediction, real-time accident detection, transport mode identification, and driver profiling—to analyze their transition paths from centralized machine learning (ML) to FL-based deployments.
  • The proposed federability criteria—built around three diagnostic questions on (i) how naturally data sources are distributed across vehicles, edge nodes, and infrastructure; (ii) the feasibility of executing the core processing at edge or roadside units; and (iii) the decomposability of the learning pipeline into node-level models plus an aggregation step—effectively identify ITS applications that can transition from centralized ML to distributed FL settings.
What are the implications of the main findings?
  • Edge-assisted FL architectures are conceptually well suited for vehicular and traffic domains with intermittent connectivity and heterogeneous data.
  • This study provides a structured pathway for migrating existing ITS applications toward federated privacy-preserving and scalable smart-city deployments.

Abstract

Intelligent Transportation Systems (ITS) have progressively incorporated machine learning (ML) to optimize traffic efficiency, enhance safety, and improve real-time decision-making. However, the traditional centralized ML paradigm faces critical limitations regarding data privacy, scalability, and single-point vulnerabilities. This study explores federated learning (FL) as a decentralized alternative that preserves privacy by training local models without transferring raw data. Based on a systematic literature review encompassing 39 ITS-related studies, this work classifies applications according to their architectural detail—distinguishing systems from models—and identifies three families of FL frameworks: privacy-focused, integrable, and advanced infrastructure. Three representative frameworks—Federated Learning-based Gated Recurrent Unit (FedGRU), Digital Twin + Hierarchical Federated Learning (DT + HFL), and Transfer Learning with Convolutional Neural Networks (TFL-CNN)—were comparatively analyzed against a client–server baseline to assess their suitability for ITS adaptation. Our qualitative, architecture-level comparison suggests that DT + HFL and TFL-CNN, characterized by hierarchical aggregation and edge-level coordination, are conceptually better aligned with scalability and stability requirements in vehicular and traffic deployments than pure client–server baselines. FedGRU, while conceptually relevant as a meta-framework for coordinating multiple organizational models, is primarily intended as a complementary reference rather than as a standalone architecture for large-scale ITS deployment. Through application-level evaluations—including traffic prediction, accident detection, transport-mode identification, and driver profiling—this study illustrates that FL can be effectively integrated into ITS with moderate architectural adjustments. This work does not introduce new experimental results; instead, it provides a qualitative, architecture-level comparison and adaptation guideline to support the migration of ITS applications toward federated learning. Overall, the results establish a solid methodological foundation for migrating centralized ITS architectures toward federated, privacy-preserving intelligence, in alignment with the evolution of edge and 6G infrastructures.

1. Introduction

Intelligent Transportation Systems (ITS) combine sensing, communication, and control to improve safety, efficiency, and sustainability in road transport [1,2]. Traditionally, data generated by vehicles, roadside units (RSUs), and user devices have been uploaded to centralized traffic management centers, where ML models are trained and deployed [3,4]. While this centralized paradigm has enabled many successful applications, it raises well-known concerns regarding privacy [5] and bandwidth consumption, among others, especially when large volumes of raw data must be transferred to the cloud.
Federated learning (FL) [6], in contrast, explicitly decouples model training from data centralization: models are updated where data are generated and only model parameters or gradients are aggregated. This makes FL particularly well aligned with hierarchical ITS architectures, where vehicles, RSUs, and control centers already form a multi-tier structure that can naturally host local training, intermediate aggregation, and global coordination.
However, ITS presents distinctive constraints—heterogeneous devices, intermittent connectivity, real-time latency requirements, and multi-stakeholder governance—that make it unclear when an application can be federated and which FL framework is appropriate in practice. This work addresses these questions by combining taxonomies, an operational federability criterion, and architecture-level adaptations of concrete systems, as summarized below. In addition, we provide a compact quantitative synthesis of how FL-enabled ITS frameworks are empirically validated in the literature.
Building on our previously published literature review on ML and FL in ITS [7], this paper advances from a broad survey perspective toward a more architecture-oriented and adaptation-driven analysis. We begin by establishing a fundamental distinction between contributions that describe systems, with explicit architectures and data flows, and those that introduce only models, where the focus lies on algorithmic innovation rather than on deployment structure. This distinction enables a domain-aware organization of the literature—spanning road-, vehicle-, and user-centric applications—while clarifying how each type of contribution constrains or enables the transition from centralized to federated learning. In parallel, we organize existing FL proposals for ITS into three families that reflect their architectural intent and infrastructural assumptions: privacy-focused approaches that minimize changes to existing systems, integrable frameworks that can be coupled directly with deployed ITS pipelines, and advanced-infrastructure solutions that leverage hierarchical or resource-coordinated deployments. Within this structure, we articulate the operational notion of federability, defined as the degree to which an existing ITS application can be refactored into a federated-learning deployment without redesigning its sensing and decision loop. An application is considered federable when it already relies on naturally distributed data sources (e.g., vehicles, roadside units, user devices), can feasibly process the relevant features locally at these nodes, and presents an input–output organization that can be partitioned across node-level model updates and an aggregation layer.
Unlike recent surveys on FL for ITS and connected/automated vehicles—which primarily catalogue applications, datasets, and challenges—our aim is to derive a mapping methodology that rewrites concrete ITS architectures in terms of hierarchical FL frameworks. To illustrate this methodological shift, we conduct a qualitative comparison of three representative frameworks, FedGRU [8], DT + HFL [9], and TFL-CNN [10], using the classical client–server pattern as a baseline. Building on that comparison, we ultimately retain two complementary frameworks, DT + HFL and TFL-CNN, as candidates for detailed architectural adaptation.
We then apply these frameworks to four representative ITS applications—traffic prediction and management, real-time accident detection, transport mode identification, and driver profiling—demonstrating how each can be instantiated with realistic edge/cloud roles and hierarchical aggregation layers. This study is a purely qualitative, architecture-level examination and does not include new simulation or testbed experiments, nor an explicit analysis of economic and operational costs, which are left for future empirical and implementation-oriented work. To support such future efforts, we also outline an architecture-level benchmarking blueprint in Section 5.4.
Contributions. In summary, this work provides (1) a taxonomy connecting ITS systems vs. models and domain focus to their FL implications; (2) a consolidated map of FL frameworks by family (privacy-focused, integrable, advanced-infrastructure), including their base models and aggregation styles; (3) an evidence map and compact quantitative synthesis of evaluation practices across FL-enabled ITS frameworks; (4) a practical federability filter for selecting ITS systems suitable for FL adaptation; (5) an adaptation methodology—illustrated on four ITS systems—showing how DT + HFL and TFL-CNN can be instantiated with realistic edge/cloud roles and hierarchical aggregation, highlighting latency, connectivity, privacy, and edge-compute trade-offs; and (6) an architecture-level benchmarking blueprint to guide controlled architecture-to-architecture comparison in future FL-enabled ITS studies.
Paper organization. The remainder of this paper is organized as follows: Section 2 reviews the ITS, ML, and FL concepts needed in the rest of the paper. Section 3 summarizes related surveys and framework proposals for vehicular communications. Section 4 presents the systematic literature review (SLR)-based taxonomies and consolidates FL-enabled ITS frameworks, including an evidence map and a compact quantitative summary of validation practices. Section 5 compares three FL frameworks and introduces an architecture-level benchmarking blueprint. Section 6 reports the federability filter and the four adaptation case studies. Section 7 concludes and outlines future work.

2. Background

The development of ITS builds upon advances in machine learning and, more recently, federated learning. This section briefly reviews essential ITS concepts, which define the application domain, together with the algorithmic role of ML. We then describe FL, highlighting its core principles, approaches, data strategies, system variants, and aggregation techniques critical for adapting and assessing FL in ITS scenarios.

2.1. Intelligent Transportation Systems

ITS utilize advanced communication, control, and data technologies to enhance economic, energy, and social outcomes in transport [11]. By optimizing existing infrastructure, ITS boost control, efficiency, and safety to meet growing mobility needs [2]. Applicable to any transport mode, ITS integrate vehicles, infrastructure, and users, including drivers, passengers, and pedestrians. Typically, ITS involve systems for gathering data, processing information, and user interaction, enabling real-time network monitoring, journey planning, traffic management, incident alerts, event visualization, pollution reduction, and safety improvements. ITS applications are diverse and evolving, covering traffic management, customer support, vehicle tracking, emergency responses, and cooperative systems for all transport users.
The swift progress in technology has fostered new ITS research areas, such as distributed traffic information systems allowing data exchange between vehicles. Vehicular ad hoc networks (VANETs) have arisen from this research, becoming a cornerstone of ITS [12]. Leveraging technologies like machine learning and big data, ITS now depend on diverse data sources for decision-making [11].

2.2. Federated Learning

FL, launched by Google in 2016, is a distributed training paradigm in which a global model is learned by aggregating updates computed locally on multiple data-holding clients, without centralizing their raw data [13]; secure aggregation protocols [14] can additionally protect individual updates. This makes FL a natural candidate for ITS deployments, where data are inherently generated at the edge—by vehicles, RSUs, and user devices—and where privacy, bandwidth, and regulatory constraints often discourage massive data uploads to central clouds.
In the remainder of this section, we briefly review FL approaches and their classification from data and system perspectives, focusing only on the aspects that will be used later when selecting and comparing candidate frameworks and adapting ITS architectures in Section 5 and Section 6.

2.2.1. Federated Learning Approaches

In the literature on FL, different authors propose their own ways of categorizing the field. For instance, some refer to architectures [5], others to types [15], and others to frameworks [16], while Kairouz et al. [17] classify them according to specific system characteristics. Given this variety of terms and criteria, in this work the different forms of federated learning are grouped under the term approaches, as it provides a broad designation that emphasizes strategies rather than technical structures. The aim is to offer a conceptual perspective of federated learning categories without committing to a particular implementation detail.
Centralized Federated Learning
This approach uses a central server to coordinate local devices during training, collecting their model updates to build a global model. Each device trains on its own data and sends only model updates; the server then aggregates these updates with an aggregation algorithm and communicates exclusively with the local nodes. However, the central server can become a bottleneck when handling many updates, and network failures can disrupt aggregation and degrade performance. A basic architecture for this type of approach is illustrated in Figure 1a.
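As a concrete illustration of this coordination loop (a minimal sketch of our own, not taken from any of the reviewed frameworks), the following Python/NumPy fragment simulates a few centralized FL rounds under simplifying assumptions: each client holds a synthetic linear-regression dataset, local training amounts to a handful of gradient steps, and the server performs FedAvg-style aggregation weighted by sample counts.

import numpy as np

rng = np.random.default_rng(0)

def local_update(w_global, X, y, lr=0.1, epochs=5):
    # One client's local training: a few gradient steps on a linear model (MSE loss).
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w, len(y)  # updated parameters and local sample count

def fedavg(updates):
    # Server-side aggregation: average client parameters, weighted by sample counts.
    total = sum(n for _, n in updates)
    return sum(n / total * w for w, n in updates)

# Three clients (e.g., vehicles or organizational silos) with locally held data.
d = 4
clients = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(3)]
w_global = np.zeros(d)

for _ in range(10):  # communication rounds: only parameters are exchanged
    updates = [local_update(w_global, X, y) for X, y in clients]
    w_global = fedavg(updates)

The same loop structure underlies the frameworks compared in Section 5; what changes across them is where the aggregation steps are placed (central server, cloudlet, or RSU).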
Decentralized Federated Learning
This approach avoids a central training server. Local model updates are exchanged directly between edge nodes, eliminating a single point of failure. Nodes first share updated parameters with neighbors, then aggregate them with their own to build the global model. It supports various network topologies and can adapt to different setups [5], but the final model’s accuracy and efficiency depend largely on node arrangement [18]. In a peer-to-peer setup, connection patterns determine how updates are aggregated. Figure 1b shows a basic architecture for this approach.
Heterogeneous Federated Learning
Introduced in 2021, this method aims to coordinate the participation of heterogeneous nodes such as phones, computers, Internet of Things (IoT) devices, and vehicles with varying hardware, software, and data types. Because these nodes differ in capability, Heterogeneous Federated Learning (HeteroFL) [19] is proposed to enable them to train diverse local models and manage non-Independent and Identically Distributed (non-IID) data, thus creating a precise global inference model.

2.2.2. Data-Based Types of Federated Learning

FL is differentiated based on how data are distributed among nodes, that is, whether they share attributes, sample instances, or neither. This classification influences model training and is generally divided into three primary approaches (a data-layout sketch follows the list):
  • Vertical federated learning. This method applies when devices or nodes hold different attributes for the same sample instances (Figure 2a). Each node thus has unique but complementary information [15]. Vertical federated learning lets them collaboratively train an ML model using their own features without sharing sensitive data, enabling joint training of a global model when data exchange is not allowed.
  • Horizontal federated learning. This approach, also called sample-based federated learning, has devices storing datasets with the same features but different samples (Figure 2b) [15]. It suits mobile and IoT data, which typically share feature sets.
  • Federated Transfer Learning. This is an extension of vertical federated learning where datasets from various devices or entities differ in both samples and features but share a small common segment used for training. It addresses issues such as missing labels or incomplete data, in a manner akin to transfer learning in traditional machine learning, by integrating new features into a pre-trained model [15].
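The data-layout sketch announced above makes this distinction concrete; the feature names and array shapes are hypothetical and only indicate which dimension (samples or features) is split across clients.

import numpy as np

# A hypothetical ITS feature table: rows = trips, columns = features
# (speed, occupancy, headway, incident_flag).
full = np.arange(20 * 4, dtype=float).reshape(20, 4)

# Horizontal FL: clients share the SAME features but hold DIFFERENT samples
# (e.g., two RSUs observing different road segments).
rsu_a = full[:10, :]   # trips 0-9, all four features
rsu_b = full[10:, :]   # trips 10-19, all four features

# Vertical FL: clients hold DIFFERENT features for the SAME samples
# (e.g., a traffic agency and a fleet operator describing the same trips).
agency = full[:, :2]   # speed and occupancy for all 20 trips
fleet = full[:, 2:]    # headway and incident flag for all 20 trips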

2.2.3. System-Based Types of Federated Learning

In the literature, FL is often categorized according to the characteristics of the participating systems and the distribution of data across them, leading to two widely recognized approaches:
  • Cross-silo Federated Learning: In federated learning, a silo is an entity, device, or node managing substantial data independently. This approach is ideal when there are few nodes but abundant data, requiring constant training participation. It supports both horizontal and vertical federated learning, regardless of how data are partitioned across nodes [15,17,20].
  • Cross-device federated learning: This approach involves numerous devices, with minimal data and lower training capacity than cross-silo setups, posing challenges in data processing and synchronization [15,17,20].
From a system design perspective, the ITS applications studied here are suitably modeled as cross-silo FL, where each silo (e.g., a city, a fleet operator, or an infrastructure provider) aggregates data from many devices and participates as a long-lived client.

2.2.4. Federated Learning Algorithms

A federated learning algorithm consolidates updated parameters from local nodes into the global model. At the algorithmic level, most FL frameworks implement some variant of federated averaging (FedAvg) or stochastic gradient descent, where local models are trained on client data and periodically aggregated at a coordinator [13]. Numerous refinements exist to handle data and system heterogeneity or robustness to faulty clients, but the frameworks we analyze in Section 5 can all be seen as instantiations of this general principle. For this reason, we do not focus on individual FL optimizers; instead, we treat them as interchangeable components and concentrate on the architectural differences between centralized, hierarchical, and digital-twin-enhanced deployments.
Beyond these classical baselines, recent adaptive federated optimization methods have introduced server-side variants of Adagrad, Adam, and Yogi [21]. By adapting learning rates from the history of aggregated gradients, these optimizers improve convergence stability under highly heterogeneous, non-IID client data and reduce the need for manual tuning of step sizes and momentum.
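To make the server-side adaptive idea concrete, the sketch below (our illustration, not code from [21]) treats the averaged client delta as a pseudo-gradient and applies an Adam-style update on the server; the hyperparameter values and state layout are assumptions chosen for readability.

import numpy as np

def server_adam_step(w_global, client_deltas, state, lr=0.01,
                     beta1=0.9, beta2=0.99, tau=1e-3):
    # FedAdam-style server update: the mean client delta acts as a pseudo-gradient.
    delta = np.mean(client_deltas, axis=0)
    state["m"] = beta1 * state["m"] + (1 - beta1) * delta
    state["v"] = beta2 * state["v"] + (1 - beta2) * delta ** 2
    w_new = w_global + lr * state["m"] / (np.sqrt(state["v"]) + tau)
    return w_new, state

# Usage: in the round loop of the earlier sketch, compute
# deltas = [w_client - w_global for w_client, _ in updates], initialize
# state = {"m": np.zeros_like(w_global), "v": np.zeros_like(w_global)}, and
# replace the plain averaging step with
# w_global, state = server_adam_step(w_global, deltas, state).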
Complementary algorithmic advances address statistical heterogeneity and client drift. FedNova [22] normalizes local updates before aggregation to eliminate objective inconsistency when clients use different local step counts. SCAFFOLD [23] employs control variates to correct client drift, improving convergence under non-IID client distributions. FedMA [24] matches and averages hidden units layer-wise across local models so convolutional and recurrent networks from different silos can be aggregated effectively, even with differing architectures or momentum states. These methods are especially relevant for ITS deployments, where vehicles and roadside units have non-IID data, heterogeneous compute, and intermittent participation, and they can be combined with any FL framework in Section 5 and Section 6.

3. Related Work

Research on ML and FL for ITS spans both high-level surveys and concrete architectural and framework proposals. In this section, we provide a concise narrative overview of two strands of prior work: (i) surveys that catalogue ITS applications, datasets, and challenges for ML/FL-enabled ITS, and (ii) infrastructural proposals and frameworks that target vehicular communication infrastructures, edge/cloud computing, and trust mechanisms. The goal is to position our contribution within this landscape and to highlight the remaining gap in terms of task-to-framework mappings and component-level adaptation methods.

3.1. Surveys and Baselines in ITS

First, in [25], the authors examine ML and deep-learning (DL) models for traffic flow prediction to improve smart traffic lights in ITS. They assess several ML models: gradient boosting regression, random forest, linear regression, and stochastic gradient descent, along with Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU) for DL. Using urban sensor data, a Multilayer Perceptron Neural Network (MLP-NN) and gradient boosting achieve the highest accuracy (R² = 0.93), while the recurrent models (LSTM and GRU) are less efficient due to longer training times. The study suggests real-time traffic light adaptation without new infrastructure, laying groundwork for ML in ITS, but does not address collaborative or federated methods for privacy-preserving traffic modeling. Next, in [26], a survey on federated learning for Connected and Automated Vehicles (CAVs) addresses its applications, benefits, and limitations. Applications are identified as cooperative perception, localization, traffic prediction, and driver behavior, stressing FL’s collaboration and data privacy. Challenges are high mobility, connectivity, heterogeneous data, and latency. Strategies discussed include adaptive client selection, efficient model aggregation, and integration with 6G, edge computing, and blockchain. Research questions focus on scalability, robustness, and energy efficiency for large-scale FL in CAVs. Complementarily, the study presented in [27] examines federated learning in ITS, focusing on its capacity for privacy-preserving decentralized analytics with extensive, varied, and sensitive data. ITS applications include traffic prediction, parking availability, and object recognition, enhancing efficiency and safety while maintaining privacy. Challenges such as data heterogeneity, communication constraints, scalability, and edge device resources are discussed, with solutions like adaptive client selection, efficient aggregation, and hybrid architectures evaluated. Future research aims to link theoretical models with large-scale ITS applications. The paper compiles datasets, metrics, results, and highlights implementations like FedGRU achieving over 90% in traffic prediction, noting communication and scalability constraints.
Similarly, Manias and Shami [28] propose incorporating federated learning within the Internet of Vehicles (IoV) and Intelligent Transportation Systems to facilitate decentralized model training while ensuring privacy. They highlight FL’s advantages over centralized methods, like reduced latency, improved scalability, and data protection compliance. The paper details suitable architectures, communication protocols, and aggregation strategies for dynamic vehicular environments. It also addresses challenges like non-IID data, intermittent connectivity, and network edge computational limits, suggesting future research for effective FL deployment in IoV and ITS. Likewise, a review of FL in ITS and their research challenges is presented in [29], categorizing applications into traffic prediction, autonomous driving, and intelligent traffic management. FL allows decentralized model training while preserving data privacy across nodes. Key challenges include handling non-IID data, defending against adversarial attacks, managing communication overhead, and edge resource constraints. Solutions discussed include hybrid FL architectures, secure aggregation protocols, and incentive mechanisms. The authors emphasize interdisciplinary efforts to apply theoretical FL frameworks to real-world ITS. Looking ahead, in [30], a survey explores FL in Intelligent Transportation Systems, overcoming centralized training limitations like privacy concerns, data silos, and poor real-time performance. It examines three ITS scenarios: traffic flow prediction, traffic target recognition, and vehicular edge computing, showing how FL supports decentralized training while maintaining privacy, enhancing scalability, and cutting communication overhead. The authors compare centralized and FL approaches, describe the FL system taxonomy, and review advanced FL solutions for ITS, including efficient algorithms, resource optimization, and security frameworks. They also discuss open challenges and future research, like integrating space–air–ground systems, using large language models, and ensuring ethical and regulatory compliance, to advance ITS applications. Finally, the authors of [31] provide a comprehensive survey on the application of federated learning to CAVs, emphasizing its potential to address data privacy, communication efficiency, and scalability challenges inherent in conventional centralized learning. The authors present a structured taxonomy of FL approaches tailored to CAV environments, categorizing them into horizontal, vertical, and hybrid FL and discussing their applicability to diverse vehicular scenarios such as perception, prediction, and planning. The paper reviews state-of-the-art FL algorithms designed for vehicular networks, highlighting methods for reducing communication overhead, managing heterogeneous data, and enhancing robustness against adversarial attacks. Additionally, it identifies key enabling technologies—such as vehicular edge computing, 5G/6G networks, and multi-access edge computing—that facilitate FL deployment in CAVs. The survey concludes by outlining open research challenges, including dynamic client participation, real-time model updates, integration with emerging AI models, and standardization issues, thus offering a clear roadmap for future advancements in privacy-preserving and efficient collaborative learning for CAV systems.
In parallel to these surveys, our own systematic review on ML/FL-based ITS applications [7] organizes 39 primary studies into taxonomies of ITS use cases and federated-learning frameworks. That SLR primarily aims at providing a comprehensive catalog and classification, whereas the present paper builds on its results to develop the task-to-framework mappings and component-level adaptation methodology discussed in Section 4, Section 5 and Section 6.
Completing this landscape from a security angle, the authors of [32] survey the Social Internet of Vehicles (SIoV), focusing on data security challenges and FL solutions. They classify SIoV security issues as confidentiality, integrity, availability, and privacy, reviewing cryptographic, blockchain, and trust management solutions. FL is highlighted as a promising method for privacy-preserving collaboration in SIoV, especially with diverse data sources and distributed networks. Furthermore, the paper also addresses research challenges like non-IID data, communication efficiency, and the integration of FL with edge computing and 6G, offering insights for secure machine learning frameworks in intelligent transportation systems.

3.2. Architectures and Integrations for Vehicular FL

Turning to architectural realizations, in [33], the concept of a Federated Vehicular Network (FVN) is proposed as a stable, resource-rich infrastructure for FL in vehicular networks. FVNs use Dedicated Short-Range Communications (DSRC) and mmWave technologies to provide scalable, low-latency, and privacy-preserving solutions for data- and computation-heavy tasks. Key contributions include the Federated Vehicular Cloud (FVC) as a backbone for data and model-parallel learning, enhancing fault tolerance and efficiency. Blockchain is used for reputation management and secure transactions, reducing malicious activity. Detailed architecture and evaluations show FVNs can surpass mobile-device-based FL, supporting high-performance learning in vehicular settings. In parallel, Javed et al. in [34] survey the integration of Blockchain Technology (BCT) with FL in Vehicular IoT networks, emphasizing their ability to improve data privacy, security, and decentralized intelligence in transportation systems. They explore how BCT can mitigate FL’s issues such as single points of failure, trust management, and model poisoning attacks, by using distributed ledgers for secure data exchange. The paper classifies integration approaches into architecture-based, consensus-driven, and application-oriented models, comparing their strengths and limitations. It also examines vehicular applications like autonomous driving and traffic management where BCT-FL can enhance efficiency and reliability. The study identifies challenges like scalability, latency, energy efficiency, and interoperability, and suggests future research on edge intelligence, AI-driven consensus, and next-gen communication technologies.
Table 1 summarizes these contributions, indicating their scope, main focus, and relevance.
In summary, the reviewed works encompass surveys and baselines that establish the state of the art in ML/FL for ITS and CAVs, as well as architectural and integration proposals that describe enabling infrastructures. Building on this foundation, our study links the classification of ITS applications and systems (from a prior systematic review) with the analysis of existing FL frameworks, proposing a pragmatic mapping between ITS functions and candidate FL solutions. This perspective complements existing surveys by suggesting comparative criteria and compact artifacts that can guide adoption in realistic ITS settings, thereby distinguishing our contribution from both broad surveys and infrastructure-oriented proposals.
The next section builds on our literature review [7] to review representative ITS applications and the FL frameworks that will be comparatively analyzed, setting the stage for the task-to-framework mappings and adaptation guidelines developed in Section 5 and Section 6.

4. ITS Applications and Frameworks

This section provides a synopsis of the findings from an earlier SLR [7]. The review initially compiled 283 documents, sourced from IEEE Xplore (194) and Scopus (89). After applying inclusion and exclusion criteria, along with removing duplicates and completing the final screening, 39 documents were selected for in-depth examination, as shown in Figure 3. The study was guided by two research questions: (i) Which ITS applications incorporate ML or FL? (ii) Which ITS frameworks utilize FL? The responses to these questions form the linkage between ITS applications and the frameworks that will be assessed in this paper.

4.1. Overview of the Literature Review Methodology

The ITS application and framework taxonomies used in this paper are grounded on our previously published SLR on ML and FL in ITS [7]. That SLR followed a standard protocol comprising database selection, query design, screening, and full-text assessment. Here, we summarize only the elements that are directly relevant for the present study.
Databases and time span. The SLR searched IEEE Xplore and Scopus for primary studies on ML- and FL-based ITS applications. Publications from 2016 onwards were considered, and only peer-reviewed journal articles and conference papers were included.
Search queries. For ML-based ITS applications, we combined ITS terms with generic machine-learning terminology. In both databases, the query followed the pattern
("Intelligent Transportation System") AND ("machine learning" OR "deep learning" OR "supervised learning" OR "unsupervised learning") AND "applications"
For FL-based ITS applications, we used the pattern
("Intelligent Transportation System") AND "federated learning" AND "applications"
Search terms were applied to titles, abstracts, and metadata in both databases.
Inclusion and exclusion criteria. The initial result set was filtered using the inclusion and exclusion criteria summarized in Table 2. In brief, studies had to (i) target road-transport ITS scenarios, (ii) implement or evaluate an ITS-relevant application, and (iii) employ a supervised or unsupervised ML/FL model. We excluded publications on maritime or air mobility, duplicate records, and works that did not propose or evaluate a concrete ITS application.

4.2. Applications Taxonomy

In our earlier research, we established a classification framework for ITS-related studies using two axes: (i) distinguishing whether the contribution is characterized as a system (involving architecture, components, and implementation) or as a model (concentrating on algorithmic or computational elements), and (ii) the functional domain being targeted, specifically road-centric, vehicle-centric, or user-centric. Within this taxonomy, we flag as federable those ITS systems whose architectures satisfy the three conditions introduced in Section 1: distributed data sources, feasibility of local feature processing at edge nodes, and node-level partitionability of the learning task. This federability tag is later used as a diagnostic filter when selecting the four applications adapted in Section 6, ensuring that the chosen case studies can realistically transition from centralized ML to federated learning without extensive architectural redesign.
Table 3 presents a compilation of representative examples aligned with the taxonomy categories, showing the breadth of applications, the corresponding ML algorithms, and their respective domains.
This taxonomy shows that road-centric applications dominate in terms of traffic management and congestion monitoring, vehicle-centric applications emphasize safety and perception, and user-centric applications focus on behavior and vulnerability. Importantly, contributions classified as systems provide architectural detail that makes them more readily adaptable to federated learning, while models often contribute algorithmic insights but lack explicit integration context. This distinction, together with the domain grouping, provides the foundation for the next step: evaluating which FL frameworks are most suitable for adapting these ITS applications. By keeping both models and systems in the taxonomy, we capture the full landscape of approaches reported in the literature. However, it is important to note that, in the subsequent analysis, we will concentrate primarily on systems, since they provide the architectural detail and data flow specification needed for assessing the suitability of federated learning frameworks, whereas models mainly contribute algorithmic insights without explicit deployment context.

4.3. FL Framework Selection

The selection of a federated learning framework in ITS is predominantly influenced by privacy constraints, network conditions, and the computational resources available [7]. Overall, 15 out of 39 papers from the systematic review are FL-enabled ITS frameworks, which are grouped in Table 4 according to their architectural family and deployment profile. The frameworks are grouped into three categories. Privacy-focused solutions (e.g., [48,49,50,51,52]) integrate mechanisms such as differential privacy, blockchain, or synthetic data generation to protect user information without degrading model utility. Integrable approaches (e.g., [8,53,54,55,56]) target specific challenges such as non-IID data, mobility-aware caching, vehicular trust, or concept drift, while remaining adaptable to broader ITS platforms. Finally, advanced infrastructure frameworks (e.g., [9,10,57,58,59]) exploit high-performance architectures, digital twins (DT), hierarchical federated learning, and 6G contexts for large-scale, multi-level coordination. This classification ensures coverage of representative solutions that balance privacy, adaptability, and technical sophistication in FL-enabled ITS.
For each framework, Table 4 reports the deployment style (in parentheses, e.g., cross-device, cross-silo, hierarchical, decentralized, asynchronous), the base model, the aggregation mechanism, and a representative ITS application. It also provides an operational assessment in terms of Latency (T), Connectivity (C), Privacy (P), and Edge compute (E) (H/M/L), offering a consolidated view of strengths and limitations across framework families.
While Table 4 characterizes each framework in terms of architectural family, deployment profile, and intended operational fit, it does not capture how each proposal is empirically validated in the original study (e.g., whether evidence is conceptual, model-level, or system-level). To fill this gap, Table 5 provides an evidence map that summarizes (i) evaluation scope, (ii) data provenance, (iii) the main quantitative focus and baselines, and (iv) whether FL architectures are explicitly instantiated and/or compared (as opposed to only comparing models, aggregation variants, or application-specific algorithms).
Quantitative synthesis from Table 4 and Table 5 (see Figure 4 for a compact visual summary) shows the following: among the 15 FL-enabled ITS frameworks, 7/15 are reported as cross-device deployments, 4/15 as cross-silo, 3/15 as hierarchical (including an end–edge–cloud two-layer design), and 2/15 as decentralized (note that these labels are not mutually exclusive). In addition, 1/15 incorporate asynchronous update mechanisms (vs. synchronous baselines). Regarding empirical validation, 13/15 works report quantitative results, with 8/15 evaluated at the model/task level and 5/15 including system/network-level simulation metrics (e.g., delay, Packet Delivery Ratio [PDR], cache hit ratio), whereas 2/15 remain conceptual proposals without numerical results. Despite this prevalence of quantitative evaluation, no study performs a controlled architecture-to-architecture benchmark; reported comparisons mainly target model baselines, aggregation variants (FedAvg-derived schemes appear in 8/15 frameworks), or application-specific algorithms under a fixed orchestration setting.
This evidence pattern is critical for interpreting architectural claims in the FL–ITS literature: most quantitative results support model-level gains, aggregation variants, or application-specific pipelines under a fixed orchestration setting, but they do not establish controlled advantages across FL architectural families. To make this gap explicit, we outline a benchmarking blueprint that specifies the minimum controls, datasets, and system-level metrics required for architecture-level comparisons in ITS. In line with this scope, the remainder of this work emphasizes a structured adaptation analysis of representative frameworks rather than introducing new end-to-end simulation benchmarks.
Informed by the taxonomy in Table 4 and the evidence map in Table 5, we next examine three representative frameworks from the non-privacy-focused categories. This choice aligns with our focus on system-level integration and scalability rather than cryptographic privacy mechanisms. From the integrable approaches category, we include FedGRU [8], a recurrent-based federated framework optimized for traffic flow prediction using large-scale organizational datasets (e.g., Uber or Didi). FedGRU is closely aligned with the classical client–server architecture and thus acts as a conceptual baseline to contrast with more advanced hierarchical designs. From the advanced infrastructure family, two frameworks were selected. The first, DT + HFL [9], integrates hierarchical federated learning with digital twins, providing a scalable and modular structure that enables real-time simulation and monitoring of vehicular systems without the excessive complexity of other DT-based proposals. The second, TFL-CNN [10], employs a dual-layer architecture tailored for 6G vehicular environments, where RSUs perform intermediate aggregation before transmitting updates to a cloud server. As reported in the original TFL-CNN study [10], this design supports scalability and low-latency coordination in large, heterogeneous networks.
In summary, the three selected frameworks—FedGRU, DT + HFL, and TFL-CNN—cover distinct layers of abstraction within the federated learning landscape: FedGRU represents an integrable baseline, DT + HFL emphasizes hierarchical and simulation-driven coordination, and TFL-CNN illustrates next-generation vehicular edge integration. Together, they provide a balanced foundation for comparative evaluation and subsequent adaptation of ITS applications.

5. Comparative Analysis of Selected FL Frameworks

This section presents a qualitative comparative analysis of three representative FL frameworks—FedGRU [8], DT + HFL [9], and TFL-CNN [10]. These frameworks illustrate the evolution of FL solutions in ITS, spanning increasing levels of structural sophistication and deployment scope. Methodologically, the comparative analysis proceeds in two steps. First, we derive evaluation criteria (e.g., scalability, latency, privacy, integrability) from the taxonomies and empirical trends reported in Section 4.3. Second, we apply these criteria to the selected frameworks using the architectural properties and empirical evidence reported in the cited literature, as summarized in the comparative tables of this section. As a result, all statements about scalability, latency, or privacy reflect qualitative interpretations of previously published experimental results, rather than new measurements obtained in this work. Finally, to operationalize the architecture-level evidence gap identified in Section 4.3, Section 5.4 outlines a compact benchmarking blueprint for controlled architecture-to-architecture evaluation in FL-enabled ITS, provided as methodological guidance for future experimental testbed or emulation/simulation studies.
FedGRU [8] represents an integrable approach derived from the client–server paradigm, serving as a conceptual baseline that bridges traditional centralized learning and more distributed designs. Building upon this foundation, DT + HFL [9] and TFL-CNN [10] embody advanced infrastructure frameworks that incorporate hierarchical aggregation and edge intelligence, enabling scalability and richer coordination across ITS environments. The following comparison examines their architectures, base models, aggregation mechanisms, privacy strategies, and scalability properties to identify their respective strengths and limitations.

5.1. Architecture and Infrastructure

The client–server architecture represents the most basic and widely used form of FL, serving as the reference point for more advanced frameworks. In this setup, individual devices—such as smartphones, sensors, or vehicles—train models locally on their own data and transmit only the learned parameters to a single central server. The server aggregates these parameters to update a global model, which is then redistributed to the clients for the next training round. Figure 5 illustrates this structure, where the simplicity of coordination comes at the cost of limited scalability and a potential bottleneck at the central server. Nevertheless, this approach remains effective in small or controlled environments with relatively few participating nodes.
Building on this baseline, FedGRU [8] follows the same client–server paradigm but adapts it to an organizational context rather than to individual devices. Here, the clients are organizations such as municipal traffic agencies, private companies, or sensor stations. Each organization maintains its own infrastructure, collects traffic flow data (e.g., from cameras, radars, or mobile devices), and trains a local GRU-based model before sending updates to the central cloud. While conceptually similar to the client–server model, this framework emphasizes collaboration between institutions rather than end-user devices, allowing the integration of larger and more diverse datasets. FedGRU therefore extends the baseline by scaling its scope to organizational silos, while maintaining the simplicity of centralized aggregation.
In contrast, DT + HFL [9] introduces a hierarchical architecture that combines federated learning with DTs. As shown in Figure 6, vehicles and IoT sensors in this framework do not perform local training; instead, they act only as data collectors. The raw information is forwarded to edge cloudlets, where a digital twin is created to represent each vehicle and its environment. These cloudlets perform local training and serve as the first aggregation layer in the hierarchical process. A central cloud server then consolidates the parameters from all cloudlets to generate the global model. The framework operates in six phases (initial, functional, analytical, anomaly identification, collaborative, and decision-making), where the analytical phase is particularly important for transmitting data into the simulated DT environment. This design increases system complexity but enables powerful anomaly detection capabilities and richer context awareness that cannot be achieved with simpler client–server structures.
Finally, TFL-CNN [10] employs a two-layer architecture illustrated in Figure 7. Vehicles perform local training and send their model updates to RSUs, which act as intermediate aggregation nodes. RSUs are equipped with limited computing and caching capabilities but provide context-aware aggregation within their coverage area. The RSU outputs are then forwarded to a centralized cloud with high-performance computing resources, where final aggregation is performed. Unlike the purely hierarchical DT + HFL, this two-layer design explicitly leverages RSUs as edge nodes, making it well suited for 6G-enabled vehicular networks that demand fast object detection, low latency, and scalability across large fleets of vehicles.
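A minimal sketch of this two-level aggregation pattern (our own illustration under simplifying assumptions, not code from [9] or [10]) shows how RSU-level and cloud-level weighted averaging compose; the update vectors and sample counts are synthetic placeholders.

import numpy as np

rng = np.random.default_rng(1)

def weighted_avg(updates):
    # FedAvg-style aggregation of (parameters, sample_count) pairs.
    total = sum(n for _, n in updates)
    return sum(n / total * w for w, n in updates), total

d = 4
# Hypothetical local updates: two RSUs, each covering three vehicles.
vehicles_per_rsu = [
    [(rng.normal(size=d), int(rng.integers(20, 100))) for _ in range(3)]
    for _ in range(2)
]

# Level 1: each RSU aggregates the vehicles in its coverage area.
rsu_models = [weighted_avg(v_updates) for v_updates in vehicles_per_rsu]

# Level 2: the cloud aggregates the RSU-level models, weighted by the number of
# samples each RSU represents, yielding the global model for redistribution.
w_global, _ = weighted_avg(rsu_models)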
In summary, the baseline client–server model establishes the simplest FL configuration, FedGRU extends it to organizational-level collaboration, DT + HFL adds hierarchical layers with digital twins for anomaly detection, and TFL-CNN introduces RSUs for scalable edge aggregation. These architectural choices directly influence scalability, computational requirements, and the type of ITS applications each framework can support. Beyond infrastructure, the next step is to compare the frameworks in terms of their purpose, data requirements, and base algorithms, which are summarized in Table 6.

5.2. Purpose, Data, and Base Algorithms

Table 6 presents a concise overview of the selected frameworks, highlighting their core purpose, the type of data considered, and the base algorithms implemented. The table is designed to provide a quick reference, while the accompanying text elaborates on the broader implications and nuances of each case.
FedGRU [8] extends the baseline client–server model by involving organizations as federated nodes. Its main contribution lies in aggregating mobility data from agencies and companies into GRU-based models for traffic prediction. This organizational scope allows for larger and more heterogeneous datasets than individual devices could provide.
DT + HFL [9], in contrast, adopts a hierarchical structure where cloudlets equipped with digital twins handle local training before global aggregation. By combining IoT sensor streams with contextual information such as weather and regulations, this framework emphasizes anomaly detection and system resilience. Its modular design admits various base models, although CNN and Recurrent Neural Networks (RNN) hybrids are highlighted.
TFL-CNN [10] situates itself in the context of 6G vehicular networks. Vehicles train local CNN models, which are aggregated at RSUs and then at the central cloud. This two-layer structure enables scalability and supports object recognition tasks such as traffic sign identification and pedestrian detection, while leveraging edge processing for latency reduction.

5.3. Aggregation, Privacy, and Limitations

Table 7 summarizes how each framework addresses model aggregation, whether it incorporates privacy mechanisms beyond the default of federated learning, and their main limitations. The table is concise by design, while the accompanying text highlights distinctive choices and constraints.
FedGRU [8] employs FedAvg for small deployments and a joint announcement protocol to reduce overhead at scale. It does not integrate additional privacy mechanisms, making it dependent on the inherent safeguards of FL. Its specialization in traffic prediction and reliance on homogeneous data limit its adaptability to broader ITS domains.
DT + HFL [9] uses hierarchical FedAvg across cloudlets and a central server, with digital twins providing an intermediate abstraction that protects raw data. This strengthens anomaly detection but introduces significant system complexity and resource requirements, constraining scalability in practice.
TFL-CNN [10] combines RSU-level aggregation with FedAvg at the cloud, improving scalability in edge-enabled vehicular networks. It assumes secure RSUs with encrypted communication, but its dependence on RSUs and sensitivity to data heterogeneity may delay convergence in less advanced infrastructures.
In summary, the three frameworks exhibit complementary trade-offs between simplicity, fidelity, and scalability. FedGRU retains the straightforward client–server organization, facilitating rapid deployment but limiting extensibility beyond traffic-flow prediction; thus, it is regarded as a reference architecture for comparison. DT + HFL enhances analytical depth through hierarchical aggregation and digital twins, offering richer contextual modeling while potentially increasing computational demand. TFL-CNN achieves efficient coordination across vehicular networks through RSU-based aggregation and is conceptually better aligned with scalability requirements in edge-enabled 6G contexts.

5.4. Architecture-Level Benchmarking Blueprint for FL-Enabled ITS

Table 4, Table 5, and Figure 4 show that, while quantitative evaluation is common in FL–ITS studies, controlled architecture-to-architecture benchmarking is largely missing. To make future architectural comparisons actionable and reproducible, we propose a compact benchmarking blueprint tailored to ITS constraints (mobility, intermittent Vehicle-to-Everything (V2X) links, heterogeneous edge hardware), using the classical client–server orchestration as the baseline.
Benchmark objective and controls. The objective is to isolate the impact of the orchestration architecture (client–server, hierarchical, or decentralized/asynchronous) from confounding factors. Thus, each experimental condition must fix (i) the learning task and dataset partitioning (including a controlled non-IID profile), (ii) the model architecture and training hyperparameters, and (iii) the aggregation rule when possible (e.g., FedAvg at each aggregation point). Only the orchestration topology (aggregation locations), update scheduling (synchronous vs. asynchronous), and compute placement across vehicle/RSU/cloud tiers should vary.
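One way to operationalize these controls (a sketch under assumed factor names and values, not a prescription) is to enumerate an experiment grid in which only the orchestration architecture, the network regime, and the random seed vary:

from itertools import product

# Factors held fixed across all conditions, per the blueprint's controls.
FIXED = {
    "task": "traffic_flow_prediction",
    "partitioning": "dirichlet_alpha_0.3",  # controlled non-IID profile
    "model": "gru_small",
    "aggregation": "fedavg",
    "local_epochs": 2,
    "clients_per_round": 6,
}

# Factors that vary between conditions.
ARCHITECTURES = ["client_server", "hierarchical_rsu", "decentralized_async"]
NETWORK_REGIMES = ["good", "medium", "poor"]
SEEDS = [0, 1, 2]  # at least three independent repetitions per condition

conditions = [
    {**FIXED, "architecture": a, "network": n, "seed": s}
    for a, n, s in product(ARCHITECTURES, NETWORK_REGIMES, SEEDS)
]
# 3 architectures x 3 regimes x 3 seeds = 27 runs in the minimal design.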
Approach A (gold standard): minimal physical testbed. A compact but representative ITS testbed can use one central server, two RSU/MEC edge aggregators, and at least three clients per aggregator (nine nodes in the minimal configuration). Hardware heterogeneity should be explicit: (i) lightweight sensing/tiny-model endpoints (ESP32-class), (ii) vehicle/On-Board Unit (OBU)-grade clients (Raspberry Pi-class SBCs), (iii) higher-compute RSU/MEC aggregators (Jetson-class), plus an x86 cloud server. This setup supports measuring end-to-end training latency, communication overhead, and device-level compute/energy under realistic V2X variability.
Approach B: software-based benchmarking. We distinguish two complementary software-based tracks, depending on whether the goal is to study static, repeatable regimes or dynamic, mobility-driven variability.
B1—Controlled network emulation (static/stress-test regimes). FL roles (clients/aggregators/server) run as containers with the actual training and communication stack. Network conditions are imposed using traffic control mechanisms (delay, loss, jitter, rate limits) [60] and, when needed, explicit multi-hop topologies via Mininet [61] or container-aware variants such as Containernet [62]. This track yields architecture-sensitive measurements under repeatable conditions: time-to-target, bytes-per-round (including control signaling), and the impact of orchestration choices (e.g., intermediate aggregation placement, asynchronous updates) under predefined “typical” vs. “worst-case” V2X impairment profiles. Emulation thus supports highly reproducible what-if testing, but does not generate mobility-driven variability unless scripted.
B2—Mobility-aware network simulation (dynamic/large-scale regimes). Mobility is generated with SUMO [63] and coupled to packet-level network simulators (e.g., Veins over OMNeT++/INET [64] or ns-3 [65]) to capture realistic temporal variability. This enables controlled experiments with time-varying connectivity—handovers, intermittent links, client churn, and changing link quality—as vehicles move. FL traffic is injected as application flows parameterized by the learning setup (e.g., update size/compression, clients per round, update periodicity, and synchronization/asynchrony policies), supporting analysis of distributions (median/95th percentile) and averages across many mobility realizations and fleets larger than any physical testbed. Thus, simulation offers dynamic realism and scalability at the cost of relying on abstracted network/compute models rather than executing all system components natively.
B2 can be used to extract representative distributions for delay/loss/availability (e.g., typical and extreme percentiles) that inform the impairment profiles for B1, while B1 can provide measured update sizes, serialization overheads, and processing times that parameterize B2’s traffic models.
Minimal metric set. To avoid over-parameterization, we recommend reporting a compact metric set:
  • Learning outcome: Task metric (e.g., Mean Absolute Error (MAE)/Root Mean Square Error (RMSE) for traffic time-series; F1/precision/recall for detection/classification).
  • End-to-end efficiency: Time-to-target (time to reach a fixed performance threshold) and rounds-to-target.
  • Communication cost: Total bytes transmitted (uplink/downlink) and bytes-per-round (including control signaling when applicable).
  • Compute/energy footprint: Per-role CPU/GPU utilization and memory; energy-per-round for Approach A (or compute-time proxies for Approach B).
Latency-related metrics should be reported as distributions (median and 95th percentile) due to mobility-induced variability.
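The sketch below shows how this compact metric set could be computed from per-round logs; the log field names (accuracy, wall_clock_s, bytes_up, bytes_down, round_latency_s) and the target threshold are hypothetical.

import numpy as np

def summarize_run(rounds, target=0.90):
    # Compact metric set from a list of per-round log dictionaries.
    acc = np.array([r["accuracy"] for r in rounds])
    elapsed = np.cumsum([r["wall_clock_s"] for r in rounds])
    lat = np.array([r["round_latency_s"] for r in rounds])
    bytes_per_round = [r["bytes_up"] + r["bytes_down"] for r in rounds]
    hit = np.flatnonzero(acc >= target)
    return {
        "rounds_to_target": int(hit[0]) + 1 if hit.size else None,
        "time_to_target_s": float(elapsed[hit[0]]) if hit.size else None,
        "total_bytes": int(sum(bytes_per_round)),
        "bytes_per_round": float(np.mean(bytes_per_round)),
        "latency_median_s": float(np.median(lat)),       # report distributions,
        "latency_p95_s": float(np.percentile(lat, 95)),  # not only averages
    }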
Experimental design and statistical reporting. A practical design is a two-factor experiment: Architecture (baseline client–server vs. one alternative architecture at a time) × Network regime (e.g., good/medium/poor connectivity or low/high mobility). For each condition, run multiple independent seeds (≥3) and report confidence intervals and effect sizes. When assumptions hold, a two-way ANOVA can be used; otherwise, a non-parametric alternative (e.g., Kruskal–Wallis with corrected post hoc pairwise tests) is appropriate.
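As an example of the non-parametric route mentioned above, the following sketch compares time-to-target across three architectures for one network regime using SciPy's Kruskal–Wallis test; the sample values are invented purely for illustration, and corrected pairwise post hoc tests would follow in a full analysis.

import numpy as np
from scipy import stats

# Hypothetical time-to-target samples (seconds), three seeds per architecture.
client_server = np.array([410.2, 398.7, 421.5])
hierarchical = np.array([325.4, 318.9, 331.0])
decentralized = np.array([352.8, 344.1, 361.7])

h_stat, p_value = stats.kruskal(client_server, hierarchical, decentralized)

# A simple effect size: relative change in median time-to-target vs. the baseline.
effect = (np.median(hierarchical) - np.median(client_server)) / np.median(client_server)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}, median change = {effect:+.1%}")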
Executing either Approach A or Approach B requires a dedicated engineering and experimental campaign. Approach A involves procuring and configuring heterogeneous devices, implementing and instrumenting cross-tier orchestration, measuring energy/resource footprints, and running repeated trials across multiple regimes. Approach B requires containerizing the pipeline, calibrating impairment profiles and mobility scenarios, integrating FL update-traffic generation consistent with client participation and round timing, and performing factorial sweeps to achieve statistically robust conclusions. Ensuring reproducibility (configuration management, artifact packaging, trace release, and consistent baselines) further increases the effort. For these reasons, a controlled architecture-to-architecture benchmark would constitute a standalone experimental contribution; accordingly, Section 6 focuses on a structured adaptation analysis of representative frameworks rather than introducing new simulation-based benchmarks.

6. Results

This section evaluates how selected ITS applications, originally implemented with traditional ML, can be restructured to operate under an FL paradigm. The analysis focuses on architectural feasibility, data distribution, and the specific adaptations required to integrate these systems into decentralized frameworks. The evaluation that follows is qualitative and architecture-oriented; no new simulations or testbed measurements are performed in this work. The applications were chosen from the taxonomy developed in the systematic review, but not all were suitable for federation. To make this notion precise, we applied the federability criteria introduced in Section 1, formulated as three diagnostic questions: (i) Are the main data sources naturally distributed across vehicles, roadside units, or user devices? (ii) Is it feasible to execute the core learning task locally at those nodes, without centralizing raw data? (iii) Can the application’s learning pipeline be decomposed into node-level model updates plus an aggregation step at an edge or cloud coordinator? Only applications answering all three questions positively, i.e., those combining multi-source data collection with the potential for local processing, were retained as federable candidates. As a result, four ITS applications were selected for adaptation: traffic prediction and management [36,66], real-time accident detection [12], transport mode identification [53], and driver profiling and behavior detection [67]. Applications such as vehicle type identification by sound, driver fatigue recognition, and railway fault prediction were excluded because they rely on fixed or centralized data capture, which prevents distributed training without significant architectural redesign.
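Purely as an illustration of how the three diagnostic questions can be applied systematically, the checklist below encodes them as boolean flags per application. The class and field names are our own shorthand, not a formal assessment instrument, and the example entries simply restate the outcome described above.

```python
# Minimal sketch: the three federability questions as a boolean checklist.
from dataclasses import dataclass


@dataclass
class ITSApplication:
    name: str
    distributed_data_sources: bool   # (i) data naturally spread across vehicles/RSUs/devices
    local_processing_feasible: bool  # (ii) core task can run at the edge without raw-data upload
    decomposable_pipeline: bool      # (iii) node-level updates plus an aggregation step


def is_federable(app: ITSApplication) -> bool:
    return (app.distributed_data_sources
            and app.local_processing_feasible
            and app.decomposable_pipeline)


# Example: the four retained applications answer all three questions positively.
apps = [
    ITSApplication('Traffic prediction and management', True, True, True),
    ITSApplication('Real-time accident detection', True, True, True),
    ITSApplication('Transport mode identification', True, True, True),
    ITSApplication('Driver profiling and behavior detection', True, True, True),
]
federable = [a.name for a in apps if is_federable(a)]
```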
Since FedGRU [8] served as a baseline (a meta-framework extension of the client–server model), the comparative framework analysis focuses on the two architectures deemed most suitable for adaptation: DT + HFL [9] and TFL-CNN [10]. These frameworks represent complementary paradigms of hierarchical and edge-enabled federation, offering distinct but compatible routes for embedding federated learning into ITS. In addition, they provide sufficient implementation detail for mapping ITS applications onto them.
Each chosen ITS application is evaluated in parallel under DT + HFL and TFL-CNN within a standardized, matrix-based analysis format. This format emphasizes common architectural traits, framework-specific adaptation needs, and the trade-offs among latency, scalability, and privacy. The methodology maintains consistency across examples, demonstrating how conventional ITS systems can evolve into practical federated deployments in real-world scenarios.

6.1. Traffic Prediction and Management Applications

Two traffic prediction studies were considered for adaptation analysis: Najada [66] and Lakshna [36]. Both share common architectural features—heterogeneous sensing devices, real-time data requirements, and a need for hierarchical coordination—which make them suitable for federated settings. Table 8 summarizes their integration possibilities under the two selected frameworks, DT + HFL and TFL-CNN.
In both traffic prediction studies, the DT + HFL structure assigns each vehicle or sensor cluster a digital twin that simulates real-time traffic conditions. Cloudlets act as intermediate aggregation nodes, merging data from edge devices. The analytical phase executes models such as linear regression or random forest within the digital twin. A global model is then formed at a Lambda server by combining parameters from all cloudlets. This setup offers scalability and privacy but increases infrastructure costs and the synchronization effort between physical and digital nodes.
TFL-CNN resembles conventional vehicular structures, with RSUs acting as a coordination layer linking vehicles and the cloud in a two-tier system. RSUs locally train models such as random forests or lightweight regressions and send parameters to a central server using FedAvg. The addition of 6G edge connectivity is expected to enable fast synchronization and real-time congestion alerts. This lightweight approach, however, depends on RSUs with adequate computing power and communication bandwidth.
Overall, our qualitative analysis suggests that DT + HFL is conceptually better aligned with detailed, simulation-based decision-making for dense urban networks, while TFL-CNN is conceptually better aligned with scalability and communication efficiency in 6G vehicular contexts. The choice between them depends on the deployment goal: DT + HFL for predictive precision and proactive management, or TFL-CNN for lightweight, real-time adaptation across wide vehicular grids. By contrast, in a traditional client–server setup, ITS applications rely on a main server to handle communication and processing among nodes or vehicles. Local devices train partial models and send their parameters or data to the server, which aggregates them into a global model. With no intermediate RSUs or cloudlets, all coordination and aggregation depend on the central server. To preserve data privacy, encryption of identifiers such as MAC addresses is advised before transmission. This setup simplifies management but centralizes core tasks, reducing the distributed benefits of federated applications.
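For concreteness, the snippet below shows the sample-weighted FedAvg step [13] that recurs in these adaptations: at cloudlets in DT + HFL, at RSUs in TFL-CNN, and at the central server in the client–server baseline. The parameter layout (a list of NumPy arrays plus a local sample count) is an implementation assumption of ours, not a detail specified by either framework.

```python
# Minimal sketch (assumption: each client update is a list of NumPy arrays
# plus its local sample count, as in the standard FedAvg formulation).
from typing import List, Tuple
import numpy as np

ClientUpdate = Tuple[List[np.ndarray], int]   # (model parameters, n_local_samples)


def fedavg(updates: List[ClientUpdate]) -> List[np.ndarray]:
    """Sample-weighted average of client parameters for one aggregation tier."""
    total = sum(n for _, n in updates)
    n_layers = len(updates[0][0])
    return [
        sum(params[k] * (n / total) for params, n in updates)
        for k in range(n_layers)
    ]

# Hierarchical use: RSUs or cloudlets aggregate their local clients first; the
# central server then applies the same routine to the tier-level models,
# weighted by the number of samples each RSU or cloudlet represents.
```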

6.2. Real-Time Accident Detection

The accident detection system from [12] was assessed for potential adaptation to the DT + HFL and TFL-CNN frameworks. This application, originally built over a VANET-based decentralized architecture, is crucial for ITS scenarios demanding ultra-low latency and continuous vehicle data exchange. Table 9 maps the system’s components to the two frameworks.
Within the DT + HFL framework, the accident detection system can be reorganized into a hierarchy where vehicles with onboard computing serve as edge nodes that update local models with sensed data such as speed, position, and vehicle ID. These nodes send summaries or weights to upper-level entities such as cloudlets or a central server, allowing digital twins to simulate vehicles for anomaly detection without raw data sharing. Although local aggregation could produce inconsistent models, a semi-centralized server layer can rectify this, albeit with added deployment complexity.
The system can integrate into the TFL-CNN framework with moderate adjustments. Vehicles using VANET modules serve as local training units, while RSU-like nodes handle local aggregation and send model updates to the cloud. This setup maintains quick local inference and supports 6G-ready edge growth. Its hierarchical coordination is conceptually better aligned with low-latency requirements, though it lacks flexibility for detailed simulation or digital replication.
In summary, our qualitative comparison indicates that the DT + HFL framework provides richer modeling and privacy control through digital twins and simulation layers, making it attractive for analytical extensions albeit with additional resource overhead. Conversely, TFL-CNN emphasizes rapid communication and simpler coordination and is conceptually better aligned with low-latency requirements, albeit at the cost of reduced representational fidelity. The choice between them depends on whether the ITS deployment prioritizes accuracy and contextual insight (DT + HFL) or real-time responsiveness (TFL-CNN).

6.3. Transport Mode Identification

Below is a summary of the assessment conducted on the transport mode identification application from [45], as outlined in Table 10. This application uses mobile devices to gather and categorize travel modes from contextual and sensor data, featuring two data acquisition approaches that influence how the system could be adapted to federated learning.
In the DT + HFL framework, the system could leverage mobile devices as local nodes to collect sensor data (accelerometer, GPS, gyroscope) and transfer it to digital twin entities on edge cloudlets. These twins model individual user mobility for privacy-preserving local training. Cloudlets serve as initial aggregators before central servers update the global model. The DT layer can emulate user mobility in different modes (walking, driving, cycling) to enhance model calibration. While the original setup uses centralized training, only slight adjustments are needed for cloudlet-based processing. Synchronizing DT instances and dealing with varying sensor data rates remain key challenges.
The TFL-CNN framework facilitates transport mode detection via hierarchical aggregation. RSUs or micro-edge servers can obtain features from smartphones or wearables, conduct local training, and then transmit model parameters to a central server. The CNN component of the framework is naturally suited to learning spatio-temporal patterns from sensor data. Unlike DT + HFL, digital twins are not required; aggregation occurs directly at the RSU with a lightweight FedAvg routine. Thus, the mobile sensing architecture can be extended cost-effectively, assuming stable network connectivity. This setup enables real-time classification with minimal added latency.
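As an illustration of the CNN component referred to above, a lightweight 1D convolutional classifier over fixed-length accelerometer/gyroscope windows might look as follows. The channel count, window length, and number of transport modes are assumptions for illustration only, not the architectures used in [45] or [10].

```python
# Minimal sketch (assumptions: 6 sensor channels, 128-sample windows,
# 5 transport modes; not the exact architecture of the surveyed works).
import torch
import torch.nn as nn


class TransportModeCNN(nn.Module):
    def __init__(self, n_channels: int = 6, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # collapse the temporal axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, window_length)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)


# Such a model would be trained locally on each device and aggregated at the
# RSU with the sample-weighted FedAvg routine sketched in Section 6.1.
model = TransportModeCNN()
logits = model(torch.randn(8, 6, 128))          # e.g., 8 windows of 128 samples
```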
In our qualitative assessment, the DT + HFL framework offers richer contextual modeling through digital twins but requires higher computational resources and synchronization overhead at the edge. In contrast, TFL-CNN adopts a lighter-weight architecture with simpler hierarchical aggregation and a low-latency coordination path, which makes it conceptually better aligned with the requirements of real-time multimodal mobility detection in urban environments.

6.4. Driver Profiling and Behavior Detection

Below, we summarize the evaluation of the driver profiling and behavior detection application from [67], as shown in Table 11. This tool analyzes telemetry and driver behavior to detect risk, patterns, and anomalies using on-board and mobile data.
The original system architecture collects data from OBD-II sensors, radar sensors, and a mobile app linked to a central cloud. In the DT + HFL framework, the mobile app and on-board unit (OBU) can function as local nodes, each linked to a digital twin simulating driver behavior. Cloudlets act as intermediaries, handling local model training and sending aggregated parameters to a global server. Digital twins incorporate contextual data such as traffic density, vehicle type, and environment to improve behavioral modeling. The cloudlet-layer aggregated model estimates driver risk levels, while the central server combines these to create population-level profiles. Privacy is maintained as personal data stays within local digital twin instances.
In the TFL-CNN framework, vehicles serve as edge nodes, with nearby RSUs or mobile base stations acting as aggregation points. The mobile app acts as a supplemental RSU, sending summarized behavioral parameters rather than raw sensor data. RSUs consolidate driver-level models (e.g., CNN-based classifiers for risky maneuvers) and transmit parameters to the central cloud for global aggregation via the FedAvg algorithm. This hierarchical training setup provides real-time feedback and lowers communication costs. Thus, the dual approach of CNN feature extraction at vehicles and hierarchical aggregation at RSUs supports scalable monitoring of driver behavior across fleets.
Overall, our qualitative analysis suggests that DT + HFL enables deeper semantic analysis of driver behavior through digital twins and multi-layer aggregation, which is suitable for detailed behavioral studies. In contrast, TFL-CNN prioritizes low-latency hierarchical inference, which is conceptually better aligned with the requirements of large-scale, connected vehicle networks.
In brief, the comparative examination of the chosen ITS applications highlights how incorporating federated learning is feasible across various transportation sectors, from infrastructure-level traffic management to driver profiling. DT + HFL offers enhanced contextual modeling via its hierarchical, simulation-based setup, while TFL-CNN is conceptually better aligned with low-latency and scalability requirements in vehicle networks. Taken together, these qualitative observations form a basis for assessing the balance between model accuracy and deployment efficiency.

7. Conclusions and Future Work

This study examined ITS applications that employ ML and analyzed their potential transition toward FL. Through a systematic review and a comparative framework analysis, we identified the implementation patterns, aggregation algorithms, and structural requirements that determine the feasibility of adapting traditional ITS systems to a federated paradigm. Moreover, the consolidated evidence map (Table 5) and the compact counts summary (Figure 4) indicate that FedAvg and FedAvg-derived schemes dominate reported FL-enabled ITS frameworks, whereas hierarchical and asynchronous mechanisms appear less frequently and, crucially, are not assessed through controlled architecture-to-architecture benchmarks. In particular, despite the prevalence of quantitative evaluation in the literature, we found no study that performs a controlled architecture-level comparison under fixed tasks, models, and network regimes. No claim of empirical performance superiority is made; all architectural comparisons are conceptual and grounded in previously published evidence.
While our previous work [7] provided a comprehensive literature review on ML/FL-based ITS applications, the present paper goes one step further by deriving taxonomies, task-to-framework mappings, and a component-level adaptation methodology for concrete ITS systems. In this sense, our contribution complements existing surveys by offering guidelines that practitioners can use directly when selecting and tailoring FL architectures for specific ITS deployments.
Our analysis confirmed that traditional ITS applications typically collect data from multiple distributed sensors or vehicles but centralize model training in a single server. Although this approach improves predictive performance by aggregating large datasets, it also introduces vulnerabilities such as bottlenecks, single points of failure, and risks to data privacy. In contrast, federated learning mitigates these issues by decentralizing training—sharing model parameters instead of raw data—and thus enhancing privacy and robustness. However, FL also faces challenges related to latency, communication overhead, and node heterogeneity, which remain open questions for large-scale deployment.
According to the distinction between models and systems, ITS applications characterized as systems—detailing architecture, data flow, and component interaction—are prime candidates for federation. Conversely, those characterized as models only use datasets for prediction, without implementation details. Applications whose nodes are capable of some level of local processing adapt well, whereas those with centralized architectures need significant restructuring, including local computing and distributed coordination enhancements.
The architecture-oriented adaptation analysis demonstrated that ITS applications can effectively integrate with frameworks such as DT + HFL and TFL-CNN by introducing intermediate aggregation layers and edge-computing components (e.g., cloudlets or RSUs). Both frameworks improve scalability and privacy, though they differ in complexity and implementation context: DT + HFL leverages digital twins for virtual simulation and monitoring, while TFL-CNN employs a two-layer edge–cloud hierarchy suited for vehicular and mobile networks. The integration of DTs proved particularly beneficial, enabling realistic modeling of dynamic traffic conditions without exposing sensitive information.
This work—structured as a literature-based, architecture-level analysis—synthesizes and critically examines existing research without introducing new quantitative, simulation, or testbed experiments. Consequently, the discussion on scalability, latency, and accuracy reflects previously published empirical evidence rather than measurements collected for this study. Likewise, economic and operational costs, including communication overhead, energy consumption, and infrastructure requirements, fall outside the scope of this review. These aspects are more appropriately evaluated in concrete deployments and therefore remain an important direction for future replication and implementation studies.
The most immediate direction for future work is to operationalize the architecture-level benchmarking blueprint (Section 5.4) as a standalone experimental effort. This can follow two complementary tracks: (i) a minimal physical testbed on heterogeneous edge hardware, using either an FL stack (e.g., Flower) or a lightweight custom coordinator to enable instrumentation, and (ii) mobility-aware simulation/emulation campaigns that sweep controlled network regimes and dynamic connectivity (e.g., SUMO traces coupled with Veins or ns-3). Both tracks would quantify architecture-level trade-offs under ITS variability, including latency distributions, communication overhead, and compute/energy footprint proxies.
Beyond benchmarking, an additional research avenue is to study how hierarchical orchestration interacts with privacy-enhancing mechanisms (e.g., differential privacy or trust layers) under the same controlled architecture-level protocol.
Overall, the study establishes a conceptual basis for the gradual migration of ITS applications toward federated learning, emphasizing adaptability, scalability, and privacy as key enablers for future deployments.

Author Contributions

Conceptualization, M.S.V.R., L.U.-A. and C.T.-B.; methodology, M.S.V.R., C.T.-B. and X.C.H.; formal analysis, M.S.V.R. and C.T.-B.; investigation, M.S.V.R., P.B. and N.O.G.; validation, P.B. and N.O.G.; writing—original draft preparation, M.S.V.R. and C.T.-B.; writing—review and editing, L.U.-A. and N.O.G.; supervision, L.U.-A.; project administration, X.C.H.; funding acquisition, L.U.-A. and X.C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Escuela Politécnica Nacional under grant number PIGR-22-06 (APPSTRADA). The APC was funded by the Vicerrectorado de Investigación, Innovación y Vinculación of Escuela Politécnica Nacional.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

During the preparation of this manuscript, the authors used Writefull (Writefull’s model, Writefull Ltd., London, United Kingdom, 2025) and ChatGPT (GPT-5, OpenAI, San Francisco, CA, USA, 2025) to support specific editorial and consistency tasks. Writefull was employed for language polishing and grammar correction. ChatGPT was used to assist in ensuring structural consistency across sections, harmonizing terminology, and formatting. All AI-assisted outputs were reviewed, edited, and validated by the authors, who take full responsibility for the final content of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, F.Y. Parallel control and management for intelligent transportation systems: Concepts, architectures, and applications. IEEE Trans. Intell. Transp. Syst. 2010, 11, 630–638. [Google Scholar] [CrossRef]
  2. Azán, S.; Gómez, M.; Zarichta, D.; Bocarejo, J.P.; Chávez, J.C. Esquemas de Implantación de Tecnologías Inteligentes de Transporte en América Latina: Estudios de Casos y Recomendaciones; CAF: Caracas, Venezuela, 2018; Available online: https://scioteca.caf.com/handle/123456789/1396 (accessed on 31 December 2025).
  3. Nur Fadila, J.; Haliza Abdul Wahab, N.; Alshammari, A.; Aqarni, A.; Al-Dhaqm, A.; Aziz, N. Comprehensive Review of Smart Urban Traffic Management in the Context of the Fourth Industrial Revolution. IEEE Access 2024, 12, 196866–196886. [Google Scholar] [CrossRef]
  4. Anvesh, E.; Brindha, D.; Kumar, C.H. Dynamic Traffic Optimization Through Cloud-Enabled Big Data Analytics and Machine Learning for Enhanced Urban Mobility. In Proceedings of the 2025 3rd International Conference on Communication, Security, and Artificial Intelligence (ICCSAI), Greater Noida, India, 4–6 April 2025; Volume 3, pp. 469–474. [Google Scholar] [CrossRef]
  5. Herrera Piñeiro, M. Análisis del impacto del aprendizaje federado en entornos desbalanceados. Master’s Thesis, Universidad del País Vasco, San Sebastián, Spain, 2021. Available online: http://hdl.handle.net/10810/58969 (accessed on 31 December 2025).
  6. Jung, A. Federated Learning: From Theory to Practice. arXiv 2025, arXiv:2505.19183. [Google Scholar] [CrossRef]
  7. Vela Romo, M.S.; Tripp-Barba, C.; Hinojosa, X.C.; Barbecho, P.; Urquiza-Aguiar, L. A Systematic Review of ML in ITS: A Taxonomy from Traditional Models to Federated Learning. In Proceedings of the 2025 IEEE Technology and Engineering Management Society (TEMSCON LATAM), Cartagena, Colombia, 18–21 June 2025; pp. 1–8. [Google Scholar] [CrossRef]
  8. Liu, Y.; Yu, J.J.Q.; Kang, J.; Niyato, D.; Zhang, S. Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach. IEEE Internet Things J. 2020, 7, 7751–7763. [Google Scholar] [CrossRef]
  9. Gupta, D.; Moni, S.S.; Tosun, A.S. Integration of Digital Twin and Federated Learning for Securing Vehicular Internet of Things. In Proceedings of the 2023 Research in Adaptive and Convergent Systems (RACS 2023), Gdansk, Poland, 6–10 August 2023; Association for Computing Machinery, Inc.: New York, NY, USA, 2023; pp. 1–8. [Google Scholar] [CrossRef]
  10. Zhou, X.; Liang, W.; She, J.; Yan, Z.; Wang, K. Two-Layer Federated Learning with Heterogeneous Model Aggregation for 6G Supported Internet of Vehicles. IEEE Trans. Veh. Technol. 2021, 70, 5308–5317. [Google Scholar] [CrossRef]
  11. Hernández Jayo, U. Telecomunicación en el Sector Transporte; Universitat Oberta de Catalunya: Barcelona, Spain, 2014. [Google Scholar]
  12. Dogru, N.; Subasi, A. Traffic Accident Detection Using Random Forest Classifier. In Proceedings of the 2018 15th Learning and Technology Conference (L&T), Jeddah, Saudi Arabia, 25–26 February 2018; IEEE: New York, NY, USA, 2018. [Google Scholar] [CrossRef]
  13. McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-Efficient Learning of Deep Networks from Decentralized Data. arXiv 2016, arXiv:1602.05629. [Google Scholar] [CrossRef]
  14. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; Mcmahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical Secure Aggregation for Privacy-Preserving Machine Learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017. [Google Scholar]
  15. Mammen, P.M. Federated Learning: Opportunities and Challenges. arXiv 2021, arXiv:2101.05428. [Google Scholar] [CrossRef]
  16. Pavez Collado, J.O.A. Simulación de Sistemas de Federated Learning en Redes Moviles 5G. Bachelor Thesis, Universidad de Chile, Santiago, Chile, 2023. Available online: https://repositorio.uchile.cl/handle/2250/193429 (accessed on 31 December 2025).
  17. Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and Open Problems in Federated Learning. Found. Trends Mach. Learn. 2019, 14, 1–210. [Google Scholar] [CrossRef]
  18. Liu, W.; Chen, L.; Zhang, W. Decentralized Federated Learning: Balancing Communication and Computing Costs. IEEE Trans. Signal Inf. Process. Over Netw. 2021, 8, 131–143. [Google Scholar] [CrossRef]
  19. Diao, E.; Ding, J.; Tarokh, V. HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients. arXiv 2020, arXiv:2010.01264. [Google Scholar] [CrossRef]
  20. Sánchez Farías, S.A. Aplicación de Aprendizaje Federado para Reconocimiento de Actividades Humanas. Bachelor Thesis, Universidad de Piura, Piura, Peru, 2023. Available online: https://hdl.handle.net/11042/6345 (accessed on 31 December 2025).
  21. Reddi, S.J.; Charles, Z.; Zaheer, M.; Garrett, Z.; Rush, K.; Konecný, J.; Kumar, S.; McMahan, H.B. Adaptive Federated Optimization. In Proceedings of the International Conference on Learning Representations (ICLR), Virtual, 3–7 May 2021. [Google Scholar]
  22. Wang, J.; Liu, Q.; Liang, H.; Joshi, G.; Poor, H.V. Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Virtual, 6–12 December 2020. [Google Scholar]
  23. Karimireddy, S.P.; Kale, S.; Mohri, M.; Reddi, S.J.; Stich, S.U.; Suresh, A.T. SCAFFOLD: Stochastic Controlled Averaging for Federated Learning. In Proceedings of the 37th International Conference on Machine Learning (ICML), Virtual, 13–18 July 2020. [Google Scholar]
  24. Wang, H.; Yurochkin, M.; Sun, Y.; Papailiopoulos, D.; Khazaeni, Y. Federated Learning with Matched Averaging. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  25. Navarro-Espinoza, A.; López-Bonilla, O.R.; García-Guerrero, E.E.; Tlelo-Cuautle, E.; López-Mancilla, D.; Hernández-Mejía, C.; Inzunza-González, E. Traffic Flow Prediction for Smart Traffic Lights Using Machine Learning Algorithms. Technologies 2022, 10, 5. [Google Scholar] [CrossRef]
  26. Chellapandi, V.P.; Yuan, L.; Żak, S.H.; Wang, Z. A Survey of Federated Learning for Connected and Automated Vehicles. In Proceedings of the 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), Bilbao, Spain, 24–28 September 2023; pp. 2485–2492. [Google Scholar] [CrossRef]
  27. Chong, Y.W.; Yau, K.L.A.; Ibrahim, N.F.; Rahim, S.K.A.; Keoh, S.L.; Basuki, A. Federated Learning for Intelligent Transportation Systems: Use Cases, Open Challenges, and Opportunities. IEEE Intell. Transp. Syst. Mag. 2025, 17, 18–32. [Google Scholar] [CrossRef]
  28. Manias, D.M.; Shami, A. Making a Case for Federated Learning in the Internet of Vehicles and Intelligent Transportation Systems. IEEE Netw. 2021, 35, 88–94. [Google Scholar] [CrossRef]
  29. Zhang, S.; Li, J.; Shi, L.; Ding, M.; Nguyen, D.C.; Tan, W.; Weng, J.; Han, Z. Federated Learning in Intelligent Transportation Systems: Recent Applications and Open Problems. IEEE Trans. Intell. Transp. Syst. 2024, 25, 3259–3285. [Google Scholar] [CrossRef]
  30. Zhang, R.; Mao, J.; Wang, H.; Li, B.; Cheng, X.; Yang, L. A Survey on Federated Learning in Intelligent Transportation Systems. IEEE Trans. Intell. Veh. 2024, 10, 3043–3059. [Google Scholar] [CrossRef]
  31. Chellapandi, V.P.; Yuan, L.; Brinton, C.G.; Żak, S.H.; Wang, Z. Federated Learning for Connected and Automated Vehicles: A Survey of Existing Approaches and Challenges. IEEE Trans. Intell. Veh. 2024, 9, 119–137. [Google Scholar] [CrossRef]
  32. Xing, L.; Zhao, P.; Gao, J.; Wu, H.; Ma, H. A Survey of the Social Internet of Vehicles: Secure Data Issues, Solutions, and Federated Learning. IEEE Intell. Transp. Syst. Mag. 2023, 15, 70–84. [Google Scholar] [CrossRef]
  33. Posner, J.; Tseng, L.; Aloqaily, M.; Jararweh, Y. Federated Learning in Vehicular Networks: Opportunities and Solutions. IEEE Netw. 2021, 35, 152–159. [Google Scholar] [CrossRef]
  34. Javed, A.R.; Hassan, M.A.; Shahzad, F.; Ahmed, W.; Singh, S.; Baker, T.; Gadekallu, T.R. Integration of Blockchain Technology and Federated Learning in Vehicular (IoT) Networks: A Comprehensive Survey. Sensors 2022, 22, 4394. [Google Scholar] [CrossRef] [PubMed]
  35. Falahatraftar, F.; Pierre, S.; Chamberland, S. A multiple linear regression model for predicting congestion in heterogeneous vehicular networks. In Proceedings of the International Conference on Wireless and Mobile Computing, Networking and Communications, Thessaloniki, Greece, 12–14 October 2020; IEEE Computer Society: Washington, DC, USA, 2020. [Google Scholar] [CrossRef]
  36. Lakshna, A.; Ramesh, K.; Prabha, B.; Sheema, D.; Vijayakumar, K. Machine learning Smart Traffic Prediction and Congestion Reduction. In Proceedings of the 2021 IEEE International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems, ICSES 2021, Chennai, India, 24–25 September 2021; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  37. Kamyab, M.; Remias, S.; Najmi, E.; Rabinia, S.; Waddell, J.M. Machine learning approach to forecast work zone mobility using probe vehicle data. Transp. Res. Rec. 2020, 2674, 157–167. [Google Scholar] [CrossRef]
  38. Chu, H.C.; Chi-Kun, W. Using K-means Algorithm for the Road Junction Time Period Analysis. In Proceedings of the IEEE 8th International Conference on Awareness Science and Technology (iCAST 2017), Taichung, Taiwan, 8–10 November 2017; IEEE: New York, NY, USA, 2017; p. 543. [Google Scholar]
  39. Liu, X.; Pan, L.; Sun, X. Real-time traffic status classification based on Gaussian mixture model. In Proceedings of the Proceedings-2016 IEEE 1st International Conference on Data Science in Cyberspace, DSC 2016, Changsha, China, 13–16 June 2016; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2017; pp. 573–578. [Google Scholar] [CrossRef]
  40. Mbuli, J.; Nouiri, M.; Trentesaux, D.; Baert, D. Root causes analysis and fault prediction in intelligent transportation systems: Coupling unsupervised and supervised learning techniques. In Proceedings of the International Conference on Control, Automation and Diagnosis (ICCAD), Grenoble, France, 2–4 July 2019; IEEE: New York, NY, USA, 2019. [Google Scholar]
  41. Dixit, A.; Jain, M. Trajectory Data Driven Driving Style Recognition for Autonomous Vehicles Using Unsupervised Clustering. Commun. Appl. Nonlinear Anal. 2024, 31, 715–723. [Google Scholar] [CrossRef]
  42. You, Z.; Gao, Y.; Zhang, J.; Zhang, H.; Zhou, M.; Wu, C. A Study on Driver Fatigue Recognition Based on SVM Method. In Proceedings of the 4th International Conference on Transportation Information and Safety (ICTIS), Banff, AB, Canada, 8–10 August 2017; IEEE: New York, NY, USA, 2017. [Google Scholar]
  43. Quintella, C.A.D.M.S.; Andrade, L.C.V.; Campos, C.A.V. Detecting the transportation mode for context-aware systems using smartphones. In Proceedings of the IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; IEEE: New York, NY, USA, 2016. [Google Scholar]
  44. de S. Soares, E.F.; de M. S. Quintella, C.A.; Campos, C.A.V. Towards an Application for Real-Time Travel Mode Detection in Urban Centers. In Proceedings of the 2017 IEEE 86th Vehicular Technology Conference (VTC-Fall), Toronto, ON, Canada, 24–27 September 2017; pp. 1–5. [Google Scholar] [CrossRef]
  45. Soares, E.F.D.S.; Quintella, C.A.M.; Campos, C.A.V. Smartphone-Based Real-Time Travel Mode Detection for Intelligent Transportation Systems. IEEE Trans. Veh. Technol. 2021, 70, 1179–1189. [Google Scholar] [CrossRef]
  46. Manzoor, M.A.; Morgan, Y. Vehicle Make and Model Recognition using Random Forest Classification For Intelligent Transportation Systems. In Proceedings of the IEEE 8th Annual Computing and Communication Workshop and Conference, Las Vegas, NV, USA, 8–10 January 2018; IEEE: New York, NY, USA, 2018; pp. 148–154. [Google Scholar]
  47. Zhang, Y.; Deng, Y. Analysis of Pedestrians’ Red-light Violation Behavior at Signalized Intersections. In Proceedings of the 5th International Conference on Transportation Information and Safety (ICTIS), Liverpool, UK, 14–17 July 2019; IEEE: New York, NY, USA, 2019; pp. 452–457. [Google Scholar]
  48. Olowononi, F.O.; Rawat, D.B.; Liu, C. Federated learning with differential privacy for resilient vehicular cyber physical systems. In Proceedings of the 2021 IEEE 18th Annual Consumer Communications and Networking Conference, CCNC 2021, Las Vegas, NV, USA, 9–12 January 2021; IEEE: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  49. Zhang, Z.; Wang, H.; Chen, J.; Fan, Z.; Song, X.; Shibasaki, R. GOF-TTE: Generative Online Federated Learning Framework for Travel Time Estimation. IEEE Internet Things J. 2022, 9, 24107–24121. [Google Scholar] [CrossRef]
  50. Firdaus, M.; Larasati, H.T.; Rhee, K.H. A Blockchain-Assisted Distributed Edge Intelligence for Privacy-Preserving Vehicular Networks. Comput. Mater. Contin. 2023, 76, 2959–2978. [Google Scholar] [CrossRef]
  51. Raghunath, K.M.K.; Bhat, C.R.; Kumar, V.V.; Kannan, V.A.; Mahesh, T.R.; Manikandan, K.; Krishnamoorthy, N. Redefining Urban Traffic Dynamics With TCN-FL Driven Traffic Prediction and Control Strategies. IEEE Access 2024, 12, 115386–115399. [Google Scholar] [CrossRef]
  52. Jiang, Y.; Wu, Y.; Zhang, S.; Yu, J.J.Q. FedVAE: Trajectory privacy preserving based on Federated Variational AutoEncoder. In Proceedings of the 2023 IEEE 98th Vehicular Technology Conference (VTC2023-Fall), Hong Kong, China, 10–13 October 2023. [Google Scholar] [CrossRef]
  53. Zhu, Y.; Zhang, S.; Liu, Y.; Niyato, D.; Yu, J.J. Robust federated learning approach for travel mode identification from non-IID GPS trajectories. In Proceedings of the International Conference on Parallel and Distributed Systems-ICPADS, Virtual, 2–4 December 2020; IEEE Computer Society: Washington, DC, USA, 2020; pp. 585–592. [Google Scholar] [CrossRef]
  54. Yu, Z.; Hu, J.; Min, G.; Zhao, Z.; Miao, W.; Hossain, M.S. Mobility-Aware Proactive Edge Caching for Connected Vehicles Using Federated Learning. IEEE Trans. Intell. Transp. Syst. 2021, 22, 5341–5351. [Google Scholar] [CrossRef]
  55. Campos, E.M.; Hernandez-Ramos, J.L.; Vidal, A.G.; Baldini, G.; Skarmeta, A. Misbehavior detection in intelligent transportation systems based on federated learning. Internet Things 2024, 25, 101127. [Google Scholar] [CrossRef]
  56. Manias, D.M.; Shaer, I.; Yang, L.; Shami, A. Concept Drift Detection in Federated Networked Systems. In Proceedings of the 2021 IEEE Global Communications Conference (GLOBECOM), Madrid, Spain, 7–11 December 2021. [Google Scholar] [CrossRef]
  57. Scott, C.; Khan, M.S.; Paranjothi, A.; Li, J.Q. Enabling Rural IoV Communication through Decentralized Clustering and Federated Learning. In Proceedings of the 2024 IEEE 14th Annual Computing and Communication Workshop and Conference, CCWC 2024, Las Vegas, NV, USA, 8–10 January 2024; IEEE: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
  58. Khan, L.U.; Mustafa, E.; Shuja, J.; Rehman, F.; Bilal, K.; Han, Z.; Hong, C.S. Federated Learning for Digital Twin-Based Vehicular Networks: Architecture and Challenges. IEEE Wirel. Commun. 2022, 31, 156–162. [Google Scholar] [CrossRef]
  59. Lu, B.; Huang, X.; Wu, Y.; Qian, L.; Niyato, D.; Quek, T.Q.S.; Xu, C.Z. Digital Twin Aided Predictive Scheduling and Bandwidth Allocation for Multi-Vehicle Cooperative Perception Systems. In Proceedings of the 2024 IEEE 99th Vehicular Technology Conference, Singapore, 24–27 June 2024. [Google Scholar]
  60. Linux iproute2 Project. tc-netem(8)—Network Emulator. Linux Documentation. 2025. Available online: https://www.man7.org/linux/man-pages/man8/tc-netem.8.html (accessed on 14 November 2025).
  61. Lantz, B.; Heller, B.; McKeown, N. A Network in a Laptop: Rapid Prototyping for Software-Defined Networks. In Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks (HotNets IX), Monterey, CA, USA, 20–21 October 2010. [Google Scholar] [CrossRef]
  62. Peuster, M.; Kampmeyer, J.; Karl, H. Containernet 2.0: A Rapid Prototyping Platform for Hybrid Service Function Chains. In Proceedings of the IEEE Conference on Network Softwarization and Workshops (NetSoft), Montreal, QC, Canada, 25–29 June 2018. [Google Scholar] [CrossRef]
  63. Krajzewicz, D.; Erdmann, J.; Behrisch, M.; Bieker, L. Recent Development and Applications of SUMO—Simulation of Urban MObility. Int. J. Adv. Syst. Meas. 2012, 5, 128–138. [Google Scholar]
  64. Sommer, C.; German, R.; Dressler, F. Bidirectionally Coupled Network and Road Traffic Simulation for Improved IVC Analysis. IEEE Trans. Mob. Comput. 2011, 10, 3–15. [Google Scholar] [CrossRef]
  65. Riley, G.F.; Henderson, T.R. The ns-3 Network Simulator. In Modeling and Tools for Network Simulation; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar] [CrossRef]
  66. Najada, H.A.; Mahgoub, I. Anticipation and alert system of congestion and accidents in VANET using Big Data analysis for Intelligent Transportation Systems. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–8. [Google Scholar] [CrossRef]
  67. Abdelrahman, A.; Hassanein, H.S.; Abu-Ali, N. A Cloud-Based Environment-Aware Driver Profiling Framework Using Ensemble Supervised Learning. In Proceedings of the ICC 2019-2019 IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019. [Google Scholar]
Figure 1. Centralized and decentralized federated learning. (a) Centralized federated learning process: (1) The central server distributes the global model to participating nodes. (2) The nodes train and update their local model using their local data. (3) The nodes upload their trained model to the central server. (4) The central server aggregates and updates the global model. (b) Decentralized federated learning process: (1) Each node shares the updated parameters with its neighboring nodes. (2) Each node updates its model by aggregating its own parameters and the ones received from its neighbors.
Figure 2. Types of federated learning based on data.
Figure 3. Selection process for the SLR (283 initial records to 39 included studies).
Figure 4. Compact summary of counts derived from Table 4 and Table 5. Panel (a) reports deployment labels (note: labels are non-mutually exclusive). Panel (b) summarizes evaluation evidence and key comparison signals; Model/task-level (M) and System/network-level (S) are subsets of Quantitative (any).
Figure 5. Architecture of the robust FL framework [53], an application case of the client–server baseline.
Figure 6. Architecture of the DT + HFL framework [9].
Figure 7. Architecture of the TFL-CNN framework [10].
Table 1. Summary of related work in two groups: (i) surveys and baselines in ITS/CAV and (ii) architectural/integration proposals.
Work (Year)ScopeMain FocusRelevance
Navarro-Espinoza et al. (2022) [25]ITS/traffic-flow baselineML/DL for traffic flow; real-time deployment scenario.Non-FL baseline context.
Chellapandi et al. (2023) [26]CAV/FL applications & constraintsPerception, localization, traffic, behavior; 6G/edge; client selection, aggregation.CAV constraints/tactics.
Chong et al. (2025) [27]ITS/datasets & metricsUse cases; datasets/metrics; adaptive clients, efficient aggregation, hybrids.App-level benchmarks.
Manias & Shami (2023) [28]IoV–ITS/FL architectures & aggregationPrivacy-preserving FL; protocols; handling non-IID, intermittent links.Architectural guidance.
Zhang et al. (2024) [29]ITS/challenges & incentivesNon-IID, adversarial robustness, secure aggregation, incentives.Research agenda.
Rongqing Zhang et al. (2024) [30]ITS/taxonomy & future directionsSpace–air–ground, LLMs, ethics/regulation; resource optimization.Forward-looking angle.
Chellapandi et al. (2024) [31]CAV/FL taxonomy (H/V/hybrid), 5G/6G/MECMapping of FL forms; scalability, standardization, dynamic participation.CAV roadmap/standards.
Xing et al. (2022) [32]SIoV/security & privacyCIA+privacy; cryptography, blockchain, trust; FL under heterogeneity.Security framing.
Posner et al. (2021) [33]Vehicular cloud & communicationsFVN/FVC; DSRC+mmWave; blockchain-based reputation; low-latency FL.Vehicular deployment.
Javed et al. (2022) [34]Blockchain–FL integrationIntegration patterns; consensus; apps (autonomous driving, traffic); challenges.Trust/robustness.
Table 2. Inclusion and exclusion criteria used in the SLR.
ID | Inclusion Criteria
IC1 | Publications from 2016 onwards are considered.
IC2 | Primary studies (journal articles and conference papers).
IC3 | Field of study focused on Intelligent Transportation Systems.
IC4 | Publications that use a supervised or unsupervised learning algorithm in an ITS application.
ID | Exclusion Criteria
EC1 | Publications about maritime or air mobility.
EC2 | Duplicate works or multiple versions of the same study.
EC3 | Publications that do not contain an ITS application (only generic or non-ITS use cases).
Table 3. Representative ITS applications and ML algorithms organized by taxonomy categories (summary from SLR).
Type | Domain | Application | ML Algorithm(s)
System | Road | Traffic congestion prediction and management | Linear/Logistic Regression [35], Random Forest [36], Gradient Boosting [37], K-means [38]
System | Road | Traffic flow classification and status | Random Forest [36], Gaussian Mixture [39], MLP [37]
System | Vehicle | Accident and event detection | Random Forest [40], ANN [40], SVM [40], K-means [41]
System | User | Driver fatigue recognition | SVM [42]
System | User | Transport mode detection | Bayesian Networks, Decision Tree, SVM [43,44,45]
Model | Vehicle | Vehicle brand/model recognition | Random Forest, Decision Trees [46]
Model | Vehicle | Driving style classification | Clustering (K-means, dimensionality reduction) [41]
Model | User | Pedestrian behavior prediction | Logistic Regression [47], Decision Tree [47]
Table 4. Consolidated FL frameworks in ITS. H = High, M = Medium, L = Low; last column reports suitability for Latency (T), Connectivity (C), Privacy (P), and Edge compute (E).
CategoryFrameworkBase ModelAggregationApplication (Example)Fit (T|C|P|E)
Privacy-focusedFL +  Differential Privacy [DP] (cross-device) [48]Multilayer Perceptron (MLP)FedAvg + differential privacyProtection in vehicle systemsL | M | H | M
Generative Online Federated Learning Framework for Travel Time Estimation [GOF–TTE] (cross-device) [49]Generative modelFedAvgTravel time estimationM | M | H | M
Federated Variational AutoEncoder [FedVAE] (cross-device) [52]Neighbor + MLP + Semi-sup. CAEFedAvg; joint announcementSynthetic trajectory generationM | M | M | M
TCN–FL (cross-silo) [51]Temporal CNN (TCN)FedAvg with TCNTraffic prediction/controlM | H | M | H
Blockchain–FL (decentralized) [50]CNN/RNN (var.)Consensus/ledger auditEdge intrusion detectionM | M | H | M
Integrable approachesFedGRU (cross-device) [8]GRU (time-series)FedAvg; joint announcement protocolTraffic flow prediction [8]M | M | M | M
Robust FL (cross-device) [53]CNN + attention (robust)FedAvg (robust)Transport mode identification [53]M | M | M | M
Mobility-aware Proactive edge Caching scheme based on Federated learning (MPCF) +  Context-aware Adversarial AutoEncoder [C–AAE] (cross-silo) [54]Conditional AAEFedAvg + MPCFVehicular cachingM | H | M | H
Misbehavior FL (cross-device) [55]MLPFedAvg/Fed+Vehicle misbehavior detection [55]M | M | M | M
Drift–aware FL (cross-device) [56]K-meansWeighted avg + PCA/clusteringConcept shift detectionM | M | M | M
Advanced infrastructureTFL–CNN (hierarchical) [10]CNNTwo-layer/hierarchical FL6G IoV object detectionM | M | H | M
DT + HFL (cross-silo, hierarchical) [9]Hybrid CNN/RNN + DTHierarchical FL (RSU/cloud)Vehicle anomaly detection (DT)M | H | M | H
DT–DFL (hierarchical) [58]Local model (e.g., CNN)Dispersed FL (DFL): sub-global aggregation → global aggregationDT-based infotainment/edge cachingH | H | M | H
Dynamic–weighted FL (cross-silo) [59]LSTMDynamic weighted averages; asynchronous FL updates (vs. synchronous baseline)Cooperative perceptionM | H | M | H
Decentralized FL (decentralized) [57]Clustering + PPODecentralized (no server)Rural vehicular communicationsM | M | M | L
Table 5. Evidence map for the 15 FL-enabled ITS frameworks in Table 4. Cat.: PF = Privacy-focused, IA = Integrable approaches, AI = Advanced infrastructure. Eval.: C = Conceptual only (no quantitative results), M = Model-/task-level quantitative evaluation, S = System-/network-level simulation or evaluation (e.g., latency, PDR, bandwidth, caching). N/R = Not reported.
Cat.FrameworkEval.Data ProvenanceQuantitative Focus (Examples)Main Comparisons ReportedArch. Instantiated?Arch.-to-Arch. Compared?Selected for Extended Analysis?
PFFL + DP [48]MMNIST (proxy)Accuracy vs. privacy budget ( ϵ )FL w/o privacy vs. FL w/local DP vs. proposed variant (LRP)Partial (generic FL)NoNo
PFGOF–TTE [49]MReal taxi trajectories (urban fleet)MAE, RMSE, MAPE (TTE)Centralized vs. FedAvg vs. Fed-PA (personalization variants)Partial (generic FL)NoNo
PFFedVAE [52]MGeolife GPS trajectories (real)Utility/privacy trade-offs for synthetic trajectoriesFedVAE vs. perturbation vs. MixZone vs. k-anonymity (privacy methods)Partial (generic FL)NoNo
PFTCN–FL [51]MGMNS (Melbourne), UTD19 (real traffic)MAE; travel-time/congestion reductions; node efficiencyTCN–FL vs. ST-3DGMR vs. DGCRN vs. MAARO (prediction baselines)N/RNoNo
PFBlockchain–FL [50]SMNIST (proxy)Accuracy; MAC/PHY overhead; PDR; convergence; privacy- ϵ impactFedAvg vs. DP-FL vs. Blockchain-FL (and related schemes)Yes (system framework)NoNo
IAFedGRU [8]MPeMS (California, real traffic sensors)MAE, MSE, RMSE, MAPEFedGRU vs. centralized GRU/LSTM/SAE/SVM; sensitivity to number of clientsPartial (generic FL)NoYes(baseline, integrable)
IArobust FL [53]MGeolife GPS trajectories (real)Accuracy (non-IID robustness emphasis)FedAvg-variant (data-sharing/robust) vs. FedAvg and non-IID scenarios; vs. standard ML baselinesPartial (generic FL)NoNo
IAMPCF + C–AAE [54]SMovieLens 1M (proxy for vehicular requests)Cache hit ratio; training time; #rounds; density impactOracle/Random/
LRU/LFU/AE; convergence vs. standard FedAvg
Yes (veh→RSU, RSU→MBS)NoNo
IAMisbehavior FL [55]MVeReMi (realistic misbehavior dataset)Accuracy, F1, MCC, Cohen’s Kappa, FPR, detection timeFedAvg vs. Fed+; original vs. SMOTE-Tomek balanced dataPartial (generic FL)NoNo
IADrift-aware FL [56]MMNIST (proxy)Drift detection performance (PCA+K-means)Drift scenarios (1/2/4 drifted nodes); no external drift baselinePartial (generic FL)NoNo
AITFL–CNN [10]SBelgiumTSC (real; augmented/sharded)Precision/recall/F1/ROC; convergence time; latency (6G vs. 5G); #RSUs impactTFL–CNN vs. CNN vs. Random Forest vs. RegionNet; 6G vs. 5G latency settingsPartial (simulated end–edge–cloud)No (only settings/latency)Yes (edge hierarchy, 6G-ready)
AIDT + HFL [9]CN/R (conceptual)N/RNone (no metrics/baselines reported)Conceptual onlyNoYes(architectural template, DT + hierarchy)
AIDT–DFL [58]CN/R (conceptual)Qualitative criteria only (e.g., management complexity, comm. cost, mobility handling)Qualitative discussion of sub-global aggregation options (veh/RSU/UAV)Conceptual onlyNoNo
AIDynamic-weighted FL [59]SSimulation-driven (cooperative perception)Prediction MSE; average delay; CDF of shared data volume; perception constraintsAsync FL vs. conventional (sync) FL (prediction stage); PPO-DEV and DQN baselines (allocation stage)Partial (DT-FL + DRL pipeline)No (protocol variant only)No
AIDecentralized FL [57]SVeins+SUMO simulation (rural IoV)PDR; packet delay; cluster lifetime; connectivityProposed vs. AODV vs. DSR vs. DL-based clustering baselinesYes (system simulation)NoNo
Table 6. Concise summary of purpose, data, and base algorithms.
Framework | Purpose | Data | Base Algorithm(s)
FedGRU [8] | Traffic flow prediction across organizations | Time-series from sensors, radars, mobile devices | GRU (time-series), adaptable
DT + HFL [9] | Anomaly detection via hierarchical FL + digital twins | Vehicle diagnostics, IoT sensors, weather, regulations, cameras | Hybrid CNN/RNN (flexible integration)
TFL-CNN [10] | Two-layer FL for 6G vehicular object recognition | Visual streams (LiDAR, cameras), GPS, navigation context | CNN + FedAvg at RSU and central server
Table 7. Aggregation, privacy, and limitations of the selected frameworks.
Framework | Aggregation | Privacy | Limitations
FedGRU [8] | FedAvg; joint protocol for large scale | None beyond FL | Narrow focus on traffic prediction; weak scalability in heterogeneous ITS
DT + HFL [9] | Hierarchical FedAvg (cloudlets + server) | Digital twins as abstraction layer | High complexity; heavy resource demand; scalability issues in large deployments
TFL-CNN [10] | Two-layer: RSU + FedAvg at cloud | Secure RSUs, encryption to cloud | RSU-dependent; added latency from multi-layer aggregation; sensitive to heterogeneous data
Table 8. Adaptation of traffic prediction applications [36,66] to DT + HFL and TFL-CNN frameworks.
FrameworkFeatureArch.Central ServerRSUs/CloudletsOBUs (Vehicles)Detectors/SensorsTraffic Congestion App.AlgorithmCloud-Based Reg.AlertsNotes
CommonHeterogeneous data Present in both, central link only in Najada.
Real-time processing Low latency requirement.
DT + HFLDigital twins (DT) Vehicles represented by DTs.
Hierarchical structure Cloudlet–server hierarchy.
Functional@edge (data collection) Cloudlet gathers multi-source data (Najada-specific env.).
Analytical phase (simulation) DTs run local simulations.
Anomaly/prediction phase FedAvg across layers.
Collaborative aggregation Global model integration via Lambda (Najada).
Alerts Alerts via DT interface.
TFL-CNNHierarchical structure Two-layer RSU–cloud design.
6G/edge network RSUs handle 6G connectivity.
RSU aggregation Local Random Forest (RF) model per RSU.
Vehicle data collection OBUs provide raw traffic data.
Central aggregation (FedAvg) Global model aggregation (Lakshna-specific).
Alerts Central + RSU alerts.
Legend: ✓ = both studies; ▲ = only in [66]; ▼ = only in [36].
Table 9. Adaptation of the real-time accident detection application [12] to DT + HFL and TFL-CNN frameworks.
FrameworkFeatureArch.Central ServerVANET DevicesSmart VehiclesAlgorithmDataTraffic AlertsNotes
Common traitsHeterogeneous, real-time data streams Shared across both frameworks
DT + HFLDT DTs simulate vehicle behavior locally
Hierarchical structure Inclusion of central server optional
Cloudlets as intermediates Processing vehicles act as edge nodes
Local data collection (on-vehicle) Captures speed/position in situ
Analytical phase (simulation + model updates) Optional global model update
Aggregation phase (collaboration) Partial model aggregation across DTs
Alert generation (local or central) Alerts via DTs or central server
TFL-CNNHierarchical structure Adapted from VANET to RSU hierarchy
6G/edge connectivity Future 6G-ready extension
RSU aggregation Local RF model per RSU/vehicle
Vehicle data collection VANET nodes act as edge devices
Central aggregation (FedAvg) Global update through central server
Alert transmission Alerts from RSUs or central cloud
Legend: ✓ = the component supports this feature.
Table 10. Adaptation of the transport mode identification application [45] to DT + HFL and TFL-CNN frameworks.
ApproachFeatureArch.Central ServerMobile Device (Node)Sensors (GPS/Acc)Data CollectionAggregationAlgorithm (CNN)Alerts/OutputsNotes
Common traitsUses multimodal, real-time user data Shared across both frameworks
DT + HFLDigital twins (DT) Simulates mobility for each user
Hierarchical structure Cloudlet–server aggregation
Local data collection (mobile sensors) Raw sensor data per node
Edge aggregation (cloudlets) Performs intermediate updates
Central aggregation (FedAvg) Global model synchronization
Output via user interface Displays identified transport mode
TFL-CNNHierarchical structure RSU–cloud topology
RSU aggregation Local FedAvg per RSU
Sensor data preprocessing Feature extraction on-device
Central aggregation (FedAvg) Combines local models globally
Real-time classification output RSUs or server deliver predictions
Legend: ✓ = the component supports this feature.
Table 11. Adaptation of the driver profiling and behavior detection application [67] to DT + HFL and TFL-CNN frameworks.
FrameworkFeatureArch.Central ServerRSUs/CloudletsVehicles/OBUsMobile AppAlgorithm (CNN/RF)Risk Level OutputNotes
Common traitsUses heterogeneous multi-source data (sensors, app, On-Board Diagnostic [OBD]) Shared data flows and learning objective
DT + HFLDT DTs replicate driver behavior and context
Hierarchical structure Multi-level edge/cloud coordination
Cloudlets as intermediates Local model training per cluster of drivers
Local data collection (on-vehicle and app) Raw telemetry processed locally
Analytical phase (simulation + model updates) DT-based analysis of driving risk
Collaborative aggregation (cloudlet → central) Hierarchical FedAvg synchronization
Alert/feedback generation Feedback loop via mobile interface
TFL-CNNHierarchical structure Two-tier RSU–cloud structure
6G/edge connectivity Future-ready low-latency adaptation
RSU aggregation RSU aggregates local CNN models
Vehicle/app data collection On-device feature extraction
Central aggregation (FedAvg) Global model for driver classification
Alert and risk feedback Real-time feedback to drivers
Legend: ✓ = the component supports this feature.