Article

Collaborative Split Learning-Based Dynamic Bandwidth Allocation for 6G-Grade TDM-PON Systems

by Alaelddin F. Y. Mohammed 1,*, Yazan M. Allawi 2,*, Eman M. Moneer 3 and Lamia O. Widaa 2

1 Information Technology, Department of International Studies, Dongshin University, 67, Dongshindae-gil, Naju-si 58245, Republic of Korea
2 Department of Electrical Engineering, College of Engineering, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
3 Department of Physics, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
* Authors to whom correspondence should be addressed.
Sensors 2025, 25(14), 4300; https://doi.org/10.3390/s25144300
Submission received: 23 May 2025 / Revised: 2 July 2025 / Accepted: 8 July 2025 / Published: 10 July 2025
(This article belongs to the Special Issue Recent Advances in Optical Wireless Communications)

Abstract

Dynamic Bandwidth Allocation (DBA) techniques enable Time Division Multiplexing Passive Optical Network (TDM-PON) systems to efficiently manage upstream bandwidth by allowing the centralized Optical Line Terminal (OLT) to coordinate resource allocation among distributed Optical Network Units (ONUs). Conventional DBA techniques struggle to adapt to dynamic traffic conditions, resulting in suboptimal performance under varying load scenarios. This work proposes a Collaborative Split Learning-Based DBA (CSL-DBA) framework that applies the recently emerging Split Learning (SL) technique between the OLT and ONUs to optimize predictive traffic adaptation and reduce communication overhead. Instead of requiring centralized learning at the OLT, the proposed approach decentralizes the process by enabling ONUs to perform local traffic analysis and transmit only model updates to the OLT. This cooperative strategy guarantees rapid responsiveness to fluctuating traffic conditions. Extensive simulations spanning low, fluctuating, and high traffic load conditions show that the proposed CSL-DBA achieves at least 99% traffic prediction accuracy with minimal inference latency and scalable learning performance, and that it reduces communication overhead by approximately 60% compared to traditional federated learning approaches, making it a strong candidate for next-generation 6G-grade TDM-PON systems.

1. Introduction

The exponential growth of data-intensive applications, cloud services, and connected devices has significantly increased the demand for faster, more responsive, and reliable network infrastructure. As global networks advance towards sixth-generation (6G) technology, they promise unprecedented data rates, ultra-low latency, massive connectivity, and substantial gains in reliability and efficiency [1,2]. This is especially true when considering the stringent 5G-and-beyond requirements for Ultra-Reliable and Low-Latency Communication (URLLC) applications, where delayed or failed transmission can result in severe consequences. Use cases such as disaster response, emergency medical services, and public protection increasingly depend on real-time data transmission and service orchestration over optical and wireless networks. Recent studies have highlighted the importance of reliability and latency guarantees in 5G and beyond for these domains, employing technologies such as network slicing, device-to-device communication, and cloud-native architectures [3,4,5,6]. These application scenarios underscore the need for intelligent, latency-aware bandwidth allocation frameworks that can adapt rapidly to varying demands and guarantee service continuity across network slices and edge infrastructures. To achieve these ambitious goals, next-generation network architectures, specifically Passive Optical Network (PON) systems, must evolve with the capability to address future demands.
Because of their high capacity, scalability, and energy efficiency, PON systems are regarded as one of the most favorable solutions for access networks [7,8]. PON technology utilizes passive splitters to minimize operational expenses (OPEX) and power consumption by eliminating the need for active electronic components in the distribution network [7,9]. Among the several types of PONs, Time Division Multiplexing-PON (TDM-PON) is a preferred option due to its simple architecture, relatively low implementation costs, and broad acceptance among optical access network Infrastructure Providers (InP) as well as Mobile Network Operators (MNO) [10,11].
The Dynamic Bandwidth Allocation (DBA) mechanism, which efficiently manages upstream bandwidth at the Optical Line Terminal (OLT) for multiple connected Optical Network Units (ONUs), is a core component of TDM-PON architecture. By dynamically allocating bandwidth based on real-time demand, traffic patterns, and Quality of Service (QoS) requirements, DBA significantly enhances overall network performance [7,12]. However, with the dynamic and diverse network environments anticipated for 6G-grade services and applications, conventional DBA approaches exhibit significant limitations that may hinder their effectiveness. Moreover, existing standards (e.g., IEEE 1904.1 [13] and ITU-T G.988 [14]) intentionally leave room for innovation, encouraging manufacturers and researchers to explore enhanced DBA designs.
Conventional DBA systems face several challenges, particularly in terms of latency, due to their reliance on centralized bandwidth allocation decisions at the OLT. This centralized approach requires continuous and frequent communication between ONUs and the OLT, which exacerbates latency, especially under heavy network load or rapidly changing traffic conditions [15,16]. Additionally, traditional DBA mechanisms often struggle to respond effectively to sudden shifts in traffic demand, resulting in inefficient bandwidth distribution and compromised QoS. A further concern lies in privacy and security risks, as centralized data processing at the OLT can expose sensitive user information, which represents a significant threat in mission-critical and high-security applications [17,18].
Recent advancements in response to these challenges have proposed enhancing DBA mechanisms by integrating Machine Learning (ML) and Deep Learning (DL) approaches. ML-based solutions enable predictive analytics to dynamically allocate bandwidth in real time, improving traffic prediction accuracy [15,19]. However, these systems often rely on centralized architectures, which aggravate privacy concerns and introduce significant computational and communication overhead. This overhead limits their practical applicability, particularly as networks scale to meet 6G-grade QoS demands [20]. While existing ML-based DBA systems have demonstrated progress, they remain inadequate for addressing the distinct challenges of 6G networks, including ultra-high bandwidth requirements, extremely low latency demands, and the necessity for flexible and real-time adaptation to heterogeneous traffic environments [10,21]. To overcome these limitations, the emerging Split Learning (SL) technique, recently introduced by MIT’s Media Lab [22,23], presents a promising solution. As a variation of Collaborative Learning (CL), SL enables multiple parties to collaboratively train different neural network segments by exchanging only intermediate features, thus preserving raw data privacy. Given its inherent characteristics of decentralized processing, scalability, and privacy preservation, an SL-based approach can be particularly advantageous for next-generation TDM-PON environments. While traditional ML and Federated Learning (FL) approaches have shown potential in DBA, they present significant limitations. ML typically relies on centralized training, which increases communication overhead and introduces privacy risks, especially under high ONU density. FL partially addresses privacy by decentralizing model training, but it still requires full model parameter exchange between clients and the central node, resulting in large communication loads and synchronization issues. In contrast, SL, as implemented in our proposed framework, exchanges only partial forward and backward features, significantly reducing communication cost while preserving data privacy.
In this paper, we introduce a novel Collaborative Split Learning-based Dynamic Bandwidth Allocation (CSL-DBA) technique tailored for next-generation 6G-grade TDM-PON systems. Our approach decentralizes traffic prediction and bandwidth allocation by applying SL to distribute learning tasks between ONUs and the OLT, thus facilitating efficient knowledge sharing while preserving user privacy. Unlike traditional centralized DBA techniques, the CSL-DBA approach significantly reduces the latency caused by frequent round-trip signaling in the OLT-ONU-OLT communication cycle. This is made possible under the proposed architecture by enabling the ONUs to independently estimate local bandwidth requirements using lightweight models built from historical and real-time traffic data. Subsequently, ONUs only need to transmit model gradients to the OLT, without including any real user data, which guarantees the preservation of user privacy. The OLT then combines these gradients by means of the proposed cooperative approach to enhance prediction accuracy, as well as to achieve lower latency and higher adaptability to the rapidly fluctuating traffic demands expected in 6G-grade TDM-PON systems.
To the best of the authors’ knowledge, this work is the first to explore the potential advantages of utilizing SL for enhancing the performance of DBA in TDM-PON systems and to propose a collaborative split learning-based DBA framework specifically tailored to the requirements and characteristics of 6G-grade TDM-PON systems. The results presented in this paper offer a foundational reference for optimizing bandwidth allocation in the context of next-generation optical communication networks, highlighting the significant potential of the proposed approach. The remaining part of this paper is organized as follows: Section 2 reviews the state-of-the-art DBA techniques and the integration of ML-based and SL-based approaches in the context of 6G networks. In Section 3, we explain our proposed CSL-DBA architecture together with its system model and design principles. Simulation results are presented and discussed in Section 4, showcasing the efficiency of our proposed design. Section 5 summarizes the main contributions presented in this work and identifies future research directions.

2. Related Work

Upstream bandwidth allocation among connected ONUs in TDM-PON systems is governed by the DBA mechanism. Traditional DBA techniques, such as Interleaved Polling with Adaptive Cycle Time (IPACT), remain popular due to their simplicity. However, these approaches exhibit significant limitations in latency, scalability, and responsiveness when considered for handling the dynamic environments anticipated for 6G optical networks [9,11,15,24]. Recent advancements in DBA include optimizing adaptive traffic management and resource efficiency through a QoS-aware DBA algorithm that accounts for laser tuning times in PON systems to improve bandwidth usage and latency performance [21].
The promise of more accurate traffic demand predictions for improved DBA management efficiency has driven high interest in ML-based approaches. Focusing on possible improvements in proactive resource management, the authors in [25] presented a thorough study of the potential applications of ML-based models in optical networks. In addition, different DL-based DBA approaches for estimating bandwidth demands in the context of XG-PON, intended for mobile fronthaul within Cloud Radio Access Networks (C-RANs), were introduced in [19]. By means of simulations, the authors demonstrated that DL-based DBA approaches result in noticeable improvements in system efficiency and performance over conventional DBA techniques, and further suggested that DL-based models, if not designed properly, can increase CPU consumption, potentially leading to performance degradation in highly resource-limited environments.
Furthermore, a Deep Neural Network (DNN) approach for traffic prediction in PONs was proposed in [15], aiming at enhancing bandwidth allocation efficiency. The work in [24] presented a Deep Reinforcement Learning (DRL) method for network slicing in heterogeneous and dynamic traffic environments. Their approach seeks to improve resource allocation by adapting to varying traffic conditions and their QoS requirements, with the system constantly learning and refining its allocation strategies through DRL. Particularly in multi-tenant network architectures, the study shows efficient bandwidth allocation and improved resource usage. Although their approach is effective in improving responsiveness to dynamic traffic conditions, it does not specifically address the requirements and settings of TDM-PON systems. Despite these advancements, the core issue lies in the centralization of all tasks related to traffic monitoring, prediction, and resource allocation at the OLT, which limits scalability and raises privacy concerns.
Recent research endeavors have focused on the privacy and scalability issues present in existing DBA approaches. To help reduce vulnerabilities in next-generation PON systems, the authors in [26] proposed a security-enhanced DBA algorithm, which introduces a mitigation phase that reduces bandwidth allocation upon identifying attackers and a detection phase for spotting unusual behavior. The work in [27] offers an attack-aware DBA technique for PONs that addresses security flaws in traditional DBA techniques; it comprises an intelligent detection and allocation strategy that guarantees fair bandwidth distribution, even in hostile environments, using dynamic traffic monitoring and anomaly detection to achieve effective resource utilization and enhanced privacy within the network. Nevertheless, existing detection techniques fall short when addressing highly fluctuating traffic conditions or when dealing with large-scale networks.
In view of the aforementioned challenges, this paper proposes a Collaborative Split Learning (CSL)-based approach, referred to as CSL-DBA, specifically designed for next-generation 6G-grade TDM-PON systems. Our architecture delegates traffic prediction tasks to the ONUs, thereby significantly reducing latency, preserving user privacy, and enabling real-time adaptability to diverse network conditions, as detailed in the following section.

3. Network Architecture and Proposed System Model

3.1. TDM-PON System Architecture

Figure 1 shows a detailed representation of the proposed TDM-PON system architecture with CSL-DBA-based bandwidth allocation. The figure illustrates how multiple ONUs independently analyze local traffic and collaboratively train the global model at the OLT, which supports scalable bandwidth prediction and allocation in real-time multi-user environments. The standard optical access network topology consists of a number of ONUs at the user locations, a passive optical splitter (SPL), and an OLT at the Central Office (CO). In this design, downstream data traffic is broadcast from the OLT to all ONUs, whereas upstream traffic from the ONUs passes via the SPL to the OLT. Generally, in TDM-PON systems, eliminating collisions and guaranteeing optimal network performance depends entirely on the DBA, which is responsible for allocating and synchronizing the available bandwidth as well as scheduling transmissions for all ONUs that share a single optical fiber link in both directions.

3.2. Proposed CSL-DBA System Model

We propose a hybrid model that integrates SL-based approach and a DBA system, whereby the learning task is split between the distributed ONUs and the centralized OLT. The proposed CSL-DBA system assumes the role of distributing learning tasks among the ONUs and coordinating the exchange of prediction-related information between the OLT and ONUs. By utilizing SL, ONUs localize computations and traffic pattern analysis to enable capturing real-time bandwidth requirements and network-edge variations. As a result, instead of providing raw data or comprehensive traffic statistics, ONUs send only their locally generated model updates (i.e., gradient information) to the centralized OLT. After receiving these updates, the OLT compiles them to improve and preserve the global predictive model, which is then used to dynamically and actively allocate bandwidth. Furthermore, as unprocessed traffic data are excluded from crossing the whole network segment for centralized processing, the proposed CSL-DBA significantly reduces computing load, delay, and communication overhead.
Figure 1 also shows a conventional PON arrangement and points out particular integration sites for CSL-DBA. At the ONUs, which are located at the network edge, local predictive learning is carried out such that the OLT in the network core handles global model aggregation and resource allocation decisions. SL-based models are trained by the ONU and OLT to dynamically maximize bandwidth allocation. Our system comprises a lightweight SL-based local models at the ONU side, which serve the purpose of forecasting local bandwidth needs that are captured as follows:
$$BD^{\mathrm{ONU}}(t) = \phi_L\left(X_{t-1}^{\mathrm{ONU}}, X_{t-2}^{\mathrm{ONU}}, \ldots, X_{t-n}^{\mathrm{ONU}}\right) + \epsilon_L(t), \tag{1}$$

where $\phi_L$ represents the local predictive model; $X_{t-i}^{\mathrm{ONU}}$ encapsulates features such as historical bandwidth usage, queue lengths, and latency metrics; and $\epsilon_L(t)$ accounts for the prediction uncertainty of local traffic.
Furthermore, each ONU assists in predicting different traffic patterns (i.e., voice, video, and data) in DBA for TDM-PON systems, as expressed in the following equation:

$$P_F(t) = \phi_L^F\left(T_{t-1}^{\mathrm{ONU}}, T_{t-2}^{\mathrm{ONU}}, \ldots, T_{t-n}^{\mathrm{ONU}}\right) + \epsilon_L(t), \tag{2}$$

where $P_F(t)$ is the predicted traffic pattern of type $F$ at time $t$, $\phi_L^F$ is the model predicting the local traffic type and patterns, and $T_{t-1}^{\mathrm{ONU}}, T_{t-2}^{\mathrm{ONU}}, \ldots, T_{t-n}^{\mathrm{ONU}}$ are the past observed traffic types and patterns.
Traffic prediction allows the OLT to dynamically change time slot assignments depending on expected demand, enabling adaptive DBA scheduling. By guaranteeing fair resource distribution and lowering contention among ONUs, this increases network efficiency while honoring Service-Level Agreements (SLAs). In practical deployment scenarios, the behavior of users in different ONUs is highly unpredictable and not easily controllable. These variations can cause inconsistent and bursty traffic demand at multiple stations. The proposed CSL-DBA addresses this challenge through its distributed learning approach, where each ONU independently observes and learns from its localized traffic characteristics (see Figure 1). The local models are trained on historical and real-time traffic patterns, which allows the system to dynamically anticipate bandwidth needs without relying on static assumptions. When ONUs experience limited bandwidth, the global model at the OLT adjusts time slot assignments in real time by prioritizing ONUs with higher predicted demands or QoS sensitivity. This preserves fairness and responsiveness even under constrained network conditions. The collaborative structure ensures that the system remains scalable and adaptive to the diversity of real-life user behavior.
Upon local training, each ONU, denoted by $i$, transmits its model gradient $\nabla L_{L,i}$ to the OLT. The OLT then aggregates all received gradients using an equal-weighted average to compute the global update. This aggregation strategy follows a Federated Averaging (FedAvg)-like approach and guarantees that each ONU contributes equally to the updated model $\theta_G$ as follows:

$$\theta_G = \theta_G - \eta \cdot \frac{1}{N} \sum_{i=1}^{N} \nabla L_{L,i} + \lambda R(\theta_G), \tag{3}$$

where $\theta_G$ is the updated global model, $\eta$ is the learning rate, $N$ is the number of participating ONUs, $\lambda$ is a regularization parameter, and $R(\theta_G)$ is a regularization function (e.g., an L2 penalty) used to improve model generalization and prevent overfitting [20]. After updating the local model at each ONU, all local model gradients are aggregated and processed at the OLT’s global model to predict the optimal bandwidth allocation using an optimization-based approach, as expressed in (4):
$$B_{\mathrm{alloc}}^{\mathrm{OLT}} = \arg\max_{B^{\mathrm{pred}}} \sum_{i=1}^{N} U\left(B_{i,t}^{\mathrm{pred}}\right) - \mu \sum_{i=1}^{N} C_i(t), \tag{4}$$

The global model, updated via aggregated gradients from the ONUs, learns a mapping from ONU-level traffic features to predicted bandwidth demand. These predictions, denoted as $B_{i,t}^{\mathrm{pred}}$ for each ONU $i$, serve as inputs to the OLT’s bandwidth allocation decision. Specifically, the OLT uses these predictions in the optimization model in (4), which maximizes the utility $U(B_{i,t}^{\mathrm{pred}})$ while satisfying the QoS constraints $C_i(t)$; the optimization in (4) is thus driven by the predicted demands. To further enhance the efficiency of bandwidth allocation, a second allocation-correction stage is performed based on (5), a feedback-based correction that adjusts the allocation using real-time queue length deviations, ensuring responsiveness to transient congestion:

$$B_{\mathrm{alloc}}^{\mathrm{final}} = B_{\mathrm{alloc}}^{\mathrm{OLT}} + \alpha \sum_{i=1}^{N} \left(Q_i(t) - \bar{Q}\right), \tag{5}$$

where $Q_i(t)$ is the queue length of ONU $i$, $\bar{Q}$ is the mean queue length, and $\alpha$ is an adaptive correction coefficient.
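To make the two-stage allocation concrete, the following minimal Python sketch illustrates how predicted demands driving (4) can be combined with the queue-deviation correction of (5). The proportional log-utility solver, the per-ONU form of the correction, and all parameter values are illustrative assumptions rather than the exact optimization used in our implementation.

```python
import numpy as np

def allocate_bandwidth(b_pred, queues, capacity, alpha=0.05):
    """Two-stage bandwidth allocation sketch following Eqs. (4)-(5).

    b_pred  : per-ONU predicted demands B_i^pred
    queues  : per-ONU instantaneous queue lengths Q_i(t)
    capacity: total upstream capacity shared by all ONUs
    """
    # Stage 1 (Eq. 4): a simple stand-in for the utility maximization --
    # allocate the shared capacity proportionally to predicted demand,
    # which maximizes the weighted log-utility sum U(B_i) = w_i * log(B_i).
    weights = b_pred / b_pred.sum()
    b_olt = weights * capacity

    # Stage 2 (Eq. 5): feedback correction using queue-length deviations,
    # shifting bandwidth toward ONUs whose queues exceed the mean.
    correction = alpha * (queues - queues.mean())
    b_final = np.clip(b_olt + correction, 0.0, None)

    # Re-normalize so the corrected grants still fit the upstream capacity.
    return b_final * (capacity / b_final.sum())

# Example: 4 ONUs, where the congested ONU (long queue) gains extra slots.
b_pred = np.array([10.0, 25.0, 40.0, 25.0])   # predicted demand per ONU
queues = np.array([5.0, 8.0, 30.0, 7.0])      # current queue lengths
print(allocate_bandwidth(b_pred, queues, capacity=100.0))
```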
The framework of our proposed model is shown in Figure 2, which illustrates the two primary levels of the overall process: the local model (ONU level) and the global model (OLT level). Starting at the ONU level, each ONU relies on historical patterns of bandwidth usage, queue lengths, and latency measurements to guide its current activities. Informed by local traffic patterns, these data aid in predicting future bandwidth demand. The expected demand then moves to a loss computation module, a process guided by predictive accuracy. This loss drives the adaptation of the local model parameters. Once the local training phase ends, each ONU sends model update gradients or partial weights to the OLT instead of unprocessed traffic data. These local model gradients reflect the learning acquired from the observed ONU traffic conditions. Each ONU individually sends its local updates, which are then gathered and processed by the global model at the OLT level. As a result, the global model parameters are regularly updated with an integrated, network-wide view of traffic patterns. In the next scheduling cycle, ONUs can forecast their traffic patterns using the updated local models.

3.3. CSL-DBA Training and Update Cycle

Algorithm 1 summarizes the key steps involved in the model update cycle between the ONUs and the OLT, which provides a clearer understanding of the collaborative training workflow in CSL-DBA. Each ONU observes local traffic features, performs lightweight forward inference and local training using its MLP-based predictor, and computes gradients of the loss function based on predicted vs. actual bandwidth demands. These local gradients are transmitted to the OLT, which aggregates them to update the global model. The updated model parameters are then broadcast back to the ONUs to complete the cycle. This process is repeated across multiple rounds, enabling the global model to evolve based on distributed real-time observations while maintaining user data privacy.
Algorithm 1 Training and update cycle.
1: Initialize: global model $\theta_G$, local models $\theta_{L,i}$ for each ONU $i$, learning rate $\eta$, update frequency $f_u$
2: for each global training round $t = 1$ to $T$ do
3:   for each ONU $i$ in parallel do
4:     Observe local traffic features: $X_{i,\,t-n:t-1}$
5:     Predict local bandwidth: $B_{i,t}^{\mathrm{pred}} = \phi_L(X_i)$
6:     Compute local loss: $L_i = L(B_{i,t}^{\mathrm{pred}}, B_{i,t}^{\mathrm{true}})$
7:     Compute local gradient: $\nabla L_i$
8:     Send $\nabla L_i$ to the OLT
9:   end for
10:  Aggregate gradients at the OLT: $\nabla L_G = \frac{1}{N} \sum_{i=1}^{N} \nabla L_i$
11:  Update global model: $\theta_G \leftarrow \theta_G - \eta \cdot \nabla L_G$
12:  Broadcast global model to ONUs
13: end for
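For concreteness, the following Python sketch mirrors Algorithm 1 using a simple linear predictor and synthetic data, with each step annotated with its corresponding algorithm line. The aggregation and update correspond to (3) with the regularization term omitted; all names, dimensions, and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, DIM, ROUNDS, ETA = 8, 10, 50, 0.01   # ONUs, feature size, rounds, learning rate

theta_g = rng.normal(size=DIM)           # global model shared by all ONUs
# Synthetic per-ONU traffic features and true bandwidth demands.
X = [rng.normal(size=(100, DIM)) for _ in range(N)]
y = [x @ rng.normal(size=DIM) for x in X]

for t in range(ROUNDS):                  # line 2: global training rounds
    grads = []
    for i in range(N):                   # line 3: each ONU (in parallel)
        b_pred = X[i] @ theta_g          # line 5: local bandwidth prediction
        err = b_pred - y[i]              # line 6: residual of the local MSE loss
        grad_i = X[i].T @ err / len(err) # line 7: local gradient computation
        grads.append(grad_i)             # line 8: send gradient to the OLT
    grad_g = np.mean(grads, axis=0)      # line 10: equal-weight aggregation
    theta_g -= ETA * grad_g              # line 11: global model update
    # line 12: broadcast theta_g back to the ONUs (a shared variable here)
```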
In our implementation, the local model at each ONU is a lightweight Multilayer Perceptron (MLP) consisting of an input layer, two hidden layers with 64 and 32 neurons, respectively, and a ReLU activation function. The model takes as input a feature vector of 10 traffic-related attributes (e.g., historical queue lengths, latency, and bandwidth demand), and it produces a single output predicting the required bandwidth in (1) or traffic class in (2). This MLP architecture was selected due to its low computational cost and rapid convergence, making it well-suited for deployment in resource-constrained ONUs. Moreover, MLPs offer a good trade-off between accuracy and latency for time-series forecasting in fixed-length input formats. While more complex models like LSTM or CNN may offer marginal improvements in accuracy, they significantly increase inference latency and training overhead, which may not be acceptable in real-time PON environments. To support SL, the model is partitioned such that the input and first hidden layer (64 neurons) are hosted locally at each ONU, while the second hidden layer (32 neurons) and output layer are executed at the OLT. This division allows ONUs to perform lightweight local computations and transmit only intermediate activations, while the OLT completes the forward/backward pass and updates the global model based on aggregated gradients.
It has to be noted here that the lightweight SL-based local models at the ONUs are not separate early-exit variants but rather the front portion of a globally shared MLP model partitioned across the network. Specifically, each ONU hosts the input and first hidden layer, while the remaining layers reside at the OLT. During inference, the ONU performs the forward pass through its portion of the model and transmits the resulting intermediate activation (feature vector) to the OLT. The OLT then completes the forward pass through the remaining layers to generate the final prediction. This behavior is consistent with the training process in Split Learning and avoids redundant computation while preserving model consistency.
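A minimal TensorFlow/Keras sketch of this partition, assuming the 10-feature input and the 64/32-neuron hidden layers described above, is given below; the variable names and the hand-off of activations are illustrative, and the gradient exchange and weight updates are omitted for brevity.

```python
import tensorflow as tf

# ONU-side segment: input plus the first hidden layer (64 neurons).
onu_segment = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),                  # 10 traffic-related features
    tf.keras.layers.Dense(64, activation="relu"),
])

# OLT-side segment: second hidden layer (32 neurons) plus the output layer.
olt_segment = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),                  # receives cut-layer activations
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                     # predicted bandwidth demand
])

# Split inference: the ONU runs its portion of the forward pass and transmits
# only the intermediate activation; the OLT completes the prediction.
x = tf.random.normal((1, 10))   # one local traffic sample at an ONU
smashed = onu_segment(x)        # sent upstream instead of raw data (64 values)
b_pred = olt_segment(smashed)   # final forward pass at the OLT
```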

4. Results and Discussion

We created a large-scale simulation environment using TensorFlow (2.18.0) [28] to conduct a comprehensive performance evaluation of our proposed CSL-DBA approach. TensorFlow is a widely used open-source ML toolkit that fully supports developing and training sophisticated SL-based models. The simulation faithfully reproduces the distributed learning interactions between the OLT and the ONUs, allowing us to replicate the traffic patterns required in various operational settings. The effectiveness and usability of our proposed CSL-DBA are demonstrated through extensive simulations that capture the complexity and dynamicity of TDM-PON systems under real-world-like scenarios.
To evaluate CSL-DBA under varying network conditions, we modeled three common traffic scenarios using Poisson arrival processes, as follows:
  • Low traffic: Poisson arrivals with mean rate λ = 0.2 packets/slot per ONU, representing under-utilization.
  • High traffic: Poisson arrivals with λ = 0.8 packets/slot per ONU, simulating near-saturation conditions.
  • Fluctuating traffic: Time-varying Poisson rates alternating between λ = 0.2 and λ = 0.8 every 1000 slots, to mimic bursty, real-world traffic dynamics.
These parameters align with typical values used in recent DBA and TDM-PON studies in [29]. Our synthetic dataset includes up to 10,000 time slots of simulated traffic traces per scenario, encompassing queue length, delay, and arrival rate statistics.
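For reproducibility, a minimal NumPy sketch of the three traffic generators is given below. The rates and the 10,000-slot horizon follow the scenario definitions above, while the function names and the random seed are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
SLOTS = 10_000  # simulated time slots per scenario

def poisson_trace(lam, slots=SLOTS):
    """Packet arrivals per slot for one ONU (Poisson with mean rate lam)."""
    return rng.poisson(lam, size=slots)

def fluctuating_trace(lo=0.2, hi=0.8, period=1000, slots=SLOTS):
    """Time-varying Poisson rate alternating between lo and hi every period slots."""
    rates = np.where((np.arange(slots) // period) % 2 == 0, lo, hi)
    return rng.poisson(rates)

low_traffic = poisson_trace(0.2)       # under-utilization scenario
high_traffic = poisson_trace(0.8)      # near-saturation scenario
bursty_traffic = fluctuating_trace()   # bursty, real-world-like dynamics
```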

4.1. Training and Inference Latency

This section offers a comparison of the Cumulative Distribution Function (CDF) of training and inference latency over three separate traffic scenarios, low, fluctuating, and high traffic, with ONU counts of 8, 32, and 64 in each scenario. This allows us to understand how delay behaves as network complexity and traffic conditions vary.
The training latency shown in the CDFs represents the time taken to complete one epoch of local training at each ONU, averaged across the simulation window. Each epoch includes a full pass over the local traffic samples for that round. Inference latency, on the other hand, corresponds to the time required to process a single input feature vector (i.e., one traffic sample) during prediction. This latency includes both forward propagation at the ONU (for the local model segment) and the continuation of inference at the OLT (for the server-side model segment), as dictated by the Split Learning structure.
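A hedged sketch of this instrumentation is shown below; train_one_epoch and predict_one are hypothetical stand-ins for the per-ONU training epoch and the split ONU-to-OLT inference path, to be replaced by the actual calls in practice.

```python
import time
import numpy as np

# Hypothetical stand-ins for one local training epoch and one split
# ONU -> OLT inference; replace with the real model calls.
def train_one_epoch():
    time.sleep(0.001)

def predict_one():
    time.sleep(0.0002)

def timed(fn):
    """Return the wall-clock duration of a single call to fn, in seconds."""
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

train_lat = np.array([timed(train_one_epoch) for _ in range(200)])
infer_lat = np.array([timed(predict_one) for _ in range(200)])

xs = np.sort(train_lat)
cdf = np.arange(1, xs.size + 1) / xs.size   # empirical CDF, as in Figure 3
```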
Under the low traffic scenario, as illustrated in Figure 3a, training and inference latencies are tightly coupled below 0.2 s for all ONU configurations. The 8-ONU configuration exhibits the lowest latency, implying that a smaller number of ONUs speeds up training and inference because of reduced data contention. We also notice a slight increase in latency as the number of ONUs rises to 32 and then to 64, which indicates that the collaborative learning model scales efficiently under smaller loads. Nevertheless, the difference in training and inference delay remains negligible.
On the other hand, under fluctuating traffic conditions, as presented in Figure 3b, we observe that our proposed CSL-DBA model adapts well to moderately unpredictable traffic: training latency remains manageable even as the number of ONUs scales up, while inference latency stays tightly bounded, showing robustness in generating real-time inferences.
Figure 3c shows that training becomes more resource-intensive as the network grows. However, the CSL-DBA model still handles inference very well, with only a slight increase in latency spread. This indicates that our CSL-DBA approach generalizes effectively, supporting training under low, fluctuating, and congested traffic conditions. This capability is particularly advantageous, given that training latency and fairness have been identified as critical bottlenecks hindering the effectiveness of TDM-PON’s DBAs under high traffic loads and increasing numbers of ONUs, as highlighted in [15,17].

4.2. Training Versus Validation Loss

The relationship between training and validation losses provides insights into potential performance issues such as overfitting, underfitting, slow convergence, or instabilities during model training. Figure 4a illustrates the low traffic scenario, where both training and validation losses exhibit a steady gradual decline. The close alignment of the two curves indicates the absence of overfitting and suggests strong generalization to unseen data. The rapid convergence to a minimal loss demonstrates the model’s ability to enhance the predictability of bandwidth demands, which enables efficient resource utilization with minimal performance degradation. In such a low traffic scenario, the availability of high-quality training data and the absence of significant demand fluctuations contribute to an optimal learning environment for our proposed model.
Figure 4b presents the training and validation loss curves for the fluctuating traffic scenario. Similar to the previously described behavior of low traffic scenario, both training and validation losses decrease smoothly and consistently across all configurations, with no significant spikes or instability observed. While validation loss varies only during the first three epochs, the proposed model continues learning and does not collapse, demonstrating robustness and the ability to track rapid changes in bandwidth demands. Furthermore, we see that the convergence behavior is similar across the different ONU counts (8, 32, and 64), with loss values stabilizing at a marginal value of 0.08. The close alignment between training and validation curves suggests that the model maintains strong generalization capabilities, even under fluctuating traffic conditions, indicating reliable and stable learning dynamics.
Figure 4c presents the training and validation loss curves under a high traffic scenario, and the results show that the overall trend remains stable with minimal overfitting across all configurations. The consistent convergence for both training and validation losses demonstrates the model’s scalability and robustness in handling high traffic loads and effective adaptability to the complexity and dynamicity of the traffic patterns encountered in high-load scenarios without significant degradation in performance. On top of the low and converging validation losses, there is no evidence of overfitting or instability. Therefore, while further enhancements such as regularization techniques (e.g., weight decay or dropout [30]) might offer marginal gains, the current training configuration already yields strong performance under high traffic conditions.

4.3. Accuracy and Inference Latency

In time-sensitive network environments such as TDM-PON, maintaining high inference accuracy while minimizing latency is critical for ensuring responsive and efficient bandwidth allocation. In this subsection, we explore the interplay between prediction accuracy and inference latency under different traffic conditions, providing insights into the suitability of our proposed CSL-DBA model for real-time deployment. By evaluating how CSL-DBA performs across different traffic conditions and ONU configurations, we assess its ability to scale while sustaining precise predictions with minimal delay.
Figure 5a illustrates the performance of the proposed model in a low traffic scenario, where the results indicate that our CSL-DBA model maintains a high accuracy above 99.6% together with low inference latency across all ONU configurations. This demonstrates the model’s capability to deliver reliable bandwidth predictions with minimal computational delay when operating under lightly loaded network conditions. The observed distribution of data points shows that increasing the number of ONUs, from 8 to 32 and then to 64, has a negligible impact on both accuracy and latency. Inference latency remains consistently below 0.1 s, indicating that the collaborative learning process introduces minimal overhead. Thus, the proposed model continues to perform optimally even as the network grows in size.
Figure 5b presents the model’s performance under fluctuating traffic conditions. Here, the accuracy remains high across all configurations, consistently exceeding 99.7%, although slight variations are observed in the 32- and 64-ONU setups. As the traffic becomes more dynamic and unpredictable, the increased complexity introduces a slightly higher computational demand, reflected in a rise in inference latency, especially in the case of 64 ONUs, where latency reaches up to 0.15 s. The modest drop in accuracy compared to the low traffic scenario is expected, as such volatility introduces uncertainty that complicates the generalization of learned patterns; nevertheless, it suggests that the model is capable of adapting to rapid and irregular fluctuations in bandwidth demand. The model thus maintains robust performance, demonstrating resilience and adaptability with an acceptable latency trade-off under highly variable traffic conditions.
Finally, Figure 5c shows the results for the high traffic scenario. While the accuracy of our proposed CSL-DBA model remains consistently high, above 99.6%, there is a noticeable increase in inference latency with the larger network size, approaching 0.25 s in the case of 64 ONUs. This reflects the higher processing burden associated with managing large-scale networks under heavy congestion. Factors such as queuing delays and high variability in bandwidth demand usually intensify computational complexity; however, our CSL-DBA model successfully maintains real-time performance with stable and high accuracy, underscoring its robustness in congested environments. The findings also highlight a scalability challenge: as the number of ONUs and the traffic intensity increase, the cost in inference time becomes more pronounced.
Hence, the proposed CSL-DBA model demonstrates strong resilience across all traffic conditions, reliably forecasting bandwidth allocation. Yet, the observed growth in inference latency under high traffic conditions emphasizes the need for continued refinement to guarantee scalability and responsiveness in next-generation TDM-PON systems. To preserve real-time adaptability in large-scale deployments, future enhancements should focus on reducing computational overhead—potentially through model simplification or latency-aware optimization techniques [31,32].

4.4. Communication Overhead Analysis

An important metric for evaluating the scalability of learning-based DBA approaches is communication overhead, which refers to the volume of data exchanged between ONUs and the OLT during model updates. Traditional ML-based DBA requires centralized data aggregation at the OLT, leading to high upstream bandwidth usage and significant latency. FL-based DBA frameworks partially address this by decentralizing model training; however, they still involve full model weight exchanges in every training round, which may be impractical for resource-constrained ONUs.
To quantify the overhead, we implemented a baseline FL-based DBA model using the same local prediction model at each ONU as used in CSL-DBA, trained under a standard FL protocol (i.e., local training followed by global aggregation). Both approaches used identical layer structures and optimizer configurations to guarantee a fair comparison. The critical difference lies in the data exchange; FL requires each ONU to send the full model weights (typically 50–100 KB) to the OLT per training round, resulting in an average uplink usage of approximately 200 KB per ONU.
On the other hand, CSL-DBA significantly minimizes communication overhead by transmitting only intermediate activations and partial gradients. Based on simulation logs and profiling using TensorFlow, the average uplink size per ONU was measured at approximately 80 KB. This results in a communication overhead reduction of roughly 60% compared to the FL baseline. When compared to centralized ML approaches that require raw traffic data transmission from each ONU, the overhead reduction can exceed 80%.
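The reported reduction follows directly from these measured per-round uplink sizes, as the short computation below illustrates; the 64-ONU aggregate figures are an illustrative extrapolation from the per-ONU averages.

```python
fl_uplink_kb = 200    # avg. full-weight upload per ONU per round (FL baseline)
csl_uplink_kb = 80    # avg. activations + partial gradients per ONU (CSL-DBA)

reduction = 1 - csl_uplink_kb / fl_uplink_kb
print(f"Per-ONU uplink reduction vs. FL: {reduction:.0%}")   # -> 60%

# Aggregate uplink per training round for a 64-ONU PON, in megabytes.
n_onus = 64
print(f"FL : {fl_uplink_kb * n_onus / 1024:.1f} MB/round")   # -> 12.5 MB
print(f"CSL: {csl_uplink_kb * n_onus / 1024:.1f} MB/round")  # ->  5.0 MB
```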

4.5. Benchmarking CSL-DBA with Existing Solutions

The proposed CSL-DBA framework introduces a novel approach to bandwidth allocation in next-generation 6G-grade TDM-PON networks. To highlight its advantages, we systematically compare its functional capabilities with those of existing DBA schemes reported in the literature. Table 1 presents a comparative analysis between the CSL-DBA framework and various state-of-the-art DBA approaches. The comparison clearly demonstrates that CSL-DBA outperforms existing solutions in several key areas, particularly in traffic adaptability and prediction accuracy under dynamic load conditions. Unlike most conventional and ML-based DBA models that rely on centralized control at the OLT and are typically tuned for average or stationary traffic, such as those discussed in [15,19], our proposed CSL-DBA approach benefits from its utilization of the SL model to enable decentralized, real-time decision making without compromising efficiency or scalability.
Rule-based and QoS-oriented models [11,21] exhibit limited adaptability when confronted with bursty or non-uniform traffic, often resulting in reduced responsiveness. Similarly, security-focused DBA methods [26,27] typically overlook intelligent resource forecasting and lack general applicability across diverse network conditions. In contrast, the proposed CSL-DBA is capable of achieving an accuracy above 99.6% in variable and high traffic scenarios due to its collaborative learning mechanism, which effectively captures complex traffic dynamics between ONUs and the OLT.
Although recent architectural enhancements, such as fronthaul-aware and Mobile Edge Computing (MEC)-integrated DBA frameworks [1,8], provide improved infrastructure integration, they still lack an intelligent, adaptive learning layer. In contrast, CSL-DBA addresses this gap by offering a scalable, learning-driven solution optimized for next-generation optical networks. This confirms that the proposed CSL-DBA is a highly flexible, intelligent, and 6G-ready bandwidth management framework that meets the performance, privacy, and responsiveness demands of next-generation TDM-PON systems.
Compared to Federated Learning (FL), CSL-DBA offers improved communication efficiency and better suitability for asymmetric, low-power ONUs. While FL requires the full exchange of model weights after each training round, SL only transmits intermediate representations, making it lightweight and less demanding for ONU hardware. Moreover, SL inherently decouples forward and backward passes between ONUs and the OLT, allowing asynchronous execution and better latency control. These characteristics make CSL-DBA better suited for real-time DBA tasks in dynamic and large-scale TDM-PON environments.

5. Conclusions and Future Work

This paper introduced a novel Collaborative Split Learning-based Dynamic Bandwidth Allocation (CSL-DBA) framework designed to meet the evolving demands of next-generation 6G-grade TDM-PON networks. By decentralizing the learning process and enabling ONUs to contribute to bandwidth prediction without sharing raw data, CSL-DBA achieves the right balance between privacy preservation, scalability, and real-time adaptability. Through extensive simulations under low, fluctuating, and high traffic conditions, the proposed model consistently demonstrated high accuracy (>99.6%) while maintaining low inference latency, even when the number of ONUs scaled from 8 to 64. In contrast to traditional rule-based, centralized, or QoS-driven DBA schemes, CSL-DBA demonstrated superior performance in handling bursty traffic patterns, reducing latency overhead and enhancing responsiveness under network congestion. Comparative analysis with current state-of-the-art DBA approaches confirmed that CSL-DBA meets the requirements for dynamic and intelligent bandwidth management in next-generation optical access networks. Its distributed architecture, built-in learning intelligence, and compatibility with emerging MEC and fronthaul paradigms make it a future-proof solution for 6G deployment scenarios. Future work will explore model compression, latency-aware training strategies, and integration with network slicing and cross-layer optimization techniques to further enhance real-time performance and deployment feasibility in large-scale optical access infrastructures.

Author Contributions

A.F.Y.M. Conceptualization, methodology, formal analysis, validation, and writing—original draft; Y.M.A. Conceptualization, resources, investigation, writing—original draft, review and editing, and funding acquisition; E.M.M. Review, editing, and funding acquisition; L.O.W. Review, editing, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research project was funded by the Deanship of Scientific Research and Libraries, Princess Nourah bint Abdulrahman University, through the Program of Research Project Funding After Publication, grant No. (RPFAP-112-1445).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fayad, A.; Cinkler, T.; Rak, J. Toward 6G optical fronthaul: A survey on enabling technologies and research perspectives. IEEE Commun. Surv. Tutor. 2024, 27, 629–666. [Google Scholar] [CrossRef]
  2. Ishteyaq, I.; Muzaffar, K.; Shafi, N.; Alathbah, M.A. Unleashing the power of tomorrow: Exploration of next frontier with 6G networks and cutting edge technologies. IEEE Access 2024, 12, 29445–29463. [Google Scholar] [CrossRef]
  3. Borsatti, D.; Grasselli, C.; Contoli, C.; Micciullo, L.; Spinacci, L.; Settembre, M.; Cerroni, W.; Callegati, F. Mission critical communications support with 5G and network slicing. IEEE Trans. Netw. Serv. Manag. 2022, 20, 595–607. [Google Scholar] [CrossRef]
  4. Rawat, P.; Haddad, M.; Altman, E. Towards efficient disaster management: 5G and Device to Device communication. In Proceedings of the 2015 2nd International Conference on Information and Communication Technologies for Disaster Management (ICT-DM), Rennes, France, 30 November–2 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 79–87. [Google Scholar]
  5. Spantideas, S.T.; Giannopoulos, A.E.; Trakadas, P. Smart Mission Critical Service Management: Architecture, Deployment Options, and Experimental Results. IEEE Trans. Netw. Serv. Manag. 2024, 22, 1108–1128. [Google Scholar] [CrossRef]
  6. Apostolakis, K.C.; Margetis, G.; Stephanidis, C.; Duquerrois, J.M.; Drouglazet, L.; Lallet, A.; Delmas, S.; Cordeiro, L.; Gomes, A.; Amor, M.; et al. Cloud-native 5g infrastructure and network applications (netapps) for public protection and disaster relief: The 5g-epicentre project. In Proceedings of the 2021 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit), Porto, Portugal, 8–11 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 235–240. [Google Scholar]
  7. Newaz, S.S.; Ahvar, E.; Ahsan, M.S.; Kamruzzaman, J.; Karmakar, G.; Lee, G.M. Energy conservation in passive optical networks: A tutorial and survey. IEEE Commun. Surv. Tutor. 2024, 27, 667–724. [Google Scholar] [CrossRef]
  8. Dias, I.; Ruan, L.; Ranaweera, C.; Wong, E. From 5G to beyond: Passive optical network and multi-access edge computing integration for latency-sensitive applications. Opt. Fiber Technol. 2023, 75, 103191. [Google Scholar] [CrossRef]
  9. Singh, R.; Kumar, M. A comprehensive analysis for the performance of next generation passive optical network. In Proceedings of the 2021 International Conference on Smart Generation Computing, Communication and Networking (SMART GENCON), Pune, India, 29–30 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6. [Google Scholar]
  10. Allawi, Y.M.; Mohammed, A.F.; Moneer, E.M.; Shbat, M.; Abadleh, A.; Bilal, M. Cost-efficient citywide neutral host design: A micro-operator business model for expedited 5G and beyond network infrastructure rollout. IEICE Trans. Commun. 2024, E108-B, 450–464. [Google Scholar] [CrossRef]
  11. Arya, V.; Kumari, M.; Rana, A.K. Historical development of passive optical network (PON): A review. J. Opt. Commun. 2024. [Google Scholar] [CrossRef]
  12. Al Musalhi, N.; Zebari, G.M. Dynamic Bandwidth Allocation Energy Efficient Operation for WDM/TDM PON Architectures: A Survey. East J. Comput. Sci. 2025, 1, 71–78. [Google Scholar] [CrossRef]
  13. 1904.1-2013; IEEE Standard for Service Interoperability in Ethernet Passive Optical Networks (SIEPON). IEEE: Piscataway, NJ, USA, 2013. [CrossRef]
  14. ITU-T G.988; ONU Management and Control Interface (OMCI) Specification. ITU: Geneva, Switzerland, 2022.
  15. Hatem, J.A.; Dhaini, A.R.; Elbassuoni, S. Deep learning-based dynamic bandwidth allocation for future optical access networks. IEEE Access 2019, 7, 97307–97318. [Google Scholar] [CrossRef]
  16. Mohammed, A.F.; Lee, J.; Park, S. Dynamic Bandwidth Slicing in Passive Optical Networks to Empower Federated Learning. Sensors 2024, 24, 5000. [Google Scholar] [CrossRef]
  17. Garima; Jha, V.; Singh, R.K. Comprehensive performance analysis of dynamic bandwidth allocation schemes for XG-PON system. Opt. Switch. Netw. 2023, 47, 100711. [Google Scholar] [CrossRef]
  18. Alshahrani, A.; Alqartas, E.; Almutairi, M.; Alessa, G.; Allawi, Y.M. The Future of Healthcare: On Designing 5G & Beyond Indoor Neutral Host for Smart City Medical Facilities. In Proceedings of the International Conference on Sustainability: Developments and Innovations, Riyadh, Saudi Arabia, 18–22 February 2024; Springer: Singapore, 2024; pp. 80–87. [Google Scholar]
  19. Garima; Jha, V.; Singh, R.K. A deep learning based dynamic bandwidth allocation method for XG-PON based mobile fronthaul for CRAN. Comput. Netw. 2024, 245, 110344. [Google Scholar] [CrossRef]
  20. Lin, Z.; Qu, G.; Chen, X.; Huang, K. Split learning in 6g edge networks. IEEE Wirel. Commun. 2024, 31, 170–176. [Google Scholar] [CrossRef]
  21. Zehri, M.; Haastrup, A.; Rincón, D.; Piney, J.R.; Sallent, S.; Bazzi, A. A QoS-Aware Dynamic Bandwidth Allocation Algorithm for Passive Optical Networks with Non-Zero Laser Tuning Time. Photonics 2021, 8, 159. [Google Scholar] [CrossRef]
  22. MIT Media Lab. Split Learning: Distributed and Collaborative Learning. 2018. Available online: https://www.media.mit.edu/projects/distributed-learning-and-collaborative-learning-1/overview/ (accessed on 17 May 2025).
  23. Han, D.J.; Bhatti, H.I.; Lee, J.; Moon, J. Accelerating federated learning with split learning on locally generated losses. In Proceedings of the ICML 2021 Workshop on Federated Learning for User Privacy and Data Confidentiality, ICML Board, Online, 24 July 2021. [Google Scholar]
  24. Koo, J.; Mendiratta, V.B.; Rahman, M.R.; Walid, A. Deep reinforcement learning for network slicing with heterogeneous resource requirements and time varying traffic dynamics. In Proceedings of the 2019 15th International Conference on Network and Service Management (CNSM), Halifax, NS, Canada, 21–25 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–5. [Google Scholar]
  25. Amirabadi, M.; Nezamalhosseini, S.; Kahaei, M.; Chen, L.R. A Survey on Machine and Deep Learning for Optical Communications. arXiv 2024, arXiv:2412.17826. [Google Scholar] [CrossRef]
  26. Atan, F.M.; Zulkifli, N.; Idrus, S.M.; Zin, N.A.M.; Ismail, N.A. Mitigating DBA Exploits: Enhanced Security Against Degradation Attacks in XG-PON. In Proceedings of the 2024 IEEE International Conference on Advanced Telecommunication and Networking Technologies (ATNT), Johor Bahru, Malaysia, 9–10 September 2024; Volume 1, pp. 1–4. [Google Scholar] [CrossRef]
  27. Butt, R.A.; Faheem, M.; Ashraf, M.W.; Khawaja, A.; Raza, B. Attack-aware dynamic upstream bandwidth assignment scheme for passive optical network. J. Opt. Commun. 2023, 44, 485–493. [Google Scholar] [CrossRef]
  28. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. Available online: https://www.tensorflow.org/ (accessed on 7 July 2025).
  29. Taheri, M.; Zhang, J.; Ansari, N. Design and analysis of green optical line terminals for TDM passive optical networks. J. Opt. Commun. Netw. 2016, 8, 221–228. [Google Scholar] [CrossRef]
  30. Salehin, I.; Kang, D.K. A review on dropout regularization approaches for deep neural networks within the scholarly domain. Electronics 2023, 12, 3106. [Google Scholar] [CrossRef]
  31. Li, P.; Wang, X.; Huang, K.; Huang, Y.; Li, S.; Iqbal, M. Multi-model running latency optimization in an edge computing paradigm. Sensors 2022, 22, 6097. [Google Scholar] [CrossRef]
  32. Lim, J. Latency-aware task scheduling for IoT applications based on artificial intelligence with partitioning in small-scale fog computing environments. Sensors 2022, 22, 7326. [Google Scholar] [CrossRef] [PubMed]
Figure 1. TDM-PON system architecture with CSL-DBA-based bandwidth allocation.
Figure 2. Flow diagram of the proposed CSL-DBA framework. The system operates across the following two levels: the ONU level, where local demand is predicted and gradients are computed based on historical traffic data; and the OLT level, where gradients are aggregated and the global model is updated and broadcast back to ONUs for subsequent use in local predictions.
Figure 3. Training and inference latency distributions under different traffic scenarios.
Figure 4. Training versus validation loss under different traffic conditions for 8, 32, and 64 ONUs.
Figure 5. Accuracy versus inference latency under different traffic conditions for 8, 32, and 64 ONUs.
Table 1. Benchmarking the proposed CSL-DBA against existing DBA solutions.
| DBA Solutions | Reference | Traffic Adaptability | Accuracy | 6G Readiness |
|---|---|---|---|---|
| CSL-based | This Work | High: real-time adaptability to variable loads | High: >99.6% under diverse conditions | Yes: engineered for 6G optical edge |
| DL-based | [15,17,19] | Moderate: effective with trained models only under static conditions | High: 97∼99% under fixed traffic loads | Partial: supports 5G scenarios but lacks 6G optimization |
| Security-Enhanced | [26,27] | Low: rule-based thresholding with limited dynamic response | Moderate: dependent on attack detection efficiency | No: static logic, lacks predictive capability |
| QoS-Aware | [9,11,21] | Low: based on pre-established static policies | Low: inflexible to sporadic or real-time traffic | No: legacy network compatibility only |
| SL-based | [20] | Low: not assessed specifically for PON traffic | High: demonstrated in alternative sectors (healthcare, IoT) | Yes: fundamental framework for 6G learning paradigms |
| Energy-Aware | [7,12] | Moderate: indirect adaptation via energy metrics | Moderate: accurate under balanced load scenarios | Partial: energy-focused, lacks AI-driven flexibility |
| Fronthaul-Aware | [1,8,10] | Moderate: optimized resource deployment for edge computing | Variable: dependent upon network orchestration granularity | Yes: suitable for scalable 5G/6G deployments |