Sensors
  • Article
  • Open Access

11 December 2025

Neural Network–Based Adaptive Resource Allocation for 5G Heterogeneous Ultra-Dense Networks

1 Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
2 Department of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah 41411, Saudi Arabia
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Future Radio Wireless Sensor Networks for 5G Networks: Challenges and Opportunities

Abstract

Increasing spectral bandwidth in 5G networks improves capacity but cannot fully address the heterogeneous and rapidly growing traffic demands. Heterogeneous ultra-dense networks (HUDNs) play a key role in offloading traffic across multi-tier deployments; however, their diverse base-station characteristics and diverse quality-of-service (QoS) requirements make resource allocation highly challenging. Traditional static resource-allocation approaches lack flexibility and often lead to inefficient spectrum utilization in such complex environments. This study aims to develop a joint user association–resource allocation (UA–RA) framework for 5G HUDNs that dynamically adapts to real-time network conditions to improve spectral efficiency and service ratio under high traffic loads. A software-defined networking controller centrally manages the UA–RA process by coordinating inter-cell resource redistribution through the lending of underutilized resource blocks between macro and small cells, mitigating repeated congestion. To further enhance adaptability, a neural network–adaptive resource allocation (NN–ARA) model is trained on UA–RA-driven simulation data to approximate efficient allocation decisions with low computational cost. A real-world evaluation is conducted using the downtown Los Angeles deployment. For performance validation, the proposed NN–ARA approach is compared with two representative baselines from the literature (Bouras et al. and Al-Ali et al.). Results show that NN–ARA achieves up to 20.8% and 11% higher downlink data rates in the macro and small tiers, respectively, and improves spectral efficiency by approximately 20.7% and 11.1%. It additionally reduces the average blocking ratio by up to 55%. These findings demonstrate that NN–ARA provides an adaptive, scalable, and SDN-coordinated solution for efficient spectrum utilization and service continuity in 5G and future 6G HUDNs.

1. Introduction

With the accelerating development of cellular networks, the number of connected devices has grown significantly [1]. In addition, users are characterized by different features and quality-of-service (QoS) requirements, such as high-speed mobility, high reliability, minimum latency, and high data rates [2,3]. Although previous cellular network generations attempted to enhance the data rate, coverage area, and bandwidth to enable the simultaneous communication of more devices, the number of connected devices worldwide, including wearables, smart home devices, and smartphones, is expected to increase rapidly [2,4]. Monthly network traffic volume was about 7.462 exabytes in 2010 and is predicted to reach about 5016 exabytes by 2030 [5], an increase of about 672-fold over 2010 levels. Indeed, worldwide network traffic is expected to increase by about 20,000 times between 2010 and 2030 based on 5G vision studies [6].
The massive demand for higher data rates and greater bandwidth led to the commercial worldwide rollout of 5G by the year 2020 [7]. 5G technology facilitates the delivery of dependable, high-speed services with minimal delays [8], with the aim of providing both fixed and mobile broadband services universally, being accessible to anyone, anywhere, and at any time [9]. Three service categories are supported by 5G: ultra-Reliable Low Latency Communication (uRLLC), enhanced Mobile Broadband (eMBB), and massive Machine Type Communication (mMTC) [10]. The QoS requirements vary based on the service category: uRLLC services require high reliability (up to 99.99%) and a strict latency of about 1 ms; eMBB services require a high data rate of up to 20 Gbps as a downlink peak data rate with a latency of about 4 ms; and mMTC services permit a high density of connected devices (up to 1 million devices per km²), with relaxed requirements regarding latency [11,12]. Whereas previous generations of cellular networks faced challenges regarding the data rate, connectivity, and latency, 5G effectively addresses these limitations [11]. Figure 1 illustrates the 5G heterogeneous ultra-dense networks (HUDNs), highlighting how various user types—vehicles (uRLLC), pedestrians and bikes (eMBB), and IoT sensors (mMTC)—connect through small and macro base stations (BSs) under Software Defined Networking (SDN) control management, together with the key 5G features of low latency, high reliability, massive connectivity, and high throughput.
Figure 1. 5G heterogeneous ultra-dense network components and key features. Service types are color-coded (uRLLC in blue, eMBB in orange, mMTC/IoT in green). Solid arrows represent reporting from BSs to the SDN controller, while dashed arrows represent control commands from the SDN controller to the BSs.
To deal with these limitations, network designers have incorporated several key technologies into 5G systems to handle the growing traffic and meet diverse service requirements [4]. Massive multiple-input multiple-output (MIMO) is a crucial technology that can theoretically increase link capacity and spectral efficiency by up to 10-fold compared with conventional single antenna systems by employing spatial multiplexing techniques [13]. Millimeter-wave (mmWave) can also increase capacity by utilizing the 30–300 GHz spectrum range to mitigate sub-6 GHz band congestion [14]. In addition, deploying HUDNs improves spectral efficiency [15]. The principle of an HUDN is deploying a vast number of various small cells within legacy macro cells so that their number at least will be equal to the number of user equipment (UE) [15,16]. This densification improves the network coverage and capacity by allowing offloading traffic, balancing network loads between different tiers, and reducing congestion in densely populated areas that are experiencing exponential growth in traffic demand [16,17,18]. The heterogeneous architecture of 5G networks increases the management complexity and network control overhead. The implementation of SDN is proposed to reduce management complexity by separating the control plane from the data plane, which allows for efficient management of network resources through a controller unit and minimizes overhead control messages [19,20]. However, employing this implementation in 5G environments introduces new challenges regarding resource allocation and cross-tier interference mitigation [17,18].
A 5G network includes numerous resources that should be appropriately allocated, such as spectrum, power, and channels [21]. The spectrum is a significant resource due to the spectrum scarcity issue [3,18]. Thus, the spectrum allocation approach should be sufficiently intelligent and dynamic to preserve this scarce resource and meet diverse user requirements [2]. Resource allocation (RA) refers to dedicating a proportion of the available resources to achieve specific user demands [22]. The 3rd Generation Partnership Project (3GPP) defines RA as sharing, managing, and distributing network resources between different users to satisfy their QoS requirements [23,24], which requires allocating the available spectrum dynamically between users by applying an appropriate scheduling mechanism while taking real-time network conditions into consideration [1].
An efficient scheduling mechanism incorporated with an RA approach is necessary to enable the coexistence of uRLLC, eMBB, and mMTC users with different QoS requirements in 5G HUDNs [1,22]. The scheduling process in 5G refers to serving user demands according to their priority while balancing the services provided to lower-priority users to avoid overall network performance degradation [22,23]. Due to the stochastic nature of uRLLC services and the necessity of serving them immediately, the scheduling of other service types may be negatively impacted. The 3GPP has proposed two scheduling approaches to handle uRLLC services, reservation-based scheduling and puncturing scheduling, each of which has advantages and disadvantages [25,26,27]. Achieving balanced RA and fair scheduling at the BSs remains a challenge due to the limitations imposed by the restricted spectrum and the heterogeneous network environment [28,29].
When designing a framework incorporating an efficient RA approach with an appropriate scheduling mechanism, all criteria that affect the RA process should be considered [22]. We can divide these criteria into two primary types: those associated with the 5G HUDN architecture and those related to user features and requirements. Criteria related to the 5G HUDN architecture include the deployment of small cells within legacy macro cells, central resource management (which encompasses distributing resources and redistributing them if necessary), and congestion management; in contrast, criteria related to user features and requirements encompass fluctuations in traffic volume over time, varied QoS requirements, different types of UEs, and high-speed mobile UEs. Recent studies, as reviewed in Section 2, have proposed RA schemes that focus on at most one or two of these criteria, thus neglecting other criteria impacting the RA process.
In our proposed approach, the RA framework considers all of the mentioned criteria and is applied to an urban area to reflect a real 5G HUDN scenario and enable accurate performance evaluation. The urban area selected for this purpose is Los Angeles (LA), a city in Southern California and the second-largest city in the United States by population. It is a smart city, employing all of the advanced technologies mentioned above to improve the QoS delivered to individuals. This study is not merely a theoretical case study: it reflects realistic HUDN deployment conditions, as real LA base-station datasets were used to generate user-distribution datasets through different simulation scenarios.
Machine Learning (ML) techniques have recently received considerable attention in the context of developing and automating 5G networks, especially in resource allocation problems [30]. These techniques can address the challenges faced by conventional optimization methods, which leave a significant gap between theoretical network design and real-time implementation [31]. ML models enable the analysis of network behaviors, such that they can predict future conditions and prepare accordingly. Supervised learning is one of the main paradigms of ML, which can address real-world computational challenges such as predicting numerical target values from given datasets [32]. It encompasses a variety of models, including decision trees, random forests, k-nearest neighbors, logistic regression, and artificial neural networks [33]. As highlighted in recent surveys on AI-empowered wireless networks, such as [34], artificial intelligence and neural learning techniques are expected to play a vital role in adaptive resource management for beyond-5G and 6G systems. This motivates the present study to leverage neural-network-based learning within an SDN-controlled 5G HUDN environment.
As highlighted in Section 2, existing RA approaches for 5G HUDNs exhibit several limitations. Most adopt static spectrum allocation without allowing resource redistribution between cells, which leads to repeated congestion under dynamic traffic conditions. Moreover, previous works often simplify mobility or service diversity, limiting realism in HUDN settings. Additionally, several ML-based RA works employ synthetic datasets or evaluate ML accuracy without evaluating the corresponding network performance impact. Novelty and contributions. Unlike most existing AI-driven RA frameworks, which rely on deep or reinforcement learning agents operating at the single-cell level or without real-time coordination, the proposed approach introduces a hierarchical and SDN-coordinated UA-RA framework. Furthermore, unlike reinforcement-learning-based RA schemes that require continuous online exploration, which leads to high decision latency, the proposed framework uses an offline-trained lightweight ANN that enables fast and stable inference within the SDN controller. The SDN controller supervises inter-cell cooperation and dynamic resource block (RB) lending between macro and small cells, while a lightweight feed-forward neural network integrated with the SDN controller refines these heuristic allocation and lending decisions in real time across the macro and small-cell tiers. This joint design minimizes computational latency and enables real-time adaptation to traffic, SINR, and QoS conditions across heterogeneous tiers.
The main contributions of this paper can be summarized as follows:
  • We develop a hierarchical and SDN-coordinated UA–RA framework that integrates a lightweight artificial neural network (ANN) for adaptive resource management and efficient scheduling, explicitly considering the unique characteristics of 5G HUDNs and diverse user QoS requirements.
  • We improve the proposed framework’s performance by enabling it to work proactively to prevent BS congestion through the redistribution of available resources using an SDN controller.
  • We integrate the ML model with the highest predictive accuracy into the SDN controller to reduce computational complexity and enhance the effectiveness of the adaptive RA process.
  • We evaluate our proposed framework on three real datasets. Two datasets represent the macro and small BS distributions in a selected area of LA, while the third was generated to represent the distribution of users in the same area. The evaluation includes both network performance metrics and ML performance metrics.
The remainder of this paper is structured as follows: Section 2 reviews the recent literature on RA for 5G networks and the associated limitations. In Section 3, we present the system model. The proposed adaptive user association–resource allocation (UA-RA) approach and its advantages are presented in Section 4. In Section 5, we describe the evaluation scenario with related simulation assumptions and details on the preparation of ML techniques. The performance evaluation, including analysis of the results regarding the proposed adaptive UA-RA approach, ML models, and NN-ARA approach, is provided in Section 6. In Section 7, we discuss our results. Finally, Section 8 concludes the paper.

3. System Model

3.1. Deployment Model

We consider an Orthogonal Frequency Division Multiple Access (OFDMA)-based downlink in a two-tier HUDN that includes a set of BSs to serve randomly distributed UE with different QoS requirements, as depicted in Figure 2. This system model allows us to study the performance of a 5G network in a high-density urban environment. Tier 1 includes a set of Macro-BSs (MBSs), while tier 2 consists of a set of Small-BSs (SBSs). Due to the overlap between the coverage areas of the two tiers, we assume that each UE is within the range of at least one BS. Our system model adopts 5G NR standard specifications: the MBSs operate at a sub-6 GHz carrier frequency of 3.5 GHz with a bandwidth of 100 MHz, while the SBSs use a mmWave carrier frequency of 28 GHz and a bandwidth of 500 MHz. According to 3GPP NR numerology, subcarrier spacing values of 30 kHz and 120 kHz are applied for the 3.5 GHz macro-cell and 28 GHz small-cell tiers, respectively [53]. The spectrum resource is divided into several RBs, where each RB is a set of 12 contiguous subcarriers in the frequency domain.
Figure 2. System model of the two-tier 5G heterogeneous ultra-dense network.
All BSs have predefined fixed bandwidth; consequently, they can only satisfy a limited number of UEs simultaneously. This bandwidth is controlled through a central SDN Controller, which plays a crucial role in efficiently managing and allocating resources in the network. The SDN Controller distributes and redistributes the bandwidth based on the current network state, which is periodically updated through status messages from each BS reporting its utilized and remaining RBs. These updates enable the controller to dynamically balance spectrum allocation among neighboring BSs, as further detailed in Section 4. In our implementation, these status messages are exchanged once per scheduling interval (1 ms), aligning with common 5G NR slot durations and enabling timely spectrum reallocation. A functional representation of the considered SDN Controller architecture is depicted in Figure 3. Each BS has buffers to store incoming requests in each time slot, and the downlink scheduler allocates RBs to active users based on their priority.
Figure 3. Functional architecture of the SDN Controller for dynamic bandwidth management across base stations.
When testing our system model, various UE types distributed across selected downtown streets in LA were considered. The selected area reflects a real 5G environment with a high density of users having different communication requirements. The UE types were categorized into vehicles equipped with V2X communication systems, bikes, pedestrians, and IoT sensors. In this study, we are interested in three service categories: uRLLC, represented by vehicles; eMBB, represented by bikes and pedestrians, who use video streaming and web browsing application services; and mMTC, represented by static IoT environmental monitoring sensors that were randomly and densely distributed across the defined area.

3.2. Channel Model

Due to the specific characteristics of our deployment scenario, the 3GPP path loss models were considered for the proposed approach. The Urban Macro-Non-Line-of-Sight (UMa-NLOS) model is a common choice for urban environments, as it is particularly effective when high buildings, terrain, and obstructions prevent direct line-of-sight (LOS) to BSs. In contrast, the Urban Micro-LOS (UMi-LOS) model, a practical and reliable option, is often more suitable when considering SBSs, especially in dense urban environments such as LA, where users are close to such BSs and a direct LOS can be established. For the UMa and UMi models, the corresponding standard deviation of shadow fading is applied, while the Rayleigh-fading channel between users and the associated BS is modeled as an exponentially distributed random variable with unit mean [54]. Rayleigh small-scale fading is assumed for all BS–UE links, which is a common simplification in urban system-level evaluations and is particularly suitable for NLOS or rich-scattering conditions. For tractability in large-scale simulations, channel fading coefficients are generated independently across channels, users, BS–UE links, and time slots, and spatial as well as temporal correlation of small-scale fading is neglected, consistent with common system-level simulation practices. Table 1 shows the path loss model equations adopted in this study, as described in 3GPP TR 38.901 version 18.0.0 [55].
Table 1. Path-Loss models for Urban Macro Non-Line-of-Sight (UMa-NLOS) and Urban Micro-LOS (UMi-LOS) scenarios.
In Table 1, $f_c$ is the carrier frequency (in GHz); $h_{\mathrm{UE}}$ and $h_{\mathrm{BS}}$ are the heights of the UE and BS (in meters), respectively; and $d_{\mathrm{BP}}$ is the breakpoint distance computed using Equation (1):
$d_{\mathrm{BP}} = \dfrac{4\, h_{\mathrm{BS}}\, h_{\mathrm{UE}}\, f_c}{c}$
where $h_{\mathrm{BS}}$ and $h_{\mathrm{UE}}$ are the effective heights of the BS and UE, respectively, and $c$ is the speed of light in free space. The distances $d$ between all BSs and UEs are calculated from GPS coordinates with the Haversine formula via Equation (2), which accurately provides the shortest distance over the Earth's surface [56]:
$d = 2r \arcsin\!\left( \sqrt{ \sin^2\!\left( \dfrac{\Delta \mathrm{lat}}{2} \right) + \cos(\mathrm{lat}_1)\cos(\mathrm{lat}_2)\sin^2\!\left( \dfrac{\Delta \mathrm{lon}}{2} \right) } \right)$
where $r$ represents Earth's mean radius.
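For illustration, Equation (2) maps directly to a few lines of Python; in this sketch, the function name and the 6371 km mean-radius value are our own illustrative choices.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius r (illustrative value)

def haversine_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle BS-UE distance per Equation (2); inputs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

# Example: two points in downtown LA, roughly 1.3 km apart
print(haversine_distance_km(34.0407, -118.2468, 34.0522, -118.2437))
```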

3.3. Mobility Model

The adopted mobility model permits all users, except IoT devices, to move continuously and randomly inside the selected downtown area shown in Figure 4a. Each user type is represented in the network by distinct features, such as latitude, longitude, speed, UE service class, and initial direction. We consider three kinds of mobile users: vehicles with a medium speed range of 10 to 60 km/h, bikes with a speed range of 10 to 30 km/h, and pedestrians with a speed of up to 3 km/h.
Figure 4. Mobility behavior of users on the selected streets in LA.
The vehicles and bikes follow specific routes, starting from their initial coordinates and moving straight along the road until the next intersection. Then, they randomly move straight or turn right or left at the intersection, as depicted in Figure 4b. In contrast, pedestrians follow the random waypoint model for mobility behavior, which is particularly useful for users who move inside buildings because it captures random start–stop movements and frequent direction changes within a confined area. This makes it suitable for representing indoor pedestrian motion patterns in dense urban scenarios. In our simulation, pedestrians are initially distributed randomly within the selected area, including both outdoor regions and areas corresponding to building footprints, which further justifies the suitability of the random waypoint model for representing their mobility behavior. When any user arrives at the edge of the selected area, they are returned inside the area by applying the same random mobility behavior. Figure 4b shows the adopted mobility model in this study.
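To make the pedestrian model concrete, the following minimal Python sketch implements random-waypoint motion with edge clipping; the degrees-per-metre conversion and the fixed seed are simplifying assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(7)
DEG_PER_M = 9e-6  # ~1 m expressed in degrees of latitude (simplifying assumption)

def random_waypoint_trace(pos, bounds, speed_kmh=3.0, dt_s=1.0, n_steps=100):
    """Random-waypoint motion for a pedestrian (Section 3.3): walk toward a
    random target, pick a new one on arrival, and stay inside `bounds` =
    (lat_min, lat_max, lon_min, lon_max)."""
    lower = np.array([bounds[0], bounds[2]])
    upper = np.array([bounds[1], bounds[3]])
    step = speed_kmh / 3.6 * dt_s * DEG_PER_M   # step length in degrees
    pos = np.asarray(pos, dtype=float)
    target = rng.uniform(lower, upper)
    trace = [pos.copy()]
    for _ in range(n_steps):
        delta = target - pos
        dist = np.linalg.norm(delta)
        if dist < step:                          # waypoint reached: new target
            pos, target = target.copy(), rng.uniform(lower, upper)
        else:
            pos = np.clip(pos + delta / dist * step, lower, upper)
        trace.append(pos.copy())
    return np.array(trace)

# Example: a pedestrian wandering inside a small downtown bounding box
trace = random_waypoint_trace([34.045, -118.25], (34.04, 34.05, -118.26, -118.24))
```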

3.4. Theoretical RA Performance Analysis

Each UE has specific RB demands and seeks to connect with a BS that can provide sufficient RBs to meet its QoS requirements. These RB demands are directly proportional to the data rate requirements and inversely proportional to the RB bandwidth and spectral efficiency, which is determined by the SINR between the UE and the BS. The equation for calculating the required RBs for a UE to connect with a BS is given by Equation (3):
$\text{Required RBs} = \dfrac{\text{Data rate}}{\text{Spectral efficiency} \times \text{RB bandwidth}}$
where the Data rate represents the requested throughput of a given UE (in bps), the Spectral efficiency corresponds to the achievable data rate per unit bandwidth (bps/Hz), which depends on the selected modulation and coding scheme, and the RB bandwidth corresponds to the bandwidth of one RB (in Hz). For each UE, denoted as $UE_j$ where $j \in \{1, 2, \ldots, U\}$, after checking the user's speed, the received SINR of the UE over a particular RB based on coordinated scheduling can be obtained using Equation (4), where $P_{\mathrm{tx}}^{i}$ is the transmission power of the RB allocated by $BS_i$, $K$ is the total number of RBs at each BS, $d_{ij}^{-\eta}$ is the distance-based signal attenuation between the associated $BS_i$ and $UE_j$, and the negative path-loss exponent $\eta$ quantifies the rate at which the signal power decreases as distance increases. The adjacent BSs contributing to the interference are denoted by $I$.
$\gamma_{\mathrm{down}}^{ij,k} = \dfrac{\frac{P_{\mathrm{tx}}^{i}}{K}\, d_{ij}^{-\eta}}{\sigma^2 + \sum_{f \in I} \frac{P_{\mathrm{tx}}^{f}}{K}\, d_{fj}^{-\eta}}$
Furthermore, $\sigma^2$ is the Additive White Gaussian Noise (AWGN) power, which is calculated using Equation (5), where $N_0$ is the white noise power density and $BW$ is the bandwidth of the subchannel.
$\sigma^2 = N_0 \cdot BW$
Therefore, the downlink data rate for $UE_j$ on $RB_k$ allocated by $BS_i$, based on Shannon's theorem, can be calculated via Equation (6).
$R_{i}^{j,k} = \sum_{k=1}^{K} BW_k \log_2\!\left( 1 + \gamma_{\mathrm{down}}^{ij,k} \right)$
To calculate the spectral efficiency (SE) of $UE_j$ on $RB_k$, which is allocated by $BS_i$, Equation (7) is used, where $R_{i}^{j,k}$ is the downlink data rate for $UE_j$ on $RB_k$ served by $BS_i$ and $BW_k$ is the bandwidth of $RB_k$.
$\eta_{i}^{j,k} = \dfrac{R_{i}^{j,k}}{BW_k}$
These analytical formulations of downlink data rate and SE metrics serve as a theoretical reference for the NN-ARA evaluation discussed later in Section 5.
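To make these formulations concrete, the following NumPy sketch evaluates Equations (4)-(7) for a single RB; the numeric settings (path-loss exponent, distances, equal interferer power) are illustrative assumptions rather than the exact simulation parameters of Table 3.

```python
import numpy as np

def downlink_sinr(p_tx_w, K, d_serving_m, d_interf_m, eta, noise_w):
    """Equation (4): per-RB SINR with transmit power split equally over K RBs.
    Interferers are assumed to use the same per-RB power (a simplification)."""
    signal = (p_tx_w / K) * d_serving_m ** (-eta)
    interference = np.sum((p_tx_w / K) * np.asarray(d_interf_m) ** (-eta))
    return signal / (noise_w + interference)

def rb_rate_and_se(bw_rb_hz, sinr_linear):
    """Equations (6) and (7): Shannon rate on one RB and its spectral efficiency."""
    rate_bps = bw_rb_hz * np.log2(1.0 + sinr_linear)
    return rate_bps, rate_bps / bw_rb_hz

# Illustrative macro-tier numbers: 46 dBm, 273 RBs (100 MHz at 30 kHz SCS),
# path-loss exponent 3.5, serving BS at 120 m, two interferers farther away.
p_tx = 10 ** ((46 - 30) / 10)                 # 46 dBm -> watts
bw_rb = 12 * 30e3                             # 12 subcarriers x 30 kHz (Hz)
noise = 10 ** ((-174 - 30) / 10) * bw_rb      # -174 dBm/Hz over one RB, Eq. (5)
gamma = downlink_sinr(p_tx, K=273, d_serving_m=120.0,
                      d_interf_m=[450.0, 600.0], eta=3.5, noise_w=noise)
rate, se = rb_rate_and_se(bw_rb, gamma)
print(f"SINR = {10 * np.log10(gamma):.1f} dB, rate = {rate / 1e6:.2f} Mbps, SE = {se:.2f} bps/Hz")
```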

4. Proposed Approach

This study proposes an adaptive UA-RA approach to ensure the best resource utilization and avoid the risk of insufficient resources after user association completion. In the proposed approach, a new scheduling mechanism that benefits from the advantages of standard scheduling algorithms is adopted, which distributes the available resources between various users based on their QoS requirements. Therefore, it first provides uRLLC users with the best RBs, meeting their strict latency and reliability requirements. Then, it provides the remaining RBs to eMBB users who require high data rates. We also consider mMTC services in our proposed approach with the lowest priority. The adaptive UA-RA approach aims to leverage the coexisting macro and small tiers through offloading. In addition, the adaptive UA-RA approach employs the SDN controller to redistribute available resources between various BSs on the same tier to improve resource management and prevent repeated congestion. The main advantages of our adaptive UA-RA approach can be summarized as follows:
  • It joins UA with the RA process to avoid inefficient resource allocation.
  • It adapts to all influencing factors of 5G HUDNs and user characteristics to achieve the best RA decision.
  • It considers all crucial factors of the RA process, including the service priority, SINR, Channel Quality Indicator (CQI), and available resources at each BS.
  • It prioritizes users based on their QoS requirements.
  • It utilizes an SDN controller to manage and redistribute resources between various BSs on the same tier.
  • It employs ML models trained on a real dataset obtained from a selected urban area in LA.
This comprehensive consideration increases the effectiveness of our proposed approach because it ensures that the RA decision is made with an exhaustive understanding of the network conditions and user requirements. A list of the main notation used in this paper is given in Table 2.
Table 2. List of main notations.

4.1. Adaptive UA-RA Stages

The adaptive UA-RA approach allocates spectrum resources considering the different characteristics of the 5G HUDN and user requirements. These characteristics include the multi-tier nature of the HUDN, the coexistence of various service categories, user mobility behaviors, and user density. In such a complex network, different service categories compete to obtain resources to meet their various QoS requirements. Algorithms 1 and 2 provide pseudocode for the proposed adaptive UA-RA approach. Our approach is applied to a two-tier heterogeneous network, taking into consideration the dynamic and rapid demand changes due to user mobility. The following stages describe our proposed approach:
  • Initialization stage: All initial information about each $BS_i$ and $UE_j$ is set at this stage. This includes the number of available RBs at each BS, denoted as $RB_i^{\mathrm{avail}}$, the SINR value $\gamma$, and the speed threshold $ST$.
  • Requests classification stage:
    This stage involves classifying incoming requests. There are three categories of service requests: uRLLC, eMBB, and mMTC. Each request belongs to exactly one service class, based on its QoS requirements. The classification stage is necessary because it determines each request's priority level $P$, as shown in Equation (8):
    $P_{\mathrm{uRLLC}} > P_{\mathrm{eMBB}} > P_{\mathrm{mMTC}}$
  • Tier selection stage:
    Based on the speed threshold, the service tier will be selected in this stage. If the UE speed exceeds the speed threshold, the MBS will be chosen to serve the UE, as shown in Equation (9); otherwise, the served BS will be chosen from the SBS, as shown in Equation (10).
    $BS = \{ BS_i \mid BS_i \in MBS \}, \quad \forall\, UE_j \in UE : s_j > ST$
    $BS = \{ BS_i \mid BS_i \in SBS \}, \quad \forall\, UE_j \in UE : s_j \leq ST$
    This stage aims to reduce the burden on MBSs by offloading to SBSs, providing a large coverage area for high-mobility UE while minimizing handover overhead.
  • Cell selection stage:
    Once the tier is selected, the cell selection stage proceeds to determine the serving BS. The BS that provides the maximum SINR among the candidate BSs within the chosen tier is selected, subject to the constraint that the BS must also have sufficient available RBs to satisfy the user's demand. Mathematically, the selected BS can be expressed as $BS^{*}$, as shown in Equation (11):
    $BS^{*} = \underset{BS_i \in BS}{\arg\max} \left\{ \gamma_i \mid RB_i^{\mathrm{avail}} \geq RB_{ij}^{\mathrm{req}} \right\}$
    This stage aims to provide the best candidate BSs to serve UE with sufficient RBs to ensure high QoS while enhancing RB utilization.
  • CQI mapping stage:
    For each BS selected in the previous stage, the SINR value is automatically mapped to the corresponding CQI using Equation (12), which follows the CQI definition for 5G NR described in the 3GPP technical report. The maximum downlink modulation order used here is 64-QAM, as described in [57].
    $CQI = \min\!\left( \max\!\left( \dfrac{\log_2\!\left( 1 + \mathrm{SINR}_{\mathrm{linear}} \right)}{CQI_{\mathrm{step}}},\ CQI_{\min} \right),\ CQI_{\max} \right)$
    where $CQI_{\min}$ and $CQI_{\max}$ are the minimum and maximum CQI indices (equal to 1 and 15, respectively), $CQI_{\mathrm{step}}$ is equal to 0.1, and $\mathrm{SINR}_{\mathrm{linear}}$ is the corresponding linear-scale SINR. A short code sketch of this mapping is provided after this list.
  • Resource allocation stage:
    In this stage, the UA decision based on the previous stages determines the best BS for association. Considering the QoS requirements, the RA scheduler allocates the required RBs to each request via Equation (13). If the service is uRLLC, the best BS is selected to serve it immediately. Then, if an eMBB request needs to be associated with the same BS, the remaining RBs are checked and allocated to the user. However, the scheduler prioritizes eMBB requests with the highest CQI over others. Finally, the remaining available RBs are used to serve mMTC requests. Typically, mMTC requests generate periodic traffic with minimal QoS requirements regarding latency and data rate. The RA process performed by the RA scheduler is demonstrated in Algorithm 2.
    $\mathrm{RA\_Scheduler}\!\left( Q[BS_i],\ RB_i^{\mathrm{avail}} \right)$
  • Lending stage:
    This stage is triggered to activate the lending mechanism, with the aim of alleviating congestion caused by high UE density in some cells. It involves lending underutilized RBs at any BS to congested BSs, thus redistributing resources between different cells based on the current network state, which enhances the RA process. The lending mechanism allows each request to be served by the BS that provides the best channel conditions, which contributes to improving the spectral efficiency and RB utilization. In the lending stage, all state updates are executed by a centralized SDN Controller, which maintains $RB^{\mathrm{alloc}}$, $RB_i^{\mathrm{avail}}$, and the pending requests stored in $Q[BS_i]$. The SDN controller continuously monitors the pending requests across all BSs and triggers the lending process whenever a BS cannot satisfy its current demand. It selects a neighboring BS whose underutilized RBs are sufficient to meet the required need and lends these RBs to serve the user demand.
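As referenced in the CQI mapping stage above, Equation (12) can be transcribed directly into Python; the final integer cast is our assumption, since reported CQI indices are integers.

```python
import math

CQI_MIN, CQI_MAX, CQI_STEP = 1, 15, 0.1  # values stated for Equation (12)

def map_sinr_to_cqi(sinr_db):
    """Equation (12): scale log2(1 + SINR_linear) by CQI_step and clamp to [1, 15]."""
    sinr_linear = 10 ** (sinr_db / 10)
    raw = math.log2(1 + sinr_linear) / CQI_STEP
    return int(min(max(raw, CQI_MIN), CQI_MAX))

# Example: a strong channel saturates at the maximum index (15)
print(map_sinr_to_cqi(20.0))
```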

4.2. Adaptive UA-RA Constraints

Some constraints are imposed on the adaptive UA-RA approach:
  • Request classification constraint:
    Each UE can generate a request that should belong to one service class at a time, as shown in Equations (14) and (15).
    $\sum_{c \in C} z_{j,c} = 1, \quad \forall\, UE_j \in UE$
    $z_{j,c} \in \{0, 1\}, \quad \forall\, UE_j \in UE,\ \forall\, c \in C$
  • UE association constraint:
    Each UE should be associated with only one BS at any time, as shown in Equations (16) and (17).
    $\sum_{i \in BS} a_{ij} = 1, \quad \forall\, UE_j \in UE$
    $a_{ij} \in \{0, 1\}, \quad \forall\, BS_i \in BS,\ \forall\, UE_j \in UE$
  • BS capacity constraint:
    For each BS, the sum of the required RBs by its associated UEs must not exceed the available RBs at that BS, as shown in Equation (18).
    $\sum_{UE_j \in UE} r_{ij}\, a_{ij} \leq RB_i^{\mathrm{avail}}, \quad \forall\, BS_i \in BS$
    When the lending mechanism is activated, the BS capacity is updated with borrowed RBs. This allows each UE to be served by the BS which provides the best channel conditions.
  • BS power constraint:
    The total downlink transmit power at each BS must not exceed its maximum power limit, $P_i^{\max}$, as shown in Equation (19).
    $\sum_{UE_j \in UE} p_{ij}^{RB}\, r_{ij}\, a_{ij} \leq P_i^{\max}, \quad \forall\, BS_i \in BS$
  • Serving constraint:
    The best channel condition should be used to serve any UE to increase the spectral efficiency, as shown in Equation (20).
    $BS^{*}(j) = \underset{BS_i \in BS}{\arg\max} \left\{ \gamma_{ij} \mid RB_i^{\mathrm{avail}} \geq r_{ij} \right\}, \quad \forall\, UE_j \in UE$
Algorithm 1: Pseudocode for the Proposed Adaptive UA-RA Approach.
  • Input: MBS, SBS, UE information, QoS requirements
  • Output: BS* (for UA-RA decision)
 1: Initialize RB_i^avail, ST;                                  ▹ Initialization Stage
 2: for all requests from UE_j do                      ▹ Requests Classification Stage
 3:     Classify into {uRLLC, eMBB, mMTC};
 4:     Assign priority P: P_uRLLC > P_eMBB > P_mMTC;
 5: end for
 6: for all UE_j ∈ UE do
 7:     if speed(UE_j) > ST then                               ▹ Tier Selection Stage
 8:         BS_τ ← {BS_i ∈ MBS}
 9:     else
10:         BS_τ ← {BS_i ∈ SBS}
11:     end if
12:     B ← BS_τ sorted by γ_ij (descending for UE_j)          ▹ Cell Selection Stage
13:     for all BS_i ∈ B do
14:         if RB_i^avail ≥ RB_ij^req then
15:             BS*(j) ← BS_i
16:             break
17:         end if
18:     end for
19:     γ_j ← SINR(UE_j, BS*(j))                                ▹ CQI Mapping Stage
20:     CQI_j ← MapSINRtoCQI(γ_j)
21:     enqueue(Q[BS*(j)], ⟨UE_j, P_j, γ_j, CQI_j, RB_ij^req⟩)
22: end for
23: for all BS_i ∈ BS do                                  ▹ Resource Allocation Stage
24:     (RA_decision[BS_i], RB_i^avail) ← RA_Scheduler(Q[BS_i], RB_i^avail)
25: end for
26: for all τ ∈ {MBS, SBS} do                                        ▹ Lending Stage
27:     for all BS_i ∈ BS_τ do
28:         if pending[BS_i] > 0 then
29:             need ← pending[BS_i]
30:             D ← {BS_d ∈ BS_τ \ {BS_i} | RB_d^avail − RB_d^alloc ≥ need}
31:             if |D| > 0 then
32:                 BS_d* ← argmin_{BS_d ∈ D} dist(BS_i, BS_d)
33:                 LEND(BS_d* → BS_i, need)    ▹ SDN controller issues the lending command
34:                 RB_i^alloc ← RB_i^alloc + need        ▹ state updated by SDN controller
35:                 RB_d^avail ← RB_d^avail − need        ▹ state updated by SDN controller
36:                 pending[BS_i] ← 0                     ▹ state updated by SDN controller
37:             end if
38:         end if
39:     end for
40: end for
Algorithm 2: Pseudocode for the RA_Scheduler.
  • Input: Q[BS_i] of user tuples ⟨UE_j, P_j, γ_j, CQI_j, RB_ij^req⟩; current RB_i^avail
  • Output: RA_decision[BS_i], updated RB_i^avail
 1: RB_i^alloc ← 0;  RA_decision[BS_i] ← ∅
 2: Q_uRLLC ← {j ∈ Q[BS_i] | P_j = P_uRLLC}
 3: Q_eMBB ← {j ∈ Q[BS_i] | P_j = P_eMBB}
 4: Q_mMTC ← {j ∈ Q[BS_i] | P_j = P_mMTC}
 5: sort Q_eMBB by CQI_j descending                 ▹ Serve eMBB with highest CQI first
 6: for all j ∈ [Q_uRLLC ∥ Q_eMBB ∥ Q_mMTC] do
 7:     if RB_i^avail ≥ RB_ij^req then
 8:         RA_decision[BS_i][j] ← RB_ij^req
 9:         RB_i^avail ← RB_i^avail − RB_ij^req
10:     end if
11: end for
12: return RA_decision[BS_i], RB_i^avail
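To complement the pseudocode, the following Python sketch mirrors Algorithm 2 for a single BS; the dict-based user tuples and field names are illustrative assumptions.

```python
def ra_scheduler(queue, rb_avail):
    """Sketch of Algorithm 2: priority-ordered RB allocation at one BS.

    `queue` holds dicts with keys 'ue', 'priority' (0 = uRLLC, 1 = eMBB,
    2 = mMTC), 'cqi', and 'rb_req'. Returns the allocation decision and
    the updated number of available RBs.
    """
    urllc = [u for u in queue if u["priority"] == 0]
    embb = sorted((u for u in queue if u["priority"] == 1),
                  key=lambda u: u["cqi"], reverse=True)  # highest CQI served first
    mmtc = [u for u in queue if u["priority"] == 2]

    ra_decision = {}
    for user in urllc + embb + mmtc:            # strict service-priority order
        if rb_avail >= user["rb_req"]:
            ra_decision[user["ue"]] = user["rb_req"]
            rb_avail -= user["rb_req"]
    return ra_decision, rb_avail

# Example queue at one BS with 20 available RBs: v1 and p1 served, s1 left pending
demo = [{"ue": "v1", "priority": 0, "cqi": 9, "rb_req": 8},
        {"ue": "p1", "priority": 1, "cqi": 14, "rb_req": 10},
        {"ue": "s1", "priority": 2, "cqi": 5, "rb_req": 4}]
print(ra_scheduler(demo, rb_avail=20))
```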

5. Evaluation Scenario

We used real datasets representing a 5G HUDN scenario to evaluate the proposed adaptive UA-RA approach. Two datasets reflected MBSs and SBSs in the selected area in LA, while the third comprised various users distributed within the same location. The datasets are described in detail in Section 5.2. All simulations and data analysis were performed on a workstation equipped with an Intel® Core™ i7 processor, 64 GB DDR RAM, and an NVIDIA GeForce GTX 1060 6G GPU. The implementation was written entirely in Python 3.11.5 using open-source scientific libraries, including NumPy 1.26.4, Pandas 2.2.2, SciPy 1.12.0, and Matplotlib 3.9.2 for simulation and data handling, and PyTorch 2.0.1+cu117, scikit-learn 1.6.1, and XGBoost 2.1.4 for ML model development. Each simulation run emulated 5000 ms of network operation and required approximately 5–7 h of real execution time on the described hardware. Every run was repeated ten times with different random seeds, and the results were averaged to ensure statistical reliability.
The computational cost of the ML models was also evaluated to assess feasibility for real-time deployment. On the described hardware, training the ANN model required approximately 6–7 h for 381 epochs, while inference for a single UE request required less than 2 ms. The XGBoost and RF models exhibited slightly higher training times of 7–10 h, and the DT model required more than 10 h of training, although all models achieved comparable inference latency. These results confirm that the proposed NN-ARA framework can operate under near real-time conditions when integrated into an SDN controller.
The generated traffic during runtime falls under one of three service categories, with the percentage of network traffic in the uRLLC, eMBB, and mMTC categories being 15%, 25%, and 60%, respectively. At the millisecond level, uRLLC users generate traffic that follows a Poisson distribution for request arrivals and a Pareto distribution for bursty traffic loads. The heavy-tailed eMBB traffic follows the Pareto distribution, while stationary IoT devices—representing the mMTC service class as described in Section 3.1—generate periodic traffic according to the uniform distribution. In addition, uRLLC traffic loads per UE vary between 0.1 and 5 Mbps, eMBB traffic loads vary between 0.5 and 15 Mbps, and IoT traffic loads vary between 0.5 and 2 Mbps.
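As a concrete illustration of this traffic model, the sketch below draws per-UE loads from the stated distributions; the Pareto shape parameter, the arrival rate, and the fixed seed are assumptions of this sketch, since only the distributions and ranges are specified above.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed (assumption)

def poisson_arrivals(rate_per_ms, horizon_ms):
    """uRLLC request arrivals per 1 ms slot, following a Poisson distribution."""
    return rng.poisson(lam=rate_per_ms, size=horizon_ms)

def pareto_load_mbps(low, high, shape=2.0, size=1):
    """Heavy-tailed Pareto load, clipped to the stated per-UE range (Mbps)."""
    samples = low * (1.0 + rng.pareto(shape, size))
    return np.clip(samples, low, high)

urllc_loads = pareto_load_mbps(0.1, 5.0, size=10)    # uRLLC: 0.1-5 Mbps, bursty
embb_loads = pareto_load_mbps(0.5, 15.0, size=10)    # eMBB: 0.5-15 Mbps
mmtc_loads = rng.uniform(0.5, 2.0, size=10)          # mMTC: periodic, uniform
arrivals = poisson_arrivals(rate_per_ms=0.15, horizon_ms=5000)  # 5000 ms run
```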
The performance metrics measured in this study include the service ratio (SR), average downlink data rate, average RB utilization, and average spectral efficiency. For evaluating the UA–RA approach, the previous performance metrics were calculated directly from the simulation outcomes using the following equations:
$\eta_{RB} = \dfrac{RB_{\mathrm{used}}}{RB_{\mathrm{total}}}$
$SE_{\mathrm{avg}} = \dfrac{T_{\mathrm{served}}}{B_{\mathrm{system}}}$
$R_{\mathrm{avg}} = \dfrac{T_{\mathrm{served}}}{U_{\mathrm{served}}}$
where $RB_{\mathrm{used}}$ and $RB_{\mathrm{total}}$ denote the number of allocated and available RBs, respectively; $T_{\mathrm{served}}$ is the total served downlink traffic (bps); $B_{\mathrm{system}}$ is the system bandwidth (Hz); and $U_{\mathrm{served}}$ is the number of successfully served users. The SR is given by:
$SR = \dfrac{U_{\mathrm{served}}}{U_{\mathrm{total}}}$
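A minimal helper that turns these four definitions into code might look as follows; the function and argument names are illustrative.

```python
def evaluate_run(rb_used, rb_total, t_served_bps, b_system_hz, u_served, u_total):
    """Per-run evaluation metrics as defined above (ratios and averages)."""
    return {
        "rb_utilization": rb_used / rb_total,
        "avg_spectral_efficiency": t_served_bps / b_system_hz,  # bps/Hz
        "avg_data_rate": t_served_bps / u_served,               # bps per served user
        "service_ratio": u_served / u_total,
    }
```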
In contrast, for evaluating the NN-ARA approach, the average data rate and spectral efficiency were calculated using the theoretical equations previously defined in Section 3.4, because not all physical-layer parameters can be directly observed from the trained model outcomes. In addition, the proposed adaptive UA-RA and NN-ARA approaches were compared with two related resource-allocation approaches. The first, by Al-Ali et al. [38], is a scheduling approach that employs a greedy search to balance uRLLC and eMBB services within each BS by first serving users with the best channel conditions; specifically, we use the greedy scheduling strategy described in Section 2, and this work does not allow a lending mechanism. The second, by Bouras et al. [47], introduces a dynamic downlink RA approach designed for IoT traffic, in which neighboring BSs can lend up to 15% of their underutilized RBs to congested BSs to enhance coverage and data rate; this approach prioritizes lower-data-rate requests and utilizes a fixed 15% lending threshold, as detailed in Section 2. To ensure a fair comparison, both baseline approaches were re-implemented within the same Python simulation environment, using identical network topologies, traffic distributions, and BS parameters as our proposed approaches.
We used 5G NR as the air interface for the simulation. The system bandwidth of the MBSs was 100 MHz with a subcarrier spacing of 30 kHz, while that of the SBSs was 500 MHz with a subcarrier spacing of 120 kHz. The white noise density was set to −174 dBm/Hz. The maximum transmit power was 46 dBm for MBSs and 30 dBm for SBSs. The other simulation parameters are detailed in Table 3.
Table 3. Simulation parameters.

5.1. Preparation of the ML-Based Adaptive UA-RA Approach

ML models can be employed to address dynamic problems such as real-time RA. Artificial neural networks (ANNs) are supervised models that can be used to address management problems [58]; in particular, ANNs have been used to solve UA-RA problems by learning from the network structure [5]. They comprise multiple layers of neurons, which are connected to form a model that imitates the biological structure of neurons in the human brain [33]. Decision trees (DTs) represent another type of supervised machine learning model that can effectively address the 5G RA problem. They are particularly suitable for 5G network scenarios that demand rapid deployment, in which making real-time decisions is crucial for effective resource allocation [33]. Another machine learning model that is applicable to RA problems is random forest (RF) [59]. An RF consists of multiple decision trees, with the random selection of features employed to construct the base of each tree. Consequently, unknown samples are classified into specific classes based on the majority vote from the collection of decision trees [60]. Finally, eXtreme Gradient Boosting (XGBoost) is a gradient-boosted decision tree system specifically designed for efficiency and accuracy [61]. Its features, including parallelized tree construction, cache-aware implementation, regularized objectives, and sparsity-aware split handling, render it well-suited for 5G resource management models, providing high accuracy while maintaining efficient inference times [61,62]. The ML-based models for adaptive UA-RA were constructed in four phases, as follows:
  • Data preparation phase:
    This phase involved collecting two real datasets reflecting the distribution of BSs (i.e., MBSs and SBSs) in a selected area of downtown LA, and generating a third dataset reflecting the distribution of different UE types with different service requirements within the selected area. After this, any unnecessary data were eliminated, ensuring that all data were important for prediction of the serving BS using our proposed approach. Normalization was performed to make some columns more suitable for training ML models. Then, the adaptive UA-RA approach was used to accomplish labeling, as shown in Algorithm 1 and described in detail by the seven stages in Section 4. The pseudocode for the adaptive UA-RA algorithm demonstrates how to assign a serving-BS label to each sample in the UE dataset, which can then be used to train the various ML models. After the labeling process, the dataset was divided into 80/20 for training/testing purposes. The data for training were chosen randomly from the whole dataset, while the remaining data were used for testing purposes. To ensure the reliability and representativeness of the training data, the simulation framework used to generate dynamic features (e.g., SINR, RB availability, traffic requests) follows 3GPP NR-compliant channel, fading, mobility, and interference models, yielding realistic network behavior for both macro- and small-cell tiers. Additionally, the training dataset is randomly shuffled before each epoch to prevent ordering bias and enhance generalization. Furthermore, the heuristic UA–RA policy used to assign labels is a multi-criteria scheduler that jointly considers SINR, RB availability, service priority, and load balancing, making it substantially more robust than simple greedy selection rules. This guarantees that the labels reflect near-optimal allocation behavior rather than suboptimal patterns, thereby preventing the ANN from learning biased or unrealistic decisions.
  • ML model training phase:
    The training samples were used during this phase to train the ML models. In particular, ANN, DT, RF, and XGBoost models were trained in a supervised manner, with the aim of predicting the best BS to serve UE.
  • ML model testing phase:
    The testing samples were used to evaluate the trained ML models, as described in Section 6.2.1.
  • ML model deployment:
    The ANN, which was the trained ML model with the highest prediction accuracy, was deployed on an SDN controller to optimally control the spectrum pool of radio resources and enable dynamic redistribution to avoid congestion. The SDN controller is physically coupled to other network components, such as wireless BSs. The inputs of the trained Neural Network-Adaptive Resource Allocation (NN-ARA) model include the UE speed, the requested RBs, the UE service class, the available RBs at each BS, and SINR values. Using this information, the NN-ARA model can predict which BS can best allocate the required RBs for the user demand. Algorithm 3 provides pseudocode for the proposed NN-ARA approach.
Algorithm 3: Pseudocode for the NN-ARA approach.
  • Input: UE.speed, RB_req, UE.service, RB_avail, UE.SINR
  • Output: Serving BS
 1: while a UE request is received do
 2:     Input ← [UE.speed, RB_req, UE.service, RB_avail, UE.SINR]
 3:     BS ← NN_model(Input)
 4:     Start association(UE, BS)
 5:     Update RB_avail
 6: end while
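A minimal PyTorch sketch of the deployed classifier is shown below. The hidden-layer widths and the flattened five-element feature vector are illustrative assumptions; the 0.3 dropout rate follows Table 4, and the 272 output classes correspond to the 46 MBSs plus 226 SBSs of the evaluation area (Section 5.2).

```python
import torch
import torch.nn as nn

class NNARAModel(nn.Module):
    """Feed-forward ANN with three hidden layers for serving-BS prediction.
    Widths are illustrative; one output logit per candidate BS."""
    def __init__(self, n_features=5, n_base_stations=272, hidden=(256, 128, 64)):
        super().__init__()
        layers, width = [], n_features
        for h in hidden:
            layers += [nn.Linear(width, h), nn.ReLU(), nn.Dropout(0.3)]
            width = h
        layers.append(nn.Linear(width, n_base_stations))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Inference step of Algorithm 3: pick the BS with the highest logit
model = NNARAModel().eval()
request = torch.tensor([[45.0, 12.0, 0.0, 38.0, 17.5]])  # illustrative features
with torch.no_grad():
    serving_bs = model(request).argmax(dim=1).item()
```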

5.2. Datasets

This study used three datasets to evaluate our proposed approach. Two datasets cover the MBS and SBS distributions within a selected area of downtown Los Angeles, and both represent real network deployment data, while the third dataset was generated to reflect the distribution of UEs with different service requirements, based on realistic assumptions and spatial constraints derived from the real BS data.
  • MBSs dataset:
    This dataset was obtained from the LA GeoHub governmental website (updated on 19 April 2022). It contains real information about 5248 microwave towers in LA, including many features of these MBSs [63]. The most important features used in this study are the location of each MBS (its latitude and longitude coordinates) and the MBS identifier, which is distinct for each MBS. After filtering the MBSs to include only those in the selected area and removing redundant coordinates, the total number of MBSs came to 46.
  • SBSs dataset: This dataset was updated on 3 January 2025, and contains information about 3005 SBSs attached to streetlight poles [64]. It provides two features for each SBS: a distinct identifier and its location (regarding latitude and longitude coordinates). The dataset was filtered to include the SBSs in the study area, amounting to 226 SBSs. Figure 5a shows the actual distribution of the MBSs and SBSs in LA, based on the above mentioned datasets, while Figure 5b shows the distribution of MBSs and SBSs in the selected area.
    Figure 5. Distribution of macro and small BSs in Los Angeles.
  • User distribution dataset: This dataset was generated using Google Maps (Google LLC) and a Python-based simulator (Python 3.11.5) following the methodology of [65], with adaptations in tools and feature design. Additional modifications were adopted to create a more suitable dataset for our scenario. This dataset contained about 50,000 samples randomly distributed in an area with a high density of small cells in LA. To simulate a 5G environment in which the user distribution varies from one location to another—thus causing congestion in some areas—more users were added to the central study area, as shown in Figure 6. The samples were divided into three UE types—vehicles, bikes, and pedestrians—along with numerous static IoT devices representing the mMTC service class.
    Figure 6. Distribution of user equipments in the selected area of LA.
    Although vehicles and bikes should follow certain routes, as shown in Figure 4b, pedestrians are not restricted to moving on these routes, and some of them may be inside buildings. Each UE sample has five static features: latitude, longitude, speed, initial direction, and service class. To make the dataset suitable for studying the RA problem, additional features were generated dynamically during the simulation run, including the UE traffic request and the remaining RBs at each BS. Each user in the dataset generates a request based on its service class (i.e., uRLLC, eMBB, or mMTC). Each service class follows a distinct mathematical distribution and traffic load range, as detailed in Section 5. Thus, the final UE dataset used for ML training combines both static and dynamic features. The labels corresponding to the best BSs based on the UA-RA decisions were obtained using Algorithm 1.

6. Results

This section presents the results in two subsections. In Section 6.1, the performance of the proposed adaptive UA-RA approach is compared against that of existing approaches under different traffic load scenarios, starting from the supposed traffic load as explained in Section 5 up to a 10-fold increased traffic load. In Section 6.2, the trained ML models are first examined in Section 6.2.1, following which the performance of the proposed NN-ARA (as a result of integrating the selected ML model into the UA-RA approach) is evaluated in comparison with existing approaches in Section 6.2.2. These results highlight both the relevance and the novelty of our research in the context of 5G HUDNs and resource management.

6.1. Performance Evaluation of the Adaptive UA-RA Approach Compared with Existing Approaches

Figure 7 compares the SR of the three considered approaches in the two tiers under different traffic loads. The proposed adaptive UA-RA approach consistently outperforms the other two approaches across all traffic load scenarios. In particular, the proposed approach maintains nearly full service capability under low and moderate loads. The significant SR enhancement, reaching approximately 35.6% at the 10-fold load, can be attributed to the adaptive design of the proposed UA-RA approach. Unlike the other approaches, the proposed approach dynamically connects UA and RA according to real-time network conditions such as SINR, RB availability, and service priority. Through the tier and cell selection stages, users are steered toward the most suitable service tier (i.e., macro or small cell) with sufficient resources, which minimizes the blocking probability. Moreover, during the RA stage, the scheduler prioritizes delay-sensitive uRLLC traffic while efficiently distributing the remaining RBs between eMBB and mMTC users. In heavy-load scenarios, the SDN-controlled lending mechanism contributes to the higher SR by allowing congested cells to benefit from underutilized RBs at neighboring BSs, ensuring balanced resource utilization across the network.
Figure 7. Service ratio under different traffic loads for the two tiers (MBS and SBS). The baseline approaches correspond to Bouras et al. [47] and Al-Ali et al. [38].
Figure 8 and Figure 9 illustrate the percentage of the remaining RBs at the macro and small cell tiers under different traffic loads. The proposed adaptive UA-RA approach consistently outperforms the other approaches regarding spectrum utilization efficiency, achieving the lowest percentage of remaining RBs under medium and high loads. In the macro cell tier, the proposed UA-RA gradually reduces the remaining RBs to increase the corresponding spectrum utilization gain by approximately 39.6% at a 10-fold load. Similarly, in the small cell tier, the proposed approach reduces the remaining RBs to improve the spectrum utilization by about 65.1% at a 10-fold load. In the macro cell tier, at low loads (e.g., 1- and 2-fold), the proposed UA-RA approach shows slightly higher remaining RBs compared to Al-Ali et al., even while serving more users. This occurs because users are dynamically associated with the BS with the highest SINR, enabling each user to meet its target data rate with fewer RBs.
Figure 8. Percentage of remaining RBs in MBSs under different traffic loads. The baseline approaches correspond to Bouras et al. [47] and Al-Ali et al. [38].
Figure 9. Percentage of remaining RBs in SBSs under different traffic loads. The baseline approaches correspond to Bouras et al. [47] and Al-Ali et al. [38].
Therefore, the proposed approach achieves a higher service ratio with fewer RBs, which leaves a portion of the spectrum temporarily idle for potential future arrivals or adaptive load balancing.
As the load increases, the adaptive UA-RA approach gradually utilizes additional RBs only when required, relying on real-time SINR, service priority, and RB availability to guide both UA and RA. Moreover, the SDN-controlled lending mechanism redistributes underutilized RBs from underloaded cells to congested ones to ensure full spectrum utilization across both tiers. The near-zero remaining RBs at the 10-fold load confirm that residual blocking is due to physical RB exhaustion, not to inefficiency of our algorithm. This confirms the ability of the proposed UA-RA approach to achieve near-optimal spectrum utilization in 5G HUDNs, as detailed in Figure 10.
Figure 10. Blocking probability due to insufficient RBs under different traffic loads. The baseline approaches correspond to Bouras et al. [47] and Al-Ali et al. [38].
The proposed adaptive UA-RA approach outperformed those of Bouras et al. and Al-Ali et al. across all traffic loads regarding KPIs such as RB utilization, spectral efficiency (SE), and average downlink data rate per user. Figure 11 shows that the adaptive UA-RA approach achieved substantially higher RB utilization under every load. The proposed approach enhances RB utilization by about 13.8% at the 1-fold load and by about 22.5% at the 10-fold load. The sharp increase at the 5-fold load and the near-saturation at the 10-fold load indicate that the UA-RA approach effectively minimizes underutilized spectrum resources through dynamic RB assignment in each tier. This advantage is due to the adaptability of UA-RA to load variations, which allows the SDN controller to reallocate underutilized RBs to congested cells in real time, ensuring near-optimal RB utilization under heavy loads.
Figure 11. Achievable RB utilization under different traffic loads. The baseline approaches correspond to Bouras et al. [47] and Al-Ali et al. [38].
Figure 12 demonstrates that the SE improved gradually with load for all approaches, but that of adaptive UA-RA increased strongly at medium and high loads. The proposed approach improved the SE by 30.9% at light loads and by about 66.6% at heavy loads. This trend confirms that the proposed UA-RA approach not only fills the available RBs but also allocates them more efficiently, benefiting from SINR-aware scheduling to assign high-quality RBs to users.
Figure 12. Average spectral efficiency under different traffic loads. The baseline approaches correspond to Bouras et al. [47] and Al-Ali et al. [38].
In addition, Figure 13 illustrates that the average downlink data rate increases gradually with load for all approaches, with the adaptive UA-RA approach showing clear superiority, especially at the 5-fold and 10-fold loads. The proposed approach improved the data rate by 22.5% at high traffic loads. This result is consistent with the SE gains and implies that the proposed approach preserves QoS for served users as contention increases, i.e., it avoids starving high-priority or high-rate users and schedules them on RBs with the best channel quality.
Figure 13. Average downlink data rate under different traffic loads. The baseline approaches correspond to Bouras et al. [47] and Al-Ali et al. [38].
In general, the gap between adaptive UA-RA and the other approaches increased with load for all three performance metrics, indicating its better scalability and adaptability under network congestion. In medium- and high-load environments, where HUDNs are most stressed, the proposed approach, which jointly combines UA with an RA policy, appears to enhance RB utilization toward full occupancy, support high spectral efficiency via SINR-aware scheduling, and maintain superior per-user data rates.

6.2. Performance Evaluation of the ML-Based Approach Compared with Existing Approaches

The following subsections present an evaluation of various trained ML models and a comparison of the resulting NN-ARA approach against other approaches.

6.2.1. Evaluation of the Trained ML Models

This subsection discusses the evaluation of the trained ML models using various performance metrics. To assess the prediction errors, the root mean square error (RMSE) was calculated using Equation (25) [66]:
$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2}$
where $N$ denotes the total number of samples, $\hat{y}_i$ represents the predicted value for the $i$th sample, and $y_i$ is the corresponding true value. In addition, a confusion matrix, an effective tool for assessing the counts of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) [67,68], was constructed to evaluate the performance of the trained ML-based models. The ML-based models were further evaluated using various metrics, including accuracy, sensitivity, specificity, precision, F-score, and geometric mean (G-mean), which are defined according to Equations (26) to (31) [69,70,71,72,73] below:
$\mathrm{Accuracy} = \frac{\text{Number of correctly classified samples}}{\text{Total number of testing samples}},$
$\mathrm{Sensitivity} = \frac{TP}{TP + FN},$
$\mathrm{Specificity} = \frac{TN}{FP + TN},$
$\mathrm{Precision} = \frac{TP}{TP + FP},$
$F\text{-score} = \frac{2TP}{2TP + FP + FN},$
$G\text{-mean} = \sqrt{\mathrm{Sensitivity} \times \mathrm{Specificity}}.$
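As a quick illustration of Equations (25) to (31), the following Python sketch computes all seven quantities from a pair of label vectors; the sample arrays are made-up placeholders rather than the paper's evaluation data, and for the multi-class serving-BS labels these metrics would be computed per class and averaged:

```python
import numpy as np

# Placeholder binary labels for illustration only.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))           # Equation (25)

tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

accuracy    = (tp + tn) / len(y_true)                     # Equation (26)
sensitivity = tp / (tp + fn)                              # Equation (27)
specificity = tn / (fp + tn)                              # Equation (28)
precision   = tp / (tp + fp)                              # Equation (29)
f_score     = 2 * tp / (2 * tp + fp + fn)                 # Equation (30)
g_mean      = np.sqrt(sensitivity * specificity)          # Equation (31)

print(rmse, accuracy, sensitivity, specificity, precision, f_score, g_mean)
```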
Extreme Gradient Boosting (XGBoost), RF, DT, and ANN models were trained to predict the best serving BS based on the proposed UA-RA approach. The parameters of the trained ML models are given in Table 4.
Table 4. Machine learning models and parameters.
The ML models were trained and evaluated using data from a selected area in LA, with the resulting performance metrics shown in Table 5. As the table illustrates, the ANN model with three hidden layers achieved the best prediction performance and a lower RMSE than the other trained models, indicating higher accuracy and better fitting. Consequently, the three-layer ANN model was adopted as the basis for the proposed approach. The inference time needed to predict a serving BS for a UE test sample was measured in Python at approximately 1.6384 ms, i.e., well under 2 ms. Notably, this meets the 5G latency requirements, underscoring the suitability of our proposed NN-ARA approach for this environment. Although both ANN configurations achieved high accuracy, the three-layer ANN slightly outperformed the four-layer version. This can be attributed to its lower architectural complexity, which reduced overfitting and improved generalization on the available dataset; the additional hidden layer in the four-layer ANN increased the parameter count without providing a proportional representational benefit, given the feature size and dropout rate (0.3) shown in Table 4. The three-layer model therefore achieved a better bias–variance trade-off, resulting in higher accuracy and lower RMSE. Furthermore, the ANN does not simply memorize the heuristic labels; the non-zero RMSE and confusion-matrix variations indicate that the model generalizes beyond the heuristic UA–RA decisions rather than copying them exactly.
Table 5. Performance metrics for the trained ML models.
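For orientation, a minimal Keras sketch of a three-hidden-layer ANN of the kind selected here is given below. Only the depth and the 0.3 dropout rate follow Table 4; the layer widths, input feature count, and number of candidate BSs are illustrative assumptions:

```python
import tensorflow as tf

n_features, n_bs = 8, 20  # assumed input/output dimensions

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),   # e.g., SINR, location,
    tf.keras.layers.Dense(128, activation="relu"),  # speed, priority, RBs
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(n_bs, activation="softmax"),  # serving-BS class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```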

6.2.2. Performance Evaluation of the NN-ARA Approach Compared with Existing Approaches

This subsection compares the performance of three approaches—the proposed NN-ARA approach, that of Bouras et al., and that of Al-Ali et al.—regarding the achievable downlink data rate, spectral efficiency, and received SINR values for a UE during the simulation time.
Figure 14 illustrates the cumulative distribution functions (CDFs) of the achievable downlink data rate for the three approaches in both the macro and small tiers during the simulation. The proposed NN-ARA approach consistently outperformed the others, reaching noticeably higher peaks of achievable data rate. For the macro tier, NN-ARA achieved a maximum downlink rate of about 2.1 Gbps, compared with approximately 2.0 Gbps for Bouras et al. and 1.9 Gbps for Al-Ali et al.; at the median user throughput, NN-ARA improved the data rate by 20.8%. A similar pattern is observed in the small tier, where the proposed approach attained a maximum of nearly 7.5 Gbps, surpassing the 7.2 Gbps of Bouras et al. and 7 Gbps of Al-Ali et al., with a median data-rate improvement of approximately 11%. The smooth, right-shifted curves of the proposed NN-ARA indicate that a larger portion of users experience higher data rates across diverse network conditions. This enhancement is due to the ability of NN-ARA to learn context-aware resource-allocation patterns that integrate the SINR, service priority, and RB availability during scheduling. Such adaptive behavior enables NN-ARA to exploit high-quality channels efficiently while avoiding the starvation of low-SINR users, making it suitable for 5G HUDNs. The superiority of NN-ARA, especially at the median and higher percentiles, is clearly demonstrated by the substantial horizontal separation between its curve and the baseline curves.
Figure 14. CDFs of achievable downlink data rate for a UE by macro-BSs and small-BSs during simulation. The baseline approaches correspond to Bouras et al. [47] and Al-Ali et al. [38].
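For reference, an empirical CDF of per-user rates, of the kind plotted in Figure 14 (and the derived spectral efficiency used in Figure 15), can be produced as in the sketch below; the rate samples and the 100 MHz bandwidth are illustrative assumptions, not simulation output:

```python
import numpy as np

rates_bps = np.random.lognormal(mean=20, sigma=0.5, size=1000)  # placeholder
bandwidth_hz = 100e6                                            # assumed

se_bps_hz = rates_bps / bandwidth_hz        # spectral efficiency per user
x = np.sort(rates_bps)
cdf = np.arange(1, len(x) + 1) / len(x)     # empirical P(rate <= x)

median_rate = x[np.searchsorted(cdf, 0.5)]  # 50th-percentile user rate
print(f"median rate: {median_rate/1e9:.2f} Gbps")
```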
Figure 15 shows the CDFs of spectral efficiency obtained during the simulation for both the macro and small tiers. With the proposed NN-ARA, the spectral efficiency reached up to 16.5 bps/Hz for the macro tier and 14.5 bps/Hz for the small tier, outperforming the other approaches. This indicates that the proposed approach utilizes spectrum resources more efficiently, benefiting both average and high-performance users. This result was expected, since the spectral efficiency is the achievable data rate divided by the channel bandwidth, and the achieved downlink data rate was superior for our proposed approach, as illustrated in Figure 14. The proposed NN-ARA achieved near-optimal spectral utilization and enhanced the median spectral efficiency by at least 20.7% and 11.1% for the macro and small tiers, respectively, compared with the other approaches. This right-shifted CDF confirms that NN–ARA allocates spectrum resources more effectively, as a larger share of users attain higher spectral efficiency than with the baseline approaches.
Figure 15. CDFs of spectral efficiency during simulation. The baseline approaches correspond to Bouras et al. [47] and Al-Ali et al. [38].
Figure 16a shows the CDFs of SINR values received by the UE from the macro tier during the simulation. The NN-ARA approach was found to enhance the downlink SINR, as it enables the UE to associate with the BS offering the highest SINR, which is not guaranteed by the other resource-allocation approaches. The proposed NN-ARA approach aims to serve each user with the best SINR; achieving higher SINR values leads to enhanced interference management and stronger signal quality for a larger proportion of users. In particular, NN-ARA improved the SINR received by the UE by at least 33%. Figure 16b shows the CDFs of SINR values received by the UE from the small tier during the simulation. The results for the small tier mirror those of the macro tier: the NN-ARA approach consistently achieved higher SINR levels than the other two approaches. It effectively associates each user with the BS providing the best channel conditions, minimizing inter-cell interference and improving signal quality. As a result, the proposed NN-ARA enhanced the SINR received by the UE by at least 53%, indicating stronger signal quality and more efficient interference control within dense small-cell deployments.
Figure 16. CDFs of SINR values received by a UE during simulation. The baseline approaches correspond to Bouras et al. [47] and Al-Ali et al. [38].
The clear right-shift of the NN-ARA curve in both macro and small cell tiers demonstrates that a significantly higher percentage of users benefit from higher SINR values, validating the NN-ARA’s ability to enhance user association for better channel conditions.
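The association behavior underlying these SINR gains can be summarized by a max-SINR rule: each UE attaches to the BS offering its highest received SINR. A minimal sketch under an assumed (randomly generated) SINR matrix is shown below; it illustrates the rule only, not the trained NN-ARA model:

```python
import numpy as np

rng = np.random.default_rng(0)
sinr_db = rng.normal(loc=5, scale=8, size=(6, 4))  # 6 UEs x 4 candidate BSs

serving_bs = np.argmax(sinr_db, axis=1)            # best BS per UE
best_sinr = sinr_db[np.arange(6), serving_bs]      # SINR each UE obtains
print(serving_bs, np.round(best_sinr, 1))
```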
Overall, across all evaluated metrics (downlink data rate, spectral efficiency, and SINR), the proposed NN–ARA consistently outperformed both baseline approaches, achieving approximately 11–50% median improvements depending on the tier and performance indicator, as clearly illustrated in Figure 14, Figure 15 and Figure 16.

7. Discussion

The obtained results demonstrate that the proposed adaptive UA–RA and NN–ARA frameworks effectively address the challenges of resource allocation, spectral utilization, and service quality in 5G HUDNs. The adaptive UA–RA approach couples user association and resource allocation across the macro and small tiers and adapts dynamically to real-time network conditions such as SINR, RB availability, and service priority. This policy increases the service ratio under varying traffic loads and, consequently, spectrum utilization. Specifically, a service-ratio enhancement of approximately 35.6% under a 10-fold traffic load confirms the robustness of the proposed approach under congestion. Compared with the methods of Bouras et al. [47] and Al-Ali et al. [38], the proposed adaptive UA–RA achieves substantial gains in spectral efficiency, average downlink data rate, and RB utilization. Bouras et al. employed a lending-based RA limited to 15% of underutilized resources for IoT services, while Al-Ali et al. used a greedy scheduling algorithm for uRLLC and eMBB users without an active lending mechanism between cells. In contrast, the proposed UA–RA integrates SINR-based association, SDN-controlled inter-cell lending, and adaptive RB scheduling, yielding spectrum-utilization gains of 39.6% for the macro tier and 65.1% for the small tier at high traffic loads. The near-zero remaining RBs combined with the higher service ratio indicate that the performance limitation is due to physical RB exhaustion rather than algorithmic inefficiency. Consequently, the proposed approach achieves up to a 66.6% improvement in spectral efficiency and a 22.5% higher average downlink data rate than the existing approaches.
The selection of Bouras et al. [47] and Al-Ali et al. [38] as comparative benchmarks was intentional, as these studies represent two well-established and fundamentally distinct resource-allocation philosophies within HUDNs. While our literature review identified several approaches, the vast majority fall into one of these two representative categories. Bouras et al. introduced a cooperative, lending-based mechanism that enables inter-cell RB sharing to improve coverage and IoT performance, representing the dynamic, inter-cell allocation paradigm. In contrast, Al-Ali et al. proposed an intra-cell greedy scheduling strategy without any inter-cell cooperation, serving as a robust baseline for the static, non-cooperative allocation paradigm. Together, they provide highly representative and contrasting baselines that allow a fair and meaningful assessment of the proposed adaptive UA–RA and NN–ARA frameworks across the full spectrum of allocation strategies. In addition to these two approaches, several recent studies have proposed ML- and SDN-based resource management approaches for 5G and beyond networks. For example, Hurtado Sánchez et al. presented a comprehensive survey of deep reinforcement learning techniques for network-slicing resource management in 5G and 6G systems [74]. Alsulami et al. proposed a federated deep learning framework for resource management to optimize 5G and 6G QoS [75]. In addition, Dutta et al. introduced a federated learning model for prediction-based load distribution in 5G network slicing [76]. Although these works confirm the increasing role of intelligence in wireless resource management, they generally treat user association and resource allocation as separate problems or rely on static learning models. In contrast, the proposed NN-ARA framework jointly optimizes both processes under SDN control, providing real-time adaptability and scalability for dynamic 5G HUDNs.
Integrating ML into the resource-allocation approach enhances its adaptability. After evaluating several ML models, the three-layer ANN was selected because it achieved the lowest RMSE (3.81) and the highest accuracy (97.48%) in predicting the best BS association. The NN-ARA framework outperformed all other approaches, reaching downlink data rates of up to 7.5 Gbps in the small tier and 2.1 Gbps in the macro tier. It enhanced the median data rates by 20.8% and 11% for the macro and small tiers, respectively. It also increased the median spectral efficiency by 20.7% and 11.1% and enhanced the received SINR by 33% and 52% for the macro and small tiers, respectively. These results highlight the ability of NN-ARA to learn allocation patterns and balance throughput through intelligent association and scheduling. Although the proposed UA–RA mechanism assigns strict priority to uRLLC and eMBB services, this prioritization is applied at each scheduling interval and is aligned with the QoS-aware service differentiation defined by 3GPP. mMTC devices are inherently low-rate and delay-tolerant; therefore, temporary de-prioritization does not compromise their QoS requirements. Additionally, the SDN-controlled RB-lending mechanism increases the available resources at congested BSs, helping to mitigate the risk of long-term starvation for lower-priority users. This design preserves fairness across service classes over time while still satisfying heterogeneous QoS constraints, as the per-interval sketch below illustrates.
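The following minimal sketch shows one way such per-interval strict-priority scheduling could look; the request values and the simple grant/defer logic are assumptions for illustration, not the paper's scheduler:

```python
# uRLLC first, then eMBB, then mMTC; each user is granted its requested
# RBs while the (possibly lending-augmented) per-interval budget lasts.
PRIORITY = {"uRLLC": 0, "eMBB": 1, "mMTC": 2}

def schedule(requests, rb_budget):
    """requests: list of (user_id, service_class, rbs_needed)."""
    served = []
    for uid, svc, need in sorted(requests, key=lambda r: PRIORITY[r[1]]):
        if need <= rb_budget:
            rb_budget -= need
            served.append(uid)
    return served, rb_budget

reqs = [("u1", "mMTC", 2), ("u2", "uRLLC", 4), ("u3", "eMBB", 6)]
print(schedule(reqs, rb_budget=10))  # uRLLC and eMBB served; mMTC deferred
```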
The proposed framework provides several advantages. Its SDN-controlled architecture centralizes resource management among BSs and facilitates inter-cell lending. The ML-based decision-making enables the network to adapt dynamically to variations in user mobility, service demand, and traffic conditions. Such adaptive control and learning make the NN-ARA framework a strong candidate for beyond-5G and 6G networks, where intelligent resource management will be essential. Moreover, the proposed approach can be extended to support emerging paradigms such as AI-native RAN slicing. Despite these promising results, several limitations should be mentioned. The evaluation was conducted on synthetic user-distribution data for downtown LA; although the dataset reflects the real deployment of BSs, it does not capture real-time feedback from network operators. Furthermore, the centralized learning design may face scalability challenges in ultra-large network deployments. Regarding the robustness and generalization of the trained ANN model, we clarify that the training process was performed using datasets generated for a specific urban region under fixed 5G NR channel and mobility assumptions. While these parameters provide a realistic and representative baseline for a dense urban HUDN, the model has not yet been validated across different geographic regions, propagation conditions, or network configurations. Therefore, its direct transferability to other deployment scenarios remains an open challenge. Extending the training dataset to include multi-regional layouts, diverse propagation environments, and varying traffic compositions, or adopting domain-adaptation techniques, will enhance the model's robustness and improve its generalization capability.
Future work will focus on enhancing the learning process by employing advanced deep learning techniques such as federated learning and reinforcement learning to enable distributed and adaptive decision-making. Furthermore, the framework will be extended toward 5G Advanced scenarios, including AI-native RAN slicing and intelligent network management. Integration of real mobility traces, multi-tier energy-efficiency optimization, and latency-aware scheduling will further validate and extend the robustness of the proposed framework. Within this context, NN-ARA represents an important and solid step toward intelligent, self-optimizing resource management for next-generation wireless systems.

8. Conclusions

Due to its complexity, the problem of resource allocation in 5G HUDNs has received significant attention from researchers. This study proposed an adaptive UA-RA framework to address the issue of downlink RA in 5G HUDNs. The proposed framework integrates the user association and resource allocation processes to allocate valuable resources efficiently. It adapts to the distinct features of HUDNs, including multi-tiered BSs, and of their users (e.g., geographic location, speed, direction, and QoS requirements). Furthermore, the proposed approach operates proactively to prevent BS congestion by redistributing available resources. It improved the average RB utilization by up to 22.5% at a high load and decreased the average blocking ratio due to insufficient RBs by up to 55%.
Overall, the proposed framework jointly integrates user association and resource allocation under an SDN-coordinated architecture, enhanced by an ANN-based learning model that refines scheduling and inter-cell resource-lending decisions in real time. This integration effectively bridges the gap between heuristic and data-driven resource management approaches in 5G HUDNs. To evaluate the learning-based component, an SDN/ML-integrated framework was implemented using a lightweight feed-forward ANN trained on realistic 5G HUDN data from downtown LA. The simulation results demonstrated that the proposed NN-ARA approach predicts the best serving BS with high accuracy, outperforming other related approaches. At the median user percentile under the 10-fold traffic load scenario, the proposed NN-ARA approach enhanced the spectral efficiency by approximately 20.7% and 11.1% for the macro and small tiers, respectively. It also improved the average achievable downlink data rate by up to 20.8% and 11% for the macro and small tiers, respectively, compared with the other related works.
For future work, further performance metrics can be evaluated, and additional case studies will be conducted to validate the applicability of the proposed framework. In addition, other ML models can be employed and evaluated, such as reinforcement and federated learning, to enable distributed intelligence and dynamic optimization and potentially enable higher prediction accuracy.
The joint UA–RA and ANN-based optimization achieved significant gains in RB utilization, spectral efficiency, and user data rates, demonstrating that combining SDN coordination with ML intelligence provides a practical and scalable solution for dynamic 5G environments. In summary, the proposed framework enhances the downlink data rate, RB utilization, and spectrum efficiency in 5G HUDNs, and its design allows it to be extended to beyond-5G and 6G systems to support intelligent, self-optimizing resource allocation.

Author Contributions

Conceptualization, A.S.A.; methodology, A.S.A.; software, A.S.A.; validation, A.S.A.; formal analysis, A.S.A.; investigation, A.S.A.; resources, A.S.A.; data curation, A.S.A.; writing—original draft preparation, A.S.A.; writing—review and editing, A.S.A.; visualization, A.S.A.; supervision, M.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

A list of the abbreviations mentioned in this paper is given in the following table:
5G: Fifth generation
IoT: Internet of Things
uRLLC: ultra-Reliable and Low Latency Communications
eMBB: enhanced Mobile Broadband
mMTC: massive Machine Type Communication
mMIMO: Massive Multiple Input Multiple Output
mm-wave: Milli-meter wave
HUDNs: Heterogeneous Ultra-Dense Networks
UE: User Equipment
SDN: Software-defined networking
RA: Resource Allocation
3GPP: Third Generation Partnership Project
BS: Base Station
ML: Machine Learning
RB: Resource Block
MMSE: Minimum Mean Square Error
UA-RA: User Association-Resource Allocation
BP: Blocking Probability
HetNets: Heterogeneous Networks
SINR: Signal to Interference plus Noise Ratio
NR: New Radio
CQI: Channel Quality Indicator
MBS: Macro Base Station
SBS: Small Base Station
OFDMA: Orthogonal Frequency Division Multiple Access
UMa (NLOS): Urban Macro (Non-Line of Sight)
UMi (LOS): Urban micro (Line of Sight)
AWGN: Additive White Gaussian Noise power
ANNs: Artificial Neural Networks
RF: Random Forest

References

  1. Ejaz, W.; Sharma, S.K.; Saadat, S.; Naeem, M.; Anpalagan, A.; Chughtai, N.A. A comprehensive survey on resource allocation for CRAN in 5G and beyond networks. J. Netw. Comput. Appl. 2020, 160, 102638. [Google Scholar] [CrossRef]
  2. Kamal, M.A.; Raza, H.W.; Alam, M.M.; Su’ud, M.M.; Sajak, A.b.A.B. Resource allocation schemes for 5G network: A systematic review. Sensors 2021, 21, 6588. [Google Scholar] [CrossRef]
  3. Rony, R.I.; Lopez-Aguilera, E.; Garcia-Villegas, E. Dynamic spectrum allocation following machine learning-based traffic predictions in 5G. IEEE Access 2021, 9, 143458–143472. [Google Scholar] [CrossRef]
  4. Alsharif, M.H.; Nordin, R. Evolution towards fifth generation (5G) wireless networks: Current trends and challenges in the deployment of millimetre wave, massive MIMO, and small cells. Telecommun. Syst. 2017, 64, 617–637. [Google Scholar] [CrossRef]
  5. Rekkas, V.P.; Sotiroudis, S.; Sarigiannidis, P.; Wan, S.; Karagiannidis, G.K.; Goudos, S.K. Machine learning in beyond 5G/6G networks—State-of-the-art and future trends. Electronics 2021, 10, 2786. [Google Scholar] [CrossRef]
  6. Xiang, W.; Zheng, K.; Shen, X.S. 5G Mobile Communications; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  7. Adebusola, J.A.; Ariyo, A.A.; Elisha, O.A.; Olubunmi, A.M.; Julius, O.O. An overview of 5G technology. In Proceedings of the 2020 International Conference in Mathematics, Computer Engineering and Computer Science (ICMCECS), Lagos, Nigeria, 18–21 March 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–4. [Google Scholar]
  8. Pirinen, P. A brief overview of 5G research activities. In Proceedings of the 1st International Conference on 5G for Ubiquitous Connectivity, Levi, Finland, 26–28 November 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 17–22. [Google Scholar]
  9. Lei, Y.; Zhu, G.; Shen, C.; Xu, Y.; Zhang, X. Delay-aware user association and power control for 5G heterogeneous network. Mob. Netw. Appl. 2019, 24, 491–503. [Google Scholar] [CrossRef]
  10. Zaidi, A.; Athley, F.; Medbo, J.; Gustavsson, U.; Durisi, G.; Chen, X. 5G Physical Layer: Principles, Models and Technology Components; Academic Press: Cambridge, MA, USA, 2018. [Google Scholar]
  11. ITU-R. Minimum Requirements Related to Technical Performance for IMT-2020 Radio Interface(s); Report ITU-R M.2410-0; International Telecommunication Union: Geneva, Switzerland, 2017. [Google Scholar]
  12. Sufyan, A.; Khan, K.B.; Khashan, O.A.; Mir, T.; Mir, U. From 5G to beyond 5G: A comprehensive survey of wireless network evolution, challenges, and promising technologies. Electronics 2023, 12, 2200. [Google Scholar] [CrossRef]
  13. Larsson, E.G.; Edfors, O.; Tufvesson, F.; Marzetta, T.L. Massive MIMO for next generation wireless systems. IEEE Commun. Mag. 2014, 52, 186–195. [Google Scholar] [CrossRef]
  14. Elsayed, M.; Erol-Kantarci, M. Radio resource and beam management in 5G mmWave using clustering and deep reinforcement learning. In Proceedings of the GLOBECOM 2020 IEEE Global Communications Conference, Taipei, Taiwan, 7–11 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
  15. Gotsis, A.; Stefanatos, S.; Alexiou, A. UltraDense networks: The new wireless frontier for enabling 5G access. IEEE Veh. Technol. Mag. 2016, 11, 71–78. [Google Scholar] [CrossRef]
  16. Yu, W.; Xu, H.; Zhang, H.; Griffith, D.; Golmie, N. Ultra-dense networks: Survey of state of the art and future directions. In Proceedings of the 2016 25th International Conference on Computer Communication and Networks (ICCCN), Waikoloa, HI, USA, 1–4 August 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–10. [Google Scholar]
  17. Lee, D.; Jang, G.; Ha, T.; Oh, J.; Cho, S. BS Deployment Strategy and Energy Efficient BS Switching in Heterogeneous Networks for 5G. In Proceedings of the 2021 International Conference on Information Networking (ICOIN), Jeju Island, Republic of Korea, 13–16 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 727–729. [Google Scholar]
  18. Lee, Y.L.; Chuah, T.C.; Loo, J.; Vinel, A. Recent advances in radio resource management for heterogeneous LTE/LTE-A networks. IEEE Commun. Surv. Tutor. 2014, 16, 2142–2180. [Google Scholar] [CrossRef]
  19. Oulahyane, H.A.; Eddermoug, N.; Bakali, A.; Talea, M.; Bahnasse, A. Towards an SDN-based Dynamic Resource Allocation in 5G Networks. Procedia Comput. Sci. 2024, 231, 205–211. [Google Scholar] [CrossRef]
  20. Hakiri, A.; Gokhale, A.; Berthou, P.; Schmidt, D.C.; Gayraud, T. Software-defined networking: Challenges and research opportunities for future internet. Comput. Netw. 2014, 75, 453–471. [Google Scholar] [CrossRef]
  21. Elfatih, N.M.; Hasan, M.K.; Kamal, Z.; Gupta, D.; Saeed, R.A.; Ali, E.S.; Hosain, M.S. Internet of vehicle’s resource management in 5G networks using AI technologies: Current status and trends. IET Commun. 2022, 16, 400–420. [Google Scholar] [CrossRef]
  22. Sarah, A.; Nencioni, G.; Khan, M.M.I. Resource allocation in multi-access edge computing for 5G-and-beyond networks. Comput. Netw. 2023, 227, 109720. [Google Scholar] [CrossRef]
  23. Mamane, A.; Fattah, M.; El Ghazi, M.; El Bekkali, M.; Balboul, Y.; Mazer, S. Scheduling algorithms for 5G networks and beyond: Classification and survey. IEEE Access 2022, 10, 51643–51661. [Google Scholar] [CrossRef]
  24. 3rd Generation Partnership Project (3GPP). Study on Scenarios and Requirements for Next Generation Access Technologies; Technical Report TR 38.913; 3rd Generation Partnership Project (3GPP): Sophia Antipolis, France, 2017. [Google Scholar]
  25. Pocovi, G.; Pedersen, K.I.; Mogensen, P. Joint link adaptation and scheduling for 5G ultra-reliable low-latency communications. IEEE Access 2018, 6, 28912–28922. [Google Scholar] [CrossRef]
  26. Popovski, P.; Trillingsgaard, K.F.; Simeone, O.; Durisi, G. 5G wireless network slicing for eMBB, URLLC, and mMTC: A communication-theoretic view. IEEE Access 2018, 6, 55765–55779. [Google Scholar] [CrossRef]
  27. 3rd Generation Partnership Project (3GPP). Technical Specification Group Services and System Aspects; Release 15 Description; Technical Report 21.915; 3rd Generation Partnership Project (3GPP): Sophia Antipolis, France, 2019. [Google Scholar]
  28. Tayyaba, S.K.; Shah, M.A. Resource allocation in SDN based 5G cellular networks. Peer-to-Peer Netw. Appl. 2019, 12, 514–538. [Google Scholar] [CrossRef]
  29. Gao, X.; Wang, J.; Zhou, M. The research of resource allocation method based on GCN-LSTM in 5G network. IEEE Commun. Lett. 2022, 27, 926–930. [Google Scholar] [CrossRef]
  30. Sun, H.; Chen, X.; Shi, Q.; Hong, M.; Fu, X.; Sidiropoulos, N.D. Learning to optimize: Training deep neural networks for interference management. IEEE Trans. Signal Process. 2018, 66, 5438–5453. [Google Scholar] [CrossRef]
  31. Vu, T.X.; Chatzinotas, S.; Nguyen, V.D.; Hoang, D.T.; Nguyen, D.N.; Di Renzo, M.; Ottersten, B. Machine learning-enabled joint antenna selection and precoding design: From offline complexity to online performance. IEEE Trans. Wirel. Commun. 2021, 20, 3710–3722. [Google Scholar] [CrossRef]
  32. Kaur, J.; Khan, M.A.; Iftikhar, M.; Imran, M.; Haq, Q.E.U. Machine learning techniques for 5G and beyond. IEEE Access 2021, 9, 23472–23488. [Google Scholar] [CrossRef]
  33. Bartsiokas, I.A.; Gkonis, P.K.; Kaklamani, D.I.; Venieris, I.S. ML-based radio resource management in 5G and beyond networks: A survey. IEEE Access 2022, 10, 83507–83528. [Google Scholar] [CrossRef]
  34. Zhu, G.; Lyu, Z.; Jiao, X.; Liu, P.; Chen, M.; Xu, J.; Cui, S.; Zhang, P. Pushing AI to wireless network edge: An overview on integrated sensing, communication, and computation towards 6G. Sci. China Inf. Sci. 2023, 66, 130301. [Google Scholar] [CrossRef]
  35. Huang, X.; Cui, Y.; Chen, Q.; Zhang, J. Joint task offloading and QoS-aware resource allocation in fog-enabled Internet-of-Things networks. IEEE Internet Things J. 2020, 7, 7194–7206. [Google Scholar] [CrossRef]
  36. Panno, D.; Riolo, S. An enhanced joint scheduling scheme for GBR and non-GBR services in 5G RAN. Wirel. Netw. 2020, 26, 3033–3052. [Google Scholar] [CrossRef]
  37. Al-Ali, M.; Yaacoub, E.; Mohamed, A. Dynamic resource allocation of eMBB-uRLLC traffic in 5G new radio. In Proceedings of the 2020 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), New Delhi, India, 14–17 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
  38. Al-Ali, M.; Yaacoub, E. Resource allocation scheme for eMBB and uRLLC coexistence in 6G networks. Wirel. Netw. 2023, 29, 2519–2538. [Google Scholar] [CrossRef]
  39. Manzoor, A.; Kazmi, S.A.; Pandey, S.R.; Hong, C.S. Contract-based scheduling of URLLC packets in incumbent EMBB traffic. IEEE Access 2020, 8, 167516–167526. [Google Scholar] [CrossRef]
  40. Elsayed, M.; Erol-Kantarci, M. AI-enabled radio resource allocation in 5G for URLLC and eMBB users. In Proceedings of the 2019 IEEE 2nd 5G World Forum (5GWF), Dresden, Germany, 30 September–2 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 590–595. [Google Scholar]
  41. Abdelsadek, M.Y.; Gadallah, Y.; Ahmed, M.H. Resource allocation of URLLC and eMBB mixed traffic in 5G networks: A deep learning approach. In Proceedings of the GLOBECOM 2020 IEEE Global Communications Conference, Taipei, Taiwan, 7–11 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
  42. Rathod, T.; Tanwar, S. AI-based resource allocation techniques in D2D communication: Open issues and future directions. Phys. Commun. 2024, 66, 102423. [Google Scholar] [CrossRef]
  43. Agarwal, B.; Togou, M.A.; Ruffini, M.; Muntean, G.M. A low complexity ML-assisted multi-knapsack-based approach for user association and resource allocation in 5G HetNets. In Proceedings of the 2023 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Beijing, China, 14–16 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
  44. Liu, Z.; Chen, X.; Chen, Y.; Li, Z. Deep reinforcement learning based dynamic resource allocation in 5G ultra-dense networks. In Proceedings of the 2019 IEEE International Conference on Smart Internet of Things (SmartIoT), Tianjin, China, 9–11 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 168–174. [Google Scholar]
  45. Qi, X.; Khattak, S.; Zaib, A.; Khan, I. Energy efficient resource allocation for 5G heterogeneous networks using genetic algorithm. IEEE Access 2021, 9, 160510–160520. [Google Scholar] [CrossRef]
  46. Bouras, C.; Kalogeropoulos, R. Prediction mechanisms to improve 5G network user allocation and resource management. Wirel. Pers. Commun. 2022, 122, 1455–1479. [Google Scholar] [CrossRef]
  47. Bouras, C.J.; Michos, E.; Prokopiou, I. Applying Machine Learning and Dynamic Resource Allocation Techniques in Fifth Generation Networks. In Proceedings of the International Conference on Advanced Information Networking and Applications, Sydney, NSW, Australia, 13–15 April 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 662–673. [Google Scholar]
  48. Chabira, C.; Shayea, I.; Nurzhaubayeva, G.; Aldasheva, L.; Yedilkhan, D.; Amanzholova, S. AI-Driven Handover Management and Load Balancing Optimization in Ultra-Dense 5G/6G Cellular Networks. Technologies 2025, 13, 276. [Google Scholar] [CrossRef]
  49. Ullah, Y.; Roslee, M.; Mitani, S.M.; Sheraz, M.; Ali, F.; Aurangzeb, K.; Osman, A.F.; Ali, F.Z. A survey on AI-enabled mobility and handover management in future wireless networks: Key technologies, use cases, and challenges. J. King Saud Univ. Comput. Inf. Sci. 2025, 37, 47. [Google Scholar] [CrossRef]
  50. Coumar, S.O.; Surender, R. Efficient Network Resource Management for Improving Radio Access Through Machine Learning Approach in 5G Networks. J. Inst. Eng. India Ser. B 2024, 106, 207–215. [Google Scholar] [CrossRef]
  51. Radha, B.; Sarojini, R. Advanced AI techniques for optimizing resource allocation in 6G networks: A hybrid diffusion-enhanced meta-deep-reinforcement-learning framework. AIP Adv. 2025, 15, 115306. [Google Scholar] [CrossRef]
  52. Tera, S.P.; Chinthaginjala, R.; Pau, G.; Kim, T.H. Towards 6G: An Overview of the Next Generation of Intelligent Network Connectivity. IEEE Access 2025, 13, 925–961. [Google Scholar] [CrossRef]
  53. 3rd Generation Partnership Project (3GPP). NR; Physical Channels and Modulation; Technical Specification TS 38.211; 3rd Generation Partnership Project (3GPP): Valbonne, France, 2024. Available online: https://www.3gpp.org (accessed on 1 January 2024).
  54. Zia, K.; Javed, N.; Sial, M.N.; Ahmed, S.; Pirzada, A.A.; Pervez, F. A distributed multi-agent RL-based autonomous spectrum allocation scheme in D2D enabled multi-tier HetNets. IEEE Access 2019, 7, 6733–6745. [Google Scholar] [CrossRef]
  55. 3rd Generation Partnership Project (3GPP). Technical Specification Group Radio Access Network; Study on Channel Model for Frequencies from 0.5 to 100 GHZ (Release 18); Technical Report TR 38.901; 3rd Generation Partnership Project (3GPP): Sophia Antipolis, France, 2024. [Google Scholar]
  56. Omelyanchuk, E.; Semenova, A.Y.; Mikhailov, V.Y.; Bakhtin, A.; Timoshenko, A. User equipment location technique for 5G networks. In Proceedings of the 2018 Systems of Signal Synchronization, Generating and Processing in Telecommunications (SYNCHROINFO), Minsk, Belarus, 4–5 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–7. [Google Scholar]
  57. 3rd Generation Partnership Project (3GPP). Technical Specification Group Radio Access Network; NR; Physical Layer Procedures for Data (Release 18); Technical Report TR 38.214; 3rd Generation Partnership Project (3GPP): Sophia Antipolis, France, 2024. [Google Scholar]
  58. Bouras, C.; Caragiannis, I.; Gkamas, A.; Protopapas, N.; Sardelis, T.; Sgarbas, K. State of the Art Analysis of Resource Allocation Techniques in 5G MIMO Networks. In Proceedings of the 2023 International Conference on Information Networking (ICOIN), Bangkok, Thailand, 11–14 January 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 632–637. [Google Scholar]
  59. Imtiaz, S.; Ghauch, H.; Koudouridis, G.P.; Gross, J. Random forests resource allocation for 5G systems: Performance and robustness study. In Proceedings of the 2018 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), Barcelona, Spain, 15–18 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 326–331. [Google Scholar]
  60. Zhang, S.; Zhu, D. Towards artificial intelligence enabled 6G: State of the art, challenges, and opportunities. Comput. Netw. 2020, 183, 107556. [Google Scholar] [CrossRef]
  61. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM Sigkdd International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  62. Shyam, R.; Ayachit, S.S.; Patil, V.; Singh, A. Competitive analysis of the top gradient boosting machine learning algorithms. In Proceedings of the 2020 2nd International Conference on Advances in Computing, Communication Control and Networking (ICACCCN), Greater Noida, India, 18–19 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 191–196. [Google Scholar]
  63. Microwave Towers. Available online: https://geohub.lacity.org/datasets/lacount:microwave-towers/explore?showTable=true (accessed on 20 April 2023).
  64. Small Cell Locations. Available online: https://catalog.data.gov/dataset/small-cell (accessed on 20 April 2023).
  65. Alablani, I.A.; Arafah, M.A. A new vehicle dataset in the city of Los Angeles for V2X and machine learning applications. Appl. Sci. 2022, 12, 3751. [Google Scholar] [CrossRef]
  66. Fan, P.; Zhao, J.; Chih-Lin, I. 5G high mobility wireless communications: Challenges and solutions. China Commun. 2016, 13, 1–13. [Google Scholar] [CrossRef]
  67. Chicco, D.; Starovoitov, V.; Jurman, G. The benefits of the Matthews correlation coefficient (MCC) over the diagnostic odds ratio (DOR) in binary classification assessment. IEEE Access 2021, 9, 47112–47124. [Google Scholar] [CrossRef]
  68. Bhatnagar, A.; Srivastava, S. A robust model for churn prediction using supervised machine learning. In Proceedings of the 2019 IEEE 9th International Conference on Advanced Computing (IACC), Tiruchirappalli, India, 13–14 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 45–49. [Google Scholar]
  69. Rodrigues, J.d.C.; Rebouças Filho, P.P.; Peixoto, E., Jr.; Kumar, A.; de Albuquerque, V.H.C. Classification of EEG signals to detect alcoholism using machine learning techniques. Pattern Recognit. Lett. 2019, 125, 140–149. [Google Scholar] [CrossRef]
  70. Lee, J.; Woo, J.; Kang, A.R.; Jeong, Y.S.; Jung, W.; Lee, M.; Kim, S.H. Comparative analysis on machine learning and deep learning to predict post-induction hypotension. Sensors 2020, 20, 4575. [Google Scholar] [CrossRef] [PubMed]
  71. Porto, A.; Voje, K.L. ML-morph: A fast, accurate and general approach for automated detection and landmarking of biological structures in images. Methods Ecol. Evol. 2020, 11, 500–512. [Google Scholar] [CrossRef]
  72. Lee, J.; Lee, U.; Kim, H. PASS: Reducing redundant notifications between a smartphone and a smartwatch for energy saving. IEEE Trans. Mob. Comput. 2019, 19, 2656–2669. [Google Scholar] [CrossRef]
  73. Khanna, M.; Aggarwal, M.; Singhal, N. Empirical analysis of artificial immune system algorithms for aging related bug prediction. In Proceedings of the 2021 7th international conference on advanced computing and communication systems (ICACCS), Coimbatore, India, 19–20 March 2021; IEEE: Piscataway, NJ, USA, 2021; Volume 1, pp. 692–697. [Google Scholar]
  74. Hurtado Sanchez, J.A.; Casilimas, K.; Caicedo Rendon, O.M. Deep reinforcement learning for resource management on network slicing: A survey. Sensors 2022, 22, 3031. [Google Scholar] [CrossRef]
  75. Alsulami, H.; Serbaya, S.H.; Abualsauod, E.H.; Othman, A.M.; Rizwan, A.; Jalali, A. A federated deep learning empowered resource management method to optimize 5G and 6G quality of services (QoS). Wirel. Commun. Mob. Comput. 2022, 2022, 1352985. [Google Scholar] [CrossRef]
  76. Dutta, N.; Patole, S.P.; Mahadeva, R.; Ghinea, G. Federated learning framework for prediction based load distribution in 5G network slicing. In Proceedings of the 2024 Sixteenth International Conference on Contemporary Computing, Bangalore, India, 15–16 March 2024; pp. 421–426. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
