Review

Artificial Intelligence Empowering Dynamic Spectrum Access in Advanced Wireless Communications: A Comprehensive Overview

by Abiodun Gbenga-Ilori 1, Agbotiname Lucky Imoize 1,*, Kinzah Noor 2 and Paul Oluwadara Adebolu-Ololade 1

1 Department of Electrical and Electronics Engineering, Faculty of Engineering, University of Lagos, Akoka, Lagos 100213, Nigeria
2 Office of Research Innovation and Commercialization, University of Management and Technology, Lahore 54770, Pakistan
* Author to whom correspondence should be addressed.
AI 2025, 6(6), 126; https://doi.org/10.3390/ai6060126
Submission received: 30 April 2025 / Revised: 27 May 2025 / Accepted: 11 June 2025 / Published: 13 June 2025
(This article belongs to the Special Issue Artificial Intelligence for Network Management)

Abstract

This review paper examines the integration of artificial intelligence (AI) in wireless communication, focusing on cognitive radio (CR), spectrum sensing, and dynamic spectrum access (DSA). As the demand for spectrum continues to rise with the expansion of mobile users and connected devices, cognitive radio networks (CRNs), leveraging AI-driven spectrum sensing and dynamic access, provide a promising solution to improve spectrum utilization. The paper reviews various deep learning (DL)-based spectrum-sensing methods, highlighting their advantages and challenges. It also explores the use of multi-agent reinforcement learning (MARL) for distributed DSA networks, where agents autonomously optimize power allocation (PA) to minimize interference and enhance quality of service. Additionally, the paper discusses the role of machine learning (ML) in predicting spectrum requirements, which is crucial for efficient frequency management in the fifth generation (5G) networks and beyond. Case studies show how ML can help self-optimize networks, reducing energy consumption while improving performance. The review also introduces the potential of generative AI (GenAI) for demand-planning and network optimization, enhancing spectrum efficiency and energy conservation in wireless networks (WNs). Finally, the paper highlights future research directions, including improving AI-driven network resilience, refining predictive models, and addressing ethical considerations. Overall, AI is poised to transform wireless communication, offering innovative solutions for spectrum management (SM), security, and network performance.

1. Introduction

The transition from the fourth generation (4G) to 5G and beyond has revolutionized wireless communication, delivering high-speed, low-latency connectivity and expanded capacity. However, increasing demands for reliable communication present challenges in spectrum management (SM) and network optimization [1]. The shift to 5G enables transformative applications in smart cities, autonomous systems, and industrial automation, yet limited spectrum availability drives the need for advanced spectrum utilization strategies [2]. Higher frequency bands offer potential solutions but require efficient management techniques [3].
Traditional spectrum allocation (SA) relies on fixed assignments and struggles with dynamic network demands, leading to inefficiencies. Machine learning (ML) has emerged as a promising solution, leveraging predictive analytics to optimize spectrum usage and dynamically allocate resources [4]. Spectrum sharing (SS) and sensing enhance spectral efficiency, reducing congestion and improving system performance [5]. ML-driven models also improve spectrum forecasting, minimizing interference while ensuring adaptability to network conditions [6]. However, further research is needed to refine ML-based approaches for complex wireless networks (WNs), including advanced neural network (NN) models for optimizing software defined networking (SDN) [7].
As networks advance toward the sixth generation (6G), the need for ultra-reliable, high-capacity frameworks grows. Terrestrial networks (TNs) and non-terrestrial networks (NTNs) are categories of communication infrastructure that can leverage AI-driven architectures to optimize performance, efficiency, and adaptability to address energy consumption and resource management challenges in wireless networks [8]. While artificial intelligence (AI)-enhanced optimization techniques show promise, they must overcome scalability and computational constraints to support escalating traffic demands fully [9].
In line with 3rd Generation Partnership Project (3GPP) specifications, existing spectrum can be shared between Long-Term Evolution (LTE) and New Radio (NR) carriers, leveraging dynamic spectrum sharing to allow a smoother transition from LTE and faster adoption of NR. Dynamic spectrum sharing also enables a migration path from LTE to NR in which both technologies share the same carrier. As the number of NR devices in a network grows, sufficient scheduling capacity must be provisioned for NR user equipment (UEs) on the shared carriers. Another technique worth mentioning is the dynamic spectrum allocation service, which manages spectrum access for non-primary users in frequency bands where spectrum sharing is applied. However, the spectrum sharing and management issues arising from the complexity of a rapidly evolving wireless network call for a holistic approach to this critical aspect of wireless communications.
In particular, spectrum scarcity remains a critical challenge, with mobile devices expected to reach 80 billion by 2030 [10]. 5G new radio (NR) spans two frequency ranges, frequency range 1 (FR1), which covers frequencies below 7.225 GHz, and frequency range 2 (FR2), spanning 24.25–52.6 GHz, with FR2 offering higher bandwidth (BW) but limited range. Static SA policies constrain FR1, necessitating dynamic spectrum access (DSA) [11]. DSA facilitates shared spectrum use, as seen in Citizens Broadband Radio Service (CBRS) and LTE-U, but traditional spectrum-sharing approaches often require high computational resources and centralized control, making them inefficient for low-power devices [12]. In recent standardization reports, all stakeholders, like 3GPP, IEEE, ITU, and others, are clamoring for the application of AI in dynamic spectrum management.
Hence, to address the aforementioned limitations, reinforcement learning (RL)-based DSA solutions have been explored for spectrum optimization, cooperative sensing, and multi-channel access [13]. While RL enhances SA efficiency, most models still rely on centralized controllers, posing reliability and security concerns. Decentralized RL-based frameworks offer improved privacy and reduced interference risks [14]. Future research should focus on RL-driven distributed DSA for optimizing spectral efficiency, power consumption, and interference mitigation in next-generation WNs.
The increasing availability of wireless communication devices and the demand for data have put a lot of stress on the radio frequency spectrum, a limited and precious resource. Most of the traditional SA methods are static and have some defects, such as underutilization of some bands and overload on other bands. This has made it necessary to search for better and more sophisticated methods of SM. DSA has emerged as a paradigm shift, enabling real-time adaptation to spectrum availability and usage patterns. By allowing secondary users (SUs) to opportunistically access underutilized spectrum bands without interfering with primary users (PUs), DSA promises to enhance spectral efficiency (SE) and support the growing needs of WNs. However, implementing DSA effectively requires addressing challenges such as spectrum sensing, interference mitigation, and decision-making in highly dynamic environments.
AI has proven to be a transformative tool across various domains, and its application in SM is no exception. AI can use ML, deep learning (DL) and RL to analyze complex spectrum data, predict usage patterns, and make autonomous, real-time decisions. This integration has the potential to revolutionize DSA and allocation, ensuring optimal utilization of the spectrum while maintaining the quality of service (QoS).
This review aims to provide a comprehensive overview of the current state of AI in DSA and allocation. It discusses key AI techniques in this domain, evaluates their effectiveness in addressing SM challenges, and identifies existing gaps and future research directions. By synthesizing recent advancements, this paper seeks to highlight the potential of AI to transform SM and support the ever-evolving demands of wireless communication systems. This review will be a valuable resource for researchers, engineers, and policymakers working towards more efficient and intelligent SM solutions in the 5G, 6G, and beyond era.
The main findings consist of the following points:
  • This study organizes the presentation of AI methodologies applied for DSA operations, including deep learning (DL), supervised and unsupervised learning, reinforcement learning (RL), and metaheuristic algorithms. A systematic classification system helps specialists understand which intelligent techniques perform well for sensing, prediction, and spectrum allocation functions.
  • The paper includes in-depth research comparing AI-based techniques with traditional spectrum management protocols. The evaluation analyzes the different approaches in terms of accuracy, adaptability, latency, computational complexity, and scalability, and highlights their operational weaknesses under practical deployment conditions.
  • This review establishes connections between AI implementations and modern wireless ecosystems, stretching from 5G and 6G to Internet of Things (IoT)-based networks. The technical discussion highlights the capability of AI to deliver real-time spectrum awareness and dynamic resource allocation in ultra-dense, heterogeneous, and mobile network environments.
The remainder of this paper is structured as follows: Section 2 provides a comprehensive review of the background and related work, focusing on spectrum management research, dynamic spectrum access (DSA) and allocation, the limitations of DSA, and future scope. Section 3 introduces the core principles of dynamic spectrum access (DSA), exploring spectrum sharing models such as interweave, underlay, and overlay. It highlights key challenges in spectrum allocation and discusses regulatory considerations essential for enabling efficient and flexible spectrum use. Section 4 reviews the use of AI techniques for DSA and spectrum allocation (SA), including machine learning (ML) approaches for spectrum prediction and classification, and reinforcement learning (RL) for autonomous decision-making. Deep learning (DL) models are also highlighted, and the role of AI in enabling adaptive and intelligent cognitive radio networks (CRNs) is discussed. Section 5 explores the effectiveness of AI in tackling spectrum management challenges by comparing its performance with conventional methods, particularly in terms of adaptability and real-time decision-making. It also examines the security and privacy concerns associated with AI-driven dynamic spectrum access, as well as the computational complexity and resource demands of such approaches. Section 6 discusses the integration of AI in dynamic spectrum access for 5G, 6G, and beyond, focusing on its role in efficient spectrum management and enabling ultra-reliable low-latency communications (URLLC). It also explores AI-driven spectrum sharing for IoT, smart cities, and connected vehicles, along with future applications of GenAI. Section 7 highlights the key challenges and limitations associated with DSA and allocation. Section 8 discusses recent trends, future research directions, and lessons learned. Finally, Section 9 concludes the paper by summarizing the main findings and their broader implications for DSA and management.

2. Background and Related Work

As the demand for WNs rises, efficient SA is crucial for enhancing network performance. ML techniques have gained prominence in SA for 5G and beyond networks due to their capability to manage complex and dynamic environments [15]. This survey examines recent advancements in ML-based SA, outlining key methodologies, applications, and future research directions.

2.1. Dynamic Frequency Assignment and ML Approaches

Dynamic frequency assignment enables flexible frequency allocations based on network conditions and user demands. However, conventional mechanisms rely on static rule-based methods that often fail to adapt to real-time variations. This limitation highlights the necessity for intelligent and flexible SA strategies in 5G and beyond networks to enhance resource utilization. Various ML methodologies, including RL and DL, have been explored to address the challenges of dynamic network environments [16]. ML-based solutions offer adaptability by leveraging data-driven techniques to optimize real-time spectrum usage [17]. These approaches promise to improve network performance and resource efficiency [18]. Future research should enhance model scalability, real-time responsiveness, and explainability while mitigating security and privacy risks [19].

2.2. RL-Based DSA in 5G Networks

Several studies have investigated RL-based techniques for DSA in 5G networks [20]. Additionally, the need for more efficient and adaptive SA methods to address growing WNs’ demands and the heterogeneous nature of 5G networks has been emphasized [21]. DL approaches, particularly convolutional neural networks (CNNs), have been explored for spectrum prediction in DSA scenarios within 5G networks [22]. By leveraging CNNs’ feature extraction capabilities, these methods aim to enhance the accuracy and efficiency of spectrum prediction, which is crucial for improving spectrum utilization and reducing interference. Furthermore, hybrid ML approaches integrating supervised learning and RL have been proposed to optimize dynamic SA [23]. These methods seek to effectively manage limited spectrum resources while adapting to highly dynamic and heterogeneous 5G environments.

2.3. Challenges and Opportunities in Intelligent 5G Networks

Intelligent 5G and beyond networks transform wireless communication with high-speed connectivity, low latency, and efficient Internet of Things (IoT) management, benefiting industries like healthcare, transportation, and manufacturing. Despite offering peak data rates of 10 Gbps and millisecond-level latency, deployment faces challenges like SM, high costs, security risks, and regulatory issues. Proposed solutions, including network slicing and infrastructure investments, remain insufficient in fully addressing these complexities [24]. ML has emerged as a promising approach for optimizing network management and resource allocation (RA), enabling real-time adjustments to dynamic conditions [25]. Research has demonstrated the effectiveness of ML in predictive traffic management, security anomaly detection, and energy efficiency optimization [26]. Existing ML models encounter data quality, generalizability, and scalability limitations in diverse network environments. This review addresses these challenges by proposing robust ML frameworks that enhance adaptability and performance in next-generation networks while ensuring efficient resource utilization.

2.4. Spectrum Sharing (SS) in Integrated TN and NTN

Efficient SS in integrated TN-NTNs is essential for optimizing limited spectrum resources. Both TN and NTN components can simultaneously or opportunistically access the same spectrum. Traditional SS methods are categorized into static and dynamic approaches. Static SA assigns fixed spectrum portions based on geographical separation or orthogonal frequency bands. Legacy systems like geostationary orbit satellites and terrestrial networks operate in distinct frequency bands. However, as spectrum demand increases, static allocation leads to inefficient utilization [27].
DSA offers greater flexibility by allowing SUs to utilize licensed spectrum when not occupied by PUs. Spectrum sensing techniques such as energy detection, matched filtering, and cyclostationary feature detection are widely used in cognitive radio (CR) for identifying available spectra. Additionally, co-channel and adjacent-channel coexistence is enabled by power control, beamforming, and interference cancellation to minimize interference [28]. Recent advancements focus on scalability and adaptability in DSA. Database-driven SS improves coordination by maintaining real-time information on spectrum availability [29]. Furthermore, cooperative spectrum sensing enhances detection accuracy by allowing SUs to share sensing results. Spectrum mobility techniques enable dynamic switching between frequency bands based on network conditions and traffic demands, ensuring efficient resource utilization.
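To make the sensing step concrete, the following Python sketch illustrates the basic energy detection principle mentioned above: the average received power is compared against a threshold derived from a target false-alarm probability under a Gaussian approximation. The function name, sample length, and noise model are illustrative assumptions rather than parameters taken from any cited study.

```python
import numpy as np
from scipy.stats import norm

def energy_detect(samples: np.ndarray, noise_power: float, pfa: float = 0.05) -> bool:
    """Energy detection: declare the band occupied when the average received
    power exceeds a threshold chosen for a target false-alarm probability."""
    n = samples.size
    test_statistic = np.mean(np.abs(samples) ** 2)
    # Gaussian approximation of the noise-only statistic for large n (illustrative).
    threshold = noise_power * (1.0 + norm.ppf(1.0 - pfa) / np.sqrt(n))
    return bool(test_statistic > threshold)

# Noise-only example: the detector should flag "occupied" only about pfa of the time.
rng = np.random.default_rng(0)
noise = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)
print(energy_detect(noise, noise_power=1.0))
```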

2.5. AI-Driven Spectrum Sharing Approaches

While conventional DSA techniques improve spectrum efficiency, they often struggle with the complexity of TN-NTN integration. Traditional optimization and game theory-based approaches require complete system information, which is impractical in dynamic and heterogeneous environments. AI-driven SS offers a robust alternative, excelling in learning from experience, adapting to changing conditions, and optimizing spectrum utilization in environments with incomplete information.
Several AI techniques have been successfully applied in SS, including DL, deep reinforcement learning (DRL), and federated learning (FL). DL models, such as CNNs and recurrent neural networks (RNNs), facilitate spectrum sensing, channel prediction, and interference classification [30]. DRL-based algorithms enable autonomous decision-making by learning optimal spectrum access strategies through environmental interaction and reward-based learning [31]. A multi-agent DRL-based framework has been proposed to enhance SS efficiency [32]. A hierarchical SS model for cognitive satellite-TNs dynamically allocates resources to improve SE [33]. FL further strengthens AI-driven SM by addressing privacy concerns and allowing decentralized devices to collaboratively train models without sharing raw data, making it particularly effective in distributed spectrum-sharing scenarios. Table 1 provides a structured overview of SA, addressing its methodologies, challenges, and opportunities in 5G and beyond networks. Integrating AI-driven approaches into SM highlights their potential in enhancing spectrum utilization and optimizing WNs.

3. Fundamentals of DSA and Allocation

5G deployment will leverage the cloud radio access network [34], centralizing most radio access network (RAN) functionalities in a central unit while access points (APs) handle basic radio frequency tasks. This centralization enables network-wide control via an SDN-based central controller, dynamically allocating bandwidth based on AP requirements. Network behavior prediction will be essential for proactive BW distribution in ultra-dense networks. As automation advances, ML will play a key role in dynamic SA.
Looking ahead to 6G, DSA will be critical in managing spectrum scarcity [35]. While higher frequencies, such as millimeter waves and terahertz, offer new opportunities, they pose challenges like molecular absorption loss and limited propagation range. DSA will ensure optimal resource utilization for diverse applications, including IoT, enhanced mobile broadband (eMBB), and augmented/virtual reality (AR/VR), while addressing frequency band variations, interference, and QoS demands. Unlike traditional frequency allocation, DSA in 6G will dynamically adjust resources based on traffic loads, application needs, and geographical factors. Given the complexity, adaptive algorithms will be required, surpassing the limitations of deterministic approaches. ML and AI will enable networks to predict demand fluctuations, optimize spectrum usage, and autonomously adapt to evolving communication requirements. Figure 1 shows a distributed DSA system allowing SUs to benefit from two licensed spectrum bands through opportunistic access. Users follow self-initiated channel sense and selection methods based on observed signals and coordinated strategies to restrict interference toward PUs.

3.1. DSA for 5G and Beyond

Licensed spectrum has always been a costly and limited resource for wireless network service providers. As capacity demand in WNs grows, the spectrum has become even more valuable, with increasing BW allocation being the most direct approach to meeting capacity requirements. However, efficiently distributing BW among different cells remains a significant challenge. Historically, Fixed SA has been widely used, where BW is assigned to APs based on initial capacity planning and remains static, disregarding dynamic variations in traffic load. In contrast, DSA offers a more adaptive approach, adjusting spectrum distribution based on real-time network demands. Early DSA techniques, such as those discussed in [36], relied on periodic load estimation to allocate additional spectrum when necessary. With advancements in WNs, CR and RL-based DSA methods have emerged, improving spectrum utilization by reallocating underused frequency bands. For instance, studies in [37] demonstrated RL-based spectrum allocation to IoT users and signal-to-interference-plus-noise ratio (SINR)-maximizing techniques in decentralized and centralized frameworks. In modern cellular networks like 4G and 5G, full frequency reuse (FR = 1) enables aggressive BW sharing, but this can lead to interference and inefficiencies without careful management.
Techniques such as enhanced inter-cell interference coordination and beamforming have been introduced to address this. In ultra-dense networks, where diverse APs serve users with varying mobility and density, static BW allocation becomes increasingly ineffective. Thus, intelligent SM, such as the SDN-based DSA approach in [38], is necessary to dynamically allocate BW based on real traffic data. Some works, like [39], have explored cross-service SS, enabling BW redistribution among cellular, vehicular, and IoT networks, while others have examined inter-operator SS [40]. However, the study in [38] focuses on optimizing spectrum allocation within a single mobile network operator. By leveraging an intelligent SDN-based controller, the approach dynamically assigns BW chunks (e.g., 5 MHz) from a spectrum pool (e.g., 100 MHz total), balancing frequency reuse with interference minimization. Furthermore, ML techniques predict AP throughput requirements, allowing proactive spectrum distribution. The approach is validated by comparing real and predicted throughput allocations, demonstrating its potential to enhance network efficiency while mitigating interference. Future work could explore multi-mobile network operator scenarios to optimize spectrum utilization further.
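As a simplified illustration of the chunk-based allocation idea described above, the following Python sketch divides a hypothetical 100 MHz pool into 5 MHz chunks in proportion to predicted per-AP throughput. It deliberately ignores frequency reuse and interference constraints, and all names and numbers are illustrative rather than drawn from [38].

```python
def allocate_chunks(predicted_mbps: dict, pool_mhz: int = 100, chunk_mhz: int = 5) -> dict:
    """Toy proportional allocator: split a shared pool into fixed-size chunks
    and hand them to access points in proportion to predicted demand."""
    chunks = pool_mhz // chunk_mhz
    total = sum(predicted_mbps.values()) or 1.0
    alloc = {ap: int(chunks * demand / total) for ap, demand in predicted_mbps.items()}
    # Give any leftover chunks to the most demanding APs first.
    leftover = chunks - sum(alloc.values())
    for ap in sorted(predicted_mbps, key=predicted_mbps.get, reverse=True)[:leftover]:
        alloc[ap] += 1
    return {ap: n * chunk_mhz for ap, n in alloc.items()}

# Example with three hypothetical APs and their predicted throughput demands.
print(allocate_chunks({"AP1": 120.0, "AP2": 60.0, "AP3": 20.0}))
```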

3.2. Different Classifications of DSA

The DSA techniques follow different classification criteria, including network architecture, spectrum sensing approach, and spectrum access method [41]. Figure 2 depicts DSA classifications split into system architecture, sensing behavior, and spectrum access methods, including overlay, underlay, and interweave. The structured framework shows how the design and implementation of DSA depend on major influencing characteristics.

3.2.1. Architecture-Based Classification

DSA techniques can be broadly categorized based on network architecture, primarily into centralized and distributed networks. In centralized networks, a designated control entity, often a regulator, manages network access by collecting spectrum sensing data from individual devices. This central authority oversees key cognitive functions, including frequency block allocation, access permissions for SUs, and enforcement of penalties for unauthorized access. The centralized approach offers advantages such as optimized network throughput, interference minimization, fairness in RA, and prioritization of critical devices. However, it also introduces challenges, including high overhead, dependency on additional infrastructure, and vulnerability to single points of failure, particularly in high-density and congested environments.
Conversely, distributed networks operate without a central controller, with individual secondary devices independently handling spectrum access and cognitive functions [42]. This decentralized structure is particularly suitable for ad hoc networks, where devices can autonomously adapt to spectrum variations in real time without relying on central directives. However, this independence may lead to suboptimal decision-making at the network level, security vulnerabilities, and difficulties in enforcing penalties for unauthorized access.
Early spectrum-sharing methodologies were designed around these centralized and distributed frameworks, each presenting distinct trade-offs. Distributed SS relies on local coordination among devices, offering greater efficiency than centralized approaches, which depend on a central unit for coordination without direct system-to-system interaction. Centralized spectrum-sharing methods, such as the geo-location database approach and spectrum broker method, encounter adaptability challenges in dynamic environments. Solutions like the harmonized SDN-enabled approach have been proposed to address these limitations, aiming to reduce erroneous decisions caused by inconsistent QoS. Furthermore, research has explored various optimization strategies, including interference graph models and game-theoretic approaches, to enhance the performance of decentralized spectrum sensing. These strategies emphasize adaptability to highly dynamic environments, a crucial requirement for efficient SS [43]. While centralized architectures streamline network management, their rigidity in adapting to changing conditions remains a significant concern. In contrast, distributed approaches offer greater flexibility but may compromise overall network efficiency and security.

3.2.2. Spectrum Sensing Behavior-Based Classification

DSA can be categorized into non-cooperative and cooperative networks based on spectrum sensing behavior. Individual devices independently assess the spectrum in non-cooperative networks and make decisions without exchanging information. This approach minimizes communication overhead and enables swift decision-making. However, due to wireless channel impairments, secondary devices may fail to detect available spectrum opportunities or mistakenly identify spectrum holes where none exist [44].
Conversely, cooperative spectrum sensing enhances detection accuracy by leveraging spatial and multiuser diversity through collaborative decision-making among multiple secondary devices. This cooperation can be classified into two primary techniques: data fusion and decision fusion [45]. In data fusion, secondary devices share raw sensing data, whereas in decision fusion they exchange only their final spectrum access decisions. Despite its advantages, cooperative sensing introduces challenges such as increased system complexity and implementation difficulties. Efficient coordination requires a dedicated control channel for reliable data exchange, and discrepancies in sensing times among different devices can lead to outdated or asynchronous information.
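A minimal sketch of hard-decision fusion, one common form of the decision fusion described above, is shown below; the OR, AND, and majority rules are standard combining options, and the function interface is purely illustrative.

```python
def decision_fusion(local_decisions, rule="majority"):
    """Hard-decision fusion of cooperative sensing reports (1 = PU present).
    The OR rule favours PU protection, the AND rule favours spectrum reuse,
    and majority voting is a compromise between the two."""
    votes = sum(local_decisions)
    if rule == "or":
        return votes >= 1
    if rule == "and":
        return votes == len(local_decisions)
    return votes > len(local_decisions) / 2  # majority rule

print(decision_fusion([1, 0, 1, 1, 0]))  # -> True (PU declared present)
```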

3.2.3. Spectrum Access Method-Based Classification

DSA is classified into interweave, overlay, and underlay networks based on spectrum access methods. Interweave networks operate only in spectrum holes, ensuring interference-free transmission. Overlay networks allow simultaneous primary and secondary transmissions by leveraging interference cancellation techniques. Underlay networks enable concurrent transmissions while maintaining interference within acceptable limits through power control and blackout zones [46].
The spectrum access system is an emerging model for managing uplink interference in massive machine-type communication (mMTC), particularly facilitating commercial use of the 3.5 GHz military radar band in the U.S. However, database-driven DSA raises location privacy concerns for SUs. A multi-server private information retrieval approach addresses this issue by enabling private access to spectrum databases. To enhance SA in 5G-NR, the analytic hierarchy process is employed, prioritizing user access and preventing conflicts. Additionally, a two-time-scale hierarchical model optimizes resource management in wireless network virtualization, considering delay, data rates, and IoT throughput [47]. Figure 3 presents the taxonomy that divides DSA methods between centralized and distributed paradigms in two main groups, subdividing them per coordination style, licensed/unlicensed user behavior, and decision-making guidelines. The classification scheme demonstrates how DSA frameworks operate across numerous wireless settings thanks to their diverse application options.
The dynamical advance access SS method leverages a finite-state Markov chain to model state transitions, improving quality of experience (QoE) by efficiently adapting to network conditions. Furthermore, a hybrid spectrum access model combines exclusive and pooled spectrum access by utilizing low-frequency carriers for dedicated access and high-frequency carriers for shared access. An optimized power allocation (PA) strategy enhances energy efficiency, offering a robust solution for dynamic SM [48]. Table 2 comprehensively compares DSA classifications based on their architecture, spectrum sensing behavior, access methods, interference management, complexity, advantages, and challenges.

3.3. Regulatory and Policy Considerations

Spectrum scarcity constrains the growing demand for capacity, connectivity, and energy efficiency in mobile networks, as lower frequencies are already allocated, while higher frequencies suffer from attenuation. Traditional static SA leads to inefficiencies, with studies showing that 15–85% of licensed spectrum remains underutilized. To address this, DSA allows unlicensed SUs to access unused licensed spectrum while minimizing interference with PUs. This is often facilitated through CR, which employs spectrum sensing and adaptive communication for optimized spectrum use [49].
DSA is implemented across various Institute of Electrical and Electronics Engineers (IEEE) standards. The IEEE 802.22 wireless regional area network (WRAN) standard enables unlicensed broadband access in rural areas without disrupting PUs, while IEEE 802.11af utilizes CR techniques to detect white spaces via geolocation databases. The IEEE 1900.x series further defines multiple DSA standards. Additionally, licensed shared access (LSA) in LTE and 5G-NR allows mobile operators to share spectrum while minimizing interference [50]. The CBRS of the FCC exemplifies real-world DSA applications, while standards such as IEEE 802.15.4m (Zigbee) and IEEE 802.19.1 improve TV band spectrum efficiency while ensuring device coexistence.
DSA is crucial for managing spectrum reuse and ensuring smooth network transitions. 3GPP enables migration from LTE to 5G-NR by allowing spectrum refarming, where unused LTE spectrum is gradually reassigned to 5G services [51]. Millimeter-wave (mmWave) frequencies in 5G present coverage challenges due to attenuation, which can be mitigated by implementing DSA on low-band frequencies and using carrier aggregation. Future 6G networks are expected to integrate CR with ML and game theory for advanced SM [52]. Figure 4 presents spectrum refarming as a transition approach that connects LTE service to the implementation of 5G-NR. The illustration shows how spectrum resources from LTE are reallocated and optimized for NR deployment, enabling smooth network migration without interrupting active services.
Dynamic spectrum allocation relies on a centralized system to assign spectrum dynamically, whereas dynamic spectrum access allows opportunistic use of already-allocated spectrum, often through CR technology. Although SS among operators is conceptually possible, commercial and technical constraints make it a complex challenge. DSA enhances spectrum utilization, supports technology coexistence, and facilitates smooth network transitions. As demand for data and connectivity rises, DSA will remain essential for optimizing SM and enabling next-generation WNs. As shown in Figure 5, DSA enables a more efficient transition from LTE to 5G-NR. Real-time SM and adaptive RA allow the seamless integration of NR, through which service disruption remains minimal while spectrum use benefits from optimization between the two technologies. Table 3 describes the frequency distribution for legacy communication services, encompassing mobile networks (Enhanced Global System for Mobile Communications 900 (E-GSM-900) and the Digital Cellular System (DCS)), Frequency Modulation (FM) radio broadcasting, satellite communication using the C band, and navigation via Non-Directional Radio Beacons. The specified bands serve essential wireless connectivity purposes, along with navigation services, and have provided reliable signal transmission for years.

3.4. Key Challenges in DSA

DSA enhances SE but faces key challenges that must be resolved for effective implementation. Interference remains a significant issue, mainly when SUs misinterpret channel availability, leading to overlapping transmissions. A reliable common control channel is essential for seamless coordination and interoperability. Security and privacy risks arise due to CRNs, with threats like PU emulation and false spectrum information dissemination. Efficient SS requires robust medium access control (MAC) mechanisms, secure payment models, and well-defined coordination strategies. Reliable spectrum sensing is crucial, necessitating cooperative sensing to mitigate fading and shadowing effects. Ensuring policy compliance demands strict enforcement and continuous monitoring to prevent unauthorized access. Additionally, spectrum-sharing agreements between PUs and SUs require transparency, and the absence of direct spectrum ownership verification raises security concerns, leading to potential conflicts and degraded QoS. Addressing these challenges is essential for successfully deploying DSA in modern WNs.
Cognitive wireless networks (CWNs) employ CR technology to optimize spectrum utilization by identifying and exploiting available spectrum opportunities. Their primary objectives include alleviating congestion in unlicensed channels while significant portions of the licensed spectrum remain underutilized and enhancing overall spectral efficiency. The IEEE 802.22 standard, the first CR-based standard, enables CWNs to achieve downstream throughput of approximately 1.5 Mb/s. However, distributed networks typically suffer lower throughput than centralized networks due to the lack of central controllers. Many distributed DSA protocols are based on the IEEE 802.11 distributed coordination function standard, which has a maximum per-channel throughput of about 0.82 Mb/s for a 1 Mbps BW. The performance of distributed CWNs further declines in the presence of multihop transmission, mobility, and dynamic spectrum availability, as available spectrum bands are not efficiently utilized. Several critical challenges must be addressed to enhance the performance of distributed CWNs, including the common control channel problem, spectrum sensing scheduling, power control mechanisms, multi-channel hidden and exposed terminal issues, access coordination, and mobility management. Overcoming these challenges is essential to improve the efficiency and reliability of distributed CWNs in dynamic network environments [53]. Figure 6 presents a taxonomy of spectrum access problems and their fundamental objectives: spectrum efficiency, interference control, and fairness. The framework classifies solution approaches, including optimization strategies combined with game-theoretic and AI-based RL methodologies.

4. AI Techniques in DSA

The application of AI in dynamic SM has been extensively explored, particularly since the advent of CR [54]. CR technology enables SUs to access unused spectrum resources across time, frequency, space, and power domains without causing interference to PUs. A notable example is the utilization of TV white space under the IEEE 802.22 wireless regional area network standard. However, this dynamic allocation introduces challenges such as interference and collision when multiple users simultaneously access the same spectrum. To address these concerns, spectrum sensing is crucial in detecting available spectrum by measuring received power at various frequency bands. Despite its significance, spectrum sensing faces inherent limitations, including high device complexity, limited availability, and increased energy consumption.
Several AI-based mechanisms have been proposed for SM [55]. One extensively studied technique is the multi-armed bandit (MAB) algorithm, which facilitates efficient radio RA in competitive environments [56]. MAB enables users to allocate a limited spectrum pool dynamically, maximizing performance metrics such as throughput, latency, and reliability. Each user gains partial knowledge of the overall RA strategy through environmental sensing and iteratively refines its approach to optimize performance. MAB exemplifies the exploration-exploitation trade-off in RL, where initial decisions may be greedy. Still, with iterative learning, the system converges toward an optimized allocation strategy that minimizes interference and enhances spectrum efficiency. A key extension of MAB, the contextual bandit algorithm, further refines RA by incorporating environmental factors such as transmission power, data load, and user location [57]. This adaptive strategy enables better performance across diverse network conditions than conventional MAB approaches, which focus on a single optimization strategy. Additionally, combinatorial bandit algorithms have been applied in hierarchical decision-making scenarios, such as optimizing RA across multiple users within a base station. This cooperative approach enhances network-wide performance by enabling coordinated decision-making among various agents.
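The exploration-exploitation behaviour described above can be sketched with a simple epsilon-greedy bandit in which each arm corresponds to a candidate channel and the reward is the observed throughput. The class and parameter names below are illustrative assumptions.

```python
import numpy as np

class EpsilonGreedyMAB:
    """Minimal epsilon-greedy multi-armed bandit for channel selection: each arm
    is one candidate channel and the reward is the observed throughput."""
    def __init__(self, n_channels: int, epsilon: float = 0.1, seed: int = 0):
        self.epsilon = epsilon
        self.counts = np.zeros(n_channels)
        self.values = np.zeros(n_channels)   # running mean reward per channel
        self.rng = np.random.default_rng(seed)

    def select(self) -> int:
        if self.rng.random() < self.epsilon:          # explore a random channel
            return int(self.rng.integers(len(self.values)))
        return int(np.argmax(self.values))            # exploit the best-known channel

    def update(self, channel: int, reward: float) -> None:
        # Incremental update of the running mean reward for the chosen channel.
        self.counts[channel] += 1
        self.values[channel] += (reward - self.values[channel]) / self.counts[channel]
```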
Despite their effectiveness, traditional MAB-based strategies have limitations in modeling state transitions, particularly in capturing long-term rewards associated with specific actions. This can lead to suboptimal decision-making, where immediate gains result in network inefficiencies over time. For instance, users selecting the most favorable spectrum slots at a given moment may inadvertently contribute to network congestion and heightened interference in subsequent time slots. Consequently, RL techniques have gained prominence in SM, offering a more robust framework to optimize dynamic RA by considering current and future network states [58]. Figure 7 illustrates that AI has emerged as a powerful tool to maximize spectrum utilization by predicting usage patterns and enhancing RA strategies. By analyzing historical transmission patterns, user traffic, and mobility trends, AI-driven approaches enable intelligent decision-making for selecting optimal transmission resources, time slots, and power levels to balance fairness and system performance. From the spectrum scenario, as shown in Figure 7, spectrum is detected via a rigorous spectrum analysis and shared, leveraging channel throughput. The shared spectrum access is linked to the radio spectrum scenario considered through the broadcasted signal. Within the loop, the radio spectrum examined is connected to the spectrum monitoring block via the RF signal. By identifying the primary user, the spectrum monitoring part is connected to the spectrum transition, which, in turn, is fed to the spectrum detection section.

4.1. ML Techniques

The application of ML in wireless communication networks offers numerous advantages, including the ability to analyze network behavior, learn from patterns, predict future trends, and optimize RA. Traditional optimization methods often struggle to bridge the gap between theoretical models and real-time implementation. In contrast, ML-driven approaches significantly enhance network automation and efficiency, particularly in 5G and beyond. These techniques are widely employed in interference management [59], beamforming, link quality estimation, 5G-enabled IoT, energy efficiency [60], and RA. ML methodologies are broadly classified into three categories: (i) supervised learning, which utilizes labeled datasets for training, (ii) unsupervised learning, which discovers hidden patterns from unlabeled data, and (iii) RL, where models refine decision-making through feedback mechanisms. A detailed overview of these learning models, such as support vector machines (SVM), K-means clustering, and gradient followers and their relevance to WNs, can be found in [61]. An ML-based workflow appears in Figure 8 to execute automated pattern recognition and decision automation. The workflow includes critical phases to collect and preprocess data, followed by model training, evaluation, and deployment processes. The method improves complex system accuracy and adaptability through data-based pattern recognition techniques.
The study [62] proposes an innovative approach that decouples ML-based traffic prediction from DSA decision-making. Instead of integrating ML directly into the DSA mechanism, the authors leverage ML to predict traffic demand, a key enabler in reducing uncertainty and enhancing QoS in next-generation WNs. A central controller collects predicted traffic requirements from an independent ML agent and dynamically reallocates spectrum resources based on demand calculations. This architecture offloads ML complexity from DSA operations while ensuring efficient spectrum distribution. Unlike RL-based DSA methods, which rely on time-intensive iterative learning, this approach enables rapid spectrum adjustments by restricting allocation combinations using predefined network constraints, significantly accelerating decision-making.
Recent research has explored various ML-based traffic prediction techniques to optimize network efficiency. For instance, in [63], a network trace data model was developed to enhance monitoring efforts, while cell classification and clustering techniques were used to refine traffic forecasting. However, these methods often require extensive cell-level data. Advanced ML techniques, such as artificial neural networks (ANNs), mitigate this dependency by analyzing time-series traffic data without relying on additional feature inputs. ANNs, inspired by the human brain, consist of interconnected neurons structured across input, hidden, and output layers, where weighted connections facilitate complex computations [64]. Their ability to model nonlinear relationships and generalize across diverse datasets has led to widespread adoption in wireless network research.
Among ANN architectures, long short-term memory (LSTM) networks, a variant of RNNs, address long-term dependency issues by selectively retaining or discarding past information. LSTM-based multi-step traffic prediction has been successfully applied to LTE networks [65], demonstrating its capability to forecast traffic trends beyond immediate time-series data. However, LSTMs introduce additional implementation complexity compared to conventional ANNs. In scenarios focused on short-term predictions, the nonlinear autoregressive neural network (NARNET) provides a robust alternative. Studies in [66] have identified NARNET as a highly effective ML technique for nonlinear time-series prediction, validating its superior performance in terms of the coefficient of determination (R²) and confirming its reliability for network traffic forecasting.
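As an illustration of the nonlinear autoregressive idea behind NARNET, the following sketch trains a small multilayer perceptron to predict the next traffic sample from the previous few samples; the scikit-learn model, lag length, and synthetic traffic trace are stand-ins and not the setup used in [66].

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_nar_predictor(series: np.ndarray, lags: int = 8) -> MLPRegressor:
    """Nonlinear autoregressive predictor in the spirit of NARNET: regress the
    next traffic sample on the previous `lags` samples with a small MLP."""
    X = np.array([series[i - lags:i] for i in range(lags, len(series))])
    y = series[lags:]
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X, y)
    return model

# Synthetic daily traffic pattern with noise, purely for demonstration.
t = np.arange(500)
traffic = 50 + 20 * np.sin(2 * np.pi * t / 24) + np.random.default_rng(0).normal(0, 2, t.size)
model = fit_nar_predictor(traffic)
print(model.predict(traffic[-8:].reshape(1, -1)))   # one-step-ahead forecast
```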
Future WNs can achieve intelligent RA, minimize interference, and enhance efficiency by integrating ML-based predictive models into SM and traffic forecasting. These advancements pave the way for more adaptive and autonomous communication infrastructures. Table 4 provides a comparative analysis of various ML techniques used in wireless communications and SM. It highlights their key features, advantages, limitations, and potential future research directions.

RL for DSA

To address the optimization problem in [67], an RL approach, specifically, quality learning (QL), is employed, along with three DRL algorithms: deep Q-network (DQN), deep deterministic policy gradient (DDPG), and twin delayed deep deterministic policy gradient (TD3). These methods are leveraged to enhance decision-making and optimize performance in dynamic environments.
In the RL framework for DSA, the agent operates within three fundamental components: state, action, and reward. For simplicity, we assume a discrete DSA environment where the state space represents PA levels at the base station (BS) for a user, ranging from 0 mW to Pm mW with increments of Ps mW. The action space consists of three possible decisions: increasing, decreasing, or maintaining the current PA. The reward function is designed to optimize the average data rate of the licensee’s network while ensuring minimal interference with the incumbent’s spectrum.
The reward function incorporates a penalty mechanism based on the channel quality ratio and transmits power to achieve effective PA. The penalty is enforced when the allocated power deviates from a predefined target utilization T, with a tolerance factor τ. The penalty function is mathematically formulated to discourage deviations beyond the acceptable range, ensuring spectrum efficiency. With the state, action, and reward structure defined, the subsequent section provides a detailed discussion of the RL algorithms considered for optimizing DSA.
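The state, action, and reward structure described above can be sketched as a single environment step, shown below. The reward shape, a rate term minus a penalty outside the T ± τ utilization band, follows the idea in the text, but the exact functional form and the numeric values are illustrative assumptions.

```python
import numpy as np

def step(power_mw, action, p_max=3500, p_step=100, target_util=0.8, tol=0.1,
         channel_gain=1.0, noise_mw=1.0):
    """One step of a toy DSA power-allocation environment.
    Actions: 0 = decrease, 1 = maintain, 2 = increase the BS power level.
    The reward favours a high achievable rate but penalises power levels that
    stray from the target utilisation band (T +/- tau); the penalty shape here
    is an illustrative assumption, not the formulation of any cited study."""
    power_mw = float(np.clip(power_mw + (action - 1) * p_step, 0, p_max))
    rate = np.log2(1 + channel_gain * power_mw / noise_mw)   # proxy for the data rate
    util = power_mw / p_max
    penalty = 0.0 if abs(util - target_util) <= tol else abs(util - target_util)
    return power_mw, rate - 10 * penalty                     # (next state, reward)
```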
1. Q-Learning-Based DSA
In study [68], the distributed DSA network is formulated as a QL problem, where a quality table (QT) maintains state–action values, known as quality values (QV). The state space corresponds to individual users’ PA levels at base stations (BSs), while the action space includes increasing, decreasing, or maintaining power levels. The reward function balances the trade-off between maximizing transmission power and minimizing interference to incumbent users. QL operates iteratively, continuously updating the QVs in the table as the agent explores the DSA environment. Initially, the QT is populated with zero values, as the agent lacks prior knowledge of the environment. The learning process involves two approaches: exploitation, where the agent leverages existing QT information, and exploration, which tests new actions to improve decision-making. The balance between these approaches is managed using the ε-greedy method, prioritizing exploration in early stages and shifting towards exploitation as learning progresses.
Upon taking an action, the agent transitions to a new state and receives a reward based on the predefined function. The QV is updated accordingly, refining the decision-making process. Learning continues until the agent reliably selects optimal actions for each state. A practical example illustrates that when multiple agents attempt to increase power simultaneously, constraints such as the maximum allowable transmission power (e.g., 3500 mW) ensure that only one agent can proceed, preventing excessive interference. This structured QL approach enables efficient SA while maintaining interference constraints, making it a viable solution for adaptive DSA in WNs. QL controls intelligent DSA operations through its implementation, as shown in Figure 9. The SU agent engages with their environment to learn efficient channel selection policies through system exploration alongside performance feedback. The adaptive procedure improves spectrum utilization metrics by reducing PU interference.
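A minimal sketch of the tabular update behind this procedure is given below, assuming 36 discrete power levels (0–3500 mW in 100 mW steps, matching the example above) and three actions; the learning rate, discount factor, and exploration rate are illustrative.

```python
import numpy as np

n_states, n_actions = 36, 3                 # power levels x {decrease, keep, increase}
Q = np.zeros((n_states, n_actions))         # quality table, initialised to zero
alpha, gamma, eps = 0.1, 0.9, 0.2           # illustrative hyperparameters

def choose_action(state, rng):
    """Epsilon-greedy selection: explore occasionally, otherwise exploit the table."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    """Standard one-step Q-learning update of the quality value."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
```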
2. Deep Q Network (DQN)-Based DSA
While QL effectively manages small-scale QTs, its practicality diminishes as the state–action space expands. Maintaining a quality function (QF) for each state–action pair becomes computationally infeasible, particularly when dealing with large or continuous state spaces. For instance, a system with 10,000 states and 1000 actions per state results in an unmanageable QT of 10^7 entries. To address this limitation, function approximations, particularly deep neural networks (DNNs), are employed to generalize across states, leading to the development of DQNs.
A DQN replaces traditional QTs with NNs to approximate the QV function, Q (s, a, θ), where θ represents the NN model parameters. A key component of DQN is experience replay, which records agent interactions within the DSA environment. The agent learns from these past experiences to improve decision-making, rather than relying solely on immediate state–action pairs.
DQN operates through two key processes: prediction and learning. In the prediction phase, the NN processes the input state, which outputs QVs for all possible actions. The learning phase leverages experience replay, randomly sampling past interactions in batches to update the weights of the NN. Unlike conventional QL, where direct QV updates occur, DQN minimizes a loss function based on the difference between predicted and actual QVs. The weight update process iterates continuously until the agent achieves optimal performance [69]. This approach significantly enhances scalability and efficiency in DSA networks, enabling intelligent decision-making in large and complex environments. Figure 10 illustrates a DQN-based framework for enabling intelligent dynamic spectrum access. It depicts how the agent uses DNNs to approximate QVs, allowing it to learn optimal channel selection policies in high-dimensional environments. This approach improves adaptability and decision-making under complex wireless conditions.
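The prediction and learning phases described above can be sketched in PyTorch as follows. The network width, buffer size, and the omission of a separate target network are simplifications for illustration and do not reflect the configuration of any cited work.

```python
import random
from collections import deque
import torch
import torch.nn as nn

class DQNAgent:
    """Sketch of a DQN for discrete DSA decisions: a small MLP approximates
    Q(s, a; theta) and is trained on minibatches drawn from experience replay."""
    def __init__(self, state_dim: int, n_actions: int, gamma: float = 0.99, lr: float = 1e-3):
        self.q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                   nn.Linear(64, n_actions))
        self.optim = torch.optim.Adam(self.q_net.parameters(), lr=lr)
        self.replay = deque(maxlen=10_000)   # stores (state, action, reward, next_state)
        self.gamma, self.n_actions = gamma, n_actions

    def act(self, state, epsilon: float = 0.1) -> int:
        if random.random() < epsilon:
            return random.randrange(self.n_actions)          # explore
        with torch.no_grad():
            q = self.q_net(torch.tensor(state, dtype=torch.float32))
        return int(q.argmax())                               # exploit

    def learn(self, batch_size: int = 64) -> None:
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)       # experience replay
        s  = torch.tensor([b[0] for b in batch], dtype=torch.float32)
        a  = torch.tensor([b[1] for b in batch], dtype=torch.int64)
        r  = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        s2 = torch.tensor([b[3] for b in batch], dtype=torch.float32)
        q_pred = self.q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():                                 # bootstrapped target
            q_target = r + self.gamma * self.q_net(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q_pred, q_target)       # regression loss on QVs
        self.optim.zero_grad(); loss.backward(); self.optim.step()
```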
3. Deep Deterministic Policy Gradient (DDPG)-Based DSA
The DDPG algorithm is an RL approach that simultaneously learns QF and policies by leveraging off-policy data and a behavioral function to approximate the QF. Unlike stochastic policy-based methods, DDPG generates deterministic actions, ensuring precise decision-making. A key distinguishing feature of DDPG is its dual-network structure, consisting of an actor and a critic network. The actor network observes the current state and determines the optimal action. In contrast, the critic network evaluates the state–action pair and assigns a QV, indicating the quality of the selected action.
DDPG operates through two key processes: prediction and learning. In the prediction phase, the actor network receives the state as input and outputs an action. The learning phase utilizes experience replay, where past interactions are randomly sampled in batches to update the NN weights of both the actor and critic networks. The learning process follows a structured approach, beginning with loss function calculation, where the critic network minimizes a regression loss function to refine QV estimations. This loss function is expressed in (1):
Loss = r(s, a) + γ max_{a′} Q_t(s′, a′) − Q_{t−1}(s, a)        (1)
It quantifies the quality of weight updates in both networks and serves as a reference for minimizing prediction errors. Once the loss function is computed, the weight update process begins, where the NN weights are adjusted iteratively using gradient-based optimization. Finally, the iterative learning process ensures that the agent continuously evaluates and refines its decision-making until optimal performance is achieved. By integrating deterministic policy gradients with value function approximation, DDPG enhances decision-making in complex DSA environments, facilitating more efficient RA and interference management. Figure 11 depicts the DDPG framework, which unifies actor–critic architecture alongside DL applications for continuous action space choices. The actor network uses its action selection process to choose spectrum channels. However, the critic determines evaluation outcomes based on the expected rewards. The method delivers productive and stable educational outcomes in intricate operational environments such as WNs.
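A condensed PyTorch sketch of one DDPG update is given below. The actor, critic, and their target copies are assumed to be small networks defined elsewhere, and the bootstrapped critic target corresponds to the temporal-difference term in Equation (1).

```python
import torch
import torch.nn as nn

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma: float = 0.99):
    """One DDPG update step (sketch). `actor(s)` returns an action tensor and
    `critic(s, a)` returns a Q-value; the four networks and two optimizers are
    assumed to be defined elsewhere."""
    s, a, r, s2 = batch                      # minibatch sampled from replay memory
    with torch.no_grad():
        # Bootstrapped target, i.e. the temporal-difference term in Equation (1).
        y = r + gamma * target_critic(s2, target_actor(s2)).squeeze(-1)
    critic_loss = nn.functional.mse_loss(critic(s, a).squeeze(-1), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Deterministic policy gradient: move actions towards higher Q-values.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```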
4. Twin Delayed Deep Deterministic (TD3)-Based DSA
While DDPG demonstrates strong performance in various applications, it is susceptible to hyperparameter tuning and often suffers from overestimated QVs, which can misguide the agent’s policy. The TD3 algorithm was introduced in [70] to address these limitations, building upon DDPG with key modifications to enhance stability and performance. Inspired by Double DQN, TD3 employs two critic networks to mitigate QV overestimation by selecting the minimum value between the two.
TD3 differs from DDPG in three significant ways. First, it introduces Gaussian noise to the actions of the actor network, followed by clipping within a predefined action range to prevent overfitting and improve exploration. Second, by leveraging two critic networks, TD3 selects the QV from the network with the lower output, reducing the risk of overestimation. Lastly, unlike DDPG, where the actor network is updated at every step, TD3 updates the actor network every two iterations, stabilizing the learning process and improving convergence. These enhancements make TD3 a more robust alternative for RL applications in complex environments. Table 5 presents a comparative overview of six RL techniques: QL, DQN, DDPG, TD3, DRL, and HDRL. It outlines their core components, advantages, limitations, and potential future research directions.
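The three modifications can be seen side by side in the following sketch, which reuses the actor/critic conventions from the DDPG example: clipped Gaussian noise on target actions, twin critics combined with a minimum, and a delayed actor update. All hyperparameter values are illustrative.

```python
import torch
import torch.nn as nn

def td3_update(step, actor, critics, target_actor, target_critics,
               actor_opt, critic_opts, batch, gamma=0.99,
               noise_std=0.2, noise_clip=0.5, policy_delay=2):
    """Sketch of the three TD3 modifications over DDPG: (1) clipped Gaussian
    noise on target actions, (2) twin critics with a minimum over their
    estimates, and (3) delayed actor updates."""
    s, a, r, s2 = batch
    with torch.no_grad():
        noise = (torch.randn_like(a) * noise_std).clamp(-noise_clip, noise_clip)
        a2 = (target_actor(s2) + noise).clamp(-1.0, 1.0)            # (1) target smoothing
        q_next = torch.min(target_critics[0](s2, a2),
                           target_critics[1](s2, a2)).squeeze(-1)   # (2) twin-critic minimum
        y = r + gamma * q_next
    for critic, opt in zip(critics, critic_opts):
        loss = nn.functional.mse_loss(critic(s, a).squeeze(-1), y)
        opt.zero_grad(); loss.backward(); opt.step()
    if step % policy_delay == 0:                                     # (3) delayed actor update
        actor_loss = -critics[0](s, actor(s)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```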
5. Deep Reinforcement Learning (DRL)-Based DSA
DRL combines DNNs with RL to enable agents to learn optimal policies directly from high-dimensional inputs such as sensor data or images, eliminating manual feature extraction. However, preprocessing techniques like normalization can enhance performance. Most RL algorithms are based on the Markov decision process, which is represented as a tuple ⟨S, A, P, R, γ⟩, where S denotes the state space, A represents the set of possible actions, P defines state transition probabilities, R is the reward function, and γ (0 ≤ γ ≤ 1) is the discount factor that balances immediate and future rewards. The primary objective is to learn an optimal policy π(a|s) that maximizes cumulative discounted rewards over time [71].
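For reference, the discounted objective implied by the tuple above reduces to a simple weighted sum of rewards, sketched here with an arbitrary example trajectory.

```python
def discounted_return(rewards, gamma=0.9):
    """Cumulative discounted reward, sum over t of gamma^t * r_t, from the MDP definition."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

print(discounted_return([1.0, 0.0, 2.0]))   # 1.0 + 0.0 + 0.81 * 2.0 = 2.62
```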
An explicit QT becomes infeasible in complex RL tasks with large or continuous state–action spaces. DNNs serve as function approximators that map states to actions in policy-based methods or estimate QVs in value-based methods. According to the universal approximation theorem, DNNs can approximate any continuous function given sufficient capacity, enabling efficient policy learning [72]. However, conventional DRL approaches struggle with large action spaces and slow learning in real-world applications such as integrated TN-NTNs.
Hierarchical deep reinforcement learning (HDRL) addresses these challenges by introducing hierarchical structures that decompose complex decision-making processes into manageable sub-tasks. Instead of learning a single policy, HDRL agents develop multiple policies at different abstraction levels, significantly improving learning efficiency. High-level policies focus on abstract goals or strategic decisions, while low-level policies handle fine-grained actions. This hierarchical structure enhances decision adaptability and scalability. Temporal abstraction improves efficiency, where high-level policies set long-term objectives (e.g., SA by a low Earth orbit satellite). In contrast, lower-level policies manage real-time resource distribution based on dynamic network conditions.
Another advantage of HDRL is its improved sample efficiency, as the hierarchical structure allows agents to learn effectively with fewer environmental interactions. This is particularly beneficial in scenarios where real-world data collection is costly or time intensive. Figure 12 illustrates the HDRL framework, where an agent interacts with its environment using a meta-controller that sets high-level sub-goals, a sub-controller that refines actions, and an action mapper that executes decisions. The high-level policy defines the overall strategic direction, while lower-level policies trigger specific actions based on current observations, and the continuous loop of perception, decision-making, and action provides flexible control in changing environments. This structured approach enables HDRL agents to adapt to long-term global changes and short-term local fluctuations, making it well suited to dynamic SS and real-time network optimization in TN-NTN environments.

4.2. DL Techniques

DL has significantly advanced ML, gaining recognition in 2012 with its success in the ImageNet competition [73]. Its impact expanded in 2016 when AlphaGo defeated world champion Lee Sedol in Go, highlighting AI’s growing capabilities [74]. More recently, the release of ChatGPT [75] in 2022 intensified global discussions on DL and its applications. Beyond language and vision, DL is transforming wireless communications, particularly in areas like channel estimation, signal detection, and cognitive radio [76]. In spectrum sensing, DL-based methods surpass traditional techniques by extracting statistical features from received signals without prior knowledge, enhancing accuracy and robustness. Current research focuses on high-performing NNs, such as CNNs, residual networks (ResNet), and LSTMs, which improve spectrum awareness and adaptive communication.

4.2.1. CNN-Based DSA

Spectrum sensing is crucial in modern wireless communication, with CNNs enhancing detection accuracy. Using the short-time Fourier transform (STFT), spectrogram-based approaches enable CNNs to classify modulation schemes effectively. The research [77] pioneered STFT spectrograms as CNN inputs, optimizing parameters for low signal-to-noise ratio (SNR) conditions. The study [78] further refined CNN architectures, achieving 98% accuracy. The research [79] integrated deep convolutional generative adversarial networks for data augmentation (DA), improving accuracy by 10%. Statistical features like covariance matrices have also been leveraged for CNN-based sensing. The work [80] combined CNN-extracted covariance features with SVM classifiers for improved detection. The research [81] proposed a multi-band CNN using inter-band correlations and applied kernel principal component analysis (PCA) for spectrum sensing in vehicular networks, achieving 100% detection probability above 3 dB SNR.
The spectral correlation function (SCF) has also proven effective in CNN-based spectrum sensing. The study [82] applied SCF to enhance cellular signal detection and extended this for multi-antenna scenarios, improving performance in low-SNR conditions. The work [83] explored CNN-based cooperative sensing for UAV-based networks, achieving 90.68% accuracy. Over the past five years, CNN-based spectrum sensing has significantly evolved, utilizing spectrograms, statistical features, and SCF to improve cognitive radio and DSA detection under noisy conditions. Figure 13 illustrates the architecture of a CNN, consisting of multiple layers including convolutional layers, pooling layers, and fully connected layers. The input image is passed through convolutional layers for feature extraction, followed by pooling layers for dimensionality reduction. Fully connected layers then process the output to produce the final classification or prediction. The figure emphasizes the data flow through these layers, showcasing the hierarchical feature learning process of CNNs.
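As a concrete illustration of the spectrogram-based pipeline surveyed above, the following minimal sketch (illustrative only; the sampling rate, FFT size, layer widths, and the occupied/vacant labels are assumptions, not parameters from the cited studies) converts a complex baseband capture into an STFT spectrogram and passes it through a small CNN classifier.

import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

def iq_to_spectrogram(iq, fs=1e6, nperseg=128):
    # STFT magnitude spectrogram of a complex baseband capture (sizes illustrative).
    _, _, Z = stft(iq, fs=fs, nperseg=nperseg, return_onesided=False)
    spec = np.log1p(np.abs(Z)).astype(np.float32)
    return torch.from_numpy(spec).unsqueeze(0)          # shape: (1, freq, time)

class SpectrumCNN(nn.Module):
    # Small CNN that classifies a spectrogram as "channel occupied" vs. "vacant".
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.classifier = nn.Linear(32 * 4 * 4, 2)
    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Toy usage with synthetic noise (real work would use labelled sensing captures):
iq = np.random.randn(4096) + 1j * np.random.randn(4096)
logits = SpectrumCNN()(iq_to_spectrogram(iq).unsqueeze(0))   # batch of one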

4.2.2. Residual Networks (ResNets)-Based DSA

ResNets, introduced by Kaiming He et al. in 2015 [84], address the vanishing gradient problem in DNNs through residual connections that facilitate effective gradient propagation. This architecture has been widely adopted in spectrum sensing, often combined with advanced signal processing techniques to enhance detection accuracy. The study [85] evaluated multiple spectrum sensing methods, including energy detection, differential entropy, geometric power, and the P-paradigm, employing various NN architectures such as DNNs, CNNs, ResNets, and multi-layer perceptrons (MLPs). Their results indicate that deeper CNNs improve performance but suffer from vanishing gradients, which ResNet mitigates. Notably, ResNet models using energy statistics features showed superior performance at moderate SNR levels.
The study [86] developed a CNN-ResNet hybrid model for large-scale spectrum usage prediction, addressing incomplete power matrices via matrix completion and local interpolation. Their experiments with single- and multiple-transmitter scenarios confirmed the model’s adaptability and predictive accuracy. The work [87] proposed an enhanced ResNet-based method, STFT-ImpResNet, incorporating the STFT for feature extraction. By optimizing residual blocks and replacing fully connected layers with global average pooling, the model reduced computational complexity while improving detection probability, achieving 94.5% accuracy at −19 dB SNR for a false alarm rate of 0.01. Recognizing the limitations of STFT in handling non-smooth signals due to fixed window functions, research [88] introduced wavelet transform (WT)-ResNet, integrating WT with ResNet. Unlike STFT, WT provides adaptive wavelet bases for flexible time–frequency analysis. Comparative experiments with energy detection, baseline convolutional energy detection, CNN, and STFT-CNN showed that WT-ResNet significantly enhances detection probability, particularly at −20 dB SNR. Figure 14 presents the residual module design, in which skip connections transmit data between distant layers of the network. A shortcut path in the diagram connects the input of a module to its output, allowing one or several layers to be bypassed. Residual networks employ this structure during training because it counteracts the vanishing gradient problem and enables efficient gradient propagation in DNNs.
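For reference, the residual connection described above can be expressed in a few lines; the sketch below (the channel count and layer choices are illustrative assumptions, not taken from the cited ResNet variants) shows an identity-shortcut block of the kind a spectrum-sensing backbone would stack over STFT or wavelet feature maps.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Minimal residual block: the skip connection adds the block input to its
    # output, so gradients can bypass the convolutions during backpropagation.
    def __init__(self, channels=32):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # identity shortcut

# Toy usage: the block preserves the input shape, so blocks can be stacked freely.
x = torch.randn(1, 32, 64, 64)
y = ResidualBlock()(x)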

4.2.3. LSTM-Based DSA

RNNs have been widely applied in spectrum sensing because they can process time-series data efficiently [89]. These networks enable CRs to make better channel selection decisions using historical data to predict future spectrum utilization. Additionally, RNNs assist in detecting and identifying potential interference sources through temporal spectral data analysis. However, LSTM networks are often preferred over traditional RNNs for handling long-term dependencies and complex spectral data.
LSTM networks, a specialized variant of RNNs, effectively mitigate the vanishing and exploding gradient problems through their unique memory cell architecture, comprising input, forget, and output gates [90]. LSTMs effectively model long-term dependencies in spectrum sensing by retaining essential information and integrating past data with real-time observations. Research in this area focuses on two main approaches: feature extraction from spectral data and optimizing models for improved performance in dynamic and low-SNR environments.
In exploring various input strategies, the research [91] proposed leveraging temporal correlations in spectrum data through LSTMs, linking past and present sensing events for improved decision-making. Their approach demonstrated superior detection performance and classification accuracy at low SNR compared to alternative methods, though it incurred increased training time. An extension of this work introduced a spectrum-sensing model incorporating PU activity statistics, known as primary user activity statistics-based spectrum sensing, which exhibited a 10% improvement in classification accuracy over ANN-based and random forest methods under similar SNR conditions.
Additionally, the study [92] explored the capacity of LSTMs for signal recognition and detection, comparing feature extraction from raw signals with cyclostationary analysis. The results revealed that while LSTMs trained on raw data achieved 93.5% classification accuracy, incorporating spectral correlation functions elevated this accuracy to 99.8%, underscoring the efficiency of LSTMs in learning the underlying signal characteristics. Beyond temporal feature learning, LSTMs have been integrated with spatial feature extraction techniques. The research [93] proposed a covariance matrix-based LSTM framework, which first extracts spatial correlations from signal covariance matrices before applying LSTMs for temporal learning. Their approach significantly outperformed SVM, gradient boosting machine, random forest, and energy detection models in the SNR range of −15 dB to −7.5 dB, demonstrating the benefits of jointly learning spatial and temporal features in spectrum sensing. Combining LSTMs with other algorithms has also been explored for spectrum prediction.
The study [94] proposed an LSTM-based spectrum prediction method that leverages historical spectral data to anticipate future spectrum usage. The technique was employed for network optimization, reducing computational complexity while improving predictive performance over traditional MLP networks. Addressing dynamic SNR environments, the study [95] introduced an enhanced LSTM model optimized using CNN-based feature extraction and the cuttlefish algorithm [96] for hyperparameter tuning. Their experimental findings showed superior performance in spectrum detection across multiple frequency bands, surpassing CNN-LSTM and conventional sensing techniques. Future work in this domain may incorporate reinforcement learning to enhance spectrum sensing adaptability. Figure 15 shows the LSTM cell architecture, illustrating the interaction among the input, forget, and output gates and the cell and hidden states. The gates manage information flow by determining which parts of the past state are preserved, modified, or forwarded, allowing LSTMs to maintain and process long-term dependencies in sequential data.
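A minimal sketch of such a sequence model is given below (illustrative only; the per-slot feature dimension, hidden size, and two-class occupied/vacant output are assumptions rather than details from the cited LSTM studies); it consumes a window of per-slot sensing features and produces a detection decision from the final time step.

import torch
import torch.nn as nn

class LSTMSensing(nn.Module):
    # Minimal LSTM detector: consumes a sequence of per-slot features
    # (e.g., energy statistics) and outputs occupied/vacant logits.
    def __init__(self, feat_dim=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)
    def forward(self, x):                      # x: (batch, time, feat_dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # decision from the last time step

# Toy usage: 32 sensing windows, each a sequence of 50 feature vectors.
x = torch.randn(32, 50, 8)
logits = LSTMSensing()(x)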

4.2.4. Alternative Neural Network (NN) Approaches for Spectrum Sensing

Beyond CNNs and LSTMs, researchers are leveraging ANNs, DNNs, and generative adversarial networks (GANs) to enhance spectrum sensing. ANNs, known for their adaptive learning and fault tolerance, have shown promising results. The research [97] combined wavelet transforms with ANNs for signal denoising, improving detection probability by 0.1 in the −20 to −14 dB SNR range. The study [98] introduced an ANN-based hybrid spectrum sensing model, demonstrating improved detection with six feature-extracting detectors.
DNNs offer scalability and robustness in noise-affected environments. The research [99] used phase-difference distribution with DNNs to mitigate noise uncertainty and carrier frequency mismatch, achieving at least a 0.1 detection probability improvement in SNRs from −20 dB to −5 dB. The work [100] integrated DNNs with a robust alternating-direction method of multipliers for broadband spectrum recovery, enhancing convergence and performance at low SNRs. GANs address domain adaptation challenges in spectrum sensing. The research [101] proposed a deep GAN with consistency constraints and transfer learning (TL), outperforming the stochastic average gradient algorithm in accuracy. The work [102] developed a deep CNN with time–frequency fusion, leveraging PCA and TL to enhance low-SNR spectrum sensing and outperforming the deep compressive sensing GAN.
These approaches significantly improve spectrum sensing accuracy and adaptability, paving the way for future research on hybrid models and real-time optimization. Table 6 presents an extensive analysis of DL spectrum sensing approaches that details learning methods, essential characteristics, pros and cons, and predicted developments. The framework contains sequential data processing via RNNs and LSTMs, ANNs and DNNs for feature extraction, GANs for domain adaptation, and ResNet for deep feature learning.

4.3. Large Language Models (LLM)-Based DSA

Large language models (LLMs) are large-scale transformer-based NNs trained on vast textual corpora. Most modern LLMs are built upon the transformer architecture [103], which utilizes multi-head self-attention to model long-range dependencies efficiently. The architecture consists of an encoder and a decoder, which can be used separately or together depending on the model. For instance, bidirectional encoder representations from transformers (BERT) uses only the encoder, GPT-4 relies solely on the decoder, and T5 employs both components [104].
LLMs follow a pretrain-then-adapt paradigm. First, they undergo unsupervised pretraining on massive unlabeled datasets using tasks such as causal language modeling or denoising autoencoding. Then, they are adapted to specific downstream tasks through fine-tuning, instruction tuning, or alignment tuning. When direct model updates are impractical, prompt engineering guides the model’s responses.
Key capabilities of LLMs relevant to DSA include in-context learning, enabling few-shot task adaptation; reasoning abilities supported by chain-of-thought prompting; instruction-following from multi-task fine-tuning; and multimodal learning, as demonstrated in models based on contrastive language–image pre-training and audio–language pre-training [105]. These strengths support several DSA applications in future 6G networks. For instance, processing radio frequency signals, cameras, and sensor data can improve multimodal spectrum sensing and allocation. Network slicing can be customized based on the contextual needs of use cases like IoT and V2X. In multi-agent systems, lightweight LLMs enable agents to communicate using natural language, improving coordination and reducing latency. LLMs can also be fine-tuned for spectrum spoofing detection, learning subtle distinctions between normal and malicious behavior.
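To illustrate how in-context learning could be applied to a DSA decision, the following sketch builds a few-shot prompt in plain Python; the observation fields, example decisions, and the send_to_llm() call are hypothetical placeholders introduced here, not part of any cited system.

# Illustrative few-shot prompt for an LLM-assisted allocation decision.
EXAMPLES = [
    ("occupancy=0.1, interference=low, slice=URLLC", "assign channel 3"),
    ("occupancy=0.8, interference=high, slice=eMBB", "defer and re-sense"),
]

def build_prompt(observation: str) -> str:
    # Few-shot examples condition the model on the desired decision format.
    shots = "\n".join(f"Observation: {o}\nDecision: {d}" for o, d in EXAMPLES)
    return (
        "You allocate spectrum for a cognitive radio network. "
        "Follow the examples.\n\n"
        f"{shots}\n\nObservation: {observation}\nDecision:")

prompt = build_prompt("occupancy=0.3, interference=medium, slice=mMTC")
# decision = send_to_llm(prompt)   # hypothetical inference call (edge or cloud)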
Despite their promise, LLMs face challenges in DSA implementation. These include high inference latency, large model sizes unsuitable for edge deployment, and the risk of hallucination, which may lead to incorrect spectrum decisions. Moreover, continuous learning is essential to keep pace with evolving policies, while energy consumption remains a concern; training the Large Language Model Meta AI (LLaMA), for example, consumed 449 MWh [106]. Finally, the lack of transparency and reliability due to the black-box nature of LLMs raises concerns for mission-critical systems. A comparative evaluation of ML, DL, and LLMs regarding DSA operations appears in Table 7. It demonstrates various architectural differences alongside learning paradigms, adaptiveness, reasoning characteristics, multimodal abilities, and practical implementation obstacles. LLMs possess better adaptability and reasoning, alongside weaker performance regarding latency, explainability, and energy efficiency.

5. Effectiveness of AI in Addressing Spectrum Management (SM) Challenges

Through AI techniques, wireless devices can automatically adapt to changing conditions by learning basic network characteristics such as channel state information, traffic load, link quality, and interference patterns. System performance can be steered toward predefined objectives by interpreting link-level metrics such as SNR and acknowledgment/negative-acknowledgment feedback. AI has transformed wireless communications by delivering substantial gains in network efficiency, adaptability, and intelligence. Because 5G networks are expected to reach their capacity limits by 2030, 6G services will become vital for delivering beyond-5G capabilities, signaling a new era of digital and intelligent connectivity. AI technology will serve as a vital force to push multiple access (MA) performance in 6G networks toward ultra-high data rates while enabling connectivity of massive devices, extremely low latency, superior SE and energy efficiency, and unprecedented reliability.

5.1. AI Techniques in MA

6G networks will adopt AI as a foundational technology because it can address complicated problems without explicit programming. Academia and industry now recognize its success in spectrum sensing, resource management, and MAC-layer optimization. At the MAC layer, AI-based solutions show exceptional promise for PA problems and SM needs, as well as beamforming algorithms, switching control, user association/clustering and scheduling, and network security. AI adoption from 2G to 5G was limited, whereas 6G envisions a complete fusion of AI into all its MA strategies. This integration is expected to improve network access efficiency while decreasing complexity and power usage, which is vital for creating wireless systems that respond quickly and intelligently across broad areas [107].

5.2. Spectrum Sensing and Interference

The demand for high spectral and energy efficiency at the MAC layer drives the exploration of advanced spectrum sensing techniques. AI techniques (including DL, RL, supervised learning, and unsupervised learning) are increasingly applied to enhance spectrum sensing, sharing, and interference management in 6G systems. These techniques help optimize channel access and improve network performance, particularly in dynamic SM and efficient RA. In spectrum sensing, AI-driven solutions such as SVMs, K-means clustering, and Gaussian mixture models are utilized for cooperative sensing, modulation detection, and dynamic SM. DL methods like LSTM networks and CNNs have improved detection accuracy, spectral efficiency, and interference suppression. Moreover, when integrated with emerging technologies like terahertz communication and reconfigurable intelligent surfaces (RIS), AI technologies offer enhanced capabilities for dynamic SM in 6G. Recent studies have focused on AI-enhanced spectrum sensing in cognitive radio communication, demonstrating the potential of AI to increase SE [108].
Effective SS, a key component of SM, benefits from AI techniques that reduce prior information requirements and enhance accuracy. AI methods such as CNNs and K-nearest neighbor algorithms have been explored for intelligent SS across CRNs, the coexistence of licensed and unlicensed bands, and passive–active user scenarios. RL-based approaches, including QL, are applied to optimize spectrum sharing in coexisting LTE and Wi-Fi systems and improve fairness in coexistence networks [109].
Spectrum interference, a significant challenge in spectrum sensing, is mitigated using AI-based spectrum interference management (SIM) techniques. DL and RL techniques help improve interference detection, signal classification, and mitigation strategies, enabling more efficient spectrum use. AI-driven SIM solutions, such as LSTM-based approaches, CNN models, and RL methods, adapt to dynamic environments and improve interference management in non-orthogonal and CRNs [110]. Integration with RIS technologies further enhances interference mitigation by adjusting RIS configurations in real time.
In summary, AI technologies are transforming spectrum sensing, sharing, and interference management in 6G networks, significantly improving efficiency, accuracy, and adaptability across MA scenarios. These advancements are essential for realizing intelligent and dynamic 6G communications, driving higher spectral efficiency, reduced interference, and improved network performance. Figure 16 depicts an AI-based spectrum-sensing system in which ML models process signal features to identify spectrum availability more precisely and effectively, detecting free channels and enabling dynamic access in CRNs.

5.3. Natural Language Processing (NLP) in Network Security & Automation

Natural language processing (NLP) is revolutionizing communication networks by strengthening security, enhancing threat detection, and automating customer support tasks. It enables machines to interpret and generate human language, which aids in understanding network events, logs, and communications [111]. This capability enhances the ability of an organization to analyze network traffic and user behavior, playing a crucial role in preventing cyber threats, improving performance, and automating manual tasks. As communication networks grow more complex and data increases exponentially, traditional methods struggle to address security risks. NLP excels in processing unstructured data like logs, reports, and alerts, making it an essential tool for tackling these challenges. Additionally, NLP models can learn from large datasets, allowing them to detect emerging threats and adapt to new attack methods.
NLP-powered intrusion detection systems (IDSs) leverage transformer-based architectures [112] to analyze network data such as logs and alerts, identifying anomalies that may indicate security breaches with up to 98% accuracy. This integration enhances traditional IDS methods by improving the system’s ability to understand the context of network events. NLP efficiently processes natural language logs, which may contain valuable insights, like error messages and analyst descriptions, often missed by conventional detection systems. It can uncover subtle patterns linked to malicious activity, including insider or advanced persistent threats, which are difficult to detect with rule-based systems [113].
Moreover, by analyzing historical logs and event correlations, NLP-based IDS can detect sophisticated attacks, including obfuscated code or social engineering tactics. These systems can identify zero-day attacks by recognizing anomalous patterns, even those previously unseen [114]. Real-time detection capabilities allow for swift response, reducing false positives and improving security operations. In conclusion, NLP-enhanced IDS offers a highly effective solution for identifying and mitigating security risks, improving the accuracy and efficiency of network security efforts, and making them vital tools in combating cyber threats.
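As a deliberately simple stand-in for the transformer-based IDS pipelines discussed above, the sketch below classifies raw log lines with TF-IDF features and logistic regression; the log lines, labels, and feature choice are toy assumptions introduced purely to show the text-classification workflow, and a production system would use far larger corpora and a transformer encoder.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical log lines; 1 = suspicious, 0 = benign.
logs = [
    "user admin login success from 10.0.0.5",
    "user admin login failed from 203.0.113.7",
    "firmware checksum mismatch on gateway gw-12",
    "scheduled backup completed without errors",
]
labels = [0, 1, 1, 0]

# Text features + linear classifier as a minimal log-anomaly detector.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(logs, labels)
print(model.predict(["user root login failed from 198.51.100.9"]))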

5.4. Detection and Techniques for Security and Privacy

AI plays a pivotal role in enhancing the security and privacy of communication networks through advanced intrusion detection, encryption, and privacy-preserving techniques. As cyber threats become more sophisticated, traditional security methods often fail to detect and mitigate emerging risks. AI, particularly machine learning, allows for the continuous analysis of network traffic, enabling the identification of suspicious patterns and evolving attacks [115]. AI-driven encryption methods adapt to real-time network conditions, ensuring a balance between robust security and performance optimization. Privacy-preserving techniques, such as FL and differential privacy, are strengthened by AI, enabling data analysis without exposing sensitive information and ensuring compliance with regulations like the General Data Protection Regulation (GDPR) [116].
AI-powered IDSs utilize ML models, such as transformers, to achieve high detection accuracy, outperforming traditional methods, particularly in detecting zero-day vulnerabilities. These models can process complex network traffic and identify known and novel threats in real time. Furthermore, AI enhances privacy techniques by ensuring data security in decentralized systems, such as FL and secure multi-party computation, which maintain confidentiality while allowing for collaborative data processing [117]. Additionally, AI enables privacy-preserving data analytics by performing calculations on encrypted data, extracting insights without exposing sensitive information. Overall, AI is reshaping communication network security, making systems more resilient, efficient, and privacy-conscious while ensuring compliance with privacy regulations. Table 8 summarizes AI-based privacy-preserving approaches, describing their fundamental properties, applications, and privacy-enhancing features. Three techniques, FL, differential privacy, and secure multi-party computation, protect sensitive information while data are analyzed and processed. These methods are applied in medical, financial, and IoT environments, enabling computation on protected data and AI learning while maintaining regulatory compliance.
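To make the FL idea concrete, the following minimal sketch (a toy federated-averaging round over synthetic linear-regression clients; the data, learning rate, and round count are assumptions) shows how only model parameters, never raw data, are exchanged with the aggregator; a differentially private variant would additionally clip and add noise to the client updates before averaging.

import numpy as np

# Each client trains locally on private data and shares only its weight vector;
# the server averages the weights, so raw data never leave the device.
def local_update(global_w, X, y, lr=0.1, epochs=5):
    w = global_w.copy()
    for _ in range(epochs):                      # simple linear-model SGD
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
global_w = np.zeros(4)

for _ in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)         # FedAvg aggregation step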

5.5. Comparison of AI and Traditional Methods

Integrating AI into communication networks raises ethical and regulatory concerns, particularly regarding bias in AI models, which could lead to unfair outcomes such as improper traffic prioritization or discrimination. Mitigating bias through dataset auditing, adversarial debiasing, fairness-aware training, and explainable AI (XAI) methods is essential to ensure fairness and accountability. Regulatory compliance with laws like the GDPR is also crucial. Data minimization, transparency, FL, and differential privacy help maintain data protection without compromising model accuracy [117]. Real-world examples show how these strategies are implemented. A DL-based IDS in a 5G network employed FL to keep data local and differential privacy to remain GDPR-compliant while achieving 95% accuracy. Similarly, an AI-driven traffic management system for smart cities utilized XAI for transparency, processing only anonymized data to comply with the data minimization requirements of GDPR.
In critical communications, AI models like DL and RL excel in accuracy but lack interpretability, which is crucial in high-stakes scenarios. Hybrid approaches combining interpretable models, such as decision trees, with high-performing models, like DL, offer a balanced solution. Future research should focus on XAI methods tailored to critical communications, where decision traceability is key. While AI models reduce latency and improve accuracy, they demand more computational power and energy than traditional methods. FL also faces challenges like inefficiencies in aggregating updates and heterogeneous data distribution, especially in IoT environments. Ultimately, the choice between AI-driven and traditional methods depends on available resources and application needs. The comparison in Table 9 evaluates AI approaches and traditional communication network methods in terms of performance criteria, implementation aspects, and regulatory requirements. AI offers adaptability, accuracy, and scalability, but it requires greater computational power and poses challenges for interpretability and compliance. Conventional methods retain simplicity, lower energy requirements, and better transparency, making them appropriate for low-power and privacy-sensitive applications.

6. Applications of AI in DSA for 5G, 6G, and Beyond

The following section presents detailed examples of AI applications in current communication networks, covering 5G/6G systems, IoT infrastructure, edge computing environments, and cloud-based platforms. The examples show how AI-driven strategies address essential communication problems such as network management, resource distribution, and security protection. These real-life applications demonstrate how AI is transforming future communication networks through automation, enhanced decision support, and system protection against emerging cyber threats.

6.1. AI-Driven Connectivity in Dense Urban 5G/6G Networks

The deployment of 5G and emerging 6G networks in densely populated urban environments faces numerous challenges, including high user and device density, fluctuating traffic demands, and the need for consistent coverage. AI has emerged as a key enabler for addressing these issues by optimizing network traffic, enhancing BW allocation, and maintaining reliable connectivity [118]. AI-based systems can forecast traffic patterns, monitor base station conditions, and adapt network parameters in real time to ensure efficient resource utilization. These systems facilitate seamless handovers, manage interference, and proactively identify potential congestion points. AI enhances network performance by dynamically managing BW, predicting congestion, and balancing load, ensuring high reliability and improved user experience in dense urban 5G/6G networks. Figure 17 illustrates an AI-driven traffic management framework integrating base stations, users, and a centralized AI optimization center. This center receives real-time feedback (blue arrows), processes congestion alerts (orange arrows), and sends optimized traffic distribution commands (red arrows) to maintain service quality.

6.2. AI for IoT and Edge Network Management and Security

The proliferation of IoT devices and the adoption of edge computing have created new possibilities and significant challenges in network management and security. AI technologies are increasingly employed to manage large-scale IoT ecosystems, enabling real-time optimization of device communication, resource distribution, and threat mitigation [119]. In such networks, AI models process data from numerous interconnected devices to identify anomalies, faulty components, inefficiencies, and potential security threats. By deploying AI at the edge, systems benefit from reduced latency, faster response times, and improved defense against threats like unauthorized access or data breaches. Advanced AI-driven protocols also continuously monitor abnormal behavior, enhancing overall security resilience. Figure 18 depicts an AI-integrated IoT-edge architecture comprising smart devices (e.g., sensors, wearables), local edge computing nodes for immediate processing, and embedded AI security mechanisms. These components ensure real-time data analysis, predictive maintenance, adaptive resource management, and proactive threat detection. This architecture illustrates the pivotal role of AI in maintaining the performance, reliability, and security of modern IoT and edge networks.

6.3. AI-Driven Network Security for Cloud Communication Systems

As cloud-based communication systems become more widespread, ensuring data security and privacy is a key challenge. AI enhances network security in cloud environments, focusing on intrusion detection, anomaly detection, and data protection [120]. AI-driven security systems analyze incoming traffic for abnormal patterns, detect threats like distributed denial of service attacks, and adjust security measures in real time. AI also ensures privacy by applying encryption and anonymization techniques and detecting anomalies in communication.
FL has recently emerged as a promising approach. It enables multiple devices to collaboratively train AI models while keeping data local. This method helps preserve privacy, as sensitive data remain on the device while contributing to model improvements. In cloud settings, FL strengthens intrusion and anomaly detection by aggregating model updates from distributed devices, improving the robustness of AI models without exposing raw data. Figure 19 presents an AI-based security framework for cloud communication systems, including an IDS, a data protection layer, and an anomaly detection layer. These layers work together to ensure comprehensive security by detecting threats, securing data exchanges, and identifying unusual patterns in real time.

6.4. Core Applications of Network Slicing

5G is transforming industries by leveraging AI, IoT, and AR/VR technologies, with an expected monthly data usage of 35 GB per user across 400 use cases in 70 sectors. These sectors are categorized into eMBB (high-BW services), mMTC (IoT devices with massive connectivity), and URLLC (low-latency, high-reliability services). Key 5G use cases include smart transportation, military services, education, industry, healthcare, agriculture, and smart cities, each with unique network demands.
Network slicing supports V2X communications in smart transportation to meet diverse needs like latency and reliability [121]. Secure, low-latency networks, optimized by network slicing, are vital for military applications. Innovative education platforms require high coverage and low latency, supported by network slicing for rural areas. Industry 4.0 needs low latency and sensor connectivity, which network slicing enables [122]. Healthcare 4.0 uses network slicing to meet service demands like tele-surgery and monitoring. In agriculture, smart farming relies on 5G’s low latency and reliability for automation and sensor networks [123].
Smart cities benefit from the ability of 5G to manage diverse urban services like traffic and energy, with network slicing ensuring optimized service delivery [124]. Overall, network slicing is key to addressing the specific needs of each industry and providing efficient and reliable services.

6.5. Generative AI Models Use

GenAI models benefit wireless network demand planning in multiple situations. They improve uplink and downlink resource usage when network capacity is constrained and support demand planning by combining network resource assessments with user-specific needs across user terminals and network systems. Figure 20 depicts multiple situations where GenAI optimizes demand-shaping operations in WNs. The figure illustrates how a GenAI model predicts traffic patterns and optimizes resource distribution using real-time user behavior for dynamic network load balancing. This method improves network efficiency and user experience in evolving 5G/6G networks.

6.5.1. Green Networking Through Smart Cell Transitioning

One of the key challenges in addressing climate change is reducing carbon emissions, which can be mitigated by optimizing energy consumption in WNs. Cell switching is a technology that reduces energy usage by offloading traffic from idle small BSs to macro-BSs and putting small BSs into sleep mode. The efficiency of cell switching depends on the density of resource blocks and data traffic loads. GenAI-based demand planning helps optimize resource utilization by shaping the demand placed on resource blocks, thereby improving macro-BS capacity and increasing switching-off opportunities for small BSs. This reduces energy consumption, contributing to more sustainable WNs [125]. GenAI demand-shaping can significantly optimize energy use. The high-altitude IMT-based station (HIBS), which is envisioned to use green energy sources, could host offloaded users, further enhancing energy savings. However, a trade-off exists between the energy savings from GenAI-assisted cell switching and the energy consumed by GenAI algorithms, presenting a challenge in balancing carbon emissions. It is crucial to ensure that GenAI’s energy savings outweigh its consumption during the design process.
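The offload-and-sleep decision underlying cell switching can be illustrated with a simple greedy rule; the sketch below uses arbitrary capacity and load numbers (all values and the single-macro topology are assumptions introduced for illustration) to show how spare macro-BS capacity determines which small BSs can be put to sleep.

# Greedy sketch of small-cell sleeping: offload each small BS to the macro BS
# if the macro still has spare capacity, then switch the small BS off.
# Capacities and loads are in arbitrary resource-block units (illustrative).
macro_capacity = 100
macro_load = 40
small_cells = {"sc1": 10, "sc2": 25, "sc3": 5}   # current traffic per small BS

switched_off = []
for cell, load in sorted(small_cells.items(), key=lambda kv: kv[1]):
    if macro_load + load <= macro_capacity:       # macro can absorb this traffic
        macro_load += load
        switched_off.append(cell)                 # put the small BS to sleep

print(f"Sleeping cells: {switched_off}, macro load: {macro_load}")
# GenAI demand shaping would lower the per-cell loads above, increasing the
# number of cells that can be switched off and hence the energy saved.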

6.5.2. Intelligent Connection Allocation in Heterogeneous Networks

Ensuring efficient user association remains a critical challenge in WNs, especially in virtual heterogeneous networks (VHetNets) that integrate macro-BSs, small BSs, and HIBS. These multi-tiered networks aim to meet user demands while collaboratively maintaining performance and load balance. However, designing optimal and dynamic user association algorithms is constrained by limited network resources and the increasing volume of immersive traffic generated by modern applications.
A significant bottleneck arises from the restricted wireless backhaul capacity, particularly in aerial platforms like HIBS or terrestrial BSs that lack high-capacity fiber connections. Due to constrained spectrum availability and limited backhaul bandwidth, the number of users a BS can support is inherently capped. GenAI models can be deployed at BSs with limited backhaul capacity to mitigate this. These models analyze the traffic patterns of users and compress the data into formats requiring lower data rates and less SA. This compression helps to alleviate backhaul congestion and optimally utilize the available wireless resources. Moreover, the GenAI algorithm can be selectively applied to specific user groups based on traffic characteristics and current network load, enabling demand-shaping strategies that further improve efficiency.
Another key issue in WNs involves uplink communication, particularly due to the user equipment’s limited transmission power and energy storage. In scenarios with high free-space path loss, such as remote or high-mobility environments, uplink transmissions become even more challenging. In such cases, GenAI models can be implemented directly at the user terminal to compress outgoing data based on its type and the available energy. This approach enables energy-efficient uplink communication, reducing the power required to transmit data without compromising essential content.

6.5.3. Smart Data Compression Strategies for Spectrum-Efficient VHetNets

Ultra-dense VHetNets, which integrate terrestrial macro/small cells with aerial platforms (e.g., high altitude platform stations operating in a shared frequency band), promise higher SE but suffer severe inter- and intra-tier interference due to spectrum harmonization. Traditional SA schemes alone struggle to keep pace with ever-growing traffic demands and finite spectrum resources. By incorporating GenAI for demand planning at the BS level, user data can be adaptively compressed, reducing the required data rate and spectrum footprint, thereby expanding the degrees of freedom available for resource block assignment. This compression alleviates backhaul and spectrum congestion and lowers interference levels, directly boosting the network’s sum SE [126].

6.5.4. Enhancing Network Resilience for Disasters and Mass Events

Ensuring rapid and reliable communication is vital for search and rescue operations in disaster scenarios. However, disasters often impair telecommunication infrastructures, limiting available resources and hindering effective response. Implementing GenAI for demand planning can enhance communication efficiency, enabling more users to be served despite these constraints. User-generated content can be categorized as critical or non-critical based on its relevance to search and rescue missions. Critical content, essential for rescue efforts, requires priority transmission. Non-critical data can be deferred or compressed using GenAI models to conserve resources. Although demand shaping may introduce processing delays, the overall transmission is expedited due to reduced data volume, alleviating network congestion and addressing capacity challenges during disasters.
Beyond disaster response, GenAI can predict network traffic demands by analyzing factors like time of day and special events. For instance, GenAI can simulate anticipated demand surges before large public gatherings, allowing network designers to plan infrastructure adjustments proactively. GenAI can manage network density during such events by distributing demand across BSs or compressing user-generated content, ensuring optimal performance. Integrating GenAI into communication networks thus enhances resilience and efficiency, particularly in resource-constrained scenarios like disasters and large-scale events.

6.5.5. Exploring New Revenue Streams with Generative AI (GenAI) in Telecom

Implementing GenAI for demand shaping in WNs offers significant business advantages. Network operators can avoid the substantial costs of acquiring additional spectrum by optimizing the existing spectrum and energy resources. Furthermore, the surplus spectrum can be leased to other operators, creating new revenue streams and enhancing service quality for secondary operators. Energy savings from BS switching translate into reduced operational costs, promoting economic sustainability. By decreasing network load, GenAI facilitates increased BS deactivation. It also enhances user satisfaction and safety in critical scenarios like disaster response and significant public events, contributing to overall business resilience.

6.6. Open-Source Datasets for Wireless Communication Research

In recent years, open-source datasets have played an increasingly critical role in advancing machine learning (ML) research within wireless communications. While institutional datasets from established sources such as IEEE and university testbeds remain foundational, the rise of community-driven repositories, particularly those hosted on platforms like GitHub, has expanded the landscape of available data for experimental and benchmarking purposes. These datasets support wireless scenarios, including IoT security, UAV communications, human activity recognition, and advanced channel estimation tailored to emerging 6G technologies.
Table 10 presents a curated list of publicly available datasets relevant to wireless ML research. Each entry includes a direct access link, the experimental or deployment area, the underlying technology or communication standard, the types of data included (e.g., SINR, RSSI, mobility traces, jamming patterns), and a concise description of its intended use or application domain. The selected datasets cover real-world measurements and simulated environments, enabling reproducible research across various wireless paradigms.
These datasets empower researchers to evaluate algorithms under controlled and real-world conditions, addressing pressing challenges such as signal degradation, adversarial interference, mobility dynamics, and intelligent resource allocation. However, as many community-shared datasets lack strict curation, researchers are encouraged to assess metadata completeness, validation procedures, and representational fairness when selecting datasets for training or benchmarking purposes. This evolving ecosystem of datasets ensures broader access to high-quality data. It promotes reproducible research, essential for developing trustworthy AI models in wireless networks, particularly in the context of 5G evolution and 6G vision.

7. Challenges and Limitations of the DSA

Integrating AI into modern communication networks brings significant benefits but also introduces several critical challenges that must be addressed for effective and responsible deployment. One of the foremost concerns is data privacy and security. AI models, particularly those built on DL architectures, require large volumes of data, often including sensitive information such as user location, communication patterns, and behavioral traits. Without robust privacy safeguards, the risk of data breaches and unauthorized access increases significantly. Traditional encryption and anonymization techniques may not align well with the computational demands of AI systems. In response, privacy-preserving methods like FL have gained attention, allowing raw data to remain on local devices while sharing only model updates. However, FL poses challenges, including model synchronization, vulnerability to data poisoning, and difficulties preserving end-to-end data integrity.
Scalability and resource constraints form another major limitation, particularly in networks populated with low-power and edge devices, such as those found in IoT environments. Implementing computationally intensive models like DNNs in these settings is often impractical due to limited processing power and energy availability. While model compression techniques such as pruning, quantization, and offloading to edge computing platforms have been developed to mitigate these issues, they may reduce model accuracy or robustness, especially under complex and dynamic network conditions.
Equally important is the issue of model interpretability. Many AI models function as opaque “black boxes,” making it difficult for network administrators and stakeholders to understand the reasoning behind their decisions. This lack of transparency becomes especially problematic in mission-critical applications such as network anomaly detection and cybersecurity, where trust and clarity in decision-making are essential. Techniques from the emerging field of XAI aim to make these systems more transparent and trustworthy. However, incorporating XAI methods can introduce additional computational overhead and may involve trade-offs among model interpretability, performance, and complexity.
Legal frameworks and ethical standards also pose major obstacles to implementing AI in communication networks. GDPR restrictions limit the ability of AI systems to acquire the extensive training data needed for operational effectiveness. Privacy-preserving solutions include FL, data anonymization, and differential privacy, while close collaboration with regulators helps guarantee compliance. Ethical frameworks must also address fairness in decision-making, transparency, and accountability. Confidence in and understanding of these systems can be strengthened through bias-reduction procedures during AI development, the use of explainable AI tools such as local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP), and continuous human supervision. Resolving these critical matters is essential for integrating AI into real network deployments effectively and responsibly [132].
ML has shown promise for dynamic SM, but its practical deployment in telecom networks remains limited, with current applications confined mainly to 4G–5G frequency-division duplex bands. A fully shared spectrum environment demands a significantly redesigned network architecture across radio frequency, physical, and MAC layers, introducing control signaling, frame structure, and interference coordination challenges. Reliability is critical, particularly for URLLC services requiring extremely low error rates, which current probabilistic ML models, including large GenAI systems, struggle to meet. Energy efficiency also becomes a concern, as spectrum sensing, model training, and inference are resource-intensive, particularly for energy-constrained devices. Latency introduced by real-time sensing and AI decision-making can violate the strict delay requirements of 5G. Moreover, poor generalization due to uneven spectrum data and unstable radio environments hinders model adaptability. Finally, achieving convergence in massive, dynamic WNs is difficult, limiting the practicality of many online learning methods. Overcoming these challenges is essential for realizing the full potential of intelligent, adaptive spectrum-sharing systems in future 5G-and-beyond networks.

8. Recent Trends, Future Research Directions, and Lessons Learned

Modern communication networks are undergoing significant changes due to the rapid advancement of AI technology, which enables innovative systems that adapt efficiently. AI-based solutions are increasingly deployed across networks for optimization, anomaly detection, spectrum control, and predictive maintenance. This rapid evolution has also created multiple research problems and challenges that require investigation. This section reviews recent advances, identifies target research areas, and distills essential lessons learned from implemented systems to guide researchers and practitioners in developing resilient, ethical, and scalable AI-driven communication systems.

8.1. Recent Trends

The evolution of 5G and future wireless systems hinges heavily on strategic SA and adherence to international regulatory frameworks. Before 2015, the International Telecommunication Union (ITU) identified 11 candidate millimeter-wave (mmWave) frequency bands, ranging from 24 GHz to 86 GHz, for IMT-2020 and beyond. These allocations were formalized during the World Radiocommunication Conference 2015 (WRC-15), with ongoing studies recommended to assess spectrum-sharing feasibility, particularly in densely occupied bands.
Among these, the 28 GHz band (27.5–28.5 GHz) has been widely embraced by countries like the USA, South Korea, and Japan, although it is not formally listed among the recommended groups of the ITU. The most substantial BW potential lies in the 66–76 GHz range, offering up to 10 GHz for high-capacity communication. In contrast, the 47–47.2 GHz band offers the highest exchange speed (~200 MHz), making it suitable for ultrafast, low-latency applications. While not all 36.25 GHz total proposed spectrum will be allocated to 5G and beyond, the WRC-19 conference was expected to finalize key frequency bands based on ongoing global harmonization studies and regulatory decisions.
The 45.5–47 GHz band has emerged as a promising candidate for 5G and beyond due to its availability and performance potential, especially for BS-to-BS communications. Given the challenges in reallocating sub-6 GHz frequencies, the mmWave spectrum is more adaptable and compatible with IMT-2020 requirements. However, the successful integration of these bands will depend on several factors: the ability to repurpose existing spectrum, managing interference with adjacent services, spectrum-sharing study outcomes, and coordination across national regulatory bodies [133].
Furthermore, technical limitations such as shorter wavelengths at higher frequencies introduce challenges for antenna design and for the multiple-input multiple-output systems that enable higher data rates and lower-latency communications. These technological considerations underscore the need for additional regulatory support and harmonized global standards to ensure the seamless deployment of next-generation networks [134].

8.2. Future Research Directions

With the increasing demand for efficient spectrum utilization, DL has emerged as a promising tool to enhance spectrum sensing capabilities. Although considerable progress has been achieved, several research challenges remain unresolved, offering valuable directions for future exploration. One major challenge is improving detection accuracy in low-SNR environments, where current models, despite high performance under favorable conditions, tend to degrade rapidly. Techniques such as CNNs for statistical feature extraction and domain adaptation have shown potential, yet achieving robust detection across varying SNR levels remains a key research priority. Security is another pressing concern, as intelligent sensing systems are susceptible to malicious threats like adversarial attacks and spectrum sensing data falsification. Defensive strategies using adversarial training, DRL, and hybrid architectures have demonstrated promise, but a more comprehensive approach tailored to specific threat models and application scenarios is essential.
Furthermore, DL-based spectrum sensing often requires vast labeled datasets, which are costly and impractical to obtain in real-world scenarios. Approaches such as generative DA, TL from large pre-trained models, and semi-supervised or self-supervised learning will likely play a pivotal role in enabling robust performance under small-sample conditions. In addition, current spectrum sensing models typically rely on static or manually adjusted threshold settings, which are inadequate in highly dynamic wireless environments. Developing adaptive and context-aware thresholding mechanisms is critical for real-time, intelligent SM. Lastly, autoencoders have proven effective in various tasks such as feature extraction, anomaly detection, and resilience against adversarial interference. Their integration with other models, including LSTM and RL frameworks, has significantly improved sensing accuracy and system security. Future research is encouraged to delve deeper into optimizing and diversifying the use of autoencoder architectures to enhance the performance, reliability, and adaptability of DL-based spectrum sensing in complex and practical communication environments.

8.3. Lesson Learned

Examining AI-based spectrum allocation reveals essential insights about wireless network systems that benefit from intelligent techniques. The impressive capability of AI to enhance spectrum efficiency has been demonstrated alongside persistent limitations in model interpretability, data availability, and deployment scalability. The review establishes the need for AI models that are both simple and general enough to accommodate next-generation networking needs. The findings both inform present standard practices and motivate future development of reliable and robust AI-based spectrum management approaches. The key lessons learned are outlined as follows. Figure 21 depicts the critical lessons learned from the application of AI in dynamic spectrum access.
Lesson 1: Advancements in Spectrum Sharing, models, and implications: Spectrum sharing, particularly through TV white spaces (TVWS), has gained attention as a means to use available radio frequencies efficiently. TVWS refers to unused frequencies in TV bands, which were first allowed for use in the U.S. in 2006, with database providers managing the spectrum. Similar models are used in the U.K., focusing on legal and technical frameworks. Three key SS-models are commonly discussed: LSA, the President’s Council of Advisors on Science and Technology (PCAST) model, and Collective Use of Spectrum (CUS). LSA, primarily used in Europe, allows operators to share spectrum under defined conditions, ensuring QoS and enabling PUs to reclaim spectrum as needed. It is flexible and effective for large-scale deployments.
The PCAST approach uses a central database to track spectrum usage, reducing administrative burden and allowing SS without needing licenses. It provides flexibility but still maintains coordination to prevent interference. On the other hand, the CUS model will enable users to access spectrum without centralized reporting, making it more flexible but less reliable in terms of QoS and potential interference. Models like LSA and PCAST rely on well-managed databases and regulatory frameworks to ensure spectrum availability and QoS. At the same time, CUS offers more flexibility but at the cost of reliability. The success of these models depends on balancing flexibility with reliable spectrum access and the need for international coordination to maximize spectrum efficiency.
Lesson 2: Balancing Frequency Agility and Power Control for Interference-Aware SA: How CRNs perform SA depends heavily on a few controlling parameters that determine both performance and operational efficiency. Operating frequency is the most essential factor: SUs function best when they can automatically adjust their operating frequency according to instantaneous signal quality and PU activity. The ability to change frequencies enables SUs to use empty spectrum while steering clear of interfering signals. Transmission power is another necessary consideration: high transmission power depletes SU batteries rapidly while creating more interference for active PUs. Adaptive power control mechanisms are therefore crucial, as they manage transmission power limits under QoS constraints and channel conditions, minimizing disruption to SS operations, as sketched below.
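A minimal underlay-style power-control rule can capture the trade-off just described; in the sketch below, the interference cap, path loss, and device power limit are illustrative assumptions, and the SU simply transmits at the highest power that keeps the interference received at the PU below the allowed threshold.

def su_transmit_power(p_max_dbm, interference_cap_dbm, path_loss_db_to_pu):
    # Underlay-style rule: transmit at the highest power that keeps the
    # interference received at the primary user below the allowed cap.
    allowed_dbm = interference_cap_dbm + path_loss_db_to_pu
    return min(p_max_dbm, allowed_dbm)

# Illustrative numbers: 23 dBm device limit, -110 dBm interference cap at the
# PU receiver, 120 dB path loss between the SU and the PU receiver.
p_tx = su_transmit_power(23, -110, 120)
print(f"SU transmit power: {p_tx} dBm")   # -> 10 dBm, backed off from 23 dBm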
Lesson 3: The Importance of Channel Modeling and Distributed Strategies in Spectrum Access Evaluation: SA algorithm evaluations and optimization require careful consideration of the selected channel model. Most research about SA nowadays uses Rayleigh fading as a model, but investigation into the Nakagami-m model remains scarce. The effectiveness of SA techniques needs testing through diverse fade behavior assessments since different deployment environments, such as urban, rural, indoor, and outdoor, require separate examination. Distributed SA strategies need additional development since centralized approaches dominate despite the increasing requirement for decentralized solutions that work well under complex network environments with high levels of mobility. Distributed algorithms represent an essential field of study for future research because they provide better user mobility management and high QoS standards.
Lesson 4: Bridging the gap from simulation to real-world validation in CRN: While simulations are central to developing SA methods for CRNs, experimental test platforms are needed to address dynamic real-world complexities [135]. Current CRN test environments lack sophistication, preventing the practical testing of theoretical models [136,137]. Advancing research on scalable SA solutions demands that researchers develop flexible testbeds that reproduce actual operational conditions.
Lesson 5: Towards hybrid intelligence, integrating RL with complementary techniques: A recurring problem in the current literature is the excessive reliance on RL algorithms operating in isolation when managing radio resources. RL has shown significant progress in research, but solutions can advance further when this technology merges with additional intelligent paradigms such as Markov decision processes, auction-based mechanisms, genetic algorithms, and game-theoretic techniques. Game-theoretic approaches effectively capture agent learning patterns but do not represent environmental stochastic behavior well. Combining the strategic foresight of game theory with the adaptive learning of RL therefore offers promising directions for RA management.
Lesson 6: Enhancing Applicability for DSA: Successful DSA implementation requires identifying vacant spectrum bands and automatically selecting the frequency band and bandwidth that best match the SUs' instantaneous requirements. Despite the fast evolution of AI, research on novel RL algorithms for the design of CRN systems remains inadequate, and there is little exploration of how self-interested users can be incentivized to cooperate for SS purposes.
Lesson 7: Scalability and Generalization in Large-Scale Environments: Existing RL research faces a fundamental limitation in that the methods scale adequately only for simplified or isolated DSA problems. Traditional tabular methods such as QL become impractical in real-world CRNs, which involve massive deployments and large state–action spaces. Combining DL with RL, through DQN or actor–critic frameworks, offers a way to handle complex scenarios and vast data sets. Intelligent spectrum access systems therefore require scalable architectures that operate efficiently in dynamic, resource-constrained environments.
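To illustrate the scalability argument, the PyTorch sketch below replaces a Q-table (which would need 2^16 rows for 16 binary-occupancy channels) with a small neural network that maps the sensed occupancy vector to per-channel values. It is a stripped-down, bandit-style stand-in for a full DQN (no replay buffer or target network), and the environment, network sizes, and reward are illustrative assumptions.

```python
import random
import torch
import torch.nn as nn

N_CHANNELS = 16          # a tabular Q-table over 2^16 occupancy states would be impractical
rng = random.Random(0)

class QNet(nn.Module):
    """Maps an observed occupancy vector to one value estimate per channel."""
    def __init__(self, n_channels):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_channels, 64), nn.ReLU(), nn.Linear(64, n_channels))
    def forward(self, x):
        return self.net(x)

def sense():
    """Illustrative environment: each channel is busy with its own probability."""
    return [1.0 if rng.random() < 0.3 + 0.4 * (c / N_CHANNELS) else 0.0 for c in range(N_CHANNELS)]

qnet = QNet(N_CHANNELS)
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

for step in range(500):
    obs = torch.tensor([sense()])
    q = qnet(obs)
    # Epsilon-greedy channel choice; reward +1 if the chosen channel was idle, -1 otherwise.
    a = rng.randrange(N_CHANNELS) if rng.random() < 0.1 else int(q.argmax())
    reward = 1.0 if obs[0, a] == 0.0 else -1.0
    # One-step (bandit-style) target: regress the chosen action's value toward the reward.
    loss = (q[0, a] - reward) ** 2
    opt.zero_grad(); loss.backward(); opt.step()

print("learned values for an all-idle observation:", qnet(torch.tensor([[0.0] * N_CHANNELS])).detach())
```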
Lesson 8: Optimizing Distributed DSA through Spectrum Prediction and Adaptive Sharing Modes: Distributed DSA protocols typically use the ON/OFF model, where PUs maintain fixed channel usage statistics in each slot. However, since PUs can access licensed spectrum at any time, integrating a spectrum handoff mechanism is crucial to prevent disruption of SUs' transmissions. Unlike centralized CWNs, where a central controller assigns backup spectrum, distributed CWNs require more complex components such as distributed spectrum selection, handoff coordination, and transmission recovery. Many existing DSA protocols fail to fully leverage intelligence and learning capabilities, which could significantly improve performance through spectrum prediction. By predicting unoccupied channels, SUs can reduce sensing overhead, select backup channels more efficiently, and avoid packet collisions by accessing lower-traffic channels. Integrating spectrum prediction into DSA protocols could therefore enhance CWN reliability and efficiency. The choice between overlay and underlay spectrum-sharing modes also depends on PU traffic: the overlay mode is more effective when PU traffic is low, while the underlay mode is preferred when PU traffic is high and unoccupied spectrum is scarce. DSA protocols must dynamically switch between these modes to optimize network performance and spectrum usage.
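A minimal sketch of the prediction-assisted behavior described above is given below, assuming a simple exponentially weighted estimate of each channel's PU ON probability: the estimate is used both to pick a backup channel and to switch between overlay and underlay modes around a traffic threshold. The class, the 0.7 threshold, and the toy observation history are illustrative assumptions; a practical system could substitute an LSTM or another learned predictor.

```python
from collections import defaultdict

class OccupancyPredictor:
    """Tracks an exponentially weighted estimate of each channel's PU ON probability."""
    def __init__(self, n_channels, alpha=0.1):
        self.p_on = defaultdict(lambda: 0.5)   # prior: unknown channels assumed 50% busy
        self.alpha = alpha
        self.n_channels = n_channels

    def update(self, channel, observed_busy):
        self.p_on[channel] += self.alpha * (float(observed_busy) - self.p_on[channel])

    def backup_channel(self, exclude):
        """Backup = channel predicted most likely to be idle, excluding the current one."""
        candidates = [c for c in range(self.n_channels) if c != exclude]
        return min(candidates, key=lambda c: self.p_on[c])

    def sharing_mode(self, channel, busy_threshold=0.7):
        """Overlay when predicted PU traffic is low; underlay when it is high (threshold is an assumption)."""
        return "underlay" if self.p_on[channel] > busy_threshold else "overlay"

pred = OccupancyPredictor(n_channels=4)
history = {0: [1, 1, 1, 0, 1], 1: [0, 0, 1, 0, 0], 2: [1, 0, 1, 1, 1], 3: [0, 0, 0, 1, 0]}
for ch, obs_list in history.items():
    for b in obs_list:
        pred.update(ch, b)
print("backup for channel 0:", pred.backup_channel(exclude=0))
print("sharing mode on channel 0:", pred.sharing_mode(0))
```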
Lesson 9: Overcoming challenges in DSA and collision management: Most distributed DSA protocols rely on local spectrum sensing, which can be inaccurate due to noise, fading, and shadowing. Cooperative spectrum sensing addresses these issues by improving sensing accuracy. While cooperative sensing and DSA protocols have been studied separately, they are interdependent and should be designed together. For example, variations in transmitted power affect spectrum opportunities, which, in turn, influence the number of nodes involved in sensing and the detection threshold. A cross-layer approach is necessary to optimize both sensing accuracy and network performance. In distributed CWNs, collision avoidance is more complex due to dynamic spectrum availability, interference constraints, and the lack of a central controller. This requires managing collisions between SUs and between SUs and PUs. Fluctuating spectrum resources further challenge efficient SA and channel allocation. Ensuring that interference from multiple SUs stays below the threshold in a fully distributed manner remains a critical issue, warranting further research in distributed collision avoidance mechanisms.
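The NumPy sketch below illustrates the benefit of hard-decision cooperative sensing under simplified assumptions: several nodes run local energy detection on independent noisy observations, and a k-out-of-N fusion rule declares the PU present if at least k nodes do. The SNR, sample count, detection threshold, and fusion parameters are illustrative assumptions chosen only to show the qualitative gain over single-node sensing.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_energy_decisions(pu_active, n_nodes, snr_db=-5.0, samples=200, threshold=1.15):
    """Each node performs energy detection on its own noisy observation (illustrative parameters)."""
    snr = 10 ** (snr_db / 10.0)
    decisions = []
    for _ in range(n_nodes):
        noise = rng.normal(0.0, 1.0, samples)
        signal = rng.normal(0.0, np.sqrt(snr), samples) if pu_active else 0.0
        energy = np.mean((noise + signal) ** 2)
        decisions.append(energy > threshold)
    return decisions

def fuse(decisions, k):
    """k-out-of-N hard fusion: declare the PU present if at least k nodes say so."""
    return sum(decisions) >= k

trials, n_nodes, k = 2000, 5, 2
single_hits = sum(local_energy_decisions(True, 1)[0] for _ in range(trials)) / trials
coop_hits = sum(fuse(local_energy_decisions(True, n_nodes), k) for _ in range(trials)) / trials
print(f"detection prob: single node = {single_hits:.2f}, {k}-out-of-{n_nodes} fusion = {coop_hits:.2f}")
```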
Lesson 10: Implementing GenAI for Wireless Network Demand Shaping: The integration of GenAI into WNs holds immense promise but also presents several challenges that must be addressed for successful implementation. Hardware limitations, particularly in user devices, pose a bottleneck for GenAI-based demand shaping. Additionally, BS’s limited access to application-layer data hampers the efficient use of GenAI, though edge computing can help by processing data closer to the source. Regulatory policies, including net neutrality, must be adhered to, ensuring fair service for all users while managing the increased energy consumption and carbon emissions resulting from the computational demands of GenAI algorithms. Cybersecurity risks are another concern, as using GenAI in demand shaping could expose the system to malicious attacks or data breaches. Furthermore, handling encrypted data and maintaining security while ensuring efficiency requires further research. Information loss during data simplification is a significant challenge, potentially impacting the QoS and service-level agreements. It is essential to implement mechanisms that preserve data integrity, and in cases where information loss is unavoidable, data rescheduling may serve as an alternative. Finally, the black-box nature of GenAI models raises concerns about consistency, data privacy, and transparency, which are crucial for user trust and confidence [138]. Addressing these challenges is critical to fully realizing the potential of GenAI in the future of WNs.

9. Conclusions

In conclusion, the evolution of wireless communication from 5G to beyond-5G networks demands advanced techniques to address emerging challenges, particularly in spectrum management, network optimization, and security. With its DSA mechanisms, CR technology has proven essential for efficiently utilizing spectrum resources. As traditional methods face limitations in increasingly complex environments, DL and RL algorithms have emerged as powerful tools for enhancing spectrum sensing, RA, and cooperative network management. These AI-driven approaches significantly improve adaptive learning, detection accuracy, and data processing. However, challenges remain, such as the need for accurate channel state information and fairness in user access. Integrating AI technologies like GenAI holds great promise for future networks, particularly in optimizing SA and managing traffic demand. The potential of GenAI for demand-shaping and dynamic network adjustments is particularly relevant in the context of 6G, where AI is expected to play a pivotal role in ensuring efficient, secure, and scalable communication. Despite progress, key challenges such as data privacy, scalability, and model interpretability must be addressed to harness AI's potential fully. Additionally, collaboration between researchers, industry, and policymakers will be crucial to overcoming infrastructure and interoperability hurdles, paving the way for intelligent, adaptive, and secure networks. Future research should focus on refining AI techniques, improving network resilience, and exploring novel spectrum management strategies to support the next generation of ubiquitous and intelligent communication systems. Finally, future work should also analyze real-time performance and resource constraints, especially for mobile and edge deployments.

Author Contributions

The manuscript was written with contributions from all authors. Conceptualization, A.G.-I.; methodology, A.G.-I., A.L.I., K.N. and P.O.A.-O.; software, A.L.I.; validation, K.N. and A.L.I.; formal analysis, A.G.-I. and P.O.A.-O.; investigation, A.G.-I., A.L.I., K.N. and P.O.A.-O.; resources, K.N. and P.O.A.-O.; data curation, A.G.-I. and P.O.A.-O.; writing—original draft preparation, A.G.-I., K.N. and P.O.A.-O.; writing—review and editing, A.G.-I., A.L.I., K.N. and P.O.A.-O.; visualization, A.G.-I. and P.O.A.-O.; supervision, A.G.-I. and A.L.I.; project administration, A.G.-I. and A.L.I.; funding acquisition, A.G.-I. and A.L.I. All authors have read and agreed to the published version of the manuscript.

Funding

The authors received no specific funding for this study.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors sincerely appreciate the anonymous reviewers whose feedback enhanced the quality of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

3GPP: 3rd Generation Partnership Project
4G: Fourth Generation
5G: Fifth Generation
6G: Sixth Generation
AI: Artificial Intelligence
AMC: Automatic Modulation Classification
ANNs: Artificial Neural Networks
APs: Access Points
AR/VR: Augmented/Virtual Reality
B5G: Beyond 5G
BGP: Border Gateway Protocol
BS: Base Station
BW: Bandwidth
CBRS: Citizens Broadband Radio Service
CNNs: Convolutional Neural Networks
CR: Cognitive Radio
CRNs: Cognitive Radio Networks
CRS: Cyclic Redundancy Synchronization
CSI: Channel State Information
CTDE: Centralized Training and Decentralized Execution
CUS: Citizens’ Use Spectrum
CWNs: Cognitive Wireless Networks
DA: Data Augmentation
DCS: Digital Cellular System
DCSS: Dynamic and Collaborative Spectrum Sharing
DDPG: Deep Deterministic Policy Gradient
DDQN: Double Deep Q-Network
Dec-POMDP: Decentralized Partially Observable Markov Decision Process
DL: Deep Learning
DNNs: Deep Neural Networks
DQN: Deep Q-Network
DRL: Deep Reinforcement Learning
DSA: Dynamic Spectrum Access
ED: Energy Detection
E-GSM-900: Extended Global System for Mobile Communications 900 MHz
eMBB: Enhanced Mobile Broadband
FCC: Federal Communications Commission
FL: Federated Learning
FM: Frequency Modulation
FR: Frequency Range
GANs: Generative Adversarial Networks
GenAI: Generative AI
GDPR: General Data Protection Regulation
HCSSA: Hierarchical Cognitive Spectrum Sharing Architecture
HD: Half Duplex
HDRL: Hierarchical Deep Reinforcement Learning
HetNets: Heterogeneous Networks
HIBS: High-Altitude IMT-Based Station
IBFD: In-Band Full Duplex
IDS: Intrusion Detection Systems
IEEE: Institute of Electrical and Electronics Engineers
IoT: Internet of Things
ITU: International Telecommunication Union
KNN: k-Nearest Neighbors
LEO: Low Earth Orbit
LLMs: Large Language Models
LoRa: Long Range
LoRaWAN: Long Range Wide Area Network
LP: Linear Programming
LPEES: Lightweight Policy Enforcement and Evaluation System
LSA: Licensed Shared Access
LSTM: Long Short-Term Memory
LTE: Long Term Evolution
MA: Multiple Access
MAB: Multi-Armed Bandit
MAC: Medium Access Control
MADRL: Multi-Agent Deep Reinforcement Learning
MARL: Multi-Agent Reinforcement Learning
MBN: Mobile Broadband Network
MIMO: Multiple-Input Multiple-Output
ML: Machine Learning
MLPs: Multi-Layer Perceptrons
mMTC: Massive Machine-Type Communication
mmWave: Millimeter Wave
MSBT: Multicast Source-Based Tree
NARNET: Nonlinear Autoregressive Neural Network
NFV: Network Functions Virtualization
NLP: Natural Language Processing
NN: Neural Network
NOMA: Non-Orthogonal Multiple Access
NTNs: Non-Terrestrial Networks
NR: New Radio
OTFS: Orthogonal Time-Frequency Space
PA: Power Allocation
PAL: Priority Access License (CBRS)
PCA: Principal Component Analysis
PCAST: President’s Council of Advisors on Science and Technology
PIBF: Priority Index-Based Framework
PLA: Physical Layer Authentication
PLS: Physical Layer Security
PUs: Primary Users
QF: Quality Function
QL: Quality Learning
QoE: Quality of Experience
QoS: Quality of Service
QT: Quality Table
QV: Quality Value
RA: Resource Allocation
RAN: Radio Access Network
ResNet: Residual Networks
RFR: Random Forest Regressor
RIS: Reconfigurable Intelligent Surfaces
RL: Reinforcement Learning
RNNs: Recurrent Neural Networks
SA: Spectrum Allocation
SAGINs: Space–Air–Ground Integrated Networks
SCF: Spectral Correlation Function
SDN: Software Defined Networking
SE: Spectral Efficiency
SIM: Spectrum Interference Management
SINR: Signal-to-Interference-Plus-Noise Ratio
SM: Spectrum Management
SMPC: Secure Multi-Party Computation
SNR: Signal-to-Noise Ratio
SS: Spectrum Sharing
STFT: Short-Time Fourier Transform
SUs: Secondary Users
SVM: Support Vector Machines
TD3: Twin Delayed Deep Deterministic Policy Gradient
TL: Transfer Learning
TNs: Terrestrial Networks
TPAAD: Two-Phase Authentication for Attack Detection
TV: Television
TVWS: TV White Spaces
V2X: Vehicle-to-Everything
VANETs: Vehicular Ad Hoc Networks
UAV: Unmanned Aerial Vehicle
URLLC: Ultra-Reliable Low-Latency Communications
VHetNets: Virtual Heterogeneous Networks
Wi-Fi: Wireless Fidelity
WNs: Wireless Networks
WRAN: Wireless Regional Area Network
WT: Wavelet Transform
WRC: World Radiocommunication Conference
XAI: Explainable AI
ZKPs: Zero-Knowledge Proofs

References

  1. Singh, S.; Anand, V. Load balancing clustering and routing for IoT-enabled wireless sensor networks. Int. J. Netw. Manag. 2023, 33, e2244. [Google Scholar] [CrossRef]
  2. dos Santos Junior, E.; Souza, R.D.; Rebelatto, J.L. Hybrid multiple access for channel allocation-aided eMBB and URLLC slicing in 5G and beyond systems. Internet Technol. Lett. 2021, 4, e294. [Google Scholar] [CrossRef]
  3. Jawad, A.T.; Maaloul, R.; Chaari, L. A comprehensive survey on 6G and beyond: Enabling technologies, opportunities of machine learning and challenges. Comput. Netw. 2023, 237, 110085. [Google Scholar] [CrossRef]
  4. Ravi, B.; Kumar, M.; Hu, Y.; Hassan, S.; Kumar, B. Stochastic modeling and performance analysis in balancing load and traffic for vehicular ad hoc networks: A review. Int. J. Netw. Manag. 2023, 33, e2224. [Google Scholar] [CrossRef]
  5. Sun, R.; Cheng, N.; Li, C.; Chen, F.; Chen, W. Knowledge-Driven Deep Learning Paradigms for Wireless Network Optimization in 6G. IEEE Netw. 2024, 38, 70–78. [Google Scholar] [CrossRef]
  6. Taleb, T.; Benzaïd, C.; Addad, R.A.; Samdanis, K. AI/ML for beyond 5G systems: Concepts, technology enablers & solutions. Comput. Netw. 2023, 237, 110044. [Google Scholar] [CrossRef]
  7. Sabur, A. Lightweight Flow-Based Policy Enforcement for SDN-Based Multi-Domain Communication. Int. J. Netw. Manag. 2025, 35, e2312. [Google Scholar] [CrossRef]
  8. Geraci, G.; López-Pérez, D.; Benzaghta, M.; Chatzinotas, S. Integrating Terrestrial and Non-Terrestrial Networks: 3D Opportunities and Challenges. IEEE Commun. Mag. 2022, 61, 42–48. [Google Scholar] [CrossRef]
  9. Çiloğlu, B.; Koç, G.B.; Shamsabadi, A.A.; Ozturk, M.; Yanikomeroglu, H. Strategic Demand-Planning in Wireless Networks: Can Generative-AI Save Spectrum and Energy? IEEE Commun. Mag. 2025, 63, 134–141. [Google Scholar] [CrossRef]
  10. Khanh, Q.V.; Hoai, N.V.; Manh, L.D.; Le, A.N.; Jeon, G. Wireless Communication Technologies for IoT in 5G: Vision, Applications, and Challenges. Wirel. Commun. Mob. Comput. 2022, 2022, 3229294. [Google Scholar] [CrossRef]
  11. Frieden, R. The evolving 5G case study in United States unilateral spectrum planning and policy. Telecommun. Policy 2020, 44, 102011. [Google Scholar] [CrossRef]
  12. Biswas, S.; Bishnu, A.; Khan, F.A.; Ratnarajah, T. In-Band Full-Duplex Dynamic Spectrum Sharing in Beyond 5G Networks. IEEE Commun. Mag. 2021, 59, 54–60. [Google Scholar] [CrossRef]
  13. Obite, F.; Usman, A.D.; Okafor, E. An overview of deep reinforcement learning for spectrum sensing in cognitive radio networks. Digit. Signal Process. 2021, 113, 103014. [Google Scholar] [CrossRef]
  14. Doshi, A.; Yerramalli, S.; Ferrari, L.; Yoo, T.; Andrews, J.G. A deep reinforcement learning framework for contention-based spectrum sharing. IEEE J. Sel. Areas Commun. 2021, 39, 2526–2540. [Google Scholar] [CrossRef]
  15. Imoize, A.L.; Obakhena, H.I.; Anyasi, F.I.; Adelabu, M.A.; Kavitha, K.; Faruk, N. Spectral Efficiency Bounds of Cell-Free Massive MIMO Assisted UAV Cellular Communication. In Proceedings of the 2022 IEEE Nigeria 4th International Conference on Disruptive Technologies for Sustainable Development (NIGERCON), Lagos, Nigeria, 5–7 April 2022; pp. 1–5. [Google Scholar]
  16. Elhachmi, J. Distributed reinforcement learning for dynamic spectrum allocation in cognitive radio-based internet of things. IET Netw. 2022, 11, 207–220. [Google Scholar] [CrossRef]
  17. He, X.; Luo, M.; Hu, Y.; Xiong, F. Data sharing mode of dispatching automation system based on distributed machine learning. Int. J. Netw. Manag. 2024, 35, e2269. [Google Scholar] [CrossRef]
  18. Nisa, N.; Khan, A.S.; Ahmad, Z.; Abdullah, J. TPAAD: Two-phase authentication system for denial of service attack detection and mitigation using machine learning in software-defined network. Int. J. Netw. Manag. 2024, 34, e2258. [Google Scholar] [CrossRef]
  19. Rahman, A.; Khan, M.S.I.; Montieri, A.; Islam, M.J.; Karim, M.R.; Hasan, M.; Kundu, D.; Nasir, M.K.; Pescapè, A. BlockSD-5GNet: Enhancing security of 5G network through blockchain-SDN with ML-based bandwidth prediction. Trans. Emerg. Telecommun. Technol. 2024, 35, e4965. [Google Scholar] [CrossRef]
  20. Saha, R.K.; Cioffi, J.M. Dynamic Spectrum Sharing for 5G NR and 4G LTE Coexistence—A Comprehensive Review. IEEE Open J. Commun. Soc. 2024, 5, 795–835. [Google Scholar] [CrossRef]
  21. Khan, A.; Ahmad, S.; Ali, I.; Hayat, B.; Tian, Y.; Liu, W. Dynamic mobility and handover management in software-defined networking-based fifth-generation heterogeneous networks. Int. J. Netw. Manag. 2025, 35, e2268. [Google Scholar] [CrossRef]
  22. Yu, P.; Zhou, F.; Zhang, X.; Qiu, X.; Kadoch, M.; Cheriet, M. Deep Learning-Based Resource Allocation for 5G Broadband TV Service. IEEE Trans. Broadcast. 2020, 66, 800–813. [Google Scholar] [CrossRef]
  23. Kurunathan, H.; Huang, H.; Li, K.; Ni, W.; Hossain, E. Machine Learning-Aided Operations and Communications of Unmanned Aerial Vehicles: A Contemporary Survey. IEEE Commun. Surv. Tutorials 2023, 26, 496–533. [Google Scholar] [CrossRef]
  24. Matinmikko-Blue, M.; Yrjola, S.; Ahokangas, P. Spectrum Management in the 6G Era: The Role of Regulation and Spectrum Sharing. In Proceedings of the 2020 2nd 6G Wireless Summit (6G SUMMIT), Levi, Finland, 17–20 March 2020; pp. 1–5. [Google Scholar]
  25. Inamdar, M.A.; Kumaraswamy, H.V. Energy efficient 5G networks: Techniques and challenges. In Proceedings of the 2020 International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 10–12 September 2020; pp. 1317–1322. [Google Scholar]
  26. Sharma, H.; Kumar, N. Deep learning based physical layer security for terrestrial communications in 5G and beyond networks: A survey. Phys. Commun. 2023, 57, 102002. [Google Scholar] [CrossRef]
  27. Cao, Y.; Lien, S.-Y.; Liang, Y.-C.; Niyato, D. Multi-Tier Deep Reinforcement Learning for Non-Terrestrial Networks. IEEE Wirel. Commun. 2024, 31, 194–201. [Google Scholar] [CrossRef]
  28. Heydarishahreza, N.; Han, T.; Ansari, N. Spectrum Sharing and Interference Management for 6G LEO Satellite-Terrestrial Network Integration. IEEE Commun. Surv. Tutorials 2024. [Google Scholar] [CrossRef]
  29. Zhang, L.; Wei, Z.; Wang, L.; Yuan, X.; Wu, H.; Xu, W. Spectrum Sharing in the Sky and Space: A Survey. Sensors 2022, 23, 342. [Google Scholar] [CrossRef]
  30. Khalid, M.; Ali, J.; Roh, B.-H. Artificial Intelligence and Machine Learning Technologies for Integration of Terrestrial in Non-Terrestrial Networks. IEEE Internet Things Mag. 2024, 7, 28–33. [Google Scholar] [CrossRef]
  31. Si, J.; Huang, R.; Li, Z.; Hu, H.; Jin, Y.; Cheng, J.; Al-Dhahir, N. When Spectrum Sharing in Cognitive Networks Meets Deep Reinforcement Learning: Architecture, Fundamentals, and Challenges. IEEE Netw. 2023, 38, 187–195. [Google Scholar] [CrossRef]
  32. Tang, C.; Chen, Y.; Chen, G.; Du, L.; Liu, H. A Dynamic and Collaborative Spectrum Sharing Strategy Based on Multi-Agent DRL in Satellite-Terrestrial Converged Networks. IEEE Trans. Veh. Technol. 2024, 74, 7969–7984. [Google Scholar] [CrossRef]
  33. Zhou, Z.; Zhang, Q.; Ge, J.; Liang, Y.-C. Hierarchical Cognitive Spectrum Sharing in Space-Air-Ground Integrated Networks. IEEE Trans. Wirel. Commun. 2024, 24, 1430–1447. [Google Scholar] [CrossRef]
  34. Mehmood Mughal, D.; Mahboob, T.; Tariq Shah, S.; Kim, S.H.; Young Chung, M. Deep learning-based spectrum sharing in next generation multi-operator cellular networks. Int. J. Commun. Syst. 2025, 38, e5964. [Google Scholar] [CrossRef]
  35. Zhao, Q.; Zou, H.; Tian, Y.; Bariah, L.; Mouhouche, B.; Bader, F.; Almazrouei, E.; Debbah, M. Artificial Intelligence-Enabled Dynamic Spectrum Management. In Intelligent Spectrum Management Towards 6G; Wiley & Sons: Hoboken, NJ, USA, 2025; pp. 73–89. [Google Scholar] [CrossRef]
  36. Brown, C.; Ghasemi, A. Evolution Toward Data-Driven Spectrum Sharing: Opportunities and Challenges. IEEE Access 2023, 11, 99680–99692. [Google Scholar] [CrossRef]
  37. Li, F.; Shen, B.; Guo, J.; Lam, K.-Y.; Wei, G.; Wang, L. Dynamic Spectrum Access for Internet-of-Things Based on Federated Deep Reinforcement Learning. IEEE Trans. Veh. Technol. 2022, 71, 7952–7956. [Google Scholar] [CrossRef]
  38. Das, S.K.; Rahman, S.; Mohjazi, L.; Imran, M.A.; Rabie, K.M. Reinforcement Learning-Based Resource Allocation for M2M Communications over Cellular Networks. In Proceedings of the 2022 IEEE Wireless Communications and Networking Conference (WCNC), Austin, TX, USA, 10–13 April 2022; pp. 1473–1478. [Google Scholar]
  39. Zhang, Y.; He, D.; He, W.; Xu, Y.; Guan, Y.; Zhang, W. Dynamic spectrum allocation by 5G base station. In Proceedings of the 2020 International Wireless Communications and Mobile Computing (IWCMC), Limassol, Cyprus, 15–19 June 2020; pp. 1463–1467. [Google Scholar]
  40. Srinivasan, M.; Kotagi, V.J.; Murthy, C.S.R. A Q-Learning Framework for User QoE Enhanced Self-Organizing Spectrally Efficient Network Using a Novel Inter-Operator Proximal Spectrum Sharing. IEEE J. Sel. Areas Commun. 2016, 34, 2887–2901. [Google Scholar] [CrossRef]
  41. Ahmad, W.S.H.M.W.; Radzi, N.A.M.; Samidi, F.S.; Ismail, A.; Abdullah, F.; Jamaludin, M.Z.; Zakaria, M.N. 5G Technology: Towards Dynamic Spectrum Sharing Using Cognitive Radio Networks. IEEE Access 2020, 8, 14460–14488. [Google Scholar] [CrossRef]
  42. Haldorai, A.; Sivaraj, J.; Nagabushanam, M.; Roberts, M.K.; Karras, D.A. Cognitive Wireless Networks Based Spectrum Sensing Strategies: A Comparative Analysis. Appl. Comput. Intell. Soft Comput. 2022, 2022, 6988847. [Google Scholar] [CrossRef]
  43. Parvini, M.; Zarif, A.H.; Nouruzi, A.; Mokari, N.; Javan, M.R.; Abbasi, B.; Ghasemi, A.; Yanikomeroglu, H. Spectrum Sharing Schemes From 4G to 5G and Beyond: Protocol Flow, Regulation, Ecosystem, Economic. IEEE Open J. Commun. Soc. 2023, 4, 464–517. [Google Scholar] [CrossRef]
  44. Zheleva, M.; Anderson, C.R.; Aksoy, M.; Johnson, J.T.; Affinnih, H.; DePree, C.G. Radio Dynamic Zones: Motivations, Challenges, and Opportunities to Catalyze Spectrum Coexistence. IEEE Commun. Mag. 2023, 61, 156–162. [Google Scholar] [CrossRef]
  45. Alkhayyat, A.; Abedi, F.; Bagwari, A.; Joshi, P.; Jawad, H.M.; Mahmood, S.N.; Yousif, Y.K. Fuzzy logic, genetic algorithms, and artificial neural networks applied to cognitive radio networks: A review. Int. J. Distrib. Sens. Netw. 2022, 18, 15501329221113508. [Google Scholar] [CrossRef]
  46. Hindia, M.N.; Qamar, F.; Ojukwu, H.; Dimyati, K.; Al-Samman, A.M.; Amiri, I.S. On platform to enable the cognitive radio over 5G networks. Wirel. Pers. Commun. 2020, 113, 1241–1262. [Google Scholar] [CrossRef]
  47. Bhattarai, S.; Park, J.-M.; Lehr, W. Dynamic Exclusion Zones for Protecting Primary Users in Database-Driven Spectrum Sharing. IEEE/ACM Trans. Netw. 2020, 28, 1506–1519. [Google Scholar] [CrossRef]
  48. Wei, M.; Li, X.; Xie, W.; Hu, C. Practical Performance Analysis of Interference in DSS System. Appl. Sci. 2023, 13, 1233. [Google Scholar] [CrossRef]
  49. Al-Dulaimi, O.M.K.; Al-Dulaimi, M.K.H.; Alexandra, M.O.; Al-Dulaimi, A.M.K. Performing strategic spectrum sensing study for the cognitive radio networks. In Proceedings of the 2022 International Conference on Communications, Information, Electronic and Energy Systems (CIEES), Veliko Tarnovo, Bulgaria, 24–26 November 2022; pp. 1–6. [Google Scholar]
  50. Perera, L.; Ranaweera, P.; Kusaladharma, S.; Wang, S.; Liyanage, M. A Survey on Blockchain for Dynamic Spectrum Sharing. IEEE Open J. Commun. Soc. 2024, 5, 1753–1802. [Google Scholar] [CrossRef]
  51. Barb, G.; Alexa, F.; Otesteanu, M. Dynamic Spectrum Sharing for Future LTE-NR Networks. Sensors 2021, 21, 4215. [Google Scholar] [CrossRef]
  52. Ivanov, A.; Tonchev, K.; Poulkov, V.; Manolova, A. Probabilistic Spectrum Sensing Based on Feature Detection for 6G Cognitive Radio: A Survey. IEEE Access 2021, 9, 116994–117026. [Google Scholar] [CrossRef]
  53. El Azaly, N.M.; Badran, E.F.; Kheirallah, H.N.; Farag, H.H. Performance analysis of centralized dynamic spectrum access via channel reservation mechanism in cognitive radio networks. Alex. Eng. J. 2021, 60, 1677–1688. [Google Scholar] [CrossRef]
  54. Kakkavas, G.; Tsitseklis, K.; Karyotis, V.; Papavassiliou, S. A Software Defined Radio Cross-Layer Resource Allocation Approach for Cognitive Radio Networks: From Theory to Practice. IEEE Trans. Cogn. Commun. Netw. 2020, 6, 740–755. [Google Scholar] [CrossRef]
  55. Babu, R.G.; Amudha, V.; Karthika, P. Architectures and Protocols for Next-Generation Cognitive Networking. Mach. Learn. Cogn. Comput. Mob. Commun. Wirel. Netw. 2020, 155–177. [Google Scholar] [CrossRef]
  56. Zuo, J.; Joe-Wong, C. Combinatorial multi-armed bandits for resource allocation. In Proceedings of the 2021 55th Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 24–26 March 2021; pp. 1–4. [Google Scholar]
  57. Kang, S.; Joo, C. Low-Complexity Learning for Dynamic Spectrum Access in Multi-User Multi-Channel Networks. IEEE Trans. Mob. Comput. 2020, 20, 3267–3281. [Google Scholar] [CrossRef]
  58. Moos, J.; Hansel, K.; Abdulsamad, H.; Stark, S.; Clever, D.; Peters, J. Robust Reinforcement Learning: A Review of Foundations and Recent Advances. Mach. Learn. Knowl. Extr. 2022, 4, 13. [Google Scholar] [CrossRef]
  59. Trabelsi, N.; Fourati, L.C.; Chen, C.S. Interference management in 5G and beyond networks: A comprehensive survey. Comput. Netw. 2023, 239, 110159. [Google Scholar] [CrossRef]
  60. Mughees, A.; Tahir, M.; Sheikh, M.A.; Ahad, A. Towards Energy Efficient 5G Networks Using Machine Learning: Taxonomy, Research Challenges, and Future Research Directions. IEEE Access 2020, 8, 187498–187522. [Google Scholar] [CrossRef]
  61. Alhammadi, A.; Shayea, I.; El-Saleh, A.A.; Azmi, M.H.; Ismail, Z.H.; Kouhalvandi, L.; Saad, S.A. Artificial intelligence in 6G wireless networks: Opportunities, applications, and challenges. Int. J. Intell. Syst. 2024, 2024, 8845070. [Google Scholar] [CrossRef]
  62. Li, F.; Nie, W.; Lam, K.-Y.; Wang, L. Network traffic prediction based on PSO-LightGBM-TM. Comput. Netw. 2024, 254, 110810. [Google Scholar] [CrossRef]
  63. Pisa, P.S.; Costa, B.; Gonçalves, J.A.; de Medeiros, D.S.V.; Mattos, D.M.F. A Private Strategy for Workload Forecasting on Large-Scale Wireless Networks. Information 2021, 12, 488. [Google Scholar] [CrossRef]
  64. Fourati, H.; Maaloul, R.; Chaari, L.; Jmaiel, M. Comprehensive survey on self-organizing cellular network approaches applied to 5G networks. Comput. Netw. 2021, 199, 108435. [Google Scholar] [CrossRef]
  65. Jiang, W.; Zhang, Y.; Han, H.; Huang, Z.; Li, Q.; Mu, J. Mobile traffic prediction in consumer applications: A multimodal deep learning approach. IEEE Trans. Consum. Electron. 2024, 70, 3425–3435. [Google Scholar] [CrossRef]
  66. Zhang, G.; Zhou, H.; Wang, C.; Xue, H.; Wang, J.; Wan, H. Forecasting time series albedo using NARnet based on EEMD decomposition. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3544–3557. [Google Scholar] [CrossRef]
  67. Sarma, S.S.; Hazra, R.; Mukherjee, A. Symbiosis Between D2D Communication and Industrial IoT for Industry 5.0 in 5G mm-Wave Cellular Network: An Interference Management Approach. IEEE Trans. Ind. Inform. 2021, 18, 5527–5536. [Google Scholar] [CrossRef]
  68. Albinsaid, H.; Singh, K.; Biswas, S.; Li, C.-P. Multi-Agent Reinforcement Learning-Based Distributed Dynamic Spectrum Access. IEEE Trans. Cogn. Commun. Netw. 2021, 8, 1174–1185. [Google Scholar] [CrossRef]
  69. Albinsaid, H.; Singh, K.; Biswas, S.; Li, C.-P.; Alouini, M.-S. Block Deep Neural Network-Based Signal Detector for Generalized Spatial Modulation. IEEE Commun. Lett. 2020, 24, 2775–2779. [Google Scholar] [CrossRef]
  70. Pan, L.; Rashid, T.; Peng, B.; Huang, L.; Whiteson, S. Regularized softmax deep multi-agent q-learning. Adv. Neural Inf. Process. Syst. 2021, 34, 1365–1377. [Google Scholar]
  71. Naous, T.; Itani, M.; Awad, M.; Sharafeddine, S. Reinforcement Learning in the Sky: A Survey on Enabling Intelligence in NTN-Based Communications. IEEE Access 2023, 11, 19941–19968. [Google Scholar] [CrossRef]
  72. Alabi, C.A.; Idakwo, M.A.; Imoize, A.L.; Adamu, T.; Sur, S.N. AI for spectrum intelligence and adaptive resource management. In Artificial Intelligence for Wireless Communication Systems; CRC Press: Boca Raton, FL, USA, 2024; pp. 57–83. [Google Scholar]
  73. Ma, J.; Ushiku, Y.; Sagara, M. The Effect of Improving Annotation Quality on Object Detection Datasets: A Preliminary Study. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, LA, USA, 19–20 June 2022; pp. 4849–4858. [Google Scholar]
  74. Ienca, M. Don’t pause giant AI for the wrong reasons. Nat. Mach. Intell. 2023, 5, 470–471. [Google Scholar] [CrossRef]
  75. Kocoń, J.; Cichecki, I.; Kaszyca, O.; Kochanek, M.; Szydło, D.; Baran, J.; Bielaniewicz, J.; Gruza, M.; Janz, A.; Kanclerz, K.; et al. ChatGPT: Jack of all trades, master of none. Inf. Fusion 2023, 99, 101861. [Google Scholar] [CrossRef]
  76. Ozpoyraz, B.; Dogukan, A.T.; Gevez, Y.; Altun, U.; Basar, E. Deep Learning-Aided 6G Wireless Networks: A Comprehensive Survey of Revolutionary PHY Architectures. IEEE Open J. Commun. Soc. 2022, 3, 1749–1809. [Google Scholar] [CrossRef]
  77. Chen, Z.; Xu, Y.-Q.; Wang, H.; Guo, D. Deep STFT-CNN for Spectrum Sensing in Cognitive Radio. IEEE Commun. Lett. 2020, 25, 864–868. [Google Scholar] [CrossRef]
  78. El-Shafai, W.; Fawzi, A.; Sedik, A.; Zekry, A.; El-Banby, G.M.; Khalaf, A.A.M.; El-Samie, F.E.A.; Abd-Elnaby, M. Convolutional neural network model for spectrum sensing in cognitive radio systems. Int. J. Commun. Syst. 2022, 35, e5072. [Google Scholar] [CrossRef]
  79. Cai, L.; Cao, K.; Wu, Y.; Zhou, Y. Spectrum Sensing Based on Spectrogram-Aware CNN for Cognitive Radio Network. IEEE Wirel. Commun. Lett. 2022, 11, 2135–2139. [Google Scholar] [CrossRef]
  80. Wang, Q.; Guo, B. CNN-SVM Spectrum Sensing in Cognitive Radio Based on Signal Covariance Matrix. J. Phys. Conf. Ser. 2022, 2395, 012052. [Google Scholar] [CrossRef]
  81. Duan, Y.; Huang, F.; Xu, L.; Gulliver, T.A. Intelligent spectrum sensing algorithm for cognitive internet of vehicles based on KPCA and improved CNN. Peer-to-Peer Netw. Appl. 2023, 16, 2202–2217. [Google Scholar] [CrossRef]
  82. Chae, K.; Kim, Y. DS2MA: A Deep Learning-Based Spectrum Sensing Scheme for a Multi-Antenna Receiver. IEEE Wirel. Commun. Lett. 2023, 12, 952–956. [Google Scholar] [CrossRef]
  83. Suriya, M.; Sumithra, M.G. Enhancing cooperative spectrum sensing in flying cell towers for disaster management using convolutional neural networks. In Proceedings of the EAI International Conference on Big Data Innovation for Sustainable Cognitive Computing: BDCC 2018, Coimbatore, India, 13–15 December 2018; pp. 181–190. [Google Scholar]
  84. Shafiq, M.; Gu, Z. Deep Residual Learning for Image Recognition: A Survey. Appl. Sci. 2022, 12, 8972. [Google Scholar] [CrossRef]
  85. Chandra, S.S.; Upadhye, A.; Saravanan, P.; Gurugopinath, S.; Muralishankar, R. Deep Neural Network Architectures for Spectrum Sensing Using Signal Processing Features. In Proceedings of the 2021 IEEE International Conference on Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER), Nitte, India, 19–20 November 2021; pp. 129–134. [Google Scholar]
  86. Ren, X.; Mosavat-Jahromi, H.; Cai, L.; Kidston, D. Spatio-Temporal Spectrum Load Prediction Using Convolutional Neural Network and ResNet. IEEE Trans. Cogn. Commun. Netw. 2021, 8, 502–513. [Google Scholar] [CrossRef]
  87. Gai, J.; Zhang, L.; Wei, Z. Spectrum Sensing Based on STFT-ImpResNet for Cognitive Radio. Electronics 2022, 11, 2437. [Google Scholar] [CrossRef]
  88. Zhen, P.; Zhang, B.; Chen, Z.; Guo, D.; Ma, W. Spectrum Sensing Method Based on Wavelet Transform and Residual Network. IEEE Wirel. Commun. Lett. 2022, 11, 2517–2521. [Google Scholar] [CrossRef]
  89. Xiao, J.; Zhou, Z. Research progress of RNN language model. In Proceedings of the 2020 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA), Dalian, China, 27–29 June 2020; pp. 1285–1288. [Google Scholar]
  90. Sherstinsky, A. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef]
  91. Roopa, V.; Pradhan, H.S. Deep Learning Based Intelligent Spectrum Sensing in Cognitive Radio Networks. IETE J. Res. 2024, 70, 8425–8445. [Google Scholar] [CrossRef]
  92. Bkassiny, M. A Deep Learning-based Signal Classification Approach for Spectrum Sensing using Long Short-Term Memory (LSTM) Networks. In Proceedings of the 2022 6th International Conference on Information Technology, Information Systems and Electrical Engineering (ICITISEE), Yogyakarta, Indonesia, 13–14 December 2022; pp. 667–672. [Google Scholar]
  93. Chen, W.; Wu, H.; Ren, S. CM-LSTM Based Spectrum Sensing. Sensors 2022, 22, 2286. [Google Scholar] [CrossRef]
  94. Wang, L.; Hu, J.; Jiang, R.; Chen, Z. A Deep Long-Term Joint Temporal–Spectral Network for Spectrum Prediction. Sensors 2024, 24, 1498. [Google Scholar] [CrossRef]
  95. Arunachalam, G.; SureshKumar, P. Optimized Deep Learning Model for Effective Spectrum Sensing in Dynamic SNR Scenario. Comput. Syst. Sci. Eng. 2023, 45, 1279–1294. [Google Scholar] [CrossRef]
  96. Al Daweri, M.S.; Abdullah, S.; Ariffin, K.A.Z. A Migration-Based Cuttlefish Algorithm With Short-Term Memory for Optimization Problems. IEEE Access 2020, 8, 70270–70292. [Google Scholar] [CrossRef]
  97. Zhang, Y.; Luo, Z. A Review of Research on Spectrum Sensing Based on Deep Learning. Electronics 2023, 12, 4514. [Google Scholar] [CrossRef]
  98. Nasser, A.; Chaitou, M.; Mansour, A.; Yao, K.C.; Charara, H. A Deep Neural Network Model for Hybrid Spectrum Sensing in Cognitive Radio. Wirel. Pers. Commun. 2021, 118, 281–299. [Google Scholar] [CrossRef]
  99. Wang, Y.; Xu, W.; Qin, Z.; Zhang, Y.; Gao, H.; Pan, M.; Lin, J. Deep Neural Network-Based Robust Spectrum Sensing: Exploiting Phase Difference Distribution. In Proceedings of the ICC 2021—IEEE International Conference on Communications, Montreal, QC, Canada, 4–23 June 2021; pp. 1–7. [Google Scholar]
  100. Zhang, X.; Ma, Y.; Liu, Y.; Wu, S.; Jiao, J.; Gao, Y.; Zhang, Q. Robust DNN-Based Recovery of Wideband Spectrum Signals. IEEE Wirel. Commun. Lett. 2023, 12, 1712–1715. [Google Scholar] [CrossRef]
  101. Zhao, R.; Ruan, Y.; Li, Y.; Li, T.; Zhang, R. CCD-GAN for Domain Adaptation in Time-Frequency Localization-Based Wideband Spectrum Sensing. IEEE Commun. Lett. 2023, 27, 2521–2525. [Google Scholar] [CrossRef]
  102. Li, X.; Hu, Z.; Shen, C.; Wu, H.; Zhao, Y. TFF_aDCNN: A Pre-Trained Base Model for Intelligent Wideband Spectrum Sensing. IEEE Trans. Veh. Technol. 2023, 72, 12912–12926. [Google Scholar] [CrossRef]
  103. Huang, C.; Zhou, H.; Zaïane, O.R.; Mou, L.; Li, L. Non-autoregressive Translation with Layer-Wise Prediction and Deep Supervision. Proc. AAAI Conf. Artif. Intell. 2022, 36, 10776–10784. [Google Scholar] [CrossRef]
  104. Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 2020, 21, 1–67. [Google Scholar]
  105. Rubenstein, P.K.; Asawaroengchai, C.; Nguyen, D.D.; Bapna, A.; Borsos, Z.; Quitry, F.D.C.; Chen, P.; Badawy, D.E.; Han, W.; Kharitonov, E.; et al. Audiopalm: A large language model that can speak and listen. arXiv 2023, arXiv:2306.12925. [Google Scholar]
  106. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. Llama: Open and efficient foundation language models. arXiv 2023, arXiv:2302.13971. [Google Scholar]
  107. Szott, S.; Kosek-Szott, K.; Gawlowicz, P.; Gomez, J.T.; Bellalta, B.; Zubow, A.; Dressler, F. Wi-Fi Meets ML: A Survey on Improving IEEE 802.11 Performance With Machine Learning. IEEE Commun. Surv. Tutor. 2022, 24, 1843–1893. [Google Scholar] [CrossRef]
  108. Yang, B.; Cao, X.; Huang, C.; Guan, Y.L.; Yuen, C.; Di Renzo, M.; Niyato, D.; Debbah, M.; Hanzo, L. Spectrum-learning-aided reconfigurable intelligent surfaces for “green” 6G networks. IEEE Netw. 2022, 35, 20–26. [Google Scholar] [CrossRef]
  109. Bosso, C.; Sen, P.; Cantos-Roman, X.; Parisi, C.; Thawdar, N.; Jornet, J.M. Ultrabroadband Spread Spectrum Techniques for Secure Dynamic Spectrum Sharing Above 100 GHz Between Active and Passive Users. In Proceedings of the 2021 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), Los Angeles, CA, USA, 13–15 December 2021; pp. 45–52. [Google Scholar]
  110. Tekbıyık, K.; Akbunar, O.; Ekti, A.R.; Gorcin, A.; Kurt, G.K.; Qaraqe, K.A. Spectrum Sensing and Signal Identification With Deep Learning Based on Spectral Correlation Function. IEEE Trans. Veh. Technol. 2021, 70, 10514–10527. [Google Scholar] [CrossRef]
  111. Sharma, S.; Arjunan, T. Natural language processing for detecting anomalies and intrusions in unstructured cyber-security data. Int. J. Inf. Cybersecur. 2023, 7, 1–24. [Google Scholar]
  112. Ali, Z.; Tiberti, W.; Marotta, A.; Cassioli, D. Empowering Network Security: BERT Transformer Learning Approach and MLP for Intrusion Detection in Imbalanced Network Traffic. IEEE Access 2024, 12, 137618–137633. [Google Scholar] [CrossRef]
  113. Kuznetsov, O.; Frontoni, E.; Kryvinska, N.; Smirnov, O.; Imoize, A.L. Computational Modeling of Enhanced Spread Spectrum Codes for Asynchronous Wireless Communication. In Computational Modeling and Simulation of Advanced Wireless Communication Systems; CRC Press: Boca Raton, FL, USA, 2024; pp. 403–447. [Google Scholar]
  114. Ali, S.; Rehman, S.U.; Imran, A.; Adeem, G.; Iqbal, Z.; Kim, K.-I. Comparative Evaluation of AI-Based Techniques for Zero-Day Attacks Detection. Electronics 2022, 11, 3934. [Google Scholar] [CrossRef]
  115. Wu, Y.; Zou, B.; Cao, Y. Current Status and Challenges and Future Trends of Deep Learning-Based Intrusion Detection Models. J. Imaging 2024, 10, 254. [Google Scholar] [CrossRef]
  116. Drîngă, B.; Elhajj, M. Performance and Security Analysis of Privacy-Preserved IoT Applications. In Proceedings of the 2023 IEEE International Conference on Communication, Networks and Satellite (COMNETSAT), Malang, Indonesia, 23–25 November 2023; pp. 549–556. [Google Scholar]
  117. Stahl, B.C.; Antoniou, J.; Bhalla, N.; Brooks, L.; Jansen, P.; Lindqvist, B.; Kirichenko, A.; Marchal, S.; Rodrigues, R.; Santiago, N.; et al. A systematic review of artificial intelligence impact assessments. Artif. Intell. Rev. 2023, 56, 12799–12831. [Google Scholar] [CrossRef]
  118. Shafin, R.; Liu, L.; Chandrasekhar, V.; Chen, H.; Reed, J.; Zhang, J.C. Artificial Intelligence-Enabled Cellular Networks: A Critical Path to Beyond-5G and 6G. IEEE Wirel. Commun. 2020, 27, 212–217. [Google Scholar] [CrossRef]
  119. Moustafa, N. A new distributed architecture for evaluating AI-based security systems at the edge: Network TON_IoT datasets. Sustain. Cities Soc. 2021, 72, 102994. [Google Scholar] [CrossRef]
  120. Olabanji, S.O.; Marquis, Y.A.; Adigwe, C.S.; Ajayi, S.A.; Oladoyinbo, T.O.; Olaniyi, O.O. AI-Driven Cloud Security: Examining the Impact of User Behavior Analysis on Threat Detection. Asian J. Res. Comput. Sci. 2024, 17, 57–74. [Google Scholar] [CrossRef]
  121. Khan, M.J.; Khan, M.A.; Beg, A.; Malik, S.; El-Sayed, H. An overview of the 3GPP identified Use Cases for V2X Services. Procedia Comput. Sci. 2022, 198, 750–756. [Google Scholar] [CrossRef]
  122. Walia, J.S.; Hämmäinen, H.; Kilkki, K.; Flinck, H.; Yrjölä, S.; Matinmikko-Blue, M. A Virtualization Infrastructure Cost Model for 5G Network Slice Provisioning in a Smart Factory. J. Sens. Actuator Netw. 2021, 10, 51. [Google Scholar] [CrossRef]
  123. Liu, J.; Shu, L.; Lu, X.; Liu, Y. Survey of Intelligent Agricultural IoT Based on 5G. Electronics 2023, 12, 2336. [Google Scholar] [CrossRef]
  124. Zhou, F.; Yu, P.; Feng, L.; Qiu, X.; Wang, Z.; Meng, L.; Kadoch, M.; Gong, L.; Yao, X. Automatic Network Slicing for IoT in Smart City. IEEE Wirel. Commun. 2020, 27, 108–115. [Google Scholar] [CrossRef]
  125. Çiloğlu, B.; Koç, G.B.; Ozturk, M.; Yanikomeroglu, H. Cell Switching in HAPS-Aided Networking: How the Obscurity of Traffic Loads Affects the Decision. IEEE Trans. Veh. Technol. 2024, 73, 17782–17787. [Google Scholar] [CrossRef]
  126. Shamsabadi, A.A.; Yadav, A.; Yanikomeroglu, H. Enhancing Next-Generation Urban Connectivity: Is the Integrated HAPS-Terrestrial Network a Solution? IEEE Commun. Lett. 2024, 28, 1112–1116. [Google Scholar] [CrossRef]
  127. Datasets. IEEE Dataport. Available online: https://ieee-dataport.org/datasets (accessed on 20 February 2025).
  128. Zarif, A.H. AoI Minimization in Energy Harvesting and Spectrum Sharing Enabled 6G Networks. IEEE Trans. Green Commun. Netw. 2022, 6, 2043–2054. [Google Scholar] [CrossRef]
  129. Tekbiyik, K.; Akbunar, Ö.; Ekti, A.R.; Görçin, A.; Kurt, G.K. COSINE: Cellular Communication SIgNal Dataset. IEEE Dataport, 20 January 2020. Available online: https://ieee-dataport.org/open-access/cosine-cellular-communication-signal-dataset (accessed on 22 May 2025).
  130. KU Leuven LTE Dataset. Available online: https://www.esat.kuleuven.be/wavecorearenberg/research/NetworkedSystems/projects/massive-mimo (accessed on 26 May 2025).
  131. Everett, E.; Shepard, C.; Zhong, L.; Sabharwal, A. SoftNull: Many-Antenna Full-Duplex Wireless via Digital Beamforming. IEEE Trans. Wirel. Commun. 2016, 15, 8077–8092. [Google Scholar] [CrossRef]
  132. El-Hajj, M. Enhancing Communication Networks in the New Era with Artificial Intelligence: Techniques, Applications, and Future Directions. Network 2025, 5, 1. [Google Scholar] [CrossRef]
  133. Alabi, C.A.; Imoize, A.L.; Giwa, M.A.; Faruk, N.; Tersoo, S.T.; Ehime, A.E. Artificial Intelligence in Spectrum Management: Policy and Regulatory Considerations. In Proceedings of the 2023 2nd International Conference on Multidisciplinary Engineering and Applied Science (ICMEAS), Abuja, Nigeria, 1–3 November 2023; pp. 1–6. [Google Scholar]
  134. Gupta, A.; Kausar, R.; Tanwar, S.; Alabdulatif, A.; Vimal, V.; Aluvala, S. Efficient spectrum sharing in 5G and beyond network: A survey. Telecommun. Syst. 2025, 88, 1–37. [Google Scholar] [CrossRef]
  135. Panda, S.B.; Swain, P.K.; Imoize, A.L.; Tripathy, S.S.; Lee, C. A Robust Spectrum Allocation Framework Towards Inference Management in Multichannel Cognitive Radio Networks. Int. J. Commun. Syst. 2025, 38, e6057. [Google Scholar] [CrossRef]
  136. Alozie, E.; Faruk, N.; Oloyede, A.; Sowande, O.; Imoize, A.; Abdulkarim, A. Intelligent process of spectrum handoff in cognitive radio network. SLU J. Sci. Technol. 2022, 4, 205. [Google Scholar] [CrossRef]
  137. Behera, J.R.; Imoize, A.L.; Singh, S.S.; Tripathy, S.S.; Bebortta, S. Optimizing Priority Queuing Systems with Server Reservation and Temporal Blocking for Cognitive Radio Networks. Telecom 2024, 5, 21. [Google Scholar] [CrossRef]
  138. Bovenzi, G.; Cerasuolo, F.; Ciuonzo, D.; Di Monda, D.; Guarino, I.; Montieri, A.; Persico, V.; Pescapé, A. Mapping the Landscape of Generative AI in Network Monitoring and Management. IEEE Trans. Netw. Serv. Manag. 2025, 22, 2441–2472. [Google Scholar] [CrossRef]
Figure 1. Distributed DSA in dual licensed bands.
Figure 2. Different classifications of DSA.
Figure 3. Overview of DSA strategies in cognitive wireless communication systems.
Figure 4. Enabling LTE-to-NR transition via efficient spectrum refarming.
Figure 5. DSA for an efficient LTE to NR transition.
Figure 6. Classification of spectrum access approaches in wireless communication systems.
Figure 7. A cognitive radio loop illustrating how AI has emerged as a powerful tool to maximize spectrum utilization by predicting usage patterns and enhancing RA strategies.
Figure 8. Process diagram of ML techniques.
Figure 9. Q-learning-based approach for DSA.
Figure 10. DQN-based framework for DSA.
Figure 11. DDPG-based framework for DSA.
Figure 12. Hierarchical policy-based agent–environment interaction loop.
Figure 13. CNN framework.
Figure 14. Framework of the residual module.
Figure 15. Internal structure and variable interactions in LSTM.
Figure 16. AI-based spectrum sensing framework.
Figure 17. AI-driven traffic management framework integrating base stations, users, and a centralized AI optimization center.
Figure 18. An AI-integrated IoT-edge architecture comprising smart devices (e.g., sensors, wearables), local edge computing nodes for immediate processing, and embedded AI security mechanisms.
Figure 19. Smart security framework for cloud communication using AI.
Figure 20. Application of GenAI for traffic shaping in future WNs.
Figure 21. Critical lessons learned from the application of AI in dynamic spectrum access.
Table 1. Summary of related work in spectrum management (SM) research. Each entry lists the reference, methodology, findings, limitations, and future scope.
[7] Methodology: Proposed the lightweight policy enforcement and evaluation system (LPEES), a framework that confines the border gateway protocol (BGP) to the control plane, enabling SDN-based data plane packet switching. Implemented a trust-based routing policy and ensured conflict-free flow rules. Findings: 27 Gbps throughput, a 22.7% improvement over traditional BGP's 22 Gbps, and reduced communication delay by an average of 17%. In a 32-domain system, LPEES's control plane converged in about 52 s, whereas the traditional approach required 120 s (a 56% improvement), and conflicts in flow rules were detected within 9 milliseconds for 1400 rules. Limitations: Potential challenges include ensuring compatibility with existing infrastructures, managing policy enforcement complexity across domains, and maintaining security and privacy in inter-domain communications. Future scope: Plans to enhance LPEES by incorporating formal models for traffic engineering and conducting further evaluations to assess how various communication parameters affect trust between domains.
[9] Methodology: Strategic demand planning using GenAI for demand labeling, shaping, and rescheduling in WNs. Findings: GenAI enhances spectrum and energy efficiency by compressing and converting content, optimizing user association, load balancing, and interference management. Limitations: Implementation feasibility, AI hardware requirements, and real-time adaptability. Future scope: Expanding GenAI applications in WNs, optimizing AI-based demand-shaping for large-scale deployments.
[10] Methodology: Survey on wireless communication technologies for IoT in 5G, covering vision, applications, and challenges. Findings: IoT in 5G enables massive connectivity with low latency, supporting applications like smart cities and intelligent transportation. Technologies like SigFox, Long Range (LoRa), Wireless Fidelity (Wi-Fi), and Long Range Wide Area Network (LoRaWAN) offer extensive coverage and low energy consumption. Limitations: Security vulnerabilities in IoT devices, gateways, edge, and cloud servers; high energy consumption. Future scope: Research on security-aware IoT frameworks, energy-efficient communication, and cloud-edge hybrid solutions for optimized RA.
[11] Methodology: Case study on U.S. unilateral 5G spectrum planning and its impact on global SA. Findings: The U.S. Federal Communications Commission (FCC) has aggressively auctioned 5G spectrum without International Telecommunication Union (ITU) consensus, prioritizing national interests over global coordination. This strategy provides short-term benefits but risks long-term industry challenges. Limitations: Potential ITU deadlock on spectrum planning, trade disputes, retaliation from other nations, and compatibility issues for wireless devices. Future scope: A balanced approach between national spectrum policies and international coordination is needed to prevent market fragmentation and ensure global interoperability.
[12] Methodology: Survey existing dynamic spectrum sharing (DSS) approaches, propose an in-band full-duplex (IBFD) CBRS network architecture, and design joint beamformers for interference mitigation; numerical analysis of IBFD vs. half duplex (HD) CBRS systems. Findings: IBFD-assisted CBRS improves the performance of priority access license (PAL) and general authorized access (GAA) users while reducing interference toward radar systems compared to half-duplex solutions. The proposed two-step beamformer design enhances detection probability while maintaining QoS constraints. Limitations: Trade-offs between radar/mobile broadband network (MBN) transmit power, antenna count, SI cancellation, detection probability, and QoS requirements; complexity of IBFD implementation and real-world integration with DSS. Future scope: Further research on IBFD implementation in DSS, optimization of beamforming techniques, and addressing interference management for large-scale networks.
[13] Methodology: Surveyed DRL applications in spectrum sensing for CRNs, proposing a theoretical model for cooperative spectrum sensing using DRL to enhance detection. Findings: DRL can effectively address noise uncertainty and reduce reliance on prior knowledge of PUs, potentially overcoming limitations of traditional spectrum sensing methods. Limitations: Noted challenges include the complexity of DRL models, the need for extensive training data, and the difficulty in real-time implementation within dynamic radio environments. Future scope: Encouraged future research to focus on developing efficient DRL algorithms tailored for spectrum sensing, improving model training techniques, and exploring real-world deployments in CRNs.
[14] Methodology: Formulated decentralized medium access as a partially observable Markov decision process; developed a distributed DRL algorithm with recurrent Q-learning for base stations. Findings: The proposed framework achieves proportional fairness in throughput, matching the performance of adaptive energy detection (ED) thresholds, and is robust to channel fading. Limitations: Deploying individual deep Q-networks (DQNs) at each base station without parameter sharing, addressing scalability and convergence in large networks. Future scope: Suggested extending the framework to other decision-making problems like rate control, beam selection, and coordinated scheduling in WNs.
[15] Methodology: Utilized stochastic modeling to analyze load balancing and traffic distribution in vehicular ad hoc networks (VANETs). Applied queueing theory and probabilistic models for network performance assessment. Findings: Stochastic models improve load balance, reduce congestion, and enhance QoS in VANETs. Found that mobility-aware stochastic techniques optimize network RA. Limitations: High computational complexity in real-time implementations. Limited evaluation in heterogeneous VANET environments. Future scope: Enhancing stochastic models for real-time applications and integrating AI-driven mobility prediction for better load balancing and congestion control.
[16] Methodology: Deep multi-user RL using deep neural networks (DNN), Q-learning, and cooperative multi-agent systems for CR-based IoT. Findings: Enhanced spectrum utilization, improved user satisfaction, and reduced network interference. Limitations: Preliminary implementation lacks security considerations and scalability to larger networks. Future scope: Integration with 6G CRNs, blockchain for security, fog computing for IoT, and quantum computing for large-scale data processing.
[17] Methodology: Multicast source-based tree (MSBT) for point-to-multipoint transmissions; cloud scheduling with AI and big data. Findings: Improved efficiency in large data transfers; concurrent data reception enhances scheduling performance. Limitations: Scalability and adaptability across different data center environments have not been fully explored. Future scope: Extend to diverse applications, optimize real-world deployment, and enhance communication protocols.
[18] Methodology: Two-phase authentication for attack detection (TPAAD) using SVM and KNN on the CICDoS 2017 dataset in an SDN environment. Findings: Improved DoS attack detection with reduced false positives, lower CPU utilization, optimized control channel BW, and better packet delivery ratio. Limitations: Tested only in a simulated SDN environment, not validated on real networks. Future scope: Implement and evaluate in a real-world SDN environment.
[19] Methodology: BlockSD-5GNet, a blockchain-SDN-network functions virtualization (NFV)-based 5G security framework with ML-based BW prediction (using a random forest regressor (RFR)) in an IoT scenario. Findings: Improved BW estimation, enhanced security against threats, better RA, and robustness against failures. Limitations: Limited scalability due to fewer nodes, reliance on traditional ML models, lack of real-world implementation, and blockchain overhead. Future scope: Implement real-world deployment, integrate advanced ML/DL models, enhance blockchain consensus mechanisms, and optimize energy consumption for IoT devices.
[20] Methodology: Literature review and comparative analysis of DSS for LTE-NR coexistence. Findings: Identifies key DSS challenges, deployment options, and the 3rd Generation Partnership Project (3GPP) standardization efforts; discusses interference, mixed numerology, and backward compatibility issues. Limitations: System complexity, regulatory challenges, interference, and increased power consumption. Future scope: Enhancing DSS flexibility, optimizing RA, improving scheduling, reducing control overhead, and addressing cyclic redundancy synchronization (CRS) interference.
[21] Methodology: An SDN-based cell selection scheme using linear programming (LP) and multi-attribute decision-making for optimized handover management in 5G/beyond-5G (B5G) heterogeneous networks (HetNets). Findings: Reduced handovers by 39%, minimized the ping-pong effect, improved system throughput, and enhanced load balancing. Limitations: Focused only on LP optimization, lacking predictive analytics; real-world deployment challenges not addressed. Future scope: Plans to integrate ML with LP for better mobility management, improved throughput, and reduced latency.
[22] Methodology: Proposed a DL-based RA framework for 5G television (TV) broadcasting using long short-term memory (LSTM) for demand prediction and DRL with convex optimization for BW and PA. Findings: Improved energy efficiency while maintaining QoS, accurate multicast service demand prediction, and optimized power and BW allocation. Limitations: Limited by the complexity of DL models and the need for improved demand prediction accuracy. Future scope: Enhance the DL model with more layers and explore 5G techniques like millimeter-wave (mmWave) beamforming and SDN/NFV for better TV service delivery.
[23] Methodology: Survey of ML techniques in unmanned aerial vehicle (UAV) operations and communications, categorizing ML applications into feature extraction, environment modeling, planning, and control. Analyzed existing research and identified gaps. Findings: ML techniques enhance UAV automation, improving feature extraction, prediction, planning, and control. CNN is widely used for image processing, while DRL is effective for UAV control and scheduling. Integration of ML models is increasing. Limitations: Lack of end-to-end ML-based UAV solutions. Security, reliability, and trustworthiness of ML applications remain unresolved challenges. Future scope: Future research should focus on developing an end-to-end ML framework for UAV operations and improving security, reliability, and trust for UAV automation.
[24]Analysis of SM in 5G and its evolution towards 6G, focusing on regulatory frameworks and SS models.6G will require complex SM due to diverse frequency bands and increased SS. Localized and unlicensed spectrum use will grow alongside traditional mobile networks.Current regulatory frameworks may not keep up with rapid technological advancements, posing challenges for dynamic SA.Research should explore adaptive SM models for 6G, integrating dynamic sharing mechanisms and flexible regulatory approaches.
[26]Comprehensive survey on DL-based physical layer security (PLS) techniques for 5G and beyond networks, analyzing threats, DL applications, and performance metrics.DL enhances PLS by detecting attacks, improving physical layer authentication (PLA), secure beamforming, and automatic modulation classification (AMC). DL-based models can predict evolving security threats.Existing DL-based PLS techniques need improvements in real-time adaptability, robustness to adversarial attacks, and computational efficiency.Future research should focus on integrating DL with advanced wireless technologies, such as non-orthogonal multiple access (NOMA), massive multiple-input multiple-output (MIMO), mmWave, and quantum communications, for enhanced security.
[27]Developed a multi-tier DRL framework for NTNs, optimizing SA, trajectory planning, and user association across space, air, and ground tiers.Demonstrated improved overall throughput, reduced signaling overhead, and enhanced handover performance in NTNs through adaptive DRL configurations.High computational complexity, challenges in orchestration across different tiers, and a lack of a unified optimization approach.Integrating advanced learning techniques like split learning for distributed computing and adversarial learning for enhanced model training in dynamic NTN environments.
[28]Explores SS and interference management in low Earth orbit (LEO) satellite-TNs using cognitive SS, exclusion zones, beam-power optimization, reconfigurable intelligent surfaces (RIS), and AI-driven techniques.Enhances spectrum efficiency, interference control, and adaptability through various mitigation strategies.High computational complexity, regulatory issues, real-time processing constraints, and RIS/AI implementation challenges.Integrating large language models (LLMs) with DRL for adaptive SM, orthogonal time frequency space (OTFS) for link stability, AI for interference detection, and RIS for optimized signal control.
[29]Reviews SS in aerial/space-ground networks, analyzing spectrum utilization rules, sharing modes (interweave, underlay, overlay), and key technologies like cooperative sensing and beam management.Identifies challenges such as spectrum instability, beam misalignment, and interference in aerial/space networks, emphasizing the need for efficient SAHigh mobility leads to dynamic spectrum conditions, beam misalignment reduces communication quality, and cooperative sensing integration remains complex.Develop fast spectrum sensing methods, energy-efficient algorithms, and advanced beam management techniques to improve SS in integrated networks.
[30]Investigates AI/ML-driven approaches for integrating TN and NTN in 6G, focusing on adaptive RA, intelligent routing, autonomous network operation, and SM.AI/ML enhances NTN performance by optimizing resource utilization, improving connectivity, and enabling autonomous decision-making for efficient SA.Cross-layer optimization, efficient handover management, and regulatory/standardization challenges in global 6G-NTN deployment.Future research includes ML-based link-level enhancements, mmWave for higher data rates, ultra-low latency IoT applications, and regulatory frameworks for standardized NTN integration.
[31]Introduces a multi-agent reinforcement learning (MARL)-based SS scheme for cognitive networks (CNs) to enhance decision-making and spectrum efficiency. Explains explainable DRL to improve convergence efficiency.MARL optimizes spectrum utilization with self-decision-making in dynamic environments, reducing latency and improving efficiency. Explainable DRL enhances transparency in decision-making.Key challenges include multi-objective function formulation, multi-dimensional action space, partial channel state information, and communication overhead in MARL-based SS.Future work should focus on macro- and micro-strategic planning for heterogeneous networks, multi-agent transmission protocol design, and enhancing DRL explainability for real-world SS applications.
[32]Proposes a cooperative multichannel SS framework for satellite-TN using a dynamic and collaborative SS (DCSS) algorithm. Utilizes multi-agent deep reinforcement learning (MADRL) with double deep Q-network (DDQN) for multichannel assignment and deep deterministic policy gradient (DDPG) for PA under a centralized training and decentralized execution (CTDE) framework.DCSS enhances spectrum efficiency by optimizing multichannel allocation and power distribution. Simulation results confirm its superiority over existing approaches.Co-channel interference from inter-system, inter-cell, and intra-cell interactions; real-time global information acquisition; decentralized decision-making in dynamic spectrum environments.Further refinement of decentralized partially observable Markov decision process (Dec-POMDP) frameworks improved the adaptability of learning models and enhanced spectrum efficiency in large-scale networks.
[33]Hierarchical cognitive spectrum sharing architecture (HCSSA) framework with policy iteration–based beamforming (PIBF) and low-complexity beamforming schemes for SS in space-air-ground integrated networks (SAGINs).Enhances spectrum efficiency by prioritizing aerial networks over terrestrial networks, improving overall network performance.Non-convex optimization, channel estimation errors, and interference management.Refining robust beamforming strategies and optimizing performance under dynamic CSI conditions.
Current PaperReview of papers (2020–2025) from central databases, categorized by AI type, task, and use case; compared techniques on performance and feasibility, and identified trends and research gaps.RL (e.g., DQN), CNNs, and LSTMs are widely used. AI improves spectrum efficiency and adaptability, supports real-time, cross-layer decisions, and shows promising results in Edge AI.Lack of model transparency, limited training data, high resource demands, poor scalability and generalization, and few standard benchmarks.Develop explainable AI (XAI), use federated and edge learning, apply transfer/meta-learning, explore neuromorphic models, and create open datasets and standards.
Table 2. Comparison of DSA classifications.
Classification | Architecture | Spectrum Sensing Behavior | Spectrum Access Method | Advantages | Challenges
Non-Cooperative DSA | Decentralized | Devices individually sense the spectrum and make independent decisions. | Spectrum access is based solely on local sensing results; no information sharing occurs. | Reduced communication overhead; faster decision-making. | Higher risk of false detection; susceptible to interference and spectrum-sensing errors.
Cooperative DSA | Distributed or centralized | Devices collaborate by sharing sensing data to improve accuracy. | Spectrum access decisions are made collectively, using data fusion (raw data sharing) or decision fusion (final decision sharing). | Enhanced detection accuracy, fewer false alarms, and more reliable spectrum sensing. | Increased complexity, requires a dedicated control channel, and has synchronization issues.
Interweave DSA | Decentralized or hybrid | Detects and utilizes spectrum holes when PUs are inactive. | Opportunistic access: SUs may transmit only when a spectrum hole is detected; transmission stops when the PUs return. | Efficient spectrum utilization; no interference with PUs. | Requires precise sensing; spectrum holes may be scarce, limiting transmission opportunities.
Overlay DSA | Decentralized or centralized | Knowledge of PU signals is required to minimize interference. | Allows simultaneous transmissions by employing interference cancellation or relaying PUs’ messages. | Supports concurrent transmission; more efficient than interweave DSA. | High computational complexity; requires knowledge of PU signals.
Underlay DSA | Decentralized or hybrid | No explicit spectrum hole detection; operates under strict interference constraints. | SUs transmit simultaneously with PUs but must control power levels to avoid exceeding interference thresholds. | Continuous transmission without waiting for spectrum holes; better spectrum efficiency. | Transmission power limitations; requires advanced interference control mechanisms.
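To make the cooperative DSA entry in Table 2 concrete, the following minimal Python sketch simulates hard-decision fusion: each secondary user (SU) runs a local energy detector and a fusion center combines the binary decisions with an OR, AND, or majority rule. All names and parameters here (number of SUs, noise power, the 1.5× detection threshold, the 0 dB primary-user SNR) are illustrative assumptions rather than values taken from the surveyed works.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_energy_decision(samples, noise_power, threshold_factor=1.5):
    """Local hard decision at one SU: energy detector against a scaled noise threshold."""
    energy = np.mean(np.abs(samples) ** 2)
    return energy > threshold_factor * noise_power

def fuse_decisions(decisions, rule="majority"):
    """Decision fusion at the fusion center: OR, AND, or majority (k-out-of-N) voting."""
    votes = sum(decisions)
    if rule == "or":
        return votes >= 1
    if rule == "and":
        return votes == len(decisions)
    return votes > len(decisions) / 2  # majority vote

# Illustrative scenario: 5 SUs observe the same band; the PU is active at roughly 0 dB SNR.
num_sus, num_samples, noise_power = 5, 1000, 1.0
pu_active = True
decisions = []
for _ in range(num_sus):
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(num_samples)
                                        + 1j * rng.standard_normal(num_samples))
    signal = np.exp(1j * 2 * np.pi * rng.random(num_samples)) if pu_active else 0
    decisions.append(local_energy_decision(signal + noise, noise_power))

print("Local decisions:", decisions)
print("Fused (majority):", fuse_decisions(decisions, "majority"))
```

In practice, the choice of fusion rule trades missed-detection risk (AND) against false-alarm rate (OR), which is why majority-style k-out-of-N voting is a common compromise in cooperative sensing.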
Table 3. Frequency allocation for legacy communication and navigation services.
Service Type | Frequency Range
E-GSM-900 (Mobile Communication) | 880–915 MHz, 925–960 MHz
DCS (Mobile Communication) | 1710–1785 MHz, 1805–1880 MHz
FM Radio (Broadcasting) | 88–108 MHz
Standard C Band (Satellite Communication) | 5.850–6.425 GHz, 3.625–4.200 GHz
Non-Directional Radio Beacon (Navigation) | 190–1535 kHz
Table 4. Comparison of ML techniques for wireless communications and SM.
ML Techniques | Learning Type | Key Features | Advantages | Limitations | Future Directions
Support Vector Machines (SVM) | Supervised learning | Classifies data by finding optimal hyperplanes | High accuracy for classification; effective with small datasets | Computationally expensive with large datasets; sensitive to noise | Integration with DL for feature extraction
K-Means Clustering | Unsupervised learning | Groups data points based on similarity | Scalable and straightforward; efficient for large datasets | Requires a predefined number of clusters; sensitive to initialization | Adaptive clustering for real-time SM
Gradient Follower (GF) | Supervised learning | Optimizes decisions using gradient-based methods | Fast convergence; widely used in optimization problems | May get stuck in local minima; requires careful parameter tuning | Hybrid approaches with metaheuristic optimization
Artificial Neural Networks (ANNs) | Supervised learning | Mimic the human brain’s structure for pattern recognition | Highly effective in modeling complex, nonlinear relationships | Require large datasets; computationally intensive | Application in intelligent spectrum sensing
Long Short-Term Memory (LSTM) | Supervised learning | A type of recurrent neural network (RNN) designed for sequential data | Addresses long-term dependencies; effective for time-series forecasting | High computational complexity; requires significant training data | Lightweight LSTM models for edge devices
Nonlinear Autoregressive Neural Network (NARNET) | Supervised learning | Forecasts time-series data using past values | Excellent for nonlinear time-series prediction; high accuracy | Limited to autoregressive modeling; requires careful tuning | Integration with hybrid statistical-ML approaches
Reinforcement Learning (RL) | Reinforcement learning | Learns optimal decisions through rewards and penalties | Adaptable in dynamic environments; effective for RA | High training time; may struggle with the exploration–exploitation balance | Federated RL for decentralized SM
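As a concrete illustration of the SVM row in Table 4, the sketch below (assuming scikit-learn is available) trains an SVC to classify a channel as occupied or idle from two hand-crafted features, received energy and a spectral-flatness proxy. The synthetic feature distributions and all parameter values are assumptions introduced purely for illustration and do not come from the reviewed studies.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def make_features(occupied, n):
    """Synthetic per-observation features: received energy and a spectral-flatness proxy."""
    noise_energy = rng.normal(1.0, 0.1, n)
    signal_energy = rng.normal(0.8, 0.2, n) if occupied else np.zeros(n)
    energy = noise_energy + signal_energy
    flatness = rng.normal(0.4 if occupied else 0.9, 0.05, n)  # occupied bands look less "flat"
    return np.column_stack([energy, flatness])

X = np.vstack([make_features(True, 500), make_features(False, 500)])
y = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = occupied, 0 = idle

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print("Occupancy classification accuracy:", clf.score(X_test, y_test))
```

The same feature-plus-classifier pattern underlies many classical ML sensing pipelines; the DL approaches compared later differ mainly in learning the features directly from raw samples.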
Table 5. Comparative analysis of RL techniques for distributed DSA.
Technique | Type | Key Components | Strengths | Limitations | Future Directions
Q-Learning (QL) | Model-free RL | Q-table (QT), state–action pairs | Simple; effective for small state–action spaces | Computationally infeasible for large state/action spaces | Integrating function approximation (e.g., NNs) to handle large state–action spaces
Deep Q-Network (DQN) | Value-based deep RL | NN, experience replay | Handles large state spaces; prevents instability via experience replay | Struggles with continuous action spaces; overestimates Q-values (QVs) | Exploring distributional RL and improved target update mechanisms to mitigate overestimation
Deep Deterministic Policy Gradient (DDPG) | Actor–critic RL | Actor network, critic network, experience replay | Handles continuous action spaces; learns deterministic policies | Sensitive to hyperparameters; suffers from QV overestimation | Enhancing exploration strategies using entropy regularization and hybrid models
Twin Delayed Deep Deterministic Policy Gradient (TD3) | Improved actor–critic RL | Dual critic networks, delayed policy updates, clipped Gaussian noise | More stable learning; reduces overestimation and prevents overfitting | Higher computational cost; slower training due to delayed updates | Reducing computational complexity through lightweight architectures and efficient parallelization
Hierarchical Deep Reinforcement Learning (HDRL) | Hierarchical RL | Uses multiple hierarchical policies for decision-making | Scalable; efficient in complex environments | Requires careful reward design; increased training complexity | Extending the multi-level hierarchy and applying it to large-scale multi-agent systems
Deep Reinforcement Learning (DRL) | Hybrid (value- and policy-based) | Combines DL with RL for feature extraction and decision-making | Learns directly from high-dimensional inputs, eliminating manual feature engineering | High training time; requires large datasets | Improving sample efficiency, enhancing interpretability, and integrating with neuromorphic computing
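To ground the Q-learning entry in Table 5, the following sketch shows a deliberately simplified, single-state formulation (effectively a multi-armed bandit) in which a secondary user learns which of four channels is most likely to be free. The channel-availability probabilities, learning rate, and exploration rate are illustrative assumptions; a full DSA agent would extend the Q-table over channel-state observations and time.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy DSA task: an SU picks one of 4 channels each slot; each channel has a fixed
# (unknown to the agent) probability of being free of PU activity. Reward is 1 if the
# chosen channel is free, 0 otherwise. The probabilities below are illustrative only.
p_free = np.array([0.2, 0.5, 0.8, 0.4])
num_channels = len(p_free)

q = np.zeros(num_channels)          # single-state Q-table: one value per channel (action)
alpha, epsilon, episodes = 0.1, 0.1, 5000

for _ in range(episodes):
    # epsilon-greedy action selection
    a = rng.integers(num_channels) if rng.random() < epsilon else int(np.argmax(q))
    reward = 1.0 if rng.random() < p_free[a] else 0.0
    # Q-learning update (no next-state term in this single-state, bandit-style setting)
    q[a] += alpha * (reward - q[a])

print("Learned Q-values per channel:", np.round(q, 2))
print("Preferred channel:", int(np.argmax(q)))
```

The DQN, DDPG, and TD3 variants in Table 5 replace the explicit Q-table with neural approximators so the same idea scales to large or continuous state–action spaces.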
Table 6. Comparative analysis of DL techniques for spectrum sensing.
DL Techniques | Key Features | Advantages | Limitations | Future Directions
Recurrent Neural Networks (RNNs) | Processes sequential spectrum data and captures temporal dependencies. | Effective for dynamic spectrum sensing. | Struggles with long-term dependencies and vanishing gradients. | Optimization with attention mechanisms.
Long Short-Term Memory (LSTM) | Retains long-term dependencies with gating mechanisms. | High accuracy in dynamic spectrum environments. | Computationally expensive. | Hybrid models with RL for adaptive sensing.
Artificial Neural Networks (ANNs) | Wavelet transform for noise reduction, feature extraction. | High fault tolerance, adaptive learning. | Performance degrades at low SNR. | Improved feature extraction and hybrid models.
Hybrid Spectrum Sensing (HSS) with ANN | Uses multiple detectors for feature extraction. | Higher detection probability with more detectors. | Requires significant computational resources. | Optimization for real-time applications.
Deep Neural Networks (DNNs) with Phase-Difference (PD) | Handles noise uncertainty and carrier frequency mismatch. | Improved detection in low-SNR conditions. | Dependent on training data quality. | Enhancing robustness with adaptive learning.
DNN with Robust Alternating Direction Multiplier Method (RADMM) | Stable spectrum recovery with numerical differential gradients. | Efficient at low SNRs. | High computational complexity. | Reducing computational overhead for real-time applications.
Generative Adversarial Networks (GANs) | Consistency constraints and TL for domain adaptation. | Improved classification accuracy. | Requires extensive training data. | Applying GANs for real-world noisy spectrum environments.
Deep CNN with Time–Frequency Fusion | Uses PCA and TL for feature extraction. | High accuracy in low-SNR conditions. | Increased model complexity. | Exploring lightweight CNN architectures for real-time processing.
Residual Neural Networks (ResNet) | Uses residual connections to overcome vanishing gradients. | Efficient feature extraction with deeper networks. | High computational cost. | Reducing complexity through model pruning and quantization.
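As an illustration of the CNN-based sensing approaches in Table 6, the following sketch (assuming PyTorch is available) defines a small 1-D convolutional classifier that maps a window of raw I/Q samples to an occupied/idle decision and runs one forward pass on random inputs. The layer sizes, window length, and class layout are illustrative choices rather than a reproduction of any surveyed architecture; training on labeled sensing data would follow the usual supervised pipeline.

```python
import torch
import torch.nn as nn

class SpectrumSensingCNN(nn.Module):
    """Minimal 1-D CNN mapping a window of I/Q samples to an occupied/idle decision."""
    def __init__(self, window_len=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        # After two 4x poolings the sequence length is window_len // 16.
        self.classifier = nn.Linear(32 * (window_len // 16), 2)  # logits: [idle, occupied]

    def forward(self, x):                     # x: (batch, 2, window_len) I/Q tensor
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = SpectrumSensingCNN()
dummy_iq = torch.randn(8, 2, 256)             # batch of 8 synthetic I/Q windows
logits = model(dummy_iq)
print(logits.shape)                            # torch.Size([8, 2])
```

The ResNet and time–frequency fusion variants in Table 6 keep this overall structure but deepen the feature extractor or feed spectrogram-like inputs instead of raw I/Q windows.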
Table 7. Comparison of ML, DL, and LLMs for DSA.
Aspect | Traditional ML | DL Techniques | LLMs
Architecture | Rule-based or statistical models (e.g., SVM, RF). | Neural networks (CNNs, RNNs). | Transformer-based, large-scale (e.g., GPT, BERT).
Feature Engineering | Manual and domain-specific. | Partially automated (learns features from data). | Fully automated through self-attention and large-scale pretraining.
Training Paradigm | Supervised learning on task-specific data. | Supervised or semi-supervised learning. | Pretrain on large corpora, then adapt (e.g., instruction tuning, prompt engineering).
Adaptability to New Tasks | Requires retraining from scratch. | Requires fine-tuning with labeled data. | Few-shot and in-context learning; task adaptation without parameter updates.
Data Requirement | Task-specific labeled datasets. | Large, labeled datasets for each task. | Massive unlabeled data for pretraining; minimal labeled data for adaptation.
Reasoning Ability | Limited (rule-based inference). | Pattern recognition but limited logical reasoning. | Capable of multi-step reasoning (e.g., chain-of-thought).
Multimodal Capability | Minimal (needs specialized models). | Requires separate architectures (e.g., CNN for vision, RNN for text). | Unified multimodal models (e.g., CLIP, AudioPaLM) for text, vision, and audio.
Interpretability | High (simple models are interpretable). | Moderate (black box, but visualizable). | Low (black-box nature; difficult to explain decisions).
Inference Time | Fast and lightweight. | Moderate (depends on model size). | Often slow due to large size and complexity.
On-Device Feasibility | High (lightweight models). | Possible with optimized models. | Challenging; requires compression or distillation.
Energy Efficiency | High (low compute demands). | Moderate (training can be energy-intensive). | Low (training can consume hundreds of MWh).
Application in DSA | Basic spectrum prediction, anomaly detection. | Spectrum sensing, modulation classification. | Intelligent, context-aware SA, spoofing detection, cooperative reasoning.
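To illustrate what the “few-shot and in-context learning” entry in Table 7 means operationally, the sketch below only assembles an illustrative few-shot prompt for a channel-recommendation query. The example observations, field names, and wording are hypothetical, and the call to an actual LLM service is deliberately omitted, since no specific model or API is prescribed by the reviewed works.

```python
# Illustrative only: builds a few-shot prompt for an LLM-based channel-recommendation
# task. The examples and wording are hypothetical; no model invocation is included.
FEW_SHOT_EXAMPLES = [
    {"observation": "ch1 busy, ch2 idle (SNR 18 dB), ch3 idle (SNR 6 dB)", "action": "ch2"},
    {"observation": "ch1 idle (SNR 9 dB), ch2 busy, ch3 busy", "action": "ch1"},
]

def build_prompt(observation: str) -> str:
    lines = ["You are a spectrum access assistant. Pick the best idle channel."]
    for ex in FEW_SHOT_EXAMPLES:               # in-context examples, no parameter updates
        lines.append(f"Observation: {ex['observation']}\nAction: {ex['action']}")
    lines.append(f"Observation: {observation}\nAction:")
    return "\n\n".join(lines)

print(build_prompt("ch1 idle (SNR 4 dB), ch2 idle (SNR 15 dB), ch3 busy"))
```

The key contrast with the ML and DL columns is that task adaptation here happens entirely through the prompt, without retraining or fine-tuning the underlying model.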
Table 8. AI privacy-preserving techniques for enhanced data security.
Technique | Description | Key Features | Applications
Federated Learning | A decentralized learning technique where models are trained on local devices without sharing raw data. | Data remains on local devices and only model updates are shared, enhancing privacy and reducing data transmission. | Mobile networks, IoT systems, healthcare, and finance.
Differential Privacy | A method to ensure that individual data points cannot be distinguished within a dataset, offering privacy guarantees. | Adds noise to data or queries, ensuring individual privacy while maintaining overall utility. | Data analytics, machine learning, and statistical reporting.
Homomorphic Encryption | Enables computations on encrypted data without needing to decrypt it first. | Allows for encrypted data analysis and maintains data confidentiality while still enabling insights. | Secure data processing, cloud computing, and privacy-preserving analytics.
Secure Multi-Party Computation (SMPC) | Protocols for joint computations on data from multiple parties while keeping individual inputs confidential. | Ensures that no party can access another’s data, while still enabling computation on shared data. | Collaborative data analysis, joint research, and financial services.
Homomorphic Encryption–ML | Combines homomorphic encryption with ML to allow secure data analysis without exposing sensitive information. | ML on encrypted data, ensuring privacy during data analysis and model training. | Healthcare, finance, IoT, and secure AI models.
Privacy-Preserving Data Mining | Techniques such as clustering or classification that ensure data privacy while performing mining operations. | Applies privacy-preserving algorithms such as secure k-means or secure decision trees. | Data mining, customer behavior analysis, and sensitive data mining.
Zero-Knowledge Proofs (ZKPs) | A cryptographic method to prove the validity of a statement without revealing the underlying data. | Ensures privacy by proving knowledge of a fact without revealing the fact itself. | Authentication, blockchain, and identity verification.
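To make the federated learning row of Table 8 concrete, the following sketch runs a toy federated averaging (FedAvg) loop: several simulated devices fit a local linear model on private synthetic data, and only the resulting weight vectors (never the raw samples) are averaged by the server. The data model, number of clients, learning rate, and round count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy FedAvg sketch: each "device" fits a local linear model on private data and
# only the model weights are sent to the server for averaging.
true_w = np.array([2.0, -1.0])

def local_update(global_w, n_samples=200, lr=0.05, steps=50):
    X = rng.normal(size=(n_samples, 2))               # private local data stays on-device
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w = global_w.copy()
    for _ in range(steps):                            # local gradient descent on MSE loss
        grad = 2 * X.T @ (X @ w - y) / n_samples
        w -= lr * grad
    return w

global_w = np.zeros(2)
for round_idx in range(5):                            # communication rounds
    client_weights = [local_update(global_w) for _ in range(4)]
    global_w = np.mean(client_weights, axis=0)        # server averages model updates only
    print(f"Round {round_idx + 1}: global weights = {np.round(global_w, 3)}")
```

In a spectrum-management setting, the same pattern lets distributed sensing nodes contribute to a shared occupancy or demand model without uploading raw measurements.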
Table 9. Comparing AI and conventional methods across key network performance metrics.
Aspect | AI Techniques | Conventional Methods
Latency | Low latency (e.g., ~10 ms for DL) | Higher latency (e.g., ~25 ms)
Accuracy | High, especially for complex and dynamic data | Moderate to low, suitable for static conditions
Computational Power | Requires GPUs/TPUs and parallel processing | Minimal hardware requirements
Energy Consumption | High, due to intensive training and inference | Low, suitable for battery-powered devices
Scalability | Easily scalable in distributed systems | Limited scalability due to rule-based designs
Data Requirements | Requires large, labeled datasets for training | Minimal data, mostly rule-based or heuristics
Interpretability | Often low (black-box models) | High (easy to understand decision process)
Adaptability | Self-adapts using continuous learning | Requires manual updates for changes
Resource Requirements | High memory and storage needs | Optimized for constrained environments
Regulatory Compliance | Complex, especially in privacy-sensitive domains | Easier to control and audit
Deployment Complexity | High, requires training pipelines and updates | Low, often plug-and-play
Learning Capability | Learns from past and real-time data | Static behavior, no learning ability
Maintenance | Requires frequent updates, monitoring, and retraining | Low maintenance once the rules are defined
Security | Vulnerable to adversarial attacks and poisoning | More predictable but limited in attack detection
Use Cases | Dynamic SM, anomaly detection, traffic prediction, intrusion detection | Fixed routing, static traffic shaping, threshold-based monitoring
Table 10. Representative open datasets for ML-based wireless communication research.
Dataset | Access | Area | Technology | Key Data | Description
IEEE DataPort [127] | Open | Varies | Various | Multi-modal datasets across wireless domains | A central repository of IEEE-hosted datasets, covering wireless sensing, 5G/6G, V2X, CSI, radar, and ML-focused experiments. Supports benchmarking, reproducibility, and publication-linked data access.
Tarbiat Modares University [128] | Open | Simulation | 6G networks | Artificial intelligence and machine learning in spectrum sharing | A neural network is used to approximate the Q-value function; the state is given as the input, and the Q-value of all possible actions is generated as the output.
COSINE (2020) [129] | Open | Laboratory | Cellular | SNR, temporal signal patterns | 55 GB cellular dataset from laboratory tests, useful for 5G/6G ML research.
KU Leuven LTE Dataset [130] | Partly Open | Campus | LTE | SINR, throughput | University testbed data focused on LTE performance under realistic conditions.
RICE Wireless Dataset (2016) [131] | Open | Laboratory | Experimental | Channel characteristics, signal logs | Dataset for physical layer and experimental protocol studies using testbed radio hardware.