AI and Security in 5G Cooperative Cognitive Radio Networks

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: 30 September 2025 | Viewed by 16707

Special Issue Editors


Guest Editor
The William States Lee College of Engineering, Electrical and Computer Engineering Department, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
Interests: spectrum sensing; compressive sensing; cognitive radio; wireless communication; cybersecurity; machine learning; LoRa; Internet of Things; federated learning; adversarial attacks; edge computing

Guest Editor
Computer Science Department, Higher School of Technology, Hassan II University, Casablanca 20000, Morocco
Interests: wireless communications; machine learning; data science; smart systems

Guest Editor
Department of Electrical Engineering and Computer Science (EECS), Howard University, Washington, DC 20059, USA
Interests: artificial intelligence; cybersecurity; autonomous systems; IoT; metaverse

Special Issue Information

Dear Colleagues,

5G cooperative cognitive radio continues to be a subject of great interest to researchers in wireless communications. It mitigates radio spectrum scarcity by enabling opportunistic access to the spectrum: using spectrum sensing techniques, unlicensed users detect vacant bands and use them for their transmissions without interfering with licensed users.

In cooperative scenarios, unlicensed users collaborate and report their sensing results to a fusion center, which makes the final decision about spectrum occupancy. However, malicious users can interfere by eavesdropping or by reporting falsified measurements, degrading the accuracy of the sensing decision. Examples of such attacks include primary user emulation, belief manipulation, eavesdropping, and malicious traffic injection. Detecting and effectively mitigating these attacks is therefore essential to securing cooperative spectrum sensing.
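The hard-decision fusion described above can be sketched with a majority vote, one of the classical k-out-of-n fusion rules. The sketch below (with made-up sensor reports) only illustrates how a handful of falsified reports can flip the fusion center's decision:

```python
def fuse_reports(reports):
    """Hard-decision majority fusion: declare the channel occupied when
    more than half of the reported local decisions say so."""
    return sum(reports) > len(reports) / 2

# Three honest sensors correctly report the primary user as present (1).
honest = [1, 1, 1]

# Four malicious users inject falsified "idle" (0) reports.
falsified = honest + [0, 0, 0, 0]

print(fuse_reports(honest))     # True: channel correctly declared busy
print(fuse_reports(falsified))  # False: the falsified reports flip the decision
```

Reputation weighting or outlier screening of reports at the fusion center is a common first line of defense against such falsification.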

Artificial intelligence has been heralded as a key enabler for addressing the challenges facing today’s networks. Integrating artificial intelligence into 5G networks enables efficient detection of malicious users and of the other security threats facing 5G cooperative cognitive radio networks. In this context, this Special Issue is an opportunity to investigate how artificial intelligence can detect and mitigate the security challenges facing cooperative spectrum sensing.

Dr. Fatima Salahdine
Dr. Hassan El Alami
Dr. Mohammed Ridouani
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 5G and beyond
  • cooperative networks
  • cognitive radio networks
  • security
  • artificial intelligence
  • machine learning
  • deep learning
  • federated learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research


30 pages, 24605 KiB  
Article
Advanced Trajectory Analysis of NASA’s Juno Mission Using Unsupervised Machine Learning: Insights into Jupiter’s Orbital Dynamics
by Ashraf ALDabbas, Zaid Mustafa and Zoltan Gal
Future Internet 2025, 17(3), 125; https://doi.org/10.3390/fi17030125 - 11 Mar 2025
Viewed by 524
Abstract
NASA’s Juno mission, involving a pioneering spacecraft the size of a basketball court, has been instrumental in observing Jupiter’s atmosphere and surface from orbit since it reached the intended orbit. Over its first decade of operation, Juno has provided unprecedented insights into the solar system’s origins through advanced remote sensing and technological innovations. This study focuses on change detection in terms of Juno’s trajectory, leveraging cutting-edge data computing techniques to analyze its orbital dynamics. Utilizing 3D position and velocity time series data from NASA, spanning 11 years and 5 months (August 2011 to January 2023), with 5.5 million samples at 1 min accuracy, we examine the spacecraft’s trajectory modifications. The instantaneous average acceleration, jerk, and snap are computed as approximations of the first, second, and third derivatives of velocity, respectively. The Hilbert transform is employed to visualize the spectral properties of Juno’s non-stationary 3D movement, enabling the detection of extreme events caused by varying forces. Two unsupervised machine learning algorithms, DBSCAN and OPTICS, are applied to cluster the sampling events in two 3D state spaces: (velocity, acceleration, jerk) and (acceleration, jerk, snap). Our results demonstrate that the OPTICS algorithm outperformed DBSCAN in terms of the outlier detection accuracy across all three operational phases (OP1, OP2, and OP3), achieving accuracies of 99.3%, 99.1%, and 98.9%, respectively. In contrast, DBSCAN yielded accuracies of 98.8%, 98.2%, and 97.4%. These findings highlight OPTICS as a more effective method for identifying outliers in elliptical orbit data, albeit with higher computational resource requirements and longer processing times. This study underscores the significance of advanced machine learning techniques in enhancing our understanding of complex orbital dynamics and their implications for planetary exploration. Full article
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)
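Both DBSCAN and OPTICS label low-density samples as noise (cluster label −1), which is what makes them usable as outlier detectors here. As a rough illustration of that idea only, not of the paper's pipeline, the pared-down NumPy sketch below flags points that are neither core points nor within eps of one, using synthetic stand-in data for a (velocity, acceleration, jerk) state space (libraries such as scikit-learn provide full DBSCAN and OPTICS implementations):

```python
import numpy as np

def dbscan_outliers(X, eps=0.5, min_samples=10):
    """Simplified DBSCAN-style noise labelling: a point is an outlier if it
    is neither a core point nor within eps of a core point."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    core = (d <= eps).sum(axis=1) >= min_samples   # neighbor counts include self
    reachable = (d[:, core] <= eps).any(axis=1)
    return ~reachable   # core points are trivially reachable from themselves

rng = np.random.default_rng(0)
nominal = rng.normal(0.0, 0.1, size=(200, 3))   # dense cluster of nominal samples
extreme = rng.uniform(5.0, 6.0, size=(5, 3))    # a few extreme "events"
X = np.vstack([nominal, extreme])

print(np.where(dbscan_outliers(X))[0])   # indices 200..204: the extreme events
```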

32 pages, 2442 KiB  
Article
Federated Learning System for Dynamic Radio/MEC Resource Allocation and Slicing Control in Open Radio Access Network
by Mario Martínez-Morfa, Carlos Ruiz de Mendoza, Cristina Cervelló-Pastor and Sebastia Sallent-Ribes
Future Internet 2025, 17(3), 106; https://doi.org/10.3390/fi17030106 - 26 Feb 2025
Viewed by 721
Abstract
The evolution of cellular networks from fifth-generation (5G) architectures to beyond 5G (B5G) and sixth-generation (6G) systems necessitates innovative solutions to overcome the limitations of traditional Radio Access Network (RAN) infrastructures. Existing monolithic and proprietary RAN components restrict adaptability, interoperability, and optimal resource utilization, posing challenges in meeting the stringent requirements of next-generation applications. The Open Radio Access Network (O-RAN) and Multi-Access Edge Computing (MEC) have emerged as transformative paradigms, enabling disaggregation, virtualization, and real-time adaptability—which are key to achieving ultra-low latency, enhanced bandwidth efficiency, and intelligent resource management in future cellular systems. This paper presents a Federated Deep Reinforcement Learning (FDRL) framework for dynamic radio and edge computing resource allocation and slicing management in O-RAN environments. An Integer Linear Programming (ILP) model has also been developed, resulting in the proposed FDRL solution drastically reducing the system response time. On the other hand, unlike centralized Reinforcement Learning (RL) approaches, the proposed FDRL solution leverages Federated Learning (FL) to optimize performance while preserving data privacy and reducing communication overhead. Comparative evaluations against centralized models demonstrate that the federated approach improves learning efficiency and reduces bandwidth consumption. The system has been rigorously tested across multiple scenarios, including multi-client O-RAN environments and loss-of-synchronization conditions, confirming its resilience in distributed deployments. Additionally, a case study simulating realistic traffic profiles validates the proposed framework’s ability to dynamically manage radio and computational resources, ensuring efficient and adaptive O-RAN slicing for diverse and high-mobility scenarios. Full article
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)
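The federated aggregation underpinning such a framework can be sketched with the standard FedAvg rule, in which clients share model weights rather than raw data. This is a generic illustration, not the paper's FDRL agent; the three sites and sample counts below are made up:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client model weights, weighting each
    client by its share of the total training samples."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three O-RAN sites train locally and share only their weight vectors.
w1, w2, w3 = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
global_w = fedavg([w1, w2, w3], client_sizes=[100, 100, 200])
print(global_w)   # [0.75 0.75]
```

Because only weights travel between sites and the aggregator, raw traffic data stays local, which is the privacy and bandwidth argument made in the abstract.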

22 pages, 7085 KiB  
Article
Multiple PUE Attack Detection in Cooperative Mobile Cognitive Radio Networks
by Ernesto Cadena Muñoz, Gustavo Chica Pedraza and Alexander Aponte Moreno
Future Internet 2024, 16(12), 456; https://doi.org/10.3390/fi16120456 - 4 Dec 2024
Viewed by 717
Abstract
Mobile Cognitive Radio Networks (MCRNs) are an alternative to spectrum scarcity. However, like any network, they come with security issues to analyze. One such attack is the Primary User Emulation (PUE) attack, which leads the system to serve the attacker as a legitimate user and lets it use the Primary Users’ (PUs) spectrum resources. This problem has been addressed from perspectives such as arrival time, position detection, cooperative scenarios, and artificial intelligence (AI) techniques. Nevertheless, it has been studied with only one PUE attack at a time. This paper implements a countermeasure that can be applied when several attacks exist simultaneously in a cooperative network. A deep neural network (DNN) is used with other techniques to determine the PUE’s existence and communicate it to other devices in the cooperative MCRN. An algorithm to detect and share detection information is applied, and the results show that the system can detect multiple PUE attacks with coordination between the secondary users (SUs). Scenarios are implemented on software-defined radio (SDR) with a cognitive protocol to protect the PU. The probability of detection (PD) is measured for several signal-to-noise ratio (SNR) values in the presence of one or more PUEs in the network, showing detection values above 90% at an SNR of −7 dB. A database is also created with the attackers’ data and shared with all the SUs. Full article
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)
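The reported detection rates at low SNR can be put in context with the classical single-node energy detector, the usual building block beneath cooperative sensing defenses. The Monte Carlo sketch below is illustrative rather than the paper's DNN-based method; the 1000-sample sensing window, BPSK-like signal, and 10% false-alarm target are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_prob(snr_db, n=1000, trials=2000, pfa=0.1):
    """Monte Carlo probability of detection for an energy detector whose
    threshold is calibrated on noise-only trials to meet a target Pfa."""
    noise = rng.normal(size=(trials, n))
    energy_h0 = (noise ** 2).sum(axis=1)
    thresh = np.quantile(energy_h0, 1 - pfa)   # threshold for the target Pfa

    amp = 10 ** (snr_db / 20)                  # amplitude giving the desired SNR
    signal = amp * np.sign(rng.normal(size=(trials, n)))   # BPSK-like symbols
    energy_h1 = ((signal + rng.normal(size=(trials, n))) ** 2).sum(axis=1)
    return (energy_h1 > thresh).mean()

print(detection_prob(-7.0))   # well above 0.9 with a 1000-sample window
```

Shrinking the window n shows how quickly single-node detection degrades at low SNR, which is one motivation for cooperative sensing.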

16 pages, 1437 KiB  
Article
Effective Monoaural Speech Separation through Convolutional Top-Down Multi-View Network
by Aye Nyein Aung, Che-Wei Liao and Jeih-Weih Hung
Future Internet 2024, 16(5), 151; https://doi.org/10.3390/fi16050151 - 28 Apr 2024
Cited by 1 | Viewed by 1505
Abstract
Speech separation, sometimes known as the “cocktail party problem”, is the process of separating individual speech signals from an audio mixture that includes ambient noises and several speakers. The goal is to extract the target speech in this complicated sound scenario and either make it easier to understand or increase its quality so that it may be used in subsequent processing. Speech separation on overlapping audio data is important for many speech-processing tasks, including natural language processing, automatic speech recognition, and intelligent personal assistants. New speech separation algorithms are often built on a deep neural network (DNN) structure, which seeks to learn the complex relationship between the speech mixture and any specific speech source of interest. DNN-based speech separation algorithms outperform conventional statistics-based methods, although they typically need a lot of processing and/or a larger model size. This study presents a new end-to-end speech separation network called ESC-MASD-Net (effective speaker separation through convolutional multi-view attention and SuDoRM-RF network), which has relatively fewer model parameters compared with the state-of-the-art speech separation architectures. The network is partly inspired by the SuDoRM-RF++ network, which uses multiple time-resolution features with downsampling and resampling for effective speech separation. ESC-MASD-Net incorporates the multi-view attention and residual conformer modules into SuDoRM-RF++. Additionally, the U-Convolutional block in ESC-MASD-Net is refined with a conformer layer. Experiments conducted on the WHAM! dataset show that ESC-MASD-Net outperforms SuDoRM-RF++ significantly in the SI-SDRi metric. Furthermore, the use of the conformer layer has also improved the performance of ESC-MASD-Net. Full article
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)
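The SI-SDRi metric used in the evaluation is the improvement in scale-invariant signal-to-distortion ratio. The raw SI-SDR is straightforward to compute; a minimal NumPy version follows (the signals are synthetic stand-ins, not WHAM! data):

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant SDR in dB: project the estimate onto the reference,
    then compare target energy against residual (distortion) energy."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    scale = np.dot(estimate, reference) / np.dot(reference, reference)
    target = scale * reference
    residual = estimate - target
    return 10 * np.log10(np.dot(target, target) / np.dot(residual, residual))

rng = np.random.default_rng(0)
clean = rng.normal(size=16000)            # stand-in for a 1 s, 16 kHz source
noisy = clean + 0.1 * rng.normal(size=16000)
print(si_sdr(noisy, clean))               # ≈ 20 dB for 10% additive noise
```

SI-SDRi is then simply si_sdr(separated, reference) minus si_sdr(mixture, reference).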

16 pages, 457 KiB  
Article
Modeling and Analyzing Preemption-Based Service Prioritization in 5G Networks Slicing Framework
by Yves Adou, Ekaterina Markova and Yuliya Gaidamaka
Future Internet 2022, 14(10), 299; https://doi.org/10.3390/fi14100299 - 18 Oct 2022
Cited by 8 | Viewed by 5378
Abstract
Network Slicing (NS), recognized as one of the key enabling features of Fifth-Generation (5G) wireless systems, provides very flexible ways to efficiently accommodate, on common physical infrastructures, e.g., a Base Station (BS), multiple logical networks referred to as Network Slice Instances (NSIs). To ensure the required Quality of Service (QoS) levels, NS technology relies on classical Resource Reservation (RR) or service prioritization schemes. This paper proposes a Preemption-based Prioritization (PP) scheme merging the classical RR and service prioritization schemes. The efficiency of the proposed PP scheme is evaluated using a queueing system (QS) model that analyzes the operation of multiple NSIs with various requirements at common 5G BSs. As a key result, the proposed PP scheme can provide up to a 100% gain in the blocking probability of arriving requests relative to a baseline scheme. Full article
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)
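The blocking probabilities at stake can be illustrated with the classical Erlang-B recursion, a much simpler relative of the paper's multi-NSI queueing model; the channel counts and offered load below are made up for illustration:

```python
def erlang_b(servers, load):
    """Erlang-B blocking probability via the standard stable recursion:
    B(0) = 1,  B(c) = load * B(c-1) / (c + load * B(c-1))."""
    b = 1.0
    for c in range(1, servers + 1):
        b = load * b / (c + load * b)
    return b

# When a higher-priority slice preempts half the channels, lower-priority
# requests face a much higher blocking probability for the same load.
print(erlang_b(10, 5.0))   # ~0.018 with 10 channels at 5 Erlang
print(erlang_b(5, 5.0))    # ~0.28 once only 5 channels remain
```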

Review


42 pages, 2733 KiB  
Review
A Holistic Review of Machine Learning Adversarial Attacks in IoT Networks
by Hassan Khazane, Mohammed Ridouani, Fatima Salahdine and Naima Kaabouch
Future Internet 2024, 16(1), 32; https://doi.org/10.3390/fi16010032 - 19 Jan 2024
Cited by 20 | Viewed by 6448
Abstract
With the rapid advancements and notable achievements across various application domains, Machine Learning (ML) has become a vital element within the Internet of Things (IoT) ecosystem. Among these use cases is IoT security, where numerous systems are deployed to identify or thwart attacks, including intrusion detection systems (IDSs), malware detection systems (MDSs), and device identification systems (DISs). Machine Learning-based (ML-based) IoT security systems can fulfill several security objectives, including detecting attacks, authenticating users before they gain access to the system, and categorizing suspicious activities. Nevertheless, ML faces numerous challenges, such as those resulting from the emergence of adversarial attacks crafted to mislead classifiers. This paper provides a comprehensive review of the body of knowledge about adversarial attacks and defense mechanisms, with a particular focus on three prominent IoT security systems: IDSs, MDSs, and DISs. The paper starts by establishing a taxonomy of adversarial attacks within the context of IoT. Then, various methodologies employed in the generation of adversarial attacks are described and classified within a two-dimensional framework. Additionally, we describe existing countermeasures for enhancing IoT security against adversarial attacks. Finally, we explore the most recent literature on the vulnerability of three ML-based IoT security systems to adversarial attacks. Full article
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)
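A concrete instance of the attack-generation methods such a survey covers is the Fast Gradient Sign Method (FGSM): perturb the input in the direction of the sign of the loss gradient. The sketch below applies it to a toy logistic-regression classifier; the weights and the deliberately large eps are illustrative, not taken from the review:

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """FGSM against logistic regression: step the input by eps in the sign
    of the cross-entropy loss gradient with respect to x."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted P(class = 1)
    grad_x = (p - y) * w                     # d(loss)/dx for cross-entropy
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([2.0, -1.0, 1.0])

x_adv = fgsm(x, w, b, y=1.0, eps=2.0)
print(w @ x + b)       # 4.5: confidently class 1
print(w @ x_adv + b)   # -2.5: the prediction is flipped
```

Defenses surveyed in such reviews, e.g., adversarial training, work by exposing the classifier to exactly these perturbed inputs during training.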
