Search Results (76)

Search Parameters:
Keywords = infrastructure slicing

20 pages, 1694 KiB  
Article
Green Network Slicing Architecture Based on 5G-IoT and Next-Generation Technologies
by Mariame Amine, Abdellatif Kobbane, Jalel Ben-Othman and Mohammed El Koutbi
Appl. Sci. 2025, 15(16), 8938; https://doi.org/10.3390/app15168938 - 13 Aug 2025
Abstract
The rapid expansion of device connectivity and the increasing demand for data traffic have become pivotal aspects of our daily lives, especially within the Internet of Things (IoT) ecosystem. Consequently, operators are striving to identify the most innovative and robust solutions capable of accommodating these escalating requirements. The emergence of the sliced fifth-generation mobile network (sliced 5G) offers a promising architecture that leverages a novel Radio Access Technology known as New Radio (NR), promising significantly enhanced data rate experiences. By integrating the network slicing (NS) architecture, greater flexibility and isolation are introduced into the preexisting infrastructure. The isolation effect of NS is particularly advantageous in mitigating interference between slices, as it empowers each slice to function independently. This paper addresses the user association challenge within a sliced 5G (NR)-IoT network. To this end, we present an Unconstrained-Markov Decision Process (U-MDP) model formulation of the problem. Subsequently, we propose the U-MDP association algorithm, which aims to determine the optimal user-to-slice associations. Unlike existing approaches that typically rely on static user association or separate optimization strategies, our U-MDP algorithm dynamically optimizes user-to-slice associations within a sliced 5G-IoT architecture, thereby enhancing adaptability to varying network conditions and improving overall system performance. Our numerical simulations validate the theoretical model and demonstrate the effectiveness of our proposed solution in enhancing overall system performance, all while upholding the quality of service requirements for all devices. Full article
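
As a rough illustration of the user-to-slice association idea (not the authors' U-MDP formulation), the toy value iteration below treats the user's current slice as the state, candidate slices as actions, and an invented reward table standing in for achievable rate minus congestion; all numbers are placeholders.

```python
# Toy value iteration for a user-to-slice association MDP (illustrative only;
# the states, rewards, and transitions below are invented, not the paper's U-MDP model).
import numpy as np

n_slices = 3                      # candidate slices a user can be associated with
gamma = 0.9                       # discount factor

# Hypothetical reward[s, a]: reward for moving a user from slice s to slice a.
reward = np.array([[1.0, 0.4, 0.2],
                   [0.3, 1.2, 0.5],
                   [0.2, 0.6, 0.9]])

# Deterministic transitions: choosing slice a moves the user to slice a.
V = np.zeros(n_slices)
for _ in range(200):              # value iteration until (approximate) convergence
    V = np.max(reward + gamma * V[np.newaxis, :], axis=1)

policy = np.argmax(reward + gamma * V[np.newaxis, :], axis=1)
print("optimal slice per current state:", policy)
```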

15 pages, 753 KiB  
Article
A Novel Cloud Energy Consumption Heuristic Based on a Network Slicing–Ring Fencing Ratio
by Vinay Sriram Iyer, Yasantha Samarawickrama and Giovani Estrada
Network 2025, 5(3), 27; https://doi.org/10.3390/network5030027 - 25 Jul 2025
Abstract
The widespread adoption of cloud computing has amplified the demand for electric power. It is strategically important to address the limitations of reliable sources and sustainability of power. Research and investment in data centres and power infrastructure are therefore critically important for our digital economy. A novel heuristic for the minimisation of energy consumption in cloud computing is presented. It draws similarities to the concept of “network slices”, in which an orchestrator enables multiplexing to reduce the network “churn” often associated with significant losses of energy consumption. The novel network slicing–ring fencing ratio is a heuristic calculated through an iterative procedure for the reduction in cloud energy consumption. Simulation results show how the non-convex equation optimises power by reducing energy from 10,680 kJ to 912 kJ, which is a 91.46% efficiency gain. In comparison, the Heuristic AUGMENT Non-Convex algorithm (HA-NC, by Hossain and Ansari) reported a 312.74% increase in energy consumption from 2464 kJ to 10,168 kJ, while the Priority Selection Offloading algorithm (PSO, by Anajemba et al.) also reported a 150% increase in energy consumption, from 10,738 kJ to 26,845 kJ. The proposed network slicing–ring fencing ratio is seen to successfully balance energy consumption and computing performance. We therefore think the novel approach could be of interest to network architects and cloud operators. Full article
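
The percentage figures quoted above can be reproduced directly from the reported kJ values; the short check below uses only numbers taken from the abstract.

```python
# Reproducing the percentage figures quoted in the abstract from its raw kJ values.
def pct_change(before_kj: float, after_kj: float) -> float:
    """Signed percentage change from before to after (negative = reduction)."""
    return (after_kj - before_kj) / before_kj * 100

print(f"proposed ratio : {pct_change(10_680, 912):+.2f}%")    # ≈ -91.46% (the reported 91.46% reduction)
print(f"HA-NC          : {pct_change(2_464, 10_168):+.2f}%")  # ≈ +312.66%, close to the reported 312.74%
print(f"PSO            : {pct_change(10_738, 26_845):+.2f}%") # ≈ +150.00% (the reported 150% increase)
```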

19 pages, 3187 KiB  
Article
Development of an Automated Crack Detection System for Port Quay Walls Using a Small General-Purpose Drone and Orthophotos
by Daiki Komi, Daisuke Yoshida and Tomohito Kameyama
Sensors 2025, 25(14), 4325; https://doi.org/10.3390/s25144325 - 10 Jul 2025
Abstract
Aging port infrastructure demands frequent and reliable inspections, yet the existing automated systems often require expensive industrial drones, posing significant adoption barriers for local governments with limited resources. To address this challenge, this study develops a low-cost, automated crack detection system for port quay walls utilizing orthophotos generated from a small general-purpose drone. The system employs the YOLOR (You Only Learn One Representation) object detection algorithm, enhanced by two novel image processing techniques—overlapping tiling and pseudo-altitude slicing—to overcome the resolution limitations of low-cost cameras. While official guidelines for port facilities designate 3 mm as an inspection threshold, our system is specifically designed to achieve a higher-resolution detection capability for cracks as narrow as 1 mm. This approach ensures reliable detection with a sufficient safety margin and enables the proactive monitoring of crack progression for preventive maintenance. The effectiveness of the proposed image processing techniques was validated, with an F1 score-based analysis revealing key trade-offs between maximizing detection recall and achieving a balanced performance depending on the chosen simulated altitude. Furthermore, evaluation using real-world inspection data demonstrated that the proposed system achieves a detection performance comparable to that of a well-established commercial system, confirming its practical applicability. Crucially, by mapping the detected cracks to real-world coordinates on georeferenced orthophotos, the system provides a foundation for advanced, data-driven asset management, allowing for the quantitative tracking of deterioration over time. These results confirm that the proposed workflow is a practical and sustainable solution for infrastructure monitoring. Full article
(This article belongs to the Section Sensing and Imaging)
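
To illustrate the overlapping-tiling idea mentioned above (splitting a large orthophoto into patches whose stride is smaller than the patch size, so cracks crossing tile borders stay intact in at least one tile), here is a minimal sketch; the tile size and overlap are assumptions, not the paper's settings.

```python
# Minimal sketch of overlapping tiling over an orthophoto. Tile size and overlap
# are illustrative values only.
import numpy as np

def overlapping_tiles(image: np.ndarray, tile: int = 640, overlap: int = 128):
    """Yield (x, y, patch) tiles covering the whole image with overlap."""
    stride = tile - overlap
    h, w = image.shape[:2]
    for y in range(0, max(h - overlap, 1), stride):
        for x in range(0, max(w - overlap, 1), stride):
            patch = image[y:y + tile, x:x + tile]
            yield x, y, patch   # (x, y) lets detections be mapped back to orthophoto coordinates

ortho = np.zeros((2000, 3000, 3), dtype=np.uint8)    # stand-in for a real orthophoto
print(sum(1 for _ in overlapping_tiles(ortho)))       # number of tiles produced
```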

32 pages, 6783 KiB  
Article
Adaptive Zero Trust Policy Management Framework in 5G Networks
by Abdulrahman K. Alnaim
Mathematics 2025, 13(9), 1501; https://doi.org/10.3390/math13091501 - 1 May 2025
Abstract
The rapid evolution and deployment of 5G networks have introduced complex security challenges due to their reliance on dynamic network slicing, ultra-low latency communication, decentralized architectures, and highly diverse use cases. Traditional perimeter-based security models are no longer sufficient in these highly fluid and distributed environments. In response to these limitations, this study introduces SecureChain-ZT, a novel Adaptive Zero Trust Policy Framework (AZTPF) that addresses emerging threats by integrating intelligent access control, real-time monitoring, and decentralized authentication mechanisms. SecureChain-ZT advances conventional Zero Trust Architecture (ZTA) by leveraging machine learning, reinforcement learning, and blockchain technologies to achieve autonomous policy enforcement and threat mitigation. Unlike static ZT models that depend on predefined rule sets, AZTPF continuously evaluates user and device behavior in real time, detects anomalies through AI-powered traffic analysis, and dynamically updates access policies based on contextual risk assessments. Comprehensive simulations and experiments demonstrate the robustness of the framework. SecureChain-ZT achieves an authentication accuracy of 97.8% and reduces unauthorized access attempts from 17.5% to just 2.2%. Its advanced detection capabilities achieve a threat detection accuracy of 99.3% and block 95.6% of attempted cyber intrusions. The implementation of blockchain-based identity verification reduces spoofing incidents by 97%, while microsegmentation limits lateral movement attacks by 75%. The proposed SecureChain-ZT model achieved an authentication accuracy of 98.6%, reduced false acceptance and rejection rates to 1.2% and 0.2% respectively, and improved policy update time to 180 ms. Compared to traditional models, the overall latency was reduced by 62.6%, and threat detection accuracy increased to 99.3%. These results highlight the model’s effectiveness in both cybersecurity enhancement and real-time service responsiveness. This research contributes to the advancement of Zero Trust security models by presenting a scalable, resilient, and adaptive policy enforcement framework that aligns with the demands of next-generation 5G infrastructures. The proposed SecureChain-ZT model not only enhances cybersecurity but also ensures service reliability and responsiveness in complex and mission-critical environments. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Decision Making)
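
As a loose illustration of context-driven access decisions of the kind an adaptive Zero Trust policy engine makes, the toy below combines invented risk signals into a score and maps it to allow / step-up / deny; the signals, weights, and thresholds are placeholders and do not reflect the SecureChain-ZT design.

```python
# Toy contextual risk scoring for a Zero Trust access decision. All signals,
# weights, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class AccessContext:
    device_trust: float      # 0 (unknown device) .. 1 (attested device)
    behavior_anomaly: float  # 0 (normal traffic) .. 1 (highly anomalous)
    slice_sensitivity: float # 0 (best-effort slice) .. 1 (mission-critical slice)

def risk_score(ctx: AccessContext) -> float:
    # Higher anomaly and higher slice sensitivity raise risk; device trust lowers it.
    return 0.5 * ctx.behavior_anomaly + 0.3 * ctx.slice_sensitivity + 0.2 * (1 - ctx.device_trust)

def decide(ctx: AccessContext) -> str:
    r = risk_score(ctx)
    if r < 0.3:
        return "allow"
    if r < 0.6:
        return "allow with step-up authentication"
    return "deny and re-evaluate policy"

print(decide(AccessContext(device_trust=0.9, behavior_anomaly=0.1, slice_sensitivity=0.8)))
print(decide(AccessContext(device_trust=0.2, behavior_anomaly=0.7, slice_sensitivity=0.9)))
```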

19 pages, 5116 KiB  
Article
Prediction of Shallow Landslide Runout Distance Based on Genetic Algorithm and Dynamic Slicing Method
by Wenming Ren, Wei Zhou, Zhixiao Hou and Chuan Tang
Water 2025, 17(9), 1293; https://doi.org/10.3390/w17091293 - 26 Apr 2025
Abstract
Shallow landslides are often unpredictable and seriously threaten surrounding infrastructure and the ecological environment. Traditional landslide prediction methods are time-consuming, labor-intensive, and inaccurate. Thus, there is an urgent need to enhance predictive techniques. To accurately predict the runout distance of shallow landslides, this study focuses on a shallow soil landslide in Tongnan District, Chongqing Municipality. We employ a genetic algorithm (GA) to identify the most hazardous sliding surface through multi-iteration optimization. We discretize the landslide body into slice units using the dynamic slicing method (DSM) to estimate the runout distance. The model’s effectiveness is evaluated based on the relative errors between predicted and actual values, exploring the effects of soil moisture content and slice number on the kinematic model. The results show that under saturated soil conditions, the GA-identified hazardous sliding surface closely matches the actual surface, with a stability coefficient of 0.9888. As the number of slices increases, velocity fluctuations within the slices become more evident. With 100 slices, the predicted movement time of the Tongnan landslide is 12 s, and the runout distance is 5.91 m, with a relative error of about 7.45%, indicating the model’s reliability. The GA-DSM method proposed in this study improves the accuracy of landslide runout prediction. It supports the setting of appropriate safety distances and the implementation of preventive engineering measures, such as the construction of retaining walls or drainage systems, to minimize the damage caused by landslides. Moreover, the method provides a comprehensive technical framework for monitoring and early warning of similar geological hazards. It can be extended and optimized for all types of landslides under different terrain and geological conditions. It also promotes landslide prediction theory, which is of high application value and significance for practical use. Full article
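
Purely to give a feel for the kind of kinematics a slice-based runout model integrates, here is a generic sliding-block toy (accelerate on a slope, decelerate on flat ground until the mass stops); the geometry, friction angle, and time step are invented, and this is not the paper's GA-DSM formulation.

```python
# Generic sliding-block toy for runout distance (illustration only, NOT GA-DSM).
import math

g = 9.81
theta = math.radians(30)   # slope angle of the source area (assumed)
phi = math.radians(20)     # friction angle (assumed)
slope_len = 15.0           # length of the sliding path on the slope [m] (assumed)
dt = 0.01

v, s, t = 0.0, 0.0, 0.0
# Phase 1: acceleration on the slope.
while s < slope_len:
    a = g * (math.sin(theta) - math.tan(phi) * math.cos(theta))
    v += a * dt
    s += v * dt
    t += dt

# Phase 2: deceleration on (nearly) flat ground until the mass stops.
runout = 0.0
while v > 0:
    a = -g * math.tan(phi)
    v += a * dt
    runout += max(v, 0.0) * dt
    t += dt

print(f"time ≈ {t:.1f} s, runout beyond the slope toe ≈ {runout:.2f} m")
```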

28 pages, 2200 KiB  
Article
Fine-Tuning Network Slicing in 5G: Unveiling Mathematical Equations for Precision Classification
by Nikola Anđelić, Sandi Baressi Šegota and Vedran Mrzljak
Computers 2025, 14(5), 159; https://doi.org/10.3390/computers14050159 - 25 Apr 2025
Abstract
Modern 5G network slicing centers on the precise design of virtual, independent networks operating over a shared physical infrastructure, each configured to meet specific service requirements. This approach plays a vital role in enabling highly customized and flexible service delivery within the 5G ecosystem. In this study, we present the application of a genetic programming symbolic classifier to a dedicated network slicing dataset, resulting in the generation of accurate symbolic expressions for classifying different network slice types. To address the issue of class imbalance, we employ oversampling strategies that produce balanced variations of the dataset. Furthermore, a random search strategy is used to explore the hyperparameter space comprehensively in pursuit of optimal classification performance. The derived symbolic models, refined through threshold tuning based on prediction correctness, are subsequently evaluated on the original imbalanced dataset. The proposed method demonstrates outstanding performance, achieving a perfect classification accuracy of 1.0. Full article
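
The threshold-tuning step mentioned above can be pictured as sweeping a cut-off over the continuous output of an evolved expression and keeping the value that maximizes accuracy; in the sketch below both the "symbolic expression" and the data are placeholders.

```python
# Sketch of threshold tuning for a symbolic classifier: a continuous expression value
# per sample is turned into a class by a cut-off chosen to maximize accuracy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # ground-truth slice class (binary toy)

def symbolic_expression(x: np.ndarray) -> np.ndarray:
    return 1.3 * x[:, 0] + 0.4 * x[:, 1] - 0.05       # stand-in for a GP-evolved expression

scores = symbolic_expression(X)
thresholds = np.linspace(scores.min(), scores.max(), 201)
accs = [np.mean((scores > t).astype(int) == y) for t in thresholds]
best = int(np.argmax(accs))
print(f"best threshold ≈ {thresholds[best]:.3f}, accuracy ≈ {accs[best]:.3f}")
```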

32 pages, 2442 KiB  
Article
Federated Learning System for Dynamic Radio/MEC Resource Allocation and Slicing Control in Open Radio Access Network
by Mario Martínez-Morfa, Carlos Ruiz de Mendoza, Cristina Cervelló-Pastor and Sebastia Sallent-Ribes
Future Internet 2025, 17(3), 106; https://doi.org/10.3390/fi17030106 - 26 Feb 2025
Abstract
The evolution of cellular networks from fifth-generation (5G) architectures to beyond 5G (B5G) and sixth-generation (6G) systems necessitates innovative solutions to overcome the limitations of traditional Radio Access Network (RAN) infrastructures. Existing monolithic and proprietary RAN components restrict adaptability, interoperability, and optimal resource utilization, posing challenges in meeting the stringent requirements of next-generation applications. The Open Radio Access Network (O-RAN) and Multi-Access Edge Computing (MEC) have emerged as transformative paradigms, enabling disaggregation, virtualization, and real-time adaptability—which are key to achieving ultra-low latency, enhanced bandwidth efficiency, and intelligent resource management in future cellular systems. This paper presents a Federated Deep Reinforcement Learning (FDRL) framework for dynamic radio and edge computing resource allocation and slicing management in O-RAN environments. An Integer Linear Programming (ILP) model has also been developed, resulting in the proposed FDRL solution drastically reducing the system response time. On the other hand, unlike centralized Reinforcement Learning (RL) approaches, the proposed FDRL solution leverages Federated Learning (FL) to optimize performance while preserving data privacy and reducing communication overhead. Comparative evaluations against centralized models demonstrate that the federated approach improves learning efficiency and reduces bandwidth consumption. The system has been rigorously tested across multiple scenarios, including multi-client O-RAN environments and loss-of-synchronization conditions, confirming its resilience in distributed deployments. Additionally, a case study simulating realistic traffic profiles validates the proposed framework’s ability to dynamically manage radio and computational resources, ensuring efficient and adaptive O-RAN slicing for diverse and high-mobility scenarios. Full article
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)
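
At its core, the federated step the abstract relies on is a weighted average of client parameters; the minimal FedAvg-style sketch below uses invented parameter vectors and sample counts and is not the paper's FDRL agent.

```python
# Minimal federated-averaging aggregation: the server combines client parameters
# weighted by local sample counts. Shapes and counts are illustrative.
import numpy as np

def fedavg(client_params: list[np.ndarray], n_samples: list[int]) -> np.ndarray:
    weights = np.asarray(n_samples, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

clients = [np.random.randn(8) for _ in range(3)]   # e.g., three O-RAN client controllers
counts = [1200, 800, 400]                          # local experience sizes (assumed)
global_params = fedavg(clients, counts)
print(global_params.round(3))
```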

30 pages, 22571 KiB  
Article
Joint Pricing, Server Orchestration and Network Slice Deployment in Mobile Edge Computing Networks
by Yijian Hou, Kaisa Zhang, Gang Chuai, Weidong Gao, Xiangyu Chen and Siqi Liu
Electronics 2025, 14(5), 841; https://doi.org/10.3390/electronics14050841 - 21 Feb 2025
Abstract
The integration of mobile edge computing (MEC) and network slicing can provide low-latency and customized services. In such integrated wireless networks, we propose a pricing-driven joint MEC server orchestration and network slice deployment scheme (PD-JSOSD), jointly solving the pricing, MEC server orchestration and network slicing deployment issues. We divide the system into an infrastructure provider layer (IPL), network planning layer (NPL) and resource allocation layer (RAL), and a three-stage Stackelberg game is proposed to describe their relationships. To obtain the Stackelberg equalization, we propose a three-layer deep reinforcement learning (DRL) algorithm. Specifically, the dueling double deep Q-network (D3QN) is used in the IPL, and the DRL with branching dueling Q-network (BDQ) is used in the NPL and the RAL to cope with the large-scale discrete action spaces. Moreover, we propose an innovative illegal action modification algorithm to improve the convergence of the BDQ. Simulations verify the convergence of the three-layer DRL and the superiority of modified-BDQ in dealing with large-scale action spaces, where modified-BDQ can improve the convergence by 21.9% and 28.3%. Furthermore, compared with the benchmark algorithms, JSOSD in the NPL and the RAL can improve system utility by up to 52.1%, proving the superiority of the server orchestration and slice deployment scheme. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
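
The "illegal action modification" is described only at a high level; one common way to keep a branching Q-network from picking infeasible deployments is to mask such actions before the per-branch argmax, as in the sketch below, where the branches, Q-values, and feasibility masks are all invented.

```python
# Sketch of per-branch action masking: infeasible choices (e.g., deploying on a node
# without spare capacity) get their Q-values pushed to -inf before the argmax.
# The paper's "illegal action modification" for BDQ may differ in detail.
import numpy as np

q_values = [np.array([0.2, 1.5, 0.9]),      # branch 0: which MEC node hosts slice A
            np.array([1.1, 0.3, 0.8, 0.6])] # branch 1: which MEC node hosts slice B
feasible = [np.array([True, False, True]),  # node 1 has no spare capacity for slice A
            np.array([True, True, False, True])]

def masked_argmax(q: np.ndarray, mask: np.ndarray) -> int:
    q = np.where(mask, q, -np.inf)          # illegal actions can never win the argmax
    return int(np.argmax(q))

actions = [masked_argmax(q, m) for q, m in zip(q_values, feasible)]
print("selected node per branch:", actions)   # e.g., [2, 0]
```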

21 pages, 2417 KiB  
Article
Deep Deterministic Policy Gradient-Based Resource Allocation Considering Network Slicing and Device-to-Device Communication in Mobile Networks
by Hudson Henrique de Souza Lopes, Lucas Jose Ferreira Lima, Telma Woerle de Lima Soares and Flávio Henrique Teles Vieira
Sensors 2024, 24(18), 6079; https://doi.org/10.3390/s24186079 - 20 Sep 2024
Cited by 2
Abstract
Next-generation mobile networks, such as those beyond the 5th generation (B5G) and 6th generation (6G), have diverse network resource demands. Network slicing (NS) and device-to-device (D2D) communication have emerged as promising solutions for network operators. NS is a candidate technology for this scenario, where a single network infrastructure is divided into multiple (virtual) slices to meet different service requirements. Combining D2D and NS can improve spectrum utilization, providing better performance and scalability. This paper addresses the challenging problem of dynamic resource allocation with wireless network slices and D2D communications using deep reinforcement learning (DRL) techniques. More specifically, we propose an approach named DDPG-KRP based on deep deterministic policy gradient (DDPG) with K-nearest neighbors (KNNs) and reward penalization (RP) for undesirable action elimination to determine the resource allocation policy maximizing long-term rewards. The simulation results show that the DDPG-KRP is an efficient solution for resource allocation in wireless networks with slicing, outperforming other considered DRL algorithms. Full article
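
The two add-ons named in the acronym, K-nearest-neighbor discretization of the actor output and reward penalization of undesirable actions, can be sketched as follows; the discrete action grid, reward, and penalty are invented, and the DDPG networks themselves are not shown.

```python
# Toy version of the two ingredients named in the abstract: (1) map a continuous actor
# output to its K nearest valid discrete allocations, (2) penalize undesirable actions.
import numpy as np

valid_actions = np.array([[rb, p] for rb in range(0, 6) for p in range(0, 4)], float)
# each row = (resource blocks, power level) a slice/D2D pair may actually receive

def knn_discretize(proto_action: np.ndarray, k: int = 3) -> np.ndarray:
    d = np.linalg.norm(valid_actions - proto_action, axis=1)
    candidates = valid_actions[np.argsort(d)[:k]]       # K nearest feasible actions
    return candidates[0]                                # here: pick the closest (a critic could rank them)

def penalized_reward(raw_reward: float, action: np.ndarray, rb_budget: float = 4) -> float:
    penalty = 5.0 if action[0] > rb_budget else 0.0     # discourage over-allocation
    return raw_reward - penalty

proto = np.array([3.4, 1.7])                            # continuous output of a DDPG actor
a = knn_discretize(proto)
print("discrete action:", a, "reward:", penalized_reward(raw_reward=2.0, action=a))
```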

19 pages, 1186 KiB  
Article
PrismParser: A Framework for Implementing Efficient P4-Programmable Packet Parsers on FPGA
by Parisa Mashreghi-Moghadam, Tarek Ould-Bachir and Yvon Savaria
Future Internet 2024, 16(9), 307; https://doi.org/10.3390/fi16090307 - 27 Aug 2024
Abstract
The increasing complexity of modern networks and their evolving needs demand flexible, high-performance packet processing solutions. The P4 language excels in specifying packet processing in software-defined networks (SDNs). Field-programmable gate arrays (FPGAs) are ideal for P4-based packet parsers due to their reconfigurability and ability to handle data transmitted at high speed. This paper introduces three FPGA-based P4-programmable packet parsing architectural designs that translate P4 specifications into adaptable hardware implementations, called base, overlay, and pipeline, each optimized for a different parsing performance target. As modern network infrastructures evolve, the need for multi-tenant environments becomes increasingly critical. Multi-tenancy allows multiple independent users or organizations to share the same physical network resources while maintaining isolation and customized configurations. The rise of 5G and cloud computing has accelerated the demand for network slicing and virtualization technologies, enabling efficient resource allocation and management for multiple tenants. By leveraging P4-programmable packet parsers on FPGAs, our framework addresses these challenges by providing flexible and scalable solutions for multi-tenant network environments. The base parser offers a simple design for essential packet parsing, using minimal resources for high-speed processing. The overlay parser extends the base design for parallel processing, supporting various bus sizes and throughputs. The pipeline parser boosts throughput by segmenting parsing into multiple stages. The efficiency of the proposed approaches is evaluated through detailed resource consumption metrics measured on an Alveo U280 board, demonstrating throughputs of 15.2 Gb/s for the base design, 15.2 Gb/s to 64.42 Gb/s for the overlay design, and up to 282 Gb/s for the pipelined design. These results demonstrate high performance across a wide range of throughput requirements. The proposed approach yields low-latency, high-throughput streaming packet parsers directly from P4 programs, supporting parsing graphs with up to seven transitioning nodes and four connections between nodes. The functionality of the parsers was tested on enterprise networks, a firewall, and a 5G Access Gateway Function graph. Full article
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)
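
A P4 parse graph is essentially a state machine in which each state extracts a header and a lookup field selects the next state; the small software toy below walks an Ethernet → IPv4 → TCP/UDP graph over raw bytes, which only illustrates the graph structure and says nothing about the paper's FPGA architectures.

```python
# Software toy of a parse graph like the ones a P4-to-FPGA flow consumes:
# each state extracts a header and selects the next state from a lookup field.
import struct

def parse(pkt: bytes) -> dict:
    headers = {}
    eth_type, = struct.unpack_from("!H", pkt, 12)          # Ethernet: bytes 12-13 = EtherType
    headers["ethernet"] = {"ethertype": hex(eth_type)}
    if eth_type != 0x0800:                                  # only follow the IPv4 edge in this toy
        return headers
    ihl = (pkt[14] & 0x0F) * 4
    proto = pkt[14 + 9]
    headers["ipv4"] = {"ihl": ihl, "protocol": proto}
    l4 = 14 + ihl
    if proto in (6, 17):                                    # TCP or UDP
        src, dst = struct.unpack_from("!HH", pkt, l4)
        headers["tcp" if proto == 6 else "udp"] = {"sport": src, "dport": dst}
    return headers

# 14-byte Ethernet + minimal 20-byte IPv4 (protocol = 17, UDP) + 8-byte UDP header.
pkt = bytes(12) + b"\x08\x00" + b"\x45\x00\x00\x1c" + bytes(4) + b"\x40\x11\x00\x00" + bytes(8) + \
      b"\x30\x39\x00\x35" + b"\x00\x08\x00\x00"
print(parse(pkt))
```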

17 pages, 5068 KiB  
Article
MADDPG-Based Deployment Algorithm for 5G Network Slicing
by Lu Zhang, Junwei Li, Qianwen Yang, Chenglin Xu and Feng Zhao
Electronics 2024, 13(16), 3189; https://doi.org/10.3390/electronics13163189 - 12 Aug 2024
Cited by 1
Abstract
One of the core features of 5G networks is the ability to support multiple services on the same infrastructure, with network slicing being a key technology. However, existing network slicing architectures have limitations in efficiently handling slice requests with different requirements, particularly when addressing high-reliability and high-demand services, where many issues remain unresolved. For example, predicting whether actual physical resources can meet network slice request demands and achieving flexible, on-demand resource allocation for different types of slice requests are significant challenges. To address the need for more flexible and efficient service demands, this paper proposes a 5G network slicing deployment algorithm based on the Multi-Agent Deep Deterministic Policy Gradient (MADDPG). Firstly, a new 5G network slicing deployment system framework is established, which measures resources for three typical 5G network slicing scenarios (eMBB, mMTC, uRLLC) and processes different types of slice requests by predicting slice request traffic. Secondly, by adopting the multi-agent approach of MADDPG, the algorithm enhances cooperation between multiple service requests, decentralizes action selection for requests, and schedules resources separately for the three types of slice requests, thereby optimizing resource allocation. Finally, simulation results demonstrate that the proposed algorithm significantly outperforms existing algorithms in terms of resource efficiency and slice request acceptance rate, showcasing the advantages of multi-agent approaches in slice request handling. Full article
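
The admission side described above (predict slice-request traffic per scenario and check it against physical resources) can be pictured with the toy below, which uses a simple moving-average forecast rather than the paper's predictor; capacities and histories are invented, and the MADDPG agents themselves are not shown.

```python
# Toy admission check: forecast per-slice-type demand with a moving average and accept
# a new slice request only if the forecast plus the request fits the remaining capacity.
from collections import deque

capacity = {"eMBB": 100.0, "mMTC": 60.0, "uRLLC": 40.0}      # abstract resource units (assumed)
history = {"eMBB": deque([62, 70, 75], maxlen=3),
           "mMTC": deque([20, 22, 25], maxlen=3),
           "uRLLC": deque([30, 33, 35], maxlen=3)}

def predicted_load(slice_type: str) -> float:
    h = history[slice_type]
    return sum(h) / len(h)                                   # moving-average forecast

def admit(slice_type: str, requested: float) -> bool:
    return predicted_load(slice_type) + requested <= capacity[slice_type]

for st, req in [("eMBB", 20), ("mMTC", 10), ("uRLLC", 8)]:
    print(st, "request", req, "->", "accepted" if admit(st, req) else "rejected")
```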

12 pages, 3148 KiB  
Proceeding Paper
Evaluation of Cloud-Based Dynamic Network Scaling and Slicing for Next-Generation Wireless Networks
by Aykut Cubukcu, Ozlem Cubukcu, Adnan Kavak and Kerem Kucuk
Eng. Proc. 2024, 70(1), 45; https://doi.org/10.3390/engproc2024070045 - 12 Aug 2024
Cited by 1
Abstract
The relentless growth of wireless networks coupled with the burgeoning demand for dynamic resource allocation has spurred research into innovative solutions. This paper presents an evaluation of Cloud-based Dynamic Network Scaling and Slicing (CDNSS) as a promising approach to meet the evolving demands of wireless networks. By leveraging cloud infrastructure and slicing techniques, CDNSS offers the flexibility to dynamically scale resources and allocate network slices tailored to diverse service requirements. The evaluation encompasses the performance of CDNSS in terms of scalability, resource utilisation and Quality of Service (QoS) provisioning. Through extensive simulations and analyses, the efficacy of CDNSS in addressing the challenges of resource management and service differentiation in wireless networks is demonstrated. The findings underscore the potential of CDNSS as a pivotal technology to enhance the efficiency and adaptability of wireless network architectures in the era of dynamic connectivity demands. Full article

36 pages, 3662 KiB  
Article
Enhancing Network Slicing Security: Machine Learning, Software-Defined Networking, and Network Functions Virtualization-Driven Strategies
by José Cunha, Pedro Ferreira, Eva M. Castro, Paula Cristina Oliveira, Maria João Nicolau, Iván Núñez, Xosé Ramon Sousa and Carlos Serôdio
Future Internet 2024, 16(7), 226; https://doi.org/10.3390/fi16070226 - 27 Jun 2024
Cited by 21
Abstract
The rapid development of 5G networks and the anticipation of 6G technologies have ushered in an era of highly customizable network environments facilitated by the innovative concept of network slicing. This technology allows the creation of multiple virtual networks on the same physical infrastructure, each optimized for specific service requirements. Despite its numerous benefits, network slicing introduces significant security vulnerabilities that must be addressed to prevent exploitation by increasingly sophisticated cyber threats. This review explores the application of cutting-edge technologies—Artificial Intelligence (AI), specifically Machine Learning (ML), Software-Defined Networking (SDN), and Network Functions Virtualization (NFV)—in crafting advanced security solutions tailored for network slicing. AI’s predictive threat detection and automated response capabilities are analysed, highlighting its role in maintaining service integrity and resilience. Meanwhile, SDN and NFV are scrutinized for their ability to enforce flexible security policies and manage network functionalities dynamically, thereby enhancing the adaptability of security measures to meet evolving network demands. Thoroughly examining the current literature and industry practices, this paper identifies critical research gaps in security frameworks and proposes innovative solutions. We advocate for a holistic security strategy integrating ML, SDN, and NFV to enhance data confidentiality, integrity, and availability across network slices. The paper concludes with future research directions to develop robust, scalable, and efficient security frameworks capable of supporting the safe deployment of network slicing in next-generation networks. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)

34 pages, 5772 KiB  
Review
A Comprehensive Overview of Network Slicing for Improving the Energy Efficiency of Fifth-Generation Networks
by Josip Lorincz, Amar Kukuruzović and Zoran Blažević
Sensors 2024, 24(10), 3242; https://doi.org/10.3390/s24103242 - 20 May 2024
Cited by 7
Abstract
The introduction of fifth-generation (5G) mobile networks leads to an increase in energy consumption and higher operational costs for mobile network operators (MNOs). Consequently, the optimization of 5G networks’ energy efficiency is crucial, both in terms of reducing MNO costs and in terms of mitigating the negative environmental impact. However, many aspects of the 5G mobile network technology itself have been standardized, including the 5G network slicing concept. This enables the creation of multiple independent logical 5G networks within the same physical infrastructure. Since only the resources necessary for the realization of a specific 5G network slice need to be used, the question arises of whether the implementation of 5G network slicing can contribute to the improvement of 5G and future sixth-generation networks’ energy efficiency. To tackle this question, this review paper analyzes 5G network slicing and the energy demand of different network slicing use cases and mobile virtual network operator realizations based on network slicing. The paper also overviews standardized key performance indicators for the assessment of 5G network slices’ energy efficiency and discusses energy efficiency in 5G network slicing lifecycle management. In particular, to show how efficient network slicing can optimize the energy consumption of 5G networks, versatile 5G network slicing use case scenarios, approaches, and resource allocation concepts in the space, time, and frequency domains are discussed, including artificial intelligence-based implementations of network slicing. The results of the comprehensive discussion indicate that the different implementations and approaches to network slicing pave the way for possible further reductions in 5G MNO energy costs and carbon dioxide emissions in the future. Full article
(This article belongs to the Special Issue Energy-Efficient Communication Networks and Systems: 2nd Edition)

17 pages, 5318 KiB  
Article
Orchestrating Isolated Network Slices in 5G Networks
by Ali Esmaeily and Katina Kralevska
Electronics 2024, 13(8), 1548; https://doi.org/10.3390/electronics13081548 - 18 Apr 2024
Cited by 4
Abstract
Sharing resources through network slicing in a physical infrastructure facilitates service delivery to various sectors and industries. Nevertheless, ensuring security of the slices remains a significant hurdle. In this paper, we investigate the utilization of State-of-the-Art (SoA) Virtual Private Network (VPN) solutions in 5G networks to enhance security and performance when isolating slices. We deploy and orchestrate cloud-native network functions to create multiple scenarios that emulate real-life cellular networks. We evaluate the performance of the WireGuard, IPSec, and OpenVPN solutions while ensuring confidentiality and data protection within 5G network slices. The proposed architecture provides secure communication tunnels and performance isolation. Evaluation results demonstrate that WireGuard provides slice isolation in the control and data planes with higher throughput for enhanced Mobile Broadband (eMBB) and lower latency for Ultra-Reliable Low-Latency Communications (URLLC) slices compared to IPSec and OpenVPN. Our developments show the potential of implementing WireGuard isolation, as a promising solution, for providing secure and efficient network slicing, which fulfills the 5G key performance indicator values. Full article
