Search Results (499)

Search Parameters:
Keywords = telecommunication network

17 pages, 2421 KiB  
Article
Cross-Receiver Radio Frequency Fingerprint Identification: A Source-Free Adaptation Approach
by Jian Yang, Shaoxian Zhu, Zhongyi Wen and Qiang Li
Sensors 2025, 25(14), 4451; https://doi.org/10.3390/s25144451 - 17 Jul 2025
Viewed by 328
Abstract
Radio frequency fingerprint identification (RFFI) leverages the unique characteristics of radio signals resulting from inherent hardware imperfections for identification, making it essential for applications in telecommunications, cybersecurity, and surveillance. Despite the advancements brought by deep learning in enhancing RFFI accuracy, challenges persist in model deployment, particularly when transferring RFFI models across different receivers. Variations in receiver hardware can lead to significant performance declines due to shifts in data distribution. This paper introduces the source-free cross-receiver RFFI (SCRFFI) problem, which centers on adapting pre-trained RF fingerprinting models to new receivers without needing access to original training data from other devices, addressing concerns of data privacy and transmission limitations. We propose a novel approach called contrastive source-free cross-receiver network (CSCNet), which employs contrastive learning to facilitate model adaptation using only unlabeled data from the deployed receiver. By incorporating a three-pronged loss function strategy—minimizing information entropy loss, implementing pseudo-label self-supervised loss, and leveraging contrastive learning loss—CSCNet effectively captures the relationships between signal samples, enhancing recognition accuracy and robustness and thereby directly mitigating the impact of receiver variations and the absence of source data. Our theoretical analysis provides a solid foundation for the generalization performance of SCRFFI and is corroborated by extensive experiments on real-world datasets: under realistic noise and channel conditions, CSCNet significantly improves recognition accuracy and robustness, achieving an average improvement of at least 13% over existing methods and, notably, a 47% increase in specific challenging cross-receiver adaptation tasks. Full article
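
As a rough illustration of the three-pronged loss the abstract describes, here is a minimal sketch assuming a PyTorch classifier; the loss weights, temperature, and the use of logits as contrastive features are placeholder assumptions, not the paper's design:

```python
# Hedged sketch of the three adaptation losses named in the abstract:
# entropy minimization + pseudo-label self-supervision + contrastive learning.
import torch
import torch.nn.functional as F

def adaptation_loss(model, x, x_aug, tau=0.1, w_ent=1.0, w_pl=1.0, w_con=1.0):
    logits = model(x)                       # unlabeled batch from the new receiver
    probs = F.softmax(logits, dim=1)

    # 1) information-entropy loss: push predictions toward confident ones
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()

    # 2) pseudo-label self-supervised loss: fit the model's own hard labels
    pl = F.cross_entropy(logits, probs.argmax(dim=1).detach())

    # 3) contrastive (InfoNCE) loss between two views of each sample;
    #    logits are reused as features here purely for brevity
    z1 = F.normalize(logits, dim=1)
    z2 = F.normalize(model(x_aug), dim=1)
    sim = z1 @ z2.t() / tau                 # pairwise cosine similarities
    con = F.cross_entropy(sim, torch.arange(len(x), device=x.device))

    return w_ent * ent + w_pl * pl + w_con * con
```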

18 pages, 769 KiB  
Article
Optimization of Transmission Power in a 3D UAV-Enabled Communication System
by Jorge Carvajal-Rodríguez, David Vega-Sánchez, Christian Tipantuña, Luis Felipe Urquiza, Felipe Grijalva and Xavier Hesselbach
Drones 2025, 9(7), 485; https://doi.org/10.3390/drones9070485 - 10 Jul 2025
Viewed by 214
Abstract
Unmanned Aerial Vehicles (UAVs) are increasingly used in the new generation of communication systems. They serve as access points, base stations, relays, and gateways to extend network coverage, enhance connectivity, or offer communications services in places lacking telecommunication infrastructure. However, optimizing UAV placement in three-dimensional (3D) environments with diverse user distributions and uneven terrain conditions is a crucial challenge. Therefore, this paper proposes a novel framework to minimize UAV transmission power while ensuring a guaranteed data rate in realistic and complex scenarios. To this end, using the particle swarm optimization evolution (PSO-E) algorithm, this paper analyzes the impact of truncated user-distribution models for suburban, urban, and dense urban environments. Extensive simulations demonstrate that dense urban environments demand higher power than suburban and urban environments, with uniform user distributions requiring the most power in all scenarios. Conversely, Gaussian and exponential distributions exhibit lower power requirements, particularly in scenarios with concentrated user hotspots. The proposed model provides insight into achieving efficient network deployment and power optimization, offering practical solutions for future communication networks in complex 3D scenarios. Full article
(This article belongs to the Section Drone Communications)
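
For flavor, a minimal particle swarm optimization sketch for 3D UAV placement follows; the user layout, bounds, and distance-based cost are illustrative stand-ins, not the paper's PSO-E variant or channel model:

```python
# Toy PSO for placing one UAV at (x, y, h) to minimize a power-like cost.
import numpy as np

rng = np.random.default_rng(0)
users = rng.uniform(0, 500, size=(30, 2))           # hypothetical user layout (m)

def required_power(p):                              # p = (x, y, h) of the UAV
    d = np.sqrt(((users - p[:2]) ** 2).sum(axis=1) + p[2] ** 2)
    return (d ** 2).max()                           # worst-user squared distance
                                                    # as a stand-in rate constraint
n, iters = 40, 200
lo, hi = np.array([0, 0, 10.0]), np.array([500, 500, 150.0])
pos = rng.uniform(lo, hi, size=(n, 3))
vel = np.zeros_like(pos)
best_p = pos.copy()
best_f = np.array([required_power(p) for p in pos])
g = best_p[best_f.argmin()].copy()                  # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n, 1))
    vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([required_power(p) for p in pos])
    improved = f < best_f
    best_p[improved], best_f[improved] = pos[improved], f[improved]
    g = best_p[best_f.argmin()].copy()

print("best 3D placement:", g, "cost:", best_f.min())
```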

30 pages, 3860 KiB  
Review
OTDR Development Based on Single-Mode Fiber Fault Detection
by Hui Liu, Tong Zhao and Mingjiang Zhang
Sensors 2025, 25(14), 4284; https://doi.org/10.3390/s25144284 - 9 Jul 2025
Viewed by 530
Abstract
With the large-scale application and high-quality development demands of optical fiber cables, higher requirements have been placed on the corresponding measurement technologies. In recent years, optical fiber testing has played a crucial role in evaluating cable performance, as well as in the deployment, operation, maintenance, fault repair, and upgrade of optical networks. The Optical Time-Domain Reflectometer (OTDR) is a fiber fault diagnostic tool recommended by standards bodies such as the International Telecommunication Union and the International Electrotechnical Commission. It is used to certify the performance of new fiber links and monitor the status of existing ones, detecting and locating fault events with advantages including simple operation, rapid response, and cost-effectiveness. First, this paper introduces the working principle and system architecture of OTDR, along with a brief discussion of its performance evaluation metrics. Next, a comprehensive review of improved OTDR technologies and systems is provided, categorizing the different performance-enhancement methods, including work from 2024 that extends measurement distance with simple, low-cost structures and work from 2025 that achieves high-spatial-resolution measurement of reflective and non-reflective fiber events. Finally, the development trends and future research directions of OTDR are outlined, aiming at the development of low-cost, high-performance OTDR systems. Full article
(This article belongs to the Special Issue Fault Diagnosis Based on Sensing and Control Systems)
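
For orientation, the core OTDR localization relation is the standard round-trip formula (a well-known identity, not specific to this review):

```latex
d = \frac{c\,\Delta t}{2 n_g}
```

where c is the vacuum speed of light, Δt the delay between launching a pulse and receiving its backscatter, and n_g the fiber group index. With n_g ≈ 1.47 for standard single-mode fiber, a delay of Δt = 100 µs places an event near 3.0×10⁸ × 10⁻⁴ / (2 × 1.47) ≈ 10.2 km.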

22 pages, 1350 KiB  
Article
From Patterns to Predictions: Spatiotemporal Mobile Traffic Forecasting Using AutoML, TimeGPT and Traditional Models
by Hassan Ayaz, Kashif Sultan, Muhammad Sheraz and Teong Chee Chuah
Computers 2025, 14(7), 268; https://doi.org/10.3390/computers14070268 - 8 Jul 2025
Viewed by 394
Abstract
Call Detail Records (CDRs) from mobile networks offer valuable insights into both network performance and user behavior. With the growing importance of data analytics, analyzing CDRs has become critical for optimizing network resources by forecasting demand across spatial and temporal dimensions. In this study, we examine publicly available CDR data from Telecom Italia to explore the spatiotemporal dynamics of mobile network activity in Milan. This analysis reveals key patterns in traffic distribution, highlighting both high- and low-demand regions as well as notable variations in usage over time. To anticipate future network usage, we employ both Automated Machine Learning (AutoML) and the transformer-based TimeGPT model, comparing their performance against traditional forecasting methods such as Long Short-Term Memory (LSTM), ARIMA, and SARIMA. Model accuracy is assessed using standard evaluation metrics, including root mean square error (RMSE), mean absolute error (MAE), and the coefficient of determination (R²). Results show that AutoML delivers the most accurate forecasts, with significantly lower RMSE (2.4990 vs. 14.8226) and MAE (1.0284 vs. 7.7789) than TimeGPT and a higher R² score (99.96% vs. 98.62%). Our findings underscore the strengths of modern predictive models in capturing complex traffic behaviors and demonstrate their value in resource planning, congestion management, and overall network optimization. Importantly, this study is one of the first to comprehensively assess AutoML and TimeGPT on a high-resolution, real-world CDR dataset from a major European city. By merging machine learning techniques with advanced temporal modeling, this study provides a strong framework for scalable and intelligent mobile traffic prediction. It thus highlights the ability of AutoML to simplify model development and the potential of TimeGPT to extend transformer-based prediction to the telecommunications domain. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
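
The three reported metrics are standard and easy to reproduce; a small sketch follows (the arrays are made-up placeholders, not the paper's data):

```python
# RMSE, MAE, and R^2 as used to compare the forecasters in the abstract.
import numpy as np

def rmse(y, yhat): return float(np.sqrt(np.mean((y - yhat) ** 2)))
def mae(y, yhat):  return float(np.mean(np.abs(y - yhat)))
def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)               # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)           # total sum of squares
    return float(1 - ss_res / ss_tot)

y_true = np.array([120.0, 95.0, 130.0, 160.0, 110.0])   # hypothetical traffic
y_pred = np.array([118.5, 97.0, 128.0, 161.5, 112.0])
print(rmse(y_true, y_pred), mae(y_true, y_pred), r2(y_true, y_pred))
```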

20 pages, 1067 KiB  
Article
The Impact of Dual-Channel Investments and Contract Mechanisms on Telecommunications Supply Chains
by Yongjae Kim
Systems 2025, 13(7), 539; https://doi.org/10.3390/systems13070539 - 1 Jul 2025
Viewed by 255
Abstract
This study examines how contract structures influence coordination and innovation incentives in dual-channel telecommunications supply chains. We consider a setting where a mobile network operator (MNO) supplies services both directly to consumers and indirectly through a mobile virtual network operator (MVNO), which competes in the retail market. Using a game-theoretic framework, we evaluate how different contracts—single wholesale pricing, revenue sharing, and quantity discounts—shape strategic decisions, particularly in the presence of investment spillovers between parties. A key coordination problem emerges from the externalized gains of innovation, where one party’s investment generates value for both participants. Our results show that single wholesale and revenue sharing contracts often lead to suboptimal investment and profit outcomes. In contrast, quantity discount contracts, especially when combined with appropriate transfer payments, improve coordination and enhance the total performance of the supply chain. We also find that innovation led by the MVNO, while generally less impactful, can still yield reciprocal benefits for the MNO, reinforcing the value of cooperative arrangements. These findings emphasize the importance of contract design in managing interdependence and improving efficiency in decentralized supply chains. This study offers theoretical and practical implications for telecommunications providers and policymakers aiming to promote innovation and mutually beneficial outcomes through well-aligned contractual mechanisms. Full article
(This article belongs to the Special Issue Systems Methodology in Sustainable Supply Chain Resilience)
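
To see why a single wholesale price underperforms and a quantity discount can coordinate the chain, a textbook double-marginalization toy example is sketched below; this linear-demand setup is a generic illustration, not the paper's MNO/MVNO model:

```python
# Toy channel-coordination arithmetic: linear demand p = a - b*q, unit cost c.
a, b, c = 100.0, 1.0, 20.0

q_int = (a - c) / (2 * b)                 # integrated (coordinated) output
pi_int = (a - b * q_int - c) * q_int      # total chain profit = 1600.0

w = 60.0                                  # single wholesale price above cost
q_w = (a - w) / (2 * b)                   # downstream best response shrinks output
pi_w = (a - b * q_w - c) * q_w            # total chain profit falls to 1200.0

print(q_int, pi_int)                      # 40.0 1600.0
print(q_w, pi_w)                          # 20.0 1200.0
# A quantity-discount schedule whose marginal price approaches c restores
# q_int, and a transfer payment then splits the recovered surplus.
```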

37 pages, 4400 KiB  
Article
Optimizing Weighted Fair Queuing with Deep Reinforcement Learning for Dynamic Bandwidth Allocation
by Mays A. Mawlood and Dhari Ali Mahmood
Telecom 2025, 6(3), 46; https://doi.org/10.3390/telecom6030046 - 1 Jul 2025
Viewed by 457
Abstract
The rapid growth of high-quality telecommunications demands enhanced queueing system performance. Traditional bandwidth distribution often struggles to adapt to dynamic changes, network conditions, and erratic traffic patterns. Internet traffic fluctuates over time, causing resource underutilization. To address these challenges, this paper proposes a new adaptive algorithm called Weighted Fair Queues continual Deep Reinforcement Learning (WFQ continual-DRL), which integrates the Soft Actor-Critic (SAC) deep reinforcement learning algorithm with the Elastic Weight Consolidation (EWC) approach. This technique is designed to overcome catastrophic forgetting in neural networks, thereby enhancing dynamic bandwidth allocation in network routers. The agent is trained to allocate bandwidth weights for multiple queues dynamically by interacting with the environment and observing queue lengths. The performance of the proposed adaptive algorithm was evaluated on systems of eight queues and then extended to twelve queues. The model achieved higher cumulative rewards than previous studies, indicating improved overall performance. The Mean Squared Error (MSE) and Mean Absolute Error (MAE) decreased, suggesting effectively optimized bandwidth allocation, while a reduced Root Mean Square Error (RMSE) indicated improved prediction accuracy and enhanced fairness as computed by Jain's index. The proposed algorithm was validated using real-world network traffic data, ensuring a robust model under dynamic queuing requirements. Full article
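
The EWC ingredient has a standard form (Kirkpatrick et al., 2017); a minimal sketch follows, where how the penalty couples to SAC's actor/critic losses and the λ value are assumptions, not the paper's configuration:

```python
# Illustrative EWC regularizer: penalize drift of parameters that were
# important (high Fisher information) for the previously learned queue setup.
import torch

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """L_EWC = (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2

    fisher and old_params are per-parameter snapshots taken after training
    on the previous task (e.g., the eight-queue system)."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# Training on the twelve-queue task would then minimize:
#   total_loss = sac_loss + ewc_penalty(model, fisher, old_params)
```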

34 pages, 15255 KiB  
Article
An Experimental Tethered UAV-Based Communication System with Continuous Power Supply
by Veronica Rodriguez, Christian Tipantuña, Diego Reinoso, Jorge Carvajal-Rodriguez, Carlos Egas Acosta, Pablo Proaño and Xavier Hesselbach
Future Internet 2025, 17(7), 273; https://doi.org/10.3390/fi17070273 - 20 Jun 2025
Viewed by 401
Abstract
Ensuring reliable communication in remote or disaster-affected areas is a technical challenge: deployment is unplanned, and conventional telecommunications infrastructure is difficult to place and costly to operate in such conditions. To address this problem, unmanned aerial vehicles (UAVs) have emerged as an excellent alternative for providing quick connectivity in remote or disaster-affected regions at a reasonable cost. However, the limited battery autonomy of UAVs restricts their flight service time. This paper proposes a communication system based on a tethered UAV (T-UAV) capable of continuous operation through a wired power network connected to a ground station. The communication system is built on low-cost devices, such as Raspberry Pi platforms, and offers wireless IP telephony services, providing high-quality and reliable communication. Experimental tests assessed power consumption, UAV stability, and data transmission performance. Our results prove that the T-UAV, based on a quadcopter drone, operates stably at 16 V and 20 A, ensuring consistent VoIP communications at a height of 10 m with low latency. These experimental findings underscore the potential of T-UAVs as cost-effective alternatives for extending or providing communication networks in remote regions, emergency scenarios, or underserved areas. Full article
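
The reported operating point implies a draw of roughly 16 V × 20 A ≈ 320 W; a quick budget sketch follows, where the tether resistance is an assumed figure for illustration, not a value from the paper:

```python
# Back-of-the-envelope tether power budget at the reported operating point.
v_uav, i_uav = 16.0, 20.0            # reported tether-side voltage and current
p_uav = v_uav * i_uav                # ~320 W drawn by the quadcopter

r_cable = 0.2                        # assumed round-trip tether resistance (ohm)
v_drop = i_uav * r_cable             # 4 V lost along the tether
p_loss = i_uav ** 2 * r_cable        # 80 W dissipated in the cable

print(p_uav, v_drop, p_loss)         # 320.0 4.0 80.0
```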

19 pages, 744 KiB  
Article
Three-Dimensional Trajectory Optimization for UAV-Based Post-Disaster Data Collection
by Renkai Zhao and Gia Khanh Tran
J. Sens. Actuator Netw. 2025, 14(3), 63; https://doi.org/10.3390/jsan14030063 - 16 Jun 2025
Viewed by 586
Abstract
In Japan, natural disasters occur frequently. Serious disasters may damage traffic networks and telecommunication infrastructure, leading to isolated disaster areas. In this article, unmanned aerial vehicles (UAVs) are used for data collection in place of unavailable ground-based stations in isolated disaster areas. Detailed information about the damage situation is collected from the user equipment (UE) by a UAV through a fly–hover–fly procedure and then sent to the disaster response headquarters for disaster relief. However, minimizing mission completion time becomes a crucial task given the requirement of rapid response and the battery constraint of UAVs. Therefore, the authors propose a three-dimensional UAV flight trajectory, determining the optimal flight altitude and placement of hovering points by transforming the original K-means clustering problem into a location set cover problem (LSCP) solved via a genetic algorithm (GA) approach. Simulation results show the feasibility of the proposed method in reducing the mission completion time. Full article
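
The set-cover view is easy to illustrate: each candidate hover point covers the UEs within some radius determined by altitude. A greedy cover is sketched below in place of the paper's GA, with hypothetical UE positions and an assumed coverage radius:

```python
# Greedy location-set-cover sketch for choosing UAV hovering points.
import numpy as np

rng = np.random.default_rng(1)
ues = rng.uniform(0, 1000, size=(50, 2))       # hypothetical UE positions (m)
candidates = ues.copy()                        # hover directly above each UE
radius = 150.0                                 # assumed coverage radius (m)

cover = [set(np.nonzero(np.linalg.norm(ues - c, axis=1) <= radius)[0])
         for c in candidates]                  # UEs each candidate can serve

uncovered, chosen = set(range(len(ues))), []
while uncovered:
    # pick the candidate covering the most still-uncovered UEs
    i = max(range(len(cover)), key=lambda k: len(cover[k] & uncovered))
    chosen.append(i)
    uncovered -= cover[i]

print(f"{len(chosen)} hovering points cover all {len(ues)} UEs")
```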

13 pages, 2752 KiB  
Article
Chaos, Hyperchaos and Transient Chaos in a 4D Hopfield Neural Network: Numerical Analyses and PSpice Implementation
by Victor Kamdoum Tamba, Gaetant Ngoko, Viet-Thanh Pham and Giuseppe Grassi
Mathematics 2025, 13(11), 1872; https://doi.org/10.3390/math13111872 - 3 Jun 2025
Viewed by 422
Abstract
The human brain is an extremely sophisticated system. Several neural models have been proposed to mimic and understand brain function. Most of them incorporate memristors to simulate autapse or self-coupling, electromagnetic radiation, and the synaptic weight of the neuron. This article introduces and studies the dynamics of a Hopfield neural network (HNN) consisting of four neurons, in which one of the synaptic weights is replaced by a memristor. Theoretical aspects such as dissipation, the requirements for the existence of attractors, symmetry, equilibrium states, and stability are studied. Numerical investigations of the model reveal that it develops very rich and diverse behaviors such as chaos, hyperchaos, and transient chaos. These numerical results are further supported by an electronic circuit of the system, constructed and simulated in PSpice, and both approaches show good agreement. In light of the findings from the numerical and experimental studies, the 4D Hopfield neural network considered in this work appears more complex than its original version, which did not include a memristor. Full article
(This article belongs to the Special Issue Chaotic Systems and Their Applications, 2nd Edition)
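
For context, a generic 4-neuron Hopfield model has the form dx/dt = −x + W·tanh(x); the sketch below only mimics a memristive synapse with a state-dependent weight entry, and the matrix values are illustrative, not the authors' model:

```python
# Generic 4D Hopfield-type system with one state-modulated ("memristor-like")
# synaptic weight; integrate to obtain trajectories for phase portraits.
import numpy as np
from scipy.integrate import solve_ivp

W = np.array([[ 1.0, -1.4,  0.5,  0.0],
              [ 1.2,  0.0, -1.0,  0.3],
              [-0.8,  1.1,  0.2, -0.5],
              [ 0.4, -0.6,  1.3,  0.1]])   # illustrative weights only

def hnn(t, x):
    Wt = W.copy()
    Wt[0, 1] = -1.4 - 0.5 * x[3] ** 2      # crude memristor-like modulation
    return -x + Wt @ np.tanh(x)

sol = solve_ivp(hnn, (0, 200), [0.1, 0.0, -0.1, 0.05], max_step=0.05)
print(sol.y.shape)                          # (4, n_steps) state trajectories
```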

34 pages, 568 KiB  
Review
The Connectivity of DVcube Networks: A Survey
by Ruo-Wei Hung
Mathematics 2025, 13(11), 1836; https://doi.org/10.3390/math13111836 - 30 May 2025
Viewed by 394
Abstract
Analyzing network connectivity is important for evaluating the robustness, efficiency, and overall performance of various architectural designs. By examining the intricate interactions among nodes and their connections, researchers can determine a network’s resilience to failures, its capacity to support efficient information flow, and its adaptability to dynamic conditions. These insights are critical across multiple domains—such as telecommunications, computer science, biology, and social networks—where optimizing connectivity can significantly enhance functionality and reliability. In the literature, there are many variations of connectivity to measure network resilience and fault tolerance. In this survey, we focus on connectivity, tightly super connectivity, and h-extra connectivity within DVcube networks—a compound architecture combining disk-ring and hypercube-like topologies. Additionally, we identify several open problems to encourage further exploration in future research. Full article
(This article belongs to the Section E: Applied Mathematics)
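
As a quick, concrete illustration of the connectivity measures such surveys study (DVcube graphs themselves are not built here): the n-dimensional hypercube Q_n is n-connected, which networkx confirms directly.

```python
# Node connectivity of hypercubes: kappa(Q_n) = n.
import networkx as nx

for n in (3, 4, 5):
    q = nx.hypercube_graph(n)
    print(n, nx.node_connectivity(q))   # prints 3, 4, 5
```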

29 pages, 2566 KiB  
Article
Machine Learning and Deep Learning-Based Atmospheric Duct Interference Detection and Mitigation in TD-LTE Networks
by Rasendram Muralitharan, Upul Jayasinghe, Roshan G. Ragel and Gyu Myoung Lee
Future Internet 2025, 17(6), 237; https://doi.org/10.3390/fi17060237 - 27 May 2025
Viewed by 585
Abstract
The variations in atmospheric refractivity in the lower atmosphere create a natural phenomenon known as atmospheric ducts. Atmospheric ducts allow radio signals to travel long distances. This can adversely affect telecommunication systems, as cells with similar frequencies can interfere with each other due to frequency reuse, which is intended to optimize resource allocation. Thus, the downlink signals of one base station can travel a long distance via an atmospheric duct and interfere with the uplink signals of another base station. This scenario is known as atmospheric duct interference (ADI). ADI can be mitigated using digital signal processing, machine learning, and hybrid approaches. To address this challenge, we explore machine learning and deep learning techniques for ADI prediction and mitigation in Time-Division Long-Term Evolution (TD-LTE) networks. Our results show that the Random Forest algorithm achieves the highest prediction accuracy, while a convolutional neural network demonstrates the best mitigation performance. Additionally, we propose optimizing special subframe configurations in TD-LTE networks using machine learning-based methods to effectively reduce ADI. Full article
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)
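
A minimal sketch of the prediction stage follows, assuming a Random Forest on synthetic per-subframe features; the paper's real features come from TD-LTE traces, so everything below is a placeholder:

```python
# Hedged Random Forest sketch for ADI presence prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 6))                    # e.g., uplink interference stats
y = (X[:, 0] + 0.5 * X[:, 3] > 0.8).astype(int)   # synthetic "ADI present" label

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("accuracy:", accuracy_score(yte, clf.predict(Xte)))
```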

20 pages, 817 KiB  
Article
Cross-Layer Security for 5G/6G Network Slices: An SDN, NFV, and AI-Based Hybrid Framework
by Zeina Allaw, Ola Zein and Abdel-Mehsen Ahmad
Sensors 2025, 25(11), 3335; https://doi.org/10.3390/s25113335 - 26 May 2025
Viewed by 899
Abstract
Within the dynamic landscape of fifth-generation (5G) and emerging sixth-generation (6G) wireless networks, the adoption of network slicing has revolutionized telecommunications by enabling flexible and efficient resource allocation. However, this advancement introduces new security challenges, as traditional protection mechanisms struggle to address the dynamic and complex nature of sliced network environments. This study proposes a Hybrid Security Framework Using Cross-Layer Integration, combining Software-Defined Networking (SDN), Network Function Virtualization (NFV), and AI-driven anomaly detection to strengthen network defenses. By integrating security mechanisms across multiple layers, the framework effectively mitigates threats, ensuring the integrity and confidentiality of network slices. An implementation was developed, focusing on the AI-based detection process using a representative 5G security dataset. The results demonstrate promising detection accuracy and real-time response capabilities. While full SDN/NFV integration remains under development, these findings lay the groundwork for scalable, intelligent security architectures tailored to the evolving needs of next-generation networks. Full article
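
To make the AI-driven detection stage concrete, here is a generic anomaly-detection sketch using an Isolation Forest as a stand-in model; the paper does not name this algorithm, and the features and contamination rate below are assumptions, not its 5G security dataset:

```python
# Illustrative anomaly detection on per-slice traffic features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
normal = rng.normal(0, 1, size=(5000, 8))        # benign slice-traffic features
attack = rng.normal(4, 1, size=(50, 8))          # injected anomalies
X = np.vstack([normal, attack])

det = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = det.predict(X)                           # -1 = anomaly, 1 = normal
print("flagged:", int((flags == -1).sum()), "of", len(X))
```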

20 pages, 6315 KiB  
Article
Analysis of Digital Skills and Infrastructure in EU Countries Based on DESI 2024 Data
by Kvitoslava Obelovska, Andrii Abziatov, Anastasiya Doroshenko, Ivanna Dronyuk, Oleh Liskevych and Rostyslav Liskevych
Future Internet 2025, 17(6), 228; https://doi.org/10.3390/fi17060228 - 22 May 2025
Viewed by 2465
Abstract
This paper presents an analysis of digital skills and network infrastructure in the European Union (EU) countries based on data from the Digital Economy and Society Index (DESI) 2024. We analyze the current state of digital skills and network infrastructure (called digital infrastructure in the DESI framework) in EU countries, identifying key trends and differences between EU member states. The analysis shows significant differences in the relative share of citizens with a certain level of digital skills across countries, both among ordinary users of digital services and among information and communication technology professionals. The analysis of digital infrastructure includes fixed broadband coverage, mobile broadband, and edge networks, the latter of which are expected to become an important component of future digital infrastructure. The results highlight the importance of harmonizing the development of digital skills and digital infrastructure to support the EU's digital transformation. Significant attention is paid to 5G technology. The feasibility of including in DESI a new additional indicator for next-generation 5G technology in the 24.25–52.6 GHz frequency range is shown. The value of this indicator can be used to assess the readiness of the EU economy for technological leaps that place extremely high demands on reliability and data transmission delays. The analysis of the current state of digital skills and infrastructure contributes to understanding the potential for the future development of the EU digital economy. Full article
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)

30 pages, 1552 KiB  
Review
3GPP Evolution from 5G to 6G: A 10-Year Retrospective
by Xingqin Lin
Telecom 2025, 6(2), 32; https://doi.org/10.3390/telecom6020032 - 20 May 2025
Viewed by 2776
Abstract
The 3rd Generation Partnership Project (3GPP) evolution of mobile communication technologies from 5G to 6G has been a transformative journey spanning a decade, shaped by six releases from Release 15 to Release 20. This article provides a retrospective of this evolution, highlighting the technical advancements, challenges, and milestones that have defined the transition from the foundational 5G era to the emergence of 6G. Starting with Release 15, which marked the birth of 5G and its New Radio (NR) air interface, the journey progressed through Release 16, where 5G was qualified as an International Mobile Telecommunications-2020 (IMT-2020) technology, and Release 17, which expanded 5G into new domains such as non-terrestrial networks. Release 18 ushered in the 5G-Advanced era, incorporating novel technologies like artificial intelligence. Releases 19 and 20 continue this momentum, focusing on commercially driven enhancements while laying the groundwork for the 6G era. This article explores how 3GPP technology evolution has shaped the telecommunications landscape over the past decade, bridging two mobile generations. It concludes with insights into learned lessons, future challenges, and opportunities, offering guidelines on 6G evolution for 2030 and beyond. Full article

14 pages, 4156 KiB  
Article
Supercontinuum Generation in Suspended Core Fibers Based on Intelligent Algorithms
by Meiqian Jing and Tigang Ning
Photonics 2025, 12(5), 497; https://doi.org/10.3390/photonics12050497 - 16 May 2025
Viewed by 360
Abstract
This study presents a reverse-optimization framework for supercontinuum (SC) generation in Ge₂₀Sb₁₅Se₆₅ suspended-core fibers (SCFs), integrating neural network modeling with the Nutcracker Optimization Algorithm to co-design optimal fiber structures and pump pulse parameters. A high-nonlinearity SCF structure (γ ≈ 6–7 W⁻¹·m⁻¹) was first designed, and a neural network model was developed to accurately predict effective modal refractive indices and mode-field areas (RMSE < 1%). The generalized nonlinear Schrödinger equation was then used to study spectral broadening influenced by structural and pulse parameters. Global optimization was performed in four-dimensional structural and seven-dimensional combined parameter spaces, significantly enhancing computational efficiency. Simulation results demonstrated that the optimized design achieved a broad and flat SC spectrum extending from 0.7 µm to 25 µm (at −20 dB intensity), with lower peak power requirements compared to previous studies achieving similar coverage. The robustness and manufacturing tolerances of the optimized fiber structure were also analyzed, verifying the reliability of the design. This intelligent reverse-design strategy provides practical guidance and theoretical foundations for mid-infrared SC fiber design. Full article
(This article belongs to the Special Issue Optical Fiber Lasers and Laser Technology)
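
For context, the generalized nonlinear Schrödinger equation referred to above is commonly written in the standard SC-modeling form below; the paper's specific parameter values are not reproduced here:

```latex
\frac{\partial A}{\partial z} = -\frac{\alpha}{2}A
  + \sum_{k \ge 2} \frac{i^{k+1}}{k!}\,\beta_k \frac{\partial^k A}{\partial T^k}
  + i\gamma \left(1 + \frac{i}{\omega_0}\frac{\partial}{\partial T}\right)
    \left(A(z,T)\int_{-\infty}^{\infty} R(T')\,\bigl|A(z,T-T')\bigr|^{2}\,dT'\right)
```

Here A(z,T) is the pulse envelope, α the fiber loss, β_k the dispersion coefficients, γ the nonlinear parameter quoted in the abstract (≈6–7 W⁻¹·m⁻¹), and R(T) the Raman response function.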
