Advances in Mobile Network and Intelligent Communication

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 10 July 2025 | Viewed by 14142

Special Issue Editors

Guest Editor
School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, UK
Interests: wireless communications; mobile networks; Internet of Things; mobile edge computing; artificial intelligence; intelligent transport systems

Guest Editor
School of Computing, Macquarie University, Sydney, NSW 2109, Australia
Interests: federated learning; privacy protection; networking

Guest Editor
School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan 430074, China
Interests: wireless communications; mobile computing; Internet of Things

Guest Editor
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
Interests: wireless networking; Internet of Vehicles; edge intelligence

Special Issue Information

Dear Colleagues,

Recently, we have witnessed rapid growth in mobile data traffic and the expansion of the IoT ecosystem with massive connectivity. This trend has increased the capabilities and intelligence of IoT devices, boosted by advances in edge computing and deep learning technologies. While 5G mobile networks are being gradually deployed to provide higher data rates, lower latency, and enhanced throughput to accommodate the increased traffic and connectivity, research on 6G mobile networks is underway. 6G systems are envisioned to be intelligent and autonomous, evolving from connecting devices to connecting intelligence, and are expected to support many emerging applications, such as unmanned aerial vehicles, factory automation, extended reality services, and autonomous driving. The increasing network complexity and the number of applications with diverse and demanding QoS requirements will drive technology innovations in mobile networks and intelligent communications.

This Special Issue will focus on the recent advances in theoretical and applied studies of mobile networks and intelligent communication. Topics include, but are not limited to:

  • 5G and beyond networks;
  • Cellular networks, WLAN, WPAN and LPWAN;
  • mmWave, THz, VLC and reconfigurable intelligent surfaces;
  • Radio access, resource allocation and spectrum sharing;
  • Edge computing and intelligence;
  • Quality-of-service, network planning and management;
  • Software-defined networking and network virtualization;
  • AI, machine learning and digital twin for wireless networks;
  • Connected unmanned aerial/terrestrial/underwater systems;
  • Internet of Things, smart wireless systems and applications;
  • Wireless power and energy-harvesting systems;
  • Mobile data science and analysis;
  • Integrated communication, sensing and localization;
  • Security and privacy for wireless networks;
  • Modeling, performance analysis and optimization;
  • Platforms, infrastructures and field trials for wireless networks.

Dr. Jianhua He
Dr. Yipeng Zhou
Prof. Dr. Wei Wang
Dr. Fan Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 5G/6G networks
  • mobile networks
  • intelligent communications
  • Internet of Things
  • edge computing and intelligence
  • AI for wireless networks

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Research

18 pages, 708 KiB  
Article
Improved Connected-Mode Discontinuous Reception (C-DRX) Power Saving and Delay Reduction Using Ensemble-Based Traffic Prediction
by Ji-Hee Yu, Yoon-Ju Choi, Seung-Hwan Seo, Seong-Gyun Choi, Hye-Yoon Jeong, Ja-Eun Kim, Myung-Sun Baek, Young-Hwan You and Hyoung-Kyu Song
Mathematics 2025, 13(6), 974; https://doi.org/10.3390/math13060974 - 15 Mar 2025
Viewed by 500
Abstract
This paper proposes a traffic prediction-based connected-mode discontinuous reception (C-DRX) approach to enhance energy efficiency and reduce data transmission delay in mobile communication systems. Traditional C-DRX determines user equipment (UE) activation based on a fixed timer cycle, which may not align with actual traffic occurrences, leading to unnecessary activation and increased energy consumption or delays in data reception. To address this issue, this paper presents an ensemble model combining random forest (RF) and a temporal convolutional network (TCN) to predict traffic occurrences and adjust C-DRX activation timing. RF extracts traffic features, while TCN captures temporal dependencies in traffic data. The predictions from both models are combined to determine C-DRX activation timing. Additionally, the extended activation approach is introduced to refine activation timing by extending the activation window around predicted traffic occurrences. The proposed method is evaluated using real-world Netflix traffic data, achieving a 20.9% decrease in unnecessary active time and a 70.7% reduction in mean delay compared to the conventional periodic C-DRX approach. Overall, the proposed method significantly enhances energy efficiency and quality of service (QoS) in LTE and 5G networks, making it a viable solution for future mobile communication systems.
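
The activation logic described above can be sketched in a few lines. This is an illustration, not the authors' implementation: the traffic trace is a synthetic two-state Markov process, and the TCN branch is stood in for by a logistic regression over a sliding history window to keep the example dependency-light.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
slots = 2000

# Bursty ON/OFF traffic (two-state Markov chain) so there is something to predict.
traffic = np.zeros(slots, dtype=int)
for t in range(1, slots):
    stay = 0.9 if traffic[t - 1] else 0.95
    traffic[t] = traffic[t - 1] if rng.random() < stay else 1 - traffic[t - 1]

W = 16  # history window length (hypothetical choice)
X = np.array([traffic[i - W:i] for i in range(W, slots)])
y = traffic[W:]

split = 3 * len(y) // 4
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:split], y[:split])
lr = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])  # stand-in for the TCN

# Soft-voting ensemble: average the two branches' traffic probabilities.
p = 0.5 * rf.predict_proba(X[split:])[:, 1] + 0.5 * lr.predict_proba(X[split:])[:, 1]
pred = p > 0.5

# "Extended activation": keep the UE awake for +/- k slots around predicted traffic.
k = 1
active = np.zeros(len(pred), dtype=int)
for i in np.flatnonzero(pred):
    active[max(0, i - k):i + k + 1] = 1

covered = active[y[split:] == 1].mean()
print(f"active fraction: {active.mean():.2f}, traffic slots covered: {covered:.2f}")
```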

16 pages, 1365 KiB  
Article
Optimal Feedback Rate for Multi-Antenna Maximum Ratio Transmission in Single-User MIMO Systems with One-Bit Analog-to-Digital Converters in Dense Cellular Networks
by Sungmin Lee and Moonsik Min
Mathematics 2024, 12(23), 3760; https://doi.org/10.3390/math12233760 - 28 Nov 2024
Viewed by 624
Abstract
Stochastic geometry has emerged as a powerful tool for modeling cellular networks, especially in dense deployment scenarios where inter-cell interference is significant. Previous studies have extensively analyzed multi-antenna systems with partial channel state information at the transmitter (CSIT) using stochastic geometry models. However, most of these works assume the use of infinite-resolution analog-to-digital converters (ADCs) at the receivers. Recent advances in low-resolution ADCs, such as one-bit ADCs, offer an energy-efficient alternative for millimeter-wave systems, but the interplay between limited feedback and one-bit ADCs remains underexplored in such networks. This paper addresses this gap by analyzing the optimal feedback rate that maximizes net spectral efficiency in dense cellular networks, modeled using stochastic geometry, with both limited feedback and one-bit ADC receivers. We introduce an approximation of the achievable spectral efficiency to derive a differentiable expression of the optimal feedback rate. The results show that while the scaling behavior of the optimal feedback rate with respect to the channel coherence time remains unaffected by the ADC’s resolution, the absolute values are significantly lower for one-bit ADCs compared to infinite-resolution ADCs. Simulation results confirm the accuracy of our theoretical approximations and demonstrate the impact of ADC resolution on feedback rate optimization.
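
The overhead/accuracy trade-off the paper optimizes can be illustrated numerically. The sketch below is a deliberately crude toy, not the paper's stochastic-geometry derivation: it assumes the standard random-vector-quantization loss scaling 2^(-B/(Nt-1)), folds the one-bit ADC in through the Bussgang factor 2/π, and grid-searches the feedback rate B.

```python
import numpy as np

Nt, T, snr = 4, 200, 10.0   # transmit antennas, coherence block (symbols), nominal SNR
alpha = 2 / np.pi           # one-bit ADC distortion factor (Bussgang decomposition)

def net_se(B):
    gain = 1 - 2.0 ** (-B / (Nt - 1))                     # RVQ beamforming gain retained
    sinr = alpha * snr * gain / (1 + (1 - alpha) * snr)   # crude one-bit distortion model
    return (1 - B / T) * np.log2(1 + sinr)                # overhead-discounted rate

B = np.arange(0, 60)
print("feedback bits maximizing net spectral efficiency:", B[np.argmax(net_se(B))])
```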

15 pages, 814 KiB  
Article
Application of Large Language Models and Assessment of Their Ship-Handling Theory Knowledge and Skills for Connected Maritime Autonomous Surface Ships
by Dashuai Pei, Jianhua He, Kezhong Liu, Mozi Chen and Shengkai Zhang
Mathematics 2024, 12(15), 2381; https://doi.org/10.3390/math12152381 - 31 Jul 2024
Cited by 3 | Viewed by 2287
Abstract
Maritime transport plays a critical role in global logistics. Compared to road transport, the pace of research and development for maritime transport is much slower. It faces major challenges, such as busy ports, long journeys, significant accidents, and greenhouse gas emissions, problems that have been exacerbated by recent regional conflicts and increasing international shipping demands. Maritime Autonomous Surface Ships (MASSs) are widely regarded as a promising solution to maritime transport problems, offering improved safety and efficiency. With advanced sensing and path-planning technologies, MASSs can autonomously understand environments and navigate without human intervention. However, complex traffic and water conditions, together with corner cases, are large barriers to the practical deployment of MASSs. In this paper, to address these issues, we investigated the application of Large Language Models (LLMs), which have demonstrated strong generalization abilities. Given the substantial computational demands of LLMs, we propose a framework for LLM-assisted navigation in connected MASSs, in which LLMs are deployed onshore or in remote clouds to facilitate navigation and provide guidance services for MASSs. Additionally, certain large oceangoing vessels can deploy LLMs locally to obtain real-time navigation recommendations. To the best of our knowledge, this is the first attempt to apply LLMs to assist with ship navigation. Specifically, MASSs transmit assistance requests to LLMs, which process the requests and return guidance. A crucial aspect of this safety-critical guidance system, which has not been investigated in the literature, is the LLMs' knowledge and safety performance in regard to ship handling, navigation rules, and skills. To assess LLMs' knowledge of navigation rules and their suitability for navigation assistance systems, we designed and conducted navigation theory tests for LLMs consisting of more than 1500 multiple-choice questions. These questions are similar to the official theory exams used to award the Officer Of the Watch (OOW) certificate under the Standards of Training, Certification, and Watchkeeping (STCW) for Seafarers. A wide range of LLMs were tested, including commercial ones from OpenAI and Baidu and an open-source one, ChatGLM, from Tsinghua. Our experimental results indicated that, among all the tested LLMs, only GPT-4o passed the tests, with an accuracy of 86%. This suggests that, while current LLMs possess significant potential for navigation and guidance systems for connected MASSs, further improvements are needed.
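
A sketch of the kind of theory-test harness the evaluation implies, using the OpenAI chat-completions API. The question file name, JSON layout, prompt wording, and answer parsing are all assumptions for illustration, not the authors' protocol.

```python
import json
import re

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(stem, options):
    prompt = (stem + "\n"
              + "\n".join(f"{k}. {v}" for k, v in options.items())
              + "\nAnswer with only the letter of the correct option.")
    resp = client.chat.completions.create(
        model="gpt-4o",  # one of the models the paper reports on
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    match = re.search(r"[A-D]", resp.choices[0].message.content or "")
    return match.group(0) if match else None

# Hypothetical file: [{"stem": ..., "options": {"A": ..., ...}, "answer": "B"}, ...]
questions = json.load(open("oow_theory_questions.json"))
correct = sum(ask(q["stem"], q["options"]) == q["answer"] for q in questions)
print(f"accuracy: {correct / len(questions):.1%}")
```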

29 pages, 1243 KiB  
Article
Risk Assessment Edge Contract for Efficient Resource Allocation
by Minghui Sheng, Hui Wang, Maode Ma, Yiying Sun and Run Zhou
Mathematics 2024, 12(7), 983; https://doi.org/10.3390/math12070983 - 26 Mar 2024
Cited by 1 | Viewed by 1243
Abstract
The rapid growth of edge devices and mobile applications has driven the adoption of edge computing to handle computing tasks closer to end-users. However, the heterogeneity of edge devices and their limited computing resources make it challenging to allocate computing resources efficiently across services with different characteristics and preferences. In this paper, we study an edge scenario comprising multiple Edge Computing Servers (ECSs), multiple Device-to-Device (D2D) Edge Nodes (ENs), and multiple edge devices. To address the resource allocation challenge among ECSs, ENs, and edge devices in high-workload environments, as well as the pricing of edge resources within the resource market framework, we propose a Risk Assessment Contract Algorithm (RACA) based on risk assessment theory. The RACA enables ECSs to assess the risks associated with local users by estimating their future revenue potential, and to update the contract autonomously over time. ENs acquire additional resources from ECSs to efficiently complete local users' tasks, and can negotiate reasonable resource requests and prices with ECSs through a Stackelberg game. Furthermore, we prove the existence and uniqueness of the Nash equilibrium in the established game, implying that equilibrium solutions converge stably under computational methods in heterogeneous environments. Finally, through simulation experiments, we demonstrate that risk assessment enhances the overall profit capability of the system, and through multiple experiments we show the stability of the contract's autonomous update capability. The RACA exhibits better utility in terms of system profit, stability in high-workload environments, and energy consumption. This work provides a more dynamic and effective solution to the resource allocation problem in edge systems under high workloads.
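
The leader–follower pricing structure can be sketched with a deliberately simple utility model; none of RACA's risk-assessment machinery is included, and all parameters below are hypothetical. The leader (ECS) posts a unit price, each EN best-responds with a demand, and the leader's optimal price is found by grid search.

```python
import numpy as np

thetas = np.array([2.0, 3.0, 4.0])  # ENs' valuation parameters (hypothetical)
c = 0.5                             # ECS marginal cost per resource unit

def follower_demand(p):
    # EN i maximizes theta_i * log(1 + x) - p * x, giving x* = theta_i / p - 1.
    return np.clip(thetas / p - 1, 0.0, None)

def leader_profit(p):
    return (p - c) * follower_demand(p).sum()

prices = np.linspace(c + 1e-3, 5.0, 2000)
p_star = prices[np.argmax([leader_profit(p) for p in prices])]
print(f"equilibrium price {p_star:.3f}, demands {follower_demand(p_star).round(3)}")
```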

17 pages, 514 KiB  
Article
Federated Learning with Efficient Aggregation via Markov Decision Process in Edge Networks
by Tongfei Liu, Hui Wang and Maode Ma
Mathematics 2024, 12(6), 920; https://doi.org/10.3390/math12060920 - 20 Mar 2024
Cited by 2 | Viewed by 1687
Abstract
Federated Learning (FL), as an emerging paradigm in distributed machine learning, has received extensive research attention. However, few works consider the impact of device mobility on the learning efficiency of FL. In fact, the training result suffers if heterogeneous clients migrate or go offline during the global aggregation process. To address this issue, the Optimal Global Aggregation strategy (OGAs) is proposed. OGAs first models the interaction between clients and servers in FL as a Markov Decision Process (MDP), jointly considering device mobility and data heterogeneity to determine the local participants that are conducive to global aggregation. To obtain the optimal client participation strategy, an improved σ-value iteration method is used to solve the MDP, ensuring that the number of participating clients is maintained within an optimal interval in each global round. Furthermore, Principal Component Analysis (PCA) is used to reduce the dimensionality of the original features and thus tame the complex state space of the MDP. Experimental results demonstrate that, compared with existing aggregation strategies, OGAs achieves faster convergence and higher training accuracy, significantly improving the learning efficiency of FL.
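
A toy version of the MDP formulation, with states counting how many clients are currently reachable, two actions (wait or aggregate), and plain value iteration in place of the paper's improved σ-value iteration; PCA is omitted, and every number below is invented for illustration.

```python
import numpy as np

n_s, gamma = 4, 0.9  # states: 0..3 clients currently reachable; discount factor

# P[a] is the state-transition matrix under action a (illustrative numbers).
P = np.zeros((2, n_s, n_s))
P[0] = [[0.6, 0.4, 0.0, 0.0],   # action 0 = wait: availability drifts
        [0.2, 0.5, 0.3, 0.0],
        [0.0, 0.3, 0.5, 0.2],
        [0.0, 0.0, 0.4, 0.6]]
P[1] = 0.25                      # action 1 = aggregate: the round restarts uniformly
R = np.array([[0.0, 0.0, 0.0, 0.0],      # waiting earns nothing
              [-1.0, 0.2, 0.8, 1.0]])    # aggregating with few clients hurts the model

V = np.zeros(n_s)
for _ in range(500):
    Q = R + gamma * (P @ V)      # Q[a, s]
    V = Q.max(axis=0)

policy = Q.argmax(axis=0)        # 1 where aggregating is optimal
print("aggregate when reachable clients >=", int(np.flatnonzero(policy).min()))
```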

19 pages, 11641 KiB  
Article
Optimization of User Service Rate with Image Compression in Edge Computing-Based Vehicular Networks
by Liujing Zhang, Jin Li, Wenyang Guan and Xiaoqin Lian
Mathematics 2024, 12(4), 558; https://doi.org/10.3390/math12040558 - 12 Feb 2024
Viewed by 1543
Abstract
Intelligent transportation systems have become increasingly prevalent in alleviating traffic congestion and reducing traffic accidents, owing to the rapid advancement of information and communication technology (ICT). Nevertheless, the growth in Internet of Vehicles (IoV) users has led to massive data transmission, resulting in significant delays and network instability during vehicle operation due to limited bandwidth resources. This poses serious security risks to the traffic system and endangers the safety of IoV users. To alleviate the computational load on the core network and provide more timely, effective, and secure data services to nearby users, this paper proposes deploying edge servers using edge computing technologies. Users' massive image data are processed with an image compression algorithm, revealing a positive correlation between the compression quality factor and the image's storage footprint. A performance analysis model for the ADHOC MAC (ADHOC Medium Access Control) protocol is established, elucidating a positive correlation between the frame length and the number of served users, and a negative correlation between the service user rate and the compression quality factor. The optimal service user rate, under the constraint that compression does not compromise detection accuracy, is determined by using the object detection result as the criterion for effective compression. Simulation results demonstrate that the proposed scheme satisfies the object detection accuracy requirements of the IoV context, enables the number of successfully connected users to approach the total user count, and increases the service rate by up to 34%, thereby enhancing driving safety, stability, and efficiency.
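
The coupling between the compression quality factor and the service rate can be reproduced in a few lines: a lower JPEG quality factor shrinks the payload, so a fixed per-frame budget serves more users. The frame budget and the synthetic test image are arbitrary placeholders.

```python
import io

import numpy as np
from PIL import Image

# Synthetic 640x480 test image (a smooth gradient stands in for camera data).
x = np.linspace(0, 255, 640, dtype=np.uint8)
img = Image.fromarray(np.stack([np.tile(x, (480, 1))] * 3, axis=-1))

frame_budget = 1_000_000  # assumed per-frame transmission budget in bytes
for q in (90, 70, 50, 30):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=q)
    size = buf.tell()
    print(f"quality {q}: {size / 1024:6.1f} KiB -> ~{frame_budget // size} users/frame")
```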

18 pages, 423 KiB  
Article
A Resource Allocation Scheme for Packet Delay Minimization in Multi-Tier Cellular-Based IoT Networks
by Jin Li, Wenyang Guan and Zuoyin Tang
Mathematics 2023, 11(21), 4538; https://doi.org/10.3390/math11214538 - 3 Nov 2023
Cited by 1 | Viewed by 1393
Abstract
With advances in Internet of Things (IoT) technologies, billions of devices are becoming connected, enabling unprecedented sensing and control of physical environments. IoT devices have diverse quality of service (QoS) requirements, including data rate, latency, reliability, and energy consumption. Meeting these diverse QoS requirements presents great challenges to existing fifth-generation (5G) cellular networks, especially in scenarios such as connected vehicle networks, where strict packet-latency guarantees may be required. In this paper, we propose a multi-tier cellular-based IoT network to address this challenge, with a particular focus on meeting application latency requirements. In the multi-tier network, access points (APs) can relay and forward packets from IoT devices or other APs, which can support higher data rates with multiple hops between IoT devices and cellular base stations. However, as multi-hop relaying may add delay that is critical to delay-sensitive applications, we develop new schemes to mitigate this adverse impact. Firstly, we design a traffic-prioritization scheduling scheme that classifies packets with different priorities in each AP based on the age of information (AoI). Then, we design different channel-access protocols for the transmission of packets according to their priorities, to ensure networking QoS and the effective utilization of limited network resources. A queuing-theory-based analytical model is proposed to characterize the packet delay for each type of packet at each tier of the multi-tier IoT network. An optimal algorithm for the distribution of spectrum and power resources is developed to reduce the overall packet delay across the tiers. Numerical results for a two-tier cellular-based IoT network show that the target packet delay for delay-sensitive applications can be achieved without a large cost in terms of traffic fairness.
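
The queueing intuition behind the prioritization can be checked with the standard two-class non-preemptive M/M/1 priority formulas; the arrival and service rates below are made up, and the paper's multi-tier model is much richer than this single-queue sketch.

```python
# Two traffic classes at one AP: class 1 (delay-sensitive) is served first.
lam1, lam2 = 0.3, 0.4   # arrivals per ms (illustrative)
mu = 1.0                # exponential service rate per ms

rho1, rho2 = lam1 / mu, lam2 / mu
R = (lam1 + lam2) / mu**2                      # mean residual work; E[S^2] = 2/mu^2

W1 = R / (1 - rho1)                            # mean wait, high priority
W2 = R / ((1 - rho1) * (1 - rho1 - rho2))      # mean wait, low priority
print(f"high-priority wait {W1:.3f} ms, low-priority wait {W2:.3f} ms")
```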

20 pages, 2553 KiB  
Article
Data-Driven Diffraction Loss Estimation for Future Intelligent Transportation Systems in 6G Networks
by Sambit Pattanaik, Agbotiname Lucky Imoize, Chun-Ta Li, Sharmila Anand John Francis, Cheng-Chi Lee and Diptendu Sinha Roy
Mathematics 2023, 11(13), 3004; https://doi.org/10.3390/math11133004 - 6 Jul 2023
Cited by 3 | Viewed by 2643
Abstract
The advancement of 6G networks is driven by the need for customer-centric communication and network control, particularly in applications such as intelligent transport systems. These applications rely on outdoor communication in extremely high-frequency (EHF) bands, including millimeter wave (mmWave) frequencies exceeding 30 GHz. However, EHF signals face challenges such as higher attenuation, diffraction, and reflective losses caused by obstacles in outdoor environments. To overcome these challenges, 6G networks must focus on system designs that enhance propagation characteristics by predicting and mitigating diffraction, reflection, and scattering losses. Strategies such as proper handovers, antenna orientation, and link adaptation techniques based on losses can optimize the propagation environment. Among the network components, aerial networks, including unmanned aerial vehicles (UAVs) and electric vertical take-off and landing aircraft (eVTOL), are particularly susceptible to diffraction losses due to surrounding buildings in urban and suburban areas. Traditional statistical models for estimating the height of tall objects like buildings or trees are insufficient for accurately calculating diffraction losses due to the dynamic nature of user mobility, resulting in increased latency unsuitable for ultra-low latency applications. To address these challenges, this paper proposes a deep learning framework that utilizes easily accessible Google Street View imagery to estimate building heights and predict diffraction losses across various locations. The framework enables real-time decision-making to improve the propagation environment based on users’ locations. The proposed approach achieves high accuracy rates, with an accuracy of 39% for relative error below 2%, 83% for relative error below 4%, and 96% for both relative errors below 7% and 10%. Compared to traditional statistical methods, the proposed deep learning approach offers significant advantages in height prediction accuracy, demonstrating its efficacy in supporting the development of 6G networks. The ability to accurately estimate heights and map diffraction losses before network deployment enables proactive optimization and ensures real-time decision-making, enhancing the overall performance of 6G systems.
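
Once a building height is predicted, the diffraction loss it implies follows directly from the single knife-edge approximation of ITU-R P.526; the geometry and carrier frequency below are placeholders for values the height-prediction model would supply.

```python
import numpy as np

def knife_edge_loss_db(h_m, d1_m, d2_m, f_hz):
    """Single knife-edge loss (ITU-R P.526 approximation, valid for v > -0.78)."""
    lam = 3e8 / f_hz
    v = h_m * np.sqrt(2 * (d1_m + d2_m) / (lam * d1_m * d2_m))
    if v <= -0.78:
        return 0.0
    return 6.9 + 20 * np.log10(np.sqrt((v - 0.1) ** 2 + 1) + v - 0.1)

# 30 GHz link with an obstruction 5 m above the line of sight, 100 m from each end.
print(f"{knife_edge_loss_db(5.0, 100.0, 100.0, 30e9):.1f} dB")
```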

11 pages, 293 KiB  
Article
A Novel Performance Bound for Massive MIMO Enabled HetNets
by Hao Li, Jiawei Cao, Guangkun Luo, Zhigang Wang and Houjun Wang
Mathematics 2023, 11(13), 2846; https://doi.org/10.3390/math11132846 - 25 Jun 2023
Viewed by 995
Abstract
Massive multiple-input multiple-output (MIMO) networks, in which a base station (BS) with a large-scale antenna array serves multiple users at higher throughput, have been widely employed in next-generation wireless communication test systems. Massive MIMO-enabled dense heterogeneous networks (HetNets) have also emerged as a promising architecture to increase system spectrum efficiency and improve system reliability, and have been successfully exploited in sustainable Internet of Things (IoT) networks. To facilitate the testing and performance estimation of IoT communication systems, this paper studies the achievable rate performance of massive MIMO HetNets. Differing from the existing literature, we first consider an interference power model for massive MIMO-enabled HetNets. We then obtain an expression for the signal-to-interference-plus-noise ratio (SINR) by introducing the interference power, and derive a new closed-form lower bound on the achievable rate. The proposed closed-form expression shows that the achievable rate is an explicit function of the number of transmit antennas. Simulation results investigate the impact of the number of transmit antennas on the achievable rate performance.
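
The headline dependence of the rate on the antenna count M can be checked by Monte Carlo. The sketch below uses a generic i.i.d. Rayleigh interference model and the channel-hardening approximation log2(1 + pM/(1 + p_I K)), which is not the paper's HetNet interference power model or its closed-form bound.

```python
import numpy as np

rng = np.random.default_rng(1)
p, p_i, K, trials = 1.0, 0.5, 5, 20000  # signal power, per-interferer power, interferers

for M in (16, 64, 256):
    h2 = rng.gamma(shape=M, scale=1.0, size=trials)          # ||h||^2 for i.i.d. CN(0,1)
    I = p_i * rng.exponential(size=(trials, K)).sum(axis=1)  # Rayleigh interferers
    rate = np.log2(1 + p * h2 / (1 + I)).mean()              # ergodic beamforming rate
    approx = np.log2(1 + p * M / (1 + p_i * K))              # channel-hardening value
    print(f"M={M:4d}: simulated {rate:.2f} b/s/Hz, hardening approx {approx:.2f}")
```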
