New Advances in Network and Edge Computing

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 31 October 2025 | Viewed by 1710

Special Issue Editors


Guest Editor
Guangxi Key Laboratory of Multimedia Communications and Network Technology, School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
Interests: deep reinforcement learning; Internet of Things; edge computing; transfer learning; optimization

Guest Editor
School of Computer Science and Engineering, Central South University, Changsha 410083, China
Interests: Internet of Things; edge computing; trust computing; services computing; deep reinforcement learning; wireless sensor networks

Guest Editor
Guangxi Key Laboratory of Multimedia Communications and Network Technology, School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
Interests: applied ML/DL; edge computing; Internet of Things; wireless network

Special Issue Information

Dear Colleagues,

Fifth-generation (5G) communication is undergoing a major revolution, connecting tens of billions of machines and tens of millions of people, driving change across vertical industries, and advancing science and technology. The next-generation wireless communication network is expected to usher in a new form of "intelligent interconnection", deeply integrated with the Internet, the Internet of Things, computing power networks, and related infrastructure. In this vision, data, algorithms, and computing hardware and software complement one another; people, machines, and things are widely connected; and intelligent resources (including models, parameters, etc.) are transmitted and exchanged to autonomously perceive the physical world and provide intelligent decision-making services.

The convergence of edge computing and advanced learning relies on the comprehensive intelligent interconnection of people, machines, things, and computing environments, together with the capabilities of AI in data computation, transmission, recognition, classification, and mining. Heterogeneous agents and a wide range of terminals will connect across different environments to transmit and share intelligent resources, driving the digital and intelligent transformation of next-generation wireless communication networks.

This Special Issue will bring together researchers from academia and industry to discuss the latest advances and research in leveraging edge computing and advanced learning to enable digitally intelligent applications of IoT for next-generation wireless communication networks.

Topics of possible interest include, but are not limited to, the following:

  • Next-generation wireless communication networks for edge AI applications;
  • Intelligent edge resource management for next-generation wireless communication networks;
  • Edge computing and advanced learning model co-design for next-generation wireless communication networks;
  • Intelligent cloud–edge-end architecture converged computing for next-generation wireless communication networks;
  • Privacy preservation for edge AI in next-generation wireless communication networks;
  • Edge–AI interaction for healthcare in next-generation wireless communication networks;
  • Data-driven optimization of AI-based next-generation wireless communication networks;
  • Other emerging edge computing and advanced learning techniques and applications for next-generation wireless communication networks.

Dr. Miaojiang Chen
Prof. Dr. Anfeng Liu
Dr. Wenjing Xiao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • edge computing
  • deep learning
  • 5G
  • next-generation communication networks
  • applied ML/DL

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

30 pages, 2324 KiB  
Article
Multi-Satellite Task Parallelism via Priority-Aware Decomposition and Dynamic Resource Mapping
by Shangpeng Wang, Chenyuan Zhang, Zihan Su, Limin Liu and Jun Long
Mathematics 2025, 13(7), 1183; https://doi.org/10.3390/math13071183 - 3 Apr 2025
Viewed by 201
Abstract
Multi-satellite collaborative computing achieves task decomposition and collaborative execution through inter-satellite links (ISLs), significantly improving task execution efficiency and system responsiveness. However, existing methods focus on single-task execution and lack multi-task parallel processing capability. Most methods ignore task priorities and dependencies, leading to excessive waiting times and poor scheduling results. To address these problems, this paper proposes a task decomposition and resource mapping method based on task priorities and resource constraints. First, we introduce a graph theoretic model to represent the task dependency and priority relationships explicitly, combined with a novel algorithm for task decomposition. Meanwhile, we construct a resource allocation model based on game theory and combine it with deep reinforcement learning to achieve resource mapping in a dynamic environment. Finally, we adopt the theory of temporal logic to formalize the execution order and time constraints of tasks and solve the dynamic scheduling problem through mixed-integer nonlinear programming to ensure the optimality and real-time updating of the scheduling scheme. The experimental results demonstrate that the proposed method improves resource utilization by up to about 24% and reduces overall execution time by up to about 42.6% in large-scale scenarios. Full article
(This article belongs to the Special Issue New Advances in Network and Edge Computing)
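The priority-aware decomposition described in this abstract can be illustrated with a minimal list-scheduling sketch: tasks form a dependency DAG, and among the tasks whose prerequisites are complete, the highest-priority one is always dispatched first. The task names and priority numbers below are hypothetical, and this toy omits the paper's game-theoretic resource mapping and temporal-logic constraints:

```python
import heapq

def priority_schedule(deps, priority):
    """Order tasks so dependencies are respected and, among the ready
    tasks, the highest-priority one is dispatched first (Kahn's
    topological sort driven by a max-heap)."""
    indeg = {t: len(ps) for t, ps in deps.items()}
    children = {t: [] for t in deps}
    for task, prereqs in deps.items():
        for p in prereqs:
            children[p].append(task)
    # Max-heap via negated priorities; seed with dependency-free tasks.
    ready = [(-priority[t], t) for t, d in indeg.items() if d == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, task = heapq.heappop(ready)
        order.append(task)
        for child in children[task]:
            indeg[child] -= 1
            if indeg[child] == 0:
                heapq.heappush(ready, (-priority[child], child))
    return order

# Hypothetical observation-task DAG: detection outranks georeferencing,
# so it runs first once the shared image-acquisition step completes.
deps = {"image": set(), "georef": {"image"},
        "detect": {"image"}, "report": {"georef", "detect"}}
prio = {"image": 3, "georef": 1, "detect": 2, "report": 0}
print(priority_schedule(deps, prio))  # ['image', 'detect', 'georef', 'report']
```

A heap keeps each dispatch decision O(log n) even when many decomposed subtasks become ready at once.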

36 pages, 1390 KiB  
Article
Adaptive Real-Time Transmission in Large-Scale Satellite Networks Through Software-Defined-Networking-Based Domain Clustering and Random Linear Network Coding
by Shangpeng Wang, Chenyuan Zhang, Yuchen Wu, Limin Liu and Jun Long
Mathematics 2025, 13(7), 1069; https://doi.org/10.3390/math13071069 - 25 Mar 2025
Viewed by 269
Abstract
Network flow task management involves the efficient allocation and scheduling of data flow tasks within dynamic satellite networks, aiming to effectively address frequent changes in network topology and dynamic traffic fluctuations. Existing research primarily emphasizes traffic prediction and scheduling using spatiotemporal models and machine learning. However, these approaches often depend on extensive historical data for training, making real-time adaptation to rapidly changing network topologies and traffic patterns challenging in dynamic satellite environments. Additionally, their high computational complexity and slow convergence rates hinder their efficiency in large-scale networks. To address these issues, this paper proposes a collaborative optimization framework based on Coding Multi-Path Theory (CMPT). The framework utilizes a Nash bargaining game model to simulate resource competition among the different participants, ensuring fair resource distribution and load balancing. It also integrates real-time network state monitoring with optimization algorithms, within a multi-path scheduling strategy, enabling the dynamic selection of optimal transmission paths to accommodate frequent network topology changes and traffic variations. Experimental results indicate that the proposed method reduced resource allocation task execution time by at least 18.03% compared to traditional methods and enhanced task scheduling efficiency by at least 14.01%. Although CMPT exhibited a slightly higher task latency on certain small-scale datasets compared to some baseline algorithms, its performance remains exceptional in large-scale and high-dimensional scenarios. Full article
(This article belongs to the Special Issue New Advances in Network and Edge Computing)
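Random linear network coding, one ingredient of the CMPT framework above, can be sketched minimally over GF(2): each coded packet is a random XOR combination of the source packets, and a receiver decodes by Gaussian elimination once it has collected enough linearly independent combinations. This is an illustrative toy, not the paper's implementation; production RLNC typically works over GF(2^8) and streams much larger payloads:

```python
import random

def rlnc_encode(packets, n_coded, rng):
    """Emit coded packets over GF(2): each is (coefficient vector, XOR
    of the selected source packets)."""
    k = len(packets)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):            # avoid the useless all-zero combination
            coeffs[rng.randrange(k)] = 1
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Gauss-Jordan elimination over GF(2) on the augmented matrix
    [coefficients | payload]; returns the k source packets once the
    received combinations reach full rank, else None."""
    rows = [[*c, p] for c, p in coded]
    rank = 0
    for col in range(k):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    if rank < k:
        return None
    return [rows[i][k] for i in range(k)]

rng = random.Random(7)
source = [0xA5, 0x3C, 0x7E]   # three source packets as small ints
received = []
while True:                    # keep "receiving" until the matrix has full rank
    received += rlnc_encode(source, 1, rng)
    decoded = rlnc_decode(received, len(source))
    if decoded is not None:
        break
print(decoded)
```

Because any full-rank set of combinations suffices, a receiver on a multi-path route never needs a specific packet retransmitted, which is what makes the approach attractive under frequent topology changes.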

31 pages, 1787 KiB  
Article
Distributed Gradient Descent Framework for Real-Time Task Offloading in Heterogeneous Satellite Networks
by Yanbing Li, Yuchen Wu and Shangpeng Wang
Mathematics 2025, 13(4), 561; https://doi.org/10.3390/math13040561 - 8 Feb 2025
Viewed by 472
Abstract
Task offloading in satellite networks, which involves distributing computational tasks among heterogeneous satellite nodes, is crucial for optimizing resource utilization and minimizing system latency. However, existing approaches such as static offloading strategies and heuristic-based offloading methods neglect dynamic topologies and uncertain conditions, which hinders adaptability to sudden changes. Furthermore, current collaborative computing strategies inadequately address satellite platform heterogeneity and often overlook resource fluctuations, resulting in inefficient resource sharing and inflexible task scheduling. To address these issues, we propose a dynamic gradient descent-based task offloading method built on a collaborative optimization framework based on dynamic programming. By constructing delay optimization and resource efficiency models and integrating dynamic programming with value iteration techniques, the framework achieves real-time updates of system states and decision variables. Then, a distributed gradient descent algorithm combined with Gradient Surgery techniques is employed to optimize task offloading decisions and resource allocation schemes, ensuring a precise balance between delay minimization and resource utilization maximization in dynamic network environments. Experimental results demonstrate that the proposed method improves the global optimization result by at least 1.97%, enhances resource utilization rates by at least 3.91%, and reduces the solution time by at least 191.91% in large-scale networks. Full article
(This article belongs to the Special Issue New Advances in Network and Edge Computing)
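The "Gradient Surgery" step mentioned in this abstract is, in the common PCGrad formulation (the paper may differ in detail), a projection that removes the conflicting component between two task gradients before they are summed, e.g. a delay objective and a utilization objective pulling a shared parameter in opposite directions. The vectors below are illustrative only:

```python
import numpy as np

def gradient_surgery(grads):
    """PCGrad-style conflict resolution: if a task gradient conflicts
    with another (negative inner product), project out its component
    along that other gradient before the gradients are summed."""
    projected = [g.astype(float).copy() for g in grads]
    for gi in projected:
        for gj in grads:
            dot = float(gi @ gj)
            if dot < 0.0:  # conflicting directions only
                gi -= (dot / float(gj @ gj)) * gj
    return projected

# Two hypothetical objectives with conflicting gradients:
# delay minimization vs. resource-utilization maximization.
g_delay = np.array([1.0, 0.0])
g_util = np.array([-1.0, 1.0])
merged = sum(gradient_surgery([g_delay, g_util]))
print(merged)  # [0.5 1.5]
```

After surgery, neither projected gradient has a negative inner product with the other task's original gradient, so a shared descent step no longer actively harms either objective.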

37 pages, 935 KiB  
Article
Privacy-Preserving Incentive Allocation for Fair and Resilient Data Sharing in Resource-Constrained Edge Computing Networks
by Yanfang Wang, Shaobo Li, Kangkun Chen, Ran Guo and Judy Li
Mathematics 2025, 13(3), 422; https://doi.org/10.3390/math13030422 - 27 Jan 2025
Viewed by 648
Abstract
Efficient and secure data sharing is paramount for advancing modern digital ecosystems, especially within edge computing environments characterized by resource-constrained nodes and dynamic network topologies. In such settings, privacy preservation, computational efficiency, and system resilience are critical for user engagement and overall system performance. However, existing approaches face three primary challenges: (i) limited optimization of privacy protection and absence of dynamic privacy budget scheduling for resource-constrained scenarios, (ii) static incentive mechanisms that overlook individual differences in data quality and resource consumption, and (iii) inadequate strategies to ensure resilience in environments with limited resources and unstable networks. This paper introduces the Federated Learning-based Dynamic Incentive Allocation Framework (FL-DIAF) to address these issues. FL-DIAF integrates differential privacy into the federated learning paradigm deployed on edge nodes, enabling collaborative model training that safeguards individual data privacy while maintaining computational efficiency and system resilience. Additionally, the framework employs a Shapley value-based dynamic incentive allocation model to ensure equitable and transparent distribution of incentives by accurately quantifying each participant’s contribution within an elastic edge computing infrastructure. Comprehensive experimental evaluations on diverse datasets demonstrate that FL-DIAF achieves a 9.573% reduction in the objective function value under typical conditions and attains a 100% task completion rate across all tested resilient edge scenarios. Full article
(This article belongs to the Special Issue New Advances in Network and Edge Computing)
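The Shapley value at the heart of FL-DIAF's incentive model can be illustrated with an exact computation over a toy two-participant coalition game. The coalition values below are made-up accuracy gains, not results from the paper, and real systems approximate this computation, since exact Shapley evaluation is exponential in the number of participants:

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over every order in which the coalition can form."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    n_orders = factorial(len(players))
    return {p: total / n_orders for p, total in phi.items()}

# Hypothetical coalition values: accuracy gain of a model trained
# on the pooled data of each participant subset.
gain = {
    frozenset(): 0.0,
    frozenset({"A"}): 0.6,
    frozenset({"B"}): 0.4,
    frozenset({"A", "B"}): 0.9,
}
print(shapley_values(["A", "B"], gain.__getitem__))
```

With these numbers, A receives 0.55 and B receives 0.35; the payouts sum to the grand-coalition value 0.9 (the efficiency axiom), which is what makes the split feel equitable and transparent to participants.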
