Review

When Mathematical Methods Meet Artificial Intelligence and Mobile Edge Computing

1 Advanced Institute of Natural Sciences, Beijing Normal University, Zhuhai 519087, China
2 College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310014, China
3 School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710129, China
4 Zhijiang College, Zhejiang University of Technology, Shaoxing 312030, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(11), 1779; https://doi.org/10.3390/math13111779
Submission received: 20 April 2025 / Revised: 24 May 2025 / Accepted: 26 May 2025 / Published: 27 May 2025

Abstract

The integration of mathematical methods with artificial intelligence (AI) and mobile edge computing (MEC) has emerged as a promising research direction to address the growing complexity of intelligent distributed systems. To chart the landscape of this interdisciplinary field, we first examine recent surveys that primarily focus on architectural designs, learning paradigms, and system-level deployments in edge AI. However, these studies largely overlook the theoretical foundations essential for ensuring reliability, interpretability, and efficiency. This paper fills this gap by conducting a comprehensive survey of mathematical methods and analyzing their applications in AI-enabled MEC systems. We focus on addressing three key challenges: heterogeneous data integration, real-time optimization, and computational scalability. We summarize state-of-the-art schemes to address these challenges and identify several open issues and promising future research directions.

1. Introduction

Artificial intelligence (AI) has demonstrated transformative capabilities in diverse domains such as image recognition, natural language processing, and intelligent decision making [1]. By leveraging algorithms like deep learning and reinforcement learning, AI excels at uncovering patterns in massive datasets and performing efficient reasoning. However, traditional cloud-centric computing models are often hindered by high latency and limited bandwidth, making them insufficient for time-sensitive applications [2]. Mobile edge computing (MEC) addresses this limitation by decentralizing computation and storage resources to the network edge, thereby significantly reducing data transmission delays and enhancing real-time responsiveness. The fusion of AI and MEC enables applications like autonomous driving and industrial IoT to process and act on data directly at the edge, alleviating cloud burdens and facilitating the realization of next-generation intelligent systems [3].
The convergence of AI and MEC presents transformative opportunities for building next-generation intelligent systems [4]. However, their integration faces critical limitations in practical deployment. Traditional AI methods often struggle to efficiently model and reason over large-scale, multi-source, and dynamically changing data streams in edge environments. These limitations hinder accurate resource allocation, real-time decision making, and system scalability [5]. Mathematical methods, known for their rigorous theoretical foundations and precise modeling capabilities, offer enhanced robustness, reliability, and interpretability for AI-driven edge systems. Nevertheless, they lack the flexibility and adaptability required to handle unstructured or uncertain real-world data. Therefore, the integration of mathematical methods and AI offers a synergistic pathway, where AI contributes adaptability and large-scale data handling, while mathematical tools provide structure, guarantees, and theoretical rigor. This hybrid paradigm is particularly vital for solving complex problems such as dynamic task scheduling, energy-aware resource management, and secure federated learning in resource-constrained and heterogeneous MEC scenarios [6].
Several existing surveys have explored the convergence of AI and MEC. For instance, Surianarayanan et al. [7] reviewed hardware-level and algorithmic optimizations in edge AI, including federated learning and model compression techniques. However, their analysis lacked a systematic treatment of the mathematical principles underlying these optimizations. Wang et al. [8] proposed a hybrid DRL-FL framework for edge caching and communication but did not examine the theoretical basis such as Markov decision processes (MDPs) or federated optimization algorithms. Similarly, Deng et al. [9] categorized edge intelligence into AI for edge and AI on edge and discussed aspects like service deployment and offloading, yet overlooked the mathematical structure guiding the algorithm design. Chang et al. [10] focused on IoT-AI integration via edge-cloud synergy, highlighting practical deployment techniques but missing an in-depth analysis of mathematical engineering’s role in AI for edge applications.
While recent studies have explored the technical integration of AI and MEC, few have addressed the mathematical foundations that underpin this convergence. Existing reviews predominantly focus on system architectures, federated learning frameworks, or algorithmic deployments, without deeply examining how mathematical tools support optimization, modeling, and performance guarantees. This survey fills this critical gap by offering a comprehensive and systematic analysis of the role of mathematical methods in enabling and enhancing AI-MEC systems. It aims to unify scattered insights from mathematical optimization, statistics, game theory, graph theory, and queuing theory in the context of edge computing; demonstrate how mathematical modeling complements AI in solving real-time, scalable, and interpretable problems at the edge; and highlight application domains such as intelligent transportation, healthcare, manufacturing, and smart cities, where this synergy leads to tangible improvements in responsiveness, reliability, and decision quality. By bridging the divide between theoretical mathematics and practical AI applications in MEC, this survey serves as a foundational reference for researchers and practitioners pursuing robust, intelligent, and future-proof edge systems.
The main contributions of this survey are summarized as follows:
  • We present a comprehensive survey of mathematical methods integrated with AI in MEC, highlighting how mathematical rigor enhances system robustness, interpretability, and efficiency.
  • We develop a comprehensive taxonomy linking mathematical methods to core MEC challenges, offering a structured guide for system-level optimization in edge intelligence.
  • We survey cross-domain applications—including intelligent transportation, healthcare, manufacturing, smart cities, and retail—and illustrate how the fusion of AI and mathematical modeling enables real-time, resource-constrained decision making.
  • We identify three key challenges in AI-enabled MEC: heterogeneous data integration, real-time optimization, and computational scalability. We summarize state-of-the-art schemes that address these challenges and identify several open issues and promising future research directions.
The taxonomy of this paper is illustrated in Figure 1, which provides a conceptual overview of the survey structure. It visually organizes the major sections, including foundational topics, mathematical methods, integrated frameworks, and application domains, thereby helping readers navigate the logical flow of the paper. Section 2 presents the background of AI in MEC. Section 3 reviews mathematical methods applied in MEC. Section 4 discusses AI-based applications supported by mathematical methods. Section 5 explores the integration of mathematical methods with AI and MEC, highlighting key challenges and future directions. Finally, Section 6 concludes the paper.

2. Artificial Intelligence Algorithms in Mobile Edge Computing

2.1. Mobile Edge Computing

Cloud computing delivers a variety of services on demand by combining networking, virtualization, distributed systems, utility computing, and software-based solutions. However, with the proliferation of Internet of Things (IoT) devices, cloud infrastructures often struggle to meet the demands of latency-sensitive and privacy-critical applications [10]. In an IoT environment, transmitting the vast volumes of data generated by connected devices directly to the cloud would impose a substantial burden. To alleviate this pressure, edge computing emerges as a necessary complement [11]. Edge computing fundamentally shifts processing capabilities closer to end devices and users. By handling data streams locally, systems can significantly reduce the need for centralized processing, thereby improving responsiveness, conserving network bandwidth, enhancing scalability by reducing reliance on centralized infrastructure, and strengthening data privacy, since not all information needs to be forwarded to remote cloud servers or data centers for further processing [12].
Currently, edge and cloud computing are increasingly recognized as synergistic technologies. Their combined use enables the development of computing solutions that are not only more efficient but also scalable and flexible. Organizations capitalize on the unique advantages of each: edge computing addresses the necessity for real-time analytics and minimal latency, whereas cloud computing ensures scalability, cost-effectiveness, and access to a wide array of advanced services [13].
In recent years, edge computing has gained significant traction across diverse domains: industrial applications [14] employ 5G-driven digital twins for real-time equipment health monitoring, enhancing predictive maintenance in manufacturing systems; smart cities [15] integrate edge nodes with AI-powered video analytics to optimize traffic flow through adaptive signal control; healthcare systems [16] implement federated learning frameworks across distributed servers, enabling collaborative medical diagnosis without centralizing sensitive patient data; agricultural automation [17] utilizes edge-equipped drones with multispectral imaging for precision crop management in farming ecosystems; and retail intelligence [18] deploys edge-AI vision systems to analyze in-store consumer patterns for dynamic merchandising optimization.

2.2. Artificial Intelligence

Artificial Intelligence is an umbrella term for the scientific field focused on replicating human-like capabilities. It encompasses areas such as cognition, machine learning, emotion recognition, human–computer interaction, data management, and autonomous decision making [19]. In a broader sense, AI includes any method that empowers machines to imitate human behavior and either replicate or surpass human decision-making abilities, enabling them to tackle complex problems independently or with minimal human input [20]. AI serves as a methodological framework for transforming raw data into actionable insights through interpretability, adaptability, and contextual relevance. Founded in data science, AI leverages computational techniques to address challenges in dynamic data ecosystems that require efficient storage, scalable algorithms, and interdisciplinary analytical methods. Data science synthesizes methodologies from computer science (algorithm optimization), foundational disciplines (statistical modeling, graph theory), and social sciences (behavioral analytics), with cross-domain techniques such as pattern recognition [21], machine learning [22], data mining [23], database systems [24], and big data analytics [25] forming its methodological core.

2.3. Combination of Mobile Edge Computing and Artificial Intelligence

2.3.1. Motivations

We argue that the integration of AI and edge computing is both natural and inevitable. These two domains share a mutually reinforcing relationship. On one side, AI supplies edge computing with essential technologies and methodologies, enabling edge systems to realize greater potential and enhanced scalability. Conversely, edge computing offers AI diverse application scenarios and deployment platforms, thereby broadening AI’s scope of applicability [9].
To date, researchers have achieved a series of significant results on these research problems. To help readers quickly grasp the latest progress in this field, this paper systematically compiles and summarizes these results. The taxonomy of AI in MEC is shown in Figure 2, which presents a structured classification of learning paradigms and highlights their specific roles and goals in mobile edge computing environments.

2.3.2. Machine Learning

The basic idea of machine learning is to allow computers to complete tasks by learning data and patterns, rather than being explicitly programmed [26]. Machine learning (ML) exhibits strong applicability in handling tasks involving high-dimensional data, including classification, regression, and clustering. By leveraging past computations and identifying patterns within large-scale databases, ML techniques enable the generation of reliable and consistent decisions. Consequently, ML has found successful applications across numerous fields, such as fraud detection, credit scoring, next-best offer prediction, speech and image recognition, and natural language processing (NLP) [27]. ML can be categorized into several learning paradigms, among which the most prominent are supervised learning, unsupervised learning, and semi-supervised learning.
(1) Supervised learning (SL): SL trains models to predict and classify unlabeled data by learning from existing labeled data. In supervised learning, each training sample carries a label, which the model uses to learn a classification or regression model [28]. Typical algorithms include linear regression [29], support vector machines [30], decision trees [31], and neural networks [32].
(2) Unsupervised learning (UL): UL uses algorithms to analyze and cluster unlabeled datasets, discovering hidden patterns and regularities in the data without human intervention. Unsupervised learning models are generally used for three main tasks: clustering, association, and dimensionality reduction. Compared with SL, UL resembles self-learning, in which machines discover structure in the data on their own.
(3) Semi-supervised learning (SSL): SSL improves model performance by utilizing both limited labeled data and abundant unlabeled data, addressing the reliance on extensive annotations in purely supervised approaches. Depending on the task objectives, SSL frameworks are classified into three types: classification (leveraging label propagation), clustering (enhancing group cohesion), and regression (reducing prediction uncertainty). This paradigm achieves a balance between annotation efficiency and model robustness, particularly in label-scarce scenarios [33].

2.3.3. Deep Learning

Deep learning (DL) is a subfield of ML that employs a hierarchical learning architecture inspired by the neural structure of the human brain. This architecture is designed to uncover intricate, nonlinear patterns within datasets. Analogous to machine learning, DL can utilize any of the aforementioned learning paradigms to extract latent patterns and structures from data. The widespread adoption of DL services, particularly in mobile applications, necessitates comprehensive edge computing support that spans both the architectural and implementation layers. Specifically, specialized edge hardware paired with optimized software frameworks enhances computational efficiency, while adaptive offloading mechanisms enable a dynamic distribution of DL workloads. Furthermore, edge computing architectures ensure reliable service maintenance through resource-sensitive scheduling, supported by standardized evaluation platforms that objectively quantify performance improvements in heterogeneous environments. DL offers adaptability in dynamic settings but is often data-hungry, difficult to interpret, and sensitive to hyperparameter tuning [34].

2.3.4. Reinforcement Learning

Reinforcement learning (RL) is a technique where an agent acquires rewards through interactions with its environment, evaluates the quality of actions based on the received rewards, and updates its model accordingly, as illustrated in Figure 3. The figure depicts a typical RL architecture in MEC, where multiple edge devices interact with a central controller through reward feedback and parameter updates, enabling distributed decision making and policy refinement. A persistent challenge in RL is balancing exploration and exploitation: to maximize rewards, one must select actions that promise the highest returns while also exploring unfamiliar actions to discover potentially better outcomes [35].
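To make the exploration–exploitation trade-off concrete, the following minimal Python sketch shows an ε-greedy action-selection rule paired with a tabular Q-learning update; the state and action sets are hypothetical placeholders, not taken from any cited work.

```python
import random
from collections import defaultdict

Q = defaultdict(float)  # Q[(state, action)] -> estimated value, defaults to 0.0

def epsilon_greedy(state, actions, epsilon=0.1):
    """Explore with probability epsilon, otherwise exploit the best-known action."""
    if random.random() < epsilon:
        return random.choice(actions)                     # exploration
    return max(actions, key=lambda a: Q[(state, a)])      # exploitation

def q_update(s, a, reward, s_next, actions, alpha=0.5, gamma=0.9):
    """Tabular Q-learning update from a single agent-environment interaction."""
    best_next = max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
```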
Autonomous decision making is critical for unmanned aerial vehicle (UAV) operational tasks such as target tracking, which require real-time trajectory planning. RL empowers UAVs to optimize action policies through environmental interactions, enhancing adaptive mission execution [36].

2.3.5. Federated Learning

Federated learning (FL) can be precisely defined as an ML framework in which multiple clients (or users) collaboratively train a model coordinated by a central server while keeping client-side data decentralized [37]. FL enables privacy-preserving model training across distributed edge nodes but remains vulnerable to adversarial attacks, including model poisoning and gradient manipulation. Additionally, FL suffers from communication inefficiency and non-IID data heterogeneity, which hinder convergence and fairness [38].
In conventional ML frameworks, model training occurs on a centralized server where client data are aggregated to develop a global model. Although this model captures population-level patterns for broad client cohorts, it exhibits two critical limitations: (1) privacy risks from transferring sensitive data to central servers; (2) insufficient personalization due to generalized representations that overlook client-specific patterns. FL addresses these challenges through a decentralized paradigm that maintains client data on local devices while coordinating model training. Using iterative parameter aggregation, FL constructs a global model through distributed collaboration rather than raw data collection. This framework preserves privacy through localized data retention and improves personalization by integrating client-specific patterns from decentralized training. The hybrid architecture combines centralized coordination with distributed learning: client devices train localized models that contribute to a shared global model through secure parameter exchanges [39].
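As an illustration of the iterative parameter aggregation described above, the sketch below implements a weighted federated-averaging step in Python; the model is reduced to a NumPy parameter vector, and the client weights are assumed to be proportional to local dataset sizes (a common but not universal choice). This is a minimal sketch, not the protocol of any specific cited framework.

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """Aggregate local parameter vectors into a global model (FedAvg-style step).

    client_params: list of 1-D NumPy arrays, one parameter vector per client.
    client_sizes:  list of local dataset sizes used as aggregation weights.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()                     # normalize to a convex combination
    stacked = np.stack(client_params)            # shape: (num_clients, num_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Example: three hypothetical clients with different data volumes
global_model = federated_average(
    [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.1, 1.2])],
    client_sizes=[100, 300, 50],
)
```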

2.3.6. Role of IoT in Driving Edge Intelligence

IoT has emerged as a key enabler of edge computing, serving as a primary source of the vast, heterogeneous, and high-velocity data that edge systems must process. IoT devices, ranging from environmental sensors and smart home appliances to autonomous vehicles and industrial machinery, continuously generate multimodal data streams in real time [40]. The sheer scale and diversity of IoT data underscore the necessity of intelligent edge solutions that integrate AI algorithms for dynamic decision making, anomaly detection, and predictive analytics. This necessitates localized AI processing empowered by edge nodes. Therefore, the integration of IoT, AI, and MEC forms a synergistic triad, where IoT provides the data, AI enables intelligence, and MEC ensures low-latency, scalable deployment [41]. This triad sets the foundation for real-time, context-aware, and autonomous edge intelligence systems.
To enhance the manageability and adaptability of large-scale IoT deployments, software-defined IoT (SD-IoT) integrates software-defined networking (SDN) principles into IoT infrastructure. By decoupling the control and data planes, SD-IoT introduces a centralized controller that dynamically monitors network conditions, allocates resources, and configures routing policies [42]. This architecture enables more intelligent and responsive task offloading strategies in MEC environments. In AI-enabled MEC systems, SD-IoT facilitates global optimization by providing real-time visibility into device status, network traffic, and computational loads across distributed edge nodes. This centralized orchestration not only improves system scalability but also supports the adaptive quality of service (QoS), energy-aware routing, and fine-grained security policies. As edge applications become more complex and time-sensitive, SD-IoT serves as a crucial enabling technology for delivering agile and programmable edge intelligence [43].

2.3.7. Key Challenges in AI-MEC Integration

Despite significant progress, the integration of AI with MEC faces several persistent challenges that hinder large-scale deployment and consistent performance [44]. These include (1) model heterogeneity and device incompatibility, as resource-hungry AI frameworks often fail to align with the constrained and diverse edge hardware; (2) stringent data privacy and security concerns, especially when processing sensitive user data, with federated learning offering only partial mitigation against risks like model inversion and gradient leakage; (3) limited adaptability in dynamic edge environments characterized by fluctuating bandwidth, user mobility, and device availability; and (4) severe energy and computational constraints, where the need for real-time inference must be balanced against battery and thermal limitations in mobile and IoT devices. These bottlenecks motivate the integration of mathematical frameworks that can enhance robustness, efficiency, and adaptability; these issues are explored in depth in subsequent sections.

2.4. Summary

To provide a comprehensive overview of how AI techniques are applied in edge computing, we summarize representative studies in Table 1. Rather than adhering to traditional groupings that focus narrowly on computation offloading, we adopt a methodology-centric taxonomy that highlights the interplay between AI techniques (such as deep learning, reinforcement learning, and federated learning) and specific problem domains. By systematically mapping each approach to its targeted challenge, optimization goal, and unique contribution, this classification not only illustrates the versatility of AI in addressing the heterogeneity of edge environments but also reveals underlying trends and gaps. Through this lens, readers can discern how different AI paradigms exploit their strengths—such as learning from limited data, optimizing under dynamic constraints, or preserving data privacy—to advance real-time, resource-constrained decision making at the network edge.

3. Mathematical Methods in Mobile Edge Computing

3.1. Motivations

With the rapid adoption of MEC in critical applications such as the IoT, intelligent transportation systems, and industrial automation, ensuring efficient and reliable computation at the edge—where resources are constrained and environments are highly dynamic—has become a key challenge. Traditional heuristic or empirically driven approaches often fall short in capturing the complex trade-offs and constraints inherent in such systems, making it difficult to guarantee performance or scalability.
Mathematical methods offer a principled and rigorous foundation for addressing these challenges. Techniques in mathematical modeling and optimization provide formal frameworks for resource allocation, task offloading, caching, and energy management. Probability theory and queuing models enable the characterization of uncertainty and latency, while game theory and graph theory offer powerful tools for modeling multi-agent cooperation and network topology. The taxonomy of mathematical methods in MEC is shown in Figure 4, which categorizes core mathematical approaches along with their targeted objectives in MEC.
By systematically incorporating mathematical approaches into MEC research, we can enhance algorithm interpretability, ensure optimality, and derive performance guarantees. This not only advances a theoretical understanding but also supports the practical deployment of intelligent and robust edge systems.

3.2. Mathematical Modeling and Optimization Techniques

In the field of edge computing, linear programming, nonlinear optimization, and integer programming play crucial roles in resource allocation and task scheduling optimization. The optimization models surveyed in our manuscript are based on several common assumptions to ensure mathematical tractability and analytical solvability. These include (1) known task arrival rates and resource demands; (2) static or slowly varying network topologies during optimization cycles; (3) convexity or quasi-convexity of objective functions; and (4) perfect or partial observability of system states. While these assumptions simplify the mathematical modeling process and support the derivation of performance guarantees, they may limit generalizability in dynamic real-world edge environments characterized by bursty traffic, mobility, device heterogeneity, and intermittent connectivity.
Several empirical studies have evaluated the performance degradation of static optimization models under such conditions. For example, Zhu et al. [45] used NSGA-II in a vehicular MEC testbed and found that topology changes caused a large increase in task delay compared with idealized settings. Similarly, Zhong et al. [46] reported that CL-ADMM exhibited slower convergence in mobile edge networks where node availability changed frequently.
To address these challenges, adaptive algorithms have been proposed. Online convex optimization and multi-armed bandit models [47] allow task scheduling decisions to evolve as real-time feedback is received, without requiring full knowledge of the arrival rates or resource states. Robust optimization methods [48] handle bounded uncertainties in parameters such as wireless channel quality or task workload by optimizing for worst-case scenarios. Metaheuristics like genetic algorithms and particle swarm optimization also perform well in highly non-convex, dynamic environments, albeit at the cost of interpretability and convergence guarantees.
Key evaluation metrics used in recent work include (1) robustness, quantified by the drop in QoS or task completion rate under simulated node failures or mobility events; (2) convergence behavior, measured by iteration counts or stability in objective values; and (3) scalability, assessed through performance consistency under varying load conditions [45,47].
To address these limitations, recent research has increasingly adopted robust optimization, online learning, and adaptive algorithms that relax ideal assumptions and enhance practical applicability [49]. Younis et al. proposed an energy-latency computation offloading strategy that balances energy consumption and task delay using mathematical models, thereby improving the overall performance of edge servers [50]. The optimization model is formulated as follows:
$$\min_{x}\; \alpha E(x) + (1 - \alpha) D(x),$$
where $E(x)$ denotes energy consumption, $D(x)$ represents task delay, and $\alpha$ is a trade-off coefficient.
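A minimal sketch of this weighted objective follows, assuming hypothetical energy and delay cost models over a discrete set of candidate offloading fractions; the exact cost functions of [50] are not reproduced here.

```python
import numpy as np

def weighted_cost(x, alpha, energy_fn, delay_fn):
    """Scalarized objective: alpha * E(x) + (1 - alpha) * D(x)."""
    return alpha * energy_fn(x) + (1 - alpha) * delay_fn(x)

# Hypothetical cost models: offloading more (larger x) saves local energy but adds delay.
energy = lambda x: 2.0 * (1.0 - x) + 0.3 * x   # local computation vs. transmission energy
delay = lambda x: 0.5 * (1.0 - x) + 1.2 * x    # local processing vs. uplink + server delay

candidates = np.linspace(0.0, 1.0, 101)        # fraction of the task offloaded
costs = [weighted_cost(x, alpha=0.6, energy_fn=energy, delay_fn=delay) for x in candidates]
best_x = candidates[int(np.argmin(costs))]     # offloading fraction minimizing the weighted cost
```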
For resource management, Zhong et al. developed a CL-ADMM optimization framework based on cooperative learning, which demonstrated significant advantages in multi-edge node collaborative computing environments [46]. Their extended ADMM algorithm incorporates cross-node consensus constraints as follows:
$$L_{\rho}(x_i, z, \lambda_i) = f_i(x_i) + \lambda_i^{\top}(x_i - z) + \frac{\rho}{2}\,\|x_i - z\|^2,$$
where $x_i$ is the local variable at node $i$, $z$ is the global consensus variable, $\lambda_i$ is the dual variable, and $\rho$ is the penalty parameter. The method achieves convergence under mild convexity assumptions.
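The following Python sketch illustrates one possible consensus-ADMM loop for the augmented Lagrangian above, using toy quadratic local objectives $f_i(x_i) = (x_i - t_i)^2$ so that the local minimization has a closed form; it is an illustrative instance, not the CL-ADMM algorithm of [46].

```python
import numpy as np

targets = np.array([1.0, 3.0, 5.0])   # t_i: each node's local preference (hypothetical)
rho = 1.0
x = np.zeros_like(targets)            # local variables x_i
lam = np.zeros_like(targets)          # dual variables lambda_i
z = 0.0                               # global consensus variable

for _ in range(100):
    # Local step: argmin_x (x - t_i)^2 + lam_i*(x - z) + (rho/2)*(x - z)^2 (closed form)
    x = (2 * targets - lam + rho * z) / (2 + rho)
    # Consensus step: average of local variables shifted by scaled duals
    z = np.mean(x + lam / rho)
    # Dual ascent step
    lam = lam + rho * (x - z)

# z converges to the minimizer of sum_i (x - t_i)^2, i.e. the mean of the targets (3.0)
```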
To address uncertainties in dynamic network environments, Tang et al. proposed a robust trajectory and offloading optimization strategy, which effectively enhanced energy efficiency through mathematical modeling [51]. Their model considers worst-case channel fluctuations with bounded uncertainty $\Delta h$ and optimizes
$$\min_{\pi}\; \max_{\Delta h \in \mathcal{H}}\; \sum_{t} E_t(\pi, h_t + \Delta h),$$
where $\pi$ is the offloading policy and $\mathcal{H}$ is the uncertainty set.
In task scheduling, Zhu et al. introduced an NSGA-II-based approach that achieved an optimal trade-off between resource utilization and performance metrics in UAV-supported edge computing systems by leveraging Pareto front optimization [52]. In distributed edge computing scenarios, Lu et al.’s distributed stochastic gradient descent algorithm attracted attention for its efficient mathematical model, showing a superior performance in large-scale data training tasks by optimizing the computational efficiency and communication overhead, with theoretical guarantees of high-probability convergence [53]. The convergence guarantee is formalized as
$$\mathbb{E}\big[\|\nabla f(x_t)\|^2\big] \leq \frac{C}{\sqrt{T}},$$
where $T$ is the number of iterations and $C$ is a constant dependent on the gradient variance and step size.
Additionally, Meng et al. proposed an energy management strategy for fuel cell trams based on multidimensional dynamic programming, effectively minimizing the operating costs through precise mathematical models [54]. Their Bellman equation for cost minimization is given by
$$V(s) = \min_{a}\left[c(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V(s')\right],$$
where $s$ is the state, $a$ is the action, $c(s, a)$ is the cost, $P(s' \mid s, a)$ is the transition probability, and $\gamma$ is the discount factor.
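A minimal value-iteration sketch for the cost-minimizing Bellman equation above, run on a toy two-state, two-action MDP whose costs and transition probabilities are purely illustrative:

```python
import numpy as np

gamma = 0.9
# cost[s, a] and P[s, a, s'] hold illustrative numbers only (not from any cited system)
cost = np.array([[1.0, 2.0],
                 [0.5, 3.0]])
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.9, 0.1]]])

V = np.zeros(cost.shape[0])
for _ in range(500):
    # Bellman backup: V(s) = min_a [ c(s,a) + gamma * sum_{s'} P(s'|s,a) V(s') ]
    Q = cost + gamma * (P @ V)        # shape (|S|, |A|)
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmin(axis=1)             # greedy cost-minimizing policy w.r.t. converged values
```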
In this part, we discuss several optimization techniques that explicitly model the trade-offs between latency and energy efficiency, which are central concerns in MEC. For instance, Pareto-based methods such as NSGA-II compute a set of non-dominated solutions that reveal the spectrum of trade-offs between latency and energy efficiency [52]. Other works adopt weighted cost functions, where latency and energy are jointly minimized using adjustable coefficients to prioritize system goals. The optimization techniques discussed, including NSGA-II and CL-ADMM, are grounded in rigorous theoretical principles. For CL-ADMM, formal convergence guarantees exist under assumptions such as the convexity of objective functions and Lipschitz continuity of gradients [46]. These are well established in distributed optimization theory and ensure that the solution iteratively converges to a saddle point or optimal value, particularly in convex MEC resource allocation problems. For NSGA-II, while analytical convergence bounds are less common due to its heuristic and evolutionary nature, its performance is often assessed via empirical convergence toward the Pareto front. Some studies do analyze asymptotic convergence in stochastic settings or use hypervolume indicators to quantify the closeness to the optimal front, but formal proofs of convergence are rare compared to convex optimization methods.

3.3. Applications of Probability and Statistics

Probabilistic and statistical techniques are essential in edge computing for uncertainty modeling, traffic prediction, and load balancing, offering a robust foundation for optimizing system performance. Wang et al. proposed an attentional Markov model to capture fluctuations in computing demands from mobile devices, significantly improving resource scheduling accuracy [55]. The model uses a discrete-time Markov chain whose transition probability $P(s_{t+1} \mid s_t)$ is modulated by an attention mechanism $\alpha_t$ based on recent demand patterns:
$$P(s_{t+1} \mid s_t) = \alpha_t \cdot T_{s_t,\, s_{t+1}},$$
where $T$ is the base transition matrix and $\alpha_t$ is computed via an attention network trained on historical sequences.
For system robustness, Li et al. introduced a hierarchical Bayesian network-based method that uses probabilistic inference to assess the fault probabilities of edge nodes, supporting fault-tolerant design [56]. The posterior probability of node failure is derived via Bayes’ theorem:
$$P(F_i \mid E) = \frac{P(E \mid F_i)\, P(F_i)}{P(E)},$$
where $F_i$ is the failure state of node $i$ and $E$ is the observed evidence from monitoring data. Conditional dependencies are captured via the network structure. The efficacy of survival analysis and Bayesian inference models in edge computing is commonly validated using statistical metrics tailored to predictive performance and probabilistic accuracy. Statistical techniques such as the concordance index (C-index), time-dependent ROC curves, and calibration plots are employed to evaluate discriminative ability and calibration accuracy. For Bayesian inference models, validation typically includes posterior predictive checks, BIC/DIC scores for model selection, and log-likelihood or cross-validated predictive likelihood measures. These methods ensure that both predictive accuracy and uncertainty quantification are rigorously assessed, supporting reliable deployment in edge environments with potential failures or anomalies.
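A small numerical sketch of the posterior failure probability via Bayes' rule; the prior and likelihood values below are illustrative and are not taken from [56].

```python
def posterior_failure_prob(prior_fail, p_evidence_given_fail, p_evidence_given_ok):
    """P(F | E) = P(E | F) P(F) / [ P(E | F) P(F) + P(E | not F) P(not F) ]."""
    num = p_evidence_given_fail * prior_fail
    den = num + p_evidence_given_ok * (1.0 - prior_fail)
    return num / den

# Illustrative numbers: a node fails 2% of the time; the observed alarm pattern is far
# more likely under failure (0.9) than under normal operation (0.05).
p = posterior_failure_prob(prior_fail=0.02,
                           p_evidence_given_fail=0.9,
                           p_evidence_given_ok=0.05)
# p is roughly 0.27, so the alarm alone is suggestive but not conclusive.
```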
In traffic prediction, Blair et al. developed a collaborative task processing method using statistical modeling to capture spatial-temporal correlations in traffic data, optimizing hotspot predictions and supporting resource reallocation [57]. Their model applies multivariate linear regression with lagged variables:
$$y_t = \sum_{i=1}^{p} A_i\, x_{t-i} + \epsilon_t,$$
where $y_t$ is the predicted traffic intensity, $x_{t-i}$ are historical spatial inputs, and $\epsilon_t$ is Gaussian noise.
For load balancing, Meydani et al. proposed a non-parametric Bootstrap method that enhances generalization in small-sample scenarios through statistical resampling, providing a reliable solution for smart grids [58]. The method generates empirical confidence intervals via
$$\hat{\theta} = \frac{1}{B} \sum_{b=1}^{B} \theta^{(b)},$$
where $\theta^{(b)}$ is the statistic computed from the $b$-th resampled dataset and $B$ is the number of bootstrap iterations.
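A minimal non-parametric bootstrap sketch in Python, estimating a mean load and its empirical confidence interval from a small hypothetical sample:

```python
import numpy as np

rng = np.random.default_rng(0)
load_samples = np.array([0.41, 0.38, 0.55, 0.47, 0.62, 0.35, 0.58])  # hypothetical measurements

B = 2000
boot_stats = np.empty(B)
for b in range(B):
    # Resample with replacement and recompute the statistic theta^(b)
    resample = rng.choice(load_samples, size=load_samples.size, replace=True)
    boot_stats[b] = resample.mean()

theta_hat = boot_stats.mean()                               # bootstrap estimate of the mean load
ci_low, ci_high = np.percentile(boot_stats, [2.5, 97.5])    # 95% empirical confidence interval
```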
In risk assessment for network topology changes, Maina et al. applied Copula theory to model multivariate dependencies, optimizing collaborative computing among edge nodes [59]. The joint distribution is constructed via
$$F(x_1, x_2, \ldots, x_d) = C\big(F_1(x_1), F_2(x_2), \ldots, F_d(x_d)\big),$$
where $C$ is the copula function and $F_i$ are the marginal distributions. This allows for the flexible modeling of dependence across performance metrics. Copula theory models multivariate dependencies in collaborative edge computing by decoupling marginal distributions from their dependency structure. This enables the flexible modeling of complex, nonlinear relationships between performance metrics such as latency, energy, and load across edge nodes. Unlike traditional multivariate methods, copulas impose no strict constraints on the form of the marginal distributions, allowing heterogeneous and empirically derived marginals. Commonly applied copulas (e.g., Gaussian, Clayton, Gumbel) are selected based on the nature of inter-variable dependence, such as tail correlation. This flexibility makes copula models especially suitable for capturing system-wide behaviors in decentralized and dynamic MEC environments.
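The sketch below draws correlated latency–energy samples through a Gaussian copula: a correlated normal vector is mapped to uniforms via the normal CDF and then pushed through arbitrary marginals. The exponential latency and gamma energy marginals, and the correlation value, are chosen purely for illustration and do not come from [59].

```python
import numpy as np
from scipy.stats import norm, expon, gamma

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])                    # dependence structure (copula parameter)

# Step 1: correlated standard normals; Step 2: uniforms via the normal CDF
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=10_000)
u = norm.cdf(z)

# Step 3: arbitrary marginals F_1, F_2 applied through their inverse CDFs
latency = expon(scale=20.0).ppf(u[:, 0])         # ms, illustrative marginal
energy = gamma(a=2.0, scale=0.5).ppf(u[:, 1])    # J, illustrative marginal
# latency and energy now share the Gaussian-copula dependence while keeping their own marginals
```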
Additionally, Ahmed et al. introduced a hard drive failure prediction method based on survival analysis, which optimizes resource reservation and reduces waste through statistical modeling [60]. The survival probability is modeled as:
$$S(t) = P(T > t) = \exp\left(-\int_{0}^{t} \lambda(u)\, du\right),$$
where $\lambda(u)$ is the hazard function estimated from historical failure data.
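A short sketch evaluating the survival function from a hazard rate by numerical integration; the increasing (wear-out) hazard used here is purely illustrative and is not the estimator of [60].

```python
import numpy as np

def survival_prob(t, hazard, n_steps=1000):
    """S(t) = exp(-integral_0^t lambda(u) du), approximated by a Riemann sum."""
    u = np.linspace(0.0, t, n_steps)
    du = u[1] - u[0]
    cumulative_hazard = np.sum(hazard(u)) * du
    return np.exp(-cumulative_hazard)

# Illustrative wear-out hazard, e.g. for a drive aging over months of operation
hazard = lambda u: 0.01 * (u / 12.0) ** 1.5

print(survival_prob(24.0, hazard))   # probability the drive survives past 24 months
```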
These studies demonstrate that integrating probabilistic and statistical methods significantly enhances resource utilization, stability, and performance in edge computing systems. As the complexity and uncertainty of edge computing environments grow, the application of these methods is expected to expand, providing a strong foundation for the intelligent evolution of edge computing systems.

3.4. Queuing Theory and Network Performance Analysis

Queuing theory is a crucial tool for performance analysis and optimization in edge computing, particularly in network throughput, latency control, and queue management. Xu et al. proposed a priority-aware M/G/1 queuing model that improves task scheduling efficiency and flexibility by supporting differentiated QoS requirements through a task priority mechanism [61]. The average waiting time $W_q^{(i)}$ for each class under non-preemptive priority is given by
$$W_q^{(i)} = \frac{\lambda\, \mathbb{E}[S^2]}{2\left(1 - \sum_{j=1}^{i} \rho_j\right)},$$
where $\lambda$ is the arrival rate, $\mathbb{E}[S^2]$ is the second moment of the service time, and $\rho_j$ is the traffic intensity of class $j$.
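A minimal sketch of the per-class waiting-time formula above, following the expression as stated (an aggregate second moment and cumulative traffic intensities); the arrival rates and service moments are hypothetical.

```python
import numpy as np

lam = np.array([0.2, 0.3, 0.1])    # arrival rate per priority class (class 0 = highest), hypothetical
mean_service = 1.0                 # E[S]
second_moment = 2.5                # E[S^2]
rho = lam * mean_service           # traffic intensity per class

total_lam = lam.sum()
Wq = []
for i in range(len(lam)):
    # W_q^(i) = lambda * E[S^2] / (2 * (1 - sum_{j<=i} rho_j))
    denom = 2.0 * (1.0 - rho[: i + 1].sum())
    Wq.append(total_lam * second_moment / denom)
# Wq[i] grows with class index: lower-priority classes wait behind more traffic
```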
Zhang et al. designed an edge-cloud collaborative architecture using queuing network theory, reducing end-to-end latency by approximately 15% and offering insights for real-time optimization [62]. The total latency $T_{\text{total}}$ in the tandem queue model is approximated as
$$T_{\text{total}} = \sum_{k=1}^{n} \frac{1}{\mu_k - \lambda_k},$$
where $\mu_k$ and $\lambda_k$ are the service and arrival rates at stage $k$.
To tackle resource contention in multi-hop edge networks, Wang et al. utilized an extended stochastic Petri net model to identify and mitigate system bottlenecks, providing an innovative approach for network performance optimization [63]. They define transition firing rates $\nu_i$ and use reachability graphs to evaluate token accumulation, estimating bottleneck probabilities via Markov chain steady-state analysis. Compared to traditional queuing models, fluid flow models offer a scalable and continuous approximation of high-volume traffic behavior, enabling efficient analysis in large-scale edge environments. These models are particularly effective when dealing with aggregate task flows or bursty traffic patterns. Stochastic Petri nets, on the other hand, enhance the modeling of concurrent task execution and resource contention by capturing random transitions and dependencies between service stages. Their ability to represent interdependent queues and parallel workflows makes them well suited for analyzing performance bottlenecks in complex MEC systems.
For load balancing in heterogeneous server clusters, Lee et al. developed a G/G/m queue-based strategy that maintained system stability under high-concurrency scenarios [64]. The mean queue length $L_q$ is estimated using Kingman's approximation:
$$L_q \approx \frac{\rho^2\,(C_a^2 + C_s^2)}{2(1 - \rho)},$$
where $\rho$ is the server utilization and $C_a$, $C_s$ are the coefficients of variation of the inter-arrival and service times, respectively.
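Kingman's approximation can be evaluated directly; the sketch below computes $L_q$ for a hypothetical edge server whose utilization and variability coefficients are illustrative.

```python
def kingman_queue_length(rho, ca, cs):
    """Approximate mean queue length: L_q ~= rho^2 * (Ca^2 + Cs^2) / (2 * (1 - rho))."""
    return (rho ** 2) * (ca ** 2 + cs ** 2) / (2.0 * (1.0 - rho))

# Illustrative values: 85% utilization, bursty arrivals (Ca = 1.4), fairly regular service (Cs = 0.6)
print(kingman_queue_length(rho=0.85, ca=1.4, cs=0.6))
```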
In large-scale traffic prediction, Guo et al.'s fluid flow model effectively captured dynamic traffic changes, enhancing prediction accuracy and supporting network resource reallocation [65]. The model is based on a partial differential equation:
$$\frac{\partial \rho(x, t)}{\partial t} + \frac{\partial}{\partial x}\big[\rho(x, t)\, v(x, t)\big] = 0,$$
where $\rho(x, t)$ is traffic density and $v(x, t)$ is traffic velocity. This allows modeling of traffic evolution over time and space.
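A compact finite-difference sketch of the conservation law above, using a first-order upwind scheme with a constant velocity field; both the initial density bump and the velocity are illustrative, not parameters from [65].

```python
import numpy as np

nx, L, v = 200, 10.0, 1.0              # grid points, domain length, constant velocity (illustrative)
dx = L / nx
dt = 0.4 * dx / v                      # CFL-stable time step
x = np.linspace(0.0, L, nx)
rho = np.exp(-((x - 2.0) ** 2))        # initial traffic density bump (illustrative)

for _ in range(200):
    flux = rho * v                                      # q(x, t) = rho * v
    # Upwind update for v > 0: d rho / dt = -(q_i - q_{i-1}) / dx
    rho[1:] -= dt / dx * (flux[1:] - flux[:-1])
# The density bump advects to the right while total "traffic mass" stays approximately conserved
```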
Addressing the impact of user behavior on system performance, Niyato et al. introduced a queuing game theory model that mitigates queue congestion caused by selfish user actions, optimizing system fairness and efficiency [66]. Each user's joining decision follows a Nash equilibrium derived from individual utility maximization:
$$U_i = V_i - C_i(W_i),$$
where $V_i$ is the user's valuation of the service and $C_i(W_i)$ is the cost incurred due to the expected waiting time $W_i$. Equilibrium is achieved when no user has an incentive to change their strategy.
These studies demonstrate that various queuing theory models provide powerful tools for optimizing resource management and performance in edge computing systems. Queuing models, though analytically elegant, often assume Poisson arrivals and exponential service times, which do not accurately reflect real-world edge workloads. Moreover, they struggle to scale in multi-service, multi-queue architectures or capture the full complexity of multi-hop, service-chained environments. Incorporating these mathematical formulations not only offers interpretability and analytical rigor but also facilitates the theoretical validation of AI-driven edge architectures.

3.5. Game Theory and Resource Sharing Mechanisms

Game theory is a vital framework for resource optimization and task allocation in edge computing, offering robust modeling and analytical tools for resource sharing, task allocation, and competition strategy design. Cheng et al. proposed a Stackelberg game-based framework for edge pricing and resource allocation, optimizing interactions between service providers and users to enhance resource utilization [67]. Expanding this approach, Tao et al. developed a Stackelberg game model to balance service pricing and resource allocation, providing insights into economic optimization for edge computing systems [68]. Stackelberg game models and auction theory frameworks differ fundamentally in their mathematical formulations. Stackelberg games adopt a bilevel optimization structure, modeling leader–follower dynamics where a dominant entity sets prices or resource policies and users respond accordingly. The solution, namely Stackelberg equilibrium, is typically derived through backward induction. In contrast, auction theory frames resource allocation as a mechanism design problem, incorporating bidding strategies, valuation functions, and incentive compatibility constraints. Auctions are particularly suited for handling information asymmetry and dynamic competition, offering flexible and decentralized resource sharing mechanisms in edge environments.
For collaborative computing, Abou El Houda et al. introduced a cooperative game-based framework that improved efficiency among edge nodes by implementing a profit-sharing mechanism, proving effective in complex IIoT scenarios [69]. In dynamic environments, Zhang et al. applied evolutionary game theory for energy-efficient subcarrier allocation, revealing stable equilibrium strategies through dynamic analysis [70]. In resource trading, Qiu et al.’s survey on auction theory highlighted various optimization methods to reduce resource waste, offering theoretical guidance for resource market design in edge computing [71].
To address cooperation under information asymmetry, Li et al. proposed a contract-theory-based incentive mechanism that ensures participant cooperation through well-designed contract terms, showing significant effectiveness in federated learning for health crowdsensing [72]. These studies illustrate that game theory provides powerful tools for resource allocation and competition management in edge computing, laying a solid theoretical foundation for enhancing system stability, fairness, and economic efficiency. As edge computing scenarios grow more complex, the application of game theory is expected to become increasingly significant in system design and optimization. Game-theoretic methods, while valuable for modeling strategic interactions, face scalability issues in large populations and may converge to suboptimal equilibria without strong assumptions.
Queuing game theory integrates principles from queuing theory and non-cooperative game theory to model and analyze scenarios where users make self-interested decisions about joining or leaving queues based on individual utility, typically defined as the trade-off between service benefit and queuing cost. The mathematical rationale lies in capturing strategic decision making under congestion, where each user’s choice affects the system’s queue state and, in turn, the payoff of others. Equilibrium concepts such as Nash equilibrium or Wardrop equilibrium are used to analyze steady-state behaviors in such systems. This contrasts with cooperative game-based mechanisms, where users (e.g., edge nodes or service providers) are incentivized to collaborate to achieve a socially optimal outcome. In cooperative settings, tools like coalition formation, profit-sharing schemes, and Shapley value allocations are applied to ensure fair and stable cooperation. While cooperative games aim for system-wide efficiency and fairness, queuing games are better suited for real-time, decentralized decision making in environments with limited coordination and selfish agents.

3.6. Graph Theory and Network Topology Optimization

Graph theory is a crucial tool for network modeling and optimization in edge computing, offering significant benefits for resource allocation, task scheduling, and data transmission. For network structure optimization, Yadav et al. proposed a minimum spanning tree-based energy-efficient cluster head election algorithm that reduced energy consumption by about 18% through topology optimization [73]. Zhao et al. introduced a community detection algorithm based on graph compression, which minimized cross-domain communication overhead by identifying correlated subgroups, enhancing resource utilization [74].
In dynamic resource allocation, Alex et al. employed a digital twin-based provisioning method that integrates deep Q-network (DQN) and graph theory models, significantly improving adaptability in 6G-enabled mobile edge networks [75]. For task scheduling in complex scenarios, Wu et al. proposed a multi-dimensional optimization approach using a hypergraph model to accurately represent resource constraints, providing a novel framework for task scheduling optimization [76]. In data transmission optimization, Alzaben et al. developed a path selection algorithm based on the max-flow min-cut theorem, which alleviated bandwidth bottlenecks and improved the data transmission efficiency between edge and cloud [77].
While traditional graph theory-based models primarily assume static or semi-static network topologies, recent advances incorporate dynamic graph structures to adapt to real-time MEC conditions. Time-varying graphs, sliding window models, and snapshot-based representations capture evolving connectivity. Moreover, online algorithms and graph neural networks (GNNs) are increasingly employed to learn and respond to topological changes in real time. These enhancements enable dynamic task scheduling, robust routing, and service migration in response to user mobility, link variability, and node churn. These studies demonstrate that graph theory provides powerful tools for optimizing network topology, resource allocation, and task scheduling in edge computing. As edge computing environments become increasingly complex, the role of graph theory in system optimization is expected to grow further.
The integration taxonomy proposed in our paper is designed to ensure compatibility among diverse mathematical paradigms by functionally aligning each method with specific MEC system challenges and layers rather than attempting to merge them into a single unified model. Graph theory supports network topology optimization; game theory addresses multi-agent interaction and resource allocation; and queuing theory models performance metrics such as latency and congestion. These paradigms operate independently but interface through shared variables (e.g., task arrival rates, resource prices, or link loads), enabling modular composition and hierarchical orchestration across the system.

3.7. Summary

Beyond applying existing mathematical frameworks, many studies have introduced novel theoretical formulations and model extensions to address the unique demands of mobile edge computing. For example, Cheng et al. [67] extend classical Stackelberg game models to handle edge resource pricing with a multi-leader, multi-follower structure. Their model incorporates latency-aware utility functions and enforces Stackelberg equilibrium using dual decomposition to ensure scalability. In graph theory, hypergraph-based models [69] have been introduced to capture many-to-many relationships in MEC task scheduling, where each hyperedge connects multiple tasks to a shared execution unit. This structure improves over simple bipartite graphs by modeling task affinity, resource contention, and co-execution constraints. Similarly, stochastic Petri nets [63] have been employed to simulate multi-hop service chains in heterogeneous networks, capturing concurrency, timing variability, and transition probabilities in edge-fog-cloud orchestration. These models generalize traditional queuing theory and support state–space analysis for reliability and throughput optimization. Other innovations include copula-based models for joint latency-energy tail-risk estimation, Lyapunov-drift-plus-penalty techniques adapted for federated systems, and non-convex ADMM variants customized for partial observability in decentralized MEC environments. Together, these innovations mark significant theoretical progress in aligning classical mathematical tools with the architectural realities of modern edge systems.
Table 2 provides a structured overview of mathematical methodologies that underpin optimization strategies in edge computing environments. Foundational techniques—such as linear, nonlinear, and integer programming—offer formalized frameworks to tackle core challenges like resource allocation, task scheduling, and latency minimization. Multi-objective optimization algorithms (e.g., NSGA-II) facilitate Pareto-efficient trade-offs in complex, resource-constrained scenarios such as UAV-assisted edge computing. For large-scale distributed training tasks, SGD ensures scalable convergence while mitigating communication overhead. Probabilistic and statistical models enhance system resilience by addressing uncertainty, fault tolerance, and load balancing. Furthermore, advanced modeling tools—such as Petri nets, queuing theory, and copula theory—capture dynamic behaviors and support fairness and efficiency under competitive or congested conditions. Finally, economic theories and graph-based approaches contribute to decentralized decision-making, resource market design, and topology-aware network optimization, highlighting the interdisciplinary nature of mathematical innovations in edge computing.

4. Mathematical Methods and Artificial Intelligence Based Applications

Recent years have witnessed remarkable advances in AI, with transformative applications emerging in diverse domains. In particular, mission-critical systems, including intelligent transportation systems (ITSs), intelligent manufacturing infrastructures, and the Internet of Vehicles (IoV), place substantially more stringent demands on ultra-low latency and network reliability than latency-tolerant applications such as augmented/virtual reality (AR/VR), online gaming platforms, and content delivery networks (CDNs). However, conventional cloud computing architectures frequently prove inadequate in meeting these rigorous QoS prerequisites due to their inherent limitations in geographical distribution and transmission overhead. This technological gap has motivated the scientific community to exploit edge computing (EC) paradigms, which strategically position computational and storage resources proximal to data generation sources, thereby enabling real-time decision-making capabilities and enhanced service continuity.
To systematically examine the synergistic potential between EC architectures and AI-driven solutions, this section presents a comprehensive review of cutting-edge research initiatives spanning multiple domains: intelligent transportation systems, intelligent manufacturing, smart cities, intelligent healthcare, and smart retail.

4.1. Intelligent Transportation Systems

Intelligent Transportation Systems (ITSs) represent an integrated framework that combines information technology, communication networks, sensor systems, computer science, and existing transportation infrastructure [78,79]. ITS environments generate massive volumes of data at the network edge through millions of interconnected devices and sensors. Data-driven AI plays a central role in advancing ITS capabilities. By extending AI processing to the network edge, edge intelligence (EI) allows ITS applications to achieve lower latency, enhanced security, reduced backbone network congestion, and a more effective utilization of edge-generated big data. This section introduces the primary deep reinforcement learning (deep RL) algorithms applied within ITS scenarios [80].
In adaptive planning for high-density traffic, interactive behavior-aware approaches benefit from hierarchical learning. However, a core challenge in ITSs remains: autonomous vehicles often fail to execute real-time responses because of their limited ability to perceive dynamic objects beyond the direct visual range. To address this, vehicle–road–cloud collaboration becomes essential. Hong et al. [81] introduce a hierarchical edge-decision framework designed to handle real-time motion capabilities derived from analogical reasoning about spatiotemporal events, as shown in Figure 5. The figure illustrates how edge computing nodes interact with intelligent vehicles by processing raw data locally and returning results in real time, highlighting the integration of mathematical modeling and AI in latency-sensitive applications such as autonomous driving. The framework employs spatiotemporal pattern matching functions and Bayesian belief propagation to infer likely vehicle trajectories and decision sequences, enabling predictive control under partial observability. Its mathematical foundation also includes decision trees for initial motion classification and Kalman filters for trajectory smoothing, and the approach was rigorously validated in challenging autonomous driving scenarios, including complex environments, dynamic conditions, and time-varying real-world settings.
Vehicular cooperative perception (VCP) enhances autonomous driving by enabling vehicles to share sensing data via V2X communication, extending individual sensing ranges and improving accuracy. However, challenges arise from redundant data due to overlapping sensing areas and limited on-board computational resources, which strain communication bandwidth and delay real-time processing. Existing VCP studies, such as intelligent data selection [82] and deep RL for redundancy mitigation [83], optimize information sharing to reduce communication load. However, these operate in isolation, neglecting integrated sensing-communication-computation (ISCC). Edge computing-based offloading [84,85] can address computational limits but ignores data quality and redundancy, leading to suboptimal resource allocation and increased latency.
To bridge this gap, Dong et al. [86] propose an ISCC-based task offloading and resource allocation (ITORA) framework. By dividing the region of interest (RoI) into sub-regions, ITORA defines an information value function $I(x) = \omega_d D(x) + \omega_r R(x) + \omega_c C(x)$, where $D$, $R$, and $C$ respectively represent distance, available resource, and data completeness scores, and the $\omega_i$ are tunable weight parameters. Edge servers facilitate collaborative computing by fusing data from multiple vehicles and adapting to real-time network conditions to balance local and edge processing. Simulation results show that ITORA achieves a 44.64% reduction in average energy and a 98.16% improvement in total information value compared to baseline models. However, scalability may become a concern in dense urban deployments due to the increased computational complexity of solving the assignment problem.
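A minimal sketch of the information-value scoring combined with a greedy sub-region selection under a bandwidth budget; the weights, per-region scores, and budget constraint are hypothetical simplifications, not the actual ITORA optimization of [86].

```python
import numpy as np

def information_value(D, R, C, w_d=0.4, w_r=0.3, w_c=0.3):
    """I(x) = w_d * D(x) + w_r * R(x) + w_c * C(x) for each sub-region x."""
    return w_d * D + w_r * R + w_c * C

# Hypothetical per-sub-region scores (distance, available resources, data completeness)
D = np.array([0.9, 0.4, 0.7, 0.2])
R = np.array([0.5, 0.8, 0.3, 0.9])
C = np.array([0.6, 0.7, 0.9, 0.4])
cost = np.array([2.0, 1.0, 3.0, 1.5])      # e.g., bandwidth needed to share each sub-region

value = information_value(D, R, C)
order = np.argsort(-value / cost)          # greedy: highest value per unit bandwidth first
budget, selected = 4.0, []
for i in order:
    if cost[i] <= budget:
        selected.append(int(i))
        budget -= cost[i]
# 'selected' holds the sub-regions a vehicle would share first under the bandwidth budget
```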
In vehicular networks, long-term dependencies are captured through reinforcement learning models based on Markov decision processes, where agents optimize discounted cumulative rewards over time. Advanced deep RL methods incorporate LSTM or RNN architectures to explicitly model temporal dependencies. The mathematical validation of long-term learning is achieved through convergence analysis of the Bellman equation, stability proofs of value iteration, and empirical evaluation using metrics such as the average cumulative reward and reward variance across episodes. These methods ensure that the learned policies not only perform well in the short term but also maintain consistent performance over extended driving horizons.
In vehicular networks, multimedia streaming, such as real-time video feeds for cooperative perception, driver assistance, and infrastructure monitoring, requires ultra-low latency and high bandwidth. To meet these demands under dynamic topology and high mobility, the software-defined Internet of Vehicles (SD-IoV) integrates SDN principles into vehicular communication frameworks. The SDN controller centrally manages routing policies, flow priorities, and bandwidth allocation based on real-time traffic and QoS requirements [87]. By decoupling the control and data planes, SD-IoV enables intelligent multimedia delivery that adapts to vehicular speeds, signal fading, and load fluctuations. For example, high-definition video streams for V2X-based collision avoidance can be prioritized over less critical data transmissions. Mathematical tools such as queuing theory and utility-based optimization can be applied within SD-IoV controllers to ensure bounded delay and optimal packet scheduling [88]. The combination of SD-IoV and MEC further enhances system responsiveness by enabling edge-side video preprocessing, caching, and adaptive encoding—key capabilities for robust and scalable intelligent transportation systems.

4.2. Smart Cities

Global smart city initiatives have rapidly expanded across diverse urban regions. Rapid urbanization and technological progress have driven cities to adopt multifunctional frameworks, prioritizing resident welfare, environmental sustainability, and resource efficiency through data-driven infrastructure and adaptive governance. These systems integrate socioeconomic, ecological, and technological domains to balance economic growth with cultural preservation, renewable energy integration, and community resilience [89].
In urban regions, rising energy requirements call for innovative approaches that balance consumption optimization and data governance. Conventional energy management systems often struggle to meet both efficiency and security goals. The study in [90] presents an integrated framework combining DRL and blockchain technology for smart city energy systems. The DRL agent continuously interacts with the environment to learn optimal energy distribution strategies based on contextual variables such as supply–demand imbalance, dynamic pricing, and weather conditions. Once a decision is made, it is broadcast to the blockchain layer. The blockchain serves as a trust and execution layer, where smart contracts record, validate, and enforce the DRL-based decisions. For instance, in a peer-to-peer (P2P) energy market, smart contracts verify that the energy exchange terms align with DRL recommendations and ensure fair transactions among distributed participants without central oversight. This process enhances transparency, non-repudiation, and auditability [91].
The DRL model is formulated as a Markov decision process (MDP) $(\mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{P}, \gamma)$, where states $\mathcal{S}$ include load demand and grid capacity, actions $\mathcal{A}$ are energy allocation strategies, $\mathcal{R}$ is a reward function based on cost savings and efficiency, and $\mathcal{P}$ encodes the transition probabilities. However, the DRL model requires extensive training data and the tuning of hyperparameters. Its performance may degrade in out-of-distribution scenarios, such as during blackouts or major infrastructure failures, indicating a need for robust offline reinforcement learning techniques.
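A minimal tabular sketch of this MDP formulation is given below, assuming a coarse discretization of supply–demand imbalance into a handful of states and a toy reward; it illustrates the Q-learning update that underlies many DRL agents rather than the specific architecture of [90].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: states are supply-demand imbalance levels, actions are allocation strategies.
n_states, n_actions = 5, 3
gamma, alpha, eps = 0.95, 0.1, 0.1
Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Toy environment: reward is a placeholder cost-saving signal, transitions are random."""
    reward = -abs(state - 2 * action)
    next_state = int(rng.integers(n_states))
    return next_state, reward

state = int(rng.integers(n_states))
for _ in range(20000):
    # Epsilon-greedy action selection.
    action = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Q-learning update toward the Bellman target r + gamma * max_a' Q(s', a').
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("greedy allocation per imbalance level:", Q.argmax(axis=1))
```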
Crowd surges and related accidents present substantial risks to public safety, leading to numerous casualties over recent decades [92]. To address crowd scale variations, existing methods typically rely on computationally intensive backbone networks or complex modules, which demand high runtime resources and restrict deployment scenarios. Lin et al. [93] introduced RepMobileNet, a lightweight multi-branch architecture with edge intelligence capabilities for multi-scale spatial feature extraction. The model incorporates lightweight multi-branch depthwise separable convolution blocks (DSBlocks) to effectively capture multi-scale features in dense crowds.
Mathematically, DSBlocks decompose standard convolutions into depthwise and pointwise components, reducing the per-position computation from $O(K^2 \cdot C_{in} \cdot C_{out})$ to $O(K^2 \cdot C_{in} + C_{in} \cdot C_{out})$, where $K$ is the kernel size and $C_{in}$, $C_{out}$ are the input/output channel counts. This enables real-time inference on edge devices. To improve density estimation, Lin et al. applied a regression-based loss $L = \frac{1}{N}\sum_{i=1}^{N}\lVert y_i - \hat{y}_i \rVert^2$ over annotated crowd maps. Experimental results demonstrate a 19.2% reduction in mean absolute error (MAE) compared to MobileNetV2. Nevertheless, such models are sensitive to camera angles and occlusion, which can degrade performance in unstructured environments. Future improvements may incorporate graph neural networks to model the spatial relationships among crowd clusters.
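The reduction can be checked numerically; the following sketch compares multiply-accumulate counts for a standard and a depthwise separable 3×3 convolution on an illustrative layer size (the dimensions are arbitrary, not those of RepMobileNet).

```python
def standard_conv_cost(K, C_in, C_out, H, W):
    # Multiply-accumulate count of a standard KxK convolution over an HxW output feature map.
    return K * K * C_in * C_out * H * W

def depthwise_separable_cost(K, C_in, C_out, H, W):
    # Depthwise stage (K*K*C_in per position) plus pointwise stage (C_in*C_out per position).
    return (K * K * C_in + C_in * C_out) * H * W

# Illustrative layer: 3x3 kernel, 64 -> 128 channels, 56x56 feature map (spatial factor is common to both).
K, C_in, C_out, H, W = 3, 64, 128, 56, 56
std = standard_conv_cost(K, C_in, C_out, H, W)
dsb = depthwise_separable_cost(K, C_in, C_out, H, W)
print(f"standard: {std:.3e} MACs, separable: {dsb:.3e} MACs, ratio: {dsb / std:.3f}")
```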
As smart cities increasingly rely on multimedia applications, such as real-time video analytics, traffic monitoring, and augmented reality, the need for efficient multimedia traffic management becomes critical. Software-defined Internet of Multimedia Things (SD-IoMT) extends the principles of SDN to multimedia IoT ecosystems, enabling centralized, programmable control over heterogeneous devices and data streams [94]. By dynamically adjusting routing paths, bandwidth allocation, and media transcoding strategies, SD-IoMT ensures the optimal quality of experience (QoE) for end users. In AI-driven edge scenarios, SD-IoMT architectures support adaptive streaming, load balancing, and service prioritization based on real-time network feedback. Mathematical methods such as queuing theory, utility optimization, and game theory can be integrated into SD-IoMT controllers to optimize multimedia flow scheduling and edge resource usage [95]. This synergy is particularly beneficial for bandwidth-intensive and latency-sensitive tasks in smart city environments, where large-scale sensor data must be processed and acted upon promptly to ensure safety, efficiency, and responsiveness.

4.3. Intelligent Healthcare

The integration of IoT and cloud computing has propelled the healthcare sector into an era of data-driven diagnostics, with wearable devices enabling continuous health monitoring [96]. While cloud-based AI analytics enhance diagnostic precision, traditional centralized cloud infrastructure faces challenges in meeting telemedicine’s demands for ultra-low latency and real-time data responsiveness. EC emerges as a pivotal solution, offering decentralized data processing capabilities at the network’s edge to address these limitations.
EC optimizes medical systems by minimizing latency, ensuring secure data transmission, and improving location-sensitive services [97]. Its distributed architecture reduces critical response times for emergency care, where delays in critical decision making can be life-threatening. Moreover, EC’s localized data handling mitigates privacy risks inherent in cloud-based models, aligning with stringent healthcare data regulations [98]. Current advancements underscore that integrating edge computing with emerging technologies like blockchain and 6G networks will enable future healthcare ecosystems to achieve seamless connectivity, scalable data management, and real-time intelligence, thus overcoming existing infrastructure and operational bottlenecks [99].
Subsequent sections will highlight the role of EC in the advancement of AI-driven healthcare applications, including remote disease diagnosis, epidemic surveillance, and personalized treatment protocols.
The multimodal multidomain multilingual foundation model (M3FM) [100] introduces decentralized frameworks that align visual and textual representations in a shared latent space, enabling zero-shot clinical inference across modalities and languages. This contrasts starkly with supervised learning methods, which rely on centralized cloud processing and labeled data and struggle with rare diseases or low-resource scenarios [100,101].
For time-sensitive conditions such as Alzheimer’s and cardiovascular diseases, EC and AI offer transformative advantages through real-time monitoring and multisource data fusion. Unlike conventional methods reliant on periodic hospital visits and isolated sensor data, IoT-integrated edge systems [102,103] continuously monitor physiological parameters (e.g., SpO2) and perform anomaly detection on the device. AI-driven ECG analysis [103] achieves 98.2% accuracy in arrhythmia detection through edge-based convolutional neural networks (CNNs), eliminating the delays caused by cloud transmission.
Mathematically, the CNN-based ECG classifier extracts temporal–spatial features using convolution layers defined as $h_l = f(W_l * x + b_l)$, where $W_l$ is the filter kernel, $*$ denotes convolution, and $f$ is a nonlinear activation function (e.g., ReLU). The final classification layer uses softmax activation $\hat{y} = \mathrm{softmax}(W_s h + b_s)$ to output disease probabilities. To optimize deployment on edge devices, model compression techniques such as pruning and quantization are applied. Quantization reduces bit precision (e.g., 32-bit to 8-bit), while pruning removes weights with magnitudes below a threshold $\tau$: $W_i = 0$ if $|W_i| < \tau$.
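A compact numerical sketch of these two compression steps is shown below; the weight matrix, threshold, and symmetric 8-bit scheme are illustrative assumptions rather than the exact procedure used in [103].

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(128, 64)).astype(np.float32)   # illustrative layer weights

# Magnitude pruning: zero out weights whose magnitude falls below the threshold tau.
tau = 0.5
W_pruned = np.where(np.abs(W) < tau, 0.0, W)
sparsity = float(np.mean(W_pruned == 0.0))

# Symmetric 8-bit quantization: map floats to int8 with a single per-tensor scale factor.
scale = np.abs(W_pruned).max() / 127.0
W_int8 = np.clip(np.round(W_pruned / scale), -127, 127).astype(np.int8)
W_dequant = W_int8.astype(np.float32) * scale

print(f"sparsity after pruning: {sparsity:.2%}")
print(f"max dequantization error: {np.abs(W_dequant - W_pruned).max():.4f}")
```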
Furthermore, hybrid frameworks combining sensory data with nonsensory input (e.g., patient history) [102,104] use rule-based algorithms and machine learning to achieve 98.4% triage accuracy, surpassing single-modal systems that lack contextual integration. These technical advances, namely real-time processing, multi-source fusion, and decentralized intelligence, enable personalized treatment planning, as demonstrated by studies integrating remote cognitive testing with blood biomarkers for Alzheimer’s [105,106]. Future research will focus on optimizing energy-efficient edge devices, establishing interoperability standards, and addressing ethical concerns about data privacy and algorithmic bias [104,107].
Coronary heart disease necessitates continuous electrocardiogram (ECG) monitoring to prevent complications [108], but traditional clinical solutions and existing wearables struggle with real-time processing constraints and high energy consumption. A groundbreaking wearable system presented in reference [109] addresses these challenges by integrating a custom analog front-end with a low-power system-on-chip (SoC). This architecture leverages EC to perform on-device signal processing and AI-driven classification, eliminating cloud dependency.
Raw ECG signals undergo instantaneous preprocessing and feature extraction at the edge, while an optimized artificial neural network (ANN) performs inference directly on the SoC. The ANN is mathematically expressed as $y = \sigma(W_2 \cdot \sigma(W_1 \cdot x + b_1) + b_2)$, where $\sigma$ denotes the activation function and $W_i$, $b_i$ are trainable parameters. Energy consumption is modeled as $E = C \cdot V^2 \cdot f \cdot t$, where $C$ is capacitance, $V$ is voltage, $f$ is frequency, and $t$ is execution time. Design choices aim to minimize $E$ while maintaining accuracy. This integration ensures minimal latency and energy efficiency, critical for 24/7 monitoring [110]. However, the ANN’s size is constrained by SoC memory, limiting its applicability to low-complexity arrhythmia detection unless hierarchical or modular models are employed.
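For intuition, the sketch below runs a two-layer ANN of this form on a random feature vector and evaluates the dynamic energy model; all weights, capacitance, voltage, frequency, and timing values are placeholders, not figures reported in [109].

```python
import numpy as np

def ann_forward(x, W1, b1, W2, b2):
    """Two-layer ANN y = sigma(W2 @ sigma(W1 @ x + b1) + b2), with sigma chosen as the logistic function."""
    sigma = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sigma(W2 @ sigma(W1 @ x + b1) + b2)

rng = np.random.default_rng(2)
x = rng.normal(size=16)                        # illustrative ECG feature vector
W1, b1 = rng.normal(size=(8, 16)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
outputs = ann_forward(x, W1, b1, W2, b2)

# Dynamic energy model E = C * V^2 * f * t (all hardware values are illustrative).
C, V, f, t = 1e-9, 1.2, 50e6, 2e-3             # farads, volts, hertz, seconds
energy_joules = C * V**2 * f * t
print(np.round(outputs, 3), f"~{energy_joules * 1e6:.1f} uJ per inference window")
```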

4.4. Smart Retail

In recent years, the convergence of advanced mathematical methodologies and AI has catalyzed transformative innovations in smart retail, redefining operational efficiency, customer engagement, and decision-making frameworks. The sector has witnessed a surge in adaptive technologies that integrate machine learning, optimization algorithms, and probabilistic modeling to address complex retail challenges.
In smart retail, the need to process user behavior data is growing rapidly. Traditional cloud-based recommenders rely on centralized data centers, but they have clear drawbacks: potential privacy risks, network latency that affects real-time recommendations, and high resource consumption. When browsing histories and purchase records are uploaded from mobile devices to the cloud, recommendations may be delayed by network congestion, and the centralized storage of sensitive information increases leakage risks. To address efficiency and privacy challenges in distributed data environments, federated and on-device recommendation technologies have become key focuses. In federated recommenders, the non-IID data issue is a major hurdle. Traditional methods like FedAvg use fixed learning rates and global model averaging, struggling to adapt to the diverse preferences of retail users [111]. Data differences between user groups often lead to ineffective parameter updates, increasing communication costs and reducing recommendation accuracy.
To tackle this, Yu et al. [112] proposed the ClusterFedMeta algorithm, which clusters users based on embedding vectors to minimize cross-cluster interference, integrates meta-learning for fast adaptation to local sparse data, and employs a long short-term memory (LSTM) network to dynamically generate personalized learning rates. This layered strategy enhances recommendation accuracy while reducing communication rounds, balancing the generality of global models with the personalization of local models. Mathematically, user clustering is formulated as a k-means problem: $\min_{\{\mu_j\}} \sum_{i=1}^{N} \lVert x_i - \mu_{c(i)} \rVert^2$, where $x_i$ is the embedding of user $i$, $\mu_j$ is the centroid of cluster $j$, and $c(i)$ is the cluster assigned to user $i$. Personalized learning rates are derived from a sequence-to-sequence LSTM model trained to minimize local loss under the constraint of convergence stability. Empirical evaluation shows that ClusterFedMeta achieves a 12.7% reduction in MAE and a 13.7% reduction in RMSE on MovieLens 1M, along with a 26.1% reduction in MAE and a 16.9% reduction in RMSE on Yahoo Music compared to FedAvg.
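The clustering step can be sketched as a plain Lloyd-style k-means over user embeddings, as below; the embeddings, cluster count, and initialization are synthetic stand-ins rather than the actual ClusterFedMeta configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical user embedding vectors (e.g., learned from interaction histories).
X = rng.normal(size=(200, 16))
k = 4
centroids = X[rng.choice(len(X), size=k, replace=False)]

for _ in range(50):
    # Assignment step: each user i goes to its nearest centroid, defining c(i).
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: centroids move to cluster means, monotonically reducing the k-means objective.
    new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    if np.allclose(new_centroids, centroids):
        break
    centroids = new_centroids

objective = ((X - centroids[labels]) ** 2).sum()
print("cluster sizes:", np.bincount(labels, minlength=k), "objective:", round(float(objective), 2))
```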
On-device recommenders target localized deployment on smart terminals such as smartphones and smart shelves. Traditional cloud architectures over-rely on cloud computing power, leading to wasted terminal resources and slow responses. The FedRec framework, as discussed in literature [113,114], avoids raw data upload through local training and encrypted parameter aggregation, enhancing privacy with differential privacy to achieve real-time end-cloud collaborative recommendations while ensuring compliance.
However, challenges such as sparse data and heterogeneous device capabilities in retail remain, as noted in the literature [115], requiring a balance between model compression accuracy and local training efficiency, as well as vigilance against potential malicious attacks on local models.
Unmanned retail systems now deploy diverse algorithms to automate operations. Computer vision methods like YOLO paired with LiDAR or RFID enable real-time product tracking and checkout-free experiences [116]. RL optimizes dynamic pricing and inventory restocking by analyzing sales patterns [117]. Sensor fusion combines weight sensors, thermal imaging, and motion data to detect anomalies like theft or misplaced items [118,119]. Dynamic pricing is modeled as a stochastic optimization problem: $\max_{p_t} \; \mathbb{E}[R_t(p_t)] - \lambda \cdot \mathrm{Var}[R_t(p_t)]$, where $p_t$ is the price at time $t$, $R_t$ is revenue, and $\lambda$ is a risk-aversion coefficient. Policy optimization is performed using Q-learning or actor–critic methods.
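A minimal Monte Carlo sketch of this risk-adjusted objective is given below, assuming a toy price-sensitive Poisson demand model and a simple grid search over candidate prices instead of a learned policy.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_revenue(price, n=5000):
    """Illustrative demand model: Poisson demand whose mean decays linearly with price."""
    demand = rng.poisson(lam=np.maximum(20.0 - 1.5 * price, 0.1), size=n)
    return price * demand

lam_risk = 0.05                                  # risk-aversion coefficient lambda
candidate_prices = np.linspace(1.0, 12.0, 45)

best_price, best_objective = None, -np.inf
for p in candidate_prices:
    revenue = simulate_revenue(p)
    objective = revenue.mean() - lam_risk * revenue.var()   # E[R] - lambda * Var[R]
    if objective > best_objective:
        best_price, best_objective = p, objective

print(f"risk-adjusted price ~ {best_price:.2f}, objective {best_objective:.2f}")
```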
Edge computing and TinyML reduce latency by running lightweight models like MobileNet on local devices for instant decisions [120]. Digital twins simulate store layouts using graph algorithms [121], while federated learning trains AI models across distributed nodes without compromising customer privacy [113]. Graph-based digital twins apply shortest-path or flow optimization algorithms (e.g., Dijkstra, Ford–Fulkerson) to simulate shopper movement, minimizing congestion and optimizing shelf layouts.
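As an illustration of the graph-algorithmic side, the sketch below computes shortest walking distances over a hypothetical store-layout graph with Dijkstra's algorithm; the zone names and edge weights are invented for demonstration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest walking distances from `source` over a weighted store-layout graph."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Hypothetical store graph: nodes are zones, edge weights are aisle walking distances in meters.
store = {
    "entrance": [("produce", 5), ("bakery", 8)],
    "produce":  [("entrance", 5), ("dairy", 4)],
    "bakery":   [("entrance", 8), ("dairy", 6), ("checkout", 7)],
    "dairy":    [("produce", 4), ("bakery", 6), ("checkout", 9)],
    "checkout": [("bakery", 7), ("dairy", 9)],
}
print(dijkstra(store, "entrance"))
```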

4.5. Emerging AI-SDN-Fog Convergence and Streaming Applications

With the increasing convergence of SDN, AI, and fog/cloud/IoT architectures, several critical research topics have emerged. In green cloud multimedia networking, AI-assisted adaptive bitrate control and convex optimization models are used to balance streaming quality with energy efficiency. For AI-based load balancing in SDN, DRL and federated learning models enable dynamic traffic management in real time, improving network utilization and fairness [122].
In streaming applications for SD-IoV, edge caching and RL-based bandwidth allocation enhance video delivery under high mobility and interference. Meanwhile, task scheduling and load balancing in SDN-based Fog-IoT environments leverage graph-theoretic scheduling and multi-objective optimization to minimize task latency and energy consumption [123]. These topics exemplify the continued evolution of mathematical and AI integration in emerging infrastructure paradigms and point to fertile ground for future research on hybrid, adaptive, and sustainable MEC systems.
Fog computing extends the MEC paradigm by introducing an intermediate layer between the edge and cloud, enabling distributed processing closer to data sources. In Fog-IoT environments, task scheduling must consider the heterogeneity and hierarchy of computational resources [124]. SDN enhances this process by providing a centralized controller that dynamically manages task flows and resource allocation across fog and edge nodes. By integrating SDN with mathematical optimization techniques, such as ILP, constraint satisfaction, and queue-based scheduling, task placement decisions can be globally optimized in real time. For instance, SDN controllers can monitor network status and node load to minimize task response time or balance workloads under energy constraints. Queue-based models (e.g., M/M/1, M/G/1) can be embedded into the controller’s scheduling logic to assess delay metrics and ensure QoS compliance. This SDN-based orchestration improves system scalability, reliability, and resource utilization across hierarchical fog-edge architectures, making it an essential component in large-scale, latency-sensitive IoT applications.
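To show how such queue-based logic might be embedded in a controller, the sketch below evaluates standard M/M/1 steady-state delay metrics for an illustrative fog node; the arrival and service rates are example values only.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 metrics; valid only when utilization rho < 1."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    mean_response = 1.0 / (service_rate - arrival_rate)   # W  = 1 / (mu - lambda)
    mean_waiting = rho / (service_rate - arrival_rate)    # Wq = rho / (mu - lambda)
    mean_in_system = rho / (1.0 - rho)                    # L  = lambda * W (Little's law)
    return rho, mean_waiting, mean_response, mean_in_system

# Illustrative fog node: 80 tasks/s arrive, the node serves 100 tasks/s on average.
rho, wq, w, l = mm1_metrics(arrival_rate=80.0, service_rate=100.0)
print(f"utilization={rho:.2f}, queueing delay={wq * 1e3:.1f} ms, "
      f"response time={w * 1e3:.1f} ms, jobs in system={l:.1f}")
```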

4.6. Summary

Table 3 presents a cross-domain synthesis of how artificial intelligence and mathematical methods collaboratively drive advancements across key application fields, including intelligent transportation, manufacturing, smart cities, healthcare, and retail. In each domain, AI contributes adaptability, learning capabilities, and real-time responsiveness, while mathematical modeling reinforces system robustness, optimization rigor, and analytical precision. It should be emphasized that AI techniques, especially those deployed in MEC, are inherently supported by mathematical models and methods. Core AI paradigms such as DL, RL, and FL rely heavily on mathematical constructs including optimization theory, probability, statistics, and queuing models. Their synergistic integration enables the development of intelligent, efficient, and resilient solutions capable of addressing the multifaceted challenges inherent in dynamic and resource-constrained environments. By illustrating how different technological paradigms complement each other, this taxonomy provides insights into emerging trends and highlights the growing necessity of interdisciplinary approaches for next-generation edge-enabled intelligent systems.

5. Integration of Mathematical Methods with Artificial Intelligence and Mobile Edge Computing

5.1. Motivations

With the advent of the era of ubiquitous connectivity, MEC is playing an increasingly important role in smart terminals, IoT, and 5G networks [125,126]. In the face of large-scale, multi-source, and dynamically changing data streams, traditional artificial intelligence methods exhibit significant limitations in efficient modeling, reasoning, and optimal resource allocation [127,128]. Mathematical methods, with their rigorous theoretical frameworks and strong modeling capabilities, provide more precise and interpretable theoretical support for edge computing. However, these methods often fall short when dealing with complex, uncertain, and unstructured data, particularly due to their lack of flexibility and adaptability in real-world application environments.
Therefore, combining AI with mathematical methods to leverage their complementary advantages in edge computing has emerged as a highly promising research direction, as shown in Figure 6. The figure illustrates a hierarchical MEC architecture in which data flow from end devices to the edge and cloud layers, where they are processed using AI models and mathematical techniques to enable intelligent and scalable service delivery. AI excels at efficiently processing large-scale data and enables automatic feature extraction and pattern recognition, while mathematical methods significantly enhance model stability, reliability, and interpretability [129]. The deep integration of the two is expected to build more intelligent, efficient, and robust edge computing systems, which can not only substantially improve service quality but also promote the autonomy and cooperation of smart terminals. Especially in resource-constrained and complex edge computing scenarios, such a hybrid approach offers systematic solutions to critical challenges such as dynamic task scheduling, energy consumption control, and service quality assurance [12].

5.2. Integration Challenges and Opportunities

5.2.1. Heterogeneous Data Integration and Management

The integration and management of heterogeneous data in edge intelligence face significant challenges, rooted in the inherent diversity of edge environments. Mathematical modeling methods and artificial intelligence techniques both face challenges in the processing stage: the former rely on explicit structural assumptions and highly structured data inputs, while the latter depend on unified formats and standardized labels to support effective model training and generalization. Geographically distributed IoT devices and mobile users generate non-IID data streams with spatial and temporal biases, which complicates AI model training. This is especially true in federated learning frameworks, where local updates must be aggregated while balancing privacy and accuracy [130]. Hardware heterogeneity, including CPUs, GPUs, FPGAs, and NPUs, creates compatibility issues for deploying AI models on resource-constrained edge devices. Model compression and quantization, as explored in [131], are therefore necessary to balance performance and efficiency. Dynamic data sources such as sensors in smart cities or autonomous vehicles further exacerbate these challenges. Robust service composition and caching mechanisms are required to manage latency and energy consumption while ensuring QoE [132].
Opportunities lie in advanced AI-driven strategies. FL enhanced with meta-federated learning [133], secure multi-party computation [134], and differential privacy [135] addresses non-IID data and privacy concerns, enabling secure, distributed model aggregation. Collaborative edge–cloud frameworks partition AI models, delegating lightweight tasks to edge devices and offloading complex computations to the cloud [136]. RL and deep RL algorithms optimize dynamic resource allocation in multi-user edge systems. They adapt to heterogeneous data flows and hardware constraints to minimize latency and energy overhead. Nevertheless, the integration and management of heterogeneous data remain a substantial open challenge.

5.2.2. Computational Complexity and Scalability

Mathematical frameworks often require iterative computations that strain resource-constrained edge devices; high-dimensional optimization and stochastic modeling, in particular, demand calculations that exceed the processing limits of edge hardware. For example, the Lyapunov-based resource allocation schemes in [137,138,139] scale quadratically with network size, creating latency bottlenecks. Meanwhile, AI models like deep neural networks require substantial computational power for training and inference, conflicting with edge environments’ energy constraints. A recent survey [140] emphasized model compression techniques to reduce AI overhead. However, these methods often sacrifice the theoretical robustness guaranteed by mathematical frameworks, highlighting a reliability–efficiency gap.
Scalability challenges intensify as MEC systems grow. Centralized optimization algorithms lack adaptability in dynamic edge networks with heterogeneous hardware. Distributed AI approaches, including federated learning, face synchronization delays and communication costs. Hybrid strategies like hierarchical model aggregation aim to improve efficiency but struggle with data inconsistency in decentralized settings. Moreover, the work in [141,142] shows that dimensional complexity remains a barrier. Solving these issues requires co-designed frameworks that integrate sparse mathematical solvers with adaptive AI architectures, enabling scalable deployment without compromising performance.
Scalability in large-scale MEC deployments is a critical challenge due to limited computation and memory at edge nodes. It can be supported by sparse solvers and model compression methods, both of which offer strong mathematical justification [143]. Sparse optimization techniques reduce computational complexity by focusing on non-zero coefficients, with provable recovery guarantees under sparsity assumptions. Meanwhile, model compression techniques, such as pruning, quantization, and low-rank decomposition, mathematically reduce model parameters and inference cost while maintaining acceptable approximation error bounds. These approaches ensure that edge devices can execute intelligent services with bounded latency and minimal resource usage.

5.2.3. Real-Time Processing and Optimization

EC and AI systems face inherent challenges in real-time processing and optimization due to their dynamic, resource-constrained environments. EC requires low-latency data processing for applications, but fluctuating networks and limited computational resources often disrupt timely decision making [144]. AI models demand significant inference time, clashing with real-time needs [145]. Mathematical methods such as stochastic optimization and model predictive control help by dynamically allocating resources and balancing workloads across edge nodes. However, the interaction between distributed edge architectures and AI’s computational complexity creates trade-offs in latency, accuracy, and energy consumption, necessitating adaptive frameworks that maintain performance while responding to real-time constraints.
Integrating mathematical optimization with AI offers opportunities to enhance real-time capabilities in edge environments. Additionally, a two-stage optimization framework of “approximate computation—intelligent decision making” presents a new opportunity. In this framework, mathematical methods are utilized to construct an efficient approximate solution space, which is subsequently integrated with the predictive and decision-making capabilities of AI models. This combination enables the dynamic updating and adjustment of parameters, enhancing overall system adaptability and performance [146]. Lightweight AI models, combined with distributed optimization techniques, enable efficient inference and training on resource-constrained edge devices [147]. These methods improve responsiveness and ensure scalability in large-scale IoT deployments, demonstrating the potential of synergistic solutions that blend mathematical rigor with AI-driven adaptability. Despite recent progress in combining AI and mathematical modeling, several theoretical limitations remain underexplored. In particular, hybrid systems that integrate deep learning with optimization or game-theoretic models often lack formal convergence guarantees, especially under non-convexity or when trained in non-stationary edge environments. Most AI-assisted solvers use approximations or heuristics to guide search and decision making, but these lack rigorous proofs of solution quality or robustness. Additionally, the interaction between learned representations and mathematical constraints may introduce instability or unpredictable behavior, especially in real-time adaptive settings. Developing unified theoretical frameworks that ensure convergence, scalability, and interpretability for hybrid models remains an important open research challenge.
The mathematical rigor introduced in our survey plays a pivotal role in providing theoretical guarantees for AI-based mobile edge systems, particularly under dynamic and resource-constrained environments. Specifically, mathematical modeling techniques such as convex optimization, multi-objective optimization (e.g., NSGA-II), and cooperative learning frameworks (e.g., CL-ADMM) enable us to derive convergence properties, performance bounds, and system stability guarantees. For example, in Section 3.2, we discuss how optimization strategies grounded in linear and nonlinear programming provide closed-form or bounded solutions for resource allocation and task scheduling [148]. Moreover, stochastic models and queuing theory in Section 3.3 and Section 3.4 allow us to analyze latency distributions and system bottlenecks analytically, which is critical for ensuring reliable operation under uncertainty. These theoretical tools complement the adaptive capabilities of AI by bounding their behavior, enabling predictable and interpretable outcomes even as network conditions evolve.
To meet stringent latency and reliability demands in real-time edge computing scenarios, QoS-aware SD-IoT frameworks have emerged as a promising solution. These architectures combine the programmability of SDN with IoT infrastructures to enforce dynamic QoS policies [42]. By monitoring traffic conditions and application requirements through a centralized controller, SD-IoT enables intelligent scheduling, congestion control, and bandwidth allocation. QoS-aware SD-IoT frameworks integrate effectively with mathematical methods, particularly queuing theory and priority-based scheduling algorithms, to model and manage network congestion under varying load conditions [43]. For instance, M/G/1 and G/G/m queue models can be used to derive delay bounds and assess service differentiation strategies. Such integration ensures that critical tasks (e.g., health alerts or vehicle collision warnings) receive low-latency paths and prioritized computation resources, while non-critical traffic is gracefully degraded. This hybrid approach enhances real-time responsiveness and system scalability, providing robust support for mission-critical AI-MEC applications.
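As an example of how such delay bounds can be computed, the sketch below applies the Pollaczek–Khinchine formula for the mean waiting time of an M/G/1 queue; the arrival rate and service-time moments are illustrative numbers, not measurements from an SD-IoT deployment.

```python
def mg1_waiting_time(arrival_rate, mean_service, second_moment_service):
    """Pollaczek-Khinchine mean waiting time W_q = lambda * E[S^2] / (2 * (1 - rho)) for an M/G/1 queue."""
    rho = arrival_rate * mean_service
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization must be below 1")
    return arrival_rate * second_moment_service / (2.0 * (1.0 - rho))

# Illustrative edge link: 50 packets/s, mean service time 10 ms, highly variable service (E[S^2] = 4e-4 s^2).
wq = mg1_waiting_time(arrival_rate=50.0, mean_service=0.010, second_moment_service=4e-4)
print(f"mean queueing delay ~ {wq * 1e3:.1f} ms")
```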

5.2.4. Reconciling Trade-Offs Between AI and Mathematical Methods

While AI and mathematical methods offer powerful synergies in MEC, their integration is not without inherent conflicts [149]. AI models, particularly deep learning systems, typically require large datasets and exhibit black-box behavior, whereas mathematical models operate under structured assumptions and provide formal guarantees—often with limited data. This contrast leads to trade-offs in terms of data dependency vs. analytical rigor, adaptability vs. explainability, and optimization generality vs. convergence guarantees [150].
To reconcile these differences, several hybrid strategies have emerged: (1) Physics-informed neural networks (PINNs) embed differential equations or constraints into the loss function, blending domain theory with learning flexibility; (2) Model-constrained optimization frameworks restrict AI model search spaces using known mathematical rules; (3) Neuro-symbolic integration combines logical reasoning with sub-symbolic learning for interpretable AI; and (4) Meta-learning with structured priors adapts AI models under predefined optimization frameworks to enhance generalization and stability. These approaches point to a growing trend toward “gray-box” modeling, where theory guides learning to achieve both robustness and efficiency in edge computing systems.
Table 4 systematically categorizes the major integration challenges and opportunities in edge intelligence by examining heterogeneous data management, computational scalability, and real-time processing constraints. Heterogeneous data integration remains a foundational challenge, where device compatibility, non-IID data aggregation, and communication latency must be jointly optimized through edge collaboration and dynamic API provisioning. Computational complexity and scalability issues, including computational delay, model overhead, and communication bottlenecks, highlight the tension between resource-constrained edge devices and the demands of increasingly sophisticated AI models. Real-time processing poses additional challenges in maintaining low latency and rapid adaptive updates, necessitating lightweight architectures and distributed coordination strategies. By mapping these challenges against solution strategies such as handoff mechanisms, edge collaboration, API support, and client-side modifications, the taxonomy offers a detailed perspective on the current landscape and reveals key avenues for future system-level optimization in intelligent edge environments.

5.3. Future Directions

The future integration of mathematical methods, artificial intelligence, and MEC will be shaped by emerging technologies and interdisciplinary collaboration. To address the challenges in edge intelligence, future research should align with three key areas: (1) heterogeneous data integration and management, (2) computational complexity and scalability, and (3) real-time processing and optimization. Emerging technologies such as quantum computing, AIGC, neuro-symbolic AI, digital twins, and 6G offer promising solutions in these domains. While emerging approaches such as topological data analysis (TDA) and algebraic modeling hold promise, they have not yet reached the same level of empirical deployment or integration in existing MEC systems. We highlight these as potential future research directions deserving deeper exploration.
Future edge systems must effectively integrate diverse data sources. Digital twin technology offers a unified virtual environment to fuse multi-source data in real time, enabling consistent representations across devices and networks [151]. Combined with 6G semantic communication, which transmits task-relevant information with ultra-low latency, such systems can overcome data inconsistency issues [152]. AI advancements also aid in heterogeneous data fusion. Neuro-symbolic AI bridges low-level data and high-level knowledge via logic-guided learning, improving interpretability and integration [153]. AIGC models, such as diffusion and large generative models, can synthesize missing or rare modalities, filling data gaps and enhancing model robustness [154]. These methods together provide a scalable framework for managing diverse edge data.
Edge devices face limitations in handling complex models. Quantum-inspired algorithms offer faster solutions to optimization and scheduling problems, and hybrid quantum-classical frameworks show potential in edge deployment [155]. In parallel, 6G facilitates edge-cloud collaboration, enabling the dynamic partitioning of heavy models across devices with minimal communication overhead [156]. Algorithm-level optimizations are equally vital. Model compression and AutoML can generate lightweight models suited for specific edge conditions [152]. Physics-informed neural networks (PINNs) embed domain knowledge, reducing training needs and computational cost [157]. Distributed scheduling and stochastic optimization also ensure scalable resource management in large-scale deployments [151].
Real-time responsiveness is essential in edge intelligence. The 6G URLLC provides the infrastructure for ultra-low-latency communication and supports massive device access, ensuring consistent response times [156]. Digital twins enhance real-time monitoring and predictive control, allowing proactive decision making [151]. Adaptive learning methods like reinforcement learning and online optimization enable systems to adjust in real time. Distributed AIGC frameworks can orchestrate generative tasks across edge nodes to meet latency constraints [154]. Neuro-symbolic reasoning adds interpretability and quick rule-based decisions, especially in safety-critical scenarios.
Future research could focus on balancing privacy preservation and computational efficiency through approaches such as federated learning with model compression, differential privacy-aware task offloading strategies, and lightweight privacy-preserving multi-agent reinforcement learning frameworks. Topological data analysis may enhance edge network monitoring, and novel paradigms like hyperdimensional computing could underpin low-power edge intelligence. Ultimately, the synergy between mathematical theory and engineering practices is expected to advance adaptive and energy-efficient edge AI systems, yet their scalability relies on overcoming algorithmic generality and cross-layer interoperability challenges.

6. Conclusions

The convergence of mathematical methods, AI, and MEC represents a transformative paradigm for building intelligent, efficient, and scalable edge systems. In this paper, we conducted a comprehensive review of this interdisciplinary domain by surveying over 150 academic and industrial studies that explore the application of mathematical foundations in AI-driven MEC scenarios. We developed a structured taxonomy that links mathematical methods to core MEC challenges, revealing how mathematical foundations enhance robustness, interpretability, and system efficiency. To provide actionable insights, we classified these challenges into three principal dimensions: heterogeneous data integration, real-time optimization, and computational scalability. This mapping offers a systematic guide for system-level optimization in edge intelligence. We further illustrated how the fusion of AI and mathematical modeling enables efficient decision making under resource constraints, and supports the deployment of scalable, adaptive, and interpretable edge applications. Finally, we highlighted open issues and future research directions, including hybrid learning-model frameworks, privacy-preserving mathematical designs, and scalable optimization algorithms tailored to dynamic edge environments. These directions are at the forefront of current research and are expected to remain pivotal in advancing AI-driven MEC systems.

Author Contributions

Conceptualization, Y.L. and X.F.; methodology, Y.L. and X.F.; formal analysis, Y.L., X.B., R.S. and Z.H.; investigation, R.S., Z.H. and Y.W.; data curation, Y.W. and J.X.; writing—original draft preparation, Y.L. and X.B.; writing—review and editing, X.F. and Y.Z.; supervision, X.F. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “Pioneer” and “Leading Goose” R&D Program of Zhejiang (Grant No. 2025C01054).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI    Artificial Intelligence
MEC    Mobile Edge Computing
ITSs    Intelligent Transportation Systems
IoT    Internet of Things
ML    Machine Learning
SL    Supervised Learning
UL    Unsupervised Learning
SSL    Semi-Supervised Learning
DL    Deep Learning
RL    Reinforcement Learning
FL    Federated Learning
CNN    Convolutional Neural Network
ANN    Artificial Neural Network
EC    Edge Computing
URLLC    Ultra-Reliable Low-Latency Communication
PINN    Physics-Informed Neural Network
DQN    Deep Q-Network
QoE    Quality of Experience
QoS    Quality of Service
ISCC    Integrated Sensing–Communication–Computation
VCP    Vehicular Cooperative Perception
SoC    System on Chip
CDN    Content Delivery Network
AR/VR    Augmented Reality/Virtual Reality
DBN    Deep Belief Network
LSTM    Long Short-Term Memory
AIGC    AI-Generated Content
M3FM    Multimodal Multidomain Multilingual Foundation Model
GNN    Graph Neural Network
RoI    Region of Interest
TDA    Topological Data Analysis
SDN    Software-Defined Networking
SD-IoV    Software-Defined Internet of Vehicles
P2P    Peer to Peer

References

  1. Wang, T.; Liang, Y.; Jia, W.; Arif, M.; Liu, A.; Xie, M. Coupling resource management based on fog computing in smart city systems. J. Netw. Comput. Appl. 2019, 135, 11–19. [Google Scholar] [CrossRef]
  2. Maathuis, C.; Cidota, M.A.; Datcu, D.; Marin, L. Integrating Explainable Artificial Intelligence in Extended Reality Environments: A Systematic Survey. Mathematics 2025, 13, 290. [Google Scholar] [CrossRef]
  3. Liang, Y.; Li, G.; Guo, J.; Liu, Q.; Zheng, X.; Wang, T. Efficient Request Scheduling in Cross-Regional Edge Collaboration via Digital Twin Networks. In Proceedings of the 2024 IEEE/ACM 32nd International Symposium on Quality of Service (IWQoS), Guangzhou, China, 19–21 June 2024; pp. 1–6. [Google Scholar]
  4. Zhang, L.; Hua, L. Major Issues in High-Frequency Financial Data Analysis: A Survey of Solutions. Mathematics 2025, 13, 347. [Google Scholar] [CrossRef]
  5. Wang, T.; Liang, Y.; Shen, X.; Zheng, X.; Mahmood, A.; Sheng, Q.Z. Edge computing and sensor-cloud: Overview, solutions, and directions. ACM Comput. Surv. 2023, 55, 1–37. [Google Scholar] [CrossRef]
  6. Zhou, H.; Jiang, K.; He, S.; Min, G.; Wu, J. Distributed deep multi-agent reinforcement learning for cooperative edge caching in internet-of-vehicles. IEEE Trans. Wirel. Commun. 2023, 22, 9595–9609. [Google Scholar] [CrossRef]
  7. Surianarayanan, C.; Lawrence, J.J.; Chelliah, P.R.; Prakash, E.; Hewage, C. A survey on optimization techniques for edge artificial intelligence (AI). Sensors 2023, 23, 1279. [Google Scholar] [CrossRef]
  8. Wang, X.; Han, Y.; Wang, C.; Zhao, Q.; Chen, X.; Chen, M. In-edge ai: Intelligentizing mobile edge computing, caching and communication by federated learning. IEEE Netw. 2019, 33, 156–165. [Google Scholar] [CrossRef]
  9. Deng, S.; Zhao, H.; Fang, W.; Yin, J.; Dustdar, S.; Zomaya, A.Y. Edge intelligence: The confluence of edge computing and artificial intelligence. IEEE Internet Things J. 2020, 7, 7457–7469. [Google Scholar] [CrossRef]
  10. Chang, Z.; Liu, S.; Xiong, X.; Cai, Z.; Tu, G. A survey of recent advances in edge-computing-powered artificial intelligence of things. IEEE Internet Things J. 2021, 8, 13849–13875. [Google Scholar] [CrossRef]
  11. Cao, K.; Liu, Y.; Meng, G.; Sun, Q. An overview on edge computing research. IEEE Access 2020, 8, 85714–85728. [Google Scholar] [CrossRef]
  12. Grzesik, P.; Mrozek, D. Combining machine learning and edge computing: Opportunities, challenges, platforms, frameworks, and use cases. Electronics 2024, 13, 640. [Google Scholar] [CrossRef]
  13. George, A.S.; George, A.H.; Baskar, T. Edge computing and the future of cloud computing: A survey of industry perspectives and predictions. Partners Univers. Int. Res. J. 2023, 2, 19–44. [Google Scholar]
  14. Sharma, M.; Tomar, A.; Hazra, A. Edge computing for industry 5.0: Fundamental, applications and research challenges. IEEE Internet Things J. 2024, 11, 19070–19093. [Google Scholar] [CrossRef]
  15. Pandya, S.; Srivastava, G.; Jhaveri, R.; Babu, M.R.; Bhattacharya, S.; Maddikunta, P.K.R.; Mastorakis, S.; Piran, M.J.; Gadekallu, T.R. Federated learning for smart cities: A comprehensive survey. Sustain. Energy Technol. Assess. 2023, 55, 102987. [Google Scholar] [CrossRef]
  16. Hartmann, M.; Hashmi, U.S.; Imran, A. Edge computing in smart health care systems: Review, challenges, and research directions. Trans. Emerg. Telecommun. Technol. 2022, 33, e3710. [Google Scholar] [CrossRef]
  17. Zhang, X.; Cao, Z.; Dong, W. Overview of edge computing in the agricultural internet of things: Key technologies, applications, challenges. IEEE Access 2020, 8, 141748–141761. [Google Scholar] [CrossRef]
  18. Cao, L. Artificial intelligence in retail: Applications and value creation logics. Int. J. Retail. Distrib. Manag. 2021, 49, 958–976. [Google Scholar] [CrossRef]
  19. Lu, Y. Artificial intelligence: A survey on evolution, models, applications and future trends. J. Manag. Anal. 2019, 6, 1–29. [Google Scholar] [CrossRef]
  20. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach; Pearson: London, UK, 2021. [Google Scholar]
  21. Zhang, X.Y.; Liu, C.L.; Suen, C.Y. Towards robust pattern recognition: A review. Proc. IEEE 2020, 108, 894–922. [Google Scholar] [CrossRef]
  22. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef]
  23. Zhong, Y.; Chen, L.; Dan, C.; Rezaeipanah, A. A systematic survey of data mining and big data analysis in internet of things. J. Supercomput. 2022, 78, 18405–18453. [Google Scholar] [CrossRef]
  24. DeWitt, D.; Gray, J. Parallel database systems: The future of high performance database systems. Commun. ACM 1992, 35, 85–98. [Google Scholar] [CrossRef]
  25. Nti, I.K.; Quarcoo, J.A.; Aning, J.; Fosu, G.K. A mini-review of machine learning in big data analytics: Applications, challenges, and prospects. Big Data Min. Anal. 2022, 5, 81–97. [Google Scholar] [CrossRef]
  26. Nilsson, N.J. Principles of Artificial Intelligence; Morgan Kaufmann: San Francisco, CA, USA, 2014. [Google Scholar]
  27. Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. 2021, 31, 685–695. [Google Scholar] [CrossRef]
  28. Yves Kodratoff, R.S.M. Machine Learning: An Artificial Intelligence Approach; Morgan Kaufmann: San Francisco, CA, USA, 1990; Volume 3. [Google Scholar]
  29. Maulud, D.; Abdulazeez, A.M. A review on linear regression comprehensive in machine learning. J. Appl. Sci. Technol. Trends 2020, 1, 140–147. [Google Scholar] [CrossRef]
  30. Cervantes, J.; Garcia-Lamont, F.; Rodríguez-Mazahua, L.; Lopez, A. A comprehensive survey on support vector machine classification: Applications, challenges and trends. Neurocomputing 2020, 408, 189–215. [Google Scholar] [CrossRef]
  31. Costa, V.G.; Pedreira, C.E. Recent advances in decision trees: An updated survey. Artif. Intell. Rev. 2023, 56, 4765–4800. [Google Scholar] [CrossRef]
  32. Samek, W.; Montavon, G.; Lapuschkin, S.; Anders, C.J.; Müller, K.R. Explaining deep neural networks and beyond: A review of methods and applications. Proc. IEEE 2021, 109, 247–278. [Google Scholar] [CrossRef]
  33. Yang, X.; Song, Z.; King, I.; Xu, Z. A survey on deep semi-supervised learning. IEEE Trans. Knowl. Data Eng. 2022, 35, 8934–8954. [Google Scholar] [CrossRef]
  34. Wang, X.; Han, Y.; Leung, V.C.; Niyato, D.; Yan, X.; Chen, X. Convergence of edge computing and deep learning: A comprehensive survey. IEEE Commun. Surv. Tutor. 2020, 22, 869–904. [Google Scholar] [CrossRef]
  35. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
  36. Ning, Z.; Hu, H.; Wang, X.; Guo, L.; Guo, S.; Wang, G.; Gao, X. Mobile edge computing and machine learning in the internet of unmanned aerial vehicles: A survey. ACM Comput. Surv. 2023, 56, 1–31. [Google Scholar] [CrossRef]
  37. Jararweh, Y.; Otoum, S.; Al Ridhawi, I. Trustworthy and sustainable smart city services at the edge. Sustain. Cities Soc. 2020, 62, 102394. [Google Scholar] [CrossRef]
  38. Zhou, H.; Li, M.; Sun, P.; Guo, B.; Yu, Z. Accelerating federated learning via parameter selection and pre-synchronization in mobile edge-cloud networks. IEEE Trans. Mob. Comput. 2024, 23, 10313–10328. [Google Scholar] [CrossRef]
  39. Banabilah, S.; Aloqaily, M.; Alsayed, E.; Malik, N.; Jararweh, Y. Federated learning review: Fundamentals, enabling technologies, and future applications. Inf. Process. Manag. 2022, 59, 103061. [Google Scholar] [CrossRef]
  40. Zou, H.; Guo, J.; Zeng, J.; Li, Y.; Cao, J.; Wang, T. Fine-Grained Service Lifetime Optimization for Energy-Constrained Edge-Edge Collaboration. In Proceedings of the 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS), Jersey City, NJ, USA, 23–26 July 2024; pp. 565–576. [Google Scholar]
  41. Xu, C.; Guo, J.; Li, Y.; Zou, H.; Jia, W.; Wang, T. Dynamic parallel multi-server selection and allocation in collaborative edge computing. IEEE Trans. Mob. Comput. 2024, 23, 10523–10537. [Google Scholar] [CrossRef]
  42. Bhayo, J.; Shah, S.A.; Hameed, S.; Ahmed, A.; Nasir, J.; Draheim, D. Towards a machine learning-based framework for DDOS attack detection in software-defined IoT (SD-IoT) networks. Eng. Appl. Artif. Intell. 2023, 123, 106432. [Google Scholar] [CrossRef]
  43. Ali, J.; Roh, B.H. A novel scheme for controller selection in software-defined internet-of-things (SD-IoT). Sensors 2022, 22, 3591. [Google Scholar] [CrossRef]
  44. Liang, Y.; Li, G.; Zhang, G.; Guo, J.; Liu, Q.; Zheng, J.; Wang, T. Latency Reduction in Immersive Systems through Request Scheduling with Digital Twin Networks in Collaborative Edge Computing. ACM Trans. Sens. Netw. 2024. [Google Scholar] [CrossRef]
  45. Sifeng, Z.; Haowei, D.; Yaxing, Y.; Hao, C.; Hai, Z. Improved NSGA-II algorithm-based task offloading decision in the internet of vehicles edge computing scenario. Multimed. Syst. 2025, 31, 1–14. [Google Scholar] [CrossRef]
  46. Zhong, X.; Wang, X.; Li, L.; Yang, Y.; Qin, Y.; Yang, T.; Zhang, B.; Zhang, W. CL-ADMM: A cooperative-learning-based optimization framework for resource management in MEC. IEEE Internet Things J. 2020, 8, 8191–8209. [Google Scholar] [CrossRef]
  47. De Curtò, J.; de Zarzà, I.; Roig, G.; Cano, J.C.; Manzoni, P.; Calafate, C.T. Llm-informed multi-armed bandit strategies for non-stationary environments. Electronics 2023, 12, 2814. [Google Scholar] [CrossRef]
  48. Ferry, J.; Aivodji, U.; Gambs, S.; Huguet, M.J.; Siala, M. Improving fairness generalization through a sample-robust optimization method. Mach. Learn. 2023, 112, 2131–2192. [Google Scholar] [CrossRef]
  49. Liang, Y.; Yin, M.; Zhang, Y.; Wang, W.; Jia, W.; Wang, T. Grouping reduces energy cost in directionally rechargeable wireless vehicular and sensor networks. IEEE Trans. Veh. Technol. 2023, 72, 10840–10851. [Google Scholar] [CrossRef]
  50. Younis, A.; Maheshwari, S.; Pompili, D. Energy-latency computation offloading and approximate computing in mobile-edge computing networks. IEEE Trans. Netw. Serv. Manag. 2024, 21, 3401–3415. [Google Scholar] [CrossRef]
  51. Tang, X.; Zhang, H.; Zhang, R.; Zhou, D.; Zhang, Y.; Han, Z. Robust trajectory and offloading for energy-efficient UAV edge computing in industrial Internet of Things. IEEE Trans. Ind. Inform. 2023, 20, 38–49. [Google Scholar] [CrossRef]
  52. Zhu, J.; Wang, X.; Huang, H.; Cheng, S.; Wu, M. A NSGA-II algorithm for task scheduling in UAV-enabled MEC system. IEEE Trans. Intell. Transp. Syst. 2021, 23, 9414–9429. [Google Scholar] [CrossRef]
  53. Lu, K.; Wang, H.; Zhang, H.; Wang, L. Convergence in high probability of distributed stochastic gradient descent algorithms. IEEE Trans. Autom. Control 2023, 69, 2189–2204. [Google Scholar] [CrossRef]
  54. Meng, X.; Li, Q.; Zhang, G.; Chen, W. Efficient multidimensional dynamic programming-based energy management strategy for global composite operating cost minimization for fuel cell trams. IEEE Trans. Transp. Electrif. 2021, 8, 1807–1818. [Google Scholar] [CrossRef]
  55. Wang, H.; Li, Y.; Jin, D.; Han, Z. Attentional Markov model for human mobility prediction. IEEE J. Sel. Areas Commun. 2021, 39, 2213–2225. [Google Scholar] [CrossRef]
  56. Li, T.; Zhou, Y.; Zhao, Y.; Zhang, C.; Zhang, X. A hierarchical object oriented Bayesian network-based fault diagnosis method for building energy systems. Appl. Energy 2022, 306, 118088. [Google Scholar] [CrossRef]
  57. Blair, L.; Varela, C.A.; Patterson, S. A continuum approach for collaborative task processing in UAV MEC networks. In Proceedings of the 2022 IEEE 15th International Conference on Cloud Computing (CLOUD), Barcelona, Spain, 11–15 July 2022; pp. 247–256. [Google Scholar]
  58. Meydani, A.; Shahinzadeh, H.; Ramezani, A.; Moazzami, M.; Nafisi, H.; Askarian-Abyaneh, H. Comprehensive review of artificial intelligence applications in smart grid operations. In Proceedings of the 2024 9th International Conference on Technology and Energy Management (ICTEM), Behshahr, Iran, 14–15 February 2024; pp. 1–13. [Google Scholar]
  59. Maina, S.C.; Mwigereri, D.; Weyn, J.; Mackey, L.; Ochieng, M. Evaluation of Dependency Structure for Multivariate Weather Predictors Using Copulas. ACM J. Comput. Sustain. Soc. 2023, 1, 1–23. [Google Scholar] [CrossRef]
  60. Ahmed, J.; Green, R.C., II. Leveraging survival analysis in cost-aware deepnet for efficient hard drive failure prediction. Neural Comput. Appl. 2025, 37, 1089–1104. [Google Scholar] [CrossRef]
  61. Xu, Y.; Wang, L.; Zhang, M. Priority-aware task scheduling using M/G/1 queueing model in edge computing. IEEE Trans. Parallel Distrib. Syst. 2021, 32, 1432–1445. [Google Scholar]
  62. Zhang, Q.; Gui, L.; Sun, Y. Edge-cloud collaboration architecture based on queuing networks for latency reduction. IEEE Trans. Cloud Comput. 2020, 8, 1205–1218. [Google Scholar]
  63. Wang, T.; Li, X.; Niyato, D. Stochastic Petri net modeling for resource contention analysis in multi-hop edge networks. IEEE Internet Things J. 2022, 9, 10123–10137. [Google Scholar]
  64. Lee, H.; Kim, J.; Park, S. G/G/m queue-based load balancing for heterogeneous edge servers. IEEE Trans. Serv. Comput. 2019, 12, 742–755. [Google Scholar]
  65. Guo, S.; Li, Q.; Wang, J. Fluid flow modeling for large-scale traffic prediction in edge networks. IEEE Trans. Netw. Sci. Eng. 2021, 8, 2100–2113. [Google Scholar]
  66. Niyato, D.; Wang, P.; Kim, D. Queueing game theory for mitigating congestion in selfish edge computing. IEEE Trans. Mob. Comput. 2020, 19, 643–658. [Google Scholar]
  67. Cheng, S.; Ren, T.; Zhang, H.; Huang, J.; Liu, J. A Stackelberg Game Based Framework for Edge Pricing and Resource Allocation in Mobile Edge Computing. IEEE Internet Things J. 2024, 11, 20514–20530. [Google Scholar] [CrossRef]
  68. Tao, M.; Ota, K.; Dong, M.; Yuan, H. Stackelberg game-based pricing and offloading in mobile edge computing. IEEE Wirel. Commun. Lett. 2021, 11, 883–887. [Google Scholar] [CrossRef]
  69. Abou El Houda, Z.; Brik, B.; Ksentini, A.; Khoukhi, L.; Guizani, M. When federated learning meets game theory: A cooperative framework to secure IIoT applications on edge computing. IEEE Trans. Ind. Inform. 2022, 18, 7988–7997. [Google Scholar] [CrossRef]
  70. Zhang, D.; Chen, C.; Cui, Y.; Zhang, T. New method of energy efficient subcarrier allocation based on evolutionary game theory. Mob. Netw. Appl. 2021, 26, 523–536. [Google Scholar] [CrossRef]
  71. Qiu, H.; Zhu, K.; Luong, N.C.; Yi, C.; Niyato, D.; Kim, D.I. Applications of auction and mechanism design in edge computing: A survey. IEEE Trans. Cogn. Commun. Netw. 2022, 8, 1034–1058. [Google Scholar] [CrossRef]
  72. Li, L.; Yu, X.; Cai, X.; He, X.; Liu, Y. Contract-theory-based incentive mechanism for federated learning in health crowdsensing. IEEE Internet Things J. 2022, 10, 4475–4489. [Google Scholar] [CrossRef]
  73. Yadav, S.; Jat, S.C. A Minimum Spanning Tree-based Energy Efficient Cluster Head Election in WSN. Turk. J. Comput. Math. Educ. 2021, 12, 3065–3073. [Google Scholar]
  74. Zhao, X.; Liang, J.; Wang, J. A community detection algorithm based on graph compression for large-scale social networks. Inf. Sci. 2021, 551, 358–372. [Google Scholar] [CrossRef]
  75. Alex, S.A.; Singh, N.; Adhikari, M. Digital Twin-based Dynamic Resource Provisioning Using Deep Q-Network on 6G-enabled Mobile Edge Networks. In Proceedings of the 2025 17th International Conference on COMmunication Systems and NETworks (COMSNETS), Bengaluru, India, 6–10 January 2025; pp. 774–781. [Google Scholar]
  76. Wu, D.; Li, Z.; Shi, H.; Luo, P.; Ma, Y.; Liu, K. Multi-Dimensional Optimization for Collaborative Task Scheduling in Cloud-Edge-End System. Simul. Model. Pract. Theory 2025, 141, 103099. [Google Scholar] [CrossRef]
  77. Alzaben, N.; Engels, D.W. End-to-end routing in sdn controllers using max-flow min-cut route selection algorithm. In Proceedings of the 2021 23rd International Conference on Advanced Communication Technology (ICACT), PyeongChang, Republic of Korea, 7–10 February 2021; pp. 461–467. [Google Scholar]
  78. Gong, T.; Zhu, L.; Yu, F.R.; Tang, T. Edge intelligence in intelligent transportation systems: A survey. IEEE Trans. Intell. Transp. Syst. 2023, 24, 8919–8944. [Google Scholar] [CrossRef]
  79. Liang, Y.; Wang, W.; Zheng, X.; Liu, Q.; Wang, L.; Wang, T. Collaborative edge service placement for maximizing qos with distributed data cleaning. In Proceedings of the 2023 IEEE/ACM 31st International Symposium on Quality of Service (IWQoS), Orlando, FL, USA, 19–21 June 2023; pp. 1–4. [Google Scholar]
  80. Song, W.; Rajak, S.; Dang, S.; Liu, R.; Li, J.; Chinnadurai, S. Deep learning enabled IRS for 6G intelligent transportation systems: A comprehensive study. IEEE Trans. Intell. Transp. Syst. 2022, 24, 12973–12990. [Google Scholar] [CrossRef]
  81. Hong, Z.; Lin, Q.; Hu, B. Knowledge distillation-based edge-decision hierarchies for interactive behavior-aware planning in autonomous driving system. IEEE Trans. Intell. Transp. Syst. 2024, 25, 11040–11057. [Google Scholar] [CrossRef]
  82. Abdel-Aziz, M.K.; Perfecto, C.; Samarakoon, S.; Bennis, M.; Saad, W. Vehicular cooperative perception through action branching and federated reinforcement learning. IEEE Trans. Commun. 2021, 70, 891–903. [Google Scholar] [CrossRef]
  83. Sakr, A.H. Evaluation of Redundancy Mitigation Rules in V2X Networks for Enhanced Collective Perception Services. IEEE Access 2024, 12, 137696–137711. [Google Scholar] [CrossRef]
  84. Xiao, Z.; Shu, J.; Jiang, H.; Min, G.; Chen, H.; Han, Z. Perception task offloading with collaborative computation for autonomous driving. IEEE J. Sel. Areas Commun. 2022, 41, 457–473. [Google Scholar] [CrossRef]
  85. Zaki, A.M.; Elsayed, S.A.; Elgazzar, K.; Hassanein, H.S. Quality-Aware Task Offloading for Cooperative Perception in Vehicular Edge Computing. IEEE Trans. Veh. Technol. 2024, 73, 18320–18332. [Google Scholar] [CrossRef]
  86. Dong, M.; Fu, Y.; Li, C.; Tian, M.; Yu, F.R.; Cheng, N. Task Offloading and Resource Allocation in Vehicular Cooperative Perception with Integrated Sensing, Communication, and Computation. IEEE Trans. Intell. Transp. Syst. 2025. [Google Scholar] [CrossRef]
  87. Zhang, T.; Xu, C.; Zou, P.; Tian, H.; Kuang, X.; Yang, S.; Zhong, L.; Niyato, D. How to mitigate DDoS intelligently in SD-IoV: A moving target defense approach. IEEE Trans. Ind. Inform. 2022, 19, 1097–1106. [Google Scholar] [CrossRef]
  88. Zou, H.; Li, Y.; Chu, X.; Xu, C.; Wang, T. Improving Fairness in Coexisting 5G and Wi-Fi Network on Unlicensed Band with URLLC. In Proceedings of the 2023 IEEE/ACM 31st International Symposium on Quality of Service (IWQoS), Orlando, FL, USA, 19–21 June 2023; pp. 1–10. [Google Scholar]
  89. Al Sharif, R.; Pokharel, S. Smart city dimensions and associated risks: Review of literature. Sustain. Cities Soc. 2022, 77, 103542. [Google Scholar] [CrossRef]
  90. Li, M.; Mour, N.; Smith, L. Machine learning based on reinforcement learning for smart grids: Predictive analytics in renewable energy management. Sustain. Cities Soc. 2024, 109, 105510. [Google Scholar] [CrossRef]
  91. Hua, H.; Qin, Y.; Hao, C.; Cao, J. Optimal energy management strategies for energy Internet via deep reinforcement learning approach. Appl. Energy 2019, 239, 598–609. [Google Scholar] [CrossRef]
  92. Wang, X.; Tang, Z.; Guo, J.; Meng, T.; Wang, C.; Wang, T.; Jia, W. Empowering Edge Intelligence: A Comprehensive Survey on On-Device AI Models. ACM Comput. Surv. 2025, 57, 1–39. [Google Scholar] [CrossRef]
  93. Lin, C.; Hu, X. Efficient crowd density estimation with edge intelligence via structural reparameterization and knowledge transfer. Appl. Soft Comput. 2024, 154, 111366. [Google Scholar] [CrossRef]
  94. Cicioğlu, M.; Çalhan, A. A multiprotocol controller deployment in SDN-based IoMT architecture. IEEE Internet Things J. 2022, 9, 20833–20840. [Google Scholar] [CrossRef]
  95. Huang, H.; Meng, T.; Guo, J.; Wei, X.; Jia, W. SecEG: A secure and efficient strategy against DDoS attacks in mobile edge computing. ACM Trans. Sens. Netw. 2024, 20, 1–21. [Google Scholar] [CrossRef]
  96. Dang, V.A.; Vu Khanh, Q.; Nguyen, V.H.; Nguyen, T.; Nguyen, D.C. Intelligent healthcare: Integration of emerging technologies and Internet of Things for humanity. Sensors 2023, 23, 4200. [Google Scholar] [CrossRef]
  97. Sabireen, H.; Neelanarayanan, V. A review on fog computing: Architecture, fog with IoT, algorithms and research challenges. ICT Express 2021, 7, 162–176. [Google Scholar]
  98. Al-Shareeda, M.; Hergast, D.; Manickam, S. Review of Intelligent Healthcare for the Internet of Things: Challenges, Techniques and Future Directions. J. Sens. Netw. Data Commun. 2024, 4, 1–10. [Google Scholar]
  99. Rahman, A.; Debnath, T.; Kundu, D.; Khan, M.S.I.; Aishi, A.A.; Sazzad, S.; Sayduzzaman, M.; Band, S.S. Machine learning and deep learning-based approach in smart healthcare: Recent advances, applications, challenges and opportunities. AIMS Public Health 2024, 11, 58. [Google Scholar] [CrossRef]
  100. Liu, F.; Li, Z.; Yin, Q.; Huang, J.; Luo, J.; Thakur, A.; Branson, K.; Schwab, P.; Yin, B.; Wu, X.; et al. A multimodal multidomain multilingual medical foundation model for zero shot clinical diagnosis. npj Digit. Med. 2025, 8, 86. [Google Scholar] [CrossRef]
  101. Yaqoob Akbar, M.I. Leveraging Edge AI and IoT for Remote Healthcare: A Novel Framework for Real-Time Diagnosis and Disease Prediction. Available online: https://www.researchgate.net/publication/388641638_Leveraging_Edge_AI_and_IoT_for_Remote_Healthcare_A_Novel_Framework_for_Real-Time_Diagnosis_and_Disease_Prediction (accessed on 1 February 2025).
  102. Mohsin, S.S.; Salman, O.H.; Jasim, A.A.; Alwindawi, H.; Abdalkareem, Z.A.; Salman, O.S.; Kairaldeen, A.R. AI-Powered IoMT Framework for Remote Triage and Diagnosis in Telemedicine Applications. Al-Iraqia J. Sci. Eng. Res. 2025, 4, 61–76. [Google Scholar]
  103. Islam, M.R.; Kabir, M.M.; Mridha, M.F.; Alfarhood, S.; Safran, M.; Che, D. Deep learning-based IoT system for remote monitoring and early detection of health issues in real-time. Sensors 2023, 23, 5204. [Google Scholar] [CrossRef]
  104. Kolawole, O.O. IoT and AI-Based Remote Patient Monitoring for Chronic Disease Management. Available online: https://www.researchgate.net/publication/389504944_IoT_and_AI-Based_Remote_Patient_Monitoring_for_Chronic_Disease_Management/ (accessed on 1 March 2024).
  105. Leuzy, A.; Heeman, F.; Bosch, I.; Lenér, F.; Dottori, M.; Quitz, K.; Moscoso, A.; Kern, S.; Zetterberg, H.; Blennow, K.; et al. REAL AD—Validation of a realistic screening approach for early Alzheimer’s disease. Alzheimer’s Dementia 2024, 20, 8172–8182. [Google Scholar] [CrossRef] [PubMed]
  106. Berron, D.; Olsson, E.; Andersson, F.; Janelidze, S.; Tideman, P.; Düzel, E.; Palmqvist, S.; Stomrud, E.; Hansson, O. Remote and unsupervised digital memory assessments can reliably detect cognitive impairment in Alzheimer’s disease. Alzheimer’s Dementia 2024, 20, 4775–4791. [Google Scholar] [CrossRef] [PubMed]
  107. Umer, M.; Aljrees, T.; Karamti, H.; Ishaq, A.; Alsubai, S.; Omar, M.; Bashir, A.K.; Ashraf, I. Heart failure patients monitoring using IoT-based remote monitoring system. Sci. Rep. 2023, 13, 19213. [Google Scholar] [CrossRef]
  108. Zang, J.; An, Q.; Li, B.; Zhang, Z.; Gao, L.; Xue, C. A novel wearable device integrating ECG and PCG for cardiac health monitoring. Microsyst. Nanoeng. 2025, 11, 7. [Google Scholar] [CrossRef]
  109. Rahman, M.; Morshed, B.I. A Smart Wearable for Real-Time Cardiac Disease Detection Using Beat-by-Beat ECG Signal Analysis with an Edge Computing AI Classifier. In Proceedings of the 2024 IEEE 20th International Conference on Body Sensor Networks (BSN), Chicago, IL, USA, 15–17 October 2024; pp. 1–4. [Google Scholar]
  110. Moody, G.B.; Mark, R.G. The impact of the MIT-BIH arrhythmia database. IEEE Eng. Med. Biol. Mag. 2001, 20, 45–50. [Google Scholar] [CrossRef] [PubMed]
111. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
  112. Yu, E.; Ye, Z.; Zhang, Z.; Qian, L.; Xie, M. A federated recommendation algorithm based on user clustering and meta-learning. Appl. Soft Comput. 2024, 158, 111483. [Google Scholar] [CrossRef]
  113. Chronis, C.; Varlamis, I.; Himeur, Y.; Sayed, A.N.; Al-Hasan, T.M.; Nhlabatsi, A.; Bensaali, F.; Dimitrakopoulos, G. A survey on the use of federated learning in privacy-preserving recommender systems. IEEE Open J. Comput. Soc. 2024, 5, 227–247. [Google Scholar] [CrossRef]
  114. Yin, H.; Qu, L.; Chen, T.; Yuan, W.; Zheng, R.; Long, J.; Xia, X.; Shi, Y.; Zhang, C. On-Device Recommender Systems: A Comprehensive Survey. arXiv 2024, arXiv:2401.11441. [Google Scholar]
  115. He, Y.; Wei, L.; Chen, F.; Zhang, H.; Yu, J.; Wang, H. Fedai: Federated recommendation system with anonymized interactions. Expert Syst. Appl. 2025, 271, 126564. [Google Scholar] [CrossRef]
  116. Li, J.; Tang, F.; Zhu, C.; He, S.; Zhang, S.; Su, Y. BP-YOLO: A Real-Time Product Detection and Shopping Behaviors Recognition Model for Intelligent Unmanned Vending Machine. IEEE Access 2024, 12, 21038–21051. [Google Scholar] [CrossRef]
  117. Qiao, W.; Huang, M.; Gao, Z.; Wang, X. Distributed dynamic pricing of multiple perishable products using multi-agent reinforcement learning. Expert Syst. Appl. 2024, 237, 121252. [Google Scholar] [CrossRef]
  118. Ganga, B.; Lata, B.; Venugopal, K. Object detection and crowd analysis using deep learning techniques: Comprehensive review and future directions. Neurocomputing 2024, 597, 127932. [Google Scholar] [CrossRef]
  119. Duja, K.U.; Khan, I.A.; Alsuhaibani, M. Video Surveillance Anomaly Detection: A Review on Deep Learning Benchmarks. IEEE Access 2024, 12, 164811–164842. [Google Scholar] [CrossRef]
  120. Rajapakse, V.; Karunanayake, I.; Ahmed, N. Intelligence at the extreme edge: A survey on reformable TinyML. ACM Comput. Surv. 2023, 55, 1–30. [Google Scholar] [CrossRef]
  121. Jia, W.; Wang, W.; Zhang, Z. From simple digital twin to complex digital twin part II: Multi-scenario applications of digital twin shop floor. Adv. Eng. Inform. 2023, 56, 101915. [Google Scholar] [CrossRef]
  122. Hua, H.; Li, Y.; Wang, T.; Dong, N.; Li, W.; Cao, J. Edge computing with artificial intelligence: A machine learning perspective. ACM Comput. Surv. 2023, 55, 1–35. [Google Scholar] [CrossRef]
  123. Wang, T.; Lu, Y.; Cao, Z.; Shu, L.; Zheng, X.; Liu, A.; Xie, M. When sensor-cloud meets mobile edge computing. Sensors 2019, 19, 5324. [Google Scholar] [CrossRef]
  124. Xu, C.; Guo, J.; Zeng, J.; Li, Y.; Cao, J.; Wang, T. Incorporating Startup Delay into Collaborative Edge Computing for Superior Task Efficiency. In Proceedings of the 2024 IEEE/ACM 32nd International Symposium on Quality of Service (IWQoS), Guangzhou, China, 19–21 June 2024; pp. 1–10. [Google Scholar]
  125. Zhou, H.; Wang, H.; Yu, Z.; Bin, G.; Xiao, M.; Wu, J. Federated distributed deep reinforcement learning for recommendation-enabled edge caching. IEEE Trans. Serv. Comput. 2024, 17, 3640–3656. [Google Scholar] [CrossRef]
  126. Saeik, F.; Avgeris, M.; Spatharakis, D.; Santi, N.; Dechouniotis, D.; Violos, J.; Leivadeas, A.; Athanasopoulos, N.; Mitton, N.; Papavassiliou, S. Task offloading in Edge and Cloud Computing: A survey on mathematical, artificial intelligence and control theory solutions. Comput. Netw. 2021, 195, 108177. [Google Scholar] [CrossRef]
  127. Zhou, H.; Wu, T.; Chen, X.; He, S.; Guo, D.; Wu, J. Reverse auction-based computation offloading and resource allocation in mobile cloud-edge computing. IEEE Trans. Mob. Comput. 2022, 22, 6144–6159. [Google Scholar] [CrossRef]
  128. Wang, T.; Liang, Y.; Zhang, Y.; Zheng, X.; Arif, M.; Wang, J.; Jin, Q. An intelligent dynamic offloading from cloud to edge for smart iot systems with big data. IEEE Trans. Netw. Sci. Eng. 2020, 7, 2598–2607. [Google Scholar] [CrossRef]
  129. Trilles, S.; Hammad, S.S.; Iskandaryan, D. Anomaly detection based on artificial intelligence of things: A systematic literature mapping. Internet Things 2024, 25, 101063. [Google Scholar] [CrossRef]
  130. Nguyen, D.C.; Ding, M.; Pathirana, P.N.; Seneviratne, A.; Li, J.; Poor, H.V. Federated learning for internet of things: A comprehensive survey. IEEE Commun. Surv. Tutor. 2021, 23, 1622–1658. [Google Scholar] [CrossRef]
  131. Zhu, X.; Li, J.; Liu, Y.; Ma, C.; Wang, W. A survey on model compression for large language models. Trans. Assoc. Comput. Linguist. 2024, 12, 1556–1577. [Google Scholar] [CrossRef]
  132. Zhao, Y.; Xiao, A.; Wu, S.; Jiang, C.; Kuang, L.; Shi, Y. Adaptive partitioning and placement for two-layer collaborative caching in mobile edge computing networks. IEEE Trans. Wirel. Commun. 2024, 23, 8215–8231. [Google Scholar] [CrossRef]
  133. Ji, Z.; Qin, Z.; Tao, X. Meta federated reinforcement learning for distributed resource allocation. IEEE Trans. Wirel. Commun. 2023, 23, 7865–7876. [Google Scholar] [CrossRef]
  134. Zhou, I.; Tofigh, F.; Piccardi, M.; Abolhasan, M.; Franklin, D.; Lipman, J. Secure multi-party computation for machine learning: A survey. IEEE Access 2024, 12, 53881–53899. [Google Scholar] [CrossRef]
  135. Demelius, L.; Kern, R.; Trügler, A. Recent advances of differential privacy in centralized deep learning: A systematic survey. ACM Comput. Surv. 2025, 57, 1–28. [Google Scholar] [CrossRef]
  136. Tian, Y.; Zhang, Z.; Yang, Y.; Chen, Z.; Yang, Z.; Jin, R.; Quek, T.Q.; Wong, K.K. An edge-cloud collaboration framework for generative AI service provision with synergetic big cloud model and small edge models. IEEE Netw. 2024, 38, 37–46. [Google Scholar] [CrossRef]
  137. Wang, J.; Zhang, H.; Han, X.; Zhao, J.; Wang, J. Lyapunov-Assisted Decentralized Dynamic Offloading Strategy based on Deep Reinforcement Learning. IEEE Internet Things J. 2024, 12, 8368–8380. [Google Scholar] [CrossRef]
  138. Zhao, M.; Zhang, R.; He, Z.; Li, K. Joint Optimization of Trajectory, Offloading, Caching, and Migration for UAV-Assisted MEC. IEEE Trans. Mob. Comput. 2024, 24, 1981–1998. [Google Scholar] [CrossRef]
  139. Kumar, A.S.; Zhao, L.; Fernando, X. Task offloading and resource allocation in vehicular networks: A Lyapunov-based deep reinforcement learning approach. IEEE Trans. Veh. Technol. 2023, 72, 13360–13373. [Google Scholar] [CrossRef]
  140. Dantas, P.V.; Sabino da Silva, W., Jr.; Cordeiro, L.C.; Carvalho, C.B. A comprehensive review of model compression techniques in machine learning. Appl. Intell. 2024, 54, 11804–11844. [Google Scholar] [CrossRef]
  141. Mao, Y.; Hao, Y.; Cao, X.; Fang, Y.; Lin, X.; Mao, H.; Xu, Z. Dynamic Graph Embedding via Meta-Learning. IEEE Trans. Knowl. Data Eng. 2023, 36, 2967–2979. [Google Scholar] [CrossRef]
  142. Zhang, H.; Ding, J.; Feng, L.; Tan, K.C.; Li, K. Solving Expensive Optimization Problems in Dynamic Environments with Meta-learning. IEEE Trans. Cybern. 2024, 54, 7430–7442. [Google Scholar] [CrossRef]
  143. Liang, Y.; Yin, M.; Wang, W.; Liu, Q.; Wang, L.; Zheng, X.; Wang, T. Collaborative Edge Server Placement for Maximizing QoS with Distributed Data Cleaning. IEEE Trans. Serv. Comput. 2025. [Google Scholar] [CrossRef]
  144. Cecchinato, D.; Erseghe, T.; Rossi, M. Elastic and predictive allocation of computing tasks in energy harvesting IoT edge networks. IEEE Trans. Netw. Sci. Eng. 2021, 8, 1772–1788. [Google Scholar] [CrossRef]
  145. Li, E.; Zeng, L.; Zhou, Z.; Chen, X. Edge AI: On-demand accelerating deep neural network inference via edge computing. IEEE Trans. Wirel. Commun. 2019, 19, 447–457. [Google Scholar] [CrossRef]
  146. Damsgaard, H.J.; Grenier, A.; Katare, D.; Taufique, Z.; Shakibhamedan, S.; Troccoli, T.; Chatzitsompanis, G.; Kanduri, A.; Ometov, A.; Ding, A.Y.; et al. Adaptive approximate computing in edge AI and IoT applications: A review. J. Syst. Archit. 2024, 150, 103114. [Google Scholar] [CrossRef]
  147. Liu, H.I.; Galindo, M.; Xie, H.; Wong, L.K.; Shuai, H.H.; Li, Y.H.; Cheng, W.H. Lightweight deep learning for resource-constrained environments: A survey. ACM Comput. Surv. 2024, 56, 1–42. [Google Scholar] [CrossRef]
  148. Singh, R.; Gill, S.S. Edge AI: A survey. Internet Things-Cyber-Phys. Syst. 2023, 3, 71–92. [Google Scholar] [CrossRef]
  149. Zhou, H.; Gu, Q.; Sun, P.; Zhou, X.; Leung, V.C.; Fan, X. Incentive-Driven and Energy Efficient Federated Learning in Mobile Edge Networks. IEEE Trans. Cogn. Commun. Netw. 2025, 11, 832–846. [Google Scholar] [CrossRef]
  150. Zhou, Z.; Chen, X.; Li, E.; Zeng, L.; Luo, K.; Zhang, J. Edge intelligence: Paving the last mile of artificial intelligence with edge computing. Proc. IEEE 2019, 107, 1738–1762. [Google Scholar] [CrossRef]
  151. Peng, Y.; Duan, J.; Zhang, J.; Li, W.; Liu, Y.; Jiang, F. Stochastic long-term energy optimization in digital twin-assisted heterogeneous edge networks. IEEE J. Sel. Areas Commun. 2024, 42, 3157–3171. [Google Scholar] [CrossRef]
  152. Shen, Y.; Shao, J.; Zhang, X.; Lin, Z.; Pan, H.; Li, D.; Zhang, J.; Letaief, K.B. Large language models empowered autonomous edge AI for connected intelligence. IEEE Commun. Mag. 2024, 62, 140–146. [Google Scholar] [CrossRef]
  153. Fan, L.; Han, Z. Hybrid quantum-classical computing for future network optimization. IEEE Netw. 2022, 36, 72–76. [Google Scholar] [CrossRef]
  154. Chai, Z.; Lin, Y.; Gao, Z.; Yu, X.; Xie, Z. Diffusion Model Empowered Efficient Data Distillation Method for Cloud-Edge Collaboration. IEEE Trans. Cogn. Commun. Netw. 2025, 11, 902–913. [Google Scholar] [CrossRef]
  155. Bhatia, M.; Sood, S. Quantum-Computing-Inspired Optimal Power Allocation Mechanism in Edge Computing Environment. IEEE Internet Things J. 2024, 11, 17878–17885. [Google Scholar] [CrossRef]
  156. Abbas, A.; Ambainis, A.; Augustino, B.; Bärtschi, A.; Buhrman, H.; Coffrin, C.; Cortiana, G.; Dunjko, V.; Egger, D.J.; Elmegreen, B.G.; et al. Challenges and opportunities in quantum optimization. Nat. Rev. Phys. 2024, 6, 718–735. [Google Scholar] [CrossRef]
  157. Naeini, H.K.; Shomali, R.; Pishahang, A.; Hasanzadeh, H.; Mohammadi, M.; Asadi, S.; Lonbar, A.G. PINN-DT: Optimizing Energy Consumption in Smart Building Using Hybrid Physics-Informed Neural Networks and Digital Twin Framework with Blockchain Security. arXiv 2025, arXiv:2503.00331. [Google Scholar]
Figure 1. The taxonomy of the discussed topics.
Figure 2. The taxonomy of AI in MEC.
Figure 3. Architecture of RL.
Figure 4. The taxonomy of mathematical methods in MEC.
Figure 5. Architecture of mathematical methods and artificial intelligence-based applications.
Figure 6. Architecture of mathematical methods and artificial intelligence in MEC.
Table 1. Summary of AI methods for edge computing optimization.

| AI Method | Problem | Goal | Contribution | Citation |
|---|---|---|---|---|
| Supervised learning (SL) | Data labeledness requirement | Classification and prediction | Learns from labeled data for tasks like classification/regression | [28] |
| Linear regression | Modeling linear relationships | Reduce prediction error | Applies to continuous output prediction tasks | [29] |
| Support vector machines (SVMs) | Nonlinear classification | Improve classification accuracy | Learns complex boundaries via kernel tricks | [30] |
| Decision trees | Decision interpretability | Simplify decisions | Hierarchical rule-based decisions for interpretability | [31] |
| Neural networks | Complexity in DL | Improve transparency | Layered learning + interpretability of internal representations | [32] |
| Unsupervised learning (UL) | Unstructured data without labels | Discover hidden patterns | Enables clustering/correlation/dimensionality reduction | [27] |
| Semi-supervised learning (SSL) | Label scarcity | Boost learning performance | Combines labeled and unlabeled data to balance efficiency and accuracy | [33] |
| DL offloading mechanism | Offloading DL workloads | Runtime efficiency | Optimizes edge DL execution with adaptive offloading and resource allocation | [34] |
| Reinforcement learning (RL) | Exploration vs. exploitation in RL | Learn optimal action policy | Balances reward seeking and state–space exploration | [35] |
| RL for UAVs | UAV trajectory planning | Real-time decision optimization | Learns adaptive policies via interaction with environment | [36] |
| Federated learning (FL) | Privacy in distributed training | Protect user data | Trains global model while keeping data local | [37] |
| Personalized FL | Poor personalization in FL | Personalize global model | Aggregates local models for better personalized prediction | [39] |
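To make the federated learning rows of Table 1 concrete, the following minimal sketch illustrates the federated averaging idea [37,111]: each client fits a model on its own data, and only parameter updates are aggregated, weighted by local sample counts. The linear model, synthetic client data, learning rate, and round counts are illustrative assumptions, not a scheme taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent pass on its private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
        w -= lr * grad
    return w

# Synthetic, mildly non-IID client datasets (assumed for illustration).
true_w = np.array([2.0, -1.0])
clients = []
for k in range(4):
    X = rng.normal(loc=0.2 * k, size=(30, 2))
    y = X @ true_w + 0.05 * rng.normal(size=30)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):                              # communication rounds
    updates, sizes = [], []
    for X, y in clients:                         # in MEC, this loop runs on edge devices
        updates.append(local_update(w_global, X, y))
        sizes.append(len(y))
    # FedAvg aggregation: average client models weighted by local sample counts.
    w_global = np.average(updates, axis=0, weights=np.array(sizes) / sum(sizes))

print("estimated weights:", np.round(w_global, 3))   # should be close to [2, -1]
```

In an MEC deployment, only the parameter vectors traverse the network while raw observations stay on the devices, which is the source of the privacy and bandwidth advantages summarized in the table.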
Table 2. Summary of mathematical methods for edge computing optimization.

| Mathematical Methods | Problem | Goal | Contribution | Citation |
|---|---|---|---|---|
| Linear/nonlinear programming | Resource allocation and task scheduling | Balance energy and latency | Mathematical modeling for optimized offloading and resource coordination | [46,50,51] |
| Multi-objective optimization | Trade-offs in UAV edge computing | Improve scheduling efficiency | Pareto-based optimization balancing performance and resource use | [52] |
| SGD | Large-scale distributed training | Reduce delay and communication | Distributed SGD with high-probability convergence guarantees | [53] |
| Dynamic programming | Energy strategy for vehicles | Minimize operating costs | Multi-dimensional optimization for smart transport systems | [54] |
| Probability models | Uncertainty and faults | Enhance prediction and robustness | Demand fluctuation prediction and fault inference | [55,56] |
| Statistical learning | Load balancing and risk assessment | Improve generalization and fault tolerance | Data resampling and failure probability modeling | [58,60] |
| Copula theory | Dependency modeling | Optimize cooperation | Multivariate dependency modeling for collaborative computing | [59] |
| Queuing models | Task latency and congestion | Reduce queuing delay | Priority-aware and scalable queuing systems for edge/cloud | [61,64,77] |
| Petri net, fluid model, queuing game | Bottleneck detection and fairness | Optimize performance and fairness | Stochastic modeling for congestion control and fairness enhancement | [63,65,66] |
| Game theory | Task/resource allocation under competition | Improve system efficiency | Incentive and pricing models for fair and efficient resource sharing | [67,68,69,70] |
| Auction theory and contract theory | Market-based resource trading | Reduce waste and ensure cooperation | Economic models for edge market and federated learning incentives | [71,72] |
| Graph theory | Network topology and scheduling | Optimize structure and communication | Topology-aware scheduling and resource clustering models | [73,74,75,76] |
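As a worked illustration of the linear/nonlinear programming row in Table 2, the sketch below solves a toy partial-offloading problem with SciPy's linprog: each task may be split between local and edge execution, and the edge server has a limited cycle budget. All workloads, data sizes, and rates are assumed placeholder values rather than parameters from the cited formulations [46,50,51].

```python
import numpy as np
from scipy.optimize import linprog

# Assumed toy parameters: 4 tasks, one mobile device, one edge server.
s = np.array([400.0, 250.0, 600.0, 150.0])   # task workloads (Mcycles)
d = np.array([2.0, 8.0, 1.5, 5.0])           # task input sizes (Mbit)
f_loc, f_edge, uplink = 100.0, 1000.0, 10.0  # local CPU, edge CPU (Mcycles/s), uplink (Mbit/s)
edge_budget = 800.0                          # Mcycles the edge server can accept

# Total latency = sum_i [(1 - x_i) * s_i / f_loc + x_i * (d_i / uplink + s_i / f_edge)],
# where x_i in [0, 1] is the fraction of task i offloaded. Dropping the constant
# term, the LP objective coefficient of x_i is:
c = d / uplink + s / f_edge - s / f_loc

res = linprog(c,
              A_ub=[s], b_ub=[edge_budget],   # offloaded cycles must fit the edge budget
              bounds=[(0.0, 1.0)] * len(s),   # each x_i is a fraction of its task
              method="highs")

x = res.x
latency = np.sum((1 - x) * s / f_loc + x * (d / uplink + s / f_edge))
print("offload fractions:", np.round(x, 2))
print("total latency (s):", round(float(latency), 2))
```

Because both the objective and the capacity constraint are linear in the offloading fractions, this relaxation is an ordinary LP; an all-or-nothing offloading decision would instead require a mixed-integer formulation.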
Table 3. Summary of AI and mathematical methods in application domains.

| Field | Goal | AI / Mathematical Method | Citation |
|---|---|---|---|
| Intelligent transportation systems | Autonomous driving | Decision trees, Kalman filters | [81] |
| | Task offloading and resource allocation | Multi-objective optimization | [86] |
| Smart Cities | Urban energy management | Markov decision process | [90,91] |
| | Public safety and emergency response | Regression, statistical modeling | [92,93] |
| Intelligent healthcare | Remote disease diagnosis | Optimization | [100,102,103,104] |
| | Epidemic surveillance | Statistical forecasting | [109] |
| Smart Retail | Personalized recommendation | Collaborative filtering, matrix factorization | [112,113,115] |
| | Unmanned retail | Queuing models | [116,117,120] |
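The Smart Retail row of Table 3 pairs collaborative filtering with matrix factorization [112,113,115]. The following self-contained sketch shows the basic factorization step on a toy rating matrix; the ratings, latent dimension, and hyperparameters are invented for illustration and do not reproduce any cited system.

```python
import numpy as np

rng = np.random.default_rng(42)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)        # 0 marks an unobserved rating (toy data)
observed = R > 0

k, lr, reg = 2, 0.01, 0.02                       # latent factors, step size, L2 penalty
P = 0.1 * rng.standard_normal((R.shape[0], k))   # user factors
Q = 0.1 * rng.standard_normal((R.shape[1], k))   # item factors

for _ in range(2000):
    for u, i in zip(*np.nonzero(observed)):
        err = R[u, i] - P[u] @ Q[i]              # error on one observed rating
        pu = P[u].copy()
        P[u] += lr * (err * Q[i] - reg * P[u])   # SGD step on the user factor
        Q[i] += lr * (err * pu - reg * Q[i])     # SGD step on the item factor

print(np.round(P @ Q.T, 1))                      # dense matrix of predicted ratings
```

Each per-rating update touches only one user vector and one item vector, which is why factorization models of this kind adapt well to on-device and federated recommendation settings.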
Table 4. Comparison of integration challenges and opportunities in edge intelligence.

| Key Issue | Specific Challenge | Citation |
|---|---|---|
| Heterogeneous data integration | Number of abnormal nodes | [130,131,132] |
| | Compatibility across devices | [131] |
| | Communication cycles and latency | [133,134,135,136] |
| Computational complexity and scalability | Computational delay | [137,138,139] |
| | Model overhead | [140] |
| | Communication costs and scalability | [141,142] |
| Real-time processing | Latency | [144,145] |
| | Processing delay | [146,147] |
| | Adaptive update cycles | [146] |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
