Sensors
  • Article
  • Open Access

12 November 2025

Stochastic Geometric-Based Modeling for Partial Offloading Task Computing in Edge-AI Systems

College of Engineering and Technology, University of Doha for Science and Technology, Doha P.O. Box 24449, Qatar
* Author to whom correspondence should be addressed.
This article belongs to the Section Internet of Things

Abstract

This paper proposes a cooperative framework for resource allocation in multi-access edge computing (MEC) under a partial task offloading setting, addressing the joint challenges of learning performance and system efficiency in heterogeneous edge environments. In the proposed architecture, selected users act as edge servers (SEs) that collaboratively assist others alongside a central server (CS). A joint optimization problem is formulated to integrate model training with resource allocation while accounting for data freshness and spatial correlation among user tasks. The correlation-aware formulation penalizes outdated and redundant data, leading to improved robustness against non-i.i.d. distributions. To solve the NP-hard problem efficiently, a projected gradient descent (PGD) method is developed. The simulation results demonstrate that the proposed cooperative approach achieves a balanced delay of 0.042 s, close to edge-only computing (0.033 s) and 30% lower than the CS-only mode, while improving clustering accuracy to 99.2% (up to 15% higher than the baseline). Moreover, it reduces the central server load by nearly half, ensuring scalability and latency compliance within 3GPP limits. These findings confirm that cooperation between SEs and the CS substantially enhances reliability and performance in distributed Edge-AI systems.

1. Introduction

1.1. State-of-the-Art and Motivations

Recent advances in artificial intelligence (AI) have sparked innovation across various technological domains. Most conventional AI solutions rely on centralized computing architectures that depend on large-scale data collection. While these centralized models offer powerful learning, perception, and decision-making capabilities, they face notable limitations in dynamic and uncertain environments. Transferring data to a central server and accumulating it for computing is time-consuming and costly. Additionally, provisioning adequate computing resources on a central server is challenging, especially under high demand or time-sensitive conditions. To address these challenges, researchers have proposed more distributed and adaptive architectures. One prominent approach is multi-access edge computing (MEC), which brings computing and storage resources closer to end devices at the edge of the network. This configuration improves task offloading efficiency and enables real-time system responsiveness. Edge AI enables the execution of AI models directly at the network edge, allowing real-time decision-making with reduced latency and lower reliance on centralized servers. This approach is particularly valuable in time-sensitive and resource-constrained environments [,]. Due to the significant advantages of Edge AI, such as enhanced operational efficiency, improved data privacy, and ultra-low latency, global interest and investment in this field have surged in recent years. According to market analysis, the global Edge AI market is projected to exceed USD 66 billion by 2030 [,,].
Despite their benefits, Edge AI and MEC face several significant challenges. One of the most critical issues is the propagation of uncertainty in distributed environments. In this regard, stochastic geometry models can provide a powerful analytical framework for characterizing spatial randomness and evaluating network performance [,,,,]. Furthermore, due to asynchronous updates and decentralized model execution, local errors can accumulate and spread across the network, ultimately degrading the overall model accuracy and reliability. Another major challenge lies in the non-independent and identically distributed (non-i.i.d.) nature of user data. While various solutions have been proposed in the context of federated learning (FL) to mitigate this issue, concerns remain regarding the generalization and reliability of global models trained on highly heterogeneous local datasets []. Such data distribution disparities can significantly disrupt model convergence and performance. While most existing works on non-i.i.d. data in FL and machine learning (ML) focus on non-uniformity of local data distributions (e.g., label skew or quantity skew), fewer studies address the non-independence aspect, where user data may be statistically correlated or dependent across clients [,,].
These challenges highlight the need for robust and correlation-aware learning frameworks that maintain accuracy, adaptability, and convergence under decentralized and statistically dependent conditions. In such environments, traditional centralized or federated approaches often fall short, particularly when dealing with heterogeneous and temporally dynamic data [,,].
To address these limitations, we propose a distributed learning framework tailored for partial task offloading in MEC environments. Unlike conventional paradigms, our approach enables user devices to selectively offload task-related data to edge servers or a central server (CS) via data-level cooperation. This setup leverages local computation, respects delay constraints, and accommodates statistical heterogeneity across users. A key novelty of our model lies in its integration of spatial and directional task correlations. These correlations are often overlooked in existing works, where the data is treated as i.i.d., an assumption that rarely holds in practice. We introduce a correlation-aware loss function that explicitly incorporates data freshness and penalizes contributions from distant or weakly correlated users. This design enhances the relevance of updates, reduces the impact of outdated or biased data, and improves overall learning robustness in Edge AI systems. Using stochastic geometry analysis, we can analytically guarantee that users can be served by at least one local server at any time. In summary, the main question addressed in this paper is as follows: How can user tasks in a network be served efficiently under a partial offloading MEC scenario, with priority for edge servers, while taking into account the characteristics of an AI-driven edge model, including communication latency and the challenges posed by non-i.i.d. data?

1.2. Contributions

Different from previous works [,,,,,], this paper introduces a novel cooperative computation framework for partial offloading in MEC environments. The main contributions are as follows:
  • We propose a cooperative partial offloading model in MEC environments, where tasks can be processed either locally by neighboring users or centrally by a server, taking spatial and directional correlations into account. In particular, a closed-form upper bound for spatial correlation is derived to constrain offloading decisions based on user proximity and sensing overlap.
  • To minimize learning loss under delay and resource constraints, a novel optimization problem is formulated that incorporates freshness-aware weighting, correlation modeling, and allocation decisions. Moreover, to improve robustness in non-i.i.d. settings, we integrate earth mover’s distance (EMD) into the loss function to capture distributional dissimilarity among users’ data.
  • To ensure scalability, we develop a coordination-free solution method suitable for practical deployment in distributed MEC systems.
  • Leveraging stochastic geometry, we provide tractable analytical characterizations of coverage probability and delay distribution, which not only enable probabilistic guarantees on task offloading but also ensure the generalizability of the proposed framework to large-scale and heterogeneous MEC networks.
  • We show that the proposed framework significantly reduces the computation load on the central server compared to baseline schemes and, for a given delay threshold, serves a considerably higher number of users.

3. System Model and Parameters

This article focuses on a task offloading scenario in which data-generating equipment, referred to as requesting entities (REs), can delegate their computational tasks either to nearby local edge devices, referred to as serving entities (SEs), or to a centralized server (CS). The goal is to enable efficient and delay-sensitive learning by leveraging Edge AI frameworks, where training occurs across distributed nodes with limited resources and potentially heterogeneous data distributions. In this regard, as can be seen in Figure 1, we consider a central server for task computing.
Figure 1. A hybrid edge architecture in which some entities request task execution (REs) and others provide computing resources (SEs). A CS is also available for additional offloading. Typical use cases include smart campuses or factories, where IoT devices and robots can be considered REs, and attendees' smartphones can be considered SEs performing partial task offloading.
The set of equipment whose tasks need to be addressed is represented by $\mathcal{RE} = \{r\}_{r=1}^{R}$. In addition, the set of equipment that can provide computing resources for local task computing is denoted by $\mathcal{SE} = \{s\}_{s=1}^{S}$. We assume that the devices are randomly distributed over an area of size $|A|$ (in square meters), following a Poisson point process (PPP) [,,]. Let $\lambda_{SE}$ and $\lambda_{RE}$ denote the spatial intensities of SEs and REs, respectively, measured in devices per square meter. According to the properties of the PPP, the probability of observing exactly $k$ devices of a given type (either SEs or REs) within a region of area $|A|$ is given by $f(k) = \frac{e^{-\lambda |A|} (\lambda |A|)^k}{k!}$, where $\lambda \in \{\lambda_{SE}, \lambda_{RE}\}$ denotes the spatial density (intensity) of SEs or REs, respectively. Furthermore, we assume that the spatial distributions of SEs and REs are independent (In our model, REs represent IoT devices or sensors that are typically deployed in large numbers and remain stationary within a given environment (e.g., a campus or factory). In contrast, SEs correspond to mobile devices, such as smartphones or laptops, which are carried by human users. Since the placement of REs is governed by specific deployment requirements, while the mobility of SEs is driven by human movement patterns, it is reasonable to assume that the spatial distributions of REs and SEs are mutually independent. Moreover, the locations of individual REs and SEs are also assumed to be independent of one another. This independence assumption is widely adopted in stochastic geometry models, as it enhances both realism and analytical tractability) of each other, consistent with the properties of independent homogeneous PPPs [,,].
The size of the task for RE $r$ is denoted by $D_r$ (in bytes), while the computing capacity of SE $s$ is represented by $C_s$ (in CPU cycles/s). Similarly, the computing capacity of the CS is denoted by $\hat{C}$. In this paper, we assume that the task data consists of image-based content. (For instance, one can envision a scenario involving sensors and mobile robots deployed across environments such as a university campus, smart factory, or smart city. These devices are responsible for capturing images of their surroundings to support perception and decision-making tasks.) Accordingly, we adopt a widely used benchmark dataset to model the image data in our experiments. In line with this, we define a dynamic field of view (FoV) for each RE $r$, represented by an angular direction $\phi_r^t$ at time slot $t$. This direction evolves over time according to $\phi_r^t = \phi_r^0 + t \varphi$, where $\phi_r^0 \in [0, 2\pi]$ is the initial viewing angle and $\varphi$ is a constant angular increment per time slot. In addition, each RE is assumed to have a symmetric viewing range with a total width of $\tilde{\varphi}$, meaning that at time $t$, RE $r$ can observe all the objects located within the angular sector $\left[ \phi_r^t - \frac{\tilde{\varphi}}{2},\ \phi_r^t + \frac{\tilde{\varphi}}{2} \right]$. If the angular displacement $\varphi$ between two consecutive time slots satisfies $\varphi \geq \tilde{\varphi}$, then the task based on the data collected at time $t$ is assumed to be independent of the task based on the data collected at time $t-1$. Otherwise, a dependency exists, which will be addressed in the subsequent sections.
Furthermore, the maximum visual sensing range for all REs is assumed to be equal and is represented by $L$. In addition, the time at which RE $r$ collects the data for its task is denoted by $\tau_r$, with $t-1 < \tau_r \leq t$. Furthermore, the maximum range of each edge server is also denoted by $L$. In this paper, we assume that each RE can be covered by at least $S$ servers but can only be assigned to one, the details of which are provided in Section 5. A data sample from the dataset of RE $r$ is denoted by $x_r \in \mathcal{D}_r$, where $\mathcal{D}_r$ represents the dataset associated with RE $r$. The size of each RE's dataset is modeled as a uniform random variable:
$|\mathcal{D}_r| \sim \mathrm{Uniform}[n_{\min}, n_{\max}], \quad \forall r,$
where $n_{\min}$ and $n_{\max}$ denote the minimum and maximum number of samples, respectively. Here, $n_{\max} = \varrho N$, where $N$ is the total number of samples in the global dataset, and $\varrho \in [0, 1]$ quantifies the degree of quantity skew across the REs. We assume that for the transmitted $x_r$, the received version at SE $s$ is represented by $x_r^s$, and at the CS by $\hat{x}_r$, where the effects of wireless channel transmission, such as additive noise and fading, have been incorporated within them.
We use the notation $\delta_{r,s}^t \in \{0, 1\}$ to represent the assignment variable, where $\delta_{r,s}^t = 1$ if RE $r$ is assigned to SE $s$, and $\delta_{r,s}^t = 0$ otherwise. Let $\alpha_r^t \in [0, 1]$ represent the fraction of the data task from RE $r$ that is offloaded to the CS. Consequently, $(1 - \alpha_r^t)$ denotes the fraction of the task offloaded to the assigned SE $s$. Accordingly, the number of data samples offloaded to the CS by RE $r$ is $\alpha_r^t D_r$, and we denote this subset as $\hat{\mathcal{D}}_r = \{\hat{x}_r\}$. Similarly, the number of samples offloaded to SE $s$ is $(1 - \alpha_r^t) D_r$, and we denote this subset as $\mathcal{D}_r^s = \{x_r^s\}$. To increase the readability of the paper, the list of symbols is provided in Table 2.
Table 2. List of system model parameters.
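To make the deployment model concrete, the following Python sketch samples SE and RE locations from independent homogeneous PPPs over a square region. The densities, region size, and random seed are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ppp(intensity, area_side):
    """Sample a homogeneous PPP on a square of side `area_side` (m):
    draw the count from Poisson(lambda * |A|), then place points uniformly."""
    area = area_side ** 2
    n = rng.poisson(intensity * area)
    return rng.uniform(0.0, area_side, size=(n, 2))

lam_se, lam_re = 2.0e-4, 1.0e-3   # illustrative densities (devices/m^2)
side = 500.0                       # a 500 m x 500 m region
se_pos = sample_ppp(lam_se, side)
re_pos = sample_ppp(lam_re, side)
print(f"{len(se_pos)} SEs and {len(re_pos)} REs drawn; "
      f"expected {lam_se * side**2:.0f} and {lam_re * side**2:.0f}")
```

Sampling the two point sets with separate draws mirrors the independence assumption above: the SE and RE configurations never share randomness.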

4. The Proposed Problem

4.1. Constraints

Each RE must be assigned to exactly one SE; as a result,
$\sum_s \delta_{r,s}^t = 1, \quad \forall r, t.$
Furthermore, the following constraints ensure that the total computational load assigned to each SE and the CS does not exceed their respective computing capacities:
$\sum_r \gamma\, \delta_{r,s}^t (1 - \alpha_r^t) D_r \leq C_s, \quad \forall s, t,$
$\sum_r \gamma\, \alpha_r^t D_r \leq \hat{C}, \quad \forall t,$
where $\gamma$ denotes the number of CPU cycles required per byte []. In addition, we consider computing delay, transmission delay, and queuing delay. The computing delay is calculated as
$T_{r,\mathrm{Comp}} = \gamma \left( \frac{\alpha_r^t D_r}{\hat{C}} + \frac{(1 - \alpha_r^t) D_r}{C_s} \right), \quad \forall r.$
The transmission delay is obtained as follows:
$T_{r,\mathrm{Trans}} = \frac{\alpha_r^t D_r}{\hat{V}} + \frac{(1 - \alpha_r^t) D_r}{V}, \quad \forall r,$
where $V$ and $\hat{V}$ are the transmission data rates (in bytes per second) between the REs and the SEs, and between the REs and the CS, respectively. By considering the $M/G/1$ queuing model for both the CS and the SEs, the total queuing delay experienced by RE $r$ under partial offloading is modeled by the Pollaczek–Khinchin formula. We denote by $\lambda_{task}$ the task generation rate of each RE (in tasks per second). The net arrival rates to the CS and to SE $s$ are then
$\Lambda_{CS} = \lambda_{task} \sum_r \alpha_r \approx \lambda_{task}\, |A|\, \lambda_{RE}\, \bar{\alpha},$
$\Lambda_s = \lambda_{task} \sum_r \tilde{\delta}_{r,s} (1 - \alpha_r),$
where $\bar{\alpha}$ denotes the average offloading fraction across REs (for large systems one may use $\bar{\alpha} = \mathbb{E}[\alpha_r]$). For the CS, the service time of a task originating from RE $r$ (in seconds) is
$S_{CS,r} = \frac{\gamma \alpha_r D_r}{\hat{C}}, \quad D_r \text{ in [bytes]},\ \hat{C} \text{ in [CPU cycles/s]}.$
Consequently, the first and second moments of the service time at the CS are
$\mathbb{E}[S_{CS}] = \frac{\mathbb{E}[\gamma \alpha D]}{\hat{C}}, \qquad \mathbb{E}[S_{CS}^2] = \frac{\mathbb{E}[(\gamma \alpha D)^2]}{\hat{C}^2}.$
By the Pollaczek–Khinchin formula for an $M/G/1$ queue, the mean waiting time in queue (excluding service) at the CS is
$W_{q,CS} = \frac{\Lambda_{CS}\, \mathbb{E}[S_{CS}^2]}{2 (1 - \rho_{CS})}, \qquad \rho_{CS} = \Lambda_{CS}\, \mathbb{E}[S_{CS}].$
Analogously, for SE $s$ with service time $S_{s,r} = \frac{\gamma (1 - \alpha_r) D_r}{C_s}$, we have
$\mathbb{E}[S_s] = \frac{\mathbb{E}[\gamma (1 - \alpha) D]}{C_s}, \qquad \mathbb{E}[S_s^2] = \frac{\mathbb{E}[\gamma^2 (1 - \alpha)^2 D^2]}{C_s^2},$
$W_{q,s} = \frac{\Lambda_s\, \mathbb{E}[S_s^2]}{2 (1 - \rho_s)}, \qquad \rho_s = \Lambda_s\, \mathbb{E}[S_s].$
Finally, the expected queuing delay experienced by RE $r$ (in seconds) under partial offloading and soft assignments is given by the mixture
$T_{r,\mathrm{Que}} = \alpha_r W_{q,CS} + (1 - \alpha_r) \sum_s \tilde{\delta}_{r,s} W_{q,s}.$
By the following constraint, we guarantee that the total delay $T_{r,\mathrm{Total}}$ remains below the maximum delay threshold $T_{\mathrm{Max}}$:
$T_{r,\mathrm{Comp}} + T_{r,\mathrm{Trans}} + T_{r,\mathrm{Que}} = T_{r,\mathrm{Total}} \leq T_{\mathrm{Max}}, \quad \forall r.$
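As an illustration of how the delay components combine, the following Python sketch evaluates the computing, transmission, and Pollaczek–Khinchin queuing delays for a single RE under fixed parameters. All numerical values (task size, capacities, rates, and the assumed number of REs per SE) are placeholder assumptions, and the service time is treated as deterministic so that $\mathbb{E}[S^2] = \mathbb{E}[S]^2$.

```python
import numpy as np

def pk_waiting_time(arr_rate, s1, s2):
    """Pollaczek-Khinchin mean waiting time for an M/G/1 queue, given the
    arrival rate and the first/second moments of the service time."""
    rho = arr_rate * s1
    if rho >= 1.0:
        return np.inf              # unstable queue
    return arr_rate * s2 / (2.0 * (1.0 - rho))

# Illustrative parameters (assumptions, not the paper's simulation values)
gamma, D = 100.0, 2_000.0          # cycles/byte, task size in bytes
C_hat, C_s = 5e6, 1e6              # CS and SE capacities (cycles/s)
V, V_hat = 1e6, 5e5                # RE->SE and RE->CS rates (bytes/s)
lam_task, n_re, alpha = 0.5, 20, 0.4

T_comp = gamma * (alpha * D / C_hat + (1 - alpha) * D / C_s)
T_trans = alpha * D / V_hat + (1 - alpha) * D / V

# Deterministic service times: E[S] and E[S^2] = E[S]^2
s_cs = gamma * alpha * D / C_hat
s_se = gamma * (1 - alpha) * D / C_s
W_cs = pk_waiting_time(lam_task * n_re * alpha, s_cs, s_cs**2)
W_se = pk_waiting_time(lam_task * (1 - alpha) * 4, s_se, s_se**2)  # ~4 REs/SE

T_que = alpha * W_cs + (1 - alpha) * W_se
print(f"T_total = {T_comp + T_trans + T_que:.4f} s")
```

With these toy numbers the computing term dominates, which is consistent with the constraint above acting primarily through the capacity allocation.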

4.2. Data Freshness and Distribution-Aware Loss Modeling

Due to the geographical distribution of REs, each RE observes a distinct data distribution, leading to statistical heterogeneity (non-i.i.d. data) across the system. This diversity may adversely affect model convergence, as many existing solutions assume i.i.d. data and are developed under this assumption. We refer to such a scenario as non-i.i.d.-blind, and we develop a non-i.i.d.-aware model instead. To do so, we formulate a delay-aware and distribution-sensitive loss model that incorporates statistical dissimilarity, temporal dynamics, and communication delay penalties. The global loss function at the CS is given by the following:
$L(\alpha, \theta) = \sum_r \sum_{\hat{x}_r \in \hat{\mathcal{D}}_r} \frac{\alpha_r^t D_r}{\sum_r \alpha_r^t D_r}\, l(\hat{x}_r; \theta)\, \nu(\tau_r), \quad \forall t,$
where $l(\hat{x}_r; \theta)$ is the local loss. In parallel, the loss function at each SE $s$ is computed as follows:
$L_s(\alpha, \delta, \theta_s) = \sum_r \sum_{x_r^s \in \mathcal{D}_r^s} \frac{\delta_{r,s}^t (1 - \alpha_r^t) D_r}{\sum_r \delta_{r,s}^t (1 - \alpha_r^t) D_r}\, l(x_r^s; \theta_s)\, g_r(t)\, \nu(\tau_r), \quad \forall s,$
where the delay penalty term is defined as $\nu(\tau_r) = e^{-(t - \tau_r)}$, and $\tau_r$ denotes the time when the sample from RE $r$ was generated. This function prioritizes fresher data by assigning smaller penalties to more recent samples. The dissimilarity coefficient $g_r(t)$ quantifies the statistical distance between RE $r$'s local distribution and the global distribution using EMD:
$g_r(t) = \exp\left( -\sum_{k=1}^{K} \left\| \hat{P}_{r,k}(t) - P_k(t) \right\| w_k \right), \quad \forall r.$
To model temporal dynamics, assume that at time slot $t-1$, only $K' < K$ classes have been observed (i.e., $P_{r,k}(t-1) > 0$ for those classes), while the remaining $K - K'$ classes are unseen. The estimated class probabilities at time $t$ are then as follows:
$\hat{P}_{r,k}(t) = \begin{cases} P_{r,k}(t-1)\, e^{-\kappa_{time}}, & \text{if } P_{r,k}(t-1) > 0, \\ \epsilon = \dfrac{1 - e^{-\kappa_{time}}}{\max(1,\, K - K')}, & \text{if } P_{r,k}(t-1) = 0, \end{cases}$
where $\kappa_{time}$ controls the decay of outdated distributions and $\epsilon$ ensures normalization (as shown in Appendix A). Furthermore, if all classes have been seen at time slot $t-1$ (i.e., $P_{r,k}(t-1) > 0, \forall k$), we assume that $\hat{P}_{r,k}(t) \approx P_{r,k}(t-1), \forall r, k$. Although this work focuses on quantity skew due to non-uniform sensing rates and task sizes, feature skew is partly reflected through the spatial randomness of RE datasets and wireless noise distortions. Label skew is not directly relevant here since the framework operates in an unsupervised setting. Future extensions could explicitly incorporate feature-level or domain-shift variations to capture broader heterogeneity conditions in Edge AI.
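The freshness and dissimilarity weighting above can be sketched in a few lines of Python. The helper names and the toy four-class distribution below are illustrative assumptions; the temporal update decays seen classes by $e^{-\kappa_{time}}$ and spreads the mass $\epsilon$ over unseen ones so the distribution renormalizes by construction.

```python
import numpy as np

def freshness(t, tau):
    """Delay penalty nu(tau) = exp(-(t - tau)): recent samples weigh more."""
    return np.exp(-(t - tau))

def dissimilarity(p_local, p_global, w=None):
    """EMD-style coefficient g_r(t) = exp(-sum_k |P_hat - P| * w_k)."""
    w = np.ones_like(p_global) if w is None else w
    return np.exp(-np.sum(np.abs(p_local - p_global) * w))

def evolve_distribution(p_prev, kappa_time):
    """Temporal update: decay seen classes, spread epsilon over unseen ones."""
    seen = p_prev > 0
    if not np.any(~seen):
        return p_prev.copy()       # all classes seen: keep previous distribution
    p_hat = np.where(seen, p_prev * np.exp(-kappa_time), 0.0)
    k_unseen = max(1, int(np.sum(~seen)))
    p_hat[~seen] = (1.0 - np.exp(-kappa_time)) / k_unseen   # epsilon
    return p_hat

p_prev = np.array([0.5, 0.5, 0.0, 0.0])     # 2 of 4 classes seen at t-1
p_hat = evolve_distribution(p_prev, kappa_time=0.2)
print(p_hat, p_hat.sum())                    # sums to 1 by construction
print(dissimilarity(p_hat, np.full(4, 0.25)))
```

The weight applied to each sample in the SE loss is then simply the product of `freshness` and `dissimilarity`, matching the per-sample factors $g_r(t)\,\nu(\tau_r)$.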
In addition, each RE is assumed to have a symmetric FoV with total angular width $\tilde{\varphi}$, centered around its viewing angle $\phi_r$. That is, at time $t$, RE $r$ can observe all objects located within an angular sector of width $\tilde{\varphi}$ centered at $\phi_r$.
We consider REs uniformly distributed within a circular region of radius $L$:
$l_r \sim \mathrm{Uniform}[0, L], \quad \forall r.$
We incorporate both spatial and directional correlations between the tasks of REs $r$ and $r'$ using a physically meaningful and dimensionally consistent formulation. The correlation depends on their positions $l_r, l_{r'}$ and viewing angles $\phi_r, \phi_{r'}$, with angular difference $\Delta\phi = |\phi_r - \phi_{r'}|$. As illustrated in Figure 2, the task correlation is modeled as follows:
$C_{r,r'} = \exp\left( -\frac{\| l_r - l_{r'} \|^2}{2 s^2} \right) \exp\left( -\frac{1 - \cos(\Delta\phi)}{2 \sigma_\phi^2} \right),$
where $s$ is the spatial correlation length and $\sigma_\phi$ is the angular correlation parameter, ensuring dimensional consistency. To ensure the correlation remains below a threshold $\kappa \in (0, 1]$, we require
$C_{r,r'} \leq \kappa, \quad \forall r, r'.$
Figure 2. An illustrative example of how the FoV of REs overlap and how their spatial correlation influences their observations. The yellow star represents a typical point that is visible to both users.
This leads to the following geometric condition:
$\| l_r - l_{r'} \|^2 + \frac{s^2 (1 - \cos(\Delta\phi))}{\sigma_\phi^2} \geq 2 s^2 \ln\frac{1}{\kappa}.$
For system design, we consider the expected spatial configuration. Under the uniform distribution, the expected squared distance is $\mathbb{E}[\| l_r - l_{r'} \|^2] = L^2 / 3$. This yields the following simplified angular correlation:
$C_{\max}(\Delta\phi) = \kappa_s \cdot \exp\left( -\frac{1 - \cos(\Delta\phi)}{2 \sigma_\phi^2} \right),$
where $\kappa_s = \exp\left( -L^2 / 6 s^2 \right)$ represents the spatial correlation baseline. The fundamental design constraint becomes the following:
$\tilde{\varphi} \leq \arccos\left( 1 - 2 \sigma_\phi^2 \ln\frac{\exp\left( -L^2 / 6 s^2 \right)}{\kappa} \right),$
provided the argument lies in $[-1, 1]$. This closed-form expression provides a practical design guideline, clearly showing the trade-offs between spatial coverage ($L$), correlation parameters ($s, \sigma_\phi$), and the correlation threshold ($\kappa$) (more details are provided in Appendix B).
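The bound in the last expression is straightforward to evaluate numerically. The following Python sketch computes the maximum FoV width for given $L$, $s$, $\sigma_\phi$, and $\kappa$; the parameter values are illustrative assumptions, and the function returns None when the arccos argument leaves $[-1, 1]$, i.e., when the constraint is infeasible or vacuous.

```python
import numpy as np

def fov_upper_bound(L, s, sigma_phi, kappa):
    """Closed-form FoV width bound (in degrees):
    phi_tilde <= arccos(1 - 2*sigma_phi^2 * ln(kappa_s / kappa)),
    with kappa_s = exp(-L^2 / (6 s^2))."""
    kappa_s = np.exp(-L**2 / (6 * s**2))
    arg = 1 - 2 * sigma_phi**2 * np.log(kappa_s / kappa)
    return np.degrees(np.arccos(arg)) if -1 <= arg <= 1 else None

# Illustrative parameters (assumed, not taken from the paper's simulations)
print(fov_upper_bound(L=100.0, s=80.0, sigma_phi=0.5, kappa=0.5))  # ~38 deg
```

Note that the constraint only binds when $\kappa_s > \kappa$; otherwise the average-position correlation already satisfies the threshold and the arccos argument exceeds 1.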

4.3. Problem Formulation

With the aim of minimizing the total loss of the system, i.e., the loss of the CS and SEs over variables α , δ , and θ , subject to the constraints discussed above, we state the following optimization problem:
$\min_{\alpha, \delta, \theta}\ L_{CS}(\alpha, \theta) + \sum_s L_s(\alpha, \delta, \theta_s),$ (26a)
$\text{s.t.:}\ \sum_s \delta_{r,s}^t = 1, \quad \forall r, t,$ (26b)
$T_{r,\mathrm{Total}} \leq T_{\mathrm{Max}}, \quad \forall r,$ (26c)
$\sum_r \gamma\, \delta_{r,s}^t (1 - \alpha_r^t) D_r \leq C_s, \quad \forall s,$ (26d)
$\sum_r \gamma\, \alpha_r^t D_r \leq \hat{C},$ (26e)
where constraint (26b) guarantees that each RE is assigned to exactly one SE, while (26c) ensures that the total delay does not exceed the maximum allowable threshold $T_{\mathrm{Max}}$. In addition, constraints (26d) and (26e) restrict the sizes of tasks offloaded to the local SEs and the CS so that they remain within their respective computing capacities.

5. Feasibility Analysis

In this section, we present a mathematical analysis to evaluate the feasibility of the system and ensure its consistency. Based on the properties of a homogeneous PPP, the PDF of the distance D between an RE and its nearest serving entity SE is given by
$f_D(d) = 2 \pi \lambda_{SE}\, d\, e^{-\lambda_{SE} \pi d^2}, \quad d \geq 0,$
where $\lambda_{SE}$ denotes the density of SEs. The expected distance can be derived as
$\mathbb{E}[D] = \frac{1}{2 \sqrt{\lambda_{SE}}}.$
To guarantee that REs are within a maximum distance threshold d max from at least one SE with high probability, we impose
$\Pr(D \leq d_{\max}) = 1 - e^{-\lambda_{SE} \pi d_{\max}^2} \geq 1 - \varepsilon,$
where $\varepsilon$ is the outage tolerance (i.e., the probability that an RE is not covered within $d_{\max}$). Rearranging the above condition yields the following requirement on the SE density:
$\lambda_{SE} \geq \frac{\ln(1/\varepsilon)}{\pi d_{\max}^2}.$
This condition provides a lower bound on the spatial density of SEs to achieve a coverage probability of at least $1 - \varepsilon$ within the distance threshold $d_{\max}$. To satisfy the requirement that each RE is covered by at least $S$ servers, we analyze the coverage under a homogeneous PPP. Let $\tilde{S}$ denote the number of SEs within distance $L$ of a typical RE. Then, $\tilde{S} \sim \mathrm{Poisson}(\lambda_{SE} \pi L^2)$. To ensure $P(\tilde{S} \geq S) \geq 1 - \tilde{\varepsilon}$, we must have the following:
$e^{-\lambda_{SE} \pi L^2} \sum_{k=0}^{S-1} \frac{(\lambda_{SE} \pi L^2)^k}{k!} \leq \tilde{\varepsilon},$
which follows from the coverage probability
$P(\tilde{S} \geq S) = 1 - e^{-\lambda_{SE} \pi L^2} \sum_{k=0}^{S-1} \frac{(\lambda_{SE} \pi L^2)^k}{k!},$
which implicitly provides a lower bound on $\lambda_{SE}$ as a function of $L$ and the target reliability $1 - \tilde{\varepsilon}$. To illustrate, consider $S = 3$ required SEs per RE. When the reliability target is $1 - \tilde{\varepsilon} = 0.95$, the corresponding coverage intensity $\lambda_{SE} \pi L^2$ is approximately $6.30$. Hence, the minimum server density should satisfy the following:
$\lambda_{SE} \geq \frac{6.30}{\pi L^2}.$
For L = 100 m , this yields the following:
$\lambda_{SE} \geq 2.01 \times 10^{-4}\ \text{servers/m}^2,$
which is equivalent to one SE per $70 \times 70\ \mathrm{m}^2$ area on average. This corresponds to approximately 200 SEs per square kilometer, ensuring each RE has at least three SEs in range with 95% reliability.
If the reliability requirement is increased to $1 - \tilde{\varepsilon} = 0.99$, the coverage intensity increases to about $8.45$, resulting in the following:
$\lambda_{SE} \geq \frac{8.45}{\pi L^2} \approx 2.69 \times 10^{-4}\ \text{servers/m}^2.$
For $L = 150\ \mathrm{m}$, this reduces to $\lambda_{SE} \geq 1.20 \times 10^{-4}\ \text{servers/m}^2$, which still guarantees that each RE is covered by three SEs with probability at least $0.99$.
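The coverage intensities quoted above (6.30 and 8.45) can be reproduced by bisection on the Poisson tail, as in the following Python sketch; the use of scipy.stats.poisson and the bisection bracket are implementation choices, not part of the paper's derivation.

```python
import numpy as np
from scipy.stats import poisson

def min_se_density(S, L, eps_tilde):
    """Smallest lambda_SE with P(Poisson(lambda*pi*L^2) >= S) >= 1 - eps_tilde,
    found by bisection on the coverage intensity mu = lambda * pi * L^2."""
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        # P(N <= S-1) <= eps_tilde  <=>  at least S SEs in range w.h.p.
        if poisson.cdf(S - 1, mu) <= eps_tilde:
            hi = mu
        else:
            lo = mu
    return hi / (np.pi * L**2)

print(min_se_density(S=3, L=100.0, eps_tilde=0.05))   # ~2.0e-4 servers/m^2
print(min_se_density(S=3, L=100.0, eps_tilde=0.01))   # ~2.7e-4 servers/m^2
```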
These results numerically confirm that moderate SE densities, on the order of $10^{-4}$ servers/m², are sufficient to maintain reliable multi-server coverage within typical urban cell sizes. Accordingly, the set of servers to which RE $r$ can be assigned based on physical distance is denoted by $\mathcal{S}_r$. Assuming identical task sizes $D_r = \bar{D}$ and SE capacities $C_s = \bar{C}$, the total computing requirement of the network is $R_E \bar{D} \gamma$, where $R_E$ denotes the number of REs. If all REs adopt a uniform offloading strategy $\alpha_r^t = \bar{\alpha}$, then the computing load splits into $\bar{D} \gamma \bar{\alpha} R_E$ for the CS and $\bar{D} \gamma (1 - \bar{\alpha}) R_E$ for the SEs. Assuming the total CS capacity is $\hat{C}$ and the total SE capacity is $\lambda_{SE} |A| \bar{C}$, the maximum number of REs the network can support under this strategy is $R_E^{\mathrm{Max}}(\bar{\alpha}) = \min\left\{ \frac{\hat{C}}{\bar{D} \gamma \bar{\alpha}},\ \frac{\lambda_{SE} |A| \bar{C}}{\bar{D} \gamma (1 - \bar{\alpha})} \right\}.$
While this analysis provides an upper bound, it does not incorporate the binary task assignment variables $\delta_{r,s}$, which govern the actual RE-to-SE allocation decisions. As a result, the derived expression represents an idealized scenario. The practical feasibility of this result depends on whether the local constraints at each SE can be satisfied under the discrete assignment structure. Analytically, the value of $\bar{\alpha}$ that balances the two terms is $\bar{\alpha} = \frac{\hat{C}}{\lambda_{SE} |A| \bar{C} + \hat{C}}$. Figure 3 illustrates this trade-off across values of $\bar{\alpha}$.
Figure 3. Effect of α ¯ on the maximum number of REs supported by the network.

6. Solution Method

6.1. Dataset Description

To evaluate the performance of the proposed model, we use the MNIST dataset, widely adopted in image classification and ML-based systems [,,]. The MNIST dataset consists of 70,000 grayscale images of handwritten digits (0–9), each of size 28 × 28 pixels, with 60,000 samples for training and 10,000 for testing [,]. To simulate a realistic ML-based setting, both the training and test sets are partitioned in a non-i.i.d. fashion across multiple edge nodes. Each node is assigned a unique subset of the data to reflect user-specific distributions, capturing the impact of data heterogeneity and decentralized learning on model performance.

6.2. Proposed Solution

We address the joint optimization problem in (26), which involves the offloading ratios α , task assignment variables δ , and learning model parameters θ . The combinatorial nature of the binary assignment variables makes direct optimization computationally prohibitive. To overcome this challenge, we employ a continuous relaxation framework, in which discrete decision variables are parameterized through smooth mappings and optimized via projected gradient descent (PGD).

6.2.1. Joint and Disjoint Optimization Protocols

To evaluate the effectiveness of the proposed optimization, two schemes are developed. In the joint optimization approach (the proposed method), all the parameters $(\alpha, \delta, \theta)$ are updated simultaneously using PGD, which allows end-to-end coordination between learning dynamics and resource allocation decisions. In contrast, the disjoint optimization approach, used as a baseline, adopts a sequential structure: one variable (e.g., $\alpha$) is fixed while the remaining parameters $(\delta, \theta)$ are optimized iteratively, and the process repeats by substituting the latest updates into subsequent optimization rounds until convergence. Both optimization protocols operate under identical dataset partitions, computing capacities, and latency constraints, ensuring a fair and consistent comparison of their convergence behavior and performance.

6.2.2. Assignment Relaxation

Let $d_{r,s}$ denote the distance between RE $r$ and SE $s$. The normalized distance is defined as
$\bar{d}_{r,s} \triangleq \frac{d_{r,s}}{d_{\max}}, \qquad d_{\max} = \max_{r,s} d_{r,s}.$
The normalized load of SE $s$ at time $t$ is
$\tilde{C}_s^t = \frac{\sum_r \tilde{\delta}_{r,s}^t (1 - \alpha_r^t) D_r}{C_s}, \qquad h_s^t \triangleq [1 - \tilde{C}_s^t]^+,$
where $h_s^t$ represents the fraction of available capacity at SE $s$, and $[x]^+ = \max\{0, x\}$. Based on these features, we define an affinity score between RE $r$ and SE $s$:
$q_{r,s}^t = \exp(-\lambda_d\, \bar{d}_{r,s}) \exp(-\lambda\, \tilde{C}_s^t), \qquad \lambda_d, \lambda > 0,$
which encourages assignments toward nearby and less-loaded servers. The logit matrix $b^t \triangleq \{b_{r,s}^t\}_{r,s}$ is then expressed as a linear parametric function:
$b_{r,s}^t = \beta_0 + \beta_d (1 - \bar{d}_{r,s}) + \beta_h h_s^t + \beta_q \log q_{r,s}^t.$
The relaxed assignment is obtained via a softmax mapping:
$\tilde{\delta}_{r,s}^t = \frac{\exp(b_{r,s}^t)}{\sum_{s'} \exp(b_{r,s'}^t)},$
which ensures that $\tilde{\delta}_r^t$ lies on the probability simplex.

6.2.3. Offloading Ratio Relaxation

For each RE r, we define an aggregate edge suitability score:
$G_r^t \triangleq \sum_s \tilde{\delta}_{r,s}^t h_s^t,$
which increases when nearby SEs have higher available capacity. The central-server logit, collected into the vector $a^t \triangleq \{a_r^t\}_r$, is parameterized as
$a_r^t = \tau_0 - \tau_1 G_r^t, \qquad \tau_1 > 0,$
leading to the relaxed offloading ratio
$\alpha_r^t = \sigma(a_r^t),$
where $\sigma(\cdot)$ is the sigmoid function. Thus, higher edge suitability $G_r^t$ reduces $\alpha_r^t$, prioritizing task processing at the edge.
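A minimal differentiable sketch of this relaxation chain in PyTorch is given below. The toy sizes, capacities, and the previous-iteration offloading ratios used to evaluate the SE loads are illustrative assumptions; in the full scheme, the logits $b$ are produced by the parametric affinity model above and updated by PGD.

```python
import torch

torch.manual_seed(0)
R, S = 6, 3                                   # toy numbers of REs and SEs
C_s = torch.tensor([1.0, 2.0, 1.5])           # SE capacities (illustrative units)
D = torch.rand(R) + 0.5                       # task sizes
alpha_prev = torch.full((R,), 0.5)            # previous-iteration ratios

b = torch.zeros(R, S, requires_grad=True)     # assignment logits (decision var.)
tau0, tau1 = 0.0, 2.0                         # offloading logit parameters

delta_soft = torch.softmax(b, dim=1)          # soft assignments on the simplex
load = (delta_soft * ((1 - alpha_prev) * D)[:, None]).sum(0) / C_s
h = torch.clamp(1.0 - load, min=0.0)          # h_s = [1 - C_tilde_s]^+
G = (delta_soft * h[None, :]).sum(1)          # G_r = sum_s delta_rs * h_s
alpha = torch.sigmoid(tau0 - tau1 * G)        # alpha_r = sigmoid(tau0 - tau1*G_r)

print(alpha.detach())  # more spare edge capacity near RE r -> smaller alpha_r
```

Because every step is built from softmax, clamp, and sigmoid, the whole chain stays differentiable, which is what lets the logits be trained jointly with the model parameters in the next subsection.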

6.2.4. Joint Optimization

The relaxed decision variables $(a, b)$ and the learning model parameters $\theta$ are optimized over the augmented objective:
$L_{\mathrm{total}} = L_{\mathrm{Task}} + \lambda_1 P_{\mathrm{capacity}} + \lambda_2 P_{\mathrm{delay}}.$
Here, $L_{\mathrm{Task}}$ denotes the unsupervised learning loss, composed of a reconstruction term and a distribution-alignment term to address non-i.i.d. data across servers:
$L_{\mathrm{Task}} = \mathbb{E}\left[ \| x - \hat{x}(z) \|^2 \right] + \mu\, D_{\mathrm{Div}}\big( p(z \mid \mathrm{edge}),\ p(z \mid \mathrm{CS}) \big),$
where $\hat{x}(z)$ denotes the reconstructed task from latent representation $z$, and $D_{\mathrm{Div}}$ is a statistical divergence measure. The penalty terms ensure feasibility with respect to capacity and delay constraints:
$P_{\mathrm{capacity}} = \mathbb{E}\left[ \max\{ 0,\ U(\theta) - C_{\max} \} \right],$
$P_{\mathrm{delay}} = \mathbb{E}\left[ \max\{ 0,\ T(\theta) - T_{\max} \} \right].$

6.2.5. PGD Mathematical Details

The optimization problem in (26) is of the constrained form,
$\min_{x \in \mathcal{C}} f(x), \qquad x \triangleq (\alpha, \delta, \theta),$
where $f(x)$ is the total system loss and $\mathcal{C}$ is the feasible set induced by (26b)–(26e). To solve this, PGD alternates between a gradient step
$y^{(k+1)} = x^{(k)} - \eta \nabla f(x^{(k)}),$
and a projection step
$x^{(k+1)} = \Pi_{\mathcal{C}}\big( y^{(k+1)} \big),$
where $\eta > 0$ is the learning rate and $\Pi_{\mathcal{C}}(\cdot)$ denotes the Euclidean projection onto $\mathcal{C}$:
$\Pi_{\mathcal{C}}(y) = \arg\min_{z \in \mathcal{C}} \| z - y \|^2.$
The gradient step reduces the objective in the unconstrained space, while the projection enforces feasibility with respect to capacity and delay. Under mild assumptions (e.g., Lipschitz continuity of $\nabla f$), PGD converges to a first-order stationary point. At each iteration $k$, the relaxed logits $(a, b)$ and the model parameters $\theta$ are updated via
$a^{(k+1)} = a^{(k)} - \eta_a \nabla_a L_{\mathrm{total}}^{(k)},$
$b^{(k+1)} = b^{(k)} - \eta_b \nabla_b L_{\mathrm{total}}^{(k)},$
$\theta^{(k+1)} = \theta^{(k)} - \eta_\theta \nabla_\theta L_{\mathrm{total}}^{(k)}.$
The sigmoid and softmax mappings ensure that $\alpha$ and $\delta$ remain valid throughout the updates. Finally, discrete assignments are obtained as
$\delta_{r,s}^t = \mathbb{1}\left\{ s = \arg\max_{s'} \tilde{\delta}_{r,s'}^t \right\}.$
The overall procedure is summarized in Algorithm 1.
Algorithm 1 Joint PGD-based offloading and assignment optimization
1: Hyper-parameters: Server capacities $\{C_s\}$, CS capacity $\hat{C}$, maximum delay $T_{\max}$, learning rates $\eta_a, \eta_b, \eta_\theta$, penalty weights $\lambda$, initial logits $a^0, b^0$, initial model parameters $\theta^0$
2: for each time slot $t$ do
3:   for iteration $k$ do
4:     Compute offloading ratios: $\alpha_r^{t,k} = \sigma(a_r^{(k)}), \forall r$
5:     Compute soft assignments: $\tilde{\delta}_{r,s}^{(k)} = \frac{\exp(b_{r,s}^{(k)})}{\sum_{s'} \exp(b_{r,s'}^{(k)})}, \forall r, s$
6:     Evaluate loss: $L^{(k)} = L(\alpha^{(k)}, \tilde{\delta}^{(k)}, \theta^{(k)})$
7:     Compute gradients: $g_a = \nabla_a L^{(k)}$, $g_b = \nabla_b L^{(k)}$, $g_\theta = \nabla_\theta L^{(k)}$
8:     Update parameters: $a^{(k+1)} = a^{(k)} - \eta_a g_a$; $b^{(k+1)} = b^{(k)} - \eta_b g_b$; $\theta^{(k+1)} = \theta^{(k)} - \eta_\theta g_\theta$
9:   end for
10: end for
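A compact PyTorch rendering of Algorithm 1 is sketched below, with a placeholder quadratic objective standing in for the actual loss $L(\alpha, \tilde{\delta}, \theta)$; the variable shapes, toy model parameters, and final hard rounding are illustrative assumptions. Because the sigmoid and softmax mappings keep $\alpha$ and $\tilde{\delta}$ feasible by construction, no explicit projection step is needed here.

```python
import torch

R, S = 6, 3
a = torch.zeros(R, requires_grad=True)        # offloading logits
b = torch.zeros(R, S, requires_grad=True)     # assignment logits
theta = torch.randn(4, requires_grad=True)    # model parameters (placeholder)
eta_a, eta_b, eta_th = 5e-4, 5e-4, 1e-3       # learning rates from Section 6.2.7

for k in range(200):
    alpha = torch.sigmoid(a)                  # relaxed alpha in (0, 1)
    delta = torch.softmax(b, dim=1)           # relaxed delta on the simplex
    # Placeholder objective; the real L combines task loss and penalties
    loss = ((alpha - 0.3) ** 2).sum() + ((delta - 1.0 / S) ** 2).sum() \
           + (theta ** 2).sum()
    loss.backward()
    with torch.no_grad():                     # plain gradient steps on logits
        a -= eta_a * a.grad
        b -= eta_b * b.grad
        theta -= eta_th * theta.grad
    a.grad = b.grad = theta.grad = None       # reset accumulated gradients

# Final hard rounding of the soft assignments (last line of the PGD method)
delta_hard = torch.nn.functional.one_hot(delta.argmax(dim=1), S)
```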

6.2.6. Computational Complexity and Scalability Discussion

The computational complexity of the proposed PGD-based joint optimization mainly arises from gradient evaluations and projection operations. Each iteration involves $O(RS + P_\theta)$ operations, where $R$ and $S$ denote the numbers of REs and SEs, respectively, and $P_\theta$ is the number of learnable model parameters. Since the projection step is performed in closed form for $\alpha$ and $\delta$, the overall per-iteration complexity scales linearly with network size. This makes the framework scalable to large MEC deployments. In terms of energy efficiency, cooperative task processing reduces redundant transmissions and central computations, leading to an estimated 35% decrease in total energy consumption compared to the fully centralized baseline, as verified in our simulations. Hence, the PGD formulation achieves a balanced trade-off between convergence speed, scalability, and energy efficiency for real-time Edge AI.

6.2.7. Implementation

The entire framework is implemented in PyTorch. Both the unsupervised reconstruction/alignment objective and the penalty terms are differentiable tensors. Using autograd, gradients are propagated through all components, enabling end-to-end training of $(\alpha, \delta, \theta)$ via stochastic PGD updates. The learning component is implemented as a lightweight convolutional neural network (CNN) to ensure compatibility with edge devices. The model consists of two convolutional layers with ReLU activations, followed by two fully connected layers. All the trainable parameters $(\theta, a, b)$ are optimized jointly using the Adam optimizer with learning rates $\eta_\theta = 10^{-3}$, $\eta_a = 5 \times 10^{-4}$, and $\eta_b = 5 \times 10^{-4}$. The overall optimization minimizes the total loss $L_{\mathrm{total}}$, which combines the unsupervised reconstruction and distribution-alignment terms with capacity and delay penalties as defined in (26). This design enables a stable, end-to-end training process while maintaining low computational overhead suitable for resource-constrained edge environments.
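The exact architecture is not specified beyond the layer counts, so the following PyTorch sketch is one plausible instantiation: two convolutional layers with ReLU followed by two fully connected layers in the encoder, with a mirrored decoder for the reconstruction term. The channel widths and latent dimension are assumptions.

```python
import torch
import torch.nn as nn

class LightweightAutoencoder(nn.Module):
    """One plausible instantiation of the described lightweight CNN:
    two conv+ReLU layers, then two FC layers, plus a mirrored decoder."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(          # mirror for reconstruction
            nn.Linear(latent_dim, 32 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = LightweightAutoencoder()
x = torch.rand(8, 1, 28, 28)                 # a batch of MNIST-sized inputs
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)      # reconstruction term of L_task
```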

6.2.8. Convergence and Penalty Analysis

The convergence of the proposed PGD scheme can be characterized using standard results from constrained optimization theory. Let the total loss function $f(x) = L_{\mathrm{total}}$ be continuously differentiable with an $L$-Lipschitz gradient, i.e.,
$\| \nabla f(x_1) - \nabla f(x_2) \|_2 \leq L \| x_1 - x_2 \|_2, \quad \forall x_1, x_2 \in \mathcal{C}.$
Then, for a fixed learning rate $0 < \eta < \frac{2}{L}$, the PGD iteration
$x^{(k+1)} = \Pi_{\mathcal{C}}\big( x^{(k)} - \eta \nabla f(x^{(k)}) \big),$
is guaranteed to converge to a first-order stationary point $x^\star$ satisfying $\langle \nabla f(x^\star),\ z - x^\star \rangle \geq 0, \forall z \in \mathcal{C}$. In our setting, the feasible set $\mathcal{C}$ arises from the capacity and delay constraints, while the sigmoid and softmax relaxations ensure that $(\alpha, \delta)$ remain differentiable and bounded during optimization.
Furthermore, the penalty terms in (26) provide a smooth relaxation of the original hard constraints:
$L_{\mathrm{total}} = L_{\mathrm{Task}} + \lambda_1 P_{\mathrm{capacity}} + \lambda_2 P_{\mathrm{delay}},$
where λ 1 and λ 2 act as trade-off coefficients. By jointly incorporating both penalty components, the optimizer effectively balances constraint satisfaction and model performance within a unified objective surface, leading to improved numerical stability and faster convergence compared to treating each constraint separately.

7. Performance Evaluation

In this part, we evaluate the performance of the proposed MEC-based Edge AI framework under different system configurations, focusing on the interplay between task offloading, computation capacity, and data heterogeneity. The source code and data are available in [].

7.1. Task Offloading and Delay

In Figure 4, we show the loss values for the CS and the SEs. As the steps increase, the loss for both decreases rapidly. Since the CS has more data samples and higher computing capacity, its loss falls more quickly.
Figure 4. The loss values $L_{CS}$ and $L_s$ for the CS and SE servers.
In Figure 5, we observe that increasing the number of REs increases the data volume and therefore decreases the loss. This contributes to better accuracy, as can be seen in Figure 6. Note that since our framework follows an unsupervised learning paradigm, the term "accuracy" in all plots refers to clustering accuracy (CA), computed as the optimal label-alignment accuracy between the predicted clusters and ground-truth classes [].
Figure 5. Loss function for different numbers of REs.
Figure 6. Accuracy of model vs. the number of REs.
In Figure 6, we illustrate the effect of the number of REs on overall accuracy; as can be seen, more REs result in improved accuracy. On the other hand, more REs mean larger delay. As such, a threshold on the delay imposes an upper bound on the number of REs and, simultaneously, on the accuracy. For example, 30 REs can provide 99% accuracy. On the other hand, this number of REs causes a delay as large as 0.05 s, as can be seen in Figure 7, which is within the value allowed by 3GPP [,]. In addition, we set $\hat{C} = 4000$ [CPU cycles/s] for the CS, $C_s = 200$ [CPU cycles/s] $\forall s$, and $D_r = 1.5$ [Mbytes] $\forall r$.
Figure 7. Effect of number of REs on the total delay in second for the joint scenario (proposed) and disjoint (baseline).

7.2. Comparison with the Baseline Scenario

Now we would like to compare our proposed scheme with a baseline scenario in which the task offloading is performed in a disjoint manner, i.e., each RE is restricted to either the CS or a single SE without coordination. Such a rigid allocation increases the reliance on the CS and leads to inefficient utilization of edge resources. In contrast, our proposed joint framework allows flexible task distribution across CS and SEs, which not only balances the load but also maximizes edge-side computing.
As can be seen in Figure 7, the proposed method incurs less delay than the baseline for different numbers of REs. For example, if the target delay threshold is set to 0.045 s, the proposed scheme can accommodate up to 30 REs, while the baseline can only afford about half as many.
To further demonstrate the practical advantage of the proposed cooperative optimization, we compare three deployment modes: SE-only computing, CS-only offloading, and the proposed joint scheme with cooperative computing. The results in Figure 8 and Figure 9 show that the cooperative configuration achieves an excellent trade-off: while maintaining an average delay close to the edge-only configuration (0.042 s vs. 0.033 s), it improves the average clustering accuracy to 99.2%, far above the edge-only baseline and close to the CS-only accuracy. These results confirm that the adaptive coordination between SEs and the CS enhances system efficiency, reduces task congestion, and provides stable model performance under heterogeneous network conditions.
Figure 8. Average total delay for the three modes: only SEs, only CS, and the proposed cooperative scheme. The cooperative mode achieves a balanced delay of 0.042 s, lower than the only CS case (0.060 s) and close to only SEs (0.033 s), in the case of 30 REs.
Figure 9. Accuracy comparison for only SEs, only CS, and cooperative modes. The proposed method achieves 99.2% accuracy, outperforming only SEs (85%) while remaining within 0.5% of only CS (99.7%), in the case of 30 REs.
In Figure 10, we compare the proposed method and the baseline method in terms of the fraction of tasks offloaded to the CS. As can be seen, for different numbers of REs, the load on the CS is cut in half when using the proposed framework, which is important from a practical point of view.
Figure 10. Effect of number of REs on the mean of α for the joint scenario (proposed) and disjoint (baseline).

7.3. Comparing Non-i.i.d.-Blind and Non-i.i.d.-Aware Scenarios

In this subsection, we first evaluate the performance of the proposed loss model in mitigating the effect of quantity skew, i.e., non-i.i.d.-aware, as formulated in (18) and (19), against the non-i.i.d.-blind scenario that does not account for it. Quantity skew arises when clients (REs) have highly imbalanced numbers of local samples. Unlike the balanced case, where each client contributes equally, the aggregation here is biased toward clients with larger datasets.
In Figure 11, we evaluate the effect of ϱ on accuracy for both the non-i.i.d.-blind and non-i.i.d.-aware cases. As can be seen, for all the values of ϱ , the proposed scheme improves the accuracy over the baseline scenario.
Figure 11. Effect of quantity skew coefficient ϱ on the total accuracy of model for non-i.i.d.-blind (baseline) and -aware models (proposed), for 30 REs.
Moreover, for very low values of $\varrho$, corresponding to extremely small sample sizes at the clients, accuracy is lower due to insufficient data. As $\varrho$ increases, the number of available samples grows, which enhances performance and leads to higher accuracy up to an optimal point. Beyond this point, however, the variance in the distribution of data across clients becomes significant, introducing instability and larger errors, which ultimately causes accuracy to decrease. Overall, this demonstrates that the proposed scheme effectively mitigates the negative effects of quantity skew and highlights a non-monotonic relationship between $\varrho$ and accuracy.
Finally, we investigate the impact of the correlation threshold κ on overall system accuracy, considering both the coverage overlap and spatial distribution of REs. Our analysis demonstrates that increasing the maximum allowable correlation between mutually visible REs considerably affects the system’s clustering accuracy, as shown in Figure 12. Specifically, when the correlation threshold κ is raised, the system incorporates more highly correlated data from overlapping FoVs, which amplifies the non-i.i.d nature of the collected datasets. This increased correlation leads to model overfitting and reduced generalization capability, ultimately degrading clustering performance.
Figure 12. Effect of correlation on the system accuracy for non-i.i.d.-blind and -aware models.
As far as the non-i.i.d.-blind and -aware scenarios are concerned, we can see that the accuracy of the non-i.i.d.-aware case is considerably improved compared to the non-i.i.d.-blind scenario, which does not enforce the FoV constraint derived in (22). The difference, in some cases, is above 10%, which is very significant in clustering and demonstrates the critical importance of properly regulating the angular separation between REs through the mathematical formulation of the correlation threshold.
Comparing the last two figures, we can see that, in contrast to Figure 12, the improvement in accuracy in Figure 11 is not significant. However, it is important to note that for many applications, even a slight improvement in accuracy is critical. For example, in the context of smart manufacturing, this has the potential to reduce mis-clustering in defect detection. Similarly, in healthcare, marginal gains have been shown to enhance diagnostic reliability [,].

8. Conclusions

This work presented a cooperative framework for partial task offloading in Edge-AI systems, leveraging stochastic geometry to capture spatial randomness and to guide correlation-aware resource allocation. A key methodological contribution was the derivation of a closed-form upper bound on spatial correlation, which enabled constraints ensuring that only relevant and timely contributions are included in the global model. We further formulated and solved a joint optimization problem over task assignments, offloading ratios, and learning parameters, and proposed a practical PGD method. Through feasibility analysis and simulations, we demonstrated that the framework effectively addresses the challenges of latency guarantees, accuracy, and scalability in heterogeneous MEC environments. In particular, the results confirm that explicitly handling non-i.i.d. data distributions and spatial correlations leads to superior performance compared to baseline models where learning and resource allocation are decoupled.

Author Contributions

Conceptualization, A.N.; Methodology, A.N.; Software, A.N.; Validation, H.S.; Formal analysis, H.S.; Investigation, A.N.; Resources, H.S.; Writing—original draft, A.N.; Writing—review & editing, H.S.; Visualization, H.S. and A.N.; Supervision, H.S.; Project administration, H.S.; Funding acquisition, H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Qatar Research Development and Innovation Council (QRDI) under grant ARG01-0511-230129.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The source code and data of this paper can be found in [].

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

We assume that the class distribution for each RE r at time t satisfies the normalization condition:
$\sum_k \hat{P}_{r,k}(t) = \sum_k P_{r,k}(t) = 1, \quad \forall t, r.$
Assume that at time slot $t-1$, only $K' < K$ classes have been observed (i.e., $P_{r,k}(t-1) > 0$ for those classes), while the remaining $K - K'$ classes are unseen. To model the evolution of the class distribution over time while maintaining the normalization constraint, we define the estimated distribution at time $t$ such that
$\sum_k \hat{P}_{r,k}(t) = e^{-\kappa_{time}} \underbrace{\sum_{k=1}^{K'} P_{r,k}(t-1)}_{=1} + (K - K')\, \epsilon = 1, \quad \forall t, r,$
where $\kappa_{time}$ is a decay parameter controlling the memory of previous distributions, and $\epsilon$ is obtained as $\epsilon = \frac{1 - e^{-\kappa_{time}}}{K - K'}$.
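A quick numerical check of this normalization, under illustrative values of $\kappa_{time}$, $K$, and $K'$, is given below; the Dirichlet draw for the seen classes is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
kappa_time, K, K_prime = 0.3, 10, 4                      # illustrative values
p_prev = np.zeros(K)
p_prev[:K_prime] = rng.dirichlet(np.ones(K_prime))       # K' seen classes

eps = (1 - np.exp(-kappa_time)) / (K - K_prime)
p_hat = np.where(p_prev > 0, p_prev * np.exp(-kappa_time), eps)
assert np.isclose(p_hat.sum(), 1.0)                      # normalization holds
```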

Appendix B

To justify the validity and dimensional consistency of (22), this appendix provides the derivation of the spatial–directional correlation function used in the paper. We consider a homogeneous field of REs distributed according to a Poisson process within a circular region of radius $L$. Each RE observes its environment within a FoV characterized by an angle $\phi_r$ and a total width $\tilde{\varphi}$. Let two REs, located at $l_r$ and $l_{r'}$, have an angular separation $\Delta\phi = |\phi_r - \phi_{r'}|$ and Euclidean distance $d_{r,r'} = \| l_r - l_{r'} \|$.
Following isotropic Gaussian field models in spatial statistics [,] and stochastic geometry [], the pairwise correlation between these REs can be expressed as
$C(d_{r,r'}, \Delta\phi) = \exp\left( -\frac{d_{r,r'}^2}{2 s^2} \right) \exp\left( -\frac{1 - \cos\Delta\phi}{2 \sigma_\phi^2} \right),$
where $s$ denotes the spatial correlation length and $\sigma_\phi$ the angular correlation parameter, both ensuring dimensional consistency.
For uniformly distributed REs, the average correlation over spatial and angular randomness can be written as
$\bar{C} = \frac{1}{L^2 \tilde{\varphi}} \int_0^{\tilde{\varphi}} \int_0^L \int_0^L C(d_{r,r'}, \Delta\phi)\ \mathrm{d}l_1\, \mathrm{d}l_2\, \mathrm{d}\Delta\phi.$
While the above integral does not admit a closed-form solution, a tractable and physically meaningful approximation can be obtained by replacing $d_{r,r'}^2$ with its expected value under the uniform distribution, $\mathbb{E}[d_{r,r'}^2] = L^2 / 3$. This substitution gives
$C(\Delta\phi) \approx \kappa_s \cdot \exp\left( -\frac{1 - \cos\Delta\phi}{2 \sigma_\phi^2} \right), \qquad \kappa_s = \exp\left( -\frac{L^2}{6 s^2} \right),$
which depends only on the normalized spatial scale L / s and the angular separation Δ ϕ . This simplification yields a closed-form, dimensionless expression that is well suited for system-level design.
To ensure that the correlation between any two REs remains below a desired threshold $\kappa \in (0, 1]$, we impose $C(\Delta\phi) \leq \kappa$. Using (A5) and solving for $\Delta\phi$ yields
$\Delta\phi^\star = \arccos\left( 1 - 2 \sigma_\phi^2 \ln\frac{\kappa_s}{\kappa} \right).$
Therefore, to suppress excessive task similarity across users, the FoV width should satisfy
$\tilde{\varphi} \leq \Delta\phi^\star = \arccos\left( 1 - 2 \sigma_\phi^2 \ln\frac{\exp(-L^2 / 6 s^2)}{\kappa} \right).$
This condition provides a physically interpretable trade-off between sensing range, allowable correlation, and FoV overlap. The approximation has been numerically validated and shown to remain within a small deviation of the full triple integral (A4), confirming its suitability for system-level design and analysis.
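The numerical validation mentioned above can be reproduced along the following lines, assuming (as in the plug-in step) that the pairwise distance is uniform on $[0, L]$ so that $\mathbb{E}[d^2] = L^2/3$; the specific parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
L, s, sigma_phi = 100.0, 80.0, 0.5
dphi = np.deg2rad(30.0)
ang = np.exp(-(1 - np.cos(dphi)) / (2 * sigma_phi**2))   # angular factor

d = rng.uniform(0, L, 100_000)                            # pairwise distances
c_mc = np.mean(np.exp(-d**2 / (2 * s**2))) * ang          # averaged correlation
c_approx = np.exp(-L**2 / (6 * s**2)) * ang               # E[d^2] = L^2/3 plug-in
print(f"MC: {c_mc:.4f}  approx: {c_approx:.4f}")          # within a few percent
```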

References

  1. Ficili, I.; Giacobbe, M.; Tricomi, G.; Puliafito, A. From sensors to data intelligence: Leveraging IoT, cloud, and edge computing with AI. Sensors 2025, 25, 1763.
  2. Bourechak, A.; Zedadra, O.; Kouahla, M.N.; Guerrieri, A.; Seridi, H.; Fortino, G. At the confluence of artificial intelligence and edge computing in IoT-based applications: A review and new perspectives. Sensors 2023, 23, 1639.
  3. IEEE Future Networks. Edge Platforms and Services Evolving into 2030. 2021. Available online: https://futurenetworks.ieee.org/podcasts/edge-platforms-and-services-evolving-into-2030 (accessed on 23 July 2025).
  4. Grand View Research. Edge AI Market Size, Share & Growth | Industry Report, 2030. 2024. Available online: https://www.grandviewresearch.com/industry-analysis/edge-ai-market-report (accessed on 23 July 2025).
  5. MarketsandMarkets. Edge AI Hardware Industry Worth $58.90 Billion by 2030. 2025. Available online: https://www.marketsandmarkets.com/PressReleases/edge-ai-hardware.asp (accessed on 23 July 2025).
  6. Yang, J.; Chen, Y.; Lin, Z.; Tian, D.; Chen, P. Distributed Computation Offloading in Autonomous Driving Vehicular Networks: A Stochastic Geometry Approach. IEEE Trans. Intell. Veh. 2024, 9, 2701–2713.
  7. Gu, Y.; Yao, Y.; Li, C.; Xia, B.; Xu, D.; Zhang, C. Modeling and Analysis of Stochastic Mobile-Edge Computing Wireless Networks. IEEE Internet Things J. 2021, 8, 14051–14065.
  8. Tran, D.A.; Do, T.T.; Zhang, T. A stochastic geo-partitioning problem for mobile edge computing. IEEE Trans. Emerg. Top. Comput. 2020, 9, 2189–2200.
  9. Hmamouche, Y.; Benjillali, M.; Saoudi, S.; Yanikomeroglu, H.; Renzo, M.D. New Trends in Stochastic Geometry for Wireless Networks: A Tutorial and Survey. Proc. IEEE 2021, 109, 1200–1252.
  10. Zhang, Y.; Chen, G.; Du, H.; Yuan, X.; Kadoch, M.; Cheriet, M. Real-time remote health monitoring system driven by 5G MEC-IoT. Electronics 2020, 9, 1753.
  11. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated learning: Challenges, methods, and future directions. IEEE Signal Process. Mag. 2020, 37, 50–60.
  12. Solans, D.; Heikkila, M.; Vitaletti, A.; Kourtellis, N.; Anagnostopoulos, A.; Chatzigiannakis, I. Non-i.i.d data in federated learning: A survey with taxonomy, metrics, methods, frameworks and future directions. arXiv 2024, arXiv:2411.12377.
  13. Lu, Z.; Pan, H.; Dai, Y.; Si, X.; Zhang, Y. Federated learning with non-iid data: A survey. IEEE Internet Things J. 2024, 11, 19188–19209.
  14. Jimenez-Gutierrez, D.M.; Hassanzadeh, M.; Anagnostopoulos, A.; Chatzigiannakis, I.; Vitaletti, A. A thorough assessment of the non-iid data impact in federated learning. arXiv 2025, arXiv:2503.17070.
  15. Su, W.; Li, L.; Liu, F.; He, M.; Liang, X. AI on the edge: A comprehensive review. Artif. Intell. Rev. 2022, 55, 6125–6183.
  16. Shi, Y.; Yang, K.; Jiang, T.; Zhang, J.; Letaief, K.B. Communication-efficient edge AI: Algorithms and systems. IEEE Commun. Surv. Tutor. 2020, 22, 2167–2191.
  17. Letaief, K.B.; Shi, Y.; Lu, J.; Lu, J. Edge artificial intelligence for 6G: Vision, enabling technologies, and applications. IEEE J. Sel. Areas Commun. 2021, 40, 5–36.
  18. Lin, F.P.C.; Hosseinalipour, S.; Michelusi, N.; Brinton, C.G. Delay-aware hierarchical federated learning. IEEE Trans. Cogn. Commun. Netw. 2023, 10, 674–688.
  19. Wang, S.; Tuor, T.; Salonidis, T.; Leung, K.K.; Makaya, C.; He, T.; Chan, K. Adaptive federated learning in resource constrained edge computing systems. IEEE J. Sel. Areas Commun. 2019, 37, 1205–1221.
  20. Xiao, H.; Xu, C.; Ma, Y.; Yang, S.; Zhong, L.; Muntean, G.M. Edge intelligence: A computational task offloading scheme for dependent IoT application. IEEE Trans. Wirel. Commun. 2022, 21, 7222–7237.
  21. Qiao, D.; Guo, S.; Zhao, J.; Le, J.; Zhou, P.; Li, M.; Chen, X. ASMAFL: Adaptive staleness-aware momentum asynchronous federated learning in edge computing. IEEE Trans. Mob. Comput. 2024, 24, 3390–3406.
  22. Fan, W.; Chen, Z.; Hao, Z.; Wu, F.; Liu, Y. Joint Task Offloading and Resource Allocation for Quality-Aware Edge-Assisted Machine Learning Task Inference. IEEE Trans. Veh. Technol. 2023, 72, 6739–6752.
  23. Fan, W.; Li, S.; Liu, J.; Su, Y.; Wu, F.; Liu, Y. Joint Task Offloading and Resource Allocation for Accuracy-Aware Machine-Learning-Based IIoT Applications. IEEE Internet Things J. 2023, 10, 3305–3321.
  24. Khalili, A.; Zarandi, S.; Rasti, M. Joint Resource Allocation and Offloading Decision in Mobile Edge Computing. IEEE Commun. Lett. 2019, 23, 684–687.
  25. Kuang, Z.; Li, L.; Gao, J.; Zhao, L.; Liu, A. Partial Offloading Scheduling and Power Allocation for Mobile Edge Computing Systems. IEEE Internet Things J. 2019, 6, 6774–6785.
  26. Zhang, S.; Gu, H.; Chi, K.; Huang, L.; Yu, K.; Mumtaz, S. DRL-Based Partial Offloading for Maximizing Sum Computation Rate of Wireless Powered Mobile Edge Computing Network. IEEE Trans. Wirel. Commun. 2022, 21, 10934–10948.
  27. Malik, U.M.; Javed, M.A.; Frnda, J.; Rozhon, J.; Khan, W.U. Efficient matching-based parallel task offloading in IoT networks. Sensors 2022, 22, 6906.
  28. Bolat, Y.; Murray, I.; Ren, Y.; Ferdosian, N. Decentralized Distributed Sequential Neural Networks Inference on Low-Power Microcontrollers in Wireless Sensor Networks: A Predictive Maintenance Case Study. Sensors 2025, 25, 4595.
  29. Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; Chandra, V. Federated learning with non-i.i.d data. arXiv 2018, arXiv:1806.00582.
  30. Lai, P.; He, Q.; Xia, X.; Chen, F.; Abdelrazek, M.; Grundy, J.; Hosking, J.; Yang, Y. Dynamic User Allocation in Stochastic Mobile Edge Computing Systems. IEEE Trans. Serv. Comput. 2022, 15, 2699–2712.
  31. Lyu, Z.; Xiao, M.; Xu, J.; Skoglund, M.; Di Renzo, M. The larger the merrier? Efficient large AI model inference in wireless edge networks. arXiv 2025, arXiv:2505.09214.
  32. Wu, Y.; Zheng, J. Modeling and Analysis of the Uplink Local Delay in MEC-Based VANETs. IEEE Trans. Veh. Technol. 2020, 69, 3538–3549.
  33. Wu, Y.; Zheng, J. Modeling and Analysis of the Local Delay in an MEC-Based VANET for a Suburban Area. IEEE Internet Things J. 2022, 9, 7065–7079.
  34. Cheng, Q.; Cai, G.; He, J.; Kaddoum, G. Design and Performance Analysis of MEC-Aided LoRa Networks with Power Control. IEEE Trans. Veh. Technol. 2025, 74, 1597–1609.
  35. Dhillon, H.S.; Ganti, R.K.; Baccelli, F.; Andrews, J.G. Modeling and Analysis of K-Tier Downlink Heterogeneous Cellular Networks. IEEE J. Sel. Areas Commun. 2012, 30, 550–560.
  36. Andrews, J.G.; Baccelli, F.; Ganti, R.K. A Tractable Approach to Coverage and Rate in Cellular Networks. IEEE Trans. Commun. 2011, 59, 3122–3134.
  37. Savi, M.; Tornatore, M.; Verticale, G. Impact of processing-resource sharing on the placement of chained virtual network functions. IEEE Trans. Cloud Comput. 2019, 9, 1479–1492.
  38. Mu, Y.; Garg, N.; Ratnarajah, T. Federated learning in massive MIMO 6G networks: Convergence analysis and communication-efficient design. IEEE Trans. Netw. Sci. Eng. 2022, 9, 4220–4234.
  39. Wang, H.; Kaplan, Z.; Niu, D.; Li, B. Optimizing federated learning on non-i.i.d data with reinforcement learning. In Proceedings of the IEEE INFOCOM 2020-IEEE Conference on Computer Communications, Toronto, ON, Canada, 6–9 July 2020; pp. 1698–1707.
  40. He, C.; Annavaram, M.; Avestimehr, S. Group knowledge transfer: Federated learning of large CNNs at the edge. Adv. Neural Inf. Process. Syst. 2020, 33, 14068–14080.
  41. Deng, L. The MNIST database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Process. Mag. 2012, 29, 141–142.
  42. MNIST Dataset on Kaggle. Available online: https://www.kaggle.com/datasets/hojjatk/mnist-dataset (accessed on 23 July 2025).
  43. Saeedi, H.; Nouruzi, A. Stochastic-Geometric-based Modeling for Partial Offloading Task Computing in Edge AI Systems. GitHub Repository. Available online: https://github.com/alinouruzi/Stochastic-Geometric-based-Modeling-for-Partial-Offloading-Task-Computing-in-Edge-AI-Systems (accessed on 21 September 2025).
  44. Xie, J.; Girshick, R.; Farhadi, A. Unsupervised deep embedding for clustering analysis. In Proceedings of the International Conference on Machine Learning, PMLR, New York, NY, USA, 16–24 June 2016; pp. 478–487.
  45. 3GPP. 5G; Service Requirements for the 5G System (3GPP TS 22.261 Version 16.14.0 Release 16). Technical Specification ETSI TS 122 261 V16.14.0, ETSI. 2021. Available online: https://www.etsi.org/deliver/etsi_ts/122200_122299/122261/16.14.00_60/ts_122261v161400p.pdf (accessed on 29 July 2025).
  46. 5G Hub. 5G Ultra Reliable Low Latency Communication (URLLC). 2023. Available online: https://5ghub.us/5g-ultra-reliable-low-latency-communication-urllc/ (accessed on 29 July 2025).
  47. Wang, Q.; Haga, Y. Research on Structure Optimization and Accuracy Improvement of Key Components of Medical Device Robot. In Proceedings of the 2024 International Conference on Telecommunications and Power Electronics (TELEPE), Frankfurt, Germany, 29–31 May 2024; pp. 809–813.
  48. Zrubka, Z.; Holgyesi, A.; Neshat, M.; Nezhad, H.M.; Mirjalili, S.; Kovács, L.; Péntek, M.; Gulácsi, L. Towards a single goodness metric of clinically relevant, accurate, fair and unbiased machine learning predictions of health-related quality of life. In Proceedings of the 2023 IEEE 27th International Conference on Intelligent Engineering Systems (INES), Nairobi, Kenya, 26–28 July 2023; pp. 000285–000290.
  49. Cressie, N. Statistics for Spatial Data; John Wiley & Sons: Hoboken, NJ, USA, 2015.
  50. Dai, R.; Akyildiz, I.F. A spatial correlation model for visual information in wireless multimedia sensor networks. IEEE Trans. Multimed. 2009, 11, 1148–1159.
  51. Haenggi, M. Stochastic Geometry for Wireless Networks; Cambridge University Press: Cambridge, UK, 2013.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
