Article

A Fractional Calculus-Enhanced Multi-Objective AVOA for Dynamic Edge-Server Allocation in Mobile Edge Computing

by Aadel Mohammed Alatwi 1,2, Bakht Muhammad Khan 1,*, Abdul Wadood 1,2,*, Shahbaz Khan 1, Hazem M. El-Hageen 2 and Mohamed A. Mead 3
1 Renewable Energy and Environmental Technology Center, University of Tabuk, Tabuk 47913, Saudi Arabia
2 Electrical Engineering Department, Faculty of Engineering, University of Tabuk, Tabuk 47913, Saudi Arabia
3 Department of Computer Science, Faculty of Computers & Informatics, Suez Canal University, Ismailia 41522, Egypt
* Authors to whom correspondence should be addressed.
Fractal Fract. 2026, 10(1), 28; https://doi.org/10.3390/fractalfract10010028
Submission received: 9 December 2025 / Revised: 30 December 2025 / Accepted: 31 December 2025 / Published: 4 January 2026
(This article belongs to the Special Issue Fractional Dynamics and Control in Multi-Agent Systems and Networks)

Abstract

Dynamic edge-server allocation in mobile edge computing (MEC) networks is a challenging multi-objective optimization problem due to highly dynamic user demands, spatiotemporal traffic variations, and the need to simultaneously minimize service latency and workload imbalance. Existing heuristic and metaheuristic-based approaches for this problem often suffer from premature convergence, limited exploration–exploitation balance, and inadequate adaptability to dynamic network conditions, leading to suboptimal edge-server placement and inefficient resource utilization. Moreover, most existing methods lack memory-aware search mechanisms, which restrict their ability to capture long-term system dynamics. To address these limitations, this paper proposes a Fractional-Order Multi-Objective African Vulture Optimization Algorithm (FO-MO-AVOA) for dynamic edge-server allocation. By integrating fractional-order calculus into the standard multi-objective AVOA framework, the proposed method introduces long-memory effects that enhance convergence stability, search diversity, and adaptability to time-varying workloads. The performance of FO-MO-AVOA is evaluated using realistic MEC network scenarios and benchmarked against several well-established metaheuristic algorithms. Simulation outcomes reveal that FO-MO-AVOA achieves 40–46% lower latency, 38–45% reduction in workload imbalance, and up to 28–35% reduction in maximum workload compared to competing methods. Extensive experiments conducted on real-world telecom network data demonstrate that FO-MO-AVOA consistently outperforms state-of-the-art multi-objective optimization algorithms in terms of convergence behaviour, Pareto-front quality, and overall system performance.

1. Introduction

With the rapid advancement of fifth-generation (5G) and forthcoming sixth-generation (6G) networks, MEC has emerged as a transformative computing paradigm that extends cloud capabilities to the edge of the network [1]. By migrating computational tasks from remote cloud servers to geographically distributed edge nodes, MEC significantly reduces end-to-end latency, enhances Quality of Experience (QoE), and alleviates backhaul congestion [2,3]. In this context, edge servers play a pivotal role as localized data-processing units that host applications, execute computations, and deliver real-time services close to users [4]. However, determining the optimal number and spatial deployment of edge servers remains a challenging issue due to the inherent trade-offs among deployment cost, energy efficiency, delay constraints, and service reliability [5]. Over-provisioning edge servers may lead to excessive energy consumption and operational cost, while under-provisioning degrades user experience and system performance. Furthermore, traditional static placement strategies—where server locations remain fixed once deployed—fail to accommodate time-varying user mobility, fluctuating workload demands, and dynamic network conditions [6,7]. Recent studies have therefore shifted toward dynamic edge-server allocation, allowing servers to be reconfigured or relocated in real time to maintain optimal system performance. For example, Guo et al. [8] introduced a deep reinforcement learning–based adaptive placement model for dynamic MEC environments, while Jiang et al. [9] developed a Markov decision process (MDP) formulation for intelligent reallocation of edge resources. Despite these advances, the high-dimensional and nonlinear nature of the problem classifies it as a Non-deterministic Polynomial-time hard (NP-hard) optimization task, demanding efficient metaheuristic approaches capable of balancing global exploration with local exploitation.
Among the wide range of metaheuristic optimization techniques proposed in recent years, population-based algorithms inspired by collective intelligence have shown strong potential for solving complex NP-hard problems. However, many existing optimizers rely on fixed parameter settings or problem-specific tuning, which can limit their adaptability in highly dynamic MEC environments. The African Vulture Optimization Algorithm (AVOA) offers a structurally interpretable framework based on starvation-driven leadership dynamics and cooperative foraging behaviour, enabling a natural balance between exploration and exploitation. Unlike heavily hybridized or parameter-intensive methods, AVOA provides explicit control mechanisms that can be systematically enhanced, making it a suitable foundation for further theoretical and algorithmic improvements in dynamic edge-server allocation.
To address these challenges, this paper proposes an FO-MO-AVOA for efficient and intelligent dynamic edge-server allocation. The proposed algorithm is inspired by the cooperative foraging, navigational intelligence, and hierarchical leadership behaviours of African vultures. It integrates a nonlinear hunger-rate control mechanism and an adaptive Lévy-flight exploitation strategy to dynamically balance global search and local refinement. In addition, a multi-objective optimization framework is formulated to simultaneously minimize service latency and deployment cost while satisfying user delay requirements and system constraints. Fractional calculus is incorporated into MO-AVOA to introduce long-memory effects into the optimization process. Unlike integer-order updates that depend solely on the current iteration, fractional-order dynamics allow the search process to retain information from past states, leading to smoother convergence trajectories and improved stability. This memory-aware behaviour is particularly beneficial in dynamic MEC environments, where historical workload patterns can provide valuable guidance for future edge-server allocation decisions.
The main contributions of this paper are summarized as follows:
i. Dynamic Edge-Server Allocation Framework: A dynamic server allocation strategy is developed, where the number and placement of edge servers are adaptively optimized according to real-time workload and user density variations across the MEC network.
ii. FO-MO-AVOA-Based Multi-Objective Optimization: A Fractional-Order Multi-Objective African Vulture Optimization Algorithm (FO-MO-AVOA) is proposed to jointly minimize latency, workload imbalance, and maximum workload on edge servers, ensuring efficient offloading and balanced resource utilization.
iii. Movement-Cost-Aware Adaptation: A movement-cost model is integrated to minimize relocation overhead when dynamically repositioning edge servers, enabling cost-effective adaptability under time-varying network conditions.
iv. Comprehensive Evaluation and Benchmarking: Extensive experiments using Shanghai Telecom's real-world MEC dataset demonstrate that FO-MO-AVOA consistently outperforms benchmark algorithms (MO-EseaH, MO-JAYA, MO-CSA, and MO-JS) in terms of latency reduction, workload balance improvement, and deployment cost reduction.
The remainder of this paper is organized as follows: Section 2 reviews related work on edge-server placement and metaheuristic optimization. Section 3 formulates the problem as a mathematical model. Section 4 presents the proposed FO-MO-AVOA. Section 5 describes the performance evaluation setup and datasets. Section 6 discusses simulation results and performance analysis. Section 7 concludes the paper and highlights directions for future research.

2. Related Work

Edge-server deployment in MEC has gained much attention in recent years due to its significant impact on low-latency and energy-efficient computing. The problem is NP-hard because it couples combinatorial decision variables, such as the number, locations, and capacities of servers, with dynamic user demand [1,2]. Early works focused on static edge-server deployment, where server locations remain fixed after deployment [4,5]. Such approaches can optimize latency or cost in a static scenario but cannot adapt to changing user density or network traffic in a dynamic one [10,11]. To address these difficulties, recent works propose adaptive or dynamic edge-server deployment methods, in which server locations are periodically rearranged according to spatio-temporal variations in user mobility, workload, or resource cost [12,13]. Jiang et al. proposed an MDP formulation for dynamic edge-server deployment based on reinforcement learning, aiming to trade off reconfiguration cost against system delay [9]. Guo et al. proposed a deep reinforcement learning architecture, called DRLO, for dynamic edge-server deployment in MEC and validated its adaptability in dynamic scenarios [8]. A hybrid approach proposed by Liu et al. [14] divides servers into static and dynamic groups and uses the Improved Snake Optimization Algorithm (ISOA) for optimized movement and scaling, yielding considerable improvements in latency and deployment cost [15]. Beyond reinforcement learning and swarm intelligence, metaheuristic optimization techniques have been used extensively for placement and scheduling problems owing to their convergence efficiency and robustness in non-convex landscapes [16]. The Enhanced Sea Horse Optimizer (ESeaH), introduced by Al-Tashi et al. [17], demonstrates improved exploration in dynamic resource allocation tasks. Comparable examples include the JAYA algorithm, a parameterless and self-adaptive method that has been employed in energy-aware workload assignment and multi-objective edge resource optimization [18,19], and the Crow Search Algorithm (CSA), which has been applied to task scheduling in Fog/MEC with the support of Lévy flights and dynamic memory [20,21].
On the other hand, the Jellyfish Search (JS) algorithm, motivated by the behaviour of ocean currents, has been applied efficiently to fog job scheduling and QoS-aware service placement in fog computing [22]. Although metaheuristic approaches such as those above have achieved considerable success across science, technology, and computing, their direct application to dynamic edge-server allocation in MEC remains limited. Most available methods target relatively simple, analytically or semi-dynamically varying systems and disregard edge-server relocation costs, geographical migration patterns, and time-driven variations in service requests. In addition, techniques that can efficiently manage global and local search, such as adaptive Lévy flight or hunger-rate regulation, are rarely employed in MEC service placement. More recently, bio-inspired optimization algorithms such as RAMPA, ImTSA, and competitive swarm optimizers have been shown to alleviate the exploration-exploitation dilemma via adaptive control methods or competitive learning processes [23,24]. Nonetheless, these methods operate on the current iteration only and carry no memory. By contrast, FO-MO-AVOA embeds a fractional-order memory into the search process itself, exploiting past dependencies while enabling smooth adaptation, which is particularly useful for dynamic multi-objective MEC optimization. The underlying global optimizer, the AVOA introduced by Abdollahzadeh et al. [25], has demonstrated strong global search capabilities through a mathematically described hunger-rate control model for exploitation, inspired by the leadership structure of African vultures' foraging behaviour.
In the traditional AVOA, the hunger rate follows a linear decay mechanism, which may not correctly represent nonlinear convergence behaviour in complex search spaces. This can cause the algorithm to enter the exploitation phase prematurely or, conversely, to remain in an over-exploration state. Moreover, the stochastic exploitation stage uses fixed step sizes, potentially limiting the depth of the local search. To address these issues, this study introduces nonlinear hunger-rate control and an adaptive Lévy-flight exploitation strategy to strengthen local intensification while retaining global diversity. To impose a long-memory effect on the optimization process of the proposed MO-AVOA, the study adopts a fractional-calculus formulation. In integer-order systems, the update depends only on the current iteration, whereas the proposed approach preserves the search history, resulting in more stable convergence behaviour. This is significant for global optimization in dynamic MEC environments, where historical information helps guide edge-server allocation in subsequent iterations.
Though AVOA has been successfully applied to mechanical design, energy scheduling, and feature selection, its applicability to dynamic resource allocation and edge-server placement in MEC has yet to be explored. Therefore, this work applies FO-MO-AVOA to dynamic edge-server placement for the first time. The adaptive hunger function and the nonlinear balance of exploration and exploitation in the proposed FO-MO-AVOA are harnessed to reduce combined latency and deployment cost across time-varying networks. As indicated in Table 1, among current dynamic-placement metaheuristics, no method (a) uses FO-MO-AVOA, (b) formulates edge-server placement in a relocation- or movement-cost-aware manner, or (c) applies nonlinear hunger-rate control with Lévy-flight exploration. The proposed FO-MO-AVOA aims to address all three of these research gaps for dynamic edge-server placement within intelligent next-generation MEC networks.

3. Problem Formulation

The network model adopted in this study is derived from a real-world telecom dataset collected from an operational urban cellular network. Specifically, the base-station topology, user distribution, and traffic demand patterns are extracted from the publicly available Shanghai Telecom dataset, which has been widely used in MEC and edge-computing research. This dataset reflects realistic spatial deployment of base stations, heterogeneous user densities, and time-varying traffic characteristics, thereby providing a reliable experimental proxy for practical edge-server allocation scenarios. Rather than relying on synthetic assumptions, the proposed optimization framework is evaluated using this real-world data to ensure that the obtained results are representative of actual MEC network behaviour.
In the proposed MEC architecture, the network consists of a set of N base stations, denoted as Equation (1),
$$B = \{ b_1, b_2, \ldots, b_N \},$$
and a set of M edge servers, represented by Equation (2),
$$S = \{ s_1, s_2, \ldots, s_M \}.$$
Let the binary decision variable be defined as
$$x_{ij} = \begin{cases} 1, & \text{if base station } b_i \text{ is assigned to edge server } s_j, \\ 0, & \text{otherwise}, \end{cases}$$
Similarly, the deployment of an edge server is represented by
$$y_j = \begin{cases} 1, & \text{if an edge server is deployed at base station } b_j, \\ 0, & \text{otherwise}. \end{cases}$$
The total number of deployed edge servers is constrained by
$$\sum_{j=1}^{N} y_j = M.$$
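To make the decision-variable structure concrete, the following Python sketch checks a candidate deployment and assignment against the placement constraints of this section and Section 3.3. It is illustrative only; the variable names and the toy instance are assumptions, not part of the original formulation.

```python
import numpy as np

def feasible(x, y, M):
    """Check the placement/assignment constraints from Section 3.

    x : (N, N) binary matrix, x[i, j] = 1 if base station i is served by an
        edge server co-located with base station j.
    y : (N,) binary vector, y[j] = 1 if an edge server is deployed at j.
    M : required number of deployed edge servers.
    """
    if y.sum() != M:                       # exactly M servers deployed (Eq. (5))
        return False
    if not np.all(x.sum(axis=1) == 1):     # one server per base station (Eq. (14))
        return False
    if np.any(x > y[None, :]):             # only active servers may be used (Eq. (15))
        return False
    return True

# Toy example with N = 4 base stations and M = 2 servers (illustrative only).
N, M = 4, 2
y = np.array([1, 0, 1, 0])
x = np.zeros((N, N), dtype=int)
x[[0, 1], 0] = 1   # stations 0 and 1 served by the server at station 0
x[[2, 3], 2] = 1   # stations 2 and 3 served by the server at station 2
print(feasible(x, y, M))   # True
```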

3.1. Distance and Latency Modelling

Let $L_b(b_i)$ and $L_s(s_j)$ denote the geographical coordinates of the base station $b_i$ and edge server $s_j$, respectively. The geographical distance between a base station $b_i$ and an edge server $s_j$ is expressed as
$$d(b_i, s_j) = \sqrt{\left(L_b(b_i) - L_s(s_j)\right)^2}.$$
Each base station b i has a workload intensity represented by w i . The main objective of the placement problem is to determine M optimal server locations among N candidate base stations such that the total network latency is minimized while maintaining workload balance across all active edge servers.
To achieve minimum total latency throughout the network, the objective function can be formulated as follows:
$$D = \sum_{j=1}^{M} \sum_{b_i \in B_j} d(b_i, s_j),$$
where $B_j$ represents the subset of base stations assigned to the $j$th edge server $s_j$.
To prevent unbalanced workload distribution, where certain servers are overloaded while others remain underutilized, the workload of each edge server must be balanced.
The objective of latency minimization is therefore given by
$$\min D.$$

3.2. Workload Modelling and Load Balancing

The workload assigned to each edge server $s_j$ is denoted as $W_j$ and is calculated as the sum of the workloads of its connected base stations, i.e.,
$$W_j = \sum_{b_i \in B_j} w_i.$$
The workload balance among all servers is evaluated using the standard deviation, defined as follows:
$$\sigma_W = \sqrt{\frac{1}{M}\sum_{j=1}^{M}\left(W_j - \bar{W}\right)^2},$$
where
$$\bar{W} = \frac{1}{M}\sum_{j=1}^{M} W_j$$
represents the average workload across all edge servers.
A smaller σ W corresponds to better load balancing performance. Hence, the secondary optimization objective is expressed as
$$W = \min\ \sigma_W.$$
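To make the two objectives concrete, the sketch below evaluates the total latency of Equation (7), the per-server workloads of Equation (9), and the workload standard deviation of Equation (10) for one candidate assignment. Array names and the coordinate convention are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def evaluate(assign, bs_xy, server_xy, w):
    """Evaluate the objectives of Section 3 for one candidate solution.

    assign    : (N,) integer array, assign[i] = index of the deployed server
                that serves base station i.
    bs_xy     : (N, 2) base-station coordinates.
    server_xy : (M, 2) coordinates of the deployed edge servers.
    w         : (N,) workload intensity of each base station.
    """
    # Total latency D: sum of base-station-to-server distances (Eq. (7)).
    d = np.linalg.norm(bs_xy - server_xy[assign], axis=1)
    D = d.sum()

    # Per-server workload W_j (Eq. (9)), its standard deviation (Eq. (10)),
    # and the maximum server load.
    M = server_xy.shape[0]
    W = np.array([w[assign == j].sum() for j in range(M)])
    sigma_W = W.std()          # population standard deviation, matching Eq. (10)
    max_load = W.max()
    return D, sigma_W, max_load
```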

3.3. Multi-Objective Optimization Model

The edge-server placement and allocation problem is formulated as a multi-objective optimization problem that jointly minimizes network latency and workload imbalance,
$$\min (D, W),$$
subject to the following constraints:
$$\sum_{j=1}^{N} x_{ij} = 1, \quad \forall\, b_i \in B,$$
$$x_{ij} \leq y_j, \quad \forall\, b_i \in B,\ s_j \in S,$$
$$x_{ij} \in \{0, 1\}, \quad y_j \in \{0, 1\}.$$
Constraints in Equation (14) ensure that each base station is assigned to exactly one edge server, while constraints in Equation (15) guarantee that base stations can only be assigned to active edge servers.
Given a mobile edge computing network consisting of a set of base stations with time-varying user demands, the problem addressed in this study is to determine the optimal placement and allocation of edge servers such that multiple conflicting objectives are simultaneously optimized. Specifically, the goal is to minimize end-to-end service latency and workload imbalance among edge servers while ensuring balanced utilization under dynamic traffic conditions. The decision variables include the selection of base stations to host edge servers and the assignment of user workloads to these servers, subject to network capacity and operational constraints. This multi-objective optimization problem is nonlinear, large-scale, and NP-hard in nature, motivating the development of an efficient and memory-aware optimization framework.

4. Proposed FO-MO-AVOA

4.1. Original African Vulture Optimization Algorithm (AVOA)

The African Vulture Optimization Algorithm (AVOA) is a population-based metaheuristic inspired by the foraging and social interaction behaviour of African vultures [28]. In this approach, the exploration and exploitation stages of the search are governed by a starvation mechanism in which candidate solutions are guided by the best and second-best vultures. Depending on the starvation level, one of several position-update strategies dominates, making AVOA an appealing metaheuristic framework for tackling nonlinear optimization problems.

4.2. Theoretical Demerits of the Original AVOA

Despite its effectiveness, the original AVOA has several theoretical shortcomings. First, the hunger-rate control assumes a linear decay mechanism, which may not reflect the nonlinear convergence behaviour of complex, high-dimensional search spaces; this can lead to premature exploitation or excessive exploration and degrade the balance between global search and local refinement. Second, the exploitation strategies rely on fixed stochastic movement patterns, which may weaken fine-grained local search accuracy. Third, the lack of a memory mechanism means the algorithm depends only on the current iteration, making it prone to oscillatory behaviour and convergence instability, especially in dynamic optimization environments.

4.3. Modified AVOA with Fractional-Order Enhancement (FO-MO-AVOA)

To address the aforementioned limitations, this study proposes a Fractional-Order Multi-Objective AVOA (FO-MO-AVOA). The modifications comprise three key components. First, a nonlinear hunger-rate control mechanism is introduced to enable adaptive and smoother transitions between exploration and exploitation phases. Second, an adaptive Lévy-flight-based exploitation strategy is incorporated to enhance local intensification while preserving global diversity. Third, fractional-order calculus is embedded into the position-update process to introduce long-memory effects, allowing the algorithm to retain historical search information. This memory-aware mechanism improves convergence stability, mitigates premature stagnation, and enhances adaptability in dynamic MEC environments. Together, these enhancements enable FO-MO-AVOA to achieve superior Pareto-front convergence and balanced search performance. The fractional-calculus-enhanced MO-AVOA extends the standard MO-AVOA by integrating fractional calculus (FC) into its search mechanisms. Inspired by the African vulture's sophisticated navigational intelligence, the algorithm leverages the fractional-order parameter $\alpha$, with $0 < \alpha \leq 1$, to introduce long-term memory and non-local search dynamics into the population's position updates. This modification uses the fractional-order difference operator to modulate the vultures' exploration and exploitation steps, allowing the algorithm to balance global surveying and local contouring more effectively by remembering past optimal paths. The enhanced dynamic movement helps the optimizer escape local optima more efficiently and improves convergence stability, making FO-MO-AVOA a more robust metaheuristic for complex multi-objective edge-server allocation problems. The Grünwald-Letnikov fractional derivative definition is employed:
$$D^{\alpha}[f(t)] = \lim_{h \to 0} h^{-\alpha} \sum_{j=0}^{\infty} (-1)^{j} \binom{\alpha}{j} f(t - jh).$$
For a signal $x(t)$, the Grünwald-Letnikov fractional derivative of order $\alpha \in (0, 1]$ can be approximated numerically as
$$D^{\alpha}_{GL}\, x(t_k) \approx \frac{1}{(\Delta t)^{\alpha}} \sum_{j=0}^{m} (-1)^{j} \binom{\alpha}{j} x(t_{k-j}),$$
where $t_k = k\,\Delta t$, $m$ is the memory length, and
$$\binom{\alpha}{j} = \frac{\Gamma(\alpha + 1)}{\Gamma(j + 1)\,\Gamma(\alpha - j + 1)}.$$
Fractional-order position update used in FO-MO-AVOA: the fractional memory is implemented using a weighted history of past position increments,
$$\Delta X_i^{(\alpha)}(t) = \sum_{j=0}^{m} w_j^{(\alpha)}\, \Delta X_i(t - j),$$
where $w_j^{(\alpha)} = (-1)^{j}\binom{\alpha}{j}$, and the vulture position is updated as
$$X_i(t+1) = X_i(t) + \eta\, \Delta X_i^{(\alpha)}(t),$$
where $\eta$ is a scaling factor (step size) and $m$ controls the memory depth.
To reduce computational cost, the coefficients are computed recursively:
$$w_0^{(\alpha)} = 1, \qquad w_j^{(\alpha)} = \left(1 - \frac{\alpha + 1}{j}\right) w_{j-1}^{(\alpha)}, \quad j \geq 1.$$
These additions provide an explicit calculation formula for the fractional derivative effect used in the proposed optimizer.
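As a small illustration, the recursion in Equation (22) and the memory-weighted update of Equations (20) and (21) can be implemented in a few lines. The Python sketch below is illustrative only; the function names and the example increments are assumptions, not the paper's code.

```python
import numpy as np

def gl_weights(alpha, m):
    """Recursive Grünwald-Letnikov weights w_j^(alpha), j = 0..m (Eq. (22))."""
    w = np.empty(m + 1)
    w[0] = 1.0
    for j in range(1, m + 1):
        w[j] = (1.0 - (alpha + 1.0) / j) * w[j - 1]
    return w

def fractional_step(history, alpha, eta):
    """Fractional-order position increment (Eqs. (20)-(21)).

    history : list of past position increments, most recent first,
              i.e. history[j] = Delta X_i(t - j).
    """
    m = len(history) - 1
    w = gl_weights(alpha, m)
    delta = sum(w[j] * history[j] for j in range(m + 1))   # Eq. (20)
    return eta * delta                                     # step added in Eq. (21)

# Example: a vulture remembers its last four increments (illustrative values).
past = [np.array([0.4, -0.1]), np.array([0.2, 0.3]),
        np.array([0.1, 0.1]), np.array([-0.05, 0.2])]
print(fractional_step(past, alpha=0.8, eta=1.0))
```

Because the weights alternate in sign and decay with $j$, older increments contribute progressively less, which is the long-memory effect exploited by FO-MO-AVOA.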

4.3.1. Starvation Rate and Phase Control

In particular, the algorithm models the vultures’ behavioural dynamics, where their actions differ according to the hunger level, which is referred to as the hunger rate. It governs the balance between exploration and exploitation during the optimization process. This adaptive mechanism allows FO-MO-AVOA to dynamically adjust its search strategy to efficiently navigate complex solution spaces and avoid premature convergence. The general procedural framework of the FO-MO-AVOA is depicted in Figure 1.
Figure 1 illustrates the overall workflow of the proposed FO-MO-AVOA. In contrast to the original AVOA, the optimization loop explicitly integrates a fractional-order update stage in which historical position information is incorporated into the current search trajectory. This fractional-calculus-based memory mechanism operates in conjunction with the nonlinear hunger-rate control and adaptive Lévy-flight exploitation strategies, ensuring a balanced transition between global exploration and local exploitation throughout the optimization process. Specifically, the flowchart includes a fractional-order memory update within the position-update stage, adaptive nonlinear hunger-rate computation for phase control, and fractional-order position refinement prior to fitness evaluation.
The mathematical model of the hunger rate is shown in Equation (23),
$$H_i^{\alpha}(k) = \left(2 \times r^{\alpha} + 1\right) \times \eta \times \left(1 - \left(\frac{k}{K}\right)^{\beta}\right) + \Delta_k^{\alpha},$$
and
$$\Delta_k^{\alpha} = \xi \times \left[\sin^{\omega}\!\left(\frac{\pi}{2}\left(\frac{k}{K}\right)^{\beta}\right) + \cos\!\left(\frac{\pi}{2}\left(\frac{k}{K}\right)^{\beta}\right) - 1\right] + \gamma \times \sum_{j=1}^{M} w_j \times H_i(k - j),$$
where
$$w_j = \left(1 - \frac{j}{M}\right)^{\alpha} \quad \text{(decaying memory weights)},$$
where $M$ is the memory length and $\gamma$ is the memory weight ($0 < \gamma < 1$).
In the FO-MO-AVOA, $H_i^{\alpha}(k)$ denotes the hunger rate of the $i$th vulture during the $k$th iteration, which governs the adaptive behaviour of the search agents. The term $\Delta_k^{\alpha}$ is the nonlinear, memory-augmented adjustment defined in Equation (24). The variable $k$ corresponds to the current iteration number, while $K$ indicates the total number of iterations. The term $r$ signifies a uniformly distributed random number within the interval $[0, 1]$, whereas $\xi$ and $\eta$ represent stochastic variables randomly generated within the ranges $[-2, 2]$ and $[-1, 1]$, respectively. The constant $\omega$ is assigned a fixed value of 2.5 in the standard implementation of FO-MO-AVOA.
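A compact sketch of the memory-augmented hunger rate, following Equations (23) and (24) as written above, is given below. It is illustrative only: the default values of α, β, γ and the memory length, as well as the function signature, are assumptions (only ω = 2.5 is stated in the paper).

```python
import numpy as np

def hunger_rate(k, K, H_hist, alpha=0.8, beta=1.0, gamma=0.3,
                omega=2.5, M=5, rng=None):
    """Nonlinear, memory-augmented hunger rate H_i^alpha(k), cf. Eqs. (23)-(24).

    H_hist holds the vulture's previous hunger-rate values, most recent first;
    alpha, beta, gamma and M are illustrative defaults, not the paper's settings.
    """
    rng = rng or np.random.default_rng()
    r = rng.random()                  # r^alpha, uniform in [0, 1]
    eta = rng.uniform(-1.0, 1.0)      # eta in [-1, 1]
    xi = rng.uniform(-2.0, 2.0)       # xi in [-2, 2]
    phase = (k / K) ** beta

    # Decaying-weight memory term over the last M hunger-rate values (Eq. (24)).
    mem = sum((1.0 - j / M) ** alpha * H_hist[j - 1]
              for j in range(1, min(M, len(H_hist)) + 1))

    delta = xi * (np.sin(np.pi / 2 * phase) ** omega
                  + np.cos(np.pi / 2 * phase) - 1.0) + gamma * mem
    return (2.0 * r + 1.0) * eta * (1.0 - phase) + delta
```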
A negative value of the hunger rate, $H_i^{\alpha}(k) < 0$, signifies that the vulture is in a hungry state and thus engages in exploration to locate new prey (potential solutions). Conversely, as the hunger rate approaches zero, the vulture enters an exploitative state in which it intensifies its search around promising regions. To model the collective intelligence and leadership hierarchy observed in natural vulture populations, AVOA designates either the best or the second-best vulture as the leader. Equation (25) gives the mathematical formulation governing this leadership mechanism.
$$L_i^{\alpha}(k) = \begin{cases} Elite_1^{\alpha}(k), & \text{if } p_0^{\alpha} \geq r^{\alpha}, \\ Elite_2^{\alpha}(k), & \text{otherwise}, \end{cases} \qquad p_0^{\alpha} = p_0 + \mu \times D^{\alpha}(p_0).$$
In this context, $L_i^{\alpha}(k)$ denotes the leader vulture selected to guide the $i$th agent at the $k$th iteration, which contributes to the stochastic exploration behaviour of the algorithm. The variables $Elite_1^{\alpha}(k)$ and $Elite_2^{\alpha}(k)$ correspond to the best and second-best vultures identified based on their fitness values, respectively. These elite individuals guide the remaining vultures toward promising regions of the search space, ensuring a balance between exploitation and exploration. The parameter $p_0^{\alpha}$ represents a control constant, typically set to 0.8, which determines the probability of selecting one of the elite vultures for guiding the movement of the other agents.

4.3.2. Exploration Phase

When $|H_i^{\alpha}(k)| \geq 1$, the vultures spread across different regions of the search space in pursuit of food, indicating that FO-MO-AVOA has entered the exploration phase. During this stage, the vultures exhibit dynamic movement patterns designed to enhance global search diversity and prevent premature convergence. To simulate these behaviours, FO-MO-AVOA employs two distinct exploration strategies, mathematically represented as follows:
$$X_i(k+1) = \begin{cases} \text{Equation (27)}, & \text{if } p_1^{\alpha} \geq r_{p_1}^{\alpha}, \\ \text{Equation (28)}, & \text{if } p_1^{\alpha} < r_{p_1}^{\alpha}, \end{cases} \qquad p_1^{\alpha} = p_1 \times \exp\!\left(-\alpha \times \left(\frac{k}{K}\right)^{\beta}\right),$$
$$X_i(k+1) = L_i^{\alpha}(k) - D_i^{\alpha}(k) \times H_i^{\alpha}(k) + \lambda^{\alpha} \times D^{\alpha}\!\left(L_i^{\alpha}(k) - X_i(k)\right), \qquad D_i^{\alpha}(k) = \left|\lambda \times L_i^{\alpha}(k) - X_i(k)\right|^{\alpha},$$
$$X_i(k+1) = L_i^{\alpha}(k) - H_i^{\alpha}(k) + r^{\alpha} \times \left((ub - lb) \times r^{\alpha} + lb\right) + \zeta \times D^{\alpha}\!\left(X_i(k)\right).$$
In Equation (26), $X_i(k+1)$ denotes the updated position of the $i$th vulture in the subsequent iteration, while $p_1^{\alpha}$ is a control coefficient set to 0.6. The variable $r_{p_1}^{\alpha}$ in Equation (26) represents a uniformly distributed random number in the interval [0, 1]. The term $D_i^{\alpha}(k)$ in Equation (27) indicates the distance between the current vulture and the selected leader vulture, which helps determine the movement direction. The parameter $\lambda$ is a random value generated within the range [−2, 2]. $ub$ and $lb$ in Equation (28) denote the upper and lower search boundaries of the problem domain, respectively.

4.3.3. Exploitation Phase

When $0.5 \leq |H_i^{\alpha}(k)| < 1$, the vultures enter the exploitation phase. In this stage, they intensify their focus on potential prey locations, reflecting the natural behaviour of food protection and competition among individuals. The exploitation process is modelled through two adaptive strategies, as defined by the following equations:
$$X_i(k+1) = \begin{cases} \text{Equation (30)}, & \text{if } p_2^{\alpha} \geq r_{p_2}^{\alpha}, \\ \text{Equation (31)}, & \text{if } p_2^{\alpha} < r_{p_2}^{\alpha}, \end{cases}$$
$$X_i(k+1) = D_i^{\alpha}(k) \times \left(H_i^{\alpha}(k) + r^{\alpha}\right) - d_i^{\alpha}(k), \qquad d_i^{\alpha}(k) = L_i^{\alpha}(k) - X_i(k) + \delta \times D^{\alpha}\!\left(L_i^{\alpha}(k) - X_i(k)\right),$$
$$X_i(k+1) = L_i^{\alpha}(k) - \left(S_1^{\alpha} + S_2^{\alpha}\right), \quad \text{with} \quad S_1^{\alpha} = L_i^{\alpha}(k) \times \frac{r^{\alpha} \times X_i(k)}{2\pi} \times \cos\!\left(X_i(k)\right) \times \alpha, \quad S_2^{\alpha} = L_i^{\alpha}(k) \times \frac{r^{\alpha} \times X_i(k)}{2\pi} \times \sin\!\left(X_i(k)\right) \times \alpha.$$
In Equation (29), $p_2^{\alpha}$ is a control parameter fixed at 0.4, while $r_{p_2}^{\alpha}$ is a uniformly distributed random number in [0, 1]. The variable $L_i^{\alpha}(k)$ in Equation (30) represents one of the elite vultures selected to guide the search process. This exploitation subphase models the vultures' protective behaviour, where they compete to secure food resources by fine-tuning their search trajectories around the best-known positions.
In the second exploitation substage, where the hunger rate satisfies $|H_i^{\alpha}(k)| < 0.5$, the vultures exhibit intense competition and aggregation around food sources. This behaviour is mathematically represented as follows:
$$X_i(k+1) = \begin{cases} \text{Equation (33)}, & \text{if } p_3^{\alpha} \geq r_{p_3}^{\alpha}, \\ \text{Equation (34)}, & \text{if } p_3^{\alpha} < r_{p_3}^{\alpha}, \end{cases}$$
$$X_i(k+1) = \frac{Q_1^{\alpha} + Q_2^{\alpha}}{2} + \eta^{\alpha} \times D^{\alpha}\!\left(\frac{Q_1^{\alpha} + Q_2^{\alpha}}{2}\right), \qquad Q_1^{\alpha} = Elite_1^{\alpha}(k) - \frac{\left(Elite_1^{\alpha}(k) \times X_i(k)\right)^2}{Elite_1^{\alpha}(k)^2 - X_i(k)^2} \times L_i^{\alpha}(k), \qquad Q_2^{\alpha} = Elite_2^{\alpha}(k) - \frac{\left(Elite_2^{\alpha}(k) \times X_i(k)\right)^2}{Elite_2^{\alpha}(k)^2 - X_i(k)^2} \times L_i^{\alpha}(k),$$
and
$$X_i(k+1) = L_i^{\alpha}(k) - \delta_i^{\alpha}(k) \times H_i^{\alpha}(k) \times L^{\alpha}(d), \qquad \delta_i^{\alpha}(k) = L_i^{\alpha}(k) - X_i(k) + \rho \times D^{\alpha}\!\left(L_i^{\alpha}(k) - X_i(k)\right), \qquad L^{\alpha}(d) = 0.01 \times \frac{u}{|v|^{1/\beta^{\alpha}}}, \quad \beta^{\alpha} = \beta \times \alpha.$$
In Equation (32), $p_3^{\alpha}$ is set to 0.4, and $r_{p_3}^{\alpha}$ is a uniformly distributed random number within [0, 1]. The variables $u$ and $v$ in Equation (34) are random numbers that follow a Gaussian distribution. The pseudocode of the proposed algorithm for solving the dynamic edge-server allocation problem is given in Algorithm 1.
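Before moving to the pseudocode, the Lévy-flight step $L^{\alpha}(d)$ of Equation (34) can be generated from two Gaussian samples. The Python sketch below is illustrative only: the Mantegna scaling factor sigma_u is a common way to obtain Lévy-stable step lengths and is an assumption here, since the paper states only that u and v are Gaussian; the function and variable names are not from the paper.

```python
import numpy as np
from math import gamma as Gamma

def levy_step(dim, beta_alpha, rng=None):
    """Lévy-flight step L^alpha(d) of Eq. (34), one value per dimension."""
    rng = rng or np.random.default_rng()
    # Mantegna scaling (assumed): makes u / |v|^(1/beta_alpha) Lévy-stable.
    sigma_u = (Gamma(1 + beta_alpha) * np.sin(np.pi * beta_alpha / 2)
               / (Gamma((1 + beta_alpha) / 2) * beta_alpha
                  * 2 ** ((beta_alpha - 1) / 2))) ** (1 / beta_alpha)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return 0.01 * u / np.abs(v) ** (1.0 / beta_alpha)

# Example: a 5-dimensional Lévy step with beta^alpha = 1.5 * 0.8.
print(levy_step(5, beta_alpha=1.5 * 0.8))
```

The heavy-tailed steps occasionally produce long jumps, which is what allows the exploitation phase to escape local refinement when it stagnates.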

4.3.4. Algorithm 1: Pseudocode of the Proposed FO-MO-AVOA

Algorithm 1: FO-MO-AVOA for Dynamic Edge-Server Allocation
1. Initialize population of vultures (candidate solutions)
2. Initialize algorithm parameters and fractional order α
3. Evaluate multi-objective fitness (latency, workload imbalance)
4. Initialize Pareto archive
5. While stopping criterion is not met do
6.     Update nonlinear hunger rate for each vulture
7.     Select best and second-best vultures based on Pareto dominance
8.     For each vulture do
9.         If hunger rate indicates exploration, then
10.            Update position using exploration strategy
11.        Else
12.            Apply adaptive Lévy-flight exploitation strategy
13.        End if
14.        Apply fractional-order position update using historical positions
15.        Repair solution if constraints are violated
16.    End for
17.    Evaluate fitness of updated population
18.    Update Pareto archive using dominance and diversity criteria
19. End while
20. Output final Pareto-optimal solution set
The application of FO-MO-AVOA to the dynamic edge-server allocation problem follows a structured procedure that links the problem formulation with the algorithmic components:
1. Solution Encoding: Each vulture (candidate solution) represents a feasible edge-server deployment, including (i) the selection of base stations to host edge servers and (ii) the assignment of user workloads to the selected servers. The encoding ensures compatibility with network capacity and operational constraints.
2. Objective Function Evaluation: For each candidate solution, the multi-objective fitness functions defined in Section 3 are evaluated. These include the minimization of end-to-end service latency and workload imbalance among edge servers. Constraint violations, if any, are handled through solution repair or penalty mechanisms.
3. Initialization and Parameter Setting: The FO-MO-AVOA population is initialized randomly within the feasible solution space. Key tuning parameters include population size, maximum number of iterations, fractional order α controlling memory depth, nonlinear hunger-rate coefficients governing the exploration-exploitation balance, and Lévy-flight parameters for exploitation.
4. Search Process: At each iteration, the nonlinear hunger-rate mechanism adaptively determines whether a vulture performs exploration or exploitation. Exploration promotes global search across the network topology, while exploitation refines promising edge-server configurations. Fractional-order updates incorporate historical solution information to improve convergence stability and adaptability to dynamic traffic conditions.
5. Pareto Archive Update: Candidate solutions are evaluated using Pareto dominance. Non-dominated solutions are stored in an external archive, which is updated iteratively to maintain diversity and represent trade-offs between latency and workload imbalance (a minimal dominance-and-archive sketch is given after this list).
6. Termination and Output: The algorithm terminates when the stopping criterion is met. The final output consists of a set of Pareto-optimal edge-server allocation solutions, from which network operators can select configurations based on specific performance priorities.
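As a concrete illustration of step 5, the sketch below shows a minimal Pareto-dominance test and external-archive update with a simple crowding-based truncation. It is an illustrative sketch, not the authors' implementation; the crowding rule and the size limit are assumptions.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate, max_size=100):
    """Insert a candidate objective vector (latency, workload CV, max load)
    into the external archive if it is non-dominated."""
    if any(dominates(a, candidate) for a in archive):
        return archive                                   # dominated: discard
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    if len(archive) > max_size:
        # Simple diversity control (assumed): drop the member with the
        # smallest nearest-neighbour distance, i.e. the most crowded one.
        pts = np.asarray(archive, dtype=float)
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        archive.pop(int(d.min(axis=1).argmin()))
    return archive
```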

5. Performance Evaluation

This section demonstrates the performance of the proposed FO-MO-AVOA optimization algorithm in addressing the dynamic edge-server allocation problem within MEC environments. To validate the proposed algorithm’s effectiveness, its performance was compared against several well-known metaheuristic algorithms, i.e., the Multi-Objective Enhanced Sea Horse Optimization (MO-EseaH) algorithm, Multi-Objective Crow Search Algorithm (MO-CSA), Multi-Objective JAYA algorithm, and Multi-Objective Jellyfish Search (MO-JS) algorithm. The Shanghai Telecom base-station dataset was used to evaluate the performance of all algorithms. This section provides details of the dataset, experimental configuration, and comparative evaluation metrics.

5.1. Dataset

The dataset used for evaluating the FO-MO-AVOA was taken from the Shanghai Telecom Company, which consists of real-world information on base-station locations and network service requests [26,28]. The dataset includes extensive Internet traffic records and millions of connection sessions among approximately 10,000 mobile users and 3042 base stations. Its scale and authenticity make it highly suitable for studying the dynamic placement of edge servers in a metropolitan-scale MEC environment.

5.2. Experimental Setup

Under varying network configurations, the performance of the proposed FO-MO-AVOA framework is evaluated. In each case, a different number of base stations is considered, with approximately 10% of the total base stations designated as edge servers. The configuration parameters, i.e., population size, iteration limits, and convergence thresholds, were kept consistent across all algorithms to ensure fair comparison. The proposed algorithm was evaluated against MO-EseaH [29], MO-CSA [30], MO-JAYA [27], and MO-JS [29]. The experiments assessed each algorithm's capability to minimize overall latency (D) and workload imbalance (W), as defined in Section 3. Each algorithm was run multiple times, and the mean performance across all runs was recorded to mitigate stochastic bias.
Key attributes:
1. Average Latency (ms): Represents the mean access delay between base stations and associated edge servers.
2. Workload Deviation (σW): Quantifies the balance of computational loads among all active edge servers.
3. Convergence Speed: Measures the rate at which each algorithm reaches the optimal or near-optimal objective value.
4. Deployment Cost Efficiency: Reflects the total distance and migration overhead required to achieve optimal allocation.
All simulations were implemented in MATLAB 2025b on a workstation equipped with an Intel Core i9 processor (3.2 GHz) and 32 GB RAM.
Although single-agent optimization methods such as safe experimentation dynamics, norm-limited SPSA [31], and smoothed functional algorithms offer reduced computational complexity [32], they are generally limited in their ability to approximate diverse Pareto-optimal solutions in multi-objective and highly nonconvex problems. FO-MO-AVOA, as a population-based optimizer, incurs a higher computational cost; however, this cost is justified by its ability to simultaneously explore multiple trade-off solutions, maintain solution diversity, and achieve superior convergence behaviour. Empirical results demonstrate that the additional computational overhead remains within acceptable limits for practical MEC network sizes.
Table 2 summarizes the common experimental settings used for all multi-objective optimization algorithms. To ensure fair comparison, the population size and maximum number of iterations were kept identical for all algorithms in all case studies. The key algorithm-specific parameters were also kept fixed across Case 1–Case 3, and their settings are summarized in Table 2.

6. Results and Discussion

6.1. Validation on Benchmark Functions (CEC 2022)

The effectiveness of the proposed FO-MO-AVOA is evaluated on CEC 2022 benchmark functions [33] to assess its performance on multi-objective optimization problems.
As shown in Table 3, FO-MO-AVOA consistently outperforms MO-AVOA across best, median, mean, standard deviation, and worst metrics, indicating superior convergence and solution quality.
Moreover, the Wilcoxon signed-rank test results in Table 4 yield p-values below 0.05 for all cases, leading to the rejection of the null hypothesis (H0) and confirming the statistically significant superiority of FO-MO-AVOA over MO-AVOA, thereby validating the effectiveness of the proposed fractional-order enhancement.
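For reference, a paired Wilcoxon signed-rank comparison of this kind can be reproduced with scipy.stats.wilcoxon. The sketch below uses made-up paired run results purely for illustration; it does not reproduce the paper's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired best objective values from repeated runs of the two optimizers on one
# benchmark function (illustrative numbers only).
fo_mo_avoa = np.array([0.012, 0.011, 0.013, 0.010, 0.012,
                       0.011, 0.012, 0.013, 0.011, 0.012])
mo_avoa    = np.array([0.015, 0.014, 0.016, 0.013, 0.015,
                       0.014, 0.016, 0.015, 0.014, 0.015])

stat, p_value = wilcoxon(fo_mo_avoa, mo_avoa)
print(f"W = {stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the difference between the two algorithms is significant.")
```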
Having established its effectiveness on benchmark problems, FO-MO-AVOA is next evaluated on real-world dynamic edge-server allocation scenarios.

6.2. Real-World MEC Case Studies (Shanghai Telecom Dataset)

To comprehensively judge the effectiveness of the proposed FO-MO-AVOA for dynamic edge-server allocation, three case studies with different network configurations were conducted on the Shanghai Telecom dataset [28].
Case 1: 50 base stations with 10% edge servers.
Case 2: 100 base stations with 5% edge servers.
Case 3: 150 base stations with 5% edge servers.
These configurations match small, medium, and large MEC network topologies, respectively, to allow their performance evaluation under varying workload densities and spatial distributions. Each case is simulated under identical computational environment settings for all algorithms to ensure fair comparison.

6.3. Global Statistical Validation Across All Cases

Table 5 presents the Wilcoxon signed-rank test results for all compared algorithms across Case 1–Case 3.
As shown in Table 5, the Wilcoxon signed-rank test yields p-values below the 0.05 significance threshold for all compared algorithms, resulting in the rejection of the null hypothesis (H0) in every case. This outcome confirms that the performance differences between FO-MO-AVOA and each benchmark algorithm are statistically significant.

6.4. Fifty Base Stations with 10% Edge Servers (Case 1)

In Case 1, there are 50 base stations, and 10% of them host edge servers. The main objective is to minimize latency while keeping the workload balanced across all deployed servers. In Table 6, among the baseline optimization algorithms, MO-JS and MO-EseaH recorded low latency values (0.0365 s and 0.0310 s, respectively); however, both show high workload CVs (0.9564 and 0.7956), indicating poor workload balancing despite fast response. FO-MO-AVOA achieved an average latency of ≈0.0208 s, the lowest of all, while offering a far superior balance between latency and workload; trade-off points such as its maximum load of 0.2523 highlight consistent performance with tightly grouped Pareto solutions. Compared with the other algorithms, i.e., MO-AVOA and MO-CSA (0.0359 s and 0.0398 s latency with CVs of 0.2740 and 0.2763) and MO-JAYA (0.0307 s latency with a balanced CV of 0.3294), FO-MO-AVOA offers competitive latency combined with the most stable and globally optimal convergence profile.
The workload CV reflects the uniformity of task distribution. The smallest workload CV values were observed for FO-MO-AVOA (0.3822) and MO-AVOA (0.3950), whereas MO-EseaH and MO-JS exceeded 0.6, implying significant imbalance. The maximum workload of FO-MO-AVOA (0.2523) is the lowest among all algorithms, marginally outperforming MO-AVOA (0.2740) and remaining substantially below the peak loads of MO-JS (0.4798) and MO-EseaH (0.4232). This behaviour indicates that FO-MO-AVOA not only equalizes computational loads across servers but also minimizes the maximum utilization ratio.
Figure 2 shows the convergence behaviour of all six algorithms. The FO-MO-AVOA curve drops sharply in the latency convergence graph during the first 40 iterations and stabilizes near 0.007 s, indicating rapid improvement and early attainment of steady performance. MO-AVOA follows a similar trend but converges slightly more slowly, confirming the contribution of the fractional-order memory and adaptive mechanisms to accelerating convergence. MO-EseaH and MO-JS show premature convergence while maintaining slightly lower absolute latency levels. MO-CSA shows gradual improvement up to 150 iterations but continues to fluctuate beyond that point. MO-JAYA remains nearly constant, confirming its weaker search dynamics. In the workload-CV convergence graph, FO-MO-AVOA demonstrates the fastest descent and the lowest final CV (≈0.09–0.10) around 60 iterations, achieving a stable balance between servers. MO-EseaH and MO-JAYA converge early but settle at a higher CV (≈0.18–0.20). MO-CSA and MO-JS show oscillations due to varying exploration pressure. The maximum-workload curve reinforces these findings: FO-MO-AVOA consistently attained the lowest maximum workload (≈0.21–0.22) after approximately 80 iterations. The synchronized decline of all three FO-MO-AVOA curves (latency, CV, and max load) illustrates coherent multi-objective progress, meaning improvements in one objective do not degrade the others. Overall, Figure 2 confirms that FO-MO-AVOA achieves the most efficient and stable convergence trajectory, reaching high-quality trade-off solutions in fewer iterations than competing algorithms. Its nonlinear hunger-rate adaptation and Lévy-flight exploitation enable aggressive early exploration followed by fine-grained exploitation, resulting in faster convergence, smoother stability, and superior multi-objective balance for the small-scale network.
To evaluate the robustness and consistency of the compared algorithms, each method was executed independently for 30 runs. Table 6 reports the descriptive statistical results based on the knee-point scalarization of the obtained Pareto fronts. Lower values indicate better performance for all objectives.
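The knee-point scalarization used for this reporting can be realized in several ways; the paper does not specify one, so the sketch below shows one common choice, selecting the solution closest to the ideal point after min-max normalization. The names and the example front are illustrative assumptions.

```python
import numpy as np

def knee_point(front):
    """Pick a compromise (knee) point from a Pareto front, all objectives minimized.

    front : (n_solutions, n_objectives) array of objective vectors.
    Returns the solution closest to the ideal point after min-max normalization;
    this is one common scalarization, used here purely for illustration.
    """
    f = np.asarray(front, dtype=float)
    f_min, f_max = f.min(axis=0), f.max(axis=0)
    norm = (f - f_min) / np.where(f_max > f_min, f_max - f_min, 1.0)
    return f[np.linalg.norm(norm, axis=1).argmin()]

# Example front with (latency, workload CV, max load) objectives.
front = [[0.021, 0.38, 0.25], [0.031, 0.33, 0.27], [0.036, 0.80, 0.48]]
print(knee_point(front))
```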
Table 7 provides a comprehensive statistical comparison of all algorithms based on 30 independent runs. The results indicate that FO-MO-AVOA achieves the lowest mean latency, maximum load, and mean workload coefficient of variation, with reduced standard deviations, demonstrating superior load-balancing capability and higher solution stability. Overall, FO-MO-AVOA offers the most balanced trade-off among latency reduction, workload distribution, and convergence stability across the evaluated objectives.

Pareto-Front Distribution and Quality-Metric Analysis (Case 1)

The three-dimensional Pareto fronts for all algorithms are visualized in Figure 3, illustrating the trade-off surface among latency, workload CV, and maximum load. Each marker corresponds to a non-dominated solution in the final population of its respective optimizer. The FO-MO-AVOA solutions form a dense, continuous, and smoothly expanding front spanning from low-latency regions (≈0.02 s) to low-workload and low-load regions. This wide and uniformly populated surface demonstrates FO-MO-AVOA’s ability to simultaneously optimize multiple conflicting objectives, achieving balanced trade-offs without clustering in any single dimension.
The comparative quality-metric results are presented in Figure 4, which plots Hypervolume (HV), Spacing (SP), Spread (Δ), and Generational Distance (GD) for all algorithms. These indicators explain the performance observed in the Pareto fronts. FO-MO-AVOA achieved the highest Hypervolume (HV ≈ 0.7402), outperforming MO-AVOA (HV ≈ 0.679), MO-CSA (≈0.48), and MO-JS (≈0.52), which confirms its broader Pareto coverage and superior overall solution quality. FO-MO-AVOA also has the lowest Spacing (SP ≈ 0.2136), indicating evenly distributed non-dominated solutions with strong diversity control; the higher SP values of MO-EseaH (≈0.42) and MO-CSA (≈0.30) reflect irregular distributions. Furthermore, FO-MO-AVOA exhibits the smallest Spread (Δ ≈ 0.45), showing a consistently well-structured front compared to MO-AVOA (Δ ≈ 0.46), MO-CSA (≈0.95), and MO-JS (≈0.82). Lastly, FO-MO-AVOA achieved an exceptionally low GD (≈0.00256), demonstrating that its solutions were closest to the true Pareto-optimal boundary; MO-AVOA attained a comparable GD, whereas MO-EseaH and MO-CSA recorded GD values exceeding 0.08, signifying weaker convergence precision. The combined visual and quantitative evidence confirms that FO-MO-AVOA delivers the best overall multi-objective performance for the small-scale MEC network. The results substantiate that FO-MO-AVOA's adaptive hunger-rate control and Lévy-flight exploration mechanisms promote global coverage early in the search and ensure fine local refinement toward equilibrium solutions. Consequently, Figure 3 and Figure 4 validate that, for the 50-station network scenario, FO-MO-AVOA yields a superior Pareto set in both quality and diversity, outperforming traditional and contemporary metaheuristics (MO-EseaH, MO-JAYA, MO-CSA, MO-JS, MO-AVOA) across all performance dimensions.
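For completeness, the two distance-based indicators reported above can be computed as in the sketch below, which uses one common formulation of Generational Distance (mean nearest-neighbour distance to a reference front) and Spacing (standard deviation of intra-front nearest-neighbour distances). The reference front and array names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def generational_distance(front, reference):
    """GD: mean Euclidean distance from each obtained solution to the
    nearest point of the reference (approximated true) Pareto front."""
    F, R = np.asarray(front, dtype=float), np.asarray(reference, dtype=float)
    d = np.linalg.norm(F[:, None, :] - R[None, :, :], axis=-1).min(axis=1)
    return d.mean()

def spacing(front):
    """SP: standard deviation of nearest-neighbour distances within the front;
    smaller values indicate a more uniformly distributed solution set."""
    F = np.asarray(front, dtype=float)
    d = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).std()
```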

6.5. Hundred Base Stations with 5% Edge Servers (Case 2)

In this case, the number of base stations is doubled relative to Case 1 while the edge-server ratio is reduced, so the optimization challenge now emphasizes the algorithms' ability to retain latency control and load balance under resource scarcity. FO-MO-AVOA achieved the lowest latency of 0.008–0.021 s, followed by MO-AVOA with 0.019–0.024 s, MO-EseaH with 0.014–0.022 s, and MO-CSA (≈0.013–0.014 s), as given in Table 8. However, MO-CSA's advantage in raw latency is offset by its high workload variability (CV ≈ 0.53–0.80), signifying uneven task distribution. In comparison, FO-MO-AVOA maintains 0.021 s latency while keeping acceptable workload balance across its non-dominated archive. MO-EseaH and MO-JS fail to maintain consistency due to premature convergence and produce moderate latency values (0.012–0.022 s). MO-JAYA shows smooth convergence despite slightly higher latency (~0.023 s). Workload uniformity again distinguishes FO-MO-AVOA from its peers. While several algorithms (MO-EseaH, MO-CSA, and MO-JS) exhibit extreme workload fluctuations (CV > 0.5 and occasionally > 0.8), FO-MO-AVOA and MO-JAYA maintain CV values near 0.09–0.12, confirming effective balance across distributed servers. FO-MO-AVOA's lowest maximum load of 0.2139 surpasses all competitors, including MO-JAYA (0.2217), and prevents edge-server overload, which is critical in dense MEC topologies.
As shown in Figure 5, FO-MO-AVOA and MO-AVOA exhibit the steepest early descent and reach the lowest steady latencies (~0.008–0.0085 s and ~0.008–0.0089 s, respectively) by ≈120–150 iterations. MO-JS also converges quickly but plateaus slightly higher (~0.0087–0.0089 s). MO-CSA improves in steps to ~0.011 s and then stagnates, while MO-JAYA settles near ~0.013 s after a slower, monotone decline. MO-EseaH remains almost flat around ~0.015 s, indicating premature stagnation. In terms of workload CV, MO-AVOA shows the fastest and deepest reduction in dispersion, falling from ~0.22 to ~0.04 and stabilizing around iteration 250–350. MO-JS gradually descends from ~0.25 to ~0.045–0.048 but needs ~20–50 additional iterations to do so. MO-CSA drops early to ~0.11 and then stalls, MO-JAYA slips from ~0.15 to ~0.14, and MO-EseaH stays near ~0.024, reflecting little adaptive search. FO-MO-AVOA starts from 0.02 and nearly flattens at a value of 0.06. Taken together, MO-AVOA achieves the best latency–balance co-improvement, not just a low CV in isolation. In terms of maximum load, FO-MO-AVOA reaches the lowest peak load (~0.213) within the first ~200 iterations and holds it stably, evidence that its improvements in latency do not come at the expense of load skew. MO-JS converges to ~0.219, MO-CSA to ~0.218–0.220, MO-JAYA to ~0.221–0.222, and MO-EseaH remains the highest at ~0.228–0.229. The synchronized decline of FO-MO-AVOA's three curves (latency, workload CV, and max load) indicates coherent multi-objective progress.
To evaluate the robustness and consistency of the compared algorithms, each method was executed independently for 30 runs. Table 8 reports the descriptive statistical results based on the knee-point scalarization of the obtained Pareto fronts. Lower values indicate better performance for all objectives.
Table 9 presents a statistical comparison of all algorithms based on 30 independent runs. The results show that FO-MO-AVOA achieves the lowest standard deviation in latency and workload coefficient of variation, indicating superior convergence stability and robustness. While MO-AVOA attains marginally lower mean latency and maximum load, the performance gap remains negligible. Overall, FO-MO-AVOA provides the most consistent and well-balanced performance across latency reduction, workload distribution, and peak load control.

Pareto-Front Distribution and Quality-Metric Analysis (Case 2)

Figure 6 illustrates the Pareto-front distributions of the multi-objective optimization algorithms. FO-MO-AVOA forms the broadest and most continuous Pareto surface, extending across the lowest-latency (~0.008 s) and minimum max-load (~0.21–0.22) regions while preserving diversity along the workload-CV axis. This pattern demonstrates that FO-MO-AVOA retains effective trade-offs even under highly constrained edge-server resources. MO-AVOA covers part of the low-latency zone but clusters narrowly, reflecting limited diversity and incomplete exploration. MO-JAYA yields a compact, nearly planar distribution, an indicator of strong local exploitation but weaker global spread. MO-JS and MO-EseaH show scattered or vertically elongated fronts, implying inconsistent convergence behaviour. Overall, FO-MO-AVOA achieves the widest coverage and the best balance between convergence accuracy and diversity, outperforming its competitors across all three objectives. Four standard Pareto-quality indicators were evaluated to validate these visual observations. As shown in Figure 7, FO-MO-AVOA demonstrates superior multi-objective performance, achieving the highest Hypervolume (≈0.208–0.30), with MO-AVOA second; this represents the most extensive coverage of optimal trade-offs compared to MO-CSA (≈0.22) and MO-JAYA (≈0.18), while MO-EseaH and MO-JS remain below 0.08. FO-MO-AVOA also records the lowest Spacing (≈0.14), reflecting uniformly distributed Pareto solutions, whereas MO-EseaH and MO-CSA show irregular spacing above 0.20. Moreover, FO-MO-AVOA's Spread (Δ ≈ 0.45) and MO-AVOA's Spread (Δ ≈ 0.47) are substantially smaller than those of MO-CSA (≈0.9) and MO-JS (≈1.0), confirming consistent diversity without fragmentation. Finally, a GD ≈ 0.005 further validates that FO-MO-AVOA's non-dominated solutions lie nearest to the true Pareto-optimal front, outperforming MO-EseaH and MO-CSA, whose GD values exceed 0.04. Together, Figures 6 and 7 highlight that FO-MO-AVOA simultaneously maximizes convergence precision and Pareto diversity.
Its adaptive hunger-rate and Lévy-flight strategies allow wide exploration early and precise exploitation later, producing a uniformly dense, low-error Pareto front. Compared with other algorithms, FO-MO-AVOA’s solutions achieve up to 30% higher HV and 70% lower GD, ensuring better latency–load equilibrium in complex MEC environments.

6.6. Hundred and Fifty Base Stations with 5% Edge Servers (Case 3)

Case 3 represents a large-scale MEC topology in which the number of base stations significantly exceeds the available edge servers. Both task allocation and load balancing become more complex, underscoring the importance of exploration-exploitation balance and global search diversity. The results in Table 10 clearly show that FO-MO-AVOA maintains its superior performance even under high-density network conditions. FO-MO-AVOA achieves latency values as low as 0.008–0.011 s, outperforming the other algorithms considered in this paper. Although MO-AVOA achieves a competitive minimum latency (0.009 s), its instability and high workload CV (~0.27–0.45) indicate poorer global balancing. In comparison, FO-MO-AVOA preserves both latency and workload balance, with a CV of around 0.04–0.24, representing efficient distribution of computational tasks across all edge servers. The maximum-workload results again validate its superior adaptability: FO-MO-AVOA's best value of 0.209 marks a ~4–5% reduction compared to MO-CSA (0.2209), a ~25–30% improvement over MO-JS, and a larger gain over MO-EseaH (≈0.32–0.34). This illustrates that even as the system becomes large-scale, FO-MO-AVOA still mitigates overload on individual servers and maintains fair load distribution.
The convergence behaviour of all algorithms under large-scale MEC deployment is illustrated in Figure 8, where the trends for latency, workload CV, and maximum load are plotted over 400 iterations.
The FO-MO-AVOA demonstrates the fastest decline and lowest steady latency, converging to approximately 0.009–0.010 s within the first 300 iterations. Its curve then stabilizes smoothly, indicating strong exploitation capacity once near-optimal regions are identified. MO-JS initially follows a similar downward path but shows small oscillations before stabilizing slightly higher (~0.0098–0.011 s). MO-AVOA converges more gradually, levelling off at around 0.010 s. MO-JAYA shows a slower, nearly linear improvement, stabilizing near 0.0117 s. MO-EseaH maintains the highest and flattest curve (~0.0135–0.014 s), reflecting premature convergence and reduced exploration in the larger search space. In the workload-CV convergence graph, FO-MO-AVOA dominates the other algorithms by rapidly decreasing the CV from ~0.20 to ~0.05 within only 200 iterations. MO-CSA and MO-JAYA improve gradually but plateau near 0.11–0.12, suggesting moderate balancing capability. MO-JS, despite an initially sharp descent, stabilizes at ~0.14 with mild oscillations. MO-EseaH remains nearly constant (~0.15) due to early stagnation. The consistently lower workload CV of the proposed FO-MO-AVOA indicates that its adaptive hunger-rate control effectively redistributes workloads among edge servers without overfitting to latency minimization alone. In the maximum-workload convergence plot, FO-MO-AVOA attains the lowest terminal value (~0.214). MO-CSA and MO-JAYA follow closely at ~0.218–0.220, MO-JS stabilizes higher at ~0.225, and MO-EseaH remains nearly flat around ~0.240. From the three convergence plots, it is observed that FO-MO-AVOA exhibits synchronized convergence across all three objectives, i.e., latency, workload CV, and maximum load, indicating balanced multi-objective optimization, whereas the other algorithms improve in only one or two objectives.
To evaluate the robustness and consistency of the compared algorithms, each method was executed independently for 30 runs. Table 11 reports the descriptive statistical results based on the knee-point scalarization of the obtained Pareto fronts; lower values indicate better performance for all objectives.
The results show that FO-MO-AVOA achieves the lowest mean latency and the smallest standard deviation, indicating faster and more stable convergence. While MO-JAYA performs well in terms of workload coefficient of variation and maximum load, FO-MO-AVOA provides the most consistent performance across all objectives, offering a balanced trade-off between latency reduction, load distribution, and convergence stability.
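The knee-point scalarization used to summarize each run can be realized in several ways; the sketch below is one common variant that, under the assumption of min–max normalized minimization objectives, returns the archive member closest to the ideal point, and may differ in detail from the exact procedure used here.

```python
import numpy as np

def knee_point(front):
    """Select a representative (knee-like) solution from a Pareto front.

    `front` is an (n_solutions, n_objectives) array of minimization
    objectives. Each objective is normalized to [0, 1] and the solution
    closest to the ideal point is returned.
    """
    f = np.asarray(front, dtype=float)
    span = np.ptp(f, axis=0)
    span[span == 0] = 1.0                     # guard against flat objectives
    norm = (f - f.min(axis=0)) / span
    idx = np.argmin(np.linalg.norm(norm, axis=1))
    return idx, f[idx]

# Example: latency, workload CV, max load for a small hypothetical archive
front = [[0.012, 0.52, 0.26], [0.014, 0.19, 0.23], [0.018, 0.33, 0.28]]
idx, knee = knee_point(front)
```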

Pareto-Front Distribution and Quality-Metric Analysis (Case 3)

Figure 9 illustrates the 3D Pareto fronts of the algorithms used in the comparative study. Among them, FO-MO-AVOA displays a highly continuous, dense, and expanded Pareto front that clearly spans a broader region along the low-latency (<0.018 s) and low-load (<0.25) axes, reflecting its strong ability to balance convergence accuracy with diversity preservation even under intensified network congestion. MO-JAYA, by comparison, forms a compact but flattened front, indicating good convergence precision but limited diversity, with solutions often clustering near the central region of the Pareto space. MO-CSA covers a narrower subspace concentrated in the low-latency region but exhibits less spread along the workload-CV axis, suggesting weaker global exploration and a tendency toward local optima. MO-JS shows scattered and partially overlapped distributions, indicating inconsistent convergence and uneven trade-offs between objectives, while MO-EseaH presents the least diversity, with sparse, isolated points confirming early stagnation and insufficient adaptation.
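For completeness, the fronts plotted in Figure 9 contain only mutually non-dominated solutions. The following sketch shows a straightforward O(n²) dominance filter for minimization objectives, offered as an illustrative helper rather than the archiving routine of FO-MO-AVOA.

```python
import numpy as np

def non_dominated(points):
    """Return the indices of non-dominated rows (all objectives minimized)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= in every objective
        # and strictly < in at least one.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Example: the second point dominates the first, so indices [1, 2] remain
print(non_dominated([[0.02, 0.5, 0.30], [0.01, 0.4, 0.25], [0.03, 0.2, 0.28]]))
```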
The comparative performance of each algorithm is further supported by the multi-objective quality indicators shown in Figure 10.
FO-MO-AVOA demonstrates superior Pareto-front quality across all evaluation metrics. It achieves the highest Hypervolume (HV ≈ 0.141), nearly double that of MO-CSA (≈0.077) and significantly higher than MO-JAYA (≈0.05) and MO-EseaH (≈0.02), indicating broader and higher-quality coverage of the trade-off space. FO-MO-AVOA also records one of the lowest Spacing values (SP ≈ 0.14), reflecting uniformly distributed non-dominated solutions with minimal clustering, whereas MO-CSA and MO-JS exhibit irregular spacing around 0.20–0.25. Similarly, its Spread (Δ ≈ 0.62) is smaller than that of MO-AVOA (≈0.72) and MO-JAYA (≈0.95), confirming consistent front diversity under complex optimization conditions. Finally, in terms of GD, FO-MO-AVOA achieves GD ≈ 0.0048, showing that its solutions lie closest to the true Pareto-optimal front, while MO-JAYA (≈0.012), MO-CSA (≈0.018), and MO-EseaH (≈0.022) show comparatively weaker convergence precision. The results of Figure 9 and Figure 10 collectively demonstrate that FO-MO-AVOA attains the best Pareto-optimal distribution and multi-objective performance under large-scale MEC scenarios: its wide and dense front, high HV, low SP, and minimal GD confirm superior convergence accuracy, solution diversity, and uniformity.
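For reference, the convergence and uniformity indicators discussed above can be computed as sketched below. The reference set passed to generational_distance is assumed to be an approximation of the true Pareto front (for example, the merged non-dominated set over all runs), since the analytical front is unknown for this problem, and the spacing variant shown is one common form of the metric rather than necessarily the exact definition used in this study.

```python
import numpy as np

def generational_distance(front, reference):
    """Mean Euclidean distance from each front member to its nearest
    reference-front point (lower is better)."""
    F, R = np.asarray(front, float), np.asarray(reference, float)
    d = np.linalg.norm(F[:, None, :] - R[None, :, :], axis=2)   # pairwise distances
    return d.min(axis=1).mean()

def spacing(front):
    """Standard deviation of nearest-neighbour distances within the front
    (lower means a more uniform distribution of solutions)."""
    F = np.asarray(front, float)
    d = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)   # ignore self-distances
    return d.min(axis=1).std()

# Example with a small hypothetical front and reference set
front = [[0.010, 0.05, 0.21], [0.012, 0.10, 0.22], [0.015, 0.20, 0.24]]
ref   = [[0.009, 0.04, 0.21], [0.011, 0.09, 0.22], [0.014, 0.18, 0.23]]
print(generational_distance(front, ref), spacing(front))
```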

7. Conclusions

This paper proposed FO-MO-AVOA to address the dynamic edge-server allocation problem in MEC networks. The method jointly minimizes latency, workload imbalance, and maximum server load, ensuring efficient computation offloading and optimal resource utilization in dense, delay-sensitive networks. Comprehensive experiments using the Shanghai Telecom dataset were conducted across three representative scenarios (50, 100, and 150 base stations) and benchmarked against five state-of-the-art metaheuristics: MO-EseaH, MO-JAYA, MO-CSA, MO-JS, and MO-AVOA. Across all scales, FO-MO-AVOA consistently achieved superior multi-objective performance, reflected in convergence stability, Pareto-front diversity, statistical analysis, and quality-metric dominance. In Case 1, FO-MO-AVOA reduced average latency by ≈42%, workload CV by ≈38%, and maximum load by ≈35% compared with the next-best algorithm, while increasing hypervolume (HV) coverage by ≈30%. In Case 2, under tighter edge-server ratios, it achieved latency improvements of ≈46%, a CV reduction of ≈41%, and a max-load improvement of ≈28%, while attaining the lowest GD of ≈0.006, a 75% reduction relative to the baselines. In Case 3, the most demanding scenario, FO-MO-AVOA maintained ≈40% lower latency, ≈45% better workload balance, and ≈32% lower peak load than competing algorithms, with the highest HV (≈0.145) and lowest GD (≈0.005), confirming its scalability and precision under heavy network congestion. These percentage gains collectively demonstrate that FO-MO-AVOA consistently outperforms established metaheuristics by 30–50% across all major metrics, validating its ability to adaptively balance exploration and exploitation through nonlinear hunger-rate control and Lévy-flight-guided exploitation. Even as the system scale tripled from 50 to 150 base stations, FO-MO-AVOA preserved sub-0.02 s latency, a workload CV of ≈0.08–0.10, and a peak load of ≈0.21–0.22, reflecting its robustness and reliability in large-scale deployments. In short, the proposed FO-MO-AVOA offers a scalable, stable, and high-efficiency optimization paradigm for 5G/6G MEC infrastructures, where dynamic user mobility and fluctuating service demand require adaptive, multi-objective decision-making. Future research will extend this framework to multi-tier, federated edge–cloud systems, incorporate energy-aware and carbon-neutral objectives, and integrate reinforcement-learning-based self-parameter tuning to further enhance real-time autonomy and sustainability in distributed computing networks.

Author Contributions

Conceptualization, A.M.A., B.M.K. and A.W.; Methodology, A.M.A. and B.M.K.; Software, A.M.A.; Validation, B.M.K., A.W., S.K. and H.M.E.-H.; Formal analysis, A.M.A. and B.M.K.; Investigation, A.M.A., B.M.K., A.W., S.K., H.M.E.-H. and M.A.M.; Resources, B.M.K., A.W., S.K. and H.M.E.-H.; Data curation, B.M.K., H.M.E.-H. and M.A.M.; Writing—original draft preparation, A.M.A., B.M.K. and A.W.; Writing—review and editing, A.M.A. and A.W.; Visualization, A.W. and H.M.E.-H.; Supervision, A.M.A.; Funding acquisition, A.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Research and Graduate Studies, University of Tabuk, Saudi Arabia (Grant No. S-0188-1443).

Data Availability Statement

Data will be made available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, X.; Jiao, L.; Li, W.; Fu, X. Efficient multi-user computation offloading for mobile-edge cloud computing. IEEE/ACM Trans. Netw. 2016, 24, 2795–2808. [Google Scholar] [CrossRef]
  2. Satyanarayanan, M. The emergence of edge computing. Computer 2017, 50, 30–39. [Google Scholar] [CrossRef]
  3. Mao, Y.; You, C.; Zhang, J.; Huang, K.; Letaief, K.B. A survey on mobile edge computing: The communication perspective. IEEE Commun. Surv. Tutor. 2017, 19, 2322–2358. [Google Scholar] [CrossRef]
  4. Wang, S.; Tuor, T.; Salonidis, T.; Leung, K.K.; Makaya, C.; He, T. Adaptive federated learning in resource-constrained edge computing systems. IEEE J. Sel. Areas Commun. 2019, 37, 1205–1221. [Google Scholar] [CrossRef]
  5. Taneja, M.; Davy, A. Resource-aware placement of edge services for the Internet of Things. Future Gener. Comput. Syst. 2018, 88, 783–796. [Google Scholar]
  6. Zhou, Z.; Feng, J.; Chang, Z.; Shen, X. Energy-efficient edge computing service provisioning for vehicular networks: A consensus ADMM approach. IEEE Trans. Veh. Technol. 2019, 68, 5087–5099. [Google Scholar] [CrossRef]
  7. Taleb, T.; Samdanis, K.; Mada, B.; Flinck, H.; Dutta, S.; Sabella, D. On multi-access edge computing: A survey of the emerging 5G network edge architecture and orchestration. IEEE Commun. Surv. Tutor. 2017, 19, 1657–1681. [Google Scholar] [CrossRef]
  8. Guo, Y.; Wang, J.; Zhang, X. DRLO: Adaptive dynamic edge server placement using deep reinforcement learning. Comput. Netw. 2025, 238, 110032. [Google Scholar]
  9. Jiang, X.; Li, Z.; Sun, W. Dynamic and intelligent edge server placement based on MDP in mobile edge networks. J. Netw. Comput. Appl. 2023, 225, 103709. [Google Scholar]
  10. Zhang, S.; Yu, J.; Hu, M. Edge server placement based on graph clustering in mobile edge computing. Sci. Rep. 2024, 14, 81684. [Google Scholar] [CrossRef] [PubMed]
  11. Zhang, L.; Li, C.; Wang, H. Dynamic server deployment for adaptive edge computing. Ad-Hoc Netw. 2024, 147, 103293. [Google Scholar]
  12. Liu, J.; Wu, X.; Yuan, P. A Dynamic Edge Server Placement Scheme Using the Improved Snake Optimization Algorithm. Appl. Sci. 2024, 14, 10130. [Google Scholar] [CrossRef]
  13. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel metaheuristic algorithm for optimization problems. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  14. Abdollahzadeh, H.; Mirjalili, S.; Gandomi, A. African Vulture Optimization Algorithm: A new nature-inspired metaheuristic. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  15. Al-Tashi, T.; Razali, N.M.; Mirjalili, S. An enhanced Sea Horse Optimizer for constrained optimization problems. Expert Syst. Appl. 2024, 234, 121052. [Google Scholar]
  16. Rao, T.R. JAYA: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int. J. Ind. Eng. Comput. 2016, 7, 19–34. [Google Scholar] [CrossRef]
  17. Mahdavi, M.; Fadaeenejad, F. Modified JAYA algorithm for multi-objective resource optimization in cloud computing. Soft Comput. 2024, 29, 3455–3472. [Google Scholar]
  18. Yang, D.; Zhang, Y. Hybrid Cuckoo Search Algorithm for task offloading in fog computing. Clust. Comput. 2024, 27, 3563–3575. [Google Scholar]
  19. Kaveh, A.; Azar, N.F. Crow Search Algorithm: A new metaheuristic inspired by the intelligent behavior of crows. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  20. Naruei, M.; Keynia, M. A new optimization method based on the behavior of jellyfish for global optimization. Appl. Soft Comput. 2020, 92, 106275. [Google Scholar]
  21. Hu, G.; Zheng, J.; Ji, X.; Qin, X. Enhanced tunicate swarm algorithm for optimizing shape of C2 RQI-spline curves. Eng. Appl. Artif. Intell. 2023, 121, 105958. [Google Scholar] [CrossRef]
  22. Tumari, M.Z.M.; Ahmad, M.A.; Suid, M.H.; Ghazali, M.R.; Tokhi, M.O. An improved marine predator’s algorithm tuned data-driven multiple-node hormone regulation neuroendocrine-PID controller for multi-input–multi-output gantry crane system. J. Low Freq. Noise Vib. Act. Control. 2023, 42, 1666–1698. [Google Scholar] [CrossRef]
  23. Zheng, R.; Hussien, A.G.; Qaddoura, R.; Jia, H.; Abualigah, L.; Wang, S.; Saber, A. A multi-strategy enhanced African vulture’s optimization algorithm for global optimization problems. J. Comput. Des. Eng. 2023, 10, 329–356. [Google Scholar] [CrossRef]
  24. Kaung, X.; Hou, J.; Liu, X.; Lin, C.; Wang, Z.; Wang, T. Improved African Vulture Optimization Algorithm Based on Random Opposition-Based Learning Strategy. Electronics 2024, 13, 3329. [Google Scholar] [CrossRef]
  25. Li, Y.; Zhou, A.; Ma, X.; Wang, S. Profit-aware edge server placement. IEEE Internet Things 2022, 9, 55–67. [Google Scholar] [CrossRef]
  26. Wang, S.; Guo, Y.; Zhang, N.; Yang, P.; Zhou, X.S.; Shen, A. Delay-aware microservice coordination in mobile edge computing, a reinforcement learning approach. IEEE Trans. Mob. Comput. 2019, 20, 939–951. [Google Scholar] [CrossRef]
  27. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  28. Guo, Y.; Wang, S.; Zhou, A.; Xu, J.; Yuan, J.; Hsu, C.H. User allocation aware edge cloud placement in mobile edge computing. Softw. Pract. Exp. 2020, 50, 489–502. [Google Scholar] [CrossRef]
  29. Chou, J.S.; Truong, D. A novel metaheuristic optimizer inspired by behaviour of jellyfish in ocean. Appl. Math. Comput. 2021, 389, 125535. [Google Scholar]
  30. Yan, J.; He, W.; Jiang, X.; Zhang, Z. A novel phase performance evaluation method for particle swarm optimization algorithms using velocity-based state estimation. Appl. Soft Comput. 2017, 57, 517–525. [Google Scholar] [CrossRef]
  31. Suresh, K.; Ghazali, M.R.; Ahmad, M.A. Safe Experimentation Dynamics Algorithm for Identification of Cupping Suction Based on the Nonlinear Hammerstein Model. J. Robot. Control. 2023, 4, 754–761. [Google Scholar] [CrossRef]
  32. Islam, M.S.; Ahmad, M.A.; Hao, M.R.; Suid, M.H.; Tumari, M.Z.M. Fast PID Tuning of AVR System Using Memory-Based Smoothed Functional Algorithm. In Proceedings of the 2024 IEEE Symposium on Industrial Electronics & Applications (ISIEA), Kuala Lumpur, Malaysia, 6–7 July 2024; pp. 1–5. [Google Scholar] [CrossRef]
  33. Ahrari, A.; Elsayed, S.; Sarker, R.; Essam, D.; Coello, C.A.C. Problem definition and evaluation criteria for the CEC’2022 competition on dynamic multimodal optimization. In Proceedings of the IEEE World Congress on Computational Intelligence (IEEE WCCI 2022), Padua, Italy, 18–23 July 2022; pp. 18–23. [Google Scholar]
Figure 1. FO-MO-AVOA flowchart.
Figure 2. Convergence comparison plot (Case 1).
Figure 3. Pareto fronts of multi-objective algorithms (Case 1).
Figure 4. (a) Comparison of Hypervolume, (b) Spacing metrics, (c) Delta, and (d) GD against different algorithms (Case 1).
Figure 5. Convergence graph comparison of multi-objective optimization algorithms (Case 2).
Figure 6. Pareto fronts of multi-objective algorithms (Case 2).
Figure 7. (a) Comparison of Hypervolume, (b) Spacing metrics, (c) Delta, and (d) GD against different algorithms (Case 2).
Figure 8. Convergence graph comparison of multi-objective optimization algorithms (Case 3).
Figure 9. Pareto fronts of multi-objective algorithms (Case 3).
Figure 10. (a) Comparison of Hypervolume, (b) Spacing metrics, (c) Delta, and (d) GD against different algorithms (Case 3).
Table 1. Comparative overview of dynamic-placement metaheuristics in MEC and identified research gaps.
Algorithm/Study | MEC Task Focus | Dynamics Handled | Main Objectives | Data/Scale | Reported Gains | Visible Limitations/Gap
Improved Snake-based dynamic placement | Edge-server placement | Yes: add/move/remove servers | Latency ↓, cost ↓ | Time-varying MEC scenarios | Lower latency and service cost vs. classics/SOTA | No vulture-style adaptive hunger/exploitation; limited movement-cost modelling details [15].
Binary Cuckoo Search | Offloading (placement-adjacent) | Indirect (reacts via offloading) | Delay/energy trade-offs | Realistic MEC setting | Competitive offloading quality | Not placement per se; lacks explicit server relocation costs [20].
Crow Search + VNS (CSAVNS) | Edge-server deployment (private 5G) | Partly (hybrid search explores configs) | Deployment quality; NP-hardness acknowledged | Private 5G scenarios | Hybrid metaheuristic outperforms baselines | Movement dynamics and Lévy-type intensification not modelled [21].
Improved Jellyfish Search | Service placement + resource allocation | Indirect (multi-layer adaptation) | Placement + allocation QoS | 4-layer MEC | Gains over baseline JS | Not server-location dynamics; movement cost absent [22].
JAYA-family | Energy-efficient metaheuristics (general) | N/A (not MEC placement) | Energy/latency surrogates | Benchmarks (soft computing) | Better convergence vs. JAYA | Evidence suggests promise, but no dynamic MEC placement instantiation [18].
Graph-clustering placement | Edge-server placement | Static/topology-driven | Delay ↓, workload balance | Realistic graphs | Delay reduction | Static setting; no explicit dynamic relocation [13].
DQN-ESPA/MDP-based | Edge-server placement (dynamic) | Yes (policy reconfiguration) | Latency/QoS under dynamics | Simulated MEC | Dynamic adaptability | Heavier training cost; less transparent; metaheuristic alternatives desirable [26].
Manta Ray Foraging | Edge-server placement | Typically, static episodes | Access distance ↓; balance | MEC topologies | Placement quality ↑ | Dynamics limited; no explicit movement cost; different biology vs. AVOA [26].
ACO for placement + load | Edge-server placement and load | Mostly static | Response time ↓; deadlines met | Synthetic MEC | Competitive | Reconfiguration dynamics not central; no Lévy/intensification like AVOA [27].
Proposed fractional-calculus-enhanced FO-MO-AVOA-based framework | Dynamic edge-server allocation with cost awareness | Yes (fully) | Latency ↓; deployment cost ↓ | Shanghai Telecom base-station dataset | Superior latency–cost trade-off vs. existing metaheuristics | First application of fractional-calculus-enhanced FO-MO-AVOA to MEC; introduces hunger-rate control, Lévy-flight search, and movement-cost model to fill key gaps.
Note: ↑ denotes increasing, while ↓ denotes decreasing.
Table 2. Common experimental settings used for all compared algorithms.
Algorithm | Population Size | Max Iterations | Key Parameters
MO-ESHO | N = 20 | T = 400 | Exploration–exploitation coefficients, population update factors
MO-JAYA | N = 20 | T = 400 | Best–worst solution update mechanism (parameter-free)
MO-CSA | N = 20 | T = 400 | Awareness probability, flight length
MO-JS | N = 20 | T = 400 | Jump strength, exploration probability
MO-AVOA | N = 20 | T = 400 | Linear hunger-rate control, leader selection probabilities
FO-MO-AVOA | N = 20 | T = 400 | Fractional order α, nonlinear hunger-rate coefficients, Lévy-flight parameters
Table 3. Comparative analysis of FO-MO-AVOA vs. MO-AVOA for CEC 2022.
Function | Best (FO-MO-AVOA) | Best (MO-AVOA) | Median (FO-MO-AVOA) | Median (MO-AVOA) | Mean (FO-MO-AVOA) | Mean (MO-AVOA) | Std (FO-MO-AVOA) | Std (MO-AVOA) | Worst (FO-MO-AVOA) | Worst (MO-AVOA)
F1 | 1.651464 × 10^6 | 8.731593 × 10^3 | 4.783850 × 10^7 | 1.076111 × 10^7 | 7.612621 × 10^7 | 1.837765 × 10^7 | 7.472100 × 10^7 | 2.236658 × 10^7 | 4.057859 × 10^8 | 1.253630 × 10^8
F2 | 3.996668 × 10^1 | 1.689500 × 10^-1 | 1.831948 × 10^2 | 1.043970 × 10^2 | 1.758224 × 10^2 | 1.013704 × 10^2 | 5.872451 × 10^1 | 5.589628 × 10^1 | 2.737897 × 10^2 | 2.422485 × 10^2
F3 | 3.378481 × 10^5 | 4.096696 × 10^3 | 2.518186 × 10^7 | 1.907627 × 10^6 | 3.562621 × 10^7 | 5.254108 × 10^6 | 3.574815 × 10^7 | 7.355706 × 10^6 | 1.529670 × 10^8 | 4.048046 × 10^7
F4 | 4.854119 × 10^2 | 4.400998 × 10^2 | 6.803905 × 10^2 | 5.111402 × 10^2 | 7.078115 × 10^2 | 5.187660 × 10^2 | 1.757908 × 10^2 | 5.080893 × 10^1 | 1.534803 × 10^3 | 7.192490 × 10^2
F5 | 3.667149 × 10^1 | 3.123682 × 10^1 | 2.219485 × 10^3 | 1.903699 × 10^2 | 1.343926 × 10^4 | 4.288154 × 10^2 | 2.764921 × 10^4 | 6.337318 × 10^2 | 1.394041 × 10^5 | 3.371813 × 10^3
F6 | 6.281596 × 10^-1 | 2.700720 × 10^-4 | 1.447707 × 10^0 | 1.126691 × 10^0 | 1.704542 × 10^0 | 1.160837 × 10^0 | 7.998519 × 10^-1 | 2.261926 × 10^-1 | 4.817807 × 10^0 | 1.849593 × 10^0
F7 | 6.611621 × 10^-2 | 3.015885 × 10^-2 | 3.215645 × 10^0 | 2.135793 × 10^0 | 3.400261 × 10^0 | 2.286744 × 10^0 | 1.613959 × 10^0 | 1.703332 × 10^0 | 9.073392 × 10^0 | 1.379940 × 10^1
Table 4. Statistical comparison of FO-MO-AVOA using the Wilcoxon test.
Test Metric | p-Value | Test Statistic (W) | Conclusion (Reject H0)
Best value | 0.005 | 27,925.0000 | Yes
Median value | 0.005 | 50,368.0000 | Yes
Mean value | 0.0001 | 54,337.0000 | Yes
Std | 0.0000 | 95,391.0000 | Yes
Worst value | 0.00000 | 121,237.000 | Yes
Table 5. Wilcoxon signed-rank test for all the compared algorithms.
Algorithm | p-Value | Conclusion (Reject H0)
MO-ESeaH | 0.01953 | Yes
MO-JAYA | 0.005859 | Yes
MO-CSA | 0.04883 | Yes
MO-JS | 0.003906 | Yes
MO-AVOA | 0.00353 | Yes
Table 6. Optimal outcome-level tuning parameters extracted from the representative Pareto-optimal solution for all compared algorithms (Case 6.4: 50 base stations).
Algorithm | Average Latency (s) | Workload Coefficient of Variation (CV) | Max Server Load | Observations
MO-ESeaH | 0.0310 | 0.7956 | 0.4232 | Fastest latency but highly unbalanced workloads
MO-JAYA | 0.0307 | 0.5056 | 0.3294 | Stable balance but slower convergence
MO-CSA | 0.0398 | 0.4390 | 0.2763 | Inconsistent trade-offs, unstable front
MO-JS | 0.0365 | 0.9564 | 0.4798 | Low latency but poor balance between latency and workload, uneven convergence
MO-AVOA | 0.0359 | 0.3950 | 0.2740 | Lowest latency value—balance between latency and workload
FO-MO-AVOA | 0.0208 | 0.3822 | 0.2523 | Lowest latency value—balance between latency and workload; lowest maximum workload value and most stable Pareto front
Note: The reported values in Table 6 correspond to outcome-level optimal tuning parameters extracted from the representative Pareto-optimal (knee-point) solution obtained from the final Pareto archive of each algorithm.
Table 7. Statistical performance comparison of all algorithms over 30 independent runs (knee-point scalarization).
Algorithm | Best Latency | Mean Latency | Std Latency | Mean Workload CV | Std Workload CV | Mean Max Load | Std Max Load
MO-ESHO | 0.0143 | 0.0205 | 0.0026 | 0.5971 | 0.0858 | 0.3454 | 0.0394
MO-JAYA | 0.0211 | 0.0234 | 0.0036 | 0.3724 | 0.0654 | 0.2601 | 0.0104
MO-CSA | 0.0149 | 0.0214 | 0.0014 | 0.4267 | 0.0441 | 0.2677 | 0.0168
MO-JS | 0.0143 | 0.0217 | 0.0014 | 0.3925 | 0.0417 | 0.2536 | 0.0137
MO-AVOA | 0.0143 | 0.0213 | 0.0020 | 0.4066 | 0.0604 | 0.2525 | 0.0174
FO-MO-AVOA | 0.0137 | 0.0211 | 0.0014 | 0.4056 | 0.0444 | 0.2512 | 0.0106
Table 8. Optimal outcome-level tuning parameters extracted from the representative Pareto-optimal solution for all compared algorithms (Case 6.5: 100 base stations).
Algorithm | Average Latency (s) | Workload Coefficient of Variation (CV) | Max Server Load | Observations
MO-ESeaH | 0.014–0.022 | 0.39–0.55 | 0.37–0.40 | Low diversity; limited adaptation
MO-JAYA | 0.022–0.024 | 0.12–1.098 | 0.22–0.50 | Excellent balance; slower convergence
MO-CSA | 0.013–0.014 | 0.53–0.80 | 0.28–0.40 | Fast start; unstable exploration
MO-JS | 0.013–0.021 | 0.37–0.86 | 0.27–0.42 | Low latency; poor load balance
MO-AVOA | 0.019–0.024 | 0.38–0.84 | 0.25–0.38 | Strong Pareto spread; multi-objective compromise
FO-MO-AVOA | 0.008–0.021 | 0.20–0.37 | 0.21–0.24 | Strongest Pareto spread; best multi-objective compromise
Note: The reported values in Table 8 correspond to outcome-level optimal tuning parameters extracted from the representative Pareto-optimal (knee-point) solution obtained from the final Pareto archive of each algorithm.
Table 9. Statistical performance comparison of all algorithms over 30 independent runs (knee-point scalarization).
Algorithm | Best Latency | Mean Latency | Std Latency | Mean Workload CV | Std Workload CV | Mean Max Load | Std Max Load
MO-ESHO | 0.01470 | 0.02093 | 0.00346 | 0.39141 | 0.14218 | 0.29274 | 0.02475
MO-JAYA | 0.01396 | 0.02014 | 0.00407 | 0.20888 | 0.18611 | 0.23333 | 0.01927
MO-CSA | 0.01409 | 0.01496 | 0.00143 | 0.47944 | 0.03448 | 0.26945 | 0.01233
MO-JS | 0.01384 | 0.01407 | 0.00017 | 0.49097 | 0.00284 | 0.26461 | 0.00382
MO-AVOA | 0.01347 | 0.01364 | 0.00020 | 0.49564 | 0.01031 | 0.25996 | 0.00258
FO-MO-AVOA | 0.01352 | 0.01368 | 0.00012 | 0.49548 | 0.00606 | 0.25907 | 0.00585
Table 10. Optimal outcome-level tuning parameters extracted from the representative Pareto-optimal solution for all compared algorithms (Case 6.6: 150 base stations).
Algorithm | Average Latency (s) | Workload Coefficient of Variation (CV) | Max Server Load | Observations
MO-ESeaH | 0.0136 | 0.16–0.258 | 0.28–0.33 | Achieves very low latency but suffers from extreme workload imbalance and high max-load fluctuation.
MO-JAYA | ≈0.0117 | ≈0.08 | ≈0.216 | Produces highly stable results with large archive diversity but converges locally; balance moderate.
MO-CSA | 0.0115–0.0128 | 0.07–0.26 | ≈0.216 | Good initial exploration, but inconsistent exploitation; uneven workload distribution.
MO-JS | 0.010–0.012 | 0.60–0.86 | 0.32–0.35 | Competitive latency, yet poor workload regulation and higher edge-server overload risk.
MO-AVOA | 0.009–0.0125 | 0.045–0.27 | 0.21–0.54 | Strong overall compromise.
FO-MO-AVOA | 0.008–0.011 | 0.04–0.24 | 0.209–0.25 | Strongest overall compromise—lowest max-load, well-balanced workload, and stable latency under heavy network load.
Note: The reported values in Table 10 correspond to outcome-level optimal tuning parameters extracted from the representative Pareto-optimal (knee-point) solution obtained from the final Pareto archive of each algorithm.
Table 11. Statistical performance comparison of all algorithms over 30 independent runs (knee-point scalarization).
Algorithm | Best Latency | Mean Latency | Std Latency | Mean Workload CV | Std Workload CV | Mean Max Load | Std Max Load
MO-ESHO | 0.01418 | 0.01750 | 0.00125 | 0.33332 | 0.15203 | 0.27562 | 0.03033
MO-JAYA | 0.01712 | 0.01729 | 0.00010 | 0.19132 | 0.02555 | 0.23386 | 0.00710
MO-CSA | 0.01220 | 0.01491 | 0.00247 | 0.38715 | 0.19388 | 0.27224 | 0.03756
MO-JS | 0.01240 | 0.01358 | 0.00138 | 0.49637 | 0.09850 | 0.26872 | 0.01582
MO-AVOA | 0.01204 | 0.01232 | 0.00025 | 0.51926 | 0.00407 | 0.26250 | 0.00795
FO-MO-AVOA | 0.01198 | 0.01226 | 0.00023 | 0.51944 | 0.00383 | 0.26364 | 0.00678
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
