Article

A Multi-Objective Simulated Annealing Local Search Algorithm in Memetic CENSGA: Application to Vaccination Allocation for Influenza

College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(21), 15347; https://doi.org/10.3390/su152115347
Submission received: 1 October 2023 / Revised: 22 October 2023 / Accepted: 22 October 2023 / Published: 27 October 2023

Abstract

Flu vaccine allocation is of great importance for safeguarding public health and mitigating the impact of influenza outbreaks. In this regard, decision-makers face multifaceted challenges, including limited vaccine supply, targeting vulnerable people, adapting to regional variations, ensuring fairness in distribution, and promoting public trust. The objective of this work is to address the vaccination allocation problem by introducing a novel optimization scheme with the simulated annealing (SA) algorithm. A dual-objective model is developed to both manage infection rates and minimize the unit cost of the vaccination campaign. The proposed approach is designed to promote convergence toward the best Pareto front in multi-objective optimization, wherein SA attempts to embed diversity and uniformity within a memetic version of the controlled elitism nondominated sorting genetic algorithm (CENSGA). To model the underlying vaccination allocation problem, the dynamics of the disease are described using the susceptible–exposed–infectious–recovered (SEIR) epidemiological model to better express hidden flu characteristics. This model specifically analyzes the effects of pulsive vaccination allocation in two phases aiming to minimize the number of infected individuals to an acceptable level in a finite amount of time, which can help in stabilizing the model against sudden flu endemics over the long run. The computational experiments show that the proposed algorithm effectively explores the extensive search space of the vaccination allocation problem. The results of the suggested framework indicate that the obtained Pareto front best represents complete vaccination campaigns. The findings of this research can help in evidence-based decision making that can optimize flu vaccine distribution, contribute to the prevention of illness and reduction in hospitalizations, and potentially save countless lives.

1. Introduction

The significance of the impact of vaccination on public health and the economy is widely recognized [1]. Starting in the nineteenth century, there have been numerous successful vaccination campaigns for many vaccine-preventable diseases [2]. Nowadays, vaccination against disease is a fundamental tool for controlling epidemics among populations when a licensed vaccine formula is available. At present, proper vaccination planning and organization are still contentious and demanding tasks. Particularly in pandemic situations, such as influenza, COVID-19, and other respiratory viruses, campaigns work to eradicate the disease or maintain it at a reasonable level, and with minimal cost. Seasonal influenza causes approximately one billion cases of infection annually around the world, including 3–5 million cases of severe illness [3]. Until recently, respiratory deaths were estimated at 250,000 to 500,000 annually; however, newer estimates indicate a striking increase to around 290,000 to 650,000 cases annually [4]. It is also estimated that around 99% of deaths in infants and toddlers are influenza-related in developing countries [5,6]. Moreover, the economic burden is estimated to be USD 11.2 billion in the US [7], EUR 6 billion to 14 billion in the European Union [8], and CNY 26.38 billion in China [9] annually.
The influenza virus is notably perilous due to its ability to mutate. Vaccines potently reduce the prevalence of seasonal influenza, with their formulations being adjusted annually based on guidance from the WHO. Nonetheless, for effective decision making in intervention planning, policymakers require data on anticipated expenses and quantities associated with vaccine implementation. The objective of this research was to develop a novel optimization model targeting both infection rates and the cost of vaccination campaigns, which can help governments and decision-makers achieve their intended goals by making well-informed choices regarding the optimal allocation of vaccines each year.
The behavior of diseases can be expressed using mathematical models. One of the most famous models that describes the spread of diseases in a population is the SEIR (Susceptible–Exposed–Infectious–Recovered) model [10]. It is formulated with deterministic nonlinear differential equations to observe a disease’s natural behavior over a specified time horizon, wherein the whole population is separated into four heterogeneous groups: Susceptible (individuals who are healthy but not immune to the risk of contagion), Exposed (individuals who are in an incubation period showing no symptoms but who can transmit the disease to susceptible individuals), Infected (individuals who are sick and showing symptoms with the ability to transmit the disease to susceptible individuals), and Recovered (individuals who have healed or were vaccinated and became immune to the risk of the contagion). SEIR can be used to empower the vaccination allocation process.
In the context of vaccine allocation, two independent vaccine allocation strategies are usually employed to estimate the timing and rate of vaccination based on specified goals, namely, contingent control and guardian control. Contingent control typically involves allocating different numbers of vaccine doses in different time windows for individuals who meet certain eligibility criteria, aiming to contain the spread of the disease. Here, the vaccination strategy is led by the current health conditions to obtain the most desirable outcome by ensuring that individuals who are more at risk take the vaccine first. Conversely, guardian control makes informed decisions by taking into account various factors, such as population health disparities, demographic groups, and transmission rates. The goal of the decision-maker here is to maximize both public welfare and quality of life, wherein a combination of predetermined time and vaccination ratios will be carried out regularly. In general, combining both control strategies efficiently requires careful consideration of local public health goals and policies. Consequently, by considering population features, such as age groups, risk groups, vaccine efficacy, transmission rates, and spatial living conditions, specific vaccine allocation strategies for each country, or even a country’s regions, may be created.
Thus, our proposed vaccination control model is suggested as a two-phase process to provide a complete vaccination campaign, with the main target of minimizing both the infection volume and the vaccination campaign costs. Phase I corresponds to the contingent variable control, which is responsible for decrementing the number of infected individuals to an acceptable level in a specified period, starting from the onset of the contagious disease to an equilibrium point. Simultaneously, Phase II, the guardian static control, completes the task of Phase I by stabilizing the infection volume to avoid a potential outbreak. The complete vaccination campaign combines solutions from both phases, each representing a point in the solution space.
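To make the two-phase structure concrete, the sketch below shows one plausible way a complete campaign could be encoded as a candidate solution; the field names, the number of pulses, and the values shown are illustrative assumptions of ours, not the paper's actual representation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VaccinationCampaign:
    """Hypothetical candidate-solution encoding combining both phases."""
    pulse_times: List[float]   # Phase I (contingent): days on which pulses are applied
    pulse_rates: List[float]   # Phase I: fraction of susceptibles vaccinated per pulse
    guardian_rate: float       # Phase II (guardian): fixed periodic vaccination ratio

# one point in the solution space: three contingent pulses plus a static rate
camp = VaccinationCampaign(pulse_times=[0.0, 14.0, 28.0],
                           pulse_rates=[0.15, 0.10, 0.05],
                           guardian_rate=0.02)
```

Under such an encoding, each candidate evaluated by the optimizer corresponds to one complete campaign, i.e., one point in the solution space.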
Currently, multi-objective optimization is a trending research topic in the scientific community. This is connected to the multidimensional, discontinuous, nonconvex, and difficult-to-evaluate nature of real-world problems, especially control problems [2]. Unlike single-objective optimization, where there is a unique optimum solution, multi-objective algorithms produce multiple solutions, collectively called nondominated points, which belong to a Pareto front. Each nondominated point is considered equivalent to the other points of the Pareto front, since enhancing one objective of a nondominated point is accompanied by degradation in the other objectives. Thus, the optimal solution in multi-objective optimization is designated by the decision-makers according to their preferences. Recently, multi-objective metaheuristic and evolutionary algorithms have dominated the field of multi-objective optimization [11], particularly simulated annealing (SA) and other heuristic algorithms. This is attributed to several factors: (1) the ability to obtain multiple solutions in a single run; (2) freedom from derivative information (gradient-based algorithms, for instance, require the derivative of the error with respect to each potential move to minimize the mean squared error, which helps mainly when the global minimum is well defined; simulated annealing, by contrast, does not rely on derivative knowledge, so its mechanism cannot be misled by the attraction of local minima [12]); (3) asymptotic convergence to a Pareto-optimal front; (4) applicability to both continuous functions and combinatorial optimization problems; and (5) limited sensitivity to the shape of the Pareto front [11].
Genetic algorithms optimize a set of solutions by mimicking the natural behavior of evolution, whereby the next offspring are reproduced using the principles of selection, crossover/mutation, and recombination for each iteration. The new population is filtered using the selection process, promoting better solutions in order to form a parent population set containing good chromosomes with high-quality genes for subsequent rounds. The process of creating a new population is carried out via crossover/mutation and recombination mechanisms. The new population undergoes the selection process once more, and the cycle continues by applying crossover/mutation and recombination processes until it meets the stopping criteria. Solutions are characterized and compared using the dominance property, so according to their relative state among other solutions within the population, either they are dominated by others or dominant over other solutions. When the process completes, the best nondominated solutions survive to form a Pareto front [2,13].
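The generational cycle just described can be sketched as follows; this is a deliberately minimal single-objective illustration with a toy bit-counting fitness, not the multi-objective, dominance-based machinery discussed in this paper.

```python
import random

def evolve(pop_size=20, genes=8, generations=50, p_mut=0.1, seed=0):
    """Minimal GA loop: selection -> crossover/mutation -> recombination."""
    rng = random.Random(seed)
    fitness = lambda chrom: sum(chrom)  # toy objective: maximize the number of 1s
    pop = [[rng.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        # selection: binary tournaments promote better chromosomes as parents
        parents = [max(rng.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        # crossover + mutation produce the offspring population
        offspring = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, genes)
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                offspring.append([g ^ 1 if rng.random() < p_mut else g
                                  for g in child])
        # recombination: elitist merge of parents and offspring
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = evolve()
```

Here, the elitist truncation after merging parents and offspring plays the role of the selection filter that carries high-quality genes into subsequent rounds.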
Genetic algorithms are not necessarily the best choice in the presence of competing solutions that are close to true optimal solutions. In essence, integrating local search operators into the recombination step of a genetic algorithm is fundamental when a competitive GA is needed [14]. Memetic algorithms (MAs) are evolutionary algorithms used for intensive heuristic searches in optimization problems. Their distinguishing feature is combining the local search as a separate process to amend solutions (i.e., enhance their fitness), which has yielded substantial results in many problems [15], including the quadratic assignment problem [16], arc-routing problems [17], and scheduling problems [18]. The intensity of the search is maximized at different stages of the evolutionary process, since the algorithm can perform deeper searches around promising solutions. Moreover, since a balance between diversification and intensification mechanisms is required, diversification can be obtained with multiple approaches in memetic algorithms, such as restarting strategies, multistart strategies, and mutation [19]. In this work, a mutation strategy for diversification is considered.
This paper attempts to estimate an optimal Pareto set for the vaccination allocation problem using a memetic version of CENSGA (the controlled elitism nondominated sorting genetic algorithm) [20], which is a fast and elitist multi-objective optimization engine, with selection considering both the nondominance ranking and the crowding distance [21]. This article suggests an optimization scheme, embedding population-based simulated annealing (MOSA) as a local search module, with an archival hash table to help in controlling variability within the solution set. Combining these two approaches harnesses the genetic algorithm’s ability for extensive global search, while the local search helps in mitigating the potential of becoming trapped in the local optima and preserving the diversity of the population. This is crucial for tackling complex problems, where finding the global optimum is challenging, as is the case in vaccination allocation problems. In addition, the introduction of memetic CENSGA with MOSA in the field of vaccination allocation is considered unique, where the algorithm provides a level of adaptability to dynamic scenarios in which conditions change over time. This flexibility allows for adjustments in allocation strategies as new data or circumstances emerge. Another contribution of this work is helping in efficient resource allocation by distributing the limited vaccine doses optimally, considering factors like available healthcare resources, vaccine efficacy, and population demographics. The results were tested and verified using graphical and statistical comparisons covering different quality aspects: cardinality, convergence, distribution, and spread.
The remainder of this paper is organized as follows. In Section 2, related works are reviewed, followed by the introduction of some preliminaries in Section 3. After, the problem description is formulated in Section 4, and the optimization model proposed to solve the vaccination allocation problem is explained in Section 5. In Section 6, parameter tuning is briefly discussed, while the results and efficiency performance measures are provided and discussed in Section 7 and Section 8, respectively. Finally, concluding and perspective remarks are provided in Section 9.

2. Literature Review

A leading work on the application of SA as a solution metaheuristic for multi-objective problems was suggested by Serafini [22]; they proposed a single-point multi-objective simulated annealing (MOSA) that considers weighted scalarizing objective functions to determine acceptance probability. The weight vector of scalarizing the objective function is randomized during the search to diversify the nondominated solutions. Serafini’s work was applied to solve the traveling salesman problem (TSP) on a small scale (i.e., the two-dimensional problem). Czyzak et al. [23] extended Serafini’s work [22] by proposing PSA, an MOSA with an adaptive weight vector that guides the search direction toward unexplored areas of the Pareto-optimal front for both small and large problems. It is a population-based algorithm that optimizes multiple weighted scalarizing functions simultaneously. During the search, the weight vector of each solution is adaptively tuned to its adjacent neighbors.
Another population-based MOSA known as UMOSA was suggested by Ulungu et al. [24]. It starts with a set of candidate solutions that contains nondominated solutions as an initial population. They optimized one of the population’s solutions at each iteration. To introduce diversity to the Pareto front, a set of fixed, uniformly distributed weight vectors was used. Unlike Serafini’s MOSA, Ulungu’s approach has an even probability of optimizing each weighted scalarizing function in the cycle mode. Nevertheless, a drawback of the adaptive approach of Czyzak et al. [23] and the fixed approach of Ulungu et al. [24], as noticed by Li and Landa-Silva [25], is the early variation in the search directions, which may delay convergence. In other words, extensive computational efforts are needed for intensification around solutions in the Pareto-optimal front, as opposed to focusing on the diversification of solutions along the Pareto-optimal front, as in [23]. Conversely, the fixed-weight vectors in [24] might fail to cover the entire Pareto-optimal front without properly tuning the search direction weights to encourage them to find nondominated solutions in unexplored areas. Therefore, Li and Landa-Silva [25] proposed combining the two approaches (adaptive and fixed-weight vector scalarization) in a controlled manner to gain the benefits of both and overcome their weaknesses. They suggested a hybridized multi-objective evolutionary algorithm (EMOSA) based on decomposition (MOEA/D) and the simulated annealing metaheuristic. The proposed technique is a population-based MOSA consisting of uniformly spread solutions. The acceptance probability is determined regarding both the weighted scalarization sum and the Pareto dominance property. 
The choice of the next move from the current solution to the candidate one is based on the weighted scalarization sum function, while the nondominated solutions set (i.e., the set containing the best solutions found so far) is updated using the Pareto dominance property. The weights are fixed on higher temperature values and then adaptively updated at the lowest temperatures to assure diversity in searching. EMOSA was tested on TSP instances with convex or nonconvex Pareto-optimal fronts. These findings demonstrate an improvement in the performance of the multi-objective SA.
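As a small illustration of the weighted scalarization shared by these MOSA variants, the sketch below collapses an objective vector into a single value using a weight vector drawn at random on the simplex; the function names are ours, not from the cited works.

```python
import random

def scalarize(objs, weights):
    """Weighted-sum scalarization of an objective vector (minimization)."""
    return sum(w * f for w, f in zip(weights, objs))

def random_weights(m, rng):
    """A random weight vector on the simplex, used to vary the search direction."""
    raw = [rng.random() for _ in range(m)]
    total = sum(raw)
    return [x / total for x in raw]

rng = random.Random(1)
w = random_weights(2, rng)          # one of many randomized search directions
value = scalarize((3.0, 1.5), w)    # lies between the smallest and largest objective
```

Randomizing (or adaptively tuning) the weight vector during the search, as in [22,23], changes which region of the Pareto-optimal front the scalarized acceptance test favors.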
More MOSA variations have emerged in the literature inspired by early attempts, encouraging many researchers to keep using the dominance concept to steer searches, such as with SMOSA (Suppapitnarm multi-objective simulated annealing) [26]. SMOSA is a method that uses an archival list to maintain all nondominated solutions generated during the search process. SMOSA is a single-solution-based MOSA that generates a single solution in each iteration. It involves a novel strategy known as “return to base”, which restarts the search in every iteration by selecting a random solution from the archive in the hope of exploring new areas of the front. However, this work suffers from some drawbacks, according to Sankararao and Yoo [27], as the convergence is slow with only a few nondominated solutions; furthermore, some good solutions might be lost after comparing the current solution with the candidate solution alone.
Remarkable work in this field was conducted by Bandyopadhyay et al. [28]. They proposed an archival multi-objective simulated annealing (AMOSA) algorithm. The archive stores the nondominated solutions obtained during the search to compare the candidate solution with them. The acceptance probability is based on the domination property. The archive has upper and lower limits, and the number of nondominated solutions stored can increase to the upper limit. Whenever the number of solutions reaches the upper limit, a clustering technique is applied to group solutions with similar characteristics. The clustering process is implemented to compress the number of solutions to the lower limit, which, in turn, enhances the computational time of the algorithm. Nonetheless, to attain better diversity among the nondominated solutions, a limitless archive should be implemented instead. The proposed algorithm was tested using several standard benchmark functions. The performance of the AMOSA was found to generally be better than some of the other multi-objective algorithms, such as MOSA, nondominated sorting genetic algorithm II (NSGA-II), and the Pareto-archived evolution strategy (PAES). Overall, the paper presents the AMOSA as a promising multi-objective optimization algorithm with strengths in global search efficiency and avoidance of local optima. However, it is important to be mindful of parameter sensitivity and computational intensity when applying the algorithm in practice. Further empirical validation through real-world case studies would enhance confidence in its applicability across diverse domains.
An extensive review of variations in single-objective optimization and MOSA algorithms can be found in Suman and Kumar’s work [11]. Their review includes a comparison of five MOSA algorithms: UMOSA, SMOSA, PSA, and two algorithms found in Suman [29]. PDMOSA (Pareto dominance multi-objective simulated annealing) uses the Pareto dominance concept to direct the search, and WMOSA (weighted multi-objective simulated annealing) adopts the weight vector to deal with constraints. The review concluded with potential future research directions to enhance the performance of MOSA algorithms.
A recent leading work by Cunha and Marques [30] was motivated by positive outcomes from a wide range of MOSA algorithm applications. In that work, they represented a new variation called MOSA-GR (multi-objective simulated annealing with new generation and reannealing procedures). It introduced two new generation and reannealing features to promote better convergence toward the true Pareto front, with uniformly distributed solutions along the front; in such cases, the usual method of generating candidate solutions in the neighborhood of a current solution might not be sufficient for application to some multi-objective optimization problems. Therefore, MOSA-GR suggests enhancing the generation method using knowledge of the nondominated solutions stored in the archive to produce candidate solutions. Furthermore, a reannealing process was suggested to improve the quality of the final nondominated solutions by increasing the selection pressure on the true Pareto front. The MOSA-GR was used on water distribution networks to solve a bi-objective optimization problem and showed notable results. However, the procedure needs more analysis to ensure its scalability in addressing a large number of solutions, which is an aspect of multi-objective optimization. Moreover, increasing the complexity of an optimization problem calls for further investigation, particularly in the generation process. In summary, the paper presents MOSA-GR as a valuable contribution to the field of multi-objective optimization, with a specific focus on water distribution network design. The algorithm demonstrates effectiveness in balancing cost and reliability considerations. However, its applicability beyond this domain, potential computational demands, and sensitivity to parameters are aspects that require further attention and consideration when applying MOSA-GR in practice.
A recent work proposed by Alkhamis and Hosny [20] was inspired by many works in the area of vaccination allocation. This paper introduces an innovative approach for synthesizing pulse influenza vaccination policies by utilizing the controlled elitism nondominated sorting genetic algorithm (CENSGA). This contribution addresses the challenge of optimizing vaccination strategies for influenza by considering two competing objectives, namely, reducing the infection size together with a budget-friendly vaccination plan. The algorithm demonstrated a capacity to efficiently search through the solution space and identify a diverse set of high-quality policies. Their approach validated its robustness and adaptability to dynamic influenza scenarios by testing different R numbers. The suggested approach compares the use of CENSGA as a search engine with the widely used NSGA-II; it shows promising results for CENSGA, supporting its applicability in the field of vaccination allocation. Further validation in real-world scenarios is recommended to generalize their findings and ensure the approach’s applicability in practice. Our work expands on the approach in [20] by introducing a new multi-objective simulated annealing algorithm as a local search procedure in CENSGA.
The literature on multi-objective optimization is contemporary and continually thriving, with a large number of successful algorithms designed and implemented in many applications. Developing new algorithms is still of significant importance, especially since the complexity of problems has increased over time (e.g., because of the inclusion of uncertainty or consideration of solutions within a tight time horizon). We propose a novel population-based SA that incorporates the concept of archives to avoid repetition and facilitate access to solutions generated during the optimization process. The archive’s size was an issue when using the AMOSA [28] and other similar works, such as the MOSA-GR [30], in which clustering was suggested to control the size. However, this will eventually burden the algorithm’s complexity and, even worse, compromise the convergence when a good solution is lost during that process. Therefore, the proposed algorithm instead incorporates the concept of crowding distance (CD), borrowed from the CENSGA, to overcome this issue. Another interesting property of the proposed approach is determining the acceptance probability of a candidate solution, whereby the domination status of the candidate solution with respect to the current solution, as well as those in the archive, is applied. Moreover, the novelty of this work is extended to applying the MOSA to the vaccine allocation problem as a local search module in the memetic CENSGA.

3. Preliminaries

3.1. Multi-Objective Evolutionary Optimization

The multi-objective evolutionary algorithm (MOEA) is from a branch of optimization that involves solving problems with competing objectives [31,32]. The objectives usually conflict with each other. Thus, finding a single solution that optimizes all objectives simultaneously while also satisfying the constraints is not always feasible [31]. The formal definition of a multi-objective problem (MOP) is stated as follows (assuming a minimization problem):
minimize F(x) = (f1(x), f2(x), f3(x), …, fm(x))T subject to x ∈ X (1)
The function F: X → Rm maps decision variable vectors x ∈ X to m real-valued objective functions, where X is the decision variable space, and Rm represents the objective space [33]. The MOEA aims to find a solution set by employing the concept of dominance to find optimal Pareto solutions [32]. Since improving one objective may cause deterioration in other objectives [31], the outcome of the MOEA is a set of best solutions in the population: a Pareto front (PF). The formal definitions of dominance and Pareto-optimality are the following [20]:
Definition 1.
A vector, X1 = (x1.1, …, x1.n)T, is said to dominate another vector, X2 = (x2.1, …, x2.n)T, denoted as X1 < X2, iff ∀ i ∈ {1, …, n}, x1.i ≤ x2.i, and X1 ≠ X2.
Definition 2.
A feasible solution, x1 ∈ X, to the problem in Equation (1), consisting of all nondominated solutions, is called a Pareto-optimal solution: if ∄ x2 ∈ X such that F(x2) < F(x1).
Definition 3.
A Pareto set (PS) consists of a collection of all Pareto-optimal solutions, denoted as:
PS = {x1 ∈ X| ∄ x2 ∈ X, F(x2) < F(x1)}.
The shape of the PS in objective space is called the Pareto front (PF):
PF = {F(x1)|x1 ∈ PS}.
In this work, we use the term Pareto front to refer to Pareto-optimal fronts (also known as true Pareto fronts).
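Definitions 1–3 translate directly into code; the following sketch (for a minimization problem) checks dominance componentwise and filters a set of objective vectors down to its nondominated subset.

```python
def dominates(x1, x2):
    """Definition 1: x1 dominates x2 iff x1 <= x2 componentwise and x1 != x2."""
    return all(a <= b for a, b in zip(x1, x2)) and x1 != x2

def pareto_front(points):
    """Definition 3: keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

objs = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
front = pareto_front(objs)   # (3.0, 3.0) is dominated by (2.0, 2.0) and is dropped
```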

3.2. CENSGA

Many of the recently introduced multi-objective algorithms in the literature have been recognized under the NSGA-II family. The CENSGA (controlled elitism nondominated sorting genetic algorithm) denotes the new extension of the well-known NSGA-II, which was first proposed by Deb et al. [34]. This category of multi-objective algorithms belongs to the Pareto-optimality class in which all problem objectives are considered equally while solving the problem with their constraints, if any. Multi-objective processes steer searches without the need to turn the objectives into a single one. The fundamental feature of the CENSGA over the NSGA-II is defined in the selection process; particularly, the CENSGA permits all fronts to engage in the selection process, and the contribution of the fronts varies concerning the geometric distribution. When compared with other multi-objective algorithms, such as the NSGA-II and MOPSO, the CENSGA operators are search-oriented; thus, the effect of the different operators on its performance is limited compared with other algorithms [35].
The detailed selection strategies of the CENSGA and NSGA-II are well described in the work of AlKhamis and Hosny [20]. Recall that, with the CENSGA, the selection operator is defined to grant all fronts a chance to be part of the selection strategy. However, the amount of front participation is defined according to their significant influence on constructing the next generation population. This process is governed by the geometric distribution defined in Equation (2):
ni = r · ni−1 (2)
where ni denotes the maximum number of solutions in the ith front, and r (<1) ensures a reduction in the number of solutions in the subsequent fronts. In a population of size N, the maximum number of solutions allowed in each front (i = 1, 2, …, k) corresponds to:
ni = N · (1 − r)/(1 − r^k) · r^(i−1) (3)
A measure of the density of the solutions belonging to the same rank (front) is associated with each solution, which is denoted as the crowding distance (CD). The crowded comparison criterion uses a special relation, >n, to promote diversity in the solutions within a front, where >n is defined as follows:
if (irank < jrank) or ((irank = jrank) and (idistance > jdistance)) then SELECT i
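The geometric front quotas of Equations (2) and (3) and the crowded comparison rule can be sketched as follows; the function names are illustrative.

```python
def front_quotas(N, r, k):
    """Maximum number of solutions admitted from each of k fronts (Equation (3))."""
    return [N * (1 - r) / (1 - r**k) * r**(i - 1) for i in range(1, k + 1)]

def crowded_select(i, j):
    """Crowded comparison >n: prefer lower rank; break ties by larger crowding distance.
    Each argument is a (rank, crowding_distance) pair."""
    i_rank, i_dist = i
    j_rank, j_dist = j
    return i if (i_rank < j_rank) or (i_rank == j_rank and i_dist > j_dist) else j

quotas = front_quotas(N=100, r=0.5, k=3)   # geometric decay across the three fronts
```

Note that the quotas sum to the population size N, so every front contributes, but later fronts contribute geometrically fewer solutions.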

3.3. Local Search Simulated Annealing Algorithm

Local search is among the simplest metaheuristic algorithms. It begins with a single initial solution. Subsequently, in every iteration, the algorithm picks a neighboring candidate that improves the fitness value. The search stops when no neighboring candidate offers further improvement. Simulated annealing is a more advanced form of local search that can often escape the trap of local optima, as described below.

Simulated Annealing (SA)

Simulated annealing (SA) uses the principle of annealing metals through heating and cooling processes to obtain a desired metal structure [36]. In a like manner, SA looks for an optimal solution by controlling a virtual temperature, T, and setting its starting value to a relatively high degree; T is a problem-dependent variable. The algorithm works by cooling T using a predefined cooling schedule, α, to decrease the temperature gradually as time passes. The concept of temperature is used here to escape from a local optimum: the algorithm occasionally accepts solutions with worse fitness, with a high acceptance probability when the temperature is high, and decreases the acceptance probability over time until the search approaches its final stages, when worse solutions are no longer accepted. The acceptance probability that controls the admission or rejection of a candidate solution, SCandidate, follows the Metropolis criterion found in [37]. Adopting the Metropolis criterion is a key aspect of implementing the SA algorithm, as it controls the transition from the current solution, SCurrent, to a candidate solution, SCandidate, according to Equation (4):
P = exp(−ΔE/T) (4)
where ΔE is the difference between the objective values of SCandidate and SCurrent, and T represents the current temperature. If the candidate solution, SCandidate, is accepted, it becomes the current solution, SCurrent, for the next iteration. A few consecutive iterations, K, will carry on under the same temperature, T, before applying the cooling schedule, α. Thereafter, the algorithm will behave more or less like a local search, i.e., it will only accept improved solutions until convergence. Table 1 presents the template of the SA algorithm [38].
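The SA template can be sketched as follows, with geometric cooling and the Metropolis criterion of Equation (4); the toy objective and parameter values are illustrative, not the tuned settings used later in this paper.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, T0=10.0, alpha=0.9, K=20, T_min=1e-3, seed=0):
    """SA template: K moves per temperature level, geometric cooling by alpha."""
    rng = random.Random(seed)
    current, T = x0, T0
    best = current
    while T > T_min:
        for _ in range(K):
            candidate = neighbor(current, rng)
            delta = f(candidate) - f(current)   # ΔE (minimization)
            # Metropolis criterion: always accept improvements; accept worse
            # moves with probability exp(-ΔE / T), which shrinks as T cools
            if delta <= 0 or rng.random() < math.exp(-delta / T):
                current = candidate
            if f(current) < f(best):
                best = current
        T *= alpha                              # cooling schedule
    return best

# toy run: minimize f(x) = x^2 over the integers with a unit-step neighborhood
sol = simulated_annealing(lambda x: x * x, x0=12,
                          neighbor=lambda x, rng: x + rng.choice((-1, 1)))
```

At high T, the exponential term is close to 1 and uphill moves are frequently accepted; as T approaches T_min, the behavior degenerates into a plain local search.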

3.4. Epidemiological Model

Epidemiological models are effective tools for describing the spread of infectious diseases in a population over time. Individuals within a population are classified into mutually exclusive classes according to their relative health conditions. This helps in estimating the severity of a disease in the population under study; then, planning can be carried out regarding appropriate procedures and policies to contain the effects of the disease, which is possible through vaccination or other countermeasure means [39,40]. The following section explains the compartmental model used in this work, the SEIR (Susceptible–Exposed–Infected–Recovered) model.

SEIR Model

The SEIR model is an extension of the classical SIR (susceptible, infected, and recovered) compartmental model with an additional compartment (E) that represents the number of individuals who are infected with a contagious virus and become infectious to others without showing symptoms. This model inherits all classical SIR properties with the additional ability to describe infectious diseases in depth by considering the stage between a vulnerable (susceptible) individual becoming infectious and the stage of showing symptoms and becoming part of the (infectious) compartment [10]. Refer to Figure 1 for a schematic illustration of the SEIR epidemiological model, in which the arrows describe the movement of individuals between compartments. Initially, when susceptible individuals are exposed to the flu virus, they become a member of the (E) compartment. After the incubation period passes, they transition to the infectious (I) compartment with the capability of spreading the flu. Depending on the burden of flu, some infectious individuals may be hospitalized (H). Alternatively, with vaccination some susceptible individuals can directly move to the vaccinated compartment, gaining immunity without passing through the exposed (E) and infectious (I) compartments. Individuals in any compartment, including (V) and (H), may recover and gain immunity, moving to the recovered category.
To describe disease transmission in a particular population using SEIR, we extended the SIR model and added two additional compartments (vaccinated (V) and hospitalized (H)), inspired by [41,42,43]. Let the variables S(t), V(t), E(t), I(t), H(t), and R(t) denote the number of susceptible, vaccinated, exposed, infected, hospitalized, and recovered individuals at time (t), respectively. The total population number, N, at time (t) is indicated by N(t) = S(t) + V(t) + E(t) + I(t) + H(t) + R(t). Table 2 introduces the model parameters and their values, except parameter (e), which represents the vaccination rate that will be optimized in this study. The rest of the parameters were acquired from the work of A.R. de Cruz et al. [20,41,44]. The differential equations of the SEIR model are briefly presented using Equation (5):
$$\begin{aligned}
\frac{dS}{dt} &= \mu N - \mu S - \frac{\beta I S}{N} - eS, & S(0) &= S_0 \ge 0,\\
\frac{dV}{dt} &= eS - nV - \mu V, & V(0) &= V_0 \ge 0,\\
\frac{dE}{dt} &= \frac{\beta I S}{N} - kE - \mu E, & E(0) &= E_0 \ge 0,\\
\frac{dI}{dt} &= kE - (\gamma_1 + a)I - \mu I, & I(0) &= I_0 \ge 0,\\
\frac{dH}{dt} &= aI - \gamma_2 H - \mu H, & H(0) &= H_0 \ge 0,\\
\frac{dR}{dt} &= \gamma_1 I + \gamma_2 H - \mu R, & R(0) &= R_0 \ge 0.
\end{aligned}$$
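A forward-Euler sketch of Equation (5) in Python; the parameter values below are placeholders (Table 2's calibrated values are not reproduced here), and setting e = 0 disables vaccination, mimicking the uncontrolled dynamics of Figure 2:

```python
def seir_vh_step(state, p, dt):
    """One forward-Euler step of the SEIR model with vaccinated (V) and
    hospitalized (H) compartments, following Equation (5)."""
    S, V, E, I, H, R = state
    N = S + V + E + I + H + R
    dS = p["mu"] * N - p["mu"] * S - p["beta"] * I * S / N - p["e"] * S
    dV = p["e"] * S - p["n"] * V - p["mu"] * V
    dE = p["beta"] * I * S / N - p["k"] * E - p["mu"] * E
    dI = p["k"] * E - (p["g1"] + p["a"]) * I - p["mu"] * I
    dH = p["a"] * I - p["g2"] * H - p["mu"] * H
    dR = p["g1"] * I + p["g2"] * H - p["mu"] * R
    return [x + dt * dx for x, dx in zip(state, (dS, dV, dE, dI, dH, dR))]

# Placeholder parameters (not Table 2's values); e = 0 means no vaccination.
params = {"mu": 0.02, "beta": 0.8, "k": 0.3, "g1": 0.1, "g2": 0.2,
          "a": 0.05, "n": 0.05, "e": 0.0}
state = [0.99, 0.0, 0.0, 0.01, 0.0, 0.0]   # (s0, v0, e0, i0, h0, r0)
dt, trajectory = 0.01, []
for _ in range(int(150 / 0.01)):           # Tcntrl = 150 time units
    state = seir_vh_step(state, params, dt)
    trajectory.append(state)
```

With these placeholder rates the infected fraction rises well above its initial 1% before declining, i.e., an uncontrolled epidemic wave.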
A key number associated with the epidemiological model is the reproduction number (R0); it represents the number of secondary cases generated from exposure to a primary individual [42]. The R0 value is determined using the appropriate epidemiological model; for instance, R0 = β/(μ + γ) in the SIR model, while R0 = β/(μ + γ1 + γ2) in the SEIR model [45]. The value of R0 is fixed and reflects whether vaccination or other countermeasure strategies are needed. If R0 < 1, then the disease dies out and tends toward an “infection-free equilibrium” without intervention. In other words, the endemic cannot maintain its growth since each infected individual, on average, infects fewer than one member of the population. On the contrary, when R0 > 1, the disease invades the population and will stabilize in the future at an “endemic equilibrium”. The disease then turns into an endemic state, in which it sustains itself in the population [45,46]. The only way to diminish the effects of an infectious disease with R0 > 1 is by introducing a relevant countermeasure strategy; in this study, this was vaccination. Once vaccination is applied, the R0 value can no longer help in characterizing the endemic situation; therefore, another epidemiological number is useful, known as the effective reproduction number (REF). REF represents the number of secondary cases generated from exposure to a primary individual during vaccination. Unlike R0, REF updates its value as vaccination progresses. Therefore, reaching a state of REF < 1 is desirable, marking the end of the contagious disease and the vaccination process. The value of REF can be derived from R0 as REF = S(t) · R0 [47].
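The two reproduction numbers can be expressed as one-liners (the parameter values below are placeholders for illustration):

```python
def r0_seir(beta, mu, g1, g2):
    """Basic reproduction number of the SEIR variant, R0 = β/(μ + γ1 + γ2) [45]."""
    return beta / (mu + g1 + g2)

def r_effective(s_fraction, r0):
    """Effective reproduction number during vaccination, REF = S(t) · R0 [47]."""
    return s_fraction * r0

r0 = r0_seir(beta=0.8, mu=0.02, g1=0.1, g2=0.2)   # placeholder values -> R0 = 2.5
```

Driving the susceptible fraction S(t) below 1/R0 through vaccination is exactly what pushes REF under 1.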
The period that determines a complete vaccination campaign using a particular time unit (t.u.) is Tcntrl = Ttmp + Tgc = 150 (t.u.), where the first stage, Ttmp = 50 (t.u.), is dedicated to the contingent control phase, i.e., at time unit 50, the system reaches “endemic equilibrium”. Therefore, proper action must be deployed during this period. The next time stage, Tgc = 100 (t.u.), is dedicated to the guardian control phase. The guardian phase is better applied indefinitely, but since this is an unrealistic situation, it will be considered until the end of Tgc. The initial SEIR condition for the contingent control is supposed to be (s0, e0, i0, r0) = (0.99, 0, 0.01, 0). When the guardian control starts at Ttmp = 50, at the endemic equilibrium point, the initial system condition will be (se, ee, ie, re) ≈ (0.079, 0.027, 0.087, 0.805). Remarkably, ie = 0.087 is greater than the acceptable tolerance ratio, itol = 0.01; accordingly, it is important to keep vaccinating after the end of the first stage. For that purpose, the guardian control policy is introduced to minimize the infection volume after completing the contingent control policy. Setting up SEIR parameter values in this context mimics a hypothetical disease, which is hard to control without efficient vaccination; refer to Table 2. Figure 2 shows the behavior of the SEIR model without intervention using the proposed setup.

4. Evolutionary Multi-Objective Simulated Annealing (MOSA)

The main principle of simulated annealing is to prevent, as much as possible, the solution’s trajectory from becoming stuck in a local optimum region and to promote broad coverage by accepting moves to worse neighboring areas at certain temperature levels. In multi-objective optimization, the same principle can be extended by converting the acceptance function to a weighted scalarized sum function [32] or by basing it on Pareto dominance [25]. Accordingly, two families of heuristic algorithms are found in the literature: single-point [22,28,30,48] and population-based MOSA [23,24,25]. A detailed review of various MOSA algorithms and a comparative analysis of their performance is available in [11].
Several research directions have been proposed in the literature to adapt the concept of simulated annealing (SA) to solving multi-objective problems, such as using a population rather than a single solution [25], employing an archive to store the nondominated solutions found so far [28], engaging the domination concept to compare solutions [28,48], and/or changing the solution selection strategy [30]. Some of these mechanisms may steer the search toward gradual convergence on a near-optimal front and could promote uniformity while also evolving the initial sample [49]. Collectively, all four mechanisms are incorporated in this work.
In brief, starting from an initial population obtained using the CENSGA, the population represents a hash table of nondominated solutions, also known as the archive, found so far using the CENSGA. However, an unbounded number of nondominated solutions may lead to inadequate solution competition since the solutions belong to the same promising (near-optimal) front, which could fail to create enough selection pressure in the problem [48]. Thus, a preparation step is required to construct a representative initial population with four selected points: two best points representing the best objective values for both F1 and F2 and two mean points from the mean objective area for both F1 and F2. Figure 3 demonstrates the spread of the selected solutions in the initial population. A local search works by moving each solution in the current population to a feasible neighboring solution simultaneously until some stopping criteria are met. Figure 4 depicts a hypothetical trajectory pattern of population-based MOSA, from the initial population until reaching the optimal Pareto front. The neighborhood function has an influential impact on the performance of the local search since it defines a set of candidate solutions that the local search must follow at each iteration. Here is the formal definition:
Definition 4.
The neighborhood of a solution, SCurrent, is a set Nr (SCurrent) consisting of all solutions SCandidate such that the distance between SCurrent and SCandidate satisfies d(SCurrent, SCandidate) < rid, where rid is called the radius of Nr (SCurrent); that is,
Nr (SCurrent) = {SCandidate ∈ X | SCandidate ≠ SCurrent, d(SCurrent, SCandidate) < rid}.
The amount-of-domination function (Δdom) is computed between two solutions, SCurrent ∈ X and SCandidate ∈ Nr (SCurrent), according to Equation (6):
$$\Delta_{dom}(S_{Current}, S_{Candidate}) = \prod_{\substack{i=1,\\ F_i(S_{Current}) \neq F_i(S_{Candidate})}} \frac{\left| F_i(S_{Current}) - F_i(S_{Candidate}) \right|}{O_i}$$
By multiplying the absolute differences in the objective values (Fi) divided by the objective limits (Oi), Δdom is calculated for all solutions in the population, and the current population is updated accordingly, if required. The use of the domination property contributes to overcoming the issue of objective scalability [48]. The domination property is used in many Pareto dominance-based multi-objective metaheuristics (NSGA-II, CENSGA, PAES, etc.). The main role of Δdom is in computing the acceptance probability (Metropolis criterion) in Equation (7):
$$P_{Acceptance} = \frac{1}{1 + \exp(\Delta_{dom} \times temp)}$$
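Equations (6) and (7) can be sketched as follows; `ranges` holds the objective limits Oi, and the function names are our own:

```python
import math

def delta_dom(f_a, f_b, ranges):
    """Amount of domination (Equation (6)): the product of the normalized
    absolute objective differences over the objectives where f_a and f_b differ."""
    d = 1.0
    for fa, fb, oi in zip(f_a, f_b, ranges):
        if fa != fb:
            d *= abs(fa - fb) / oi
    return d

def p_acceptance(ddom, temp):
    """Acceptance probability (Equation (7)): a large amount of domination or a
    high temperature drives the probability toward 0; as the product shrinks,
    the probability approaches 0.5."""
    return 1.0 / (1.0 + math.exp(ddom * temp))
```

For example, two solutions differing by 5% of each objective range give Δdom = 0.0025, so even at temp = 100 the acceptance probability stays close to 0.44, letting mildly dominated candidates survive.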
In general, a sequence of local search moves is initialized using generation processes (crossover and mutation) to construct a candidate population (PCandidate) in the neighboring area of the current population (PCurrent) (i.e., the last accepted population). To maintain consistency, a random change in the PCurrent solutions is carried out by the same generation processes used in the CENSGA, i.e., variable-length bounded simulated binary crossover [20,50] and bounded polynomial mutation [51]. When the new PCandidate is created, the constraints are verified; then, PCandidate is accepted or rejected according to Equations (6) and (7). The application of dominance status is a prime component of the implementation of the MOSA algorithm. A transition from PCurrent to PCandidate occurs at the solution level. If the candidate solution dominates the current solution, it becomes the current solution for the next iteration, as described in (8):
∀i ∈ {1, 2}: Fi (SCandidate) ≤ Fi (SCurrent)
and ∃i ∈ {1, 2}: Fi (SCandidate) < Fi (SCurrent).
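The dominance test of Equation (8) translates directly to code (a minimal sketch for minimization, written for objective vectors of any length):

```python
def dominates(f_cand, f_curr):
    """Equation (8): the candidate dominates the current solution if it is no
    worse in every objective and strictly better in at least one (minimization)."""
    return (all(c <= s for c, s in zip(f_cand, f_curr))
            and any(c < s for c, s in zip(f_cand, f_curr)))

assert dominates((1.0, 2.0), (1.0, 3.0))       # equal in F1, better in F2
assert not dominates((1.0, 3.0), (2.0, 2.0))   # incomparable solutions
```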
Population evolution is based on a comparison of dominance status between candidate solutions and the current solution (i.e., the last accepted solution). Moreover, in some circumstances, the comparison extends to include the solutions stored in the archive. The archive stores the nondominated solutions found during the local search process, which form the core of the returned population. Whether the candidate solution dominates the current solution, is dominated by it, or the two are incomparable or equally preferable, the probability-of-acceptance criterion in Equation (7) determines the shift of all or part of SCandidate ∈ PCandidate to SCurrent ∈ PCurrent. The dominance status, Δdom, is computed between the candidate and current solutions and the archive solutions that dominate the candidate solution, as suggested in [28,48]. In general, three different cases can prevail in verifying the dominance status. The first and most straightforward case occurs when the current solution dominates the candidate solution (SCurrent < SCandidate); the probability of acceptance is then calculated with ΔdomAVG, the mean amount of domination over SCurrent and the archival solutions that dominate SCandidate:
$$\Delta_{dom}^{AVG} = \frac{\left( \sum_{i=1}^{|ARC|} \Delta_{dom}(i, S_{Candidate}) \right) + \Delta_{dom}(S_{Current}, S_{Candidate})}{|ARC| + 1}$$
where |ARC| is the number of solutions in the archive. In case one, the candidate solution is worse than the current solution. In this situation, the candidate solution still has a chance to survive and be accepted as the new current solution if the Metropolis criterion is satisfied. The Metropolis criterion is satisfied when the acceptance probability (PAcceptance), provided by Equation (7), is higher than a randomly generated reference number, NReference ∈ (0, 1). Therefore, if PAcceptance > NReference, then the candidate solution is accepted as the new current solution. The purpose of including the amount of domination (Equation (6)) in computing the PAcceptance used in the Metropolis criterion is to control the selection pressure. At higher temperature values, a high PAcceptance is obtained for solutions with a small amount of domination, while a low PAcceptance is obtained for those with a large amount of domination, and vice versa. This administers supervised diversity at an early stage of the MOSA, steering the local movements toward better or equal solutions while approaching the near-optimal front, thereby escaping premature convergence.
The second case describes a situation in which the current solution and the candidate solution are incomparable (nondominated with respect to each other). In this case, it is not possible to enhance one of the objectives without degrading the other one. Three subcases can emerge based on the Δdom between the candidate solution and archive solutions. First, at least one of the solutions from the archive dominates the candidate solution; then, PAcceptance is computed with ΔdomAVG, and the candidate solution can be accepted as the new current solution if the Metropolis criterion is satisfied. Second, if the candidate solution is incomparable to the solutions in the archive, then the candidate solution is accepted as the new current solution, and it is added to the archive. In the third case, the candidate solution dominates at least one of the archive’s solutions, and, thus, the candidate solution is accepted as the new current solution and is added to the archive. After this, the solutions in the archive should be updated accordingly by eliminating any solution dominated by the candidate solution.
Ultimately, the third case represents a situation in which the candidate solution dominates the current solution, and the current solution should be eliminated from the archive whenever it is listed there. Again, three subcases unfold. First, at least one of the solutions in the archive dominates the candidate solution; then, by computing PAcceptance with ΔdomMIN (equal to the lowest value obtained by comparing the amount of domination in each solution in the archive that dominates SCandidate), the candidate solution can be accepted as the new current solution if the Metropolis criterion is satisfied. Otherwise, the current solution is substituted with the solution from the archive that determines ΔdomMIN. Second, if the candidate solution is incomparable to the solutions in the archive, then the candidate solution is confirmed to be the new current solution and is added to the archive. In the third subcase, the candidate solution dominates at least one solution from the archive; then, the candidate solution is accepted as the new current solution, and it is added to the archive. Afterward, the solutions in the archive should be updated by eliminating any solution dominated by the candidate solution.
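The three cases and their subcases can be condensed into a single transition routine. The sketch below is our simplified reading of the description above, not the authors' implementation; `ddom(a, b)` and `dominates(a, b)` are assumed helpers implementing Equations (6) and (8), and solutions are objective-value tuples:

```python
import math
import random

def mosa_transition(cand, curr, archive, temp, ddom, dominates):
    """One MOSA acceptance step; mutates `archive` in place and returns the
    next current solution (a simplified sketch of the three dominance cases)."""
    dominating = [a for a in archive if dominates(a, cand)]
    if dominates(cand, curr):                     # case 3: candidate dominates current
        if not dominating:
            archive[:] = [a for a in archive if not dominates(cand, a)]
            archive.append(cand)
            return cand
        d_min = min(ddom(a, cand) for a in dominating)
        if random.random() < 1.0 / (1.0 + math.exp(d_min * temp)):
            return cand
        return min(dominating, key=lambda a: ddom(a, cand))
    if dominating or dominates(curr, cand):       # cases 1 and 2 with dominators
        d_avg = ((sum(ddom(a, cand) for a in dominating) + ddom(curr, cand))
                 / (len(dominating) + 1))
        if random.random() < 1.0 / (1.0 + math.exp(d_avg * temp)):
            return cand
        return curr
    # incomparable to both the current solution and the archive: accept and archive
    archive[:] = [a for a in archive if not dominates(cand, a)]
    archive.append(cand)
    return cand
```

At the end of the run, `archive` holds the nondominated solutions that are returned to the CENSGA.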
The algorithm will iterate several times (i.e., local search moves) with the same settings before applying the cooling schedule, α, to the temperature, and so on, until the stopping criteria are met. Figure 5 illustrates the three general cases and their descending branches. Finally, the archive set that contains the nondominated solutions (i.e., Pareto solutions) will return to the CENSGA. A general overview of the MOSA steps is provided in Figure 6.

5. Optimization Model

5.1. Pulse Vaccine Allocation

Influenza, commonly known as the flu, is a seasonal respiratory illness caused by mutable influenza viruses. It can lead to a range of health complications, including severe respiratory distress, which may necessitate hospitalization and, in severe cases, can lead to fatalities. The dynamic nature of the flu virus is reflected in its mutability from season to season, as well as in the fact that individuals who contract the disease may develop different symptoms [52]. The supply of vaccines in the event of an outbreak or pandemic typically occurs 90 to 120 days after onset. Even during the regular influenza season, the supply of vaccines might experience delays or shortages [43,53]. Furthermore, an increase in the number of influenza infections coinciding with a pandemic may overwhelm the healthcare sector and raise diverse economic and social concerns. Therefore, the WHO has published recommendations to administer the influenza vaccine to individuals of all age groups above 6 months old who do not have contraindications [52]. An optimal allocation policy and planning for the whole population is thus crucial to guarantee a decrease in the rates of infection, hospitalization, and death arising from seasonal influenza at these critical times. Optimization methods are extensively applied to address both theoretical and practical challenges. These techniques aim to discover the most favorable solution, seeking minimal time, cost, and risk or maximal profit, quality, and efficiency. Efficient planning plays a vital role in shaping vaccination policies, especially given the constraints of limited vaccine supply and application time. Balancing the need to curb infection rates with the costs of administering vaccines is pivotal for a cost-effective immunization campaign.
This work is based on an SEIR epidemiological model meant to control a two-phased pulse vaccination campaign, which means that, at a given time instant, a certain proportion of susceptible individuals get vaccinated and move toward the recovered population [20]. This approach applies nonidentical magnitudes of pulse control actions at arbitrary time steps, considering that the targets are optimizing both infection volume and vaccination costs. This paper solves the problem by finding an estimate for the Pareto-optimal set using a memetic version of the canonical CENSGA [21], which is a fast and elitist multi-objective optimizer with selection operators that consider all fronts.
Formally, let the vaccination time instants be defined by Γ ⊂ R, where Γ = {τ0, …, τN} represents a set of time steps in the closed time interval [0, Tmax] such that τk < τk+1, τ0 = 0, and τN = Tmax. Recall that an epidemiological model compartment, x, is a state variable belonging to X and that the control vaccination percentage, u, belongs to Uj for all j = 0, …, N − 1. The model state at instant τj is represented by x(τj), with u[j] as the control action (vaccination policy). The instant of time right after τj is denoted τj+ and is defined as the time instant “just after” the pulse action in τj [54]. Briefly, the pulse control model is formally stated following T. Yang [55]:
Definition 5.
In a pulse control model, the state at each instance, τj, x(τj) ∈ X can be changed by x(τj+) = x(τj) + u[j], with u[j] ∈ Uj.
The mechanism of the model under the pulse control can be summarized as follows:
x(t + 1) = f(t, x(t)), t ∈ (τj+, τj+1],
x(τj+) = x(τj) + u[j], j = 0, …, N − 1,
The application of vaccination policy at each time instant creates a new configuration of the SEIR input set. A full vaccination campaign is structured by connecting each time step output with the next according to Equation (10); the entire process is in line with “Bellman’s optimality principle” [56].
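In code, a pulse action of Definition 5 is an instantaneous state jump between two integration windows. A minimal sketch, assuming the control u[j] vaccinates a fraction v of the susceptible compartment while leaving the other compartments unchanged:

```python
def apply_pulse(state, v):
    """Pulse vaccination at an instant τj: s(τj+) = s(τj)·(1 − v), with the
    vaccinated share increased by v·s(τj); other compartments are untouched."""
    s, vac, e, i, h, r = state
    return (s * (1.0 - v), vac + s * v, e, i, h, r)

# A campaign alternates continuous SEIR integration over (τj+, τj+1] with
# pulse jumps at each τj, chaining each step's output into the next one.
state = (0.99, 0.0, 0.0, 0.01, 0.0, 0.0)
state = apply_pulse(state, 0.40)    # vmin = 0.40 from the model constraints
```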

5.2. Multi-Objective Formulation

Multi-objective optimization algorithms produce a set of equally optimal solutions, called the Pareto-optimal set or the set of nondominated points. Recall that reducing the infection volume and the costs of the vaccination campaign are our goals. Increasing the number of vaccinated individuals, in general, helps with limiting the infection altitude in the targeted population. Simultaneously, various vaccination campaigns may yield similar infection altitudes. In the same context, it is desirable to keep trying different permutations of vaccination ratios in the hope of reaching as low a cost as possible for the same infection altitude, since vaccine stock is not always readily available. Consequently, obtaining multiple options for optimal solutions will help in the multi-objective decision-making (MODM) step by providing policy-makers plenty of applicable policies to choose from based on the situation.
The model suggested in this work is close to that in [20]; the optimization process has two phases:
  • The first Phase (I) corresponds to the contingent variable control, which considers pairs of susceptible ratios that will be vaccinated and the periods between every two consecutive policies to eradicate an ongoing outbreak;
  • The second Phase (II) corresponds to the guardian static control, taking into consideration the necessity of maintaining the infection volume at an acceptable level in a finite timespan after finishing the first phase (i.e., compartment (I) ≤ tolerance ratio (itol > 0)), starting from infection-free equilibrium; see Section 3.4.
The set of control policies from Phase I will be implemented and concatenated with the best policy of Phase II to form a complete vaccination campaign, in which Phase I is applied to the closed time interval [0, Ttmp], while Phase II is applied on the time interval T g c = (Ttmp, Tcntrl]. Each vaccination campaign represents a nondominated optimal (Pareto-optimal) solution in the Pareto front.

5.2.1. Formulation of the Guardian Static Control

The optimization decision variables of the problem are as follows:
  • A fixed time interval between two consecutive campaigns: Δτgc, for Ngc = 1, …, ⌊Tgc/Δτgc⌋;
  • The fixed percentage of susceptible to-be-vaccinated people in each campaign: vgc.
The set of nondominated guardian static control policies for vaccination is described by
X g c = { ( Δ τ g c ,   v g c ) ,   ,   N g c }
The optimal vaccination allocation is formulated here as a minimization multi-objective optimization problem with two objective functions:
  • F1: integral infection volume;
  • F2: vaccination cost.
The model is subject to the following constraints:
  • vminvgcvmax;
  • tmin ≤ Δτj ≤ tmax;
for given values of vmin, vmax, tmin, and tmax.
The guardian static control is represented by Equations (11) and (12):
$$\min_{X_{gc}} \begin{cases} F_1 = \displaystyle\int_{T_{tmp}}^{T_{cntrl}} \left[ I(t) + H(t) \right] dt \\[6pt] F_2 = C_1 \cdot N_{gc} + C_2 \cdot N_{gc} \left( 1 + v_{gc} \right)^2 + C_3 \cdot \displaystyle\sum_{j=1}^{N_{gc}} v_{gc} \cdot S[j] \end{cases}$$
subject to
$$\begin{aligned}
&\frac{dS}{dt} = \mu N - \mu S - \frac{\beta I S}{N}, && S(0) = S_0 \cdot N \ge 0\\
&\frac{dI}{dt} = kE - (\gamma_1 + a)I - \mu I, && I(0) = I_0 \cdot N \ge 0\\
&\frac{dH}{dt} = aI - \gamma_2 H - \mu H, && H(0) = H_0 \cdot N \ge 0\\
&t_j = T_{tmp} + j \cdot \Delta\tau_{gc}, \quad \tau \in (\tau_j^{+}, \tau_{j+1}]\\
&s(\tau_j^{+}) = s_j^{+} = s_j (1 - v_{gc})\\
&i(\tau_j^{+}) = i_j^{+} = i_j\\
&h(\tau_j^{+}) = h_j^{+} = h_j\\
&i(t) \le i_{tol} \cdot N, \quad \forall t \in [T_{tmp}, T_{tmp} + T_{gc}]\\
&\sum_{j=1}^{N_{gc}} \Delta\tau_j \le T_{gc}; \quad N_{gc} = \left\lfloor \frac{T_{gc}}{\Delta\tau_{gc}} \right\rfloor\\
&j = 0, 1, \ldots, N_{gc} - 1\\
&0.40 \le v_{min} \le v_{gc} \le v_{max} \le 0.95\\
&1 \le \Delta\tau_{min} < \Delta\tau_j < \Delta\tau_{max} \le 20
\end{aligned}$$
The guardian static control occurs periodically, k times, at tk = Ttmp + k · Δτgc. At each time instant, tk, a constant pulse vaccination policy, vgc (the proportion of the susceptible to be vaccinated), is applied, spreading the beneficial impact of vaccination across the population, as described in S(tk+) and I(tk+). The notation (+) signifies the period following the implementation of the pulse vaccination policy. To effectively manage the infected population, the disease must be maintained below a predetermined acceptable threshold during the guardian static control by i(t) ≤ itol · N, where i(t) is the size of the infected compartment at time t. The frequency of applying the guardian static control policy is determined by Ngc, considering the overall duration, and it must not surpass Tgc.
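The schedule implied by these constraints can be sketched as follows (a hypothetical helper of ours, with the pulse instants read as tk = Ttmp + k·Δτgc):

```python
def guardian_schedule(t_gc, dtau_gc, t_tmp):
    """Pulse instants of the guardian static control: Ngc = ⌊Tgc/Δτgc⌋ pulses
    at tk = Ttmp + k·Δτgc, so the schedule never exceeds Ttmp + Tgc."""
    n_gc = int(t_gc // dtau_gc)
    return [t_tmp + k * dtau_gc for k in range(1, n_gc + 1)]

# With the paper's horizon (Ttmp = 50, Tgc = 100) and a 15 t.u. spacing:
times = guardian_schedule(t_gc=100, dtau_gc=15, t_tmp=50)   # 6 pulses, last at 140
```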

5.2.2. Formulation of the Contingent Variable Control

The variables of the optimization model are:
  • The number of policies, Ncc, where Ncc can vary from ⌊Ttmp/Δτmax⌋ to ⌊Ttmp/Δτmin⌋;
  • The percentages of susceptible to-be-vaccinated people in each campaign: vj, j ∈ {1, 2, …, N}, such that vj = v(τj) for each τj in Γ = {τ0, …, τN};
  • The time interval between two consecutive campaigns: Δτj;
The set of complete control vaccination campaigns is described by
X* = {(Δτ1, v1, Δτ2, v2, …, ΔτNcc, vNcc, Δτgc, vgc), …}
The constraints are described as follows:
  • Each vaccination ratio, vj, must follow the rule 0.40 ≤ vmin ≤ vj ≤ vmax ≤ 0.95, for the given percentage ratios vmin and vmax.
  • Each time interval between two campaigns, Δτj, must adhere to the rule 1 ≤ tmin ≤ Δτj ≤ tmax ≤ 20, for the given time bounds tmin and tmax.
The two minimization objectives expressing the vaccination allocation problem are
  • F1: integral infection volume;
  • F2: total vaccination cost.
Each contingent variable control is described by Ncc value pairs of the time interval between campaigns, Δτk, and the fraction of susceptible individuals to be vaccinated, vk, k ∈ {1, 2, …, N}. The extra pair (Δτgc,vgc) is derived from guardian static control, which represents a guardian static control policy chosen from solutions X*gc. The complete optimization problem is then shown in Equations (13) and (14):
$$\min_{X_{cc}} \begin{cases} F_1 = \displaystyle\int_{0}^{T_{cntrl}} \left[ I(t) + H(t) \right] dt \\[6pt] F_2 = C_1 \cdot N_{Total} + C_2 \cdot \left[ \displaystyle\sum_{k=1}^{N_{cc}} (1 + v_k)^2 + N_{gc} (1 + v_{gc})^2 \right] + C_3 \cdot \displaystyle\sum_{k=1}^{N_{Total}} v_k \cdot S[k] \end{cases}$$
subject to
$$\begin{aligned}
&\frac{dS}{dt} = \mu N - \mu S - \frac{\beta I S}{N}, && S(0) = S_0 \cdot N \ge 0\\
&\frac{dI}{dt} = kE - (\gamma_1 + a)I - \mu I, && I(0) = I_0 \cdot N \ge 0\\
&\frac{dH}{dt} = aI - \gamma_2 H - \mu H, && H(0) = H_0 \cdot N \ge 0, \quad \tau \in (\tau_j^{+}, \tau_{j+1}]\\
&t_j = \tau_{j-1} + \Delta\tau_j\\
&s(\tau_j^{+}) = s_j^{+} = s_j (1 - v_j)\\
&i(\tau_j^{+}) = i_j^{+} = i_j\\
&h(\tau_j^{+}) = h_j^{+} = h_j\\
&v_j = \begin{cases} v_j, & 1 \le j \le N_{cc}\\ v_{gc}, & N_{cc} + 1 \le j \le N_{gc} \end{cases}\\
&\Delta\tau_j = \begin{cases} \Delta\tau_j, & 1 \le j \le N_{cc}\\ \Delta\tau_{gc}, & N_{cc} + 1 \le j \le N_{gc} \end{cases}\\
&i(t) \le i_{tol} \cdot N, \quad \forall t \in [T_{tmp}, T_{tmp} + T_{gc}]\\
&\sum_{j=1}^{N_{gc}} \Delta\tau_j \le T_{gc}; \quad N_{gc} = \left\lfloor \frac{T_{gc}}{\Delta\tau_{gc}} \right\rfloor\\
&\sum_{j=1}^{N_{cc}} \Delta\tau_j \le T_{tmp}; \quad N_{Total} = N_{gc} + N_{cc}\\
&N_{cc} \in \left\{ \left\lfloor \frac{T_{tmp}}{\Delta\tau_1} \right\rfloor, \ldots, \left\lfloor \frac{T_{tmp}}{\Delta\tau_j} \right\rfloor \right\}, \quad \Delta\tau_j \le T_{tmp}\\
&j = 0, 1, \ldots, N_{gc} - 1\\
&0.40 \le v_{min} \le v_j \le v_{max} \le 0.95\\
&1 \le \Delta\tau_{min} < \Delta\tau_j < \Delta\tau_{max} \le 20
\end{aligned}$$
The cost function in Equations (11) and (13) (the second objective function) follows the same cost function defined by Cruz et al. [44]. Both fixed and variable cost parameters contribute to applying vaccination policies. The constants c1, c2, and c3 in Equation (11) are assumed to be c1 = 10, c2 = 1, and c3 = 1. The term c1 · Ngc represents the fixed part of the cost function in the guardian static control phase. The second term, c2 · Ngc · (1 + vgc)2, expresses the variable monetary costs with regard to the effort required to vaccinate a percentage, vgc, of the susceptible population. Lastly, c3 · Σj vgc · S[j] is the cost of all vaccines across all vaccination policies. In brief, the same interpretation applies to the cost function of the contingent variable control phase in Equation (13).
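The three cost terms can be sketched directly; the guardian-phase form of Equation (11) is shown, and the contingent-phase cost follows the same pattern (the susceptible counts passed in are illustrative inputs):

```python
def guardian_cost(n_gc, v_gc, s_at_pulses, c1=10.0, c2=1.0, c3=1.0):
    """Guardian-phase cost F2 of Equation (11): fixed cost c1·Ngc, variable
    effort c2·Ngc·(1 + vgc)^2, and vaccine cost c3·Σ vgc·S[k]."""
    return (c1 * n_gc
            + c2 * n_gc * (1.0 + v_gc) ** 2
            + c3 * sum(v_gc * s_k for s_k in s_at_pulses))

# e.g., 5 pulses vaccinating 50% of a shrinking susceptible pool:
cost = guardian_cost(5, 0.5, [0.8, 0.5, 0.3, 0.2, 0.1])   # 50 + 11.25 + 0.95
```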
In summary, a chosen guardian static control policy will be implemented indefinitely on time interval [Ttmp, Tcntrl], while the contingent variable control policies are implemented within a specific time frame [0, Ttmp]. During the contingent control phase, the components coming from both phases are integrated to create a unified vaccination campaign that operates throughout the entire optimization time frame [0, Tcntrl].

5.3. Local Search

A canonical (i.e., standard) implementation of the CENSGA was used as the kernel algorithm to perform the optimization. Two memetic algorithms, denoted here as CENSGA—LS and CENSGA—MOSA, were implemented with the canonical CENSGA algorithm and the hash table data structure, as described in [20]. In short, a local search procedure in the CENSGA—LS was applied at iteration 20 as follows: (1) select a set of (r) points from the current nondominated solution set; (2) create a set of (m) new points using a Gaussian distribution, with standard deviations equal to 0.01 times the size of the search space for each decision dimension; (3) evaluate the fitness function of the new points; and (4) seed the new r × m solutions into the current population.
The settings of the local search in this work were as follows: (r) was set to contain four randomly chosen solutions selected from the current nondominated front, with m = 2(2·n + 1), where (n) is the dimension of the problem. The number of newly generated solutions was set to p = N/2, where N is the population size. At each iteration, when the local search was conducted, 2N new solutions were added to the solution pool; thus, the selection operator was performed on a pool of size 4N solutions, with 2N solutions coming from the usual iteration of the CENSGA and 2N coming from the local search.
The proposed CENSGA—MOSA has its own version of the local search procedure, in which a preparation local search is performed at the first iteration of the CENSGA to enhance the initial random population points. The purpose of this step is to start with trained points by performing a partial local search on the vaccination ratio part only. The process can be summarized as follows: (1) A set of (r) points is selected following the same generation process described in Section 4. (2) At the first iteration, a uniform mutator is applied to the vaccination ratio part. Using a uniform mutator in the early stages of the optimization process has a beneficial impact since it distributes points widely, which may hinder premature convergence [57]. (3) In subsequent iterations, a rule-based mutation is used, where three rules are suggested, namely, a swap mutator, a two-point inversion mutator, and a three-point inversion mutator. At each iteration, one rule is selected randomly, and the choice of points is made randomly. Then, a second local search is performed at the final iteration of the CENSGA to refine the nondominated solutions. In this step, the proposed MOSA is applied following the same analogy in Section 4.
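The rule-based mutation step can be sketched as below. The text does not spell out the exact semantics of the two- and three-point inversion mutators, so the implementations here are our assumptions (a three-point inversion is read as two chained segment reversals):

```python
import random

def swap_mutator(x):
    """Exchange the genes at two randomly chosen positions."""
    y = x[:]
    i, j = random.sample(range(len(y)), 2)
    y[i], y[j] = y[j], y[i]
    return y

def inversion_mutator(x, points=2):
    """Reverse the segment between randomly chosen cut points; with
    points=3, two reversals are chained (our assumed reading)."""
    y = x[:]
    for _ in range(points - 1):
        i, j = sorted(random.sample(range(len(y)), 2))
        y[i:j + 1] = y[i:j + 1][::-1]
    return y

def rule_based_mutation(x):
    """Select one of the three rules at random at each iteration."""
    rule = random.choice([swap_mutator,
                          lambda s: inversion_mutator(s, points=2),
                          lambda s: inversion_mutator(s, points=3)])
    return rule(x)
```

All three rules only rearrange existing genes, so every mutant remains a permutation of its parent, which keeps the vaccination-ratio values within their bounds.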

6. Parameter Tuning

The Taguchi method is applied to tune and calibrate the MOSA’s parameter settings. Taguchi is a statistical tool for optimizing the performance of parameters in a given algorithm [58,59]. Using Minitab, the effect plot of the signal-to-noise (S/N) ratio is computed with the following settings: a smaller-the-better type of response for the S/N ratio and L9 (3^4) orthogonal arrays (OA) as the Taguchi representation. We followed the same methodology used in our previous work [20], in which a detailed explanation of the design of the Taguchi test was provided. Table 3 shows the parameters of the MOSA and the defined levels.
Figure 7 depicts the S/N ratio of the Taguchi test runs for the MOSA. The best calibrations of the parameters determined using the Taguchi method are tabulated in Table 4.

7. Results and Discussion

This section presents the results of comparing three versions of the CENSGA algorithm—the canonical CENSGA (CENSGA without local search) [21], the CENSGA with local search (LS) [20], and the proposed CENSGA with the MOSA as a local search executed within the CENSGA—on our pulse vaccine allocation problem.
All versions of the CENSGA were executed 10 times with the following settings: population size, 25; number of generations, 30; geometric distribution, 0.9; probability of crossover, 0.8; probability of mutation, 0.1; and distribution index for crossover and mutation, 10. In addition, the SEIR’s and the MOSA’s calibration parameters are provided in Table 2 and Table 4, respectively. The experiments in Figure 8 illustrate the behavior of the three algorithms, showing average curves for the 10 best Pareto fronts, and Experiment 6 demonstrates the behavior using the final settings. The figures show the effect of changing the experiments’ settings on the range of objective values. Moreover, they demonstrate that the ability of the algorithms to find solutions that cover most of the Pareto front regions may vary.
Finding an optimal solution among a list of nondominated optimal solutions was achieved with the help of a systematic procedure called multi-objective decision making (MODM). The procedure helps decision-makers dealing with competing objectives to select one compromise solution by defining various preferences as performance criteria (e.g., in our case, the probability of disease stability and the probability of disease eradication); performance criteria are derived or interpreted subjectively as problem-oriented indicators of the strength of the defined preferences [60].
The probability of disease stability (PC) is calculated as the ratio at which the number of infected individuals reaches 0.01N or below in the guardian phase, while the probability of disease eradication (PRE) is defined as the ratio at which the effective reproduction number (REF) remains below 1 during the simulation’s timespan. Table A1, Table A2 and Table A3 in Appendix A tabulate the top solutions belonging to the canonical CENSGA, CENSGA—LS, and the proposed CENSGA—MOSA, as well as the mean values of PC and PRE. Here, we conducted our comparison based on the resulting Pareto fronts to gain good insight into the algorithms’ behavior.
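Under the definitions above, both preference criteria reduce to simple ratios over the simulated trajectory. The sketch below assumes each is computed as a fraction of time steps; the exact denominators used in the study are not spelled out, so treat this only as an illustration:

```python
def disease_stability_probability(infected, N, threshold=0.01):
    """P_C: fraction of guardian-phase time steps where I(t) <= threshold * N
    (threshold = 0.01 per the definition in the text)."""
    return sum(1 for i in infected if i <= threshold * N) / len(infected)

def disease_eradication_probability(reff):
    """P_RE: fraction of time steps where the effective reproduction
    number stays below 1."""
    return sum(1 for r in reff if r < 1.0) / len(reff)
```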
Let us begin with our two performance indicators: PC and PRE. On average, the CENSGA—MOSA outperformed both the CENSGA—LS and the canonical CENSGA. The mean probability PC in the CENSGA—MOSA increased by 2.4% and 4.5% compared with the canonical CENSGA and CENSGA—LS, respectively, while the probability PRE in the CENSGA—MOSA showed a notable increase of 8.3% and 8.4% compared with the canonical CENSGA and CENSGA—LS, respectively. It is worth noting that the canonical CENSGA delivered a solid outcome compared with the CENSGA—LS, which demonstrates the power of the canonical CENSGA algorithm. In contrast, a simple local search procedure does not necessarily yield any enhancement, as indicated by the CENSGA—LS: the canonical CENSGA outperformed the CENSGA—LS by 2.2% in PC probability. This means that the CENSGA calls for a more sophisticated local search procedure, as is evident from the CENSGA—MOSA’s results. Table 5 demonstrates that the CENSGA—MOSA’s results are statistically significant, and that no statistical differences were detected between the results of the canonical CENSGA and the CENSGA—LS. Figure 9A,B indicate the ranges of the two probabilities, PC and PRE, as boxplots.
Now, let us examine the objective values of the three algorithms to compare which algorithm is financially cheaper and more effective at disease control. In MODM, we pay more attention to the problem-related preferences (PC and PRE) to measure the targeted expectation, but omitting the objective values may lead to unintentional bias in discussing the results. In Appendix A, Table A1, Table A2 and Table A3, it is apparent that the canonical CENSGA and CENSGA—LS share similar objective values. However, the whole Pareto front of the CENSGA—LS is statistically weaker than that of the canonical CENSGA in terms of the F2 value only; see Table 5. Nevertheless, the results for the CENSGA—MOSA indicate a significant improvement in the mean value and along the whole Pareto front of the F2 value compared with the canonical CENSGA and CENSGA—LS. This means that the CENSGA—MOSA is financially cheaper than the other algorithms. The only drawback of the proposed algorithm is that it is slightly less effective at disease control in terms of the mean value of F1. However, across the entire Pareto front, the three algorithms showed no preference with respect to F1. Figure 9C,D represent the ranges of F1 and F2 values as boxplots.
The execution times of the three algorithms are displayed in Table 6 in the Experiment 6 row. The results show that the canonical CENSGA had a shorter execution time because the canonical version does not involve a local search procedure as an additional step. Regarding the execution times of the CENSGA—MOSA and CENSGA—LS, we noticed that the CENSGA—MOSA took longer to finish its complete run because it uses two rounds of local search and because of the sophisticated nature of the MOSA’s local search procedure.
The SEIR behavior of a selected solution from the Pareto front is visualized in Figure 10A, demonstrating the effect of applying only the contingent variable control as a vaccination intervention. It shows a huge reduction in the infection volume compared with the regular behavior of the SEIR model without vaccination intervention (refer to Figure 2). The infection volume dropped by 31.25%, which indicates the effectiveness of vaccination as an intervention against the flu virus. In addition, Figure 10B–D represent further enhancements over the long run for all algorithms, in which applying the guardian static control in conjunction with the contingent variable control extended the positive effect by propagating it to all other compartments. For instance, the Susceptible compartment fluctuated between 50 and 100 rather than stabilizing at 100, while the Recovered compartment ranged between 850 and 900 instead of 800. In addition, the prevalence dropped to nearly 25 and 10 for the Infected and Exposed compartments, respectively, compared with 100 and 50. The mean number of vaccinated individuals in each campaign is presented in Figure 11. The number of campaigns was similar across all algorithms, but the concentration of vaccination in the CENSGA—MOSA was larger during the peak of the epidemic, which explains its ability to obtain better objective values over the long run.
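The effect of impulsive vaccination on the SEIR dynamics can be illustrated with a forward-Euler sketch. The parameter values and pulse schedule below are placeholders for illustration, not the calibrated settings of Table 2:

```python
def simulate_seir_pulse(beta, sigma, gamma, N, I0, pulses, days, substeps=10):
    """Forward-Euler SEIR with impulsive vaccination.
    `pulses` maps a day to the fraction of susceptibles vaccinated
    (moved directly to R) at the start of that day."""
    S, E, I, R = N - I0, 0.0, float(I0), 0.0
    dt = 1.0 / substeps
    history = [(S, E, I, R)]
    for day in range(days):
        if day in pulses:                      # impulsive vaccination control
            moved = pulses[day] * S
            S -= moved
            R += moved
        for _ in range(substeps):
            dS = -beta * S * I / N             # new infections leave S
            dE = beta * S * I / N - sigma * E  # incubation (E -> I at rate sigma)
            dI = sigma * E - gamma * I         # recovery (I -> R at rate gamma)
            dR = gamma * I
            S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt
        history.append((S, E, I, R))
    return history

# Peak infection with two vaccination pulses versus none (placeholder values).
base = simulate_seir_pulse(1.18, 1 / 5, 1 / 7, 1000, 1, {}, 120)
vacc = simulate_seir_pulse(1.18, 1 / 5, 1 / 7, 1000, 1, {5: 0.4, 12: 0.4}, 120)
```

Comparing the peak of the I compartment in the two runs reproduces the qualitative effect described above: removing susceptibles in pulses before the peak lowers the infection volume while total population is conserved.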

8. Efficiency Performance Measures

Quality indicators (QIs) in multi-objective optimization are measurement tools that approximate the quality of the obtained Pareto front along multiple dimensions. To evaluate the performance of the candidate optimizers in this study, we selected a set of four quality indicators: the error ratio (ER) [61], generational distance (GD) [62], ε-indicator (ε-IQ) [61], and hypervolume [61]. These indicators are widely recognized in the field, and they cover the main quality categories of cardinality, convergence, distribution, and spread. In practice, all of these indicators require the true Pareto front, which is unknown for most real-world problems. To overcome this issue, a practical replacement called the reference set is used in the literature [61]. A reference set typically consists of the nondominated solutions across all Pareto fronts produced during a study. To construct our reference set, we combined the Pareto fronts of the canonical CENSGA, CENSGA—LS, and the proposed CENSGA—MOSA and then applied fast nondominated sorting, ranking, and crowding distance functions to the collection. In this work, the algorithms were coded using RStudio 2022.02.2 (Build 485) and run on a PC with macOS Big Sur, a 2.40 GHz Quad-Core Intel Core i5, and 16 GB of 2133 MHz LPDDR3 RAM.
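Building the reference set from the combined fronts reduces to a nondominated filter over the merged pool. A minimal Python sketch, assuming both objectives are minimized:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective and
    strictly better in at least one. Note dominates(p, p) is False."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def reference_set(*fronts):
    """Merge the Pareto fronts of all algorithms and keep only the
    points not dominated by any point in the merged pool."""
    pool = [p for front in fronts for p in front]
    return [p for p in pool if not any(dominates(q, p) for q in pool)]
```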

8.1. Analysis and Comparison of Results

The effectiveness of the proposed optimization algorithm under different settings was validated using the four quality indicators (QIs) mentioned earlier. A total of nine computational experiments were designed by considering multiple disease scenarios for the SEIR model. Pairing γ1, γ2 ∈ {1/5, 1/7, 1/9} with β ∈ {1.18, 2.36, 4.72} introduced nine independent disease scenarios. The canonical CENSGA, CENSGA—LS, and CENSGA—MOSA were tested in Phase I of the optimization problem only, since the changes in Phase II were minor and could be neglected without affecting the findings. The parameters of the optimization algorithms were set identically, and the value of vgc was set independently to the best value obtained from a guardian regime phase. The number of generations was set to 30, the population size was set to 25, the geometric distribution was set to 0.9, the probability of crossover was set to 0.8, the probability of mutation was set to 0.1, and the distribution index for crossover and mutation was set to 10.
The collection of values tabulated in Table 7 indicates the performance of the three algorithms under multiple experimental scenarios. In general, the CENSGA—MOSA performed better than the canonical CENSGA and the CENSGA—LS. For instance, a higher error ratio means fewer contributions from an algorithm to the reference set; in all experiments, the CENSGA—MOSA contributed more to the reference set than the other two algorithms. The same performance was observed for the spread QI (i.e., the ε-indicator), which showed that the CENSGA—MOSA was well spaced over the Pareto front, whereas the other algorithms tended toward “inconsistency”, or gaps, within the Pareto front. Nevertheless, the distribution and convergence QI represented by the hypervolume indicator affirms that the points on the Pareto front were better distributed in the canonical CENSGA than in the CENSGA—MOSA and CENSGA—LS, where the difference in the integral volume between the reference set and the canonical CENSGA Pareto front was largest. Finally, the convergence of the points on the Pareto front toward the reference set, measured by the GD QI, indicates that the CENSGA—LS had at least one point closer to the reference set than the other two algorithms.
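Two of these indicators are straightforward to compute against the reference set. The sketch below uses the mean-of-distances form of GD (some papers use a root-mean-square variant instead):

```python
import math

def error_ratio(front, reference):
    """ER: fraction of front points that do not appear in the reference set
    (higher means the algorithm contributed less to the reference set)."""
    ref = set(reference)
    return sum(1 for p in front if p not in ref) / len(front)

def generational_distance(front, reference):
    """GD: average Euclidean distance from each front point to its nearest
    neighbor in the reference set (lower means better convergence)."""
    return sum(min(math.dist(p, q) for q in reference) for p in front) / len(front)
```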

8.2. Statistical Analysis

A nonparametric test, the Wilcoxon rank-sum test, was applied to confirm the statistical significance of the findings [63]. The results of the test for the four indicator values are given in Table 8. In this statistical test, the null hypothesis (H0) is that the two indicator medians are equal, while the alternative hypothesis is that at least one median is different. The hypothesis was tested at a confidence level of 0.95; this means that, if the statistical test returns a p-value less than 0.05, the indicator value may be considered significantly different.
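The test statistic can be sketched with the standard normal approximation. This is a simplified illustration without the tie and continuity corrections that statistical packages apply:

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (simplified sketch: no tie or continuity correction)."""
    n1, n2 = len(x), len(y)
    pooled = sorted(x + y)
    # assign average ranks to tied values (ranks are 1-based)
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + j + 1) / 2
        i = j
    W = sum(rank_of[v] for v in x)           # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2              # mean of W under H0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (W - mu) / sd
    p = 1 - math.erf(abs(z) / math.sqrt(2))  # two-sided p = 2 * (1 - Phi(|z|))
    return z, p
```

Rejecting H0 when the returned p-value falls below 0.05 mirrors the decision rule described above.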
According to the statistical test results reported in Table 8, the p-value of the Wilcoxon rank-sum test was less than 0.05 for some performance indicators; thus, H0 is rejected for these indicators. For the remaining indicators, with p-values greater than 0.05, H0 was accepted. In this way, the value of each QI represents the statistical status of each algorithm. The results from the nine experimental scenarios were fed into the nonparametric Wilcoxon rank-sum test to validate the findings. Figure 12 shows the range of values of the algorithms for each indicator. Statistical differences were detected in some experiments, as indicated by the p-values in Table 8. Therefore, it can be concluded that the proposed optimization engine, the CENSGA—MOSA, was statistically verified to be better in terms of the ε-indicator compared with the canonical CENSGA and CENSGA—LS optimizers. The same holds for the ER indicator, although there the CENSGA—MOSA was statistically verified to be better only than the canonical CENSGA. Furthermore, the CENSGA—LS was shown to be statistically better than the canonical CENSGA regarding both the ER and the ε-indicator. It is worth mentioning that neither the GD nor the HV indicator values showed any statistical differences among the three optimizers; in other words, all optimizers performed similarly regarding the GD and HV indicators. In short, we statistically showed that adding a local search procedure as an enhancement step has a positive impact on the performance of the optimizer, and that the type of local search procedure has a powerful impact.

9. Conclusions

This work proposed a memetic multi-objective optimization methodology for allocating a set of vaccination control policies within a specified time frame. The aim was to decrease the size of the infected population with an efficient and effective budget for the available vaccination campaign. We introduced the SEIR model to simulate an epidemic by synthesizing the proposed two-phase control system: a contingent control and a guardian control. Each vaccination campaign comprised a finite set of control policies, each defined by a pair of time spans and a vaccination ratio. A nondominated metaheuristic algorithm, the CENSGA, which offers fast, elitist, and efficient search capability, was proposed as the solution algorithm. In this work, a memetic version of the CENSGA was combined with a population-based simulated annealing (MOSA) as a local search, and an archival hash table stored the nondominated solutions. The statistical analysis indicates the significant applicability of the proposed CENSGA—MOSA compared with the canonical CENSGA and CENSGA—LS.
Overall, vaccination allocation is a critical component of public health policies that can contribute to disease prevention, healthcare system sustainability, and overall societal well-being. Our results ensure optimal resource utilization by distributing limited vaccine supplies efficiently to maximize their impact on public health. In addition, the findings contribute to disease control and eradication through strategic allocation, which helps in the containment of the spread of infectious diseases and, in some cases, can lead to the eradication of diseases altogether. Another important impact is the minimization of the healthcare burden by reducing hospitalizations due to pneumonia and other flu-related complications, particularly during epidemics. This helps to alleviate pressure on healthcare services and reduce the possibility of health system failure. Moreover, the results of this work can benefit the economy by reducing medical costs, lost productivity, and the overall economic impact of infectious diseases.
The proposed approach to vaccination allocation can be generalized to other allocation problems with different details and in different settings. In the future, a range of potential extensions can be addressed. For instance, in the vaccination context, considering multiple age groups is crucial as vaccine effectiveness and susceptibility to diseases can vary significantly across different age ranges. Another interesting research direction is studying vaccination allocation in different spatial structures. This involves understanding how the population is distributed geographically. Different spatial structures could refer to urban versus rural areas, regions with varying population densities, or even different countries or states. In vaccination allocation, it is important to account for spatial disparities in disease prevalence and healthcare infrastructure, as well as to consider factors like travel patterns that can influence disease spread.
Investigating seasonality is also a demanding research direction that affects vaccination allocation. Seasonality in epidemiology refers to the variation in disease incidence over the course of a year. Many infectious diseases, including influenza, exhibit seasonal patterns, and understanding these patterns is crucial for effective vaccination allocation. For example, seasonal flu vaccines are adjusted annually to account for anticipated changes in circulating strains. In the same context, other epidemiological models can be used to understand and predict the spread of diseases within populations. Common models include SIR (susceptible–infected–recovered), SEIR (susceptible–exposed–infected–recovered), and more complex models that account for additional factors like age structure, spatial dynamics, and seasonality.
Further research directions in the field of multi-objective optimization include extensive examination of various selection, mutation, and crossover operators that would benefit the exploration and exploitation of the search space. In addition, applying and testing the suggested algorithm in real-world scenarios is important to demonstrate its validity in practical settings.
An interesting future research direction is to consider advanced optimization algorithms that play a pivotal role in addressing complex decision problems, especially those that fall under the class of NP-hard problems. Advanced optimization algorithms (e.g., hybrid heuristics and metaheuristics, adaptive algorithms, self-adaptive algorithms, island algorithms, polyploid algorithms, and hyper-heuristics) have demonstrated their ability to obtain high-quality solutions in a reasonable amount of time in applications such as online learning, scheduling, multi-objective optimization, transportation, medicine, data classification, and others [64,65,66,67,68,69,70,71]. Similar to the vaccine allocation problem, finding an exact solution for these problems is computationally infeasible. Hence, advanced algorithms employ effective strategies that can help standard heuristics and metaheuristics escape local optima, leading to better overall solutions. For example, adaptive and self-adaptive algorithms [65,66] can adjust their strategies in response to changes in the problem characteristics, making them well-suited for decision problems that evolve over time. Algorithms like island algorithms [67] and hyper-heuristics [64] can be parallelized, allowing for the exploration of multiple solution paths concurrently. Hybrid heuristics [68] and metaheuristics [38] are designed to navigate complex solution spaces efficiently, allowing them to excel at finding trade-off solutions that balance competing interests in problems with conflicting objectives. Moreover, polyploid algorithms [65] are designed to efficiently allocate resources in dynamic and uncertain environments, which is crucial for making optimal decisions. In general, advanced algorithms can be customized to incorporate domain-specific knowledge, allowing them to exploit problem-specific structures and constraints.
Hence, expanding our discussion in future work to include a comparison with some of the recent works in the advanced optimization algorithms field (e.g., [65,69,70,71]) is key to generalizing the obtained findings.

Author Contributions

Conceptualization, A.K.A. and M.H.; Methodology, A.K.A.; Software, A.K.A.; Validation, A.K.A.; Formal analysis, A.K.A.; Investigation, A.K.A.; Resources, A.K.A.; Writing—original draft, A.K.A.; Writing—review & editing, A.K.A. and M.H.; Supervision, M.H.; Project administration, M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Best Pareto front solutions for the canonical CENSGA.
PC      PRE     F1          F2
0.27    0.31    56.62665    1219.241
0.63    0.18    45.32559    1621.835
0.41    0.18    49.54934    1412.716
0.43    0.16    48.04826    1456.893
0.34    0.18    50.11235    1382.050
0.29    0.29    54.88612    1273.971
0.34    0.26    53.58605    1283.310
0.44    0.19    49.11422    1451.460
0.46    0.19    47.78344    1489.475
0.29    0.21    51.84352    1341.191
0.51    0.19    46.96687    1514.805
0.38    0.20    49.50774    1446.645
0.57    0.14    46.73510    1569.582
0.34    0.20    51.42982    1349.898
0.54    0.17    46.95162    1546.749
0.58    0.16    46.39877    1577.155
0.04    0.27    56.21141    1236.038
0.03    0.25    55.54557    1247.019
0.19    0.25    55.03263    1258.703
0.60    0.17    45.81698    1599.144
0.04    0.30    56.47076    1233.693
0.03    0.26    55.52312    1247.660
0.34    0.19    51.04084    1351.873
0.20    0.22    52.74301    1329.603
0.27    0.21    52.27507    1335.285
0.60    0.17    46.26223    1596.934
0.60    0.17    46.38803    1593.350
0.34    0.20    52.76472    1318.208
0.29    0.20    53.24660    1304.670
0.54    0.16    46.80479    1552.235
0.29    0.21    52.35913    1331.906
0.34    0.19    50.44362    1372.778
0.19    0.25    54.96731    1260.678
0.51    0.17    47.26822    1492.212
0.34    0.20    50.99551    1362.985
0.34    0.19    50.56918    1369.225
0.38    0.26    53.41712    1288.175
0.34    0.20    52.96170    1312.596
0.60    0.17    45.59469    1605.459
0.20    0.23    53.40468    1300.726
0.48    0.19    47.58735    1491.165
0.48    0.18    47.09396    1506.175
0.35    0.20    50.83495    1367.459
0.61    0.18    45.53460    1615.811
0.52    0.17    47.04661    1507.552
0.34    0.20    52.95925    1312.666
0.51    0.16    47.13045    1496.123
0.44    0.20    49.35967    1451.188
0.04    0.27    56.23856    1235.218
0.34    0.26    53.49542    1285.971
0.60    0.17    45.59091    1608.750
0.12    0.24    53.40336    1303.785
0.61    0.18    45.57688    1614.588
0.51    0.16    47.14456    1495.726
0.48    0.19    47.64557    1489.524
0.34    0.19    50.45286    1372.555
0.34    0.20    50.90629    1365.505
0.45    0.21    49.47136    1449.828
0.54    0.16    46.83629    1551.330
0.41    0.21    49.45153    1450.786
0.54    0.17    46.91751    1549.035
0.54    0.17    46.89613    1549.645
0.34    0.20    50.87697    1366.345
0.61    0.18    45.53469    1615.808
0.34    0.20    50.89117    1365.950
0.54    0.17    46.90016    1549.530
Average 0.39212121    0.20166667    50.0719612    1421.30529
Table A2. Best Pareto front solutions for the CENSGA—LS.
PC      PRE     F1          F2
0.68    0.17    45.23535    1625.342
0.04    0.27    57.45398    1199.103
0.46    0.17    48.18298    1455.939
0.04    0.23    54.33535    1258.157
0.07    0.27    56.46197    1229.337
0.33    0.19    52.43802    1313.049
0.07    0.25    56.97913    1207.264
0.44    0.15    48.51748    1444.441
0.61    0.18    45.97368    1595.592
0.20    0.24    55.12622    1254.228
0.38    0.19    50.42169    1383.349
0.39    0.20    50.39555    1414.598
0.43    0.17    49.18064    1427.711
0.48    0.19    47.69939    1494.760
0.42    0.16    49.70500    1416.870
0.51    0.18    47.77656    1480.723
0.08    0.24    53.97753    1260.561
0.35    0.22    53.18574    1310.493
0.64    0.19    45.70722    1612.027
0.31    0.21    51.46239    1332.943
0.54    0.16    47.26880    1539.966
0.29    0.21    51.78511    1323.727
0.42    0.16    49.74257    1415.811
0.50    0.16    47.80533    1466.819
0.07    0.25    57.00871    1206.401
0.66    0.18    45.31438    1615.562
0.14    0.27    56.27854    1234.734
0.45    0.21    50.35573    1414.648
0.12    0.26    55.63903    1252.425
0.33    0.19    51.36229    1345.559
0.50    0.20    47.53319    1504.106
0.56    0.17    46.52957    1560.834
0.43    0.17    49.10687    1429.812
0.17    0.22    53.53970    1287.854
0.60    0.16    46.19045    1582.079
0.43    0.16    48.82135    1438.072
0.30    0.21    51.64268    1327.776
0.07    0.27    55.78742    1250.134
0.58    0.16    46.48688    1573.861
0.04    0.27    56.01556    1243.296
0.20    0.24    55.17690    1252.782
0.25    0.22    53.51076    1302.398
0.53    0.19    47.31712    1513.110
0.54    0.19    47.29156    1522.434
0.58    0.15    46.36648    1575.009
0.24    0.22    52.03842    1316.547
0.36    0.21    51.11329    1363.213
0.24    0.22    52.07620    1315.413
0.43    0.16    48.78093    1439.240
0.51    0.20    47.46645    1509.787
0.55    0.17    46.77747    1547.786
0.08    0.22    53.93514    1274.448
0.50    0.16    47.79540    1467.105
0.12    0.27    56.00962    1245.069
0.38    0.19    50.57276    1378.985
0.55    0.16    47.05340    1539.988
0.35    0.21    53.29346    1303.557
0.35    0.20    51.35117    1352.487
0.36    0.21    50.73944    1374.238
0.64    0.19    45.62430    1614.370
0.37    0.20    51.23730    1355.697
0.07    0.27    56.37726    1231.806
0.56    0.17    46.56548    1553.677
0.55    0.16    47.03939    1546.411
0.61    0.16    46.02662    1584.144
0.35    0.22    53.21447    1309.557
0.60    0.17    46.01005    1587.142
0.54    0.19    47.29156    1522.434
0.18    0.21    53.79258    1282.390
0.08    0.23    53.72129    1282.904
0.36    0.21    51.03244    1365.847
0.35    0.22    53.43060    1303.147
0.10    0.24    53.82972    1276.462
0.37    0.20    51.24867    1355.401
0.55    0.16    47.04736    1546.219
0.56    0.17    46.69065    1550.202
0.36    0.21    50.88498    1370.109
0.56    0.17    46.60819    1552.547
0.10    0.23    53.62695    1285.639
0.08    0.22    53.93514    1274.448
0.10    0.24    53.82972    1276.462
0.10    0.23    53.62686    1285.642
0.56    0.17    46.66683    1550.891
0.38    0.20    50.77545    1372.965
0.36    0.21    50.93433    1368.768
0.36    0.21    50.98178    1367.452
0.38    0.20    50.82923    1371.429
0.64    0.19    45.67703    1612.882
0.38    0.19    50.44772    1382.582
0.38    0.19    50.51767    1380.607
0.60    0.17    46.01967    1586.891
0.38    0.19    50.48494    1381.549
0.36    0.21    50.81597    1372.116
0.64    0.19    45.64040    1613.907
0.36    0.21    50.95993    1367.982
0.38    0.19    50.50524    1381.050
0.38    0.19    50.50902    1380.856
0.38    0.19    50.51456    1380.691
Average 0.3705102    0.20071429    50.4904832    1403.15137
Table A3. Best Pareto front solutions for the CENSGA—MOSA.
PC      PRE     F1          F2
0.04    0.25    61.82448    1078.657
0.84    0.19    46.00335    1598.634
0.19    0.30    60.48305    1118.464
0.08    0.33    59.66279    1147.021
0.69    0.21    48.34750    1529.636
0.66    0.19    48.46169    1482.997
0.56    0.23    51.23938    1395.370
0.64    0.21    49.59135    1450.538
0.80    0.20    46.73227    1593.260
0.79    0.20    47.14854    1587.460
0.49    0.19    49.94914    1436.021
0.82    0.20    47.74446    1579.304
0.14    0.27    57.48562    1198.499
0.51    0.25    52.26428    1343.450
0.33    0.27    56.70110    1206.742
0.51    0.24    51.52525    1375.758
0.35    0.22    53.67868    1290.560
0.57    0.23    52.08102    1369.949
0.29    0.25    53.94038    1280.792
0.61    0.18    48.04747    1572.499
0.34    0.27    55.58911    1245.719
0.50    0.20    48.96802    1456.712
0.08    0.34    59.22600    1151.910
0.27    0.30    57.72588    1184.888
0.28    0.31    58.15882    1178.688
0.08    0.22    55.71036    1231.114
0.35    0.23    53.51751    1303.360
0.02    0.31    58.92436    1161.877
0.11    0.29    55.52485    1257.387
0.63    0.22    50.81205    1406.255
0.03    0.03    55.85369    1226.999
0.16    0.16    55.18026    1258.337
0.28    0.28    58.57443    1174.423
0.28    0.28    58.87418    1167.199
0.37    0.37    52.83298    1315.306
0.57    0.57    48.31294    1538.140
0.59    0.59    48.24304    1556.686
0.58    0.58    48.14942    1559.639
0.29    0.29    57.53871    1190.234
0.16    0.16    54.49140    1279.993
0.38    0.38    56.49555    1218.827
0.49    0.49    50.54035    1435.585
0.49    0.49    50.73942    1420.547
0.17    0.17    56.68458    1214.131
0.19    0.19    59.48303    1147.993
0.39    0.39    56.30349    1224.317
0.08    0.08    58.45062    1177.844
0.07    0.07    54.98443    1265.283
0.53    0.53    48.85818    1459.792
0.59    0.59    50.58657    1430.856
0.63    0.63    50.78233    1407.135
0.49    0.49    50.66856    1422.479
0.47    0.47    52.42334    1339.078
0.65    0.65    48.64135    1469.225
0.04    0.04    54.78421    1271.454
0.45    0.45    51.82700    1375.583
0.47    0.47    52.81972    1327.656
0.66    0.66    48.62788    1478.788
0.39    0.39    54.94450    1268.039
0.65    0.65    48.75590    1466.091
0.39    0.28    56.33183    1223.552
0.35    0.22    53.00873    1310.315
0.47    0.25    52.65116    1332.504
0.47    0.25    52.80478    1328.099
0.57    0.18    48.33079    1537.614
0.46    0.25    51.95109    1374.904
0.35    0.22    53.23792    1303.745
0.35    0.22    53.09239    1307.865
0.05    0.26    54.69342    1274.190
0.45    0.25    52.62534    1333.093
0.33    0.24    52.59753    1338.675
0.43    0.18    50.55105    1434.704
0.08    0.26    54.63971    1275.763
0.21    0.27    54.53889    1278.453
0.59    0.19    48.22877    1557.082
0.43    0.28    54.55030    1275.916
0.66    0.19    48.54107    1481.248
0.33    0.24    52.59593    1338.721
0.35    0.22    53.16560    1305.787
0.57    0.23    52.01344    1371.955
0.35    0.22    53.01436    1310.147
0.65    0.20    48.71108    1467.348
0.56    0.23    52.02751    1371.557
0.66    0.19    48.58027    1480.113
0.35    0.22    53.20860    1304.566
0.65    0.20    48.71601    1467.209
0.35    0.22    53.18912    1305.126
0.66    0.19    48.56743    1480.473
0.63    0.22    50.79620    1406.738
0.35    0.22    53.23416    1303.845
0.57    0.23    52.02044    1371.750
Average 0.41571429    0.28472527    52.7992938    1346.74986

References

  1. Plotkin, S.A.; Orenstein, W.A.; Offit, P.A. (Eds.) Plotkin’s Vaccines, 7th ed.; Elsevier: Philadelphia, PA, USA, 2018. [Google Scholar]
  2. Cardoso, R.T.N.; Dusse, A.C.S.; Adam, K. Optimal Vaccination Campaigns Using Stochastic SIR Model and Multiobjective Impulsive Control. Trends Comput. Appl. Math. 2021, 22, 201–220. [Google Scholar] [CrossRef]
  3. Krammer, F.; Smith, G.J.D.; Fouchier, R.A.M.; Peiris, M.; Kedzierska, K.; Doherty, P.C.; Palese, P.; Shaw, M.L.; Treanor, J.; Webster, R.G.; et al. Influenza. Nat. Rev. Dis. Primers 2018, 4, 3. [Google Scholar] [CrossRef] [PubMed]
  4. Paget, J.; Spreeuwenberg, P.; Charu, V.; Taylor, R.J.; Iuliano, A.D.; Bresee, J.; Simonsen, L.; Viboud, C. Global mortality associated with seasonal influenza epidemics: New burden estimates and predictors from the GLaMOR Project. J. Glob. Health 2019, 9, 020421. [Google Scholar] [CrossRef]
  5. Nair, H.; Brooks, W.A.; Katz, M.; Roca, A.; Berkley, J.A.; Madhi, S.A.; Simmerman, J.M.; Gordon, A.; Sato, M.; Howie, S.; et al. Global burden of respiratory infections due to seasonal influenza in young children: A systematic review and meta-analysis. Lancet 2011, 378, 1917–1930. [Google Scholar] [CrossRef]
  6. Thompson, W.W.; Weintraub, E.; Dhankhar, P.; Cheng, P.Y.; Brammer, L.; Meltzer, M.I.; Bresee, J.S.; Shay, D.K. Estimates of US influenza-associated deaths made using four different methods. Influenza Resp. Viruses 2009, 3, 37–49. [Google Scholar] [CrossRef]
  7. Putri, W.C.W.S.; Muscatello, D.J.; Stockwell, M.S.; Newall, A.T. Economic burden of seasonal influenza in the United States. Vaccine 2018, 36, 3960–3966. [Google Scholar] [CrossRef] [PubMed]
  8. De Courville, C.; Cadarette, S.M.; Wissinger, E.; Alvarez, F.P. The economic burden of influenza among adults aged 18 to 64: A systematic literature review. Influenza Resp. Viruses 2022, 16, 376–385. [Google Scholar] [CrossRef]
  9. Gong, H.; Shen, X.; Yan, H.; Lu, W.Y.; Zhong, G.J.; Dong, K.G.; Yang, J.; Yu, H.J. Estimating the disease burden of seasonal influenza in China, 2006–2019. Zhonghua Yi Xue Za Zhi 2021, 101, 560–567. [Google Scholar] [PubMed]
  10. Biswas, M.H.A.; Paiva, L.T.; de Pinho, M.D.R. A SEIR model for control of infectious diseases with constraints. Math. Biosci. Eng. 2014, 11, 761–784. [Google Scholar] [CrossRef]
  11. Suman, B.; Kumar, P. A survey of simulated annealing as a tool for single and multiobjective optimization. J. Oper. Res. Soc. 2006, 57, 1143–1160. [Google Scholar] [CrossRef]
  12. Ledesma, S.; Avia, G.; Sanchez, R. Practical Considerations for Simulated Annealing Implementation. In Simulated Annealing; Ming, C., Ed.; InTech: Houston, TX, USA, 2008. [Google Scholar] [CrossRef]
  13. Goldberg, D.E. Genetic Algorithms in Search, Optimization, and Machine Learning; Addison-Wesley Pub. Co.: Reading, MA, USA, 1989. [Google Scholar]
  14. Kumar, P.; Sharath, S.; D’Souza, G.R.; Sekaran, K.C. Memetic NSGA—A multi-objective genetic algorithm for classification of microarray data. In Proceedings of the 15th International Conference on Advanced Computing and Communications (ADCOM 2007), Guwahati, India, 18–21 December 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 75–80. [Google Scholar] [CrossRef]
  15. Bektur, G. An NSGA-II-Based Memetic Algorithm for an Energy-Efficient Unrelated Parallel Machine Scheduling Problem with Machine-Sequence Dependent Setup Times and Learning Effect. Arab. J. Sci. Eng. 2022, 47, 3773–3788. [Google Scholar] [CrossRef]
  16. Benlic, U.; Hao, J.-K. Memetic search for the quadratic assignment problem. Expert Syst. Appl. 2015, 42, 584–595. [Google Scholar] [CrossRef]
  17. Mei, Y.; Tang, K.; Yao, X. Decomposition-Based Memetic Algorithm for Multiobjective Capacitated Arc Routing Problem. IEEE Trans. Evol. Computat. 2011, 15, 151–165. [Google Scholar] [CrossRef]
  18. Mencía, R.; Sierra, M.R.; Mencía, C.; Varela, R. Memetic algorithms for the job shop scheduling problem with operators. Appl. Soft Comput. 2015, 34, 94–105. [Google Scholar] [CrossRef]
  19. Pan, Q.-K.; Ruiz, R. An estimation of distribution algorithm for lot-streaming flow shop problems with setup times. Omega 2012, 40, 166–180. [Google Scholar] [CrossRef]
  20. Alkhamis, A.K.; Hosny, M. A Synthesis of Pulse Influenza Vaccination Policies Using an Efficient Controlled Elitism Non-Dominated Sorting Genetic Algorithm (CENSGA). Electronics 2022, 11, 3711. [Google Scholar] [CrossRef]
  21. Deb, K.; Goel, T. Controlled Elitist Non-dominated Sorting Genetic Algorithms for Better Convergence. In Evolutionary Multi-Criterion Optimization; Zitzler, E., Thiele, L., Deb, K., Coello, C.A.C., Corne, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2001; pp. 67–81. [Google Scholar]
  22. Serafini, P. Simulated Annealing for Multi Objective Optimization Problems. In Multiple Criteria Decision Making; Tzeng, G.H., Wang, H.F., Wen, U.P., Yu, P.L., Eds.; Springer: New York, NY, USA, 1994; pp. 283–292. [Google Scholar] [CrossRef]
  23. Czyzżak, P.; Jaszkiewicz, A. Pareto simulated annealing—A metaheuristic technique for multiple-objective combinatorial optimization. J. Multi-Crit. Decis. Anal. 1998, 7, 34–47. [Google Scholar] [CrossRef]
  24. Ulungu, E.L.; Teghem, J.; Fortemps, P.H.; Tuyttens, D. MOSA method: A tool for solving multiobjective combinatorial optimization problems. J. Multi-Crit. Decis. Anal. 1999, 8, 221–236. [Google Scholar] [CrossRef]
  25. Li, H.; Landa-Silva, D. Evolutionary Multi-objective Simulated Annealing with adaptive and competitive search direction. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 3311–3318. [Google Scholar] [CrossRef]
  26. Suppapitnarm, A.; Seffen, K.A.; Parks, G.T.; Clarkson, P.J. A simulated annealing algorithm for multiobjective optimization. Eng. Optim. 2000, 33, 59–85. [Google Scholar] [CrossRef]
  27. Sankararao, B.; Yoo, C.K. Development of a Robust Multiobjective Simulated Annealing Algorithm for Solving Multiobjective Optimization Problems. Ind. Eng. Chem. Res. 2011, 50, 6728–6742. [Google Scholar] [CrossRef]
  28. Bandyopadhyay, S.; Saha, S.; Maulik, U.; Deb, K. A Simulated Annealing-Based Multiobjective Optimization Algorithm: AMOSA. IEEE Trans. Evol. Computat. 2008, 12, 269–283. [Google Scholar] [CrossRef]
  29. Suman, B. Simulated annealing-based multiobjective algorithms and their application for system reliability. Eng. Optim. 2003, 35, 391–416. [Google Scholar] [CrossRef]
  30. Cunha, M.; Marques, J. A New Multiobjective Simulated Annealing Algorithm—MOSA-GR: Application to the Optimal Design of Water Distribution Networks. Water Resour. Res. 2020, 56, e2019WR025852. [Google Scholar] [CrossRef]
  31. Zhou, A.; Qu, B.-Y.; Li, H.; Zhao, S.-Z.; Suganthan, P.N.; Zhang, Q. Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm Evol. Comput. 2011, 1, 32–49. [Google Scholar] [CrossRef]
  32. Gunantara, N. A review of multi-objective optimization: Methods and its applications. Cogent Eng. 2018, 5, 1502242. [Google Scholar] [CrossRef]
  33. Hu, S.; Wu, X.; Liu, H.; Wang, Y.; Li, R.; Yin, M. Multi-Objective Neighborhood Search Algorithm Based on Decomposition for Multi-Objective Minimum Weighted Vertex Cover Problem. Sustainability 2019, 11, 3634. [Google Scholar] [CrossRef]
  34. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  35. Hajipour, V.; Tavana, M.; Santos-Arteaga, F.J.; Alinezhad, A.; Di Caprio, D. An efficient controlled elitism non-dominated sorting genetic algorithm for multi-objective supplier selection under fuzziness. J. Comput. Des. Eng. 2020, 7, 469–488. [Google Scholar] [CrossRef]
  36. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  37. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef]
  38. Talbi, E.-G. Metaheuristics from Design to Implementation; Wiley: Hoboken, NJ, USA, 2009. [Google Scholar]
  39. Khan, A.; Zaman, G. Global analysis of an age-structured SEIR endemic model. Chaos Solitons Fractals 2018, 108, 154–165. [Google Scholar] [CrossRef]
  40. Wang, X.; Peng, H.; Shi, B.; Jiang, D.; Zhang, S.; Chen, B. Optimal vaccination strategy of a constrained time-varying SEIR epidemic model. Commun. Nonlinear Sci. Numer. Simul. 2019, 67, 37–48. [Google Scholar] [CrossRef]
  41. Lee, S.; Golinski, M.; Chowell, G. Modeling Optimal Age-Specific Vaccination Strategies Against Pandemic Influenza. Bull. Math. Biol. 2012, 74, 958–980. [Google Scholar] [CrossRef] [PubMed]
  42. Aletreby, W.T.; Alharthy, A.M.; Faqihi, F.; Mady, A.F.; Ramadan, O.E.; Huwait, B.M.; Alodat, M.A.; Lahmar, A.B.; Mahmood, N.N.; Mumtaz, S.A.; et al. Dynamics of SARS-CoV-2 outbreak in the Kingdom of Saudi Arabia: A predictive model. Saudi Crit. Care J. 2020, 4, 79. [Google Scholar] [CrossRef]
  43. Kim, S.; Jung, E. Prioritization of vaccine strategy using an age-dependent mathematical model for 2009 A/H1N1 influenza in the Republic of Korea. J. Theor. Biol. 2019, 479, 97–105. [Google Scholar] [CrossRef] [PubMed]
  44. da Cruz, A.R.; Cardoso, R.T.N.; Takahashi, R.H.C. Multiobjective synthesis of robust vaccination policies. Appl. Soft Comput. 2017, 50, 34–47. [Google Scholar] [CrossRef]
  45. van den Driessche, P. Reproduction numbers of infectious disease models. Infect. Dis. Model. 2017, 2, 288–303. [Google Scholar] [CrossRef]
  46. Shulgin, B. Pulse vaccination strategy in the SIR epidemic model. Bull. Math. Biol. 1998, 60, 1123–1148. [Google Scholar] [CrossRef]
  47. Hill, A.N.; Longini, I.M. The critical vaccination fraction for heterogeneous epidemic models. Math. Biosci. 2003, 181, 85–106. [Google Scholar] [CrossRef]
  48. Marques, J.; Cunha, M.; Savić, D. Many-objective optimization model for the flexible design of water distribution networks. J. Environ. Manag. 2018, 226, 308–319. [Google Scholar] [CrossRef]
  49. Amine, K. Multiobjective Simulated Annealing: Principles and Algorithm Variants. Adv. Oper. Res. 2019, 2019, 8134674. [Google Scholar] [CrossRef]
  50. Deb, K.; Sindhya, K.; Okabe, T. Self-adaptive simulated binary crossover for real-parameter optimization. In Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation—GECCO ’07, London, UK, 7–11 July 2007; ACM Press: New York, NY, USA, 2007; p. 1187. [Google Scholar] [CrossRef]
  51. Deb, K.; Deb, D. Analysing mutation schemes for real-parameter genetic algorithms. Int. J. Artif. Intell. Soft Comput. 2014, 4, 1–28. [Google Scholar] [CrossRef]
  52. Rao, S.; Nyquist, A.-C.; Stillwell, P.C. Influenza. In Kendig’s Disorders of the Respiratory Tract in Children; Elsevier: Amsterdam, The Netherlands, 2019; pp. 460–465.e2. [Google Scholar] [CrossRef]
  53. Patel, R.; Longini, I.M.; Halloran, M.E. Finding optimal vaccination strategies for pandemic influenza using genetic algorithms. J. Theor. Biol. 2005, 234, 201–212. [Google Scholar] [CrossRef]
  54. Cardoso, R.T.N.; Takahashi, R.H.C. Solving Impulsive Control Problems by Discrete-Time Dynamic Optimization Methods. TEMA—Tendências Em Matemática Apl. E Comput. 2008, 9, 21–30. [Google Scholar] [CrossRef]
  55. Yang, T. Impulsive control. IEEE Trans. Automat. Contr. 1999, 44, 1081–1083. [Google Scholar] [CrossRef]
  56. Bertsekas, D.P. Dynamic Programming and Optimal Control, 2nd ed.; Athena Scientific: Belmont, MA, USA, 2000. [Google Scholar]
  57. Lin, Y.-K.; Yeh, C.-T. Maximal network reliability with optimal transmission line assignment for stochastic electric power networks via genetic algorithms. Appl. Soft Comput. 2011, 11, 2714–2724. [Google Scholar] [CrossRef]
  58. Uray, E.; Carbas, S.; Geem, Z.W.; Kim, S. Parameters Optimization of Taguchi Method Integrated Hybrid Harmony Search Algorithm for Engineering Design Problems. Mathematics 2022, 10, 327. [Google Scholar] [CrossRef]
  59. Sabarish, K.V.; Baskar, J.; Paul, P. Overview on L9 taguchi optimizational method. Int. J. Adv. Res. Eng. Technol. 2019, 10, 652–658. [Google Scholar] [CrossRef]
  60. Yazdi, M.; Nedjati, A.; Zarei, E.; Abbassi, R. Application of multi-criteria decision-making tools for a site analysis of offshore wind turbines. In Artificial Intelligence and Data Science in Environmental Sensing; Elsevier: Amsterdam, The Netherlands, 2022; pp. 109–127. [Google Scholar] [CrossRef]
  61. Li, M.; Yao, X. Quality Evaluation of Solution Sets in Multiobjective Optimisation: A Survey. ACM Comput. Surv. 2020, 52, 1–38. [Google Scholar] [CrossRef]
  62. Santos, T.; Xavier, S. A Convergence indicator for Multi-Objective Optimisation Algorithms. TEMA 2018, 19, 437–448. [Google Scholar] [CrossRef]
  63. Higgins, J.J. An Introduction to Modern Nonparametric Statistics; Brooks/Cole: Pacific Grove, CA, USA, 2004. [Google Scholar]
  64. Drake, J.H.; Kheiri, A.; Özcan, E.; Burke, E.K. Recent advances in selection hyper-heuristics. Eur. J. Oper. Res. 2020, 285, 405–428. [Google Scholar] [CrossRef]
  65. Dulebenets, M.A. An Adaptive Polyploid Memetic Algorithm for scheduling trucks at a cross-docking terminal. Inf. Sci. 2021, 565, 390–421. [Google Scholar] [CrossRef]
  66. Chuang, C.-S.; Hong, C.-C. New Self-Adaptive Algorithms and Inertial Self-Adaptive Algorithms for the Split Variational Inclusion Problems in Hilbert Space. Numer. Funct. Anal. Optim. 2022, 43, 1050–1068. [Google Scholar] [CrossRef]
  67. Li, J.; Gonsalves, T. Parallel Hybrid Island Metaheuristic Algorithm. IEEE Access 2022, 10, 42268–42286. [Google Scholar] [CrossRef]
  68. Blum, C.; Puchinger, J.; Raidl, G.R.; Roli, A. Hybrid metaheuristics in combinatorial optimization: A survey. Appl. Soft Comput. 2011, 11, 4135–4151. [Google Scholar] [CrossRef]
  69. Singh, P.; Pasha, J.; Moses, R.; Sobanjo, J.; Ozguven, E.E.; Dulebenets, M.A. Development of exact and heuristic optimization methods for safety improvement projects at level crossings under conflicting objectives. Reliab. Eng. Syst. Saf. 2022, 220, 108296. [Google Scholar] [CrossRef]
  70. Dulebenets, M.A. A Diffused Memetic Optimizer for reactive berth allocation and scheduling at marine container terminals in response to disruptions. Swarm Evol. Comput. 2023, 80, 101334. [Google Scholar] [CrossRef]
  71. Singh, E.; Pillay, N. A Study of Ant-Based Pheromone Spaces for Generation Perturbative Hyper-Heuristics. In Proceedings of the Genetic and Evolutionary Computation Conference, Lisbon, Portugal, 15–19 July 2023; ACM: New York, NY, USA, 2023; pp. 84–92. [Google Scholar] [CrossRef]
Figure 1. General outline of SEIR.
Figure 2. Behavior of the SEIR model without intervention, considering the parameters shown in Table 2.
Figure 3. Particular solutions selected from the initial hash table population.
Figure 4. Hypothetical trajectory pattern of population-based MOSA.
Figure 5. MOSA: three general cases.
Figure 6. MOSA flowchart.
Figure 7. Main effect plot for signal-to-noise (S/N) ratio of the MOSA.
Figure 8. Average run of the 10 best Pareto fronts under different experimental settings.
Figure 9. Boxplot of the outcomes of the three algorithms.
Figure 10. SEIR behavior under two vaccination allocation interventions: (A) SEIR under contingent variable control; (BD) SEIR under complete pulse vaccination intervention (contingent variable control and guardian static control).
Figure 11. Mean number of vaccinations for the three algorithms.
Figure 12. Performance measures of all mean experiment cases.
Table 1. Template of the simulated annealing algorithm.
Input: starting temperature (T0), final temperature (Tmin), temperature cooling rate (α), and initial solution (SInitial)
Initialize solution SCurrent = SInitial;
Initialize temperature T = T0;
Repeat
    Repeat
        Generate a random neighbor candidate solution SCandidate;
        ΔE = f(SCandidate) − f(SCurrent);
        If ΔE ≤ 0 Then SCurrent = SCandidate
        Else accept SCandidate (SCurrent = SCandidate) with probability exp(−ΔE/T);
    Until equilibrium condition reached
    T = T × α;
Until stopping criterion satisfied
Output: best solution found
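The single-objective template above can be sketched directly in Python. This is a minimal illustration, not the paper's multi-objective MOSA: the quadratic objective, the uniform-step neighbor function, and the parameter defaults (borrowed from Table 4) are all assumptions made for the example.

```python
import math
import random

def simulated_annealing(f, s_initial, neighbor, t0=50.0, t_min=1e-3,
                        alpha=0.7, moves_per_temp=10, seed=0):
    """Single-objective SA following the template in Table 1."""
    rng = random.Random(seed)
    s_current = s_best = s_initial
    t = t0
    while t > t_min:                      # stopping criterion: temperature floor
        for _ in range(moves_per_temp):   # inner loop: "equilibrium" at fixed T
            s_candidate = neighbor(s_current, rng)
            delta_e = f(s_candidate) - f(s_current)
            # accept improvements always; worse moves with probability exp(-dE/T)
            if delta_e <= 0 or rng.random() < math.exp(-delta_e / t):
                s_current = s_candidate
            if f(s_current) < f(s_best):
                s_best = s_current        # track the best solution found so far
        t *= alpha                        # geometric cooling schedule
    return s_best

# toy usage: minimize f(x) = x^2 starting far from the optimum at x = 0
best = simulated_annealing(f=lambda x: x * x,
                           s_initial=10.0,
                           neighbor=lambda x, rng: x + rng.uniform(-1.0, 1.0))
```

Early in the run the high temperature accepts most uphill moves (diversification); as T decays geometrically, the acceptance test becomes effectively greedy (intensification).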
Table 2. SEIR model parameters.
Parameter | Definition | Value
β | Transmission rate | 4.5 (t.u.)^−1
γ1 | Recovery rate of infected individuals | 1/7 (t.u.)^−1
γ2 | Recovery rate of hospitalized individuals | 1/7 (t.u.)^−1
μ | Birth and mortality rate | 1/70 (t.u.)^−1
n | Progress to immune rate | 10 (t.u.)
k | Progress to infectious rate | 1/2 (t.u.)^−1
a | Rate of infectious becoming hospitalized | 0.2135 (t.u.)^−1
R0 | Reproduction number | 15
itol | Tolerance ratio | 0.01
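To make the dynamics behind Table 2 concrete, the following sketch integrates a basic SEIR model with vital dynamics by forward Euler. It is a simplified stand-in, assuming the textbook SEIR equations: the paper's full model additionally tracks hospitalization (parameters a, γ2) and the pulse vaccination controls, which are not reproduced here, and the initial conditions are illustrative.

```python
def seir_step(s, e, i, r, beta, k, gamma, mu, dt):
    """One forward-Euler step of a basic SEIR model with vital dynamics."""
    n = s + e + i + r
    ds = mu * n - beta * s * i / n - mu * s   # births - new exposures - deaths
    de = beta * s * i / n - (k + mu) * e      # exposures - progression to I
    di = k * e - (gamma + mu) * i             # progression to I - recovery
    dr = gamma * i - mu * r                   # recovery - deaths
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

# illustrative run with Table 2-style rates: beta = 4.5, k = 1/2,
# gamma = 1/7, mu = 1/70 (all per time unit), 1% initially infectious
s, e, i, r = 0.99, 0.0, 0.01, 0.0
for _ in range(10000):                        # 100 time units at dt = 0.01
    s, e, i, r = seir_step(s, e, i, r, 4.5, 0.5, 1 / 7, 1 / 70, 0.01)
```

Because births exactly balance deaths, the total population is conserved at each step; with no intervention and a large R0, the susceptible fraction collapses quickly, matching the uncontrolled behavior shown in Figure 2.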
Table 3. The levels defined for the parameters of the MOSA.
Parameter | Level 1 | Level 2 | Level 3
Starting temperature (A) | 30 | 50 | 70
Temperature cooling rate (B) | 0.5 | 0.7 | 0.9
Number of local search moves (C) | 5 | 7 | 10
Table 4. Optimal values for the parameters of the MOSA.
Parameter | Optimal Value
Starting temperature (A) | 50
Temperature cooling rate (B) | 0.7
Number of local search moves (C) | 10
Table 5. p-Values returned by Wilcoxon Rank-Sum test.
Metric | MOSA-LS | MOSA-Canonical | LS-Canonical
PC | 1.45311 × 10^−7 | 0.004994938 | 0.9927282
PRE | 5.258285 × 10^−38 | 4.044913 × 10^−18 | 0.9994427
F1 | 1 | 1 | 0.9999513
F2 | 5.167215 × 10^−9 | 1.772605 × 10^−22 | 2.568021 × 10^−7
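The Wilcoxon rank-sum test reported above is usually run through a statistics library (e.g. SciPy's `ranksums`), but its logic is compact enough to sketch in plain Python. This version uses the normal approximation with mid-ranks for ties and no tie correction in the variance, so for small samples or many ties its p-values are approximate.

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (mid-ranks for ties, no tie correction in the variance)."""
    x, y = list(x), list(y)
    combined = sorted((v, g) for g, sample in ((0, x), (1, y)) for v in sample)
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):              # assign mid-ranks to tied values
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + 1 + j) / 2    # average of ranks i+1 .. j
        i = j
    w = sum(rk for rk, (_, g) in zip(ranks, combined) if g == 0)
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2         # null mean of the rank sum W
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# clearly separated samples -> tiny p; identically ranked samples -> p = 1
p_separated = rank_sum_p(range(1, 11), range(11, 21))
p_identical = rank_sum_p([1, 2, 3], [1, 2, 3])
```

Read against Table 5: very small p-values (e.g. MOSA vs. LS on PRE) indicate the two algorithms' outcome distributions differ significantly, while values near 1 (e.g. LS vs. Canonical on PC) do not.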
Table 6. Execution time in seconds.
Average execution time over 10 runs (s)
Experiment | CENSGA-MOSA | Canonical CENSGA | CENSGA-LS
Experiment 1 | 307.4102 | 227.3253 | 256.8435
Experiment 2 | 319.188 | 210.3709 | 263.9084
Experiment 3 | 332.1827 | 216.6491 | 270.7399
Experiment 4 | 317.2059 | 204.6829 | 266.15789
Experiment 5 | 315.9615 | 214.1225 | 265.0167
Experiment 6 | 318.5307 | 251.59 | 203.7231
Experiment 7 | 313.8855 | 206.3925 | 270.2337
Experiment 8 | 318.9852 | 205.8596 | 267.3136
Experiment 9 | 336.0775 | 217.9315 | 273.2358
Overall average | 319.936 | 217.214 | 259.686
Table 7. Nine computational experiments and their associated QI values.
# | Experiment | MOSA: ER | GD | ε-IQ | HV | LS: ER | GD | ε-IQ | HV | Canonical: ER | GD | ε-IQ | HV
1 | γ = 1/5, β = 1.18 | 0.613 | 2.924 | 10.55 | 0.479 | 0.607 | 3.319 | 8.988 | 0.806 | 0.781 | 4.227 | 9.594 | 1.297
2 | γ = 1/5, β = 2.36 | 0.581 | 6.213 | 7.029 | 0.972 | 0.588 | 1.442 | 30.09 | 0.536 | 0.832 | 1.182 | 40.32 | 0.511
3 | γ = 1/5, β = 4.72 | 0.602 | 2.318 | 2.521 | 0.792 | 0.579 | 1.294 | 24.76 | 0.710 | 0.819 | 0.718 | 42.17 | 0.683
4 | γ = 1/7, β = 1.18 | 0.609 | 4.105 | 12.39 | 0.516 | 0.609 | 3.450 | 8.933 | 0.811 | 0.782 | 4.272 | 9.569 | 1.299
5 | γ = 1/7, β = 2.36 | 0.581 | 6.618 | 7.029 | 0.972 | 0.588 | 1.442 | 30.09 | 0.536 | 0.832 | 1.182 | 40.32 | 0.511
6 | γ = 1/7, β = 4.72 | 0.454 | 3.446 | 2.647 | 0.746 | 0.686 | 0.770 | 39.17 | 0.680 | 0.782 | 0.593 | 54.37 | 0.729
7 | γ = 1/9, β = 1.18 | 0.606 | 4.543 | 11.69 | 0.548 | 0.611 | 3.513 | 8.933 | 0.987 | 0.782 | 4.275 | 9.569 | 1.299
8 | γ = 1/9, β = 2.36 | 0.581 | 7.035 | 7.029 | 0.972 | 0.588 | 1.442 | 30.09 | 0.536 | 0.832 | 1.182 | 40.32 | 0.511
9 | γ = 1/9, β = 4.72 | 0.602 | 2.425 | 2.521 | 0.792 | 0.579 | 1.294 | 24.76 | 0.710 | 0.797 | 0.718 | 42.17 | 0.699
Average | | 0.581 | 4.403 | 7.045 | 0.754 | 0.604 | 1.996 | 22.87 | 0.702 | 0.804 | 2.039 | 32.05 | 0.838
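Of the quality indicators in Table 7, generational distance (GD) is the simplest to sketch: the average distance from each obtained Pareto point to its nearest point on a reference front. This uses the plain mean-distance variant (some definitions instead take the p-norm of the distances); the two-objective fronts below are made-up toy data, not results from the paper.

```python
import math

def generational_distance(front, reference):
    """Mean Euclidean distance from each obtained point to its nearest
    point on the reference front (smaller is better)."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(p, ref) for ref in reference) for p in front) / len(front)

# toy bi-objective example: each obtained point sits 0.1 away from the front
reference_front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
obtained_front = [(0.1, 1.0), (0.5, 0.6), (1.0, 0.1)]
gd = generational_distance(obtained_front, reference_front)
```

GD captures convergence only; that is why Table 7 pairs it with ER, the ε-indicator, and hypervolume, which also reward spread along the front.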
Table 8. p-Values returned by the Wilcoxon rank-sum test.
Metric | MOSA-LS | MOSA-Canonical | LS-Canonical
ER | 0.317 | 0.005 | 0.005
GD | 0.995 | 0.988 | 0.453
ε-IQ | 0.029 | 0.029 | 0.004
HV | 0.317 | 0.594 | 0.829