Estimation of Small-Scale Kinetic Parameters of an Escherichia coli (E. coli) Model by the Enhanced Segment Particle Swarm Optimization (ESe-PSO) Algorithm

Abstract: The ability to create “structured models” of biological simulations is becoming more and more commonplace. Although computer simulations can be used to estimate models, they are restricted by the lack of experimentally available parameter values, which must be approximated. In this study, an Enhanced Segment Particle Swarm Optimization (ESe-PSO) algorithm that can estimate the values of small-scale kinetic parameters is described and applied to E. coli’s main metabolic network as a model system. The glycolysis, phosphotransferase system, pentose phosphate, TCA cycle, gluconeogenesis, glyoxylate, and acetate formation pathways of Escherichia coli are represented by a system of Differential Algebraic Equations (DAEs) for the metabolic network. The algorithm uses segments to organize particle movements and a dynamic inertia weight (ω) to increase the algorithm’s exploration and exploitation potential. Compared with state-of-the-art algorithms, this adjustment improves estimation accuracy. The numerical findings indicate good agreement between the observed and predicted data. In this regard, the ESe-PSO algorithm achieved superior accuracy compared with the Segment Particle Swarm Optimization (Se-PSO), Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Differential Evolution (DE) algorithms. On the basis of this approach, it was concluded that the kinetic parameters of small-scale and even whole-cell models can be estimated.


Introduction
Computing simulations and optimization are important subjects in systems biology and bioinformatics, where they play an important role in mathematical approaches to the reverse engineering of biological systems and managing uncertainty in that context. The large amount of computation required for the simulation, calibration, and analysis of models has prompted a number of researchers to propose various parallelization schemes in an attempt to accelerate these activities [1]. The development of dynamic (kinetic) models at a smaller scale has been the focus of recent research, with the eventual goal of producing whole-cell models. Model calibration has gained a lot of interest, especially with reference to global optimization metaheuristics and hybrid approaches [2].
Thus, kinetic models are being developed so that the dynamics of biological processes can be described in a quantitative manner. These models consider the stoichiometry of reactions and the kinetic expressions associated with each enzyme. In vitro experiments are used to determine the kinetic properties of enzymes by exposing isolated enzymes to optimal conditions. These conditions are not the same as those found around enzymes inside living cells; thus, when compared to in vivo-measured concentrations, the use of in vitro parameters in kinetic models can result in incorrect predictions of intracellular metabolite concentrations [3]. The dynamics of cell metabolism can be fine-tuned by perturbing a culture and measuring fluxes, enzyme levels, and intra- and extracellular metabolite concentrations as a function of time. Advances in experimental techniques have paved the way for developing dynamic models for metabolic networks that can estimate microbial behavior [4].
Various Escherichia coli (E. coli) models were created and simulated in order to better understand the model's behavior and produce specific products, such as those reported in [5][6][7][8][9]. In [5], the researchers simulate and estimate the kinetic parameters of two pathways while ignoring the other pathways (gluconeogenesis and glyoxylate). In [6], glutamine/aspartate metabolism and fructose consumption are incorporated into the model by utilizing the phosphotransferase (PTS) system. However, the model does not consider the entire pathway model or E. coli cell growth. The researchers in [7] generate the experimental time courses of extracellular glucose and biomass in the E. coli model but do not consider the overall production of the pathways. As reported in [8], Kotte used the Monod equations to simulate glucose uptake without estimating the specific growth rate based on a molecular process in E. coli and did not consider estimating the entire E. coli main metabolic pathway. Neglecting the entire main metabolic pathway may result in an incorrect prediction of the simulation result. As a result, small-scale kinetic parameters must be comprehensively investigated. The study of a complete model can only be comprehensively achieved by including the entire cell system. This is because the feedback loops of several metabolites, such as PEP, OAA, PYR, and others, may affect other enzymes in the main metabolic pathway, causing the concentrations of other metabolites to change dynamically over time [10,11].
Recently, three different strategies were compared for estimating the kinetic parameters of a dynamic model of central carbon metabolism in Escherichia coli: the modified simplex method [12,13], simulated annealing, and differential evolution. According to the authors, differential evolution produced the best results. Moreover, the researchers in [14] revised the central carbon metabolism model and estimated kinetic parameters for the glycolytic enzymes from [5]. The parameter estimation problem was solved using MATLAB and a weighted least squares objective function.
As a result, because of the difficulties in calculating kinetic parameters, researchers are increasingly using metaheuristic optimization algorithms [15] to estimate the kinetic parameters of the E. coli model and other biological models. Several of these metaheuristic methods have been utilized to estimate kinetic parameters in [16][17][18][19], and some of these algorithms used experimental data derived from [5,20]. Furthermore, there are hundreds or even thousands of parameters in biological kinetic models, which make parameter estimation difficult because of the large search space that must be investigated. High-dimensional kinetic models (with hundreds or thousands of different kinetic parameters) are difficult to compute; as a result, the performance of the above algorithms is negatively affected, resulting in reduced accuracy [21].
Differential Evolution (DE) is a method that is commonly used and explored for parameter estimation in metabolic models. The DE method's key shortcoming is its time consumption, which makes it difficult to set its parameters when a high number of processors and numerous local searches are involved. Evolutionary algorithms, such as the Genetic Algorithm (GA), share many of the same principles as DE, and GA is also often used to estimate metabolic model parameters. When compared to DE, Particle Swarm Optimization (PSO), and other methods, this algorithm's key flaw is its computation time [22,23].
Many other fields, including those referenced in [24][25][26], have used the PSO algorithm since its inception in 1995. The method was originally modeled on the behavior of birds and fish searching for their food.

Problem Formulation
Kinetic metabolic models consist of enzymatic rate equations embedded in systems of ODEs. The models' behavior and the associated process design can be hampered by erroneous metabolic kinetic models. As a result of the nonlinearity of the model, obtaining accurate results is a challenging task because the kinetics are often gathered from multiple laboratories and under differing conditions [32][33][34].
In this regard, the main kinetic metabolic model of E. coli simulated in [9] contains large-scale kinetic parameters distributed across six pathways. This model had a great impact on E. coli model simulations in terms of understanding and simulating the organism's behavior. However, the researchers stated that the model requires further investigation focused on its kinetic parameter estimations and responses, and that further comparisons with real experimental data with small-scale kinetic parameters are necessary. Thus, further study to this end is of significant importance. The process of parameter estimation involves looking for the parameter values in a mathematical model (formulated using ODEs) that best fit the experimental data. This can be accomplished by minimizing the scalar distance between the model prediction and the experimental data, considering experimental errors. This problem can be characterized as a multimodal, continuous, single-objective optimization problem. The objective function of kinetic parameter estimation considered in this work is described as follows:

f = Σ_m (y_r,m − y_s,m)^2 (1)

where f is the objective function, y_r,m is the actual model output r for metabolite m, and y_s,m is the simulation model output s for metabolite m.
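The sum-of-squared-residuals objective described above can be sketched as follows; the metabolite values here are hypothetical placeholders, not the paper's experimental data:

```python
import numpy as np

def sse_objective(y_actual, y_sim):
    """Sum of squared differences between the measured outputs y_r,m and
    the simulated outputs y_s,m over all metabolites m."""
    y_actual = np.asarray(y_actual, dtype=float)
    y_sim = np.asarray(y_sim, dtype=float)
    return float(np.sum((y_actual - y_sim) ** 2))

# Hypothetical values (rows: time points, columns: metabolites)
measured = [[1.0, 0.8], [0.5, 0.4]]
simulated = [[0.9, 0.8], [0.6, 0.4]]
f_val = sse_objective(measured, simulated)
```

Any of the optimizers discussed below can minimize this function over the kinetic parameter search space.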
Because biological problems are nonlinear, estimating the best parameters for this problem is difficult because many local minima exist. As a result, most optimization algorithms become easily trapped in local minima, which slows convergence.
The kinetic parameters are then estimated using the methods described in Section 3.

Materials and Methods
This section describes the model structure and the DE, GA, PSO, Se-PSO, and ESe-PSO algorithms used to estimate the small-scale kinetic parameters.

The Structure of the E. coli Kinetic Model
The dynamic model of the main metabolic pathway of E. coli formulated in [9] was used as a benchmark and is described in Figure 1. The model comprises glycolysis, the pentose phosphate pathway, the TCA cycle, gluconeogenesis, and the glyoxylate pathway, together with acetate formation and the phosphotransferase system. It has 23 metabolites, 28 enzymatic reactions, 10 co-factors (e.g., adenosine triphosphate (ATP), coenzyme A (CoA), and nicotinamide adenine dinucleotide phosphate (NADPH)), and 172 kinetic parameters. Equation (2) gives the rate at which the concentration of each metabolite in the considered model changes.
dC_i/dt = Σ_j R_ij v_j − µC_i (2)

where C_i is the concentration of metabolite i, R_ij is the stoichiometric coefficient of metabolite i in reaction j, v_j is the rate of reaction j, and the µC_i term accounts for the dilution effect, with growth rate µ = 0.1 h−1, due to the increase in cell volume as the cell grows [9].
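The mass balance of Equation (2) can be illustrated with a toy network; the two-metabolite stoichiometry and rate laws below are hypothetical and stand in for the 23-metabolite model, using a simple explicit Euler integration step:

```python
import numpy as np

# Toy illustration of dC_i/dt = sum_j R_ij * v_j - mu * C_i for a
# hypothetical 2-metabolite, 2-reaction network (not the E. coli model).
R = np.array([[1.0, -1.0],   # stoichiometry of metabolite A in reactions 1, 2
              [0.0,  1.0]])  # stoichiometry of metabolite B in reactions 1, 2
mu = 0.1  # dilution due to growth (h^-1)

def rates(C):
    # Hypothetical kinetic rate laws v_j(C)
    return np.array([0.5, 0.2 * C[0]])

def euler_step(C, dt):
    # One explicit Euler step of the mass balance
    return C + dt * (R @ rates(C) - mu * C)

C = np.array([1.0, 0.0])  # initial concentrations of A and B
for _ in range(100):
    C = euler_step(C, 0.01)
```

In practice a stiff ODE/DAE solver would replace the Euler step, but the structure of the right-hand side is the same.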
The model mass balance equations in Table 1 and the kinetic rate equations in Table 2 for the investigated model described in Figure 1 are listed below.

[Table 2. Kinetic rate equations for the model reactions, including isocitrate lyase (IcL), malate synthase (MS), 2-ketoglutarate (2KG), succinate dehydrogenase (SDH), fumarate hydratase (Fum), and malate dehydrogenase (MDH); the rate expressions themselves are not recoverable from this extraction.]

After the model structure is described, the algorithms used to estimate the kinetics are stated below.

GA Algorithm
One of the most well-known heuristic search algorithms is the Genetic Algorithm, which is based on the process of natural evolution. In recent decades, GA has received considerable attention in engineering design optimization. GA was first introduced in computer science in the 1960s, when a group of biologists attempted to implement the process of natural evolution in computer code [35]. GA refers to any population-based algorithm that finds the best solution by applying selection, crossover, and mutation across chromosomes. A member of the population is referred to as a chromosome (genotype), which is a binary or real-valued string. Several types of GA have been introduced in optimization studies following Barricelli [35]. However, a given optimization problem can be addressed in GA by following three main steps after initialization (step 0) [36]: Step 1: generation; Step 2: selection with crossover and mutation; Step 3: stopping criteria.
In the initialization step (step 0), the initial chromosomes (genotypes) are generated. In most practical problems, genotypes are generated at random, and the goodness of each chromosome is assessed using an objective function and its associated constraints.
Step 1: Introduce the generation operator and apply it to the current population to generate an intermediate one. In the first iteration, the initial and intermediate populations are the same (step 0); in subsequent iterations, the generation operator forms the intermediate population.
Step 2: To create the next population, crossover and mutation operators are applied to the results of Step 1. The term crossover is used when the operator creates a chromosome by combining the properties of two parental chromosomes, whereas the term mutation is used when a new chromosome is formed by making minor changes to the properties of a single parent chromosome [37].
The algorithm design is influenced by experience, model specifications, and the association of experimental results with various heuristic search algorithms. Thus, the algorithm used in this work is described in Algorithm 1 below.
1. Generate an initial population with m chromosomes at random;
2. Determine the fitness, f(m), of each chromosome in m;
3. Build the evolved population using the following criteria;
4. Select two chromosomes, m1 and m2, using proportional fitness selection;
5. Apply the crossover function on m1 and m2 to create a new chromosome (m3);
6. Apply the mutation function on m3 to generate m′;
7. Include m′ in the next population;
8. Replace the old population with the new population;
9. If the stopping criteria are not met, repeat the procedure from step 2.
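The GA steps above can be sketched as a minimal real-coded implementation. The tournament selection, blend crossover, Gaussian mutation, and elitism used here are illustrative operator choices (the paper names proportional fitness selection but does not specify operator details), and the sphere test function is a placeholder for the kinetic objective:

```python
import random

def ga_minimize(f, bounds, pop_size=30, generations=60, p_cx=0.9, p_mut=0.1, seed=1):
    """Minimal real-coded GA sketch: select parents, crossover, mutate,
    replace, and repeat until the generation budget is spent."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=f)
        new_pop = scored[:2]  # keep the two best chromosomes (elitism)
        while len(new_pop) < pop_size:
            m1 = min(rng.sample(scored, 3), key=f)  # tournament selection
            m2 = min(rng.sample(scored, 3), key=f)
            # Crossover: blend genes of m1 and m2 into a new chromosome m3
            m3 = [0.5 * (a + b) if rng.random() < p_cx else a
                  for a, b in zip(m1, m2)]
            # Mutation: perturb some genes of m3, clamped to the bounds
            m3 = [min(max(g + rng.gauss(0.0, 0.1), lo), hi)
                  if rng.random() < p_mut else g
                  for g, (lo, hi) in zip(m3, bounds)]
            new_pop.append(m3)
        pop = new_pop
    return min(pop, key=f)

best = ga_minimize(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```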

DE Algorithm
More than two decades ago, Storn and Price [38] presented Differential Evolution (DE), a novel optimization method designed to handle nondifferentiable, nonlinear, and multimodal objective functions. To meet this requirement, DE was designed as a stochastic parallel direct search method that borrows concepts from the broad class of evolutionary algorithms while requiring only a few easily chosen control parameters. Early experimental results show that DE outperforms other well-known evolutionary algorithms in terms of convergence [38,39].
The combination of randomly chosen vectors produces new individuals (vectors) in each population. In our context, this operation is known as mutation. The resulting vectors are then mixed with another predetermined vector, the target vector, in a process known as recombination. This operation produces the trial vector. If and only if the trial vector reduces the value of the objective function f, it is accepted for the next generation. This operation is known as selection. The following is a high-level description of the aforementioned operators (for one generation). Thus, the algorithm used in this work is described in Algorithm 2 below.
1. Initialization operation: generate the initial individuals x_i^0, i = 1, 2, ..., NP, in S^0; determine the mutation probability F, the crossover probability CR, and the maximal number of generations G_m; set the current generation G = 0;
2. For each individual x_i^G, i = 1, 2, ..., NP, perform steps 3-5 to produce the population for the next generation G + 1;
3. Mutation operation: a perturbed individual x_i^(G+1) is generated by adding the weighted difference of two randomly chosen individuals to a third one;
4. Crossover operation: components of the perturbed individual are selected by a binomial distribution to perform the crossover operation and generate the offspring x̄_i^(G+1);
5. Evaluation (selection) operation: the offspring x̄_i^(G+1) competes one-to-one with its parent x_i^G;
6. Repeat steps 2-5 as long as the number of generations is smaller than the allowable maximum G_m and the best individual has not been obtained.
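A condensed DE/rand/1/bin sketch of Algorithm 2 follows. The population size NP, mutation factor F, crossover rate CR, and the sphere test function are illustrative values, not the paper's settings:

```python
import random

def de_minimize(f, bounds, NP=20, F=0.7, CR=0.9, Gm=80, seed=2):
    """Minimal DE sketch: mutation from three distinct random vectors,
    binomial crossover, and one-to-one selection against the parent."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [f(x) for x in pop]
    for _ in range(Gm):
        for i in range(NP):
            a, b, c = rng.sample([j for j in range(NP) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees one mutated component
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(td, lo), hi) for td, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:  # selection: trial survives only if not worse
                pop[i], fit[i] = trial, ft
    return pop[fit.index(min(fit))]

best = de_minimize(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```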

PSO Algorithm
Kennedy, a social psychologist, and Eberhart, an electrical engineer, developed the idea of particle swarms to create computational intelligence by exploiting already existing natural interacting systems [40,41].
The PSO method was developed in the mid-1990s while attempting to mimic the elegant, well-choreographed motion of a flock of birds as part of a social cognition study looking into the idea of collective intelligence in biological populations. Soon after its creation, it was recognized as an evolutionary technique [42]. This technique has received extensive study and in-depth reflection due to its incredibly effective problem-solving capabilities in a variety of technical and scientific applications.
The earliest simulations [42] modelled flocks of birds foraging for grain and were driven by social behavior. This quickly became a potent optimization technique, called Particle Swarm Optimization (PSO) [43,44].
The collection of particles in the search space in the PSO algorithm aims to optimize a fitness function, similar to the movement of flocks of birds in the natural environment in search of food. The particles are randomly placed in the search space, and their quality or fitness at each position is evaluated. Then, over a predetermined number of iterations, each particle moves to a new location that is a better fit than the previous one. With some random perturbations, this movement is based on the history of the particle's best and current locations together with the best positions attained by other particles in the swarm. Thus, with a fixed number of particles working together, the swarm achieves the most optimal solution to the fitness function in the problem space over subsequent iterations [44,45]. The fitness or objective function in the PSO algorithm is a performance evaluation criterion that is dependent on the algorithm's application area. A performance criterion is typically defined by a mathematical formulation that quantifies system performance via a performance index. Thus, the algorithm used in this work is described in Algorithm 3 below.
1. Initialize the velocities and positions of the swarm;
2. Initialize the kinetics and their boundaries;
3. Find the particle with the best fitness in the neighborhood;
4. Calculate the velocity of each particle using:
v_i(t + 1) = ωv_i(t) + c1 r1 (p_i − x_i(t)) + c2 r2 (G_i − x_i(t))
5. Update the position of each particle using:
x_i(t + 1) = x_i(t) + v_i(t + 1)
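A minimal global-best PSO sketch of Algorithm 3 is given below. The values c1 = c2 = 1.2 and ω = 0.9 follow the setting reported later in the paper; the swarm size, iteration count, and sphere test function are illustrative:

```python
import random

def pso_minimize(f, bounds, n=25, iters=100, w=0.9, c1=1.2, c2=1.2, seed=3):
    """Minimal global-best PSO sketch using the velocity and position
    updates of Algorithm 3."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]       # personal best positions p_i
    pf = [f(xi) for xi in x]
    g = pbest[pf.index(min(pf))][:]   # global best position G
    gf = min(pf)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (g[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pf[i]:
                pbest[i], pf[i] = x[i][:], fx
                if fx < gf:
                    g, gf = x[i][:], fx
    return g

best = pso_minimize(lambda p: sum(v * v for v in p), [(-5.0, 5.0)] * 3)
```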

Se-PSO Algorithm
The Se-PSO algorithm is derived from a mix of segmentation and Particle Swarm Optimization (PSO) algorithms, wherein segmentation is utilized to address the local and global optimum point problems of PSO. On the basis of the dimension, each parameter range can be divided into two or more segments. The concept of parameter segmentation is theoretically illustrated in Figure 2. Assuming that we have three kinetic parameters that are initialized with search space boundaries, parameters 1 and 2 are divided into two segments, whereas parameter 3 is kept as one segment. Then, on the basis of the objective function, a group of particles moving in the search space is initiated. Each search iteration yields the local optimum position in each parameter segment. This scenario is shown in Figure 2 [28]. The following parameters are set in this optimization identification case: the bird steps for PSO = 30, c2 = 1.2, c1 = 1.2, and ω = 0.9. According to the experimental results, it is better to initially set the inertia weight to a higher value and gradually reduce it to obtain refined solutions. To improve the algorithm, a new inertia weight is proposed that subjects the damping process to a linearly decreasing time-varying function.
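The segmentation idea of Figure 2 can be sketched with a small helper that splits each parameter's search range; the equal-width split is an assumption for illustration, since the paper does not prescribe how segment boundaries are chosen:

```python
def segment_bounds(bounds, n_segments):
    """Split each kinetic parameter's [lo, hi] search range into equal
    segments (hypothetical helper illustrating Se-PSO segmentation)."""
    segmented = []
    for (lo, hi), k in zip(bounds, n_segments):
        width = (hi - lo) / k
        segmented.append([(lo + s * width, lo + (s + 1) * width)
                          for s in range(k)])
    return segmented

# Three kinetic parameters: the first two split into 2 segments, the third
# kept as a single segment, matching the Figure 2 scenario.
segs = segment_bounds([(0.0, 4.0), (0.0, 2.0), (0.0, 1.0)], [2, 2, 1])
```

Each segment then hosts its own local search, and the best segment per parameter is retained for refinement.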


ESe-PSO Algorithm
The primary parameters of the PSO algorithm (c1, c2, and ω) help the algorithm obtain the best possible result. When dealing with high-dimensional, difficult situations, PSO suffers from early convergence to local optima [26]. As a result, the three parameters indicated above must be carefully calculated for a robust PSO [46]. The first is the cognitive learning factor that pulls particles toward their personal optimum position, while the second is the social learning component that pushes particles toward the global optimum position. The inertia weight retains part of the particles' prior velocity, helping to prevent them from converging prematurely to local minima. In the basic PSO utilized in Se-PSO, the inertia weight (ω) was set to 0.9 [28], indicating the need to adjust it in order to produce a better estimation in this work.
This approach builds on the fundamental PSO algorithm used in the segmented PSO. The advancement concerns the inertia weight (ω), which determines the influence that the last iteration's speed has on the current speed and allows the particles to explore larger regions in the beginning and exploit neighboring areas at lower speeds in later stages. The development is carried out by initializing the inertia weight and a damping parameter, which is recalculated throughout the iteration process to improve convergence control and enable the particles to search for a global solution.
According to the PSO algorithm, a particle with a higher fitness value is thought to be nearer to the global optimum than one with a lower fitness value. Stronger local exploration capabilities may be necessary for the particle with higher fitness to search its immediate surroundings for the best solution. On the other hand, a particle with lower fitness requires stronger global exploration capabilities to move quickly toward the particles with higher fitness. This improves the likelihood that a particle discovers the global optimum and speeds up the PSO's convergence. Lower inertia weights can be employed for particles with superior fitness, which helps to accelerate the PSO's convergence. Higher inertia weights are used for particles that are less fit and far from the optimal particle position, which can improve their capacity for global exploration and help these particles escape from local optima [24].
The process of damping is executed before the end of each loop iteration. This process supports the search by recalculating the inertia weight after each iteration until the iterations are finished. This method supports the particles through control of convergence, thereby balancing the local and global search by increasing the exploration and exploitation of the particles to locate the optimal values [46,47]. The inertia weight was initialized on the interval ω ∈ [0.01, 1], and the damping value was set to ωdamp = 0.99. Subsequently, the damping process damp is calculated in a decreasing manner in the iteration loop, using Equations (3) and (4), until the iterations are finished:

ω = ωmax − ((ωmax − ωmin) × iter)/(itermax − itermin) (3)

damp = damp × ωdamp (4)

where damp is the damping process, ωdamp is the damping value, ωmax is the maximum value of ω = 1, ωmin is the minimum value of ω = 0.01, and itermax and itermin are the maximum and minimum iterations, respectively. In this regard, this process explores wider areas in the beginning and exploits nearby areas in the later stages at a reduced speed. The changes are added to Se-PSO, since this algorithm combines segmentation and particle swarm optimization, through the inertia weight (ω) modification in addition to segmentation. This modification helps the basic Se-PSO reach the best optimum solution more effectively and increases exploration and exploitation. However, after the damping process is added, the velocity of Se-PSO [28] should be modified according to the damping changes, as described in Equation (5):

v_{t+1,j} = ω × damp × v_{t,j} + c1 r1 (p_i − x_{t,j}) + c2 r2 (G_i − x_{t,j}), x_{t+1,j} = x_{t,j} + v_{t+1,j} (5)

where x_{t,j} is the position of particle i, v_{t,j} is the velocity of particle i, t is the iteration of particles, j is the number of segments, ω is the inertia weight, damp is the damping process, c1 and c2 are the acceleration coefficients, and r1 and r2 are random numbers between 0 and 1. The p_i term is the local optimum position of particle i, and G_i is the global optimum position of particle i.
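Since the printed forms of Equations (3) and (4) are damaged in this extraction, the snippet below shows one plausible reading, offered as an assumption rather than the paper's exact scheme: a linearly decreasing inertia weight scaled by a geometric damping factor, which decreases monotonically over the iterations as the text describes:

```python
def damped_inertia(it, iter_max, w_max=1.0, w_min=0.01, w_damp=0.99):
    """Assumed damping scheme: linear decrease of the inertia weight
    multiplied by a cumulative damping factor w_damp**it."""
    w = w_max - (w_max - w_min) * it / iter_max  # linear decrease
    return w * (w_damp ** it)                    # geometric damping

# Sample the effective inertia weight at a few points of a 100-iteration run
weights = [damped_inertia(i, 100) for i in range(0, 101, 25)]
```

The effective weight starts at 1.0 and decays well below ωmin by the final iteration, shifting the swarm from exploration to exploitation.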
After ω is modified, the ESe-PSO algorithm proceeds according to the process illustrated by Algorithm 4 below:
1. Set the Se-PSO parameters, the problem dimension, and the kinetic boundaries;
2. Initialize v_{t,j} and x_{t,j};
3. For k = 1 to the number of segments;
4. Select the best G_i(t, j) for each segment;
5. Set the iteration counter and include the damping process damp;
6. Update ω, v_{t,j}, and x_{t,j};
7. Evaluate f;
8. If f_new < f, update G_i(t, j) and set x_{i,j}(t + 1) = G_i(t, j);
9. If f_new > f, return to step 1 until the iterations are finished or a good solution is discovered; if f_new < f, print G_i(t, j);
10. Set x_{i,j}(t + 1) = G_i(t, j);
11. Set the optimal segment of each particle;
12. new optimal_segment_k^p = current position_k^p;
13. Apply the PSO algorithm [11] to the new optimal values.
The ESe-PSO algorithm can be described on the basis of the parameters and steps in the above pseudocode. Specifically, all of the parameters for the ESe-PSO method, the problem's dimensions, and the kinetic boundaries are set. Then, according to the kinetic sensitivity, the number of segments and the length of each segment for each kinetic parameter are determined. Following that, starting with segment 1 and continuing until the number of segments is reached, the objective function is assessed, and the global optimum position is selected for each segment. Thereafter, the iteration counter is set and the damping process damp is included, after which the inertia weight ω, the velocity v_{t,j}, and the position x_{t,j} are updated. Once the updating process has been completed, the fitness function f is evaluated. A particle segment's global optimum position G(t, j) is updated if the new fitness f_new is better than the current fitness f. If, on the other hand, the new fitness f_new is worse than the present fitness f, the algorithm returns to the beginning of its parameters and makes the necessary modifications. Alternatively, the algorithm finds a highly satisfactory solution, or the iteration is completed; in this situation, the global best of each particle segment is published, and the global best of each particle segment is used to determine the current position of each particle segment, G(t, j) = current position_k^p. The PSO algorithm then iterates through the possibilities until it finds the best solution.
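The flow described above can be condensed into a two-phase sketch: a segmentation pass that scores segment boxes, followed by PSO with a damped inertia weight starting from the winning box. The equal-width segments, probe counts, damping form, and test function are all assumptions for illustration, not the paper's exact procedure:

```python
import itertools
import random

def ese_pso(f, bounds, n_seg=2, probes=10, n=15, iters=80,
            c1=1.2, c2=1.2, w_max=1.0, w_min=0.01, w_damp=0.99, seed=4):
    """Condensed ESe-PSO-style sketch: pick a promising segment box,
    then refine with PSO under a damped inertia weight."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Segmentation pass: split each parameter range and probe every box
    axis_segs = []
    for lo, hi in bounds:
        seg_w = (hi - lo) / n_seg
        axis_segs.append([(lo + s * seg_w, lo + (s + 1) * seg_w)
                          for s in range(n_seg)])
    best_box, best_val = None, float("inf")
    for box in itertools.product(*axis_segs):
        for _ in range(probes):
            p = [rng.uniform(lo, hi) for lo, hi in box]
            fp = f(p)
            if fp < best_val:
                best_box, best_val = box, fp
    # PSO phase: initialize inside the winning box (positions are not
    # re-clamped, so the swarm may refine beyond the box boundary)
    x = [[rng.uniform(lo, hi) for lo, hi in best_box] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest, pf = [xi[:] for xi in x], [f(xi) for xi in x]
    g = pbest[pf.index(min(pf))][:]
    damp = 1.0
    for t in range(iters):
        w = (w_max - (w_max - w_min) * t / iters) * damp  # damped inertia
        damp *= w_damp
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (g[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pf[i]:
                pbest[i], pf[i] = x[i][:], fx
                if fx < f(g):
                    g = x[i][:]
    return g

# Hypothetical 3-parameter problem with its optimum at (1, 1, 1)
best = ese_pso(lambda p: sum((v - 1.0) ** 2 for v in p), [(-4.0, 4.0)] * 3)
```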

Algorithms Estimation Result
A comparative inertia weight analysis is shown in Table 3. Five different inertia weights were tested 30 times to determine the best solution with the best (lowest) fitness function under a minimization updating scheme, as described in [48][49][50]. The purpose here was to investigate and enhance the Se-PSO algorithm's ability with the inertia weight to increase accuracy. These inertia weights were selected because the Se-PSO algorithm uses a constant value, and the particles face difficulties in exploration and exploitation since the problem is highly nonlinear and the inertia weight is constant. However, applying different inertia weight (ω) strategies can enhance the algorithm. Table 3 records the best five inertia weights ω in terms of the accuracy and efficiency of the ESe-PSO algorithm when dealing with small-scale kinetic parameter estimation.
Figure 3 describes the five scenarios of the inertia weight (ω) used to determine the best and worst solutions before starting the estimation.
, is the maximum and minimum iterations, and  = 0.99 is the damping process value.Estimation of small-scale kinetics needs a sensitivity analysis to explore the kinetics' efficacy.However, the sensitivity analysis shows how many outputs are affected by the changes made to each kinetic parameter in order to identify the most sensitive kinetics.The sensitive kinetic parameters of [4,10] were used for estimation purposes, where the sensitivity percentage efficacy is shown in Table 4.It is difficult to optimize 172 kinetics at the same time due to the small-scale kinetic parameters.As a result, the sensitivity analysis was used to identify the most sensitive parameters and reduce estimation costs.All of the kinetics were tested up to a 200% perturbation.Furthermore, during the simulation, 7 kinetic parameters were identified to be the most effective kinetics out of a total of 172 and were designated as the parameters that needed to be estimated, while the remaining  + ω min where ω min is the minimum inertia weight = 0.4, ω max is the maximum inertia weight = 0.9, and iter max, min is the maximum and minimum iterations.0.023 0.07 3 [27] Constant inertia weight ω = 0.9 0.04 0.8

The chaotic inertia weight is ω = (ω_1 − ω_2) × (iter_max − iter)/iter_max + ω_2 × z, where z is a chaotic number in the interval (0, 1) and ω_1, ω_2 are the maximum and minimum inertia weights.

The estimation of small-scale kinetics needs a sensitivity analysis to explore the kinetics' efficacy. The sensitivity analysis shows how strongly the outputs are affected by changes made to each kinetic parameter, in order to identify the most sensitive kinetics. The sensitive kinetic parameters of [4,10] were used for estimation purposes, and the sensitivity percentage efficacy is shown in Table 4. It is difficult to optimize all 172 kinetic parameters at the same time. As a result, the sensitivity analysis was used to identify the most sensitive parameters and reduce the estimation cost. All of the kinetics were tested up to a 200% perturbation. During the simulation, 7 kinetic parameters were identified as the most effective out of the total of 172 and were designated as the parameters to be estimated, while the remaining kinetics were left at their original values because their efficacy remained below 20% even when the perturbation was increased to 200%. The kinetic parameters' segments were increased throughout the ESe-PSO execution by adding one segment to each kinetic parameter to boost the likelihood of finding an accurate solution, which allowed the algorithm to search a broad space. Additionally, the kinetic limits, with their upper and lower values, were started with tiny increments. On the basis of the objective function, the optimal segment was determined, as shown in Table 5. Thereafter, after updating the inertia weight by adding the damping process, the updates of the location, velocity, and objective function follow those of the PSO algorithm. The best segment is then used as the new boundary, and PSO searches around it. The purpose of the damping process is to boost the particles' exploration and exploitation capabilities in order to find an optimal solution. The damping factor has a value of 0.99, and the inertia weight range is [1, 0.01]. As a result, the damping decreases the inertia weight in each iteration until the run is complete.
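The damped inertia-weight schedule and the standard PSO velocity update it feeds into can be sketched as follows (variable names are ours; the damping factor 0.99 and the weight range [1, 0.01] are from the text):

```python
import random

def damped_weight_schedule(n_iter, w0=1.0, w_min=0.01, damp=0.99):
    """Inertia weight starting at 1.0, multiplied by the damping
    factor 0.99 each iteration and floored at 0.01."""
    w, schedule = w0, []
    for _ in range(n_iter):
        schedule.append(w)
        w = max(w * damp, w_min)
    return schedule

def update_velocity(v, x, pbest, gbest, w, c1=2.0, c2=2.0):
    """Standard PSO velocity update with inertia weight w, personal
    best pbest, and global best gbest (scalar sketch)."""
    r1, r2 = random.random(), random.random()
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

Because the weight decays multiplicatively, early iterations favor exploration (large w) and later iterations favor exploitation (small w), which is the stated aim of the damping process.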
Notably, the ESe-PSO approach utilized in this work precisely minimizes the distance of the model responses. This minimization covers 15 of the metabolites rather than the whole model output. This is due to the addition of kinetic parameter segmentation and the damping procedure to the Se-PSO algorithm. Tables 4 and 5 illustrate the segmentation of the kinetic parameters and the predicted kinetic parameters.
The segmentation was chosen based on the effect of each kinetic parameter in the sensitivity analysis results. Only two kinetic parameters influence 43 or more metabolites and enzymes, accounting for more than 80% of the total. These were assigned two segments as a result of their extremely important effect on the model output. The others were assigned a single segment. Each segment count was then expanded by one in order to maximize the likelihood of discovering an optimal solution.
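A minimal sketch of the segment mechanism described above, assuming equal-width segments and a scalar objective (the function names and sampling scheme are illustrative, not the paper's exact implementation):

```python
def split_into_segments(lower, upper, n_segments):
    """Divide a parameter range [lower, upper] into equal segments."""
    width = (upper - lower) / n_segments
    return [(lower + i * width, lower + (i + 1) * width)
            for i in range(n_segments)]

def best_segment(segments, objective, samples=5):
    """Score each segment at evenly spaced points and return the
    segment whose best sampled objective value is lowest; PSO then
    searches within this segment as the new boundary."""
    def score(seg):
        lo, hi = seg
        pts = [lo + (hi - lo) * k / (samples - 1) for k in range(samples)]
        return min(objective(p) for p in pts)
    return min(segments, key=score)
```

For example, splitting [0, 10] into two segments and minimizing (x − 7)² selects the segment (5, 10), which then becomes the narrowed boundary for the PSO search.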
The estimation of the kinetic parameters serves to minimize the distance of the model responses of [9] from the experimental data of [20]. From Table 6, it is evident that the simulated concentrations are remarkably close to the real experimental data for the model under study, measured in mM (1 mM = 0.001 mol L−1). As presented in Table 6, using the experimental data from [20], the estimation result of the ESe-PSO algorithm improved compared to the results from the Se-PSO and PSO algorithms. The estimated result is highlighted. In the ESe-PSO estimation results, 15 metabolites (GLc, G6P, F6P, DHAP/GAP, FDP, PEP, PYR, 6PG, Ru5P, R5P, E4P, AcCoA, OAA, 2KG, and Ace) were well-optimized, with the error minimized to about 16.81%.
Only ICIT was not properly minimized. In the Se-PSO estimation results, seven (7) metabolites (DHAP/GAP, FDP, PEP, 6PG, Ru5P, 2KG, and Ace) were well-optimized and minimized, with an error of about 26.29%. In the PSO estimation results, seven (7) metabolites (DHAP/GAP, FDP, PEP, 6PG, Ru5P, 2KG, and Ace) were well-optimized and minimized, with an error of approximately 37.09%. In the DE estimation results, the model outputs DHAP/GAP, FDP, 6PG, Ru5P, and 2KG were well-optimized, with the distance minimized to about 33.9%. In the GA estimation results, the metabolites GLc, DHAP/GAP, FDP, PEP, 6PG, Ru5P, and 2KG were well-optimized, with the distance minimized to 35.34%. Therefore, it can be confirmed that these results improve on the model under study, which has a distance of 57.16% [9]. This indicates that, for kinetic parameter estimation, all these algorithms can provide a decent result, but the adopted and enhanced algorithm produces a better estimation than the others in terms of accuracy. Some of the metabolites were minimized only to a slight degree due to the involvement of other pathways, the model complexity, and the lumping together of various metabolites [9].
A comparative experiment is shown in Table 7. The ESe-PSO achieved the best objective function means (7.04 × 10−5 and 7.41 × 10−5) compared with Se-PSO (0.000603 and 0.00379), PSO (0.003893 and 0.00549), GA (0.11476 and 0.269007), and DE (0.049185 and 0.280478), respectively, for the two experimental datasets. Overall, the ESe-PSO can be adopted to effectively estimate small-scale kinetic parameters and obtain accurate and acceptable results. Notably, the ESe-PSO was superior to the original Se-PSO, PSO, and the other state-of-the-art approaches in terms of distance minimization, and the smallest objective function values produced appropriate fits to both sets of experimental data. This is because the ESe-PSO algorithm adds a damping process to increase the exploration and exploitation of the search space and support the particles in finding a global optimum solution. This modification facilitates the accurate determination of the optimal solution. The inertia weight ω was adjusted between its maximum and minimum during the damping process.
As stated in [32,51], the mean, STD (standard deviation), distance minimization, and F-test can be used to assess the accuracy of the results. The STD is a well-known measure of how broadly the values are distributed. The distance minimization shows how far the algorithm moves the estimation towards the real experimental data. An F-test is any statistical test in which the test statistic has an F-distribution under the null hypothesis. It is most often used when comparing statistical models that have been fitted to a dataset, in order to identify the model that best fits the population from which the data were sampled. Accordingly, using the experimental dataset, the algorithms were implemented and adopted to minimize the distance between the simulation results and the results in [20].
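As an illustration of the distance-minimization criterion, a generic sum-of-squares objective and a percentage-distance measure can be sketched as follows (the paper's exact weighting may differ; the names are ours):

```python
def objective(simulated, experimental):
    """Sum of squared residuals between simulated and experimental
    steady-state concentrations (mM); the estimator minimizes this."""
    assert len(simulated) == len(experimental)
    return sum((s - e) ** 2 for s, e in zip(simulated, experimental))

def percent_distance(simulated, experimental):
    """Mean relative error (%) across metabolites, used to report
    how far the simulation remains from the data."""
    errs = [abs(s - e) / abs(e)
            for s, e in zip(simulated, experimental) if e != 0]
    return 100.0 * sum(errs) / len(errs)
```

A perfect fit gives an objective of 0 and a percentage distance of 0%; the algorithms above are compared by how far they drive these quantities down.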
The hypothesis of this study is based on the results from the six estimations, using F = STD_E^2 / STD_D^2 with (n_E − 1, n_D − 1) degrees of freedom, where STD_E^2 is the variance of the optimized result E, STD_D^2 is the variance of the model under study D, and n_E, n_D are the numbers of variables for the optimized result and the model result, respectively.
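As a sketch, the F statistic above can be computed directly (standard two-sample variance-ratio test; the helper names are ours):

```python
def f_statistic(std_e, std_d):
    """F = STD_E^2 / STD_D^2, with (n_E - 1, n_D - 1) degrees of
    freedom; H0 (equal variances) is rejected when F exceeds the
    tabulated critical value."""
    return (std_e ** 2) / (std_d ** 2)

def sample_std(xs):
    """Sample standard deviation (n - 1 denominator)."""
    n = len(xs)
    mean = sum(xs) / n
    return (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
```

The critical value for the rejection decision would normally come from an F-distribution table or a statistics library, which is omitted here.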
To ensure that the final simulated results were statistically consistent with the experimental results in Table 5, a statistical test, the F-test [51], was applied to the ESe-PSO algorithm results together with the model under study and the experimental data. The results, using the method from Hoque et al., 2005, show that all the metabolites achieved an STD close to the mean and to 0. This demonstrates that the results produced by ESe-PSO are consistent with Equation (9). The hypothesis for the results in Table 5 was calculated and confirmed using Equations (8) and (9).
The hypothesis in Table 5 concerns minimizing the distance of the model under study. Therefore, it was concluded that H0 is rejected, while H1 is accepted as a reasonable result. The model simulation after estimation is presented in Table 5.
After the kinetic parameters were estimated and the model outputs under study were minimized, an observable increase or decrease in the simulated model pathway outputs was noted as compared to the model under investigation, as shown in Figure 4. In the glycolysis pathway, the simulated model responses of GLcex, FDP, GAP/DHAP, PEP, and PYR decreased, while G6P and F6P increased due to the PTS system and the small consumption of GLcex. In the pentose phosphate pathway, the simulated model outputs of 6PG, Ru5P, R5P, Xu5P, and E4P increased, while S7P decreased due to the increase in G6P and the involvement of F6P and GAP/DHAP in the calculation.
In the TCA cycle, the simulated model outputs of OAA, FUM, and GOX increased due to the involvement of the gluconeogenesis/anaplerotic pathway and the effect of the mez, pck, and ppc enzymes. This was also due to the increases in PEP and PYR, whereas the metabolites ICIT, 2KG, SUC, and MAL decreased. Moreover, acetate formation had a certain impact on the model response, which resulted in increases in ACP and decreases in ACE and AcCoA. This was due to AcCoA's involvement in the TCA cycle and the glyoxylate pathways. In the glyoxylate pathway, the metabolites ICIT, SUC, and MAL decreased, while GOX increased. This was attributed to the involvement of the TCA cycle, the anaplerotic pathway, and AcCoA. In the anaplerotic pathway, the metabolites PEP, PYR, and MAL decreased, while the metabolite OAA increased due to the involvement of the TCA cycle and the anaplerotic pathway.
On the contrary, the other metabolites moved slightly towards the experimental data with small errors.These changes occurred due to other metabolites' participation, model complexity, glucose depletion, and the lumping together of various metabolites to simplify the model.

ESe-PSO Algorithm with Different Optimization Problems
The performance of the Enhanced Segment Particle Swarm Optimization (ESe-PSO) algorithm was compared to that of the original Segment Particle Swarm Optimization (Se-PSO) algorithm. The test functions were chosen from six well-known benchmarks, namely the Sphere, Rosenbrock, Rastrigin, Griewank, Shubert, and Booth functions, with asymmetric initial range settings (higher and lower boundary values). The experimental results indicated that the ESe-PSO method outperformed the original Se-PSO algorithm in terms of convergence speed under all test conditions. Moreover, the experimental results established ESe-PSO as a potentially useful optimization algorithm in a variety of other fields.
Nonlinear functions are used for the comparison here. The first is the Sphere function, f(x) = Σ_{i=1..n} x_i^2, where X = (x_1, x_2, ..., x_n) is an n-dimensional real-valued vector. The second is the Rosenbrock function, f_1(x) = Σ_{i=1..n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2]. The third is the generalized Rastrigin function, f_2(x) = Σ_{i=1..n} [x_i^2 − 10 cos(2πx_i) + 10]. The fourth is the generalized Griewank function, f_3(x) = (1/4000) Σ_{i=1..n} x_i^2 − Π_{i=1..n} cos(x_i/√i) + 1. The fifth is the Shubert function, f_4(x) = [Σ_{j=1..5} j cos((j + 1)x_1 + j)] × [Σ_{j=1..5} j cos((j + 1)x_2 + j)]. The sixth is the Booth function, f_5(x) = (x_1 + 2x_2 − 7)^2 + (2x_1 + x_2 − 5)^2. As shown in Table 8, the maximum number of iterations for each function in both algorithms was set to 50, 100, and 150, and the swarm (bird) size was set to 20, 40, 60, and 80. Each algorithm was run 10 times in order to determine the mean global optimum position. As demonstrated in Tables 9 and 10, the ESe-PSO convergence towards the optimal values was faster than that of Se-PSO. It is worth noting, however, that the ESe-PSO method converged swiftly in all functions but slowed when scanning a large space for the global optimum location as it approached the optimum. Tables 9 and 10 show that ESe-PSO took 0.004 s to determine the global optimum position, while Se-PSO took 0.013 s. Furthermore, as indicated in the calls column (4920 and 650), ESe-PSO searched the vast space nearly twice as fast as Se-PSO.
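For reference, the six benchmarks can be implemented compactly as follows (standard textbook definitions, consistent with the formulas above; the Shubert and Booth functions are two-dimensional):

```python
import math

def sphere(x):
    return sum(xi ** 2 for xi in x)

def rosenbrock(x):
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    return sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def griewank(x):
    s = sum(xi ** 2 for xi in x) / 4000
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1

def shubert(x):  # two-dimensional
    def g(xi):
        return sum(j * math.cos((j + 1) * xi + j) for j in range(1, 6))
    return g(x[0]) * g(x[1])

def booth(x):  # two-dimensional
    return (x[0] + 2 * x[1] - 7) ** 2 + (2 * x[0] + x[1] - 5) ** 2
```

The known global minima (0 for Sphere, Rosenbrock, Rastrigin, Griewank, and Booth; approximately −186.73 for Shubert) provide the reference values against which the mean global optimum positions in the tables are judged.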
When compared to Se-PSO, the global optimum position of ESe-PSO in the Sphere function produced a far superior outcome (Table 11) in a short period of time. Furthermore, as shown in Tables 12 and 13, ESe-PSO's convergence towards the optimal values was faster than Se-PSO's; its convergence was swift across the board but slowed when scanning a broad space for the global optimum location as it approached the optimum. The ESe-PSO took 0.240 s to obtain the global optimum position, whereas the Se-PSO took 0.363 s. The self-time column in Tables 12 and 13 shows that ESe-PSO took 0.0009 s to determine the global optimum position, while Se-PSO took 0.001 s, and the calls column (4920) indicates that ESe-PSO searched the vast space nearly twice as fast as Se-PSO.
In Table 14, the global optimum position of ESe-PSO exhibited a considerably improved outcome in a short time compared to Se-PSO for the Rastrigin function. As shown in Tables 15 and 16, ESe-PSO's convergence towards the optimal values was again faster than Se-PSO's: ESe-PSO took 0.135 s to obtain the global optimum position versus 0.158 s for Se-PSO, the self-time column shows 0.001 s versus 0.004 s, and the calls column (4920) again indicates that ESe-PSO searched the space nearly twice as fast.
As shown in Tables 18 and 19, ESe-PSO's convergence towards the optimal values was faster than Se-PSO's, although ESe-PSO took 0.363 s to obtain the global optimum position while Se-PSO took only 0.048 s. The self-time column in Tables 18 and 19 shows 0.001 s for ESe-PSO versus 0.013 s for Se-PSO, and the calls column (4920) indicates that ESe-PSO searched the space nearly twice as fast.
As shown in Tables 21 and 22, ESe-PSO's convergence towards the optimal values was likewise faster, although ESe-PSO took 0.25 s to obtain the global optimum position while Se-PSO took only 0.032 s; the self-time column shows 0.002 s versus 0.003 s, and the calls column (4920) again indicates the nearly twofold search speed.
As shown in Tables 24 and 25, the same behavior was observed: ESe-PSO took 0.363 s to obtain the global optimum position while Se-PSO took only 0.048 s, the self-time column shows 0.001 s versus 0.004 s, and the calls column (4920) indicates that ESe-PSO searched the space nearly twice as fast. In Table 26, the global optimum position of ESe-PSO exhibited a considerably improved outcome in a short time as compared to Se-PSO in the Booth function.

Conclusions
For the purposes of this study, the ESe-PSO algorithm was developed and assessed against a number of state-of-the-art algorithms. Particle segmentation was used to direct the motion of the particles toward the global optimum location. The inertia weight and damping procedure of the algorithm were also changed to boost exploration and exploitation in order to discover the global optimum location more quickly. We demonstrated the broad applicability of the adopted technique by successfully applying it to small-scale kinetic parameters. The E. coli model's small-scale kinetic parameters were estimated using the ESe-PSO, Se-PSO, PSO, DE, and GA algorithms. Small-scale models can benefit greatly from the ESe-PSO method because of its high estimation efficiency. The seven kinetic parameters (v_pyk_max, n_pk, icdh, k_f_icdh, k_d_icdhnap, k_m_icdhnadp, and v_icl_max) were effectively estimated. The F-test, the mean, and the STD confirmed that the results moved closely towards the real experimental data.

Note:
The shaded cells represent the best (lowest) objective function.

Figure 3
Figure 3 describes the five scenarios of the inertia weight (ω) used to determine the best and worst solutions before starting the estimation.

ω = [ω_min + (ω_max − ω_min) × (iter_max − iter)/(iter_max − iter_min)] × ωdamp, where ω_max = 1 is the maximum inertia weight, ω_min = 0.01 is the minimum inertia weight, iter_max, iter_min are the maximum and minimum iterations, and ωdamp = 0.99 is the damping process value. The shaded cells represent the best (lowest) objective function.

Figure 4 .
Figure 4. The model simulation using the ESe-PSO algorithm.


Table 1 .
The mass balance equation.

Table 2 .
Cont. Keq + [NADP]) + 1); 11. If the fitness value > the best fitness p_i(t), then set the current values as the new G_i(t); 12. Otherwise, modify steps 1 and 2 and repeat steps 3–11; 13. Print the best fitness G_i(t) of each particle; 14. End. 15. End.

Table 3 .
The inertia weight test.

Table 4 .
The kinetic parameter segments.

Table 5 .
The kinetic parameters boundaries.

Table 6 .
The simulation results.
Note:The shaded cells represent the best simulation result of each algorithm.

Table 7 .
The comparative objective function results over 20 runs [19]. Note: The shaded cells represent the MEAN, STD, and the best (lowest) objective functions.

Table 10 .
Se-PSO consumption for the Sphere function. The ESe-PSO took 0.194 s to attain the global optimum position, whereas the Se-PSO took 0.213 s, as shown in the self-time column in Tables 9 and 10.

Table 11 .
ESe-PSO and Se-PSO global best positions for Sphere function.

Table 14 .
ESe-PSO and Se-PSO global best position for Rastrigin function.

Table 16 .
Se-PSO consumption for the Rosenbrock function. In Table 17, the global optimum position of ESe-PSO exhibited a considerably improved outcome in a short time as compared to Se-PSO for the Rosenbrock function.

Table 17 .
ESe-PSO and Se-PSO global best position for Rosenbrock function.

Table 19 .
Se-PSO consumption for the Griewank function. In Table 20, the global optimum position of ESe-PSO exhibited a considerably improved outcome in a short time as compared to Se-PSO for the Griewank function.

Table 20 .
ESe-PSO and Se-PSO global best position for Griewank function.

Table 22 .
Se-PSO consumption for the Shubert function. In Table 23, the global optimum position of ESe-PSO exhibited a considerably improved outcome in a short time as compared to Se-PSO for the Shubert function.

Table 23 .
ESe-PSO and Se-PSO global best position for Shubert function.

Table 25 .
Se-PSO consumption for Booth function.

Table 26 .
ESe-PSO and Se-PSO global best position for Booth function.