Kinetic Parameter Determination for Depolymerization of Biomass by Inverse Modeling and Metaheuristics

Abstract: A computational methodology based on inverse modeling and metaheuristics is presented for determining the best parameters of kinetic models aimed at predicting the behavior of biomass depolymerization processes during size scaling up. The Univariate Marginal Distribution algorithm, particle swarm optimization, and the Interior-Point algorithm were applied to obtain the values of the kinetic parameters (K_M and V_max) of four mathematical models based on the Michaelis–Menten equation: (i) traditional Michaelis–Menten, (ii) non-competitive inhibition, (iii) competitive inhibition, and (iv) substrate inhibition. The kinetic data were obtained from our own micro-scale experimentation. The parameters obtained from an optimized micro-scale experiment were compared with a bench scale experiment (0.5 L). Regarding the metaheuristic optimizers, it is concluded that the Interior-Point algorithm is effective in solving inverse modeling problems and has the best prediction power. According to the results, the Traditional model adequately describes the micro-scale experiments. It was found that the Traditional model with optimized parameters was able to predict the behavior of the depolymerization process during size scaling up. The methodology followed in this study can be adopted as a starting point for the solution of future inverse modeling problems.


Introduction
Vegetal biomass (VB) is an alternative for biofuel production. VB is mainly composed of cellulose, hemicellulose, and lignin and can be transformed into different products through four fundamental stages: (i) conditioning, (ii) depolymerization or enzymatic hydrolysis, (iii) fermentation, and (iv) the downstream process [1,2]. In this context, the depolymerization stage is the most expensive, and predicting the reaction behavior is essential to achieve a higher reaction yield. One of the most widely used models to describe the behavior of this phenomenon is the Michaelis–Menten (M–M) equation. The M–M equation consists of two parameters, the Michaelis constant (K_M) and the maximum reaction velocity (V_max), which relate the reaction rate (r_s) to the concentration of a substrate (S), as described in Equation (1) [3]:

r_s = V_max S / (K_M + S)    (1)

The central composite design (CCD) allowed us to obtain a statistical model that yielded the optimal operating parameters of the depolymerization process. A micro-scale experiment was developed using the optimized operative parameters, and its trajectory was followed. Finally, a bench scale experiment (0.5 L) was performed, reproducing the operative parameters obtained in the optimized micro-scale experiment. The main difference between the micro-scale and bench scale experiments was the stirring: the micro-scale experiments used high orbital shaking (700 rpm), while the bench scale experiment used propeller shaking (150 rpm). At both scales, the orbital and propeller shaking were high enough to maintain spatial homogeneity, as previously suggested for the study of reactors as ideal systems [14]. However, during scaling, homogeneity may be reduced and interfere with mass transfer. The details are described in [12].
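As a quick numerical illustration, the M–M rate law of Equation (1) can be evaluated directly. The parameter values below are the ones later fitted for the optimized micro-scale experiment (V_max = 1.06 mg/mL·s, K_M = 8.24 mg/mL) and are used here only as an example:

```python
def michaelis_menten_rate(S, Vmax, Km):
    """Reaction rate r_s = Vmax * S / (Km + S), Equation (1)."""
    return Vmax * S / (Km + S)

# Defining property of Km: at S = Km the rate is exactly half of Vmax.
half_rate = michaelis_menten_rate(S=8.24, Vmax=1.06, Km=8.24)  # 0.53 mg/mL·s
```

At low S the rate grows almost linearly with S, while for S much larger than K_M it saturates at V_max, which is the qualitative behavior exploited by the fitting procedure below.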

Inverse Modeling
The ability of four mathematical models based on the Michaelis–Menten equation (Equations (2)–(5)) to fit the experimental observations of biomass depolymerization was evaluated: (i) traditional Michaelis–Menten (Equation (2)), (ii) non-competitive inhibition (Equation (3)), (iii) competitive inhibition (Equation (4)), and (iv) substrate inhibition (Equation (5)). For these models, the parameters V_max, K_M, I, and K stand for the maximal velocity, the substrate concentration (S) at half-maximal velocity, the inhibitor concentration, and a characteristic constant, respectively. The K parameter is an offset that enables the kinetic models to adapt to the initial conditions. The depolymerization data were calculated indirectly from the concentration of reducing sugars (RS) at each time step using Equation (6):

C_ws^0 = X_ws C_ws;   C_ws = C_ws^0 − (MW_cellulose / MW_RS) C_RS    (6)

where C_ws^0 is the initial concentration of WS, X_ws is the fraction of cellulose + hemicellulose in the dried substrate (0.825 mg/mg), C_ws is the dried WS concentration in the enzymatic hydrolysis media (10 mg/mL), MW_cellulose and MW_RS are the molecular weights of cellulose and of the reducing sugars, respectively, and C_RS is the concentration of reducing sugars reported as glucose equivalents [12].
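The conversion of Equation (6) can be sketched in code. The molecular weights used below (anhydroglucose, 162.14 g/mol; glucose, 180.16 g/mol) are assumed textbook values for illustration, not values taken from the paper:

```python
def substrate_concentration(C_RS, X_ws=0.825, C_dry=10.0,
                            MW_cellulose=162.14, MW_RS=180.16):
    """Substrate concentration from measured reducing sugars, Equation (6).

    C0_ws = X_ws * C_dry is the initial reactive substrate concentration;
    each mg/mL of reducing sugar released consumes MW_cellulose / MW_RS
    mg/mL of substrate.
    """
    C0_ws = X_ws * C_dry
    return C0_ws - (MW_cellulose / MW_RS) * C_RS

S0 = substrate_concentration(C_RS=0.0)  # 8.25 mg/mL before any hydrolysis
```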
In this work, the aim of inverse modeling is to find the Michaelis–Menten model that best fits the substrate concentration S = (S_1, . . . , S_n)^T to the experimental data C_ws = (C_ws^1, . . . , C_ws^n)^T obtained at n instants during the depolymerization process. To measure the quality of the fit achieved by each model, the weighted least squares cost function is used:

χ² = Σ_{i=1}^{n} ((C_ws^i − S_i) / σ_i)²

where σ_i is the standard deviation of the i-th observation [15].
The four models require a set of parameters (θ ∈ R⁴) to be specified, such that S(θ) can be obtained by solving the associated differential equation. Thus, the measure χ² can be defined to quantify how well an enzymatic model reproduces C_ws for a given θ. Therefore, the aim of the inverse modeling can be achieved through a parameter identification process formulated as the following optimization problem:

θ* = argmin_{θ ∈ R⁴₊ \ {0}} χ²(θ)    (7)

where θ* = (V*_max, K*_M, I*, K*)^T is the optimal kinetic parameter vector obtained from the four-dimensional space of positive real values excluding the origin. Since it is unknown whether two different sets of parameters θ₁ and θ₂ could lead to the same measure χ², we cannot assume a functional relationship between θ and χ². Furthermore, the relevant characteristics of this measure, such as the convexity or roughness of its corresponding response surface, are also unknown. Consequently, sophisticated optimizers need to be applied to solve Equation (7).
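The quality measure can be sketched as an objective function, assuming the Traditional model with the sign convention dS/dt = −r_s and SciPy's `solve_ivp` playing the role of the paper's ode45 step; this is an illustrative sketch, not the paper's exact implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp

def mm_rhs(t, S, theta):
    """Traditional model right-hand side: dS/dt = -Vmax*S/(Km+S)."""
    Vmax, Km = theta[0], theta[1]
    return [-Vmax * S[0] / (Km + S[0])]

def chi_square(theta, t_data, C_data, sigma, rhs=mm_rhs):
    """Weighted least-squares measure chi^2 for a candidate parameter set.

    The model is integrated from the first observation and compared
    against the experimental concentrations C_data.
    """
    sol = solve_ivp(rhs, (t_data[0], t_data[-1]), [C_data[0]],
                    t_eval=t_data, args=(theta,))
    return float(np.sum(((C_data - sol.y[0]) / sigma) ** 2))
```

An optimizer then only needs `chi_square` as a black box, which is exactly the setting in which metaheuristics are applicable.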

Metaheuristics
Metaheuristics are approximate optimization techniques that provide acceptable solutions for solving complex problems [16]. These techniques are frequently used to deal with optimization problems where the solution cannot be found with exact methods or where the computational time required to draw the solution is so high that the problem becomes intractable.
For the inverse modeling problem stated in Equation (7), preliminary results were reported with the Nelder-Mead algorithm in [17]. This study showed that Nelder-Mead is effective at minimizing χ 2 ; however, the best kinetic parameters found (θ * ) with this method lacked physical meaning. The reason for this result is that Nelder-Mead is an unconstrained method; therefore, the search for the optimal set of parameters is prone to reaching infeasible regions.
The problem presented by the Nelder-Mead algorithm can be solved via constrained optimization methods that keep the candidate solutions inside feasible regions. Three generic constrained optimization methods were employed: (i) the Univariate Marginal Distribution algorithm (UMDA G c, hereafter referred to as UMDA) [18]; (ii) particle swarm optimization (PSO) [19]; (iii) the Interior-Point algorithm [20]. These three methods differ in how they explore search spaces and were selected to understand different aspects of the quality measure χ². The following paragraphs describe the implementation of these methods.
UMDA and PSO are metaheuristics from different families. UMDA is an example of the Estimation of Distribution algorithms (EDAs). EDAs explore the solution space by iteratively building and sampling explicit probabilistic models of candidate solutions that guide their search. The generic procedure, presented in Algorithm 1, is as follows [21]: the method starts by measuring the quality of a set of random candidate solutions sampled from a uniform distribution. Then, a fraction of the best solutions is selected to construct an estimate of their probability distribution (the mean and standard deviation of a normal distribution in the case of the UMDA). By sampling this distribution, new solutions are generated to update the model; thus, the estimated probability distribution is improved. A bias toward the best solutions is the fundamental idea behind EDAs: the aim is that, after some iterations, the random candidate solutions will converge to an optimal solution (assuming that this solution exists and is unique, i.e., assuming that the solution space is convex). The proof of convergence of the algorithm implemented in our work (UMDA), as well as that of other EDAs, is provided in [22]. This process is repeated until a termination criterion is met (usually, when a maximum number of iterations is reached).

An EDA takes its name from the specific probabilistic model it builds. The UMDA builds a Gaussian model P from a set of solutions Θ = {θ_j}_{j=1}^M, with the parameters given by Equation (8):

μ_κ = (1/M) Σ_{j=1}^{M} θ_{j,κ};   σ_κ² = (1/M) Σ_{j=1}^{M} (θ_{j,κ} − μ_κ)²    (8)

where M is the number of solutions considered to compute these parameters, and θ_{j,κ} represents the κ-th component of a solution θ_j, which is a D-dimensional vector in R^D. The UMDA was implemented in this work to observe whether the measure χ² can be regarded as convex. A drawback of the UMDA is that the algorithm is sensitive to local minima.
Thus, if different runs of the UMDA lead to different answers, the response surface of the quality measure may contain more than one minimum, i.e., the quality measure may not be convex.
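A minimal continuous UMDA sketch under the assumptions above (bound constraints, minimization, one independent Gaussian per coordinate re-estimated from the elite at each iteration); the `objective` callable is a hypothetical stand-in for χ²:

```python
import numpy as np

def umda(objective, bounds, pop_size=100, n_select=30, iters=200, rng=None):
    """Continuous UMDA: iterate selection, Gaussian estimation, sampling."""
    rng = rng or np.random.default_rng(0)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    best, best_f = None, np.inf
    for _ in range(iters):
        f = np.array([objective(x) for x in pop])
        order = np.argsort(f)
        if f[order[0]] < best_f:                      # keep overall best
            best, best_f = pop[order[0]].copy(), f[order[0]]
        elite = pop[order[:n_select]]                 # truncation selection
        mu = elite.mean(axis=0)                       # Equation (8), mean
        sd = elite.std(axis=0) + 1e-12                # Equation (8), std dev
        pop = np.clip(rng.normal(mu, sd, size=pop.shape), lo, hi)
    return best, best_f
```

On a convex surface the elite mean drifts toward the optimum and the variance collapses around it; on a multimodal surface the same collapse causes the premature convergence reported for the UMDA.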

Algorithm 1. Pseudocode of the UMDA.
1:  Generate an initial population of N candidate solutions θ(t), uniformly at random, at iteration t = 0   // N = 100 in this work
2:  WHILE (stopping criteria are not met)
3:      Compute the quality measure of each candidate solution, g(θ(t))   // χ² in this work
4:      Select a subset of the best solutions, θ̂(t)
5:      Estimate the probability distribution P(t) of the selected solutions θ̂(t)
6:      Sample P(t) to generate new solutions θ̃(t)
7:      Substitute/incorporate θ̃(t) into θ(t)
8:      Update t = t + 1
9:  END WHILE
10: Output the best solution found, θ*, as a result

PSO is a member of the bio-inspired metaheuristics family that emulates the collective behavior of swarms (such as a flock of birds or a school of fish) to explore continuous search spaces in search of the optimal value. In this simulation, each particle of the swarm corresponds to a candidate solution (θ). An iterative process similar to that of the UMDA is then followed, except for the model-updating mechanism. At each iteration (t) of the PSO, the particle with the best quality measure (θ*(t)) is identified. Then, the remaining particles update their positions according to the directional vector toward θ*(t) plus a random walk, described by Equation (9) [23]:

v_j ← ω v_j + U(0, φ₁) ⊗ (θ_j^b − θ_j) + U(0, φ₂) ⊗ (θ*(t) − θ_j);   θ_j ← θ_j + v_j    (9)

where U(0, φ) represents a vector of random numbers uniformly distributed in [0, φ], generated at each iteration for each particle, and the operator ⊗ represents a component-wise multiplication. The parameters φ₁ and φ₂ are known as acceleration coefficients; they determine the magnitude of the random forces in the direction of the best position in the trajectory of a given particle (θ_j^b) and the neighborhood best θ*(t). The values φ₁ = φ₂ = 2.0 are generally adopted. ω is termed the inertia weight and can be interpreted as the fluidity of the medium in which a particle moves; this value is commonly initialized to a high value (e.g., 0.9) and gradually reduced to a low value (e.g., 0.4). The PSO is considered in this study because it has been proven to solve highly complex optimization problems. Importantly, it has the advantage of avoiding local minima. Thus, if the quality measure of the inverse modeling problem is rough, the PSO can offer better answers than the UMDA. The main drawback of the PSO is its stochastic nature, which means that no convergence guarantees can be offered.
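A global-best PSO sketch consistent with the update rule of Equation (9) and the inertia-weight schedule described above (assumed to decay linearly from 0.9 to 0.4); clipping positions to the bounds is one simple way to keep particles feasible, and `objective` is again a hypothetical stand-in for χ²:

```python
import numpy as np

def pso(objective, bounds, n_particles=40, iters=300,
        phi1=2.0, phi2=2.0, w_start=0.9, w_end=0.4, rng=None):
    """Global-best PSO with linearly decreasing inertia weight."""
    rng = rng or np.random.default_rng(1)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()                                  # personal bests
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()              # neighborhood best
    for t in range(iters):
        w = w_start + (w_end - w_start) * t / max(iters - 1, 1)
        r1 = rng.uniform(0, phi1, size=x.shape)       # U(0, phi1)
        r2 = rng.uniform(0, phi2, size=x.shape)       # U(0, phi2)
        v = w * v + r1 * (pbest - x) + r2 * (g - x)   # Equation (9)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())
```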
Interior-Point algorithms (IPAs) are not metaheuristics in a strict sense but exact methods that improve the performance of the optimization process in terms of both computational time and the ability to handle degenerate problems (by employing one of several heuristics) [24]. IPAs emerged as an extension of the simplex method to solve complex linear programming problems (involving thousands of constraints and variables). In contrast to the simplex algorithm, where the candidate solutions (θ) move along the boundary of the feasible region, the points generated by IPAs lie in the interior of the inequality constraints, hence the name "Interior-Point". IPAs provide a natural way to incorporate inequality constraints into the objective function and transform a constrained optimization problem into an unconstrained one. The idea is to add barrier functions to the original objective function, as in the Lagrange multiplier method [25]. A common formulation of the barrier method is the following:

min_{θ, s} f(θ) − µ Σ_i ln(s_i),   subject to g(θ) + s = 0    (10)

where µ > 0 is the barrier parameter, and the slack variables s are assumed to be positive. From among the several existing Interior-Point algorithms, we implemented the version in [20] (which starts from Equation (10)) because it includes heuristics in the trust region to (i) avoid approaching the boundary of the feasible regions too soon and (ii) allow the algorithm to treat convex and non-convex problems uniformly by facilitating the use of second derivative information when the problem is non-convex. In this work, the Interior-Point algorithm was implemented to evaluate whether the measure χ² could be treated as a function of the parameter set (θ). Thus, for this method, it is assumed that f(θ) = χ².
The main drawback of IPAs is that they are sensitive to the initial estimate of the solution required to start the optimization process, meaning that a different initial point can lead to a different solution [26]. The PSO and the Interior-Point algorithm were adapted to solve the inverse modeling problem by using the particleswarm and fmincon MATLAB® procedures (with default hyper-parameter values), respectively. Since the UMDA is not available in the MATLAB optimization toolbox, it was implemented by following Algorithm 1. The differential equations of the models described in Equations (2)–(5) were solved using the ode45 procedure as an implicit step in the quality measure computations. The initial values of V_max and K_M were calculated from the Eadie–Hofstee diagram [27].
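In an open-source setting, the same fit can be sketched with SciPy, whose `trust-constr` solver is an interior-point-style trust-region method broadly comparable to fmincon's; this is an analogue under that assumption, not the paper's exact MATLAB call, and `theta0` would come from the Eadie–Hofstee estimate:

```python
import numpy as np
from scipy.optimize import minimize

def fit_parameters(objective, theta0, lower, upper):
    """Bound-constrained minimization of the quality measure chi^2."""
    res = minimize(objective, np.asarray(theta0, dtype=float),
                   method="trust-constr", bounds=list(zip(lower, upper)))
    return res.x, res.fun

# Toy objective standing in for chi^2(theta): a quadratic with known minimum.
theta_opt, f_opt = fit_parameters(
    lambda th: float(((th - np.array([1.0, 2.0])) ** 2).sum()),
    theta0=[0.5, 0.5], lower=[0.0, 0.0], upper=[8.25, 8.25])
```

Because the solver is local, the returned parameters inherit the dependence on `theta0` discussed above, which is why a reasonable initial estimate matters.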
Although the quality measure χ² effectively quantifies how accurately the kinetic models fit the experimental data, this measure is not very informative by itself. To ease the interpretation of the results provided by the optimization methods, the proportion of the variation between the fitted models and the experimental data was obtained by calculating the determination coefficient R². The closer this coefficient is to one, the better the model fit. A value of zero means that the kinetic model behaves as a constant function, and a negative value indicates that more variables need to be integrated into the model to explain the experimental data. As suggested in [28], the performance of predictive models should be measured following a multidimensional approach. In this work, R² was used to evaluate the correspondence between predictions and experimental data, the mean square error (MSE) was used to evaluate the prediction accuracy, and Akaike's information criterion (AIC) was used to quantify the model complexity. These three metrics are described in Equations (11)–(13):

R² = 1 − Σ_i (C_ws^i − S_i)² / Σ_i (C_ws^i − C̄_ws)²    (11)

where i refers to the i-th datum of each experiment and C̄_ws is the mean of the experimental data. Further, the mean square errors were calculated by:

MSE = (1/n) Σ_{i=1}^{n} (C_ws^i − S_i)²    (12)

where n is the number of data in each experiment. The AIC combines the quality of fit and the number of parameters (prm) of a model, allowing the comparison of models of varying complexity [29]. The AIC is defined by Equation (13):

AIC = n ln(MSE) + 2 prm    (13)
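The three metrics can be computed directly from the fitted trajectory and the observations; the least-squares AIC form n·ln(MSE) + 2·prm is the assumption made here:

```python
import numpy as np

def r_squared(y, yhat):
    """Coefficient of determination, Equation (11)."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def mse(y, yhat):
    """Mean square error, Equation (12)."""
    return float(np.mean((y - yhat) ** 2))

def aic(y, yhat, prm):
    """Akaike's information criterion in its least-squares form, Equation (13)."""
    n = len(y)
    return n * np.log(mse(y, yhat)) + 2 * prm
```

Note how the `2 * prm` term penalizes the four-parameter inhibition models relative to the Traditional model when their fits are comparable, which is the trade-off used later for model selection.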

Experimental Methodology
The procedure followed for the evaluation of the kinetic models is summarized in Algorithm 2. Each experimental dataset, with its conditions reported in Table 1, was partitioned to conduct a 4-fold cross-validation scheme. For the sake of a fair comparison between models, each model employed the same training data along with the same test data. The final performance was computed as the average of the four cross-validations, and these results are reported in Table 2. The criterion used to select the best model was Akaike's information criterion, since it achieves a trade-off between model complexity and data fitting. Table 2 shows the best kinetic model found by each optimizer and its corresponding optimal parameter values θ* = (V*_max, K*_M, I*, K*). It also presents the average and standard deviation of the quality measures R², AIC, χ², and MSE. As can be observed from Table 2, the least complex and best-fitting kinetic model was the traditional Michaelis–Menten model for most of the experiments.
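The evaluation loop can be sketched as follows; `fit` and `score` are hypothetical callables (for instance, an optimizer returning a fitted model, and the MSE of Equation (12)), and the fold split is computed once so every model is compared on the same partitions:

```python
import numpy as np

def four_folds(n, seed=42):
    """Shuffle n sample indices once and split them into 4 folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, 4)

def cross_validate(fit, score, t, y):
    """Average held-out score of a model over the 4 folds."""
    folds = four_folds(len(y))
    scores = []
    for k in range(4):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(4) if j != k])
        model = fit(t[train_idx], y[train_idx])            # calibrate parameters
        scores.append(score(y[test_idx], model(t[test_idx])))  # held-out error
    return float(np.mean(scores))
```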

The Best Kinetic Model
Regarding the optimizers, the UMDA presented the worst performance in terms of both error and quality measures. In addition to its poor performance, the UMDA showed premature convergence in different runs. The quality measure of the UMDA improved only marginally as the number of iterations and the population size increased. Because of this behavior, we concluded that the UMDA is easily trapped in local minima; consequently, we hypothesize that the quality measure χ² is not convex.
Surprisingly, the Interior-Point algorithm dominated in determining the best-fitting model for most of the experiments. Furthermore, the MSEs in training and testing were similar. However, as can be observed in the results of Experiments 08 and 11 (Table 2), different parameter sets led to the same quality measures. From this result, it can be concluded that χ² is a non-injective function of θ. Despite this result, the Interior-Point algorithm was robust enough to carry out the optimization process. The good results of the PSO, in conjunction with the poor performance of the UMDA, provide evidence that the quality measure χ² is not smooth but presents a certain degree of roughness. Therefore, optimizers that can avoid local minima, such as the PSO or the Interior-Point algorithm, should be preferred.
Experiments 04 and 06 are two representative cases of the numerical results reported in Table 2. Figure 1 illustrates a scenario where the Nelder-Mead, PSO, and Interior-Point optimizers achieve similarly high performance, while the UMDA leads to an unsatisfactory fit, and Nelder-Mead leaves the feasible regions in the parameter search space.

The results reported in Table 2 should be interpreted carefully. For instance, the mean R² of 0.8 with a standard deviation of 0.33 obtained by the PSO resulted from evaluating the model with four training data folds, which reached R² values of 0.95, 0.30, 0.96, and 0.98, respectively.


Computation Time for Kinetic Parameter Optimization
A comparison of the computation time required by each optimizer to find the best kinetic parameters is depicted in Figure 3. The computation time depends not only on the optimizer but also on the kinetic model being optimized. For the first three models (Michaelis–Menten, non-competitive inhibition, and competitive inhibition), the UMDA and PSO required between 10 and 100 times the average time required by the Nelder-Mead and Interior-Point algorithms to converge to their best solutions. Notably, for the substrate inhibition model, the UMDA and PSO (and sometimes the Nelder-Mead) optimizers presented convergence problems; these three algorithms required around 100-1000 times the time needed by the Interior-Point method to converge. These problems could have two causes: (i) all methods except the Interior-Point are completely stochastic, so some of the random candidate solutions can become stuck in valleys of the response surface, hindering the convergence of the optimizer; (ii) the Interior-Point was provided with an initial candidate solution that favors its convergence.

The optimizers were run on a computer with an Intel® (Intel Corporation, Santa Clara, CA, USA) Core™ i7 CPU @ 2.60 GHz processor, 16 GB of RAM, and a 256 GB hard disk. The total computation times to calibrate the first three models with the available experimental data were 0.19, 1.05, 13.06, and 33.37 min for the Nelder-Mead, Interior-Point, PSO, and UMDA algorithms, respectively. For the substrate inhibition model, the total computation times were 0.13, 27.54, 592.41, and 832.15 min for the Interior-Point, Nelder-Mead, UMDA, and PSO, respectively. In conclusion, the Interior-Point algorithm is the fastest of the four optimizers analyzed in this work for kinetic parameter optimization. Nelder-Mead is also fast, but it is not recommended because it converges to unfeasible regions and is sensitive (like the UMDA and PSO) to the typical randomized initialization of metaheuristics for complex models such as substrate inhibition.


Comparison between Micro-Reaction and Bench Scale Reactor
As a final experiment, the values of the kinetic parameters for the optimized experiment in the micro-reaction were obtained. The values of the parameters V_max, K_M, and K were 1.06 mg/mL·s, 8.24 mg/mL, and 0.27, respectively. The data were obtained using the IPA method and the traditional kinetic model, according to the conclusions derived from Table 2. Figure 4 depicts the trajectory of the optimized micro-reactor compared to the three bench scale reactors. According to the results presented in [12], a micro-scale experiment was developed using the optimal parameters obtained by the central composite design, and these parameters were also used to test the experiment at a scale of 0.5 L.
Notably, the bench scale experiment is 500 times larger than the micro-scale experiment. Furthermore, the reported micro-scale experiments were run with orbital shaking at a speed of 700 rpm and very stable temperature control, with variations of ±1 °C. The difference with the 0.5 L reactor was the use of propeller agitation (150 rpm), keeping all the reactor contents well mixed to ensure spatial homogeneity. Further studies should be done to verify spatial homogeneity, but this goal is beyond the scope of this work.
According to the results, the Traditional model with its optimized parameters was able to predict the behavior in the micro-reaction, showing R² = 0.99 and MSE = 0.06. In addition, the same model and parameters predicted the behavior of the reaction on a bench scale for three different reactors, achieving performances of (R² = 0.96; MSE = 0.30), (R² = 0.93; MSE = 0.54), and (R² = 0.97; MSE = 0.22) for reactors 1, 2, and 3, respectively. Therefore, the behavior of depolymerization on a bench scale can be predicted with acceptable errors using micro-reaction experiments.

Discussion
The results in Table 2 present the best-fit model found by each optimizer. To clarify the differences between the best-fitted model and the rest of the models, Figure 5 provides a graphical comparison of the four M–M models calibrated using the four optimization strategies for Experiment No. 13. Figure 5a,b depict the fits obtained by the IPA and PSO, respectively, which reached the same R² of 0.93, but the AIC value for the IPA method was lower (−3.06); consequently, we observed a better fit in Figure 5a than in Figure 5b. The UMDA optimizer, shown in Figure 5c, presents an R² of 0.86 and an AIC of 1.02. The UMDA presents a poor fit compared to the previous methods, but it still adequately recapitulates the trajectory shown by the experimental data.
The parameters V_max and K_M characterize the enzymatic reactions described by M–M kinetics. V_max depends on the total concentration of the enzyme, but K_M does not [14]. K_M is equal to the concentration of the substrate at half V_max and can also be associated with the affinity of the enzyme for the substrate [3]. As reported by Cho et al. [3], a low K_M value indicates a sizeable binding affinity, as the reaction approaches V_max more rapidly. Conversely, a high K_M indicates that the enzyme does not bind as efficiently with the substrate, and V_max will only be reached if the substrate concentration is high enough to saturate the enzyme [3]. The V_max and K_M parameters were both searched in the range (0, 8.25). The parameter I (the inhibitor) was also calculated using a range between 0 and 8.25. This range was employed because it is well known that, during the production of glucose, an intermediate product called cellobiose exerts an inhibitory effect on biomass depolymerization reactions [2]. The maximum amount of glucose that can ideally be produced is 8.25 mg/mL.
As previously explained, Table 2 shows the results obtained for the parameters V_max, K_M, I, K, χ², and R², as well as the error value. To illustrate the kinetic parameters (V_max and K_M), Experiments 11 and 12 were used (Figure 6). Both experiments have the same characteristics with the exception of temperature: Experiment 11 was run at pH = 5, T = 65 °C, and an enzyme-substrate ratio of 0.9 µL of enzyme/g of substrate; Experiment 12 was run at pH = 5, T = 45 °C, and the same enzyme-substrate ratio. The V_max and K_M values for Experiment 11 were 3.83 mg/mL·s and 7.26 mg/mL, indicating that V_max was reached quickly due to the affinity of the enzyme for the substrate, probably associated with the effect of temperature. It is well known that a high temperature favors enzymatic reactions.
These results agree with the low K_M value obtained, which is associated with a high affinity between the enzyme and the substrate. On the other hand, Experiment 12 shows a low V_max (0.19 mg/mL·s) and a high K_M value (8.25 mg/mL). The high K_M value indicates that the enzyme-substrate affinity is low; however, it allows the experiment to attain a higher depolymerization value. Both experiments show a high inhibitor value, indicating that the reaction is being inhibited. This result could be associated with a high accumulation of cellobiose during the reaction, favored by temperature [2]. A graphic compilation of all the fitted models is provided in the supplementary information.

Conclusions
Regarding the metaheuristic optimizers, we conclude that the Interior-Point algorithm is effective for solving the inverse modeling problem of kinetic parameters for the depolymerization of biomass. If a priori information, such as an initial candidate solution, is available, the Interior-Point algorithm should be the first option; if not, the PSO may be considered. The default hyper-parameter values were shown to be sufficient for all the optimizers to provide acceptable solutions. However, pure metaheuristic algorithms (such as the UMDA and PSO) might further improve their convergence efficiency by tuning their hyper-parameters.
The quality measure employed to evaluate the best-fitting kinetic model is likely non-convex and rough. Although many other quality measures could be used, the weighted least squares metric χ² was enough to guide the optimizers' exploration and correctly identify the best model for the experimental data, at both the micro and macro scales.
The methodology applied in this study can be adopted as a starting point for solving forthcoming inverse modeling problems. The use of different metaheuristic and hybrid optimizers can reveal the nature of the quality measures, thereby easing the implicit optimization problems.
In this study, we found an effective computational procedure to determine a better calibration for mathematical models of the kinetic depolymerization process. However, new experiments should be considered in the future to study micro-reactions, compare them with larger scales, and test multiple operating parameters. In addition, important considerations such as heat and mass transfer should be studied, but these are beyond the scope of this investigation.
Finally, according to our observations, the models with the best fit are not necessarily consistent with a fixed set of kinetic parameters K_M and V_max. Therefore, to inform the best decision for the process, it is necessary to discern which K_M and V_max values best describe the behavior of the evaluated reaction.