Article

Accelerated Driving-Training-Based Optimization for Solving Constrained Bi-Objective Stochastic Optimization Problems

by
Shih-Cheng Horng
1,* and
Shieh-Shing Lin
2
1
Department of Computer Science & Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
2
Department of Electrical Engineering, St. John’s University, New Taipei City 251303, Taiwan
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(12), 1863; https://doi.org/10.3390/math12121863
Submission received: 10 May 2024 / Revised: 12 June 2024 / Accepted: 13 June 2024 / Published: 14 June 2024
(This article belongs to the Section E: Applied Mathematics)

Abstract

The constrained bi-objective stochastic optimization problem (CBSOP) considers the optimization problem with stochastic bi-objective functions subject to deterministic constraints. The CBSOP belongs to a class of combinatorial optimization problems that are hard in terms of time complexity. Ordinal optimization (OO) theory provides a widely recognized framework for handling hard combinatorial optimization problems. Although OO theory may solve hard combinatorial optimization problems quickly, the deterministic constraints critically influence computing performance. This work presents a metaheuristic approach that combines driving-training-based optimization (DTBO) with ordinal optimization (OO), abbreviated as DTOO, to solve the CBSOP with a large design space. The DTOO approach comprises three major components: the surrogate model, diversification, and intensification. In the surrogate model, the regularized minimal-energy tensor product with cubic Hermite splines is utilized as a fitness estimator for designs. In diversification, an accelerated driving-training-based optimization is presented to determine N remarkable designs from the design space. In intensification, a reinforced optimal computing budget allocation is used to find an extraordinary design from the N remarkable designs. The DTOO approach is applied to a medical resource allocation problem in the emergency department. Simulation results obtained by the DTOO approach are compared with those of three heuristic approaches to examine its performance. Test results show that the DTOO approach obtains an extraordinary design with higher solution quality and computational efficiency than the three heuristic approaches.

1. Introduction

The constrained bi-objective stochastic optimization problem (CBSOP) considers the optimization problem with stochastic bi-objective functions subject to deterministic constraints. The stochastic bi-objective functions of the CBSOP are assessed using simulations [1]. The primary goal of the CBSOP is to look for optimal values of the variables that optimize the stochastic bi-objective functions while satisfying the deterministic constraints. The CBSOP is part of a class of hard combinatorial optimization problems [2]. Moreover, the large design space makes it more challenging to look for optimal values through traditional approaches within a limited amount of time.
Numerous approaches have been developed to address NP-hard problems, broadly falling into deterministic and non-deterministic approaches. Examples of deterministic approaches include exact algorithms [3] and polynomial-time approximation algorithms [4]. Exact algorithms aim to find optimal solutions but can be slower and more computationally expensive. Polynomial-time approximation algorithms provide approximate solutions within a specified ratio in polynomial time but may lead to slightly less accurate designs. Examples of non-deterministic approaches include probabilistic algorithms [5] and metaheuristic algorithms [6,7,8,9,10,11]. Probabilistic algorithms search a problem space using a probabilistic model of candidate solutions but may not always find the optimal design. Metaheuristic algorithms are rule-of-thumb algorithms that usually rely on educated guesses and inference. They use general strategies to explore the design space and escape from local optima, making them applicable to a broader range of problems.
Metaheuristic algorithms are usually classified into six categories: evolutionary-based, swarm-based, mathematics-based, physics-based, game-based, and human-based approaches. Evolutionary-based algorithms mimic natural biological evolution, biology, and genetics [6]. Swarm-based algorithms are formed by the interactions between living organisms such as ants, termites, birds, and fish [7]. Mathematics-based algorithms are formed by research on certain mathematical functions or rules [8]. Physics-based algorithms simulate various laws, theories, forces, concepts, and phenomena in physics [9]. Game-based algorithms mimic the behaviors of players, referees, coaches, and the rules of different group games [10]. Human-based algorithms model human activities, social relationships, thinking, communication, and interaction in social life [11].
In essence, metaheuristic algorithms are probabilistic search techniques that share heuristic information to guide the search. Despite their numerous benefits [12], metaheuristic algorithms also have some limitations and barriers [13]. Hybrid metaheuristic optimization has recently emerged as a computational method that combines multiple metaheuristics to solve complex optimization problems [14]. The goal of hybridization in metaheuristic optimization is to enhance the performance of the algorithms by combining their strengths and overcoming their weaknesses [15]. In a hybrid metaheuristic optimization method, different metaheuristics are integrated in three ways [16]. (i) Cooperative combination: different metaheuristics work together, exchanging information and guiding each other’s searches. (ii) Parallel combination: multiple metaheuristics are used simultaneously, each working on its portion of the solution space. (iii) Sequential combination: one metaheuristic is used to initialize the search, and another is used to improve the solution. Hybrid metaheuristic optimization offers many advantages, such as enhanced diversity, flexibility, increased reliability, and robustness.
The CBSOP is difficult to solve due to three challenges: (i) the time-consuming assessment of stochastic bi-objective functions, (ii) the large design space, and (iii) the satisfaction of all deterministic constraints. Ordinal optimization (OO) theory [17] offers a statistically quantifiable avenue to address issues (i) and (iii) simultaneously using soft computation. OO theory is employed to support existing search approaches. It can further reduce time consumption by focusing on ranking and selecting a finite set of good enough designs. OO theory has three basic steps. First, a sampling set of good enough designs is chosen using a simple model. The performance of a design can be quickly assessed through a simple model. The OO method specifies that the order of designs is preserved even when they are assessed through a simple model [17]. Second, a candidate subset is chosen from the sampling set. Lastly, remarkable designs in the candidate subset are assessed using a complex model. A complex model provides a precise assessment of performance. The best design with the optimum performance among these remarkable designs is chosen. OO has since been applied to numerous such expensive problems, such as emergency department healthcare [18], sorting conveyor systems [19], and container freight stations in air cargo [20].
OO theory has emerged as a practical structure for CBSOP, but the deterministic constraints will greatly influence efficiency. To decrease the computing time of the CBSOP, a metaheuristic approach combining driving-training-based optimization (DTBO) [21] with ordinal optimization (OO), abbreviated as DTOO, is presented to look for an extraordinary design in an acceptable time. The DTOO approach has three major components: the surrogate model, diversification, and intensification. In the surrogate model, the regularized minimal-energy tensor product with cubic Hermite splines (RMTC) [22] is utilized as a fitness evaluation of design. In diversification, an accelerated driving-training-based optimization (ADTBO) is presented to look for N remarkable designs from the design space. In intensification, a reinforced optimal computing budget allocation (ROCBA) is utilized to seek an extraordinary design from the N remarkable designs. These three components can significantly diminish the computational effort to solve the CBSOP.
In the dynamic healthcare landscape, where patient care, technological advancements, and operational excellence intersect, effective resource allocation is critical. Optimization of resource allocation can ensure efficient utilization of staff, time, and budget while maintaining high-quality patient care [23,24,25]. The number of emergency cases has increased quickly each year, resulting in long-term overcrowding in emergency departments. Nevertheless, strategies to increase medical resources and satisfy patient needs are impracticable in the real world. Therefore, emergency departments must optimize the allocation of medical resources to minimize average patient length of stay (LOS) and medical wasted cost (MWC) simultaneously.
The DTOO approach is applied to the medical resource allocation problem in the emergency department. The contributions of this work consist of two parts. The first is to develop a DTOO approach for a CBSOP to find an extraordinary design in a reasonable time. The second is to apply the DTOO approach to a CBSOP involving medical resource allocation in the emergency department. The application of the DTOO approach is not limited to the CBSOP. The DTOO approach can be adopted to resolve probabilistic multi-objective optimization problems, constrained stochastic optimization problems, combinatorial probabilistic optimization problems, and expensive constrained simulation optimization problems.
The remainder of this paper is organized as follows. Section 2 contains the DTOO approach to looking for an extraordinary design for a CBSOP. Section 3 introduces the medical resource allocation problem in the emergency department, which is formulated as a CBSOP. Next, the DTOO approach is employed to solve this CBSOP. Section 4 presents the experimental results and compares them with three heuristic approaches. The final section concludes the paper and gives future directions for research.

2. Combining Driving-Training-Based Optimization in Ordinal Optimization

2.1. CBSOP

A classical CBSOP is formulated as follows.
$\min \; E[f_i(\mathbf{x})], \quad i = 1, 2.$ (1)
$\text{subject to} \quad h_j(\mathbf{x}) \le d_j, \quad j = 1, \ldots, J.$ (2)
$\mathbf{B} \le \mathbf{x} \le \mathbf{U}.$ (3)
where $\mathbf{x} = [x_1, \ldots, x_K]^T$ denotes a K-dimensional design vector, $f_i(\mathbf{x})$ is the ith stochastic objective function, $h_j(\mathbf{x})$ is the jth constraint function, $d_j$ denotes a required level, $J$ is the number of constraints, and $\mathbf{B} = [B_1, \ldots, B_K]^T$ and $\mathbf{U} = [U_1, \ldots, U_K]^T$ indicate the lower and upper bounds, respectively. Sufficient simulation replications are performed to attain a complex model of $E[f_i(\mathbf{x})]$. Let $L$ represent the number of replications and $f_i^l(\mathbf{x})$ denote the objective value of the lth replication. A common method to assess $E[f_i(\mathbf{x})]$ is the sample average approximation.
$\bar{f}_i(\mathbf{x}) = \frac{1}{L} \sum_{l=1}^{L} f_i^l(\mathbf{x}), \quad i = 1, 2.$ (4)
One way to obtain more precise estimates for E [ f i ( x ) ] is to increase the value of L. Two primary efficiency concerns in the sample average approximation include (i) L needs to be sufficiently large to permit a reasonable estimation of E [ f i ( x ) ] and (ii) f ¯ i ( x ) must be calculated for numerous designs to find the best design.
A well-known strategy for solving bi-objective optimization problems is the weighted-sum method. This method converts a bi-objective into a single objective by multiplying each of the objectives by a user-supplied weight. Since the inequality constraints are soft, we transform the bi-objective optimization problems into unconstrained ones [26].
$\min \; F(\mathbf{x}) = \alpha \bar{f}_1(\mathbf{x}) + (1 - \alpha) \bar{f}_2(\mathbf{x}) + \beta \sum_{j=1}^{J} PF_j(\mathbf{x})$ (5)
where α ( 0 , 1 ) denotes a compound factor, β denotes a penalty weight, F ( x ) depicts a weighted-sum objective function, and P F j ( x ) is a quadratic penalty function.
$PF_j(\mathbf{x}) = \begin{cases} 0, & \text{if } h_j(\mathbf{x}) \le d_j, \\ \left( h_j(\mathbf{x}) - d_j \right)^2, & \text{else}, \end{cases} \quad j = 1, \ldots, J.$ (6)
A large value of $\beta$ helps mitigate the violation of a given constraint. Let $L_a$ denote a sufficiently large value of $L$; the exact assessment of (4) refers to $L = L_a$. Let $F_a(\mathbf{x})$ express the weighted-sum objective value of $\mathbf{x}$ obtained using the exact assessment.
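To make the evaluation pipeline concrete, the following Python sketch assembles the sample average approximation (4) and the penalized weighted-sum objective (5) and (6). The function and parameter names (simulate_replication, h_funcs, and the default L, alpha, and beta) are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def penalized_objective(x, simulate_replication, h_funcs, d, L=100,
                        alpha=0.5, beta=1e4):
    """Sample-average approximation (4) folded into the penalized
    weighted-sum objective (5)-(6). `simulate_replication(x)` is assumed
    to return one noisy sample of (f1(x), f2(x))."""
    samples = np.array([simulate_replication(x) for _ in range(L)])  # shape (L, 2)
    f1_bar, f2_bar = samples.mean(axis=0)            # sample averages, Eq. (4)
    penalty = sum(max(0.0, h(x) - dj) ** 2           # quadratic penalties, Eq. (6)
                  for h, dj in zip(h_funcs, d))
    return alpha * f1_bar + (1 - alpha) * f2_bar + beta * penalty    # Eq. (5)
```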
To determine N remarkable designs from the entire design space in an accepted time, we develop a surrogate model to assess a design more quickly and employ a novel optimization algorithm supported by this surrogate model. The surrogate model is the RMTC [22], and the optimization algorithm is the ADTBO.

2.2. RMTC

A surrogate model is a technique used in engineering when a result of interest cannot be readily and directly assessed, and a model of the outcome is utilized instead. To determine the relationship between design goals, constraint functions, and design variables, experiments and/or simulations are often used. However, running even a single simulation might take a considerable amount of time for many practical issues. Therefore, basic activities like design exploration, sensitivity analysis, and what-if analysis are unfeasible since they need hundreds or millions of simulation evaluations. To lessen this load, approximation models that behave similarly to the simulation model may be built but would need fewer computing resources for assessment. The construction of surrogate models follows a data-driven, bottom-up methodology. It is not required that all the inner workings of the simulation code be known; only the input–output behavior is essential.
Surrogate models have been used in several recent applications. Various machine learning algorithms have been developed for surrogate modeling, including support vector regression (SVR) [27], polynomial chaos expansion (PCE) [28], kriging [29], the extreme learning machine (ELM) [30], and RMTC [22]. Among them, RMTC interpolates low-dimensional problems with large unstructured datasets by computing spline coefficients based on minimal energy and regularization. The RMTC has many uses, especially in time series forecasting, function approximation, curve fitting, prediction, and classification [22]. RMTC offers many advantages, such as short prediction times, a low risk of training failure, and scalability with respect to parameter variability. Therefore, an RMTC surrogate model is used to assess the fitness of a design.
The training data are pairs $(\mathbf{x}_i, F_a(\mathbf{x}_i))$, where $\mathbf{x}_i$ and $F_a(\mathbf{x}_i)$ are a design and its objective value assessed using the exact assessment, respectively. The goal of RMTC is to provide predicted values $f(\mathbf{x}, \boldsymbol{\omega})$ that are close to the observed ones, $F_a(\mathbf{x})$. Figure 1 illustrates the framework of RMTC. The general mathematical model of RMTC is shown below.
$f(\mathbf{x}, \boldsymbol{\omega}) = \sum_{b=1}^{B} \omega_b \phi_b(\mathbf{x})$ (7)
where $f(\mathbf{x}, \boldsymbol{\omega})$ denotes the predicted output, $B$ is the number of basis functions, $\mathbf{x}$ denotes a K-dimensional design vector, $\phi_b(\cdot)$ denotes the multivariate spline functions, and $\omega_b$ indicates the coefficients of the multivariate splines.
Any type of splines can be used in a regularized minimal-energy tensor product. There are numerous spline types, including plus-spline, rational cubic splines, natural cubic splines, B-splines, piecewise Hermite splines, and cubic Hermite splines. The cubic Hermite splines offer a great deal of flexibility and versatility in terms of approximation accuracy and computational efficiency. RMTC computes the coefficients of the splines, ω b , through the energy minimization problem under the condition that the splines need to pass through the given points. We trained RMTC offline with training data. After offline training, the calculation of predicted output f ( x , ω ) is straightforward for any x .
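As an illustration of this workflow, the sketch below trains an RMTC surrogate on synthetic data using the open-source Surrogate Modeling Toolbox (SMT) maintained by the authors of [22]; the training data, bounds, and option values are illustrative assumptions, not the settings used in this paper.

```python
import numpy as np
from smt.surrogate_models import RMTC  # Surrogate Modeling Toolbox implementation of [22]

# Illustrative 2-D training set; in the DTOO setting, xt would hold the M sampled
# designs and yt their exactly assessed weighted-sum objective values F_a(x).
rng = np.random.default_rng(0)
xt = rng.uniform(0.0, 1.0, size=(200, 2))
yt = np.sin(6.0 * xt[:, 0]) + xt[:, 1] ** 2            # stand-in for F_a(x)

sm = RMTC(xlimits=np.array([[0.0, 1.0], [0.0, 1.0]]),  # per-dimension bounds [B, U]
          num_elements=6)                              # illustrative discretization
sm.set_training_values(xt, yt)
sm.train()                                             # offline training

x_new = np.array([[0.3, 0.7]])
print(sm.predict_values(x_new))                        # fast surrogate prediction
```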

2.3. ADTBO

With the help of the RMTC surrogate model, novel optimization algorithms can be applied to find N remarkable designs from the whole design space. Because DTBO can balance global and local search capabilities, it is appropriate for meeting the efficiency requirements. DTBO is a human-based algorithm that mimics human behavior observed in driving education [21]. DTBO is designed to solve optimization problems by considering three phases: training, replication, and practice. During the training phase, training is provided by the driving instructor. Instructors offer guidance, encouragement, and constructive feedback throughout lessons to help learners develop their driving skills. In the replication phase, the student drivers learn and replicate the patterns and skills imparted by the instructor. In the practice phase, consistent personal practice ensures that knowledge and skills are stored in the brain and cannot be disturbed easily. DTBO has been successfully applied to solving the partial shading problem [31], skin cancer detection [32], diverse hybrid power systems [33], and parameter identification of fractional-order systems [34].
Although DTBO provides promising results, it needs to address the serious issue of premature convergence, which is caused by the decrease in population diversity during the search process. To overcome these drawbacks and further enhance the precision of the calculation results, the ADTBO is proposed. The ADTBO has two control parameters, the number of driving instructors (ND) and the patterning index (PI). The values of ND and PI are dynamically changed to emphasize exploration in the early stages of the search and exploitation in the later stages. The value of ND decreases exponentially as the iteration count increases. Among the ADTBO population, the top ND members are chosen as driving instructors, and the remaining members are learning drivers. Roulette wheel selection is used to select a driving instructor from the ND members. Learning the skills from this driving instructor moves the learning drivers to distinct areas of the design space. The value of PI also decreases exponentially to reinforce exploitation as the number of iterations increases. Since the learning driver attempts to model all the actions and skills of the instructor, linear combinations of members and the instructor move each member to distinct positions in the design space.
The ADTBO uses the following notation (Algorithm 1). $\Psi$ denotes the number of members and $t_{\max}$ denotes the maximum number of iterations. $\mathbf{x}_i^t = [x_{i,1}^t, \ldots, x_{i,K}^t]^T$ and $\mathbf{y}_i^t = [y_{i,1}^t, \ldots, y_{i,K}^t]^T$ denote the positions of the ith member and the corresponding driving instructor at iteration t, respectively. $ND^t$ is the number of driving instructors at iteration t, $ND^t \in [ND_{\min}, ND_{\max}]$, where $ND_{\min}$ and $ND_{\max}$ denote the minimum and maximum values of ND, respectively. $PI^t$ is the patterning index at iteration t, $PI^t \in [PI_{\min}, PI_{\max}]$, where $PI_{\min}$ and $PI_{\max}$ denote the minimum and maximum values of PI, respectively.
Algorithm 1: The ADTBO
Step 1: Configuration
Configure the values of Ψ , N D min , N D max , P I min , P I max , and t max , where Ψ is the number of members in the population.
Step 2: Initialization
(a)
Initialize the iteration index $t = 1$ and a population with $\Psi$ members.
$\mathbf{x}_i^t = \mathbf{B} + \mathrm{rand}[0,1] \cdot (\mathbf{U} - \mathbf{B}), \quad i = 1, \ldots, \Psi.$ (8)
where $\mathbf{B}$ and $\mathbf{U}$ denote the lower and upper bounds, respectively, and $\mathrm{rand}[0,1]$ draws a random value between zero and one.
(b)
Compute the fitness $F_a(\mathbf{x}_i^t)$ of each member supported by the RMTC, $i = 1, \ldots, \Psi$.

Step 3: Update two control parameters
$ND^t = \left[ ND_{\min} + \left( ND_{\max} - ND_{\min} \right) \exp\!\left( \ln\!\left( \frac{ND_{\min}}{ND_{\max}} \right) \frac{t}{t_{\max}} \right) \right]$ (9)
$PI^t = PI_{\max} \cdot \exp\!\left( -\left( PI_{\max} - PI_{\min} \right) \frac{t}{t_{\max}} \right)$ (10)
where $[\,\cdot\,]$ denotes the bracket function, which rounds a real value to the closest integer.
Step 4: Choosing the driving instructor
(a)
Rank the $\Psi$ members by fitness from smallest to largest, then pick the top-ranking $ND^t$ members as the candidate set of driving instructors $\{1, 2, \ldots, ND^t\}$.
(b)
Determine the driving instructor $\mathbf{y}_i^t$ of the ith member using roulette wheel selection from the candidate set $\{1, 2, \ldots, ND^t\}$.

Step 5: Training by a driving instructor
(a)
Update the positions of the ith member.
If $F_a(\mathbf{y}_i^t) < F_a(\mathbf{x}_i^t)$,
$\mathbf{x}_i = \mathbf{x}_i^t + \mathrm{rand}[0,1] \cdot \left( \mathbf{y}_i^t - \mathrm{ranp}[1,2] \cdot \mathbf{x}_i^t \right), \quad i = 1, \ldots, \Psi.$ (11)
else
$\mathbf{x}_i = \mathbf{x}_i^t + \mathrm{rand}[0,1] \cdot \left( \mathbf{x}_i^t - \mathbf{y}_i^t \right), \quad i = 1, \ldots, \Psi.$ (12)
where $\mathrm{ranp}[1,2]$ randomly picks a number from the set {1, 2}. If $x_{i,k} < B_k$, set $x_{i,k} = B_k$; if $x_{i,k} > U_k$, set $x_{i,k} = U_k$.
(b)
Compute $F_a(\mathbf{x}_i^t)$ and $F_a(\mathbf{x}_i)$ supported by the RMTC and adopt the greedy approach between $\mathbf{x}_i^t$ and $\mathbf{x}_i$: if $F_a(\mathbf{x}_i) < F_a(\mathbf{x}_i^t)$, set $\mathbf{x}_i^t = \mathbf{x}_i$.

Step 6: Patterning the learning driver on the instructor’s skills
(a)
Update the positions of the ith member.
$\mathbf{x}_i = PI^t \cdot \mathbf{x}_i^t + \left( 1 - PI^t \right) \cdot \mathbf{y}_i^t$ (13)
If $x_{i,k} < B_k$, set $x_{i,k} = B_k$; if $x_{i,k} > U_k$, set $x_{i,k} = U_k$.
(b)
Compute $F_a(\mathbf{x}_i^t)$ and $F_a(\mathbf{x}_i)$ supported by the RMTC and apply the greedy approach between $\mathbf{x}_i^t$ and $\mathbf{x}_i$: if $F_a(\mathbf{x}_i) < F_a(\mathbf{x}_i^t)$, set $\mathbf{x}_i^t = \mathbf{x}_i$.

Step 7: Personal practice
(a)
Update the positions of the ith member.
$\mathbf{x}_i = \left( 1 + \mathrm{rand}[-0.05, 0.05] \cdot \left( 1 - \frac{t}{t_{\max}} \right) \right) \cdot \mathbf{x}_i^t$ (14)
where $\mathrm{rand}[-0.05, 0.05]$ draws a random value between −0.05 and 0.05, and the value 0.05 is the default value in the original DTBO [21]. If $x_{i,k} < B_k$, set $x_{i,k} = B_k$; if $x_{i,k} > U_k$, set $x_{i,k} = U_k$.
(b)
Compute $F_a(\mathbf{x}_i^t)$ and $F_a(\mathbf{x}_i)$ supported by the RMTC and employ the greedy method between $\mathbf{x}_i^t$ and $\mathbf{x}_i$: if $F_a(\mathbf{x}_i) < F_a(\mathbf{x}_i^t)$, set $\mathbf{x}_i^t = \mathbf{x}_i$.

Step 8: Stop
If $t \ge t_{\max}$, stop; else, let $t = t + 1$ and return to Step 3.
The ADTBO terminates when the number of iterations reaches the specified value $t_{\max}$. After the ADTBO terminates, the final $\Psi$ members are sorted according to their fitness. Finally, the top-ranking N members are picked as the selected subset of remarkable designs.
The computational complexity of ADTBO depends on the initialization and the three phases: training, replication, and practice. The computational complexity of the initialization is $O(\Psi K)$, where $\Psi$ is the number of members and K is the dimension of the CBSOP. The ADTBO members are updated in three phases at each iteration. Thus, the computational complexity of the search process is $O(3 \Psi K t_{\max})$, where $t_{\max}$ denotes the maximum number of iterations. Accordingly, the overall computational complexity of ADTBO is $O\left( (1 + 3 t_{\max}) \Psi K \right)$.
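For readers who prefer code, the following Python sketch mirrors Algorithm 1 under the equation reconstructions above. The fitness argument stands in for the RMTC-supported estimate of $F_a(\mathbf{x})$, and the roulette-wheel weighting is an illustrative assumption, since the paper does not spell it out.

```python
import numpy as np

def adtbo(fitness, B, U, psi=100, nd_min=1, nd_max=10,
          pi_min=0.05, pi_max=0.9, t_max=500, seed=0):
    """Sketch of Algorithm 1 (ADTBO). `fitness(x)` plays the role of the
    RMTC-supported estimate of F_a(x); equation numbers refer to the paper."""
    rng = np.random.default_rng(seed)
    B, U = np.asarray(B, float), np.asarray(U, float)
    K = B.size
    X = B + rng.uniform(0.0, 1.0, (psi, K)) * (U - B)            # Eq. (8)
    F = np.array([fitness(x) for x in X])

    def greedy(i, cand):                                         # keep the better position
        cand = np.clip(cand, B, U)                               # bound handling
        f_new = fitness(cand)
        if f_new < F[i]:
            X[i], F[i] = cand, f_new

    for t in range(1, t_max + 1):
        # dynamic control parameters, reconstructed Eqs. (9)-(10)
        nd = max(1, int(round(nd_min + (nd_max - nd_min)
                              * np.exp(np.log(nd_min / nd_max) * t / t_max))))
        pi = pi_max * np.exp(-(pi_max - pi_min) * t / t_max)

        instructors = np.argsort(F)[:nd]                         # top-ranking members
        w = F[instructors].max() - F[instructors] + 1e-12        # illustrative roulette weights
        w /= w.sum()

        for i in range(psi):
            y = X[rng.choice(instructors, p=w)]                  # Step 4: pick an instructor
            # Step 5: training by the instructor, Eqs. (11)-(12)
            if fitness(y) < F[i]:
                greedy(i, X[i] + rng.uniform(0, 1, K) * (y - rng.integers(1, 3) * X[i]))
            else:
                greedy(i, X[i] + rng.uniform(0, 1, K) * (X[i] - y))
            # Step 6: patterning the instructor's skills, Eq. (13)
            greedy(i, pi * X[i] + (1 - pi) * y)
            # Step 7: personal practice, Eq. (14)
            greedy(i, (1 + rng.uniform(-0.05, 0.05, K) * (1 - t / t_max)) * X[i])

    order = np.argsort(F)
    return X[order], F[order]     # ranked members; the top N form the selected subset
```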

2.4. ROCBA

Finally, we determine an extraordinary design from the N remarkable designs using the ROCBA approach. The ROCBA approach selects an extraordinary design by iteratively adjusting the computing resources (Algorithm 2). The core idea of the ROCBA is to spend more computational resources on a few potential designs and less on the remaining, merely passable designs. Focusing on a few potential designs saves computing resources and reduces the variances of the potential estimators.
Based on the statistics of the objective values of the N remarkable designs, the number of extra replications is calculated to supply more computing resources to potential designs. Let $C_b$ represent the limited computational budget and $L_i$ denote the replications allocated to the ith design. Each of the N remarkable designs initially receives a rough allocation of $L_0$ replications. A pre-defined $\Delta$ is a non-negative integer denoting a one-time computing budget increment. ROCBA aims to efficiently assign $C_b$ to $L_1, L_2, \ldots, L_N$ to maximize the probability of determining the optimum while satisfying $L_1 + L_2 + \cdots + L_N = C_b$. The limited computational budget $C_b$ is calculated by $C_b = N \times L_a / s$, where $L_a = 10^4$ denotes the replications used in the exact assessment, and $s$ denotes a time-saving ratio that can be obtained from the OCBA procedure [35].
Algorithm 2: The ROCBA
Step 1. Define the value of $L_0$, and set $l = 0$, $L_1^l = L_0, \ldots, L_N^l = L_0$. Calculate the limited computational budget $C_b = N \times L_a / s$.
Step 2. If $\sum_{i=1}^{N} L_i^l < C_b$, go to Step 3; else, terminate and select the design with the smallest objective value.
Step 3. Add an additional computational budget $\Delta$ to $\sum_{i=1}^{N} L_i^l$, and refresh the replications.
$L_j^{l+1} = \left( \sum_{i=1}^{N} L_i^l + \Delta \right) \times \theta_j^l \Big/ \left( \theta_b^l + \sum_{i=1, i \ne b}^{N} \theta_i^l \right)$ (15)
$L_b^{l+1} = \frac{\theta_b^l}{\theta_j^l} \times L_j^{l+1}$ (16)
$L_i^{l+1} = \frac{\theta_i^l}{\theta_j^l} \times L_j^{l+1}$ (17)
for all $i \ne j \ne b$, where $\frac{\theta_i^l}{\theta_j^l} = \left( \frac{\delta_i^l \times (\bar{F}_b^l - \bar{F}_j^l)}{\delta_j^l \times (\bar{F}_b^l - \bar{F}_i^l)} \right)^2$, $\theta_b^l = \delta_b^l \sqrt{ \sum_{i=1, i \ne b}^{N} \left( \frac{\theta_i^l}{\delta_i^l} \right)^2 }$, $\bar{F}_i^l = \frac{1}{L_i^l} \sum_{h=1}^{L_i^l} F_h(\mathbf{x}_i)$, $\delta_i^l = \sqrt{ \frac{1}{L_i^l} \sum_{h=1}^{L_i^l} \left( F_h(\mathbf{x}_i) - \bar{F}_i^l \right)^2 }$, $\mathbf{x}_i$ is the ith potential design, $F_h(\mathbf{x}_i)$ represents the fitness of $\mathbf{x}_i$ at the hth replication, $b = \arg\min_i \bar{F}_i^l$, $i$ and $j$ are two different potential designs, and $b$ is the observed best design.
Step 4. Enforce the extra replications, i.e., $\max\left[ 0, L_i^{l+1} - L_i^l \right]$, for the ith potential design, then compute the mean $\hat{F}_i^{l+1}$ and standard deviation $\hat{\delta}_i^{l+1}$ of the extra replications by
$\hat{F}_i^{l+1} = \frac{1}{L_i^{l+1} - L_i^l} \sum_{h = L_i^l + 1}^{L_i^{l+1}} F_h(\mathbf{x}_i)$ (18)
$\hat{\delta}_i^{l+1} = \sqrt{ \frac{1}{L_i^{l+1} - L_i^l} \sum_{h = L_i^l + 1}^{L_i^{l+1}} \left( F_h(\mathbf{x}_i) - \hat{F}_i^{l+1} \right)^2 }$ (19)
respectively.
Step 5. Compute the mean $\bar{F}_i^{l+1}$ and standard deviation $\delta_i^{l+1}$ of the entire set of replications by
$\bar{F}_i^{l+1} = \frac{1}{L_i^{l+1}} \left[ L_i^l \times \bar{F}_i^l + \left( L_i^{l+1} - L_i^l \right) \times \hat{F}_i^{l+1} \right]$ (20)
$\delta_i^{l+1} = \sqrt{ \frac{1}{L_i^{l+1} - 1} \left[ L_i^l \left( \bar{F}_i^l \right)^2 + \left( L_i^l - 1 \right) \left( \delta_i^l \right)^2 + \left( L_i^{l+1} - L_i^l \right) \left( \hat{F}_i^{l+1} \right)^2 + \left( L_i^{l+1} - L_i^l - 1 \right) \left( \hat{\delta}_i^{l+1} \right)^2 - L_i^{l+1} \left( \bar{F}_i^{l+1} \right)^2 \right] }$ (21)
respectively. Let $l = l + 1$ and return to Step 2.
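The allocation rule in Equations (15)–(17) can be sketched compactly in Python. This is a minimal sketch of a single budget-allocation step under the reconstructions above, not the authors' implementation; ties and zero variances are handled crudely.

```python
import numpy as np

def ocba_allocation(means, stds, n_current, delta):
    """One ROCBA/OCBA allocation step in the spirit of Eqs. (15)-(17):
    distribute `delta` extra replications over the N remarkable designs."""
    means = np.asarray(means, float)
    stds = np.maximum(np.asarray(stds, float), 1e-12)
    n_current = np.asarray(n_current)
    N = means.size
    order = np.argsort(means)
    b, j = int(order[0]), int(order[1])          # observed best and a reference design

    theta = np.zeros(N)
    theta[j] = 1.0
    for i in range(N):
        if i in (b, j):
            continue
        denom = stds[j] * (means[b] - means[i])
        denom = denom if denom != 0.0 else 1e-12
        theta[i] = ((stds[i] * (means[b] - means[j])) / denom) ** 2
    mask = np.arange(N) != b
    theta[b] = stds[b] * np.sqrt(np.sum((theta[mask] / stds[mask]) ** 2))

    target = (n_current.sum() + delta) * theta / theta.sum()     # Eqs. (15)-(17)
    return np.maximum(np.round(target - n_current), 0).astype(int)   # extra replications, Step 4

# Hypothetical usage: three surviving designs, 20 replications each, Delta = 10.
print(ocba_allocation([10.2, 11.5, 10.9], [0.8, 1.1, 0.9], [20, 20, 20], 10))
```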

2.5. The DTOO Approach

Figure 2 demonstrates the flow diagram of the DTOO approach. The DTOO approach comprises three major components: the surrogate model, diversification, and intensification (Algorithm 3). Steps 2, 3, and 4 of the DTOO approach correspond to the surrogate model, diversification, and intensification, respectively.
Algorithm 3: The DTOO
Step 1: 
Define the quantities of M , Ψ , N D min , N D max , P I min , P I max , t max , N , L a , L 0 , and Δ .
Step 2: 
Randomly choose M designs $\mathbf{x}$ from the design space, assess $F_a(\mathbf{x})$ for each, and train the RMTC offline on these M sampled designs.
Step 3: 
Stochastically generate $\Psi$ designs $\mathbf{x}$ as the initial population and apply Algorithm 1 to these members supported by the RMTC. When Algorithm 1 stops, rank all the final $\Psi$ designs by their objective value from lowest to highest, and pick the top N designs as the selected subset.
Step 4: 
Apply Algorithm 2 to the N remarkable designs in the selected subset and determine the best $\mathbf{x}$ as the extraordinary design.
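A compact way to read Algorithm 3 is as a three-stage pipeline; the sketch below wires the stages together. Every stage is injected as a callable, so the helper names are placeholders corresponding to the components sketched in Sections 2.2–2.4, not functions provided by the paper.

```python
def dtoo(sample_designs, exact_assess, train_surrogate, diversify, intensify,
         B, U, M=9604, N=40):
    """Three-stage DTOO pipeline (Algorithm 3); each stage is an injected callable."""
    # Step 2 (surrogate model): exact assessment of M random designs, offline training.
    xt = sample_designs(M)
    yt = [exact_assess(x) for x in xt]
    surrogate = train_surrogate(xt, yt)
    # Step 3 (diversification): surrogate-driven ADTBO; keep the top N designs.
    ranked_designs, _ = diversify(surrogate, B, U)
    selected_subset = ranked_designs[:N]
    # Step 4 (intensification): ROCBA with exact, simulation-based assessment.
    return intensify(selected_subset, exact_assess)
```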

3. Medical Resource Allocation in the Emergency Department

3.1. Medical Resource Allocation

Figure 3 shows the patient flow in the emergency department. Patient flow is divided into four phases: (i) reception and examination, (ii) assignment of patient beds, (iii) diagnosis and patient transfer, and (iv) patient assessment and departure. The interval between patient arrivals at the emergency department follows a known probability distribution. Service times for medical services follow certain probability distributions. A CBSOP concerning emergency medical resource allocation is presented under these pre-established conditions. This problem aims to find the most viable design for minimizing the average LOS and the MWC of emergency resource allocation under limited medical resources. The medical resources comprise the number of physicians in three areas ($x_1$~$x_3$), the number of nurses in four areas ($x_4$~$x_7$), the numbers of two types of medical equipment ($x_8$~$x_9$), the number of laboratory technicians ($x_{10}$), and the number of emergency department beds ($x_{11}$). Table 1 lists the medical resources.

3.2. Mathematical Formulation

The medical resource allocation problem can be formulated as a CBSOP based on patient flows.
$\min \; f_1(\mathbf{x}) = E[\mathrm{LOS}(\mathbf{x})]$ (22)
$\min \; f_2(\mathbf{x}) = E[\mathrm{MWC}(\mathbf{x})]$ (23)
$\text{subject to} \quad \sum_{i=1}^{3} x_i \le d_r,$ (24)
$\sum_{i=4}^{7} x_i \le n_u,$ (25)
$\mathbf{B} \le \mathbf{x} \le \mathbf{U}.$ (26)
where $\mathbf{x} = [x_1, \ldots, x_{11}]^T$ denotes the decision vector, $E[\mathrm{LOS}(\mathbf{x})]$ denotes the expected patient LOS, $E[\mathrm{MWC}(\mathbf{x})]$ denotes the expected MWC, $d_r$ denotes the maximum number of physicians, $n_u$ denotes the maximum number of nurses, and $\mathbf{B} = [B_1, \ldots, B_{11}]^T$ and $\mathbf{U} = [U_1, \ldots, U_{11}]^T$ express the lower and upper bounds, respectively. Constraints (24) and (25) state that the total numbers of physicians and nurses across the different areas cannot exceed their respective maximum limits.
The sample mean is used to estimate two objective performances.
$\bar{f}_1(\mathbf{x}) = \frac{1}{L} \sum_{l=1}^{L} \mathrm{LOS}_l(\mathbf{x})$ (27)
$\bar{f}_2(\mathbf{x}) = \frac{1}{L} \sum_{l=1}^{L} \mathrm{MWC}_l(\mathbf{x})$ (28)
where L represents the number of replications, and $\mathrm{LOS}_l(\mathbf{x})$ and $\mathrm{MWC}_l(\mathbf{x})$ are the assessments of the lth replication. Because the inequality constraints are soft, we convert the constrained bi-objective optimization problem into an unconstrained one.
$\min \; F(\mathbf{x}) = \alpha \bar{f}_1(\mathbf{x}) + (1 - \alpha) \bar{f}_2(\mathbf{x}) + \beta \left( PF_1(\mathbf{x}) + PF_2(\mathbf{x}) \right)$ (29)
where α ( 0 , 1 ) denotes a compound factor, β depicts a penalty weight, F ( x ) represents a weighted-sum objective function, and P F 1 ( x ) and P F 2 ( x ) are quadratic penalty functions.
$PF_1(\mathbf{x}) = \begin{cases} 0, & \text{if } \sum_{i=1}^{3} x_i \le d_r, \\ \left( \sum_{i=1}^{3} x_i - d_r \right)^2, & \text{else.} \end{cases}$ (30)
$PF_2(\mathbf{x}) = \begin{cases} 0, & \text{if } \sum_{i=4}^{7} x_i \le n_u, \\ \left( \sum_{i=4}^{7} x_i - n_u \right)^2, & \text{else.} \end{cases}$ (31)
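For concreteness, a minimal Python sketch of the penalized objective (29)–(31) for the 11-dimensional design vector is given below. The functions simulate_los and simulate_mwc are hypothetical single-replication simulators standing in for the discrete-event model of Figure 3; the default α, β, d_r, and n_u follow the practical example in Section 4.1, while L is illustrative.

```python
import numpy as np

def ed_objective(x, simulate_los, simulate_mwc, L=100,
                 alpha=0.5, beta=1e4, d_r=10, n_u=20):
    """Penalized weighted-sum objective (29)-(31) for the 11-dimensional
    medical resource design x = [x1, ..., x11]."""
    los_bar = np.mean([simulate_los(x) for _ in range(L)])    # Eq. (27)
    mwc_bar = np.mean([simulate_mwc(x) for _ in range(L)])    # Eq. (28)
    pf1 = max(0.0, np.sum(x[0:3]) - d_r) ** 2                 # physicians, Eq. (30)
    pf2 = max(0.0, np.sum(x[3:7]) - n_u) ** 2                 # nurses, Eq. (31)
    return alpha * los_bar + (1 - alpha) * mwc_bar + beta * (pf1 + pf2)  # Eq. (29)
```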

4. Practical Example

4.1. Practical Example and Simulation Results

A practical example presented in [36] illustrates the application of the DTOO approach. Table 2 indicates the limited resources, including the maximum and minimum quantities of each medical resource. The costs of each medical resource are shown in Table 3, where the units are measured in thousands. Table 4 shows the processing time probability distribution. Patient arrival times, nursing service times, and the arrival times of emergency department beds follow an exponential distribution. The value in the parentheses of the exponential distribution is the average time. The reception and examination times follow a gamma distribution. The two values in the parentheses of the gamma distribution are the shape and scale parameters. X-ray tests, CT tests, and report waiting times follow a lognormal distribution. The two values in the parentheses of the lognormal distribution are the location and scale parameters. Laboratory test times and service times of fever area physicians follow a normal distribution. The two values in the parentheses of the normal distribution are the mean and variance. Service times for emergency and critical area physicians follow Weibull distributions. The two values in the parentheses of the Weibull distribution are the shape and scale parameters. We carried out a simulation run over a period of 30 days. The transient period of a simulation run is the first five days, and the steady period is the latter 25 days. We collected the experimental data during the steady-state period. The experimental data are primarily collected in the emergency department of a certain hospital in Taiwan. Although the framework of the simulation process simplifies some complicated medical activities, the framework still preserves high referential value in terms of the behavior explored by the simulation model. The DTOO approach was coded with the MATLAB R2021a software on Windows 11. This algorithm was then run on a personal computer with an Intel Core i7-10610U 1.80 GHz CPU and 32 GB of RAM.
The RMTC was trained by arbitrarily selecting M = 9604 designs. The value M = 9604 was obtained with a confidence level of 95% and a confidence interval of 1% at the sample size calculator [37]. The objective value of a design was computed by exact assessment.
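As a quick sanity check, assuming the calculator in [37] uses the standard proportion-based sample-size formula, the figure M = 9604 is reproduced with z = 1.96 for a 95% confidence level, the conservative proportion p = 0.5, and a 1% confidence interval:

```python
# Sample size for a 95% confidence level (z = 1.96), a 1% confidence interval (e = 0.01),
# and the conservative proportion p = 0.5:
z, p, e = 1.96, 0.5, 0.01
print(round(z ** 2 * p * (1 - p) / e ** 2))   # -> 9604
```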
The compound factor α was 0.5, and the penalty weight β was 10 4 . The maximum limit of physicians was d r = 10 , and the maximum limit of nurses was n u = 20 . The lower bounds and upper bounds were B = [ 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ] T and U = [ 5 , 5 , 5 , 10 , 10 , 10 , 10 , 4 , 4 , 10 , 300 ] T , respectively. Accordingly, the design space size is 6 × 10 10 .
The performance of ADTBO depends on the settings of two parameters: the number of members $\Psi$ and the required number of iterations $t_{\max}$. Various parameter combinations were tested by hand tuning to identify the best parameter settings. The following parameters were used in ADTBO: $\Psi = 100$, $ND_{\min} = 1$, $ND_{\max} = 10$, $PI_{\min} = 0.05$, $PI_{\max} = 0.9$, and $t_{\max} = 500$. Figure 4 shows the curves of ND and PI as a function of the number of iterations. ADTBO effectively explores the design space at the beginning and tends to refine excellent designs towards the end. As the iterative process continues, the parameters ND and PI are dynamically changed to enhance the diversity of the earlier searches and the intensity of the subsequent searches.
To examine the impact of N on performance, the DTOO approach was verified for four values of N: 10, 20, 30, and 40. Ref. [35] suggests that a good choice for $\Delta$ is a number smaller than 100 or 10% of N, whichever is smaller, and that a suggested choice of $L_0$ is somewhere between 5 and 20. Thus, the parameters used in ROCBA were $L_0$ = 20, $\Delta$ = 10, and $L_a$ = $10^4$. The time-saving ratios for N = 40, 30, 20, and 10 are s = 10.7, 8.4, 6.1, and 3.5, respectively [35]. Accordingly, the limited computational budgets with respect to N = 40, 30, 20, and 10 were $C_b$ = 37,383, 35,714, 32,787, and 28,571, respectively. Table 5 lists the extraordinary design $\mathbf{x}$, $E[\mathrm{LOS}(\mathbf{x})]$, $E[\mathrm{MWC}(\mathbf{x})]$, and the CPU times. Since the RMTC is trained offline, the CPU time consumed by the DTOO approach does not include the training time. Therefore, the CPU time reports the total runtime for executing Steps 3 and 4 of the DTOO approach. Different values of N were set before executing the program. Simulation results reveal that as N increases, the CPU time also increases, but the weighted-sum objective value decreases. Additionally, all CPU times were under two minutes, which is fast enough for real-time applications.

4.2. Comparison

Since particle swarm optimization (PSO), the genetic algorithm (GA), and the evolution strategy (ES) are popular algorithms that are canonically described as inherently real-valued optimization algorithms, they were employed in the practical example for comparison. The PSO algorithm solves tricky problems by mimicking the behavior of social insects or other animals [38]. The GA is an adaptive heuristic search algorithm that belongs to the larger category of evolutionary algorithms [39]. The ES is a powerful subset of evolutionary algorithms that can be used to solve various problems [40]. The following parameter settings of the three heuristic approaches were default values. PSO used an inertia factor of 1, cognitive and social factors of 2.05, a population size of 100, and a maximum allowable velocity of 0.5. The GA adopted real-value coding to represent an integer-valued design, a population size of 100, single-point crossover with a probability of 0.7, roulette wheel selection, and mutation with a probability of 0.03. The ES used a mutation time constant of 1/11, a population size of 100, and an offspring size of 200. The objective values of the three heuristic approaches were calculated by exact assessment.
Owing to the stochastic nature of the problem, 30 simulation runs were performed for the practical example. Because the three heuristic approaches required a lengthy period to find the optimum, they were stopped after 100 min of CPU time. Figure 5 presents the solution quality and execution time of the four methods for 30 simulation runs. The marked point with coordinates (1.95, 59,456.2) denotes the pair (average CPU time, average objective value) resulting from the DTOO with N = 40. Figure 5 also shows the average best-so-far objective value progression and error bars resulting from 30 simulation runs of the three heuristic approaches. Table 6 shows that the performances obtained by PSO, the GA, and the ES were 5.54%, 9.71%, and 6.96% larger than those obtained by DTOO, respectively. Simulation results reveal that the DTOO approach can find an extraordinary design quickly and outperforms the three heuristic approaches.
Furthermore, the ranks of the obtained designs were analyzed to assess global optimality. Because it is hard to acquire the ranking of all designs, a representative subset Ω is chosen to approximate the characteristics of the large design space. Therefore, 16,641 designs were randomly picked from the design space to create the representative subset. Exact assessment was used to evaluate the fitness of each design. The value |Ω| = 16,641 was calculated with a confidence level of 99% and a confidence interval of 1% using the sample size calculator [37].
An analysis of ranking percentages was performed to reveal the ranking of the extraordinary designs within the representative subset. The ranking percentage of an extraordinary design in Ω is defined by $\frac{r_k}{|\Omega|} \times 100\%$, where $r_k$ denotes the rank of the extraordinary design. Table 7 shows the statistics over 30 simulation runs for the four methods. The standard error of the mean (S.E.M.) of the average best-so-far performance using DTOO with N = 40 over 30 runs was 4.62. This small value indicates that most of the performances of the DTOO are very close to the optimum over the 30 simulation runs. Consequently, the DTOO approach can often achieve near-optimal results, even if it cannot guarantee a globally optimal design. Figure 6 presents the progression of the average best-so-far objective value and error bars of the DTOO approach for N = 40.
To provide a statistical comparison with the three heuristic approaches, the Wilcoxon rank-sum test was performed at the 5% significance level [41]. The Wilcoxon rank-sum test is based on two indicators, the p-value and the h-value, to evaluate the superiority of one method over another. The p-value indicates the degree of marginal significance within a statistical hypothesis test. The value h = 1 indicates a rejection of the null hypothesis, and h = 0 represents a failure to reject the null hypothesis. If a p-value < 0.05 and h = 1 are obtained, the observed data provide strong evidence against the null hypothesis. The results of the Wilcoxon rank-sum test on DTOO compared with the three heuristic approaches are shown in Table 8. The results reveal the statistically significant superiority of DTOO over the three heuristic approaches.
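As a hedged illustration of how such a test can be reproduced, the snippet below applies SciPy's rank-sum test to synthetic per-run objective values; the means and standard deviations are taken from Table 7, but the generated samples themselves are hypothetical.

```python
import numpy as np
from scipy.stats import ranksums

# Synthetic 30-run samples using the means and standard deviations reported in Table 7.
rng = np.random.default_rng(1)
dtoo_runs = rng.normal(59_468.6, 25.3, size=30)
pso_runs = rng.normal(62_773.4, 78.6, size=30)

stat, p_value = ranksums(dtoo_runs, pso_runs)
h = int(p_value < 0.05)        # h = 1 rejects the null hypothesis at the 5% level
print(p_value, h)
```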

5. Conclusions

To solve the CBSOP in a reasonable computation time, a metaheuristic approach combining DTBO with OO was presented. The DTOO approach has three major components: the surrogate model, diversification, and intensification. The RMTC surrogate model quickly assesses a design vector. The DTOO approach employed the ADTBO for diversification and the ROCBA for intensification. The DTOO approach was adopted to simultaneously minimize the average patient LOS and the MWC for the optimal allocation of medical resources in the emergency department. The DTOO approach was verified for four values of N: 10, 20, 30, and 40. All CPU times were under two minutes, which is fast enough for real-time applications. We compared the DTOO approach with three heuristic approaches, PSO, the GA, and the ES, all with exact assessment. The performance values obtained by PSO, the GA, and the ES were 5.54%, 9.71%, and 6.96% larger than those obtained by DTOO, respectively. The standard error of the mean of the average best-so-far performance using DTOO with N = 40 over 30 runs was 4.62. Test results demonstrate that most of the performances of the DTOO approach are near the optimum over the 30 simulation runs. Although the DTOO frequently yields a near optimum in an acceptable time, its limitation is that it does not guarantee a global optimum. The RMTC can be substituted by a simple model with the $L_0$ replications used in the ROCBA to address this limitation. Future research will apply OO theory to more complex probabilistic inequality-constrained problems, such as portfolio optimization in finance and the economy and optimal investment-reinsurance problems.

Author Contributions

S.-C.H. designed and conceived of the experiments; S.-C.H. performed the experiments; S.-S.L. analyzed the data; S.-S.L. contributed reagents and analysis tools; S.-C.H. wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Science and Technology Council in Taiwan, R.O.C., under Grant MOST111-2221-E-324-021.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wu, Z.Y.; Lu, Z.B.; Zhang, B.J.; He, C.; Chen, Q.L.; Yu, H.S.; Ren, J.Z. Stochastic bi-objective optimization for closed wet cooling tower systems based on a simplified analytical model. Energy 2022, 250, 123703. [Google Scholar] [CrossRef]
  2. Rossit, D.G.; Nesmachnow, S.; Toutouh, J.; Luna, F. Scheduling deferrable electric appliances in smart homes: A bi-objective stochastic optimization approach. Math. Biosci. Eng. 2022, 19, 34–65. [Google Scholar] [CrossRef] [PubMed]
  3. Monaci, M.; Pike-Burke, C.; Santini, A. Exact algorithms for the 0–1 time-bomb knapsack problem. Comput. Oper. Res. 2022, 145, 105848. [Google Scholar] [CrossRef]
  4. Doerr, B.; Rajabi, A.; Witt, C. Simulated annealing is a polynomial-time approximation scheme for the minimum spanning tree problem. Algorithmica 2024, 86, 64–89. [Google Scholar] [CrossRef]
  5. Berend, D.; Mamana, S. A probabilistic algorithm for vertex cover. Theor. Comput. Sci. 2024, 983, 114306. [Google Scholar] [CrossRef]
  6. Agharezaei, P.; Sahu, T.; Shock, J.; O’Brien, P.G.; Ghuman, K.K. Designing catalysts via evolutionary-based optimization techniques. Comput. Mater. Sci. 2023, 216, 111833. [Google Scholar] [CrossRef]
  7. Khandelwal, M.K.; Sharma, N. Adaptive and intelligent swarms based algorithm for software cost estimation. J. Mult.-Valued Log. Soft Comput. 2023, 40, 415–432. [Google Scholar]
  8. Zhao, S.J.; Zhang, T.R.; Cai, L.; Yang, R.H. Triangulation topology aggregation optimizer: A novel mathematics-based meta-heuristic algorithm for continuous optimization and engineering applications. Expert Syst. Appl. 2024, 238, 121744. [Google Scholar] [CrossRef]
  9. Qais, M.H.; Hasanien, H.M.; Alghuwainem, S.; Loo, K.H. Propagation search algorithm: A physics-based optimizer for engineering applications. Mathematics 2023, 11, 4224. [Google Scholar] [CrossRef]
  10. Ma, B.; Hu, Y.T.; Lu, P.M.; Liu, Y.G. Running city game optimizer: A game-based metaheuristic optimization algorithm for global optimization. J. Comput. Des. Eng. 2023, 10, 65–107. [Google Scholar] [CrossRef]
  11. Faridmehr, I.; Nehdi, M.L.; Davoudkhani, I.F.; Poolad, A. Mountaineering team-based optimization: A novel human-based metaheuristic algorithm. Mathematics 2023, 11, 1273. [Google Scholar] [CrossRef]
  12. Mishra, A.; Goel, L. Metaheuristic algorithms in smart farming: An analytical survey. IETE Tech. Rev. 2024, 41, 46–65. [Google Scholar] [CrossRef]
  13. Turgut, O.E.; Turgut, M.S.; Kirtepe, E. A systematic review of the emerging metaheuristic algorithms on solving complex optimization problems. Neural Comput. Appl. 2023, 35, 14275–14378. [Google Scholar] [CrossRef]
  14. Seydanlou, P.; Jolai, F.; Tavakkoli-Moghaddam, R.; Fathollahi-Fard, R.A.M. A multi-objective optimization framework for a sustainable closed-loop supply chain network in the olive industry: Hybrid meta-heuristic algorithms. Expert Syst. Appl. 2022, 203, 117566. [Google Scholar] [CrossRef]
  15. Raj, A.; Shetty, S.D.; Rahul, C.S. An efficient indoor localization for smartphone users: Hybrid metaheuristic optimization methodology. Alex. Eng. J. 2024, 87, 63–76. [Google Scholar] [CrossRef]
  16. Suwannarongsri, S. A novel hybrid metaheuristic optimization search technique: Modern metaheuristic algorithm for function minimization. Int. J. Innov. Comput. Inf. Control. 2023, 19, 1629–1645. [Google Scholar]
  17. Ho, Y.C.; Zhao, Q.C.; Jia, Q.S. Ordinal Optimization: Soft Optimization for Hard Problems; Springer: New York, NY, USA, 2007. [Google Scholar]
  18. Horng, S.C.; Lin, S.S. Improved beluga whale optimization for solving the simulation optimization problems with stochastic constraints. Mathematics 2023, 11, 1854. [Google Scholar] [CrossRef]
  19. Horng, S.C.; Lin, S.S. Incorporate seagull optimization into ordinal optimization for solving the constrained binary simulation optimization problems. J. Supercomput. 2023, 79, 5730–5758. [Google Scholar] [CrossRef]
  20. Horng, S.C.; Lin, S.S. Advanced golden jackal optimization for solving the constrained integer stochastic optimization problems. Math. Comput. Simul. 2024, 217, 188–201. [Google Scholar] [CrossRef]
  21. Dehghani, M.; Trojovská, E.; Trojovsky, P. A new human-based metaheuristic algorithm for solving optimization problems on the base of simulation of driving training process. Sci. Rep. 2022, 12, 9924. [Google Scholar] [CrossRef]
  22. Hwang, J.T.; Martins, J.R.R.A. A fast-prediction surrogate model for large datasets. Aerosp. Sci. Technol. 2018, 75, 74–87. [Google Scholar] [CrossRef]
  23. Parimanam, K.; Lakshmanan, L.; Palaniswamy, T. Hybrid optimization based learning technique for multi-disease analytics from healthcare big data using optimal pre-processing, clustering and classifier. Concurr. Comput.-Pract. Exp. 2022, 34, e6986. [Google Scholar] [CrossRef]
  24. Ala, A.; Simic, V.; Pamucar, D.; Bacanin, N. Enhancing patient information performance in internet of things-based smart healthcare system: Hybrid artificial intelligence and optimization approaches. Eng. Appl. Artif. Intell. 2024, 131, 107889. [Google Scholar] [CrossRef]
  25. Anand, A.; Singh, A.K. Hybrid nature-inspired optimization and encryption-based watermarking for E-healthcare. IEEE Trans. Comput. Soc. Syst. 2023, 10, 2033–2040. [Google Scholar] [CrossRef]
  26. Kumari, B.; Ahmad, I. Penalty function method for a variational inequality on Hadamard manifolds. Opsearch 2023, 60, 527–538. [Google Scholar] [CrossRef]
  27. Tran, N.K.; Kühle, L.C.; Klau, G.W. A critical review of multi-output support vector regression. Pattern Recognit. Lett. 2024, 178, 69–75. [Google Scholar] [CrossRef]
  28. Lee, D.; Chang, S.; Lee, J. Generalized polynomial chaos expansion by reanalysis using static condensation based on substructuring. Appl. Math. Mech.-Engl. Ed. 2024, 45, 819–836. [Google Scholar] [CrossRef]
  29. Balaban, M. Review of DACE-kriging surrogate model. Interdiscip. Descr. Complex Syst. 2023, 21, 316–323. [Google Scholar] [CrossRef]
  30. Genç, M. An enhanced extreme learning machine based on square-root lasso method. Neural Process. Lett. 2024, 56, 5. [Google Scholar] [CrossRef]
  31. Rehman, H.; Sarwar, A.; Tariq, M.; Bakhsh, F.I.; Ahmad, S.; Mahmoud, H.A.; Aziz, A. Driving training-based optimization (DTBO) for global maximum power point tracking for a photovoltaic system under partial shading condition. IET Renew. Power Gener. 2023, 17, 2542–2562. [Google Scholar] [CrossRef]
  32. Prasad, V.; Selvan, G.S.R.E.; Ramkumar, M.P. ADTBO: Aquila driving training-based optimization with deep learning for skin cancer detection. Imaging Sci. J. 2023, 1–19. [Google Scholar] [CrossRef]
  33. Zhang, G.Q.; Daraz, A.; Khan, I.A.; Basit, A.; Khan, M.I.; Ullah, M. Driver training based optimized fractional order pi-pdf controller for frequency stabilization of diverse hybrid power system. Fractal Fract. 2023, 7, 315. [Google Scholar] [CrossRef]
  34. Ni, L.; Ping, Y.; Li, Y.Y.; Zhang, L.Q.; Wang, G. A fractional-order modelling and parameter identification method via improved driving training-based optimization for piezoelectric nonlinear system. Sens. Actuators A-Phys. 2024, 366, 114973. [Google Scholar] [CrossRef]
  35. Chen, C.H.; Lee, L.H. Stochastic Simulation Optimization: An Optimal Computing Budget Allocation; World Scientific: Hackensack, NJ, USA, 2010. [Google Scholar]
  36. Feng, Y.Y.; Wu, I.C.; Chen, T.L. Stochastic resource allocation in emergency departments with a multi-objective simulation optimization algorithm. Health Care Manag. Sci. 2017, 20, 55–75. [Google Scholar] [CrossRef] [PubMed]
  37. Ryan, T.P. Sample Size Determination and Power; John Wiley and Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  38. Lu, F.Q.; Huang, M.; Ching, W.K.; Siu, T.K. Credit portfolio management using two-level particle swarm optimization. Inf. Sci. 2013, 237, 162–175. [Google Scholar] [CrossRef]
  39. Bi, H.L.; Lu, F.Q.; Duan, S.P.; Huang, M.; Zhu, J.W.; Liu, M.Y. Two-level principal-agent model for schedule risk control of IT outsourcing project based on genetic algorithm. Eng. Appl. Artif. Intell. 2020, 21, 103584. [Google Scholar] [CrossRef]
  40. Zhang, K.; Xu, Z.W.; Yen, G.G.; Zhang, L. Two-stage multiobjective evolution strategy for constrained multiobjective optimization. IEEE Trans. Evol. Comput. 2024, 28, 17–31. [Google Scholar] [CrossRef]
  41. Derrac, J.; Garcı, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
Figure 1. Framework of an RMTC.
Figure 2. Flowchart of the proposed DTOO approach.
Figure 3. Patient flow in the emergency department.
Figure 4. Variations in ND and PI over iterations.
Figure 5. Progression of the average best-so-far objective value and error bars resulting from 30 simulation runs.
Figure 6. Progression of the average best-so-far objective value and error bars of the DTOO approach for N = 40.
Table 1. Medical resources.

Notation | Number of Medical Resources
$x_1$ | Physicians in critical care area
$x_2$ | Physicians in treatment area
$x_3$ | Physicians in fever area
$x_4$ | Nurses in reception and examination
$x_5$ | Nurses in critical care area
$x_6$ | Nurses in treatment area
$x_7$ | Nurses in fever area
$x_8$ | X-ray machines
$x_9$ | Computed tomography machines
$x_{10}$ | Laboratory technicians
$x_{11}$ | Emergency department beds
Table 2. The limits of each resource.

Medical Resource | Minimum | Maximum
Physicians | 1 | 10
Nurses | 1 | 20
X-ray machines | 1 | 4
Computed tomography machines | 1 | 4
Laboratory technicians | 1 | 10
Emergency department beds | 1 | 300
Table 3. The costs of each medical resource.

Medical Resource | Cost (NTD)
Physicians in critical care area | $5 \times 10^4$
Physicians in treatment area | $5 \times 10^4$
Physicians in fever area | $5 \times 10^4$
Nurses in reception and examination | $10^4$
Nurses in critical care area | $10^4$
Nurses in treatment area | $10^4$
Nurses in fever area | $10^4$
X-ray machines | $4 \times 10^3$
Computed tomography machines | $9 \times 10^4$
Laboratory technicians | $1.6 \times 10^4$
Emergency department beds | $5 \times 10^3$
Table 4. Parameters of each probability distribution.

Activity | Process/Service Times | Units
Patient arrivals | Exponential (5.07) | h
Emergency department bed arrivals | Exponential ($4 \times 10^4$) | s
Reception and examination services | Gamma (7.79, 23.62) | s
Critical area physician services | Weibull (2.42, 8.01) | min
Treatment area physician services | Weibull (1.64, 128.78) | s
Fever area physician services | Normal (5.8, 3.6) | min
Nurse services | Exponential (5) | min
X-ray tests | Lognormal (1.13, 0.73) | min
Computed tomography tests | Lognormal (2.5, 0.7) | min
Laboratory tests | Normal (1.5, 0.3) | min
Report | Lognormal (1.61, 0.47) | h
Table 5. The extraordinary design $\mathbf{x}$, $E[\mathrm{LOS}(\mathbf{x})]$, $E[\mathrm{MWC}(\mathbf{x})]$, and the CPU times in the practical example.

N | $\mathbf{x}$ | $E[\mathrm{LOS}(\mathbf{x})]$ | $E[\mathrm{MWC}(\mathbf{x})]$ | CPU Times (s)
40 | [2, 3, 3, 5, 6, 3, 3, 3, 4, 6, 243]$^T$ | 11,508.83 | 107,403.6 | 116.72
30 | [1, 2, 5, 1, 5, 8, 5, 3, 2, 2, 106]$^T$ | 9912.68 | 354,873.7 | 113.58
20 | [1, 3, 4, 1, 5, 7, 6, 2, 2, 2, 208]$^T$ | 9561.35 | 386,118.4 | 108.32
10 | [2, 3, 4, 3, 3, 6, 6, 4, 3, 7, 221]$^T$ | 8291.25 | 1,173,482.3 | 101.13
Table 6. Comparison of the performances for 30 simulation runs.

Methods | AP | (AP − §)/§ × 100%
DTOO with N = 40 * | 59,456.2 | 0
PSO with exact assessment | 62,750.3 | 5.54%
GA with exact assessment | 65,229.5 | 9.71%
ES with exact assessment | 63,594.7 | 6.96%

AP: average performance; §, *: average performance of DTOO.
Table 7. Statistics of four approaches over 30 simulation runs.

Methods | Min. | Max. | Mean | S.D. | S.E.M. | Average Ranking Percentage
DTOO with N = 40 | 59,390.1 | 59,552.8 | 59,468.6 | 25.3 | 4.62 | 0.003%
PSO with exact assessment | 62,479.2 | 63,015.1 | 62,773.4 | 78.6 | 14.35 | 0.615%
GA with exact assessment | 64,884.6 | 65,642.2 | 65,284.8 | 107.5 | 19.63 | 0.925%
ES with exact assessment | 63,351.1 | 63,858.9 | 63,612.2 | 84.8 | 15.48 | 0.737%
Table 8. Test results of the Wilcoxon rank-sum test.

Value | DTOO vs. PSO | DTOO vs. GA | DTOO vs. ES
p-value | 0.00251 | $7.25 \times 10^{-5}$ | $3.12 \times 10^{-4}$
h-value | 1 | 1 | 1