Article

Research on the Optimization of Uncertain Multi-Stage Production Integrated Decisions Based on an Improved Grey Wolf Optimizer

1 School of Computer and Information Engineering, Institute for Artificial Intelligence, Shanghai Polytechnic University, Shanghai 201209, China
2 School of Computer and Information Engineering, Shanghai Polytechnic University, Shanghai 201209, China
3 School of Computer Science, University of Liverpool, Liverpool L69 3DR, UK
4 College of Materials and Energy, South China Agricultural University, Guangzhou 510642, China
* Authors to whom correspondence should be addressed.
Biomimetics 2025, 10(11), 775; https://doi.org/10.3390/biomimetics10110775
Submission received: 27 October 2025 / Revised: 10 November 2025 / Accepted: 11 November 2025 / Published: 15 November 2025
(This article belongs to the Section Biological Optimisation and Management)

Abstract

Defect-rate uncertainty creates cascading operational challenges in multi-stage production, often driving inefficiency and misallocation of labor, materials, and capacity. To confront this, we develop a Multi-stage Production Integrated Optimization (MsPIO) framework that unifies quality inspection and shop-floor decision-making within a single computational model. The framework couples a two-stage sampling inspection policy—used to statistically estimate and control defect-rate uncertainty via estimation and rejection rules—with a multi-process, multi-part production decision model. Optimization is carried out with an Improved Grey Wolf Optimizer (IGWO) enhanced with Latin hypercube sampling (LHS) for uniformly diverse initialization; an evolutionary-factor mechanism that applies simulated binary crossover (SBX) among three leadership-guided parents (Alpha, Beta, Delta) to strengthen global exploration in early iterations and focus exploitation later; and a greedy, mutation-assisted opposition learning step applied to the lowest-performing quartile of the population, which performs leader-informed local refinement and accepts only fitness-improving moves. Experiments show that the method identifies minimum-cost policies across six single-stage benchmark cases and yields a total profit of 43,800 units in a representative multi-stage scenario, demonstrating strong performance under uncertainty. Sensitivity analysis further clarifies how the recommended decisions adapt to shifts in estimated defect rates, finished-product prices, and swap/changeover losses. These results highlight how bio-inspired intelligence can enable adaptive, efficient, and resilient integrated production management at scale.

1. Introduction

Since its inception, the Industrial Revolution has epitomized an unprecedented confluence of science and technology within the industrial domain, radically reshaping the modes of human existence, labor, and social engagement [1]. With the dawn of the Fourth Industrial Revolution, conventional industries are confronting disruptive transformations driven by a wave of technological innovations. In response, a growing number of enterprises have turned to artificial intelligence (AI) to develop adaptive industrial systems. Through systematic intelligent enhancements, AI furnishes more sophisticated and efficient solutions to challenges across supply chains, manufacturing processes, and decision-making frameworks [2,3].
Nonetheless, traditional industrial production continues to be plagued by persistent concerns regarding process quality and exorbitant inspection costs, compelling firms to pursue heightened operational efficiency, cost reduction, and unwavering quality assurance. Missbauer H. has emphasized the transformative potential of information and computational technologies in unlocking novel applications within production environments [4]. Similarly, Tseng M.L. has drawn attention to the inefficiencies and environmental drawbacks of obsolete automation systems, underscoring the imperative for modernization [5]. Expanding on these observations, Khan S.A.R. has advocated for the optimization of production workflows as a viable strategy to mitigate these issues [6].
Consequently, the reconciliation of process reliability with economic efficiency has emerged as a paramount research imperative in industrial studies [7]. While extant literature frequently emphasizes the optimization of discrete manufacturing stages, Alvandi et al. contend that an integrated modeling approach—one that holistically unifies optimization throughout the entire decision-making continuum—can concurrently elevate manufacturing performance and curtail operational expenditures [8].
This study establishes a Multi-stage Production Integrated Optimization (MsPIO) model to tackle complex production planning and control challenges under uncertainty. By leveraging artificial intelligence and systems engineering principles, the proposed approach facilitates the development of integrated solutions that enhance resource efficiency through synchronized cross-process coordination. The principal contributions of this work are threefold:
  • Development of an Integrated Multi-stage Production Optimization Model: The MsPIO model systematically unifies a two-stage sampling inspection mechanism—designed for defect-rate estimation and criterion-based decision-making—with multi-process, multi-component production planning. This integrated framework provides a coherent structure for addressing uncertainty and variability in complex production systems, aligning with systemic approaches to operational decision-making.
  • Design of an Enhanced Metaheuristic Solution Strategy: An Improved Grey Wolf Optimizer (IGWO) is introduced, incorporating Latin hypercube sampling to ensure uniform initialization of the population. The algorithm further integrates an evolutionary factor mechanism based on simulated binary crossover (SBX) and three leadership-guided parents (Alpha, Beta, Delta) to strengthen global exploration. A greedy mutation-based opposition learning strategy is applied to the lowest-performing quarter of the population, enabling effective local refinement and accelerating convergence toward high-quality solutions.
  • Comprehensive Experimental Validation and Sensitivity Analysis: Extensive experiments validate the model’s effectiveness and robustness under both cost-minimization and profit-maximization objectives. Results demonstrate that the proposed IGWO-based method not only identifies cost-optimal strategies across multiple single-stage production configurations but also achieves a total profit of 43,800 in a multi-stage production scenario. Through systematic sensitivity analysis, the study elucidates how key parameters—such as estimated defect rates (modulated by confidence levels), finished product price fluctuations, and replacement losses—influence optimal decisions. These insights offer valuable guidance for intelligent production management in environments shaped by mass customization and operational uncertainty.
The remainder of this paper is structured as follows. Section 2 reviews related work and the background of the cited models. Section 3 describes our designed sampling inspection mechanism and the multi-stage decision-making model for multiple processes and multiple parts based on the Improved Grey Wolf Optimizer (IGWO), which leverages defect-rate estimation to optimize multi-stage production decisions. Section 4 presents the experimental results, including algorithmic convergence analysis, production optimization performance, comparisons of optimal decision schemes, and sensitivity analysis. Section 5 concludes the paper.

2. Related Work

With the advent of Industry 4.0, the manufacturing sector is steadily transforming into smart production, leveraging AI, big-data analytics, and cloud computing to manage and optimize every stage of the production lifecycle—thereby boosting production efficiency and precision while reducing operating costs [9,10]. Recent studies show that large, well-capitalized firms deploying these advanced technologies consistently outperform small- and medium-sized enterprises and achieve marked improvements in production-flow efficiency [11,12]. Moreover, smart-optimization techniques have proven both effective and widely applicable across diverse industries [13,14,15], underscoring the necessity of optimizing production processes.
When defect-rate uncertainty arises, it must first be quantified via estimation [16]. Most scholars model the basic conditions with a binomial distribution and approximate its probabilities by simple random sampling—an approach prized for its ease of implementation [17]—though recent work has produced refined binomial sampling variants [18,19]. To better capture real-world complexity and avoid sampling bias, hypergeometric sampling (i.e., sampling without replacement) can augment the binomial approach and yield estimates closer to the true defect rate [20,21]. Some researchers have even applied this method to estimate sharding-failure probabilities in blockchain systems [22]; however, this method can only estimate the probability, whereas uncertainty calls for a confidence-interval framework. The Clopper–Pearson method provides exactly such an efficient, reasonable and lightweight interval estimate, defining an uncertainty band around the defect-rate estimate according to a prescribed confidence level—without imposing undue time or computational burdens [23,24]. Together, these techniques transform an uncertain defect rate into a confidence interval that can be fed into an optimization model for robust decision-making.
Beyond front-end inspection methods, the downstream “decision” phase determines an enterprise’s ultimate production cost. Schärer emphasizes that optimizing production decision-making in enterprises must account for process-related costs and requires more robust algorithms to reduce expenditures [25]. This calls for optimization approaches that consider global solutions, a task for which metaheuristic algorithms are particularly well-suited [26]. Recent advancements have further expanded the toolbox for tackling such challenges. For instance, the Schrödinger optimizer introduces a novel quantum duality-driven mechanism, demonstrating strong performance in stochastic optimization and complex engineering problems [27]. Similarly, significant improvements in Particle Swarm Optimization (PSO) have been achieved through the integration of quadratic interpolation and a new local search approach, enhancing its precision in parameter estimation tasks [28]. Enhancements to other popular algorithms, such as the Moth Flame Optimizer incorporating local escape operators [29] and gradient-based methods refined with quasi-Newton rules and new local search techniques [30], also exemplify the ongoing trend of hybridizing and refining metaheuristics to balance global exploration with local exploitation. Silva, Tao, and colleagues have applied such metaheuristic algorithms to production decision-making to address cost-related challenges. Their findings indicate that these methods enable effective evaluation and optimization of overall costs, ensuring reductions across various production stages and leading to minimized total cost [31,32]. In contrast, conventional optimization techniques often suffer from premature convergence to local optima. Therefore, appropriate algorithmic enhancements are necessary to improve solution quality. For instance, Liu et al. improved the Crested Porcupine Optimization algorithm, achieving notable results in optimizing delivery routes for unmanned aerial vehicles by reducing angular variations and path length, thereby enhancing distribution efficiency and lowering operational costs [33]. Similarly, Zhang et al. refined the Ivy algorithm by integrating principles from particle swarm optimization, which improved its performance in handling numerous hyperparameters in neural networks. This enhancement allowed for balanced global exploration and precise parameter tuning, ultimately increasing predictive accuracy [34]. Additionally, Qiu and colleagues advanced the grey wolf optimization algorithm to address complex numerical global optimization problems in engineering design, providing new impetus for production optimization [35]. Extending GWO to other infrastructure settings, Sujono and Musafa hybridized Grey Wolf Optimization with the Whale Optimization Algorithm for load-shedding in isolated distribution networks, illustrating how cross-metaheuristic design can improve decision robustness under stringent operational constraints [36]. Likewise, Fauzan, Munadi, Sumaryo, and Nuha proposed an enhanced GWO for transmission-power optimization in wireless sensor networks, underscoring that careful operator redesign and parameter control can yield more energy-efficient configurations without sacrificing network performance [37]. Adopting a similar improvement-oriented approach, Musshoff et al. optimized production processes by identifying optimal decisions that significantly cut costs, thereby supporting sustainable enterprise operations through enhanced operational efficiency [38]. After that, Pan et al. developed a four-step decision optimization model and an improved grey wolf optimization algorithm, the Decision Grey Wolf Optimization Algorithm (DGWO) [39]. DGWO solves for the globally optimal decision, markedly improves the global optimization performance of decision-making, and ranks among the top performers across multiple decision problems, demonstrating its broad applicability to decision optimization.
To resolve our research problems, we propose a Multi-stage Production Integrated Optimization (MsPIO) model grounded in statistical process control. First, a defect-rate prediction model is built using hypergeometric distribution point estimation and Clopper–Pearson exact confidence intervals, enabling dynamic assessment of process quality equilibrium through sampling data. Building on this, a coordinated decision-making framework for multiple processes and parts is constructed (visually summarized in Figure 1). In Figure 1, red text denotes unqualified/defective states or products (e.g., “Unqualified,” “Unqualified semi-finished products,” “Unqualified finished products”); green text denotes qualified/conforming states or products allowed to proceed to the next stage; yellow text denotes the ideal final outcome, i.e., successful sales and satisfied customers (“Perfect sales”); and grey text denotes auxiliary processes or annotations (e.g., “Disassemble,” “Exchange”) that describe supporting actions rather than main product states. Blue lines trace the main forward flow of products through manufacturing, inspection, and sales, while grey dashed lines trace feedback and rework paths, such as disassembly, reuse of parts, or customer returns/exchanges, which are secondary to the main flow. After that, IGWO establishes a synchronization mechanism for process parameters, achieving globally optimal decisions that maximize manufacturing-system symmetry. Finally, global sensitivity analysis reveals how key parameters—confidence thresholds, product pricing, and process-change costs—disrupt or preserve systemic balance through controlled perturbation experiments.

3. Methodologies

Before presenting the methodology, we define our symbols and their descriptions in Table 1.
This study focuses on sampling inspection and robust decision optimization in the multi-stage production of electronic products and proposes the MsPIO model. The model simulates the sampling process to estimate defect rates at each process stage and embeds these estimates into a Grey Wolf Optimizer–driven decision-making model for multiple processes and multiple parts, thereby enabling stochastic adjustment and optimization of assembly, inspection, and disassembly strategies.

3.1. Model Assumptions

The model assumptions are central to the experimental process; the following assumptions reflect the practical constraints of multi-stage optimization:
  • It is assumed that the qualification of each sample is independent.
  • It is assumed that the enterprise’s quality inspection system is accurate and error-free.
  • It is assumed that all finished products entering the market will be successfully sold.

3.2. Two-Stage Sampling Inspection Model

Inspired by the “quick reject, strict accept” strategy, we establish a two-stage sampling model to more effectively control both the Type I error $\alpha$ and the Type II error $\beta$. In the early stage, obviously defective batches are rapidly rejected; in the later stage, compliant batches are strictly accepted. This approach minimizes the sample size while maintaining control over both error types. To address the uncertainty of defect rates in practice, we further incorporate binomial point estimates [32] and Clopper–Pearson confidence intervals [39] to perform confidence inference on the overall defect rate, thereby enhancing decision robustness and interpretability.

3.2.1. Simulation of Product Sequence

Let the total number of items in a batch be N, of which Q are actually defective, yielding a true defect rate of $p = Q/N$. For quality control, a nominal defect rate $p_0$ is set, and a threshold $Q_0 = N p_0$ on the number of defective products is introduced as the worst case. If $Q > Q_0$, the batch is considered defective.
In practice, full inspection of the entire batch is infeasible, so we perform sampling inspection. From the N items, we randomly draw n samples without replacement for inspection. Let X denote the number of defectives observed in the sample. Because sampling is without replacement and each item’s quality status is predetermined, X follows the hypergeometric distribution with parameters N, Q, and n, denoted as:
$X \sim \mathrm{Hypergeometric}(N, Q, n),$
The probability mass function is:
$P(X = k) = \dfrac{\binom{Q}{k}\binom{N-Q}{n-k}}{\binom{N}{n}}, \qquad \max(0,\, n - N + Q) \le k \le \min(n, Q).$
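The PMF above can be evaluated directly with binomial coefficients. A minimal sketch: the batch size N = 500 and sample size n = 22 follow Section 3.2.4, while Q = 50 defectives (a 10% true rate) is an illustrative assumption.

```python
from math import comb

def hypergeom_pmf(N, Q, n, k):
    """P(X = k) when drawing n items without replacement from a
    batch of N items that contains Q defectives."""
    if k < max(0, n - (N - Q)) or k > min(n, Q):
        return 0.0
    return comb(Q, k) * comb(N - Q, n - k) / comb(N, n)

# N = 500 and n = 22 as in Section 3.2.4; Q = 50 is assumed.
probs = [hypergeom_pmf(500, 50, 22, k) for k in range(23)]
```

Because the support condition is checked explicitly, the probabilities over k = 0, …, n sum to one.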

3.2.2. Stage One: Quick Rejection Test

The model design introduces a “quick rejection test” mechanism in the first stage. The primary goal of this stage is to make a swift decision to reject obviously nonconforming lots, thereby avoiding resource wastage in subsequent stages. Let the initial sample size be $n_1$ and the rejection threshold be q. The decision process is as follows:
1. If the number of defectives in the first-stage sample satisfies $X_1 > q$, the lot is immediately rejected.
2. If $X_1 \le q$, the lot moves to the second stage for further inspection.
To ensure that the rejection decision in the first stage is statistically robust, the probability of a Type I error must be strictly controlled. The control condition for the Type I error is:
$P(X_1 > q \mid p = p_0) \le \alpha.$
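This constraint can pin down the stage-one threshold: for a fixed first-stage sample size, take the smallest q whose rejection probability under p = p₀ stays below α. In the sketch below, N = 500 and p₀ = 0.10 follow the text, while n₁ = 12 and α = 0.05 are illustrative assumptions.

```python
from math import comb

def hypergeom_sf(N, Q, n, q):
    """P(X > q) for the hypergeometric distribution."""
    lo, hi = max(0, n - (N - Q)), min(n, Q)
    total = sum(comb(Q, k) * comb(N - Q, n - k)
                for k in range(max(q + 1, lo), hi + 1))
    return total / comb(N, n)

def smallest_rejection_threshold(N, p0, n1, alpha):
    """Smallest q with P(X1 > q | p = p0) <= alpha, so that
    conforming lots (Q <= Q0 = N*p0) are rarely rejected."""
    Q0 = round(N * p0)
    for q in range(n1 + 1):
        if hypergeom_sf(N, Q0, n1, q) <= alpha:
            return q
    return n1

q = smallest_rejection_threshold(N=500, p0=0.10, n1=12, alpha=0.05)
```

Any q below the returned value would reject a worst-case conforming lot too often; any larger q only weakens the quick-reject filter.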

3.2.3. Stage Two: Acceptance Test

If a lot is not rejected in the first stage, it moves to the second stage for a more stringent acceptance test. The $n_1$ samples already drawn remain unchanged, and $n_2$ additional samples are drawn from the remaining $N - n_1$ products, giving a total sample size of $n_z = n_1 + n_2$. The observation for the second stage is the number of defective products $X_2$, and the final total number of defectives is $X_z = X_1 + X_2$. The decision process is as follows:
1. If $X_z \le c$, the lot is accepted.
2. If $X_z > c$, the lot is rejected.
To control the risk of a Type II error, the following condition must be satisfied:
$P(X_z \le c \mid p = p_1) \le \beta,$
where $p_1$ is the unacceptable defective rate.
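The two error constraints can be checked together for a candidate plan (n₁, q, n₂, c). The sketch below uses the binomial approximation justified in Section 3.2.4 and, for simplicity, evaluates the stage-two condition unconditionally rather than conditioning on the lot surviving stage one; all numeric values are illustrative assumptions.

```python
from math import comb

def binom_cdf(n, p, c):
    """P(X <= c) for Binomial(n, p), the standard approximation to
    the hypergeometric when the sampling fraction n/N is small."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def plan_ok(n1, q, n2, c, p0, p1, alpha, beta):
    """Check both error constraints of a two-stage plan.
    Stage one:  P(X1 > q  | p = p0) <= alpha.
    Stage two:  P(Xz <= c | p = p1) <= beta, with nz = n1 + n2.
    The stage-two check is a conservative, unconditional version."""
    type1_ok = 1 - binom_cdf(n1, p0, q) <= alpha
    type2_ok = binom_cdf(n1 + n2, p1, c) <= beta
    return type1_ok and type2_ok

# Illustrative plan: n1 = 12, q = 3, n2 = 10, c = 2,
# p0 = 0.10, p1 = 0.30, alpha = 0.05, beta = 0.10.
ok = plan_ok(12, 3, 10, 2, 0.10, 0.30, 0.05, 0.10)
```

Loosening the acceptance number c while holding everything else fixed eventually violates the Type II constraint, which is the “strict accept” half of the design.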

3.2.4. Estimation and Confidence Prediction Under Uncertain Defective Rate

Due to the inherent randomness of the sampling process and the uncertainty of the defective rate itself, relying solely on point estimates can lead to misjudgment. Therefore, a more robust confidence inference method is introduced to assess the uncertainty range of the defective rate.
After sampling, we obtain $n_z$ and $X_z$, and the natural point estimate of the defective rate is:
$\hat{p} = X_z / n_z,$
According to Cochran’s classical sampling theory, the binomial approximation to the hypergeometric distribution is generally considered to perform well when the sample size n is less than 5% of the population size N [40]. In this study, N is 500, n is 22, and the sampling ratio is 4.4%, so this condition is met and the hypergeometric distribution can be well approximated by the binomial distribution. Under this binomial assumption, the Clopper–Pearson method is used to construct a confidence interval for the true defective rate p, which is suitable for small-sample scenarios. Let the confidence level be $1 - \gamma$; then the confidence interval $[p_L, p_U]$ for the true defective rate p is given by the following formulas:
  • Lower confidence bound $p_L$:
$p_L = \mathrm{BetaInv}\left(\gamma/2,\; X_z,\; n_z - X_z + 1\right),$
  • Upper confidence bound $p_U$:
$p_U = \mathrm{BetaInv}\left(1 - \gamma/2,\; X_z + 1,\; n_z - X_z\right),$
where $\mathrm{BetaInv}(a, b, c)$ denotes the a-quantile of the Beta distribution with shape parameters b and c.
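The BetaInv bounds are mathematically equivalent to solving binomial tail equations for p, which allows a standard-library-only sketch via bisection; the final call (X_z = 2, n_z = 22, γ = 0.05) uses illustrative values.

```python
from math import comb

def binom_cdf(n, p, k):
    """P(X <= k) for Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x, n, gamma):
    """Exact (1 - gamma) Clopper-Pearson interval for a binomial
    proportion.  Equivalent to the BetaInv formulas in the text:
    each bound solves a binomial tail equation, located here by
    bisection so only the standard library is needed."""
    def boundary(pred):
        lo, hi = 0.0, 1.0
        for _ in range(100):           # bisect until the bracket is tiny
            mid = (lo + hi) / 2
            if pred(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # p_L: the p at which P(X >= x | p) rises to gamma/2.
    p_l = 0.0 if x == 0 else boundary(lambda p: 1 - binom_cdf(n, p, x - 1) <= gamma / 2)
    # p_U: the p at which P(X <= x | p) falls to gamma/2.
    p_u = 1.0 if x == n else boundary(lambda p: binom_cdf(n, p, x) >= gamma / 2)
    return p_l, p_u

# n_z = 22 draws with X_z = 2 defectives, 95% confidence (gamma = 0.05):
lo, hi = clopper_pearson(2, 22, 0.05)
```

The returned bounds satisfy the defining tail equations, which is exactly what the Beta-quantile formulation encodes.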

3.3. Decision-Making Model for Multiple Processes and Multiple Parts

3.3.1. Decision Variables

Let $D_k$ ($k = 1, \dots, 5$) be binary variables indicating whether to inspect spare parts, finished products, or semi-finished products, and whether to disassemble defective finished or semi-finished products; their detailed meanings are given below.

3.3.2. Objective Function

  • Spare Parts Inspection Stage
In this stage, we must decide whether to inspect each spare part. If inspection is carried out, all nonconforming spare parts are discarded immediately; if inspection is skipped, uninspected spare parts—potentially including defective ones—proceed directly to assembly and may adversely affect final product quality.
Introduce a decision variable $D_1$, representing whether to inspect the spare parts:
$D_1 = \begin{cases} 0, & \text{when the components are not inspected} \\ 1, & \text{when the components are inspected,} \end{cases}$
When $D_1 = 0$, spare parts are not inspected and are directly assembled. In this case, the cost consists only of the purchasing cost, namely:
$C_{lg} = \sum_i Q_i u_i,$
where $C_{lg}$ denotes the purchasing cost of the spare parts, $Q_i$ represents the quantity of the i-th type of spare part, and $u_i$ indicates the unit purchase price of the i-th spare part.
When $D_1 = 1$, additional inspection costs are incurred. Defective spare parts are discarded, and only qualified spare parts are assembled. In this case, the cost becomes:
$C_{lg} + C_{lj} = \sum_i Q_i (u_i + d_i),$
where $C_{lj}$ represents the inspection cost of the spare parts, and $d_i$ denotes the inspection cost of the i-th type of spare part.
To summarize, the total cost at this stage includes both the purchasing cost and the inspection cost of the spare parts:
$C_{lg} + C_{lj} = \sum_i Q_i (u_i + D_1 d_i).$
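A minimal sketch of this stage cost, summing over part types i; the quantities and prices are illustrative assumptions, not the paper's data.

```python
def spare_parts_cost(parts, d1):
    """C_lg + C_lj = sum_i Q_i * (u_i + D_1 * d_i).
    `parts` is a list of (Q_i, u_i, d_i) tuples; `d1` is the
    binary inspection decision D_1."""
    return sum(q * (u + d1 * d) for q, u, d in parts)

# Two part types (quantities and unit prices are assumed values):
parts = [(100, 4, 2), (100, 18, 3)]
cost_no_inspect = spare_parts_cost(parts, 0)  # purchase only
cost_inspect = spare_parts_cost(parts, 1)     # purchase + inspection
```

Setting D₁ = 1 adds exactly the per-part inspection fees on top of the purchasing cost, matching the summary formula above.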
  • Finished Product Inspection Stage
In this stage, it is necessary to assess whether the finished products should undergo quality inspection. During the inspection process, defective products will enter the disassembly process, while qualified products and uninspected finished products, including potential defects, will directly enter the market for distribution.
Introduce a decision variable $D_2$, representing whether to inspect the finished products:
$D_2 = \begin{cases} 0, & \text{when the finished product is not inspected} \\ 1, & \text{when the finished product is inspected,} \end{cases}$
When $D_2 = 0$, the finished product is assembled and enters the market directly without inspection. The cost consists of the assembly cost and the sales revenue, namely:
$C_{cz} + C_{cs} = Q_c (a_c - S_c),$
where $C_{cz}$ represents the assembly cost of the finished product, $C_{cs}$ denotes the sales revenue of the finished product (entering the objective with a negative sign), $Q_c$ represents the quantity of finished products, $a_c$ represents the assembly cost per finished product, and $S_c$ represents the market price of a finished product.
When $D_2 = 1$, in addition to the assembly cost, defective finished products incur only the inspection cost before entering the disassembly stage, while qualified finished products enter the market for sale and generate revenue. In this case, the cost consists of the assembly cost, inspection cost, and sales revenue, namely:
$C_{cz} + C_{cj} + C_{cs} = Q_c a_c + Q_c d_c - Q_c (1 - P_c) S_c,$
where $C_{cj}$ represents the inspection cost of the finished product, $d_c$ denotes the inspection cost per finished product, and $P_c$ represents the defect rate of the finished products.
To summarize, the total cost at this stage consists of the assembly cost, inspection cost, and sales revenue of the finished product, namely:
$C_{cz} + C_{cj} + C_{cs} = Q_c (a_c - S_c) + D_2\, Q_c (d_c + P_c S_c).$
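Both branches of the finished-product stage collapse into the single summary expression above. A small sketch verifying the algebra; all parameter values are illustrative assumptions.

```python
def finished_stage_cost(q_c, a_c, d_c, s_c, p_c, d2):
    """Stage total C_cz + C_cj + C_cs, with sales revenue entering
    negatively.  The two D2 branches collapse into one expression:
      D2 = 0:  Q_c * (a_c - S_c)
      D2 = 1:  Q_c*a_c + Q_c*d_c - Q_c*(1 - P_c)*S_c
    """
    return q_c * (a_c - s_c) + d2 * q_c * (d_c + p_c * s_c)

# Assumed numbers: 100 units, assembly 6, inspection 3, price 56,
# defect rate 10%.
no_inspect = finished_stage_cost(100, 6, 3, 56, 0.1, 0)
inspect = finished_stage_cost(100, 6, 3, 56, 0.1, 1)
```

With D₂ = 1, the stage cost rises by the inspection fee plus the revenue forgone on detected defectives, exactly the $D_2 Q_c (d_c + P_c S_c)$ term.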
  • Defective Finished Product Disassembly Stage
Introduce a decision variable $D_3$, representing whether to disassemble defective finished products:
$D_3 = \begin{cases} 0, & \text{when the defective finished products are not disassembled} \\ 1, & \text{when the defective finished products are disassembled,} \end{cases}$
When $D_3 = 0$, the defective finished products are directly discarded without disassembly, incurring no additional cost.
When $D_3 = 1$, disassembly is required, and the disassembled spare parts re-enter the spare-part inspection stage. In this case, the cost incurred is solely the disassembly cost, namely:
$C_{cc} = Q_c P_c l_c,$
where $C_{cc}$ represents the disassembly cost of the defective finished products, and $l_c$ denotes the disassembly cost per defective finished product.
To summarize, the total cost at this stage consists of the disassembly cost of the finished products:
$C_{cc} = D_3\, Q_c P_c l_c.$
  • Replacement Stage for Sold Defective Products
In this stage, the company replaces defective finished products that have already been sold, incurring return costs. The returned defective products enter the defective-product disassembly stage. The required cost consists of the replacement cost and the return cost of the finished products, namely:
$C_{cd} + C_{ct} = Q_c P_c (e_c + r_c),$
where $C_{cd}$ represents the replacement cost of the finished products, $C_{ct}$ represents the return cost, $e_c$ denotes the replacement cost per finished product, and $r_c$ denotes the return cost per finished product. In this paper, the return cost per finished product is defined as the sum of the purchasing costs of the two spare parts and their assembly cost:
$r_c = u_1 + u_2 + a_c.$
  • Semi-Finished Product Inspection Stage
Introduce a decision variable $D_4$, representing whether to inspect the semi-finished products:
$D_4 = \begin{cases} 0, & \text{when the semi-finished products are not inspected} \\ 1, & \text{when the semi-finished products are inspected,} \end{cases}$
When $D_4 = 0$, no inspection is conducted, and the semi-finished products are directly assembled into finished products. Thus, the cost at this stage includes only the assembly cost of the semi-finished products, namely:
$C_{bz} = \sum_b Q_b a_b,$
where $C_{bz}$ represents the assembly cost of the semi-finished products, $Q_b$ denotes the quantity of the b-th type of semi-finished product, and $a_b$ denotes the assembly cost of the b-th type.
When $D_4 = 1$, defective semi-finished products proceed to the next disassembly stage, while qualified ones are assembled into finished products. Therefore, the cost at this stage includes both the assembly and inspection costs of the semi-finished products, namely:
$C_{bz} + C_{bj} = \sum_b Q_b (a_b + d_b),$
where $C_{bj}$ represents the inspection cost of the semi-finished products, and $d_b$ denotes the inspection cost of the b-th type of semi-finished product.
To summarize, the total cost at this stage consists of the assembly cost and inspection cost of the semi-finished products:
$C_{bz} + C_{bj} = \sum_b Q_b a_b + D_4 \sum_b Q_b d_b.$
  • Defective Semi-Finished Product Disassembly Stage
Introduce a decision variable $D_5$, representing whether to disassemble defective semi-finished products:
$D_5 = \begin{cases} 0, & \text{when the defective semi-finished products are not disassembled} \\ 1, & \text{when the defective semi-finished products are disassembled,} \end{cases}$
When $D_5 = 0$, the defective semi-finished products are directly discarded without incurring additional cost.
When $D_5 = 1$, disassembly is required. The disassembled spare parts re-enter the inspection stage, and the cost incurred at this stage is solely the disassembly cost:
$C_{bc} = \sum_b Q_b P_b l_b,$
where $C_{bc}$ represents the disassembly cost of the defective semi-finished products, $P_b$ denotes the defect rate of the b-th type of semi-finished product, and $l_b$ is the disassembly cost of the b-th type.
To summarize, the total cost at this stage is the disassembly cost of the semi-finished products:
$C_{bc} = D_5 \sum_b Q_b P_b l_b.$
  • Total Cost
To summarize, the objective function is to minimize the total cost across all stages, namely:
$\min Z = (C_{lg} + C_{lj}) + (C_{bz} + C_{bj}) + C_{bc} + (C_{cz} + C_{cj} + C_{cs}) + C_{cc} + (C_{cd} + C_{ct}).$
Constraints:
The quantities of spare parts, semi-finished products, and finished products must all be positive integers, namely:
$Q_i, Q_b, Q_c \in \mathbb{N}^+.$
Furthermore, it must be ensured that all spare parts are used up during semi-finished product assembly, and all semi-finished products are used up during finished product assembly, namely:
$Q_b \le \min_i Q_i, \qquad Q_c \le \min_b Q_b.$
In conclusion, the decision-making model for multiple processes and multiple parts is:
$\min Z = (C_{lg} + C_{lj}) + (C_{bz} + C_{bj}) + C_{bc} + (C_{cz} + C_{cj} + C_{cs}) + C_{cc} + (C_{cd} + C_{ct}), \quad \text{s.t.} \quad Q_i, Q_b, Q_c \in \mathbb{N}^+,\ Q_b \le \min_i Q_i,\ Q_c \le \min_b Q_b.$
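Because the five decisions are binary, small instances of this model can be checked by enumerating all 2⁵ decision vectors. The sketch below is a literal, term-by-term transcription with assumed parameters; note that in this decoupled form the inspection and disassembly terms are pure cost add-ons (the cross-stage defect-rate propagation that makes inspection pay off is handled by the full MsPIO model), so it serves only as a baseline check.

```python
from itertools import product

def total_cost(d, par):
    """Objective Z for one decision vector d = (D1, ..., D5)."""
    d1, d2, d3, d4, d5 = d
    spare = sum(q * (u + d1 * di) for q, u, di in par["parts"])      # C_lg + C_lj
    semi = par["Qb"] * (par["ab"] + d4 * par["db"])                  # C_bz + C_bj
    semi_dis = d5 * par["Qb"] * par["Pb"] * par["lb"]                # C_bc
    fin = (par["Qc"] * (par["ac"] - par["Sc"])
           + d2 * par["Qc"] * (par["dc"] + par["Pc"] * par["Sc"]))   # C_cz + C_cj + C_cs
    fin_dis = d3 * par["Qc"] * par["Pc"] * par["lc"]                 # C_cc
    swap = par["Qc"] * par["Pc"] * (par["ec"] + par["rc"])           # C_cd + C_ct
    return spare + semi + semi_dis + fin + fin_dis + swap

# Illustrative parameters (assumed, not the paper's data); r_c = u1 + u2 + a_c.
par = {"parts": [(100, 4, 2), (100, 18, 3)], "Qb": 100, "ab": 8, "db": 4,
       "Pb": 0.1, "lb": 6, "Qc": 100, "ac": 6, "dc": 3, "Sc": 56,
       "Pc": 0.1, "lc": 5, "ec": 6, "rc": 4 + 18 + 6}
best = min(product((0, 1), repeat=5), key=lambda d: total_cost(d, par))
```

With these decoupled numbers the enumeration confirms that every inspection term only adds cost, so the minimizer is the all-zeros vector; the interesting trade-offs appear once defect rates are linked across stages.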

3.4. Improved Grey Wolf Optimizer (IGWO)

Confronted with the high-dimensional and non-linear optimization landscape shaped by uncertain defect rates, traditional solvers risk converging on locally optimal yet globally inefficient production policies. To overcome this challenge, the MsPIO model leverages the Improved Grey Wolf Optimizer (IGWO), whose pack-based cooperative search mechanism is inherently suited to the characteristics of Uncertain Multi-stage Production. By simulating hierarchical leadership and collaborative hunting behavior, IGWO deploys multiple search agents to simultaneously explore and exploit diverse regions of the solution space, thereby reducing the risk of premature convergence and ensuring a robust global search. Its suitability is further reinforced under our assumptions: (i) the independence of sample qualification allows IGWO’s parallel agents to evaluate production stages without correlation bias; (ii) the accuracy of the enterprise’s inspection system ensures that optimization outcomes are not distorted by measurement errors; and (iii) the guarantee that all finished products are successfully sold aligns the objective function directly with profit maximization. Prior evidence has shown that IGWO performs effectively in complex, stochastic environments, and its adaptive exploration–exploitation balance makes it particularly well matched to the uncertainty and asymmetry inherent in multi-stage production systems.
In this work, an Improved Grey Wolf Optimizer (IGWO) is proposed to address the MsPIO problem. The conventional GWO [41], which draws inspiration from the social hierarchy and hunting behavior of grey wolves, is enhanced in the following three aspects. First, Latin Hypercube Sampling (LHS) is employed to generate the initial population, significantly improving its spatial uniformity. Second, a simulated binary crossover (SBX)-based evolution operator is introduced within the triple-parent framework to produce potential candidate solutions, thereby strengthening the algorithm’s exploration capability in complex search spaces. Finally, a mutation-based opposition learning mechanism is incorporated, which generates opposite solutions for inferior individuals and selectively replaces them to mitigate premature convergence.
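The first enhancement, LHS initialization, can be sketched as follows: each dimension is split into as many equal strata as there are wolves, one point is drawn per stratum, and the strata are shuffled independently per dimension. The population size, dimensionality, and bounds below are illustrative assumptions.

```python
import random

def lhs_population(pop_size, dim, lower, upper, seed=0):
    """Latin hypercube initialization: one point per stratum in
    every dimension, with strata shuffled independently, spreading
    the initial wolves more uniformly than plain random sampling."""
    rng = random.Random(seed)
    pop = [[0.0] * dim for _ in range(pop_size)]
    for j in range(dim):
        strata = list(range(pop_size))
        rng.shuffle(strata)
        for i in range(pop_size):
            u = (strata[i] + rng.random()) / pop_size  # point inside stratum
            pop[i][j] = lower[j] + u * (upper[j] - lower[j])
    return pop

# 20 wolves in a 5-dimensional unit box (sizes are illustrative):
wolves = lhs_population(20, 5, [0.0] * 5, [1.0] * 5)
```

By construction, projecting the population onto any single dimension occupies every one of the 20 strata exactly once, which is the uniformity property the IGWO initialization relies on.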

3.4.1. GWO Original Position Update Steps

1.
Encircling the Prey
Firstly, the grey wolves’ ranks need to be classified, with the top-level wolves referred to as leader wolves (the optimal solution), represented by α , β , and δ . These leader wolves guide the rest of the wolf pack to hunt the prey. The other wolves (candidate solutions) are denoted as ω , and they locate and determine the target based on the leader wolves’ instructions. During the hunting process, the behavior of the grey wolves encircling the prey is described as follows:
$D = |C \cdot X_P(t) - X(t)|,$
$X(t+1) = X_P(t) - A \cdot D,$
where $D$ represents the distance between the grey wolf and the prey, $X(t+1)$ is the updated position of the grey wolf, $t$ is the current iteration number, $A$ and $C$ are coefficient vectors, $X_P$ represents the position vector of the prey, and $X$ denotes the position vector of the grey wolf.
The following are the formulas for the coefficient vectors:
$A = 2a \cdot r_1 - a,$
$C = 2 r_2,$
where $a$ is the convergence factor, which decreases from 2 to 0 as the iterations progress, and $r_1$ and $r_2$ are random vectors with components in $[0, 1]$.
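For illustration, the encircling update and coefficient vectors above can be written out in a few lines of NumPy (a minimal sketch; the function names `coefficients` and `encircle` are ours, not part of the original model):

```python
import numpy as np

def coefficients(a, dim, rng):
    """GWO coefficient vectors: A = 2*a*r1 - a and C = 2*r2."""
    r1 = rng.random(dim)  # r1 ~ U[0, 1)
    r2 = rng.random(dim)  # r2 ~ U[0, 1)
    A = 2.0 * a * r1 - a  # components of A lie in [-a, a]
    C = 2.0 * r2          # components of C lie in [0, 2)
    return A, C

def encircle(X, X_prey, a, rng):
    """One encircling step: D = |C * X_P - X|, X(t+1) = X_P - A * D."""
    A, C = coefficients(a, X.shape[0], rng)
    D = np.abs(C * X_prey - X)
    return X_prey - A * D
```

Note that when the convergence factor $a$ reaches 0, $A$ vanishes and the wolf lands exactly on the prey position, which is the besieging behavior described below.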
2.
Hunting the Prey
Once the leader wolves identify the prey’s location, they guide the pack to gradually encircle and hunt it. In the optimization decision process, the exact position of the prey is unknown. Therefore, to simulate the grey wolves’ hunting behavior, it is assumed that α , β , and δ have a clearer estimation of the prey’s potential location. These three wolves are considered the best solutions, and their positions are used to search for the possible location of the prey. Meanwhile, the rest of the grey wolves adjust their positions based on the leaders’ locations, gradually approaching the prey. This strategy helps get closer to the prey’s actual position, thereby improving solution quality. Figure 2 displays the update process of an individual wolf’s position.
Accordingly, the grey wolf hunting model is established as follows:
$D_\alpha = |C_1 \cdot X_\alpha - X|, \quad D_\beta = |C_2 \cdot X_\beta - X|, \quad D_\delta = |C_3 \cdot X_\delta - X|,$
where the distances between the leader wolves and the other individuals are represented by $D_\alpha$, $D_\beta$, and $D_\delta$; the random coefficient vectors are denoted by $C_1$, $C_2$, and $C_3$; and the current positions of the leader wolves are represented by $X_\alpha$, $X_\beta$, and $X_\delta$, respectively.
After further rearranging the formula, we obtain the step size and direction of the rest of the wolves as they move toward the leader wolves, given by:
$X_1 = X_\alpha - A_1 \cdot D_\alpha, \quad X_2 = X_\beta - A_2 \cdot D_\beta, \quad X_3 = X_\delta - A_3 \cdot D_\delta.$
Finally, the position update for the remaining grey wolves ω in the population is as follows:
$X(t+1) = \dfrac{X_1 + X_2 + X_3}{3}.$
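The full hunting update for one $\omega$ wolf, averaging the three leader-guided candidates, can be sketched as follows (our naming; with $a = 0$ the update collapses to the plain mean of the leader positions):

```python
import numpy as np

def gwo_update(X, leaders, a, rng):
    """Update one omega wolf from the three leaders (alpha, beta, delta):
    X(t+1) = (X1 + X2 + X3) / 3, with Xk = X_lead - A_k * D_k."""
    candidates = []
    for X_lead in leaders:
        r1 = rng.random(X.shape[0])
        r2 = rng.random(X.shape[0])
        A = 2.0 * a * r1 - a
        C = 2.0 * r2
        D = np.abs(C * X_lead - X)          # distance to this leader
        candidates.append(X_lead - A * D)   # leader-guided candidate
    return np.mean(candidates, axis=0)      # average of X1, X2, X3
```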
3.
Besieging the Prey
When the prey stops moving, the wolf pack can complete the hunt by surrounding and besieging it. During the simulation of approaching the prey, the value of $a$ gradually decreases, which in turn narrows the range of $A$. Specifically, as $a$ decreases from 2 to 0, the value of $A$ fluctuates within the interval $[-a, a]$.

3.4.2. Improvement Strategies

1.
LHS Initialization
Latin Hypercube Sampling (LHS) [42] is an efficient initialization method based on stratified sampling that ensures a uniform distribution of sample points across a multidimensional search space. In the hunting process of grey wolves, individuals are typically dispersed uniformly across the terrain to conduct extensive exploration, thereby increasing the probability of locating prey. Inspired by this behavioral characteristic, this study integrates LHS into the algorithmic framework to mitigate the non-uniformity of samples caused by random initialization, while simultaneously enhancing global search capability.
Assume the search space is $D$-dimensional, and the value range of each dimension is the interval $[x^i_{\min}, x^i_{\max}]$ $(i = 1, 2, \ldots, D)$. For each dimension $i$, the range is partitioned into $N$ equiprobable subintervals, each of length:
$\Delta x^i = \dfrac{x^i_{\max} - x^i_{\min}}{N}.$
A uniformly distributed random value $r^i_j \in [0, 1]$ is drawn for the $j$-th subinterval of the $i$-th dimension, and a sample point is positioned within this subinterval accordingly:
$x^i_j = x^i_{\min} + (j - 1 + r^i_j)\,\Delta x^i.$
To ensure a uniform distribution of sample points across the entire search space, a random permutation is applied to the sampling order across different dimensions, guaranteeing that the samples are evenly distributed in all dimensions. Subsequently, an initial population of N individuals is constructed, where the value of each dimension for every individual is systematically drawn from the corresponding subinterval using the LHS method:
$x^j = [x^j_1, x^j_2, \ldots, x^j_D], \quad j = 1, 2, \ldots, N.$
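A minimal sketch of this LHS initialization (the function name `lhs_init` is ours) is the following; each dimension is stratified into $N$ subintervals, one point is drawn per subinterval, and the stratum order is permuted independently per dimension:

```python
import numpy as np

def lhs_init(N, lb, ub, rng):
    """Latin hypercube initialization of an N-individual population."""
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    D = lb.size
    pop = np.empty((N, D))
    for i in range(D):
        delta = (ub[i] - lb[i]) / N        # subinterval length
        strata = rng.permutation(N)        # random stratum order per dim
        r = rng.random(N)                  # offset inside each stratum
        pop[:, i] = lb[i] + (strata + r) * delta  # x = lb + (j-1+r)*dx, 0-based
    return pop
```

Because every dimension contributes exactly one sample per stratum, the initial pack covers the search space far more evenly than plain uniform sampling.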
2.
Evolutionary Parent Roundup
In nature, grey wolves also achieve efficient hunting through group cooperation and dynamic adjustment strategies, which are typically characterized by diverse hunting behaviors. Building upon this concept, this study incorporates Simulated Binary Crossover (SBX) [43] into the IGWO framework. As a crossover operator that simulates the behavior of binary crossover in genetic algorithms, SBX effectively introduces evolutionary variation by leveraging three-parent solutions to generate new offspring. This strategy enhances the diversity and complexity of parental information, enabling the algorithm to more effectively escape local optima and accelerate the search for the global optimum.
SBX determines the inheritance of genetic information between offspring based on the differences between the two parent solutions. The crossover factor χ is first calculated using SBX as follows:
$\chi = \begin{cases} (2u)^{1/(\eta_c + 1)}, & u \le \tfrac{1}{2} \\[4pt] \left( \dfrac{1}{2(1 - u)} \right)^{1/(\eta_c + 1)}, & u > \tfrac{1}{2} \end{cases}$
where u is a random number and η c = 2 is the crossover distribution index controlling the shape of the crossover operator. The three-parent strategy further incorporates the relative positional differences among individuals α , β , and δ . By systematically combining their characteristics, this approach explores new regions of the solution space, thereby effectively leveraging information from high-quality solutions to generate improved offspring. The evolutionary factor (EF) is computed as follows:
$EF = \dfrac{\chi\,(\alpha^j_{pos} - \beta^j_{pos}) + (1 - \chi)\,(\beta^j_{pos} - \delta^j_{pos})}{2},$
Finally, the wolf’s position is updated:
$x^{new}_{ij} = \dfrac{EF + X_1 + X_2 + X_3}{4},$
Additionally, we ensure that each wolf’s new position does not exceed the defined search space boundaries:
$X_{ij} = \max\left( \min(X_{ij}, ub_j),\; lb_j \right),$
The fitness of the new position is then compared with that of the old position; if the new position is better, it replaces the old one:
$\text{if } f_{new} < f_{old}: \; X^{old}_i = X^{new}_i.$
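The crossover factor, evolutionary factor, averaged update, and boundary clamp above can be sketched together as follows. This is a hedged illustration under one assumption: we take the blend weight in the evolutionary factor to be the crossover factor $\chi$ computed by SBX, and the greedy fitness comparison is left to the caller:

```python
import numpy as np

def sbx_chi(u, eta_c=2.0):
    """SBX crossover factor: (2u)^(1/(eta_c+1)) for u <= 1/2,
    else (1/(2(1-u)))^(1/(eta_c+1))."""
    if u <= 0.5:
        return (2.0 * u) ** (1.0 / (eta_c + 1.0))
    return (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))

def evolutionary_step(X1, X2, X3, alpha, beta, delta, lb, ub, rng):
    """Triple-parent evolutionary update with boundary clamping."""
    chi = sbx_chi(rng.random())
    # evolutionary factor blending the leader position differences
    EF = (chi * (alpha - beta) + (1.0 - chi) * (beta - delta)) / 2.0
    x_new = (EF + X1 + X2 + X3) / 4.0
    return np.clip(x_new, lb, ub)  # X = max(min(X, ub), lb)
```

The caller keeps `x_new` only when its fitness improves on the wolf's current position, mirroring the greedy acceptance rule above.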
3.
Mutation Reverse Learning Strategy
When facing hunting failures or encountering more resilient prey, grey wolves often exhibit strategic adaptability by altering their attack patterns or reorganizing their formation to reinitiate the hunt, all while refining the capabilities of underperforming members to strengthen the overall group. Inspired by this observed behavior, this study incorporates a mutation-based opposition learning strategy [44]. This approach enhances population diversity by applying mutation operations to the least fit individuals, enabling the IGWO to autonomously adjust and escape local optima when trapped in suboptimal regions, thereby enhancing its global search capability.
Specifically, individuals are ranked in descending order based on their fitness values, and the lowest-performing quartile of individuals is selected for opposition-based learning:
$worst\_indices = \mathrm{sort}(f_{values}, \mathrm{descend})[1 : N/4],$
Calculate the reverse solutions of $\alpha$, $\beta$, and $\delta$:
$X'_\alpha = lb + ub - \alpha_{pos},$
$X'_\beta = lb + ub - \beta_{pos},$
$X'_\delta = lb + ub - \delta_{pos},$
Calculate the reverse solution for the worst individual $X_{worst}$ and add the Gaussian perturbation term $\varepsilon$:
$X' = lb + ub - X_{worst} + \varepsilon,$
where $\varepsilon = (ub_j - lb_j) \cdot randn$. The fitness value of the reverse solution is then calculated:
$f_{opposition} = \mathrm{CostFunction}(X'),$
Then select the best reverse solutions according to their fitness values:
$best\_oppositions = \mathrm{sort}(oppositions\_fitness, \mathrm{ascend})[1 : N/4],$
Similarly, if the fitness of the reverse solution is better, it replaces the original individual:
$\text{if } f_{best\_opposition} < f: \; X_{worst} = X_{best\_opposition},$
Finally, the positions and scores of $\alpha$, $\beta$, and $\delta$ are updated:
$\text{if } f_{best\_opposition} < \alpha_{score}: \; \alpha_{pos} = X_{best\_opposition}, \; \alpha_{score} = f_{best\_opposition},$
$\text{if } \alpha_{score} < f_{best\_opposition} < \beta_{score}: \; \beta_{pos} = X_{best\_opposition}, \; \beta_{score} = f_{best\_opposition},$
$\text{if } \beta_{score} < f_{best\_opposition} < \delta_{score}: \; \delta_{pos} = X_{best\_opposition}, \; \delta_{score} = f_{best\_opposition}.$
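The whole mutation-based reverse-learning step on the worst quartile can be condensed into one routine (a sketch with our naming; the cost function is passed in, and for a minimization problem the "worst" individuals are those with the highest cost):

```python
import numpy as np

def opposition_refresh(pop, fitness, lb, ub, cost_fn, rng):
    """Apply mutation-based reverse learning to the worst quartile:
    X' = lb + ub - X_worst + eps, eps = (ub - lb) * N(0, 1),
    keeping the reverse solution only if it improves fitness."""
    N, dim = pop.shape
    k = max(1, N // 4)
    worst = np.argsort(fitness)[-k:]        # highest-cost quartile
    for i in worst:
        eps = (ub - lb) * rng.standard_normal(dim)
        x_rev = np.clip(lb + ub - pop[i] + eps, lb, ub)
        f_rev = cost_fn(x_rev)
        if f_rev < fitness[i]:              # greedy replacement only
            pop[i] = x_rev
            fitness[i] = f_rev
    return pop, fitness
```

Because replacement is greedy, the step can only improve (never degrade) the population, while the Gaussian perturbation keeps the reverse solutions from collapsing onto a single mirrored point.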
The complete algorithm flowchart is shown in Figure 3, where orange denotes process steps and blue denotes decision boxes.
The complete solution flowchart is shown in Figure 4, and the specific steps are as follows:
Step 1. Initialize the Wolf Pack: Based on the problem scale and constraints, initialize a wolf pack. Each wolf represents a decision scheme.
Step 2. Initial Strategy Cost: For each wolf’s decision scheme, calculate its corresponding total cost.
Step 3. Decide Whether to Inspect Spare Parts: Each wolf decides whether to inspect spare parts. If inspected, the purchased spare parts are checked for quality. If they pass, they move on to the qualification check; if they do not pass, they are discarded. If not inspected, the spare parts are directly assembled.
Step 4. Decide Whether to Inspect Assembled Products: Each wolf decides whether to inspect the assembled finished or semi-finished products. If inspected and passed, they move on to the qualification check; if not, proceed to Step 5. If not inspected, the products are directly sold.
Step 5. Decide Whether to Disassemble Defective Finished Products: Each wolf decides whether to disassemble defective finished or semi-finished products. If disassembled, the spare parts are recovered and the process returns to Step 3; otherwise, the product is discarded. Defective products sold without inspection also reach this step.
Step 6. Update Position: Update the wolf’s position, i.e., the optimal decision, and calculate the cost.
Step 7. Fitness Evaluation: Evaluate the cost of each decision scheme and select the one with the lowest cost to update the current decision.
Step 8. Iterative Optimization: Repeat the above process until the predetermined number of iterations is reached or the convergence condition is met. Output the decision scheme with the minimum cost.

4. Experimental Analysis

4.1. A Case Study

In this experiment, we used examples from the “China Mathematical Contest in Modeling”. This case includes both single-process and multi-process production scenarios. Appendix A displays the defect rates and prices for various spare parts in a single-process scenario, where the process involves assembling two spare parts into one finished product.
Figure 5 illustrates the multi-process scenario, where two or three components are assembled into a semi-finished product, and then three semi-finished products are assembled into a finished product. In this diagram, light gray boxes represent individual spare parts, the basic elements of production. Light orange boxes represent semi-finished products assembled from several spare parts. Light blue boxes represent the finished product after final assembly. Black arrows indicate the assembly or material flow, i.e., the process path of how spare parts are combined into semi-finished products, and how semi-finished products are then combined into finished products. Appendix A presents the defect rates and prices for various components in the multi-process scenario.

4.2. Inspection Results

Based on the two-stage sampling inspection model, in the rejection plan we set $\alpha = 0.05$, with the total number of components $N = 500$ and the defective-part threshold $D_0 = 50$. The result shows that the minimum sample size is $n = 2$ and the rejection critical number is $c = 2$. This means that if two components are selected from the 500 and both are defective ($X = 2$), the rejection decision is made with 95% confidence; otherwise ($X = 0$ or $1$), the batch cannot be rejected. In the acceptance plan, we set $\beta = 0.1$, again with $N = 500$ and $D_0 = 50$. The result shows that the minimum sample size is $n = 22$ and the acceptance critical number is $m = 0$. This means that if 22 components are selected and all are non-defective ($X = 0$), the acceptance decision is made with 90% confidence; otherwise ($X \ge 1$), the batch cannot be accepted.
Based on the two schemes above, the two-stage sequential sampling plan is implemented as follows. The first stage draws a sample of size $n_1 = 2$; if the number of defects $X_1 \ge 2$, the batch is immediately rejected, otherwise the process proceeds to the second stage. The second stage draws an additional sample of size $n_2 = 20$, bringing the total sample size to 22. The batch is accepted only if the total number of defects $X = 0$; otherwise, it is rejected. After 10,000 simulation trials, the batch was accepted in 986 cases (an acceptance rate of 9.86%) and rejected in 9014 cases (a rejection rate of 90.14%).
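This two-stage plan can be reproduced with a short Monte Carlo sketch, sampling without replacement from a lot of $N = 500$ parts containing $D_0 = 50$ defectives (our naming; exact counts will vary with the random seed):

```python
import numpy as np

def two_stage_plan(rng, N=500, D=50, n1=2, n2=20):
    """One run of the plan: reject if both first-stage samples are defective;
    otherwise accept only if the full (n1 + n2)-part sample has no defects."""
    sample = rng.choice(N, size=n1 + n2, replace=False) < D  # True = defective
    if sample[:n1].sum() >= 2:
        return "reject"                      # first-stage rejection
    return "accept" if sample.sum() == 0 else "reject"

def acceptance_rate(trials=10_000, seed=0):
    """Empirical acceptance probability over repeated trials."""
    rng = np.random.default_rng(seed)
    hits = sum(two_stage_plan(rng) == "accept" for _ in range(trials))
    return hits / trials
```

At the threshold defect level the acceptance probability is essentially the chance that all 22 drawn parts are good, which is consistent with the simulated rate of about 10% reported above.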
Subsequently, the Clopper–Pearson method was applied with a 95% confidence level to estimate the defect rate for accepted batches. The estimated defect rate for the entire batch of components in such cases falls within the interval [0, 0.154].
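For the accepted-batch case (zero defects observed in $n = 22$ samples), the Clopper-Pearson upper limit has the closed form $1 - (\alpha/2)^{1/n}$ for a two-sided interval, which reproduces the reported upper bound of roughly 0.154 (a minimal sketch; the function name is ours):

```python
import math

def cp_upper_zero_defects(n, conf=0.95):
    """Clopper-Pearson upper limit on the defect rate when 0 defects are
    observed in n samples; the two-sided interval is [0, p_hi] with
    p_hi = 1 - (alpha/2)^(1/n)."""
    alpha = 1.0 - conf
    return 1.0 - (alpha / 2.0) ** (1.0 / n)
```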
The following outlines the derivation of the defect-rate intervals for semi-finished and finished products, starting with a semi-finished product assembled from two components. Let the fixed defect rate of the assembly process be $P_z = 0.1$; the probability that both components are qualified is $(1 - P_{l1})(1 - P_{l2})$, and the probability of at least one defective component is $1 - (1 - P_{l1})(1 - P_{l2})$. A semi-finished product is qualified only when all components are qualified and the assembly is correct. Therefore, the semi-finished product pass rate is $(1 - P_z)(1 - P_{l1})(1 - P_{l2})$, and the semi-finished product defect rate $P_b$ is:
$P_b = 1 - (1 - P_z)(1 - P_{l1})(1 - P_{l2}),$
where $P_{l1}$ and $P_{l2}$ are the defect rates of the two spare parts.
Similarly, the defect rate P c of finished products can be obtained as:
$P_c = 1 - (1 - P_z)(1 - P_{b1})(1 - P_{b2})(1 - P_{b3}),$
where $P_{b1}$, $P_{b2}$, and $P_{b3}$ are the defect rates of the three semi-finished products.
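Both propagation formulas share the same structure, so a single helper covers semi-finished and finished products alike (a sketch; the function name is ours):

```python
def assembled_defect_rate(p_z, part_rates):
    """Defect rate of an assembled item: it is qualified only if the
    assembly step succeeds (probability 1 - P_z) and every input is
    qualified, so P = 1 - (1 - P_z) * prod(1 - P_i)."""
    good = 1.0 - p_z
    for p in part_rates:
        good *= (1.0 - p)
    return 1.0 - good
```

For example, with $P_z = 0.1$ and two perfect components the semi-finished defect rate reduces to the assembly defect rate alone, while with two 10%-defective components it rises to $1 - 0.9^3 = 0.271$.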
In summary, the defect rates of semi-finished and finished products for various production scenarios are summarized in Table 2.

4.3. Experimental Results Analysis

The experiments in this section were run in MATLAB R2022b on a computer with an 11th-generation Intel(R) i7 CPU, using the parameters provided in the previous chapter to solve the model.

4.3.1. Performance Testing

To evaluate the performance of our proposed IGWO, we conducted comparative experiments against two recently enhanced grey wolf optimizer variants: the hybrid grey wolf and whale optimization algorithm (hGWOA) [45] and the method integrating random opposition-based learning, strengthened wolf hierarchy, and modified evolutionary population dynamics (RSMGWO) [46]. The algorithms were tested on the IEEE CEC2022 benchmark suite [47], a comprehensive set of optimization problems detailed in Table 3.
To ensure experimental reproducibility, the initial population size for each intelligent algorithm is mandated to be 50, with a maximum of 1000 iterations permitted. Each algorithm undergoes 50 independent experimental trials, validated on the IEEE CEC 2022 benchmark suite (10-dimensional space).
Before the test began, this study set fixed initial parameters for the algorithm as shown in Table 4.
Firstly, ablation experiments were conducted to test the performance of each element in the proposed IGWO. GWO-LHS represents GWO initialized with Latin Hypercube Sampling, GWO-SBX represents GWO improved with Simulated Binary Crossover mechanism, and GWO-OL represents GWO enhanced with opposition learning strategy.
Table 5 and the corresponding figure unequivocally demonstrate the individual and synergistic contributions of each proposed improvement to the final IGWO algorithm. On the most challenging functions, the complete IGWO achieves superior accuracy and convergence. For instance, on F1, IGWO’s average (3.40 × 102) significantly outperforms not only the basic GWO (1.68 × 103) but also all its intermediate variants, including GWO-SBX (3.54 × 102) and GWO-OL (4.17 × 102), highlighting that the fusion of all components is crucial for optimal performance. A similar trend is observed on F6 and F11, where IGWO attains the best average values (5.35 × 103 and 2.70 × 103, respectively), underscoring its enhanced capability in navigating complex, high-dimensional search spaces. Furthermore, the consistently lower standard deviation of IGWO across a majority of functions (e.g., F1, F4, F9, F10, F11) confirms its superior stability and reliability compared to the other configurations. This combination of the lowest average fitness and minimal performance variability makes a compelling case for the robustness of the fully assembled IGWO algorithm, proving that each component—LHS, SBX, and OL—plays a vital and complementary role in the overall design.
Figure 6 depicts the computational time consumption of the proposed algorithm and its variants, where the standard GWO algorithm serves as the computational baseline. The results indicate that the IGWO algorithm incurs a substantial computational overhead, requiring approximately 3.5 times longer to execute than the baseline GWO. This is a direct consequence of integrating the additional LHS, SBX, and OL mechanisms. Among the individual components, the GWO-SBX variant exhibits the most significant time increase, implying that the crossover operation is the primary contributor to the computational cost. Meanwhile, the GWO-LHS variant shows a negligible time increase compared to the baseline, confirming the efficiency of the initialization strategy. This notable rise in runtime for the complete IGWO is, however, a justified trade-off for its demonstrably superior convergence accuracy and robust performance, as comprehensively validated in our experimental sections.
Table 6 presents the p-values derived from the Wilcoxon signed-rank test, which is employed to statistically ascertain the performance differences between the proposed IGWO and its component variants. The IGWO algorithm demonstrates statistically significant superiority over the base GWO, as well as the GWO-LHS and GWO-SBX variants. This is evidenced by the exceptionally low p-values in its corresponding row: 2.44 × 10−3 against GWO, 4.88 × 10−3 against GWO-LHS, and 2.69 × 10−2 against GWO-SBX. These results strongly reject the null hypothesis of equivalent performance, confirming that the integration of all three components in IGWO yields a performance that is significantly better than that of the algorithm with only one improvement. Notably, the p-value of 0.470 against GWO-OL indicates that their performance is not statistically different, highlighting the substantial individual contribution of the opposition learning strategy. In conclusion, while the OL strategy alone confers a performance level comparable to the full IGWO, the complete algorithm robustly outperforms the baseline and other intermediate forms, solidifying its overall efficacy.
Figure 7 summarizes the Friedman test results, which provide an overall performance ranking of the algorithms. The proposed IGWO algorithm secures the top ranking with the lowest Friedman value of 1.75, unequivocally identifying it as the best-performing method. The GWO-OL variant achieves the second position with a value of 2.42, followed by GWO-SBX at 2.92. In contrast, the base GWO and GWO-LHS obtain the highest (i.e., worst) Friedman values of 3.83 and 4.08, respectively, confirming their inferior performance relative to the enhanced variants in this study. This statistical ranking definitively positions the complete IGWO as the most effective algorithm, demonstrating that the synergistic integration of all components yields superior overall performance compared to any individual improvement.
After that, to test the advantages of the proposed IGWO over earlier GWO variants, the CEC 2022 suite (Dim = 10) was used again.
Table 7 and Figure 8 indicate that on complex functions such as F1 and F6, IGWO achieves far superior average values (3.48 × 102 and 4.73 × 103, respectively) compared to its competitors, and IGWO has a fast convergence speed. For instance, on F1, IGWO’s average is an order of magnitude better than that of GWO (1.22 × 103) and dramatically outperforms RSMGWO (4.91 × 103). This indicates a significantly enhanced capability for escaping local optima and locating the vicinity of the global optimum. Furthermore, the notably lower standard deviation values of IGWO across most functions (e.g., F1, F5, F6, F11) underscore its superior stability and reliability. A smaller Std signifies that IGWO’s performance is less variable across independent runs, yielding highly consistent and dependable results. This combination of low Avg and low Std makes a compelling case for the robustness of the proposed algorithm.
Figure 9 shows that the standard GWO algorithm serves as the computational baseline, with IGWO, hGWOA and RSMGWO exhibiting marginally longer execution periods. Notably, the IGWO algorithm incurs a substantial computational overhead, requiring approximately 3.6 times longer to execute than the baseline GWO. This significant increase in runtime, however, is a direct and justified trade-off for its demonstrably superior convergence properties and markedly enhanced solution precision observed in our experimental validation.
Table 8 presents the p-values derived from the Wilcoxon signed-rank test, which is employed to statistically ascertain the performance differences between the various algorithms. The proposed IGWO algorithm demonstrates statistically significant superiority over all its competitors. This is evidenced by the exceptionally low p-values (all below 0.05) in its corresponding row: 2.44 × 10−3 against GWO, 2.69 × 10−2 against hGWOA, and 1.47 × 10−3 against RSMGWO. These results strongly reject the null hypothesis of equivalent performance, confirming that IGWO’s enhanced convergence accuracy, as previously noted, is not incidental but statistically robust. In conclusion, IGWO is the top-performing algorithm, achieving a level of solution quality that is significantly better than that of the other state-of-the-art methods tested.
Figure 10 summarizes the Friedman test results for this comparison. The proposed IGWO algorithm secures the top ranking with the lowest Friedman value of 1.25, unequivocally identifying it as the best-performing method overall. GWO and hGWOA are tied for second place, both with an average rank of 2.67. Finally, RSMGWO obtains the highest (i.e., worst) Friedman value of 3.42, indicating its inferior performance relative to the other algorithms in this comparative study.
Finally, IGWO was tested on the CEC 2022 kit (Dim = 10) and compared with global optimization methods, differential algorithms (DE), and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to verify the powerful global optimization capabilities of IGWO.
Table 9 presents a comparative performance analysis of the proposed IGWO against two mainstream global optimizers, DE and CMA-ES, on the CEC 2022 benchmark suite. The results demonstrate that IGWO achieves highly competitive, and often superior, performance. On complex unimodal and hybrid functions, IGWO exhibits a decisive advantage. For instance, on F1, IGWO’s average value (3.40 × 102) is dramatically lower than that of DE (1.59 × 103) and CMA-ES (9.50 × 102), highlighting its superior capability for exploitation and convergence to high-precision solutions. A similar trend is observed on F11, where IGWO’s average (2.70 × 103) significantly outperforms both DE (3.11 × 103) and CMA-ES (2.87 × 103). Furthermore, while DE shows strong performance on functions like F6 and F7, IGWO maintains robust competitiveness across the majority of test functions. The consistently low standard deviation of IGWO on key functions such as F1, F4, and F11 further underscores its stability and reliability. This comprehensive comparison confirms that the proposed IGWO is not only an improvement over its GWO-based peers but also a formidable contender against established mainstream optimizers, successfully balancing exploratory power with convergent precision.
The computational time consumption of IGWO and the mainstream optimizers is presented in the accompanying Figure 11. The results indicate that the proposed IGWO algorithm incurs a substantial computational overhead compared to the highly efficient DE and CMA-ES algorithms. Specifically, IGWO requires approximately 4.3 times longer to execute than DE and 3.4 times longer than CMA-ES. This significant increase in runtime is a direct consequence of the sophisticated integration of the LHS initialization, SBX, and opposition-based learning mechanisms within the IGWO framework. While DE demonstrates the fastest execution, this notable rise in runtime for the complete IGWO is a justified and necessary trade-off for its demonstrably superior convergence accuracy and robust performance across the CEC 2022 benchmark, as conclusively validated in the preceding sections.
Table 10 presents the p-values from the Wilcoxon signed-rank test conducted to statistically ascertain the performance differences between IGWO and the mainstream optimizers, DE and CMA-ES. The results indicate that there is no statistically significant difference in the overall performance among the three algorithms. This conclusion is supported by the obtained p-values, all of which are substantially above the 0.05 significance threshold: 0.424 between IGWO and DE, 0.151 between IGWO and CMA-ES, and 0.470 between DE and CMA-ES. These high p-values fail to reject the null hypothesis, meaning that the observed performance advantages of IGWO in the previous tables, while evident in the average values, are not statistically conclusive at this confidence level. Therefore, it can be stated that the proposed IGWO achieves performance that is statistically competitive with both the classic DE and the state-of-the-art CMA-ES, establishing it as a viable and robust alternative within the landscape of global optimization algorithms.
The overall performance ranking derived from the Friedman test is presented in Figure 12. The results show that the proposed IGWO algorithm secures the top rank, achieving a top-tier Friedman value of 1.8333. It is noteworthy that IGWO attains the same top rank value as the classic Differential Evolution (DE) algorithm, indicating that their overall performances are statistically equivalent and both are the best-performing methods in this study. The CMA-ES algorithm obtains a slightly higher (i.e., worse) Friedman value of 2.3333. This ranking definitively positions the proposed IGWO as a leading optimizer, whose overall performance is not only superior to the advanced CMA-ES but is also statistically indistinguishable from the highly regarded DE algorithm, thereby solidifying its competitiveness and effectiveness.

4.3.2. Single-Process Test

To begin with, a single-process experimental test was conducted on the model. The model was tested across six different scenarios for the single process. After running the test 100 times, the average decision value was calculated, and the results are shown in Table 11. Please note that all cost units in this article are monetary units (m.u.).
Table 11 demonstrates that the results obtained by our model, with the goal of minimizing costs, consistently yield positive profits across all cases. In Cases 1 to 5, the model determined that inspecting finished products and disassembling unqualified products are essential, with rates fixed at 100%, indicating that these steps are critical for quality assurance despite potential high defect rates or costs. However, in Case 6, the inspection rates for finished products and disassembly are significantly lower (12.16% and 18.52%, respectively), yet the profit is the highest (12,026), suggesting that the model optimized the inspection process by reducing unnecessary checks while maintaining cost efficiency. The varying inspection rates for spare parts (e.g., low rates in Case 1 and high rates in Case 4) reflect adaptive decisions based on defect risks or cost-balancing strategies. Overall, it is evident that our model effectively finds an optimal balance between cost and detection rate in single-process tasks, ensuring profitability while managing quality control.
Figure 13 demonstrates that the IGWO quickly raises the total profit into the range of 9000 to 12,000 during the first 200 iterations, after which it fine-tunes to a stable value. This indicates that IGWO is effectively integrated into the solution process of our model.

4.3.3. Multi-Process Test

Subsequently, a multi-process experiment was conducted. A comparison with the similarly improved GWO verifies the effectiveness of the proposed method.
The comparison of multi-process optimization results in Table 12 shows that our IGWO model significantly outperforms the other comparison algorithms with a total profit of 43,800, thanks to its more intelligent and balanced decision-making in many inspection and disassembly links. Compared with hGWOA, which shows a tendency to “over-inspect”, and RSMGWO, whose decision-making has extreme risks, the IGWO model can more finely balance the costs and risks of each link. While ensuring the common bottom line of 100% full inspection of the final product, it minimizes the total cost and maximizes the profit by optimizing the inspection intensity of the early processes, demonstrating its excellent cost control and decision-making robustness in a complex multi-process environment.
As shown in Figure 14, IGWO surpasses GWO and its variants in around 300 iterations. This demonstrates IGWO’s ability to meticulously identify near-optimal detection–decomposition combinations. Through subsequent fine-tuning, the algorithm further reduces costs, highlighting its superior efficiency and robustness.

4.3.4. Confidence Level Analysis

To further validate the decision-making capability of the model under uncertainty, a sensitivity analysis is conducted. First, the uncertainty degree of the confidence level in the two-stage sampling inspection model presented in this paper is analyzed, discussing the impact of different confidence levels on the results under uncertain conditions. The probability estimation results with adjusted confidence levels are shown in Table 13. Similarly, after running the model 100 times, the results are presented in Table 14.
A lower confidence level signifies greater uncertainty in parameter estimates. As illustrated in Table 13, when the confidence level is reduced, the corresponding interval must be situated closer to zero to satisfy the batch acceptance criteria, thereby enabling a more definitive assessment of part quality. Table 14 further reveals that although a 15% confidence level may lead to higher theoretical profits, the associated increase in parameter uncertainty necessitates more extensive inspection efforts. In practical manufacturing settings, estimation outcomes are often suboptimal under such conditions. Thus, it is critical to strategically set the confidence level amid uncertainty, enhance process control measures, and streamline procedures for low-risk components to improve overall profitability.

4.3.5. Finished Product Selling Price Analysis

In addition, the selling price of finished products fluctuates with market conditions, so we conducted an uncertainty analysis on it, varying the price and examining the corresponding outcomes at each level. The results, averaged over 100 runs of the model, are presented in Table 15.
As evidenced by the sensitivity analysis in Table 15, rising finished product prices are accompanied by a marked upward trajectory in total profit, reflecting a strong positive correlation between pricing strategy and overall economic return. Concurrently, the decision-making structure exhibits notable variability across inspection and dismantling activities, with certain processes demonstrating pronounced sensitivity to price fluctuations. For instance, the inspection rate for spare part 8 rises consistently from 76.96% to 97.83% as price increases, whereas the dismantling rate for semi-finished product 2 surges dramatically from 4.07% to 96.11%, underscoring its high price elasticity. Conversely, finished product inspection remains consistently implemented across all scenarios, whereas the dismantling of nonconforming finished products declines sharply from 61.13% to 0%, indicating a strategic shift toward maximizing output value in high-price regimes. These patterns collectively highlight how pricing dynamics reconfigure operational priorities across the production system.
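The direction of these shifts can be illustrated with a deliberately simplified single-stage trade-off (all parameter values below are illustrative assumptions, not the paper's calibrated data): inspecting a component catches defects and replaces them before sale, while skipping inspection risks losing the sale and paying an exchange loss, so a higher selling price raises the value of catching defects early.

```python
# Toy single-stage model: inspect vs. skip, under a hypothetical cost structure.
def expected_profit(price: float, inspect: bool, defect_rate: float = 0.05,
                    inspect_cost: float = 2.0, replace_cost: float = 10.0,
                    exchange_loss: float = 5.0, unit_cost: float = 10.0) -> float:
    if inspect:
        # All units become sellable; defective parts are replaced before assembly.
        return price - unit_cost - inspect_cost - defect_rate * replace_cost
    # Defective units ship: the sale is lost and an exchange loss is paid.
    return (1 - defect_rate) * price - unit_cost - defect_rate * exchange_loss

# Sweep the selling price around a baseline of 56. Inspection becomes worthwhile
# once defect_rate * (price + exchange_loss - replace_cost) > inspect_cost,
# i.e. above price = 45 with these illustrative numbers.
for mult in (0.6, 0.8, 1.0, 1.2, 1.4):
    price = 56 * mult
    profit, decision = max((expected_profit(price, d), d) for d in (False, True))
    print(f"price={price:5.1f}  inspect={decision}  profit={profit:6.2f}")
```

The sweep mirrors the qualitative pattern in Table 15: as the price rises, the optimal policy reallocates effort toward protecting output value.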

4.3.6. Exchange Loss Analysis

Finally, the exchange loss is also critically important, as it directly affects the company's reputation and the credibility of its products. In the event of negative publicity, a company may need to raise the exchange compensation to maintain credibility; conversely, under economic constraints, it may need to reduce the exchange loss to stay afloat. We therefore examine how uncertainty adjustments to the exchange loss change the recommended decisions. As before, after running the model 100 times, the results are presented in Table 16.
Furthermore, analysis of Table 16 indicates that fluctuations in swap losses within a ±20% range of the baseline induce considerable variability in inspection and dismantling decisions across the production process. While finished product inspection remains consistently implemented across all scenarios, multiple inspection and dismantling activities exhibit non-monotonic and at times pronounced shifts in response to changing swap loss levels. For instance, the inspection rate for spare part 6 rises sharply from 5.13% at the baseline to 88.68% at a +20% swap loss, reflecting heightened sensitivity to loss escalation. Similarly, the dismantling rate of unqualified finished products climbs markedly from 62.65% to 97.46% as swap losses increase from 0% to 20%, underscoring the growing economic incentive to mitigate quality-related losses under such conditions. These patterns collectively illustrate how swap loss magnitudes reconfigure operational priorities in a non-linear fashion across the system.
In summary, the proposed production system demonstrates remarkable dynamic adjustment capabilities when confronted with fluctuations in key parameters. Our comprehensive sensitivity analysis, which directly addresses the practical concerns regarding inspection accuracy and market uncertainty, confirms the system’s robustness. Through the implementation of the Improved Grey Wolf Optimization (IGWO) algorithm, the system achieves a precise equilibrium between inspection costs and quality output. The IGWO mechanism enables intelligent adaptation of quality control strategies: under favorable circumstances with rising prices (a proxy for demand stability), the system autonomously reduces intermediate inspection investments while prioritizing final output; conversely, when facing elevated swap losses (reflecting higher quality failure costs), it intensifies front-end inspection. Empirical evidence demonstrates that this IGWO-optimized framework maintains superior performance across diverse parameter scenarios, effectively validating the model’s practical utility despite its foundational assumptions. This intelligent, algorithm-based adaptive optimization framework provides robust technical support for precision management in modern manufacturing systems.
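The IGWO mechanisms invoked above — Latin hypercube initialization, an evolutionary factor that favors SBX among the Alpha/Beta/Delta leaders early and the classic encircling update later, and greedy mutation-assisted opposition learning on the worst quartile — can be sketched as follows. This is a minimal illustrative reimplementation under stated assumptions, not the authors' code; details such as the mutation scale and the leader-reflection form of the opposition step are assumptions.

```python
import random

def lhs_init(n, dim, lo, hi):
    """Latin hypercube sampling: one point per equal-width stratum in
    each dimension, with strata shuffled independently per dimension."""
    pop = [[0.0] * dim for _ in range(n)]
    for d in range(dim):
        strata = list(range(n))
        random.shuffle(strata)
        for i, s in enumerate(strata):
            pop[i][d] = lo + (s + random.random()) / n * (hi - lo)
    return pop

def sbx_child(p1, p2, eta=2.0):
    """Simulated binary crossover (SBX); returns a single child."""
    child = []
    for a, b in zip(p1, p2):
        u = random.random()
        spread = (2 * u) ** (1 / (eta + 1)) if u <= 0.5 else (2 * (1 - u)) ** (-1 / (eta + 1))
        child.append(0.5 * ((1 + spread) * a + (1 - spread) * b))
    return child

def igwo(f, dim, n=30, iters=200, lo=-100.0, hi=100.0):
    clamp = lambda x: [min(hi, max(lo, v)) for v in x]
    pop = lhs_init(n, dim, lo, hi)
    fit = [f(x) for x in pop]
    for t in range(iters):
        order = sorted(range(n), key=fit.__getitem__)
        alpha, beta, delta = pop[order[0]], pop[order[1]], pop[order[2]]
        a = 2.0 * (1 - t / iters)   # exploration coefficient decays linearly
        ef = t / iters              # evolutionary factor: SBX-heavy early on
        for i in range(n):
            if random.random() > ef:  # early iterations: SBX among the leaders
                cand = sbx_child(sbx_child(alpha, beta), delta)
            else:                     # later iterations: classic GWO encircling
                cand = []
                for d in range(dim):
                    est = 0.0
                    for leader in (alpha, beta, delta):
                        A = a * (2 * random.random() - 1)
                        C = 2 * random.random()
                        est += leader[d] - A * abs(C * leader[d] - pop[i][d])
                    cand.append(est / 3)
            cand = clamp(cand)
            fc = f(cand)
            if fc < fit[i]:           # greedy acceptance only
                pop[i], fit[i] = cand, fc
        # Mutation-assisted opposition learning on the worst quartile:
        # reflect through the Alpha leader, perturb, accept only improvements.
        sigma = 0.1 * (hi - lo) * (1 - t / iters)
        for i in order[-(n // 4):]:
            opp = clamp([2 * av - x + random.gauss(0, sigma)
                         for av, x in zip(alpha, pop[i])])
            fo = f(opp)
            if fo < fit[i]:
                pop[i], fit[i] = opp, fo
    best = min(range(n), key=fit.__getitem__)
    return pop[best], fit[best]

# Sanity check on a sphere function (not one of the production objectives).
random.seed(0)
x_best, f_best = igwo(lambda x: sum(v * v for v in x), dim=10)
print(f_best)
```

The greedy acceptance in both steps guarantees monotone per-individual improvement, which is what lets the opposition pass refine the worst quartile without disturbing the leaders.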

5. Conclusions

This research addresses quality–resource asymmetries in complex production systems through a systemic framework integrating the MsPID model with a two-stage sampling mechanism. Experimental results demonstrate that confidence levels serve as a critical calibration parameter for defect-rate estimation symmetry, with the 95% confidence strategy achieving an optimal profit of 43,800 by harmonizing inspection expenditure with quality assurance. Sensitivity analyses further reveal that finished product price increases can yield nearly tenfold profit improvements, while the system maintains robust profitability under fluctuating swap loss conditions by adaptively modulating inspection intensity. The proposed IGWO algorithm effectively synchronizes multi-process decisions, underscoring the model's systemic adaptability and offering a quantifiable bio-inspired intelligence mechanism suitable for smart manufacturing environments.
Guided by parameter sensitivity findings under uncertainty—which validate the model’s robustness to variations in defect-rate confidence and economic parameters, thereby addressing concerns related to inspection accuracy and stochastic sales—we propose a layered optimization strategy for industrial implementation. It should be acknowledged that the current framework operates under idealized assumptions, such as perfect inspection and deterministic sales; however, the sensitivity analyses confirm that the core decision logic remains effective under a wide range of realistic deviations. Enterprises may thus confidently calibrate confidence levels to sustain process stability through symmetric defect-rate estimation. In markets with high premium potential, inspection standards for intermediate products may be strategically relaxed, reallocating resources toward finished product quality and delivery efficiency. Conversely, during high-risk phases characterized by increased replacement or conversion losses, front-end inspection mechanisms should be systematically reinforced. Importantly, the model’s flexible inspection–dismantling framework enables dynamic cost–benefit adjustment, offering significant value for intelligent discrete manufacturing systems. Future research will incorporate real-time pricing mechanisms and explicitly model inspection errors and demand stochasticity to further strengthen market–production synchronization within an integrated systemic framework.

Author Contributions

Conceptualization, W.G. and X.Z.; methodology, W.G. and X.Z.; software, W.G. and X.Z.; validation, W.G. and W.W.; formal analysis, W.G. and X.Z.; investigation, X.Z. and C.-A.X.; resources, W.G. and X.Z.; data curation, W.G. and X.Z.; writing—original draft preparation, W.G. and X.Z.; writing—review and editing, W.W. and C.-A.X.; visualization, W.G. and X.Z.; supervision, W.G. and W.W.; project administration, W.G. and W.W.; funding acquisition, W.G., W.W. and C.-A.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study are available from the corresponding author upon reasonable request.

Acknowledgments

We would like to express our sincere appreciation to the anonymous reviewers, the editor, and all who contributed to this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Single-process Production Scenario.
| Scenario | Spare Part 1 (Defect Rate / Unit Price / Inspection Cost) | Spare Part 2 (Defect Rate / Unit Price / Inspection Cost) | Finished Product (Defect Rate / Assembly Cost / Inspection Cost / Market Price / Exchange Cost) | Disassembly Cost of Defective Finished Product |
| 1 | 10% / 4 / 2 | 10% / 18 / 3 | 10% / 6 / 3 / 56 / 6 | 5 |
| 2 | 20% / 4 / 2 | 20% / 18 / 3 | 20% / 6 / 3 / 56 / 6 | 5 |
| 3 | 10% / 4 / 2 | 10% / 18 / 3 | 10% / 6 / 3 / 56 / 30 | 5 |
| 4 | 20% / 4 / 1 | 20% / 18 / 1 | 20% / 6 / 2 / 56 / 30 | 5 |
| 5 | 10% / 4 / 8 | 20% / 18 / 1 | 10% / 6 / 2 / 56 / 10 | 5 |
| 6 | 5% / 4 / 2 | 5% / 18 / 3 | 5% / 6 / 3 / 56 / 10 | 40 |
Table A2. Multi-process Production Scenario.
| Spare Part | Defect Rate | Purchase Unit Price | Inspection Cost |
| 1 | 10% | 2 | 1 |
| 2 | 10% | 8 | 1 |
| 3 | 10% | 12 | 2 |
| 4 | 10% | 2 | 1 |
| 5 | 10% | 8 | 1 |
| 6 | 10% | 12 | 2 |
| 7 | 10% | 8 | 1 |
| 8 | 10% | 12 | 2 |
| Semi-Finished Product | Defect Rate | Assembly Cost | Inspection Cost | Disassembly Cost |
| 1 | 10% | 8 | 4 | 6 |
| 2 | 10% | 8 | 4 | 6 |
| 3 | 10% | 8 | 4 | 6 |
| Finished Product | Defect Rate | Assembly Cost | Inspection Cost | Disassembly Cost | Market Price | Exchange Cost |
| Finished product | 10% | 8 | 6 | 10 | 200 | 40 |

Figure 1. Complete model flow chart with multiple processes and multiple parts.
Figure 2. Grey Wolf Individual Position Update Diagram.
Figure 3. IGWO Flowchart.
Figure 4. IGWO Solution Flowchart.
Figure 5. Multi-process Assembly Scenario.
Figure 6. Running Time of Each Element.
Figure 7. The Friedman Value of Each Element on the CEC2022 Test Suite (Dim = 10).
Figure 8. The Convergence Curves of Different Algorithms.
Figure 9. Running Time of Different Algorithms.
Figure 10. The Friedman Value of Different Algorithms on the CEC2022 Test Suite (Dim = 10).
Figure 11. Running Time of Different Global Optimization Methods.
Figure 12. The Friedman Value of Different Global Optimization Methods on the CEC2022 Test Suite (Dim = 10).
Figure 13. Convergence Curves for Various Situations in a Single Process.
Figure 14. Convergence Curves for Situations in a Multi-process.
Table 1. Symbol Description.
| Variable | Definition |
| D_k | 0–1 variable, used to determine whether to test spare parts or whether to test and disassemble finished products |
| C_lg | Purchase cost of spare parts |
| Q_i | Quantity of the i-th spare part |
| u_i | Purchase unit price of the i-th spare part |
| C_lj | Inspection cost of spare parts |
| d_i | Inspection cost of the i-th spare part |
| C_cz | Assembly cost of finished products |
| C_cs | Sales revenue of finished products |
| Q_c | Quantity of finished products |
| a_c | Assembly cost per finished product |
| S_c | Market price per finished product |
| C_cj | Inspection cost of finished products |
| d_c | Inspection cost per finished product |
| P_c | Defect rate of finished products |
| C_cc | Disassembly cost of defective finished products |
| l_c | Disassembly cost per defective finished product |
| C_cd | Exchange cost of finished products |
| C_ct | Return cost of finished products |
| e_c | Exchange cost per finished product |
| r_c | Return cost per finished product |
| C_bz | Assembly cost of semi-finished products |
| Q_b | Quantity of the b-th semi-finished product |
| a_b | Assembly cost of the b-th semi-finished product |
| C_bj | Inspection cost of semi-finished products |
| d_b | Inspection cost of the b-th semi-finished product |
| C_bc | Disassembly cost of defective semi-finished products |
| P_b | Defect rate of the b-th defective semi-finished product |
| l_b | Disassembly cost of the b-th defective semi-finished product |
| Z | Total cost |
Table 2. Results for Various Production Scenarios.
| Scenario | Defect Rate Interval of Semi-Finished Product 1 | Defect Rate Interval of Semi-Finished Product 2 | Defect Rate Interval of Semi-Finished Product 3 | Defect Rate Interval of Finished Product |
| Production scenario 1 | × | × | × | [0.1, 0.356] |
| Production scenario 2 | × | × | × | [0.2, 0.390] |
| Production scenario 3 | × | × | × | [0.1, 0.356] |
| Production scenario 4 | × | × | × | [0.2, 0.390] |
| Production scenario 5 | × | × | × | [0.1, 0.356] |
| Production scenario 6 | × | × | × | [0.05, 0.320] |
| Multi-process production inspection scenario | [0, 0.455] | [0, 0.455] | [0, 0.356] | [0, 0.828] |
Table 3. CEC2022 Test Table.
| Category | No. | Function | F_i* |
| Unimodal function | 1 | Shifted and Fully Rotated Zakharov Function | 300 |
| Basic functions | 2 | Shifted and Fully Rotated Rosenbrock's Function | 400 |
| | 3 | Shifted and Fully Rotated Expanded Schaffer's f6 Function | 600 |
| | 4 | Shifted and Fully Rotated Non-Continuous Rastrigin's Function | 800 |
| | 5 | Shifted and Fully Rotated Levy Function | 900 |
| Hybrid functions | 6 | Hybrid Function 1 (N = 3) | 1800 |
| | 7 | Hybrid Function 2 (N = 6) | 2000 |
| | 8 | Hybrid Function 3 (N = 5) | 2200 |
| Composition functions | 9 | Composition Function 1 (N = 5) | 2300 |
| | 10 | Composition Function 2 (N = 4) | 2400 |
| | 11 | Composition Function 3 (N = 5) | 2600 |
| | 12 | Composition Function 4 (N = 6) | 2700 |
Search range: [−100, 100]^D
Table 4. Initial Parameter Settings of Algorithms.
| Algorithm | Parameters |
| IGWO | η_c = 2 |
| GWO | No fixed initial parameters |
| hGWOA | No fixed initial parameters |
| RSMGWO | α_1 = 2, B_a = 1 |
| DE | F = 0.5, CR = 0.5 |
| CMA-ES | No fixed initial parameters |
Table 5. The Results of Each Element on the CEC2022 Test (Dim = 10).
| Function | IGWO (Best / Avg / Std) | GWO (Best / Avg / Std) | GWO-LHS (Best / Avg / Std) | GWO-SBX (Best / Avg / Std) | GWO-OL (Best / Avg / Std) |
| F1 | 3.01 × 10^2 / 3.40 × 10^2 / 35.1 | 4.12 × 10^2 / 1.68 × 10^3 / 1.64 × 10^3 | 3.77 × 10^2 / 1.69 × 10^3 / 1.74 × 10^3 | 3.00 × 10^2 / 3.54 × 10^2 / 31.7 | 3.72 × 10^2 / 4.17 × 10^2 / 33.3 |
| F2 | 4.00 × 10^2 / 4.10 × 10^2 / 14.2 | 4.03 × 10^2 / 4.20 × 10^2 / 19.2 | 4.00 × 10^2 / 4.18 × 10^2 / 23.7 | 4.07 × 10^2 / 4.29 × 10^2 / 32.7 | 4.05 × 10^2 / 4.24 × 10^2 / 24.6 |
| F3 | 6.00 × 10^2 / 6.00 × 10^2 / 0.142 | 6.00 × 10^2 / 6.00 × 10^2 / 0.305 | 6.00 × 10^2 / 6.01 × 10^2 / 1.51 | 6.00 × 10^2 / 6.00 × 10^2 / 0.475 | 6.00 × 10^2 / 6.00 × 10^2 / 0.351 |
| F4 | 8.02 × 10^2 / 8.09 × 10^2 / 4.36 | 8.06 × 10^2 / 8.13 × 10^2 / 6.42 | 8.06 × 10^2 / 8.15 × 10^2 / 8.16 | 8.06 × 10^2 / 8.16 × 10^2 / 9.81 | 8.04 × 10^2 / 8.14 × 10^2 / 9.76 |
| F5 | 9.00 × 10^2 / 9.08 × 10^2 / 14.0 | 9.00 × 10^2 / 9.06 × 10^2 / 8.36 | 9.00 × 10^2 / 9.01 × 10^2 / 0.648 | 9.00 × 10^2 / 9.01 × 10^2 / 0.720 | 9.00 × 10^2 / 9.03 × 10^2 / 6.01 |
| F6 | 2.22 × 10^3 / 5.35 × 10^3 / 2.33 × 10^3 | 2.24 × 10^3 / 5.39 × 10^3 / 2.62 × 10^3 | 2.57 × 10^3 / 6.62 × 10^3 / 2.08 × 10^3 | 2.14 × 10^3 / 5.61 × 10^3 / 2.78 × 10^3 | 2.10 × 10^3 / 4.48 × 10^3 / 2.75 × 10^3 |
| F7 | 2.02 × 10^3 / 2.03 × 10^3 / 7.96 | 2.02 × 10^3 / 2.04 × 10^3 / 11.8 | 2.02 × 10^3 / 2.04 × 10^3 / 14.2 | 2.00 × 10^3 / 2.02 × 10^3 / 10.8 | 2.01 × 10^3 / 2.03 × 10^3 / 10.4 |
| F8 | 2.20 × 10^3 / 2.22 × 10^3 / 8.77 | 2.22 × 10^3 / 2.23 × 10^3 / 2.07 | 2.20 × 10^3 / 2.22 × 10^3 / 10.1 | 2.22 × 10^3 / 2.23 × 10^3 / 1.87 | 2.22 × 10^3 / 2.22 × 10^3 / 2.94 |
| F9 | 2.53 × 10^3 / 2.53 × 10^3 / 12.8 | 2.53 × 10^3 / 2.56 × 10^3 / 26.3 | 2.53 × 10^3 / 2.55 × 10^3 / 27.3 | 2.53 × 10^3 / 2.54 × 10^3 / 46.4 | 2.53 × 10^3 / 2.53 × 10^3 / 12.7 |
| F10 | 2.50 × 10^3 / 2.53 × 10^3 / 54.8 | 2.50 × 10^3 / 2.57 × 10^3 / 58.6 | 2.50 × 10^3 / 2.59 × 10^3 / 49.9 | 2.50 × 10^3 / 2.56 × 10^3 / 58.4 | 2.50 × 10^3 / 2.51 × 10^3 / 15.0 |
| F11 | 2.60 × 10^3 / 2.70 × 10^3 / 1.46 × 10^2 | 2.73 × 10^3 / 2.92 × 10^3 / 1.55 × 10^2 | 2.60 × 10^3 / 2.95 × 10^3 / 1.69 × 10^2 | 2.60 × 10^3 / 2.86 × 10^3 / 1.09 × 10^2 | 2.60 × 10^3 / 2.75 × 10^3 / 1.57 × 10^2 |
| F12 | 2.86 × 10^3 / 2.86 × 10^3 / 0.916 | 2.86 × 10^3 / 2.87 × 10^3 / 5.06 | 2.86 × 10^3 / 2.87 × 10^3 / 6.56 | 2.86 × 10^3 / 2.86 × 10^3 / 1.82 | 2.86 × 10^3 / 2.86 × 10^3 / 0.616 |
Table 6. Wilcoxon Test of Each Element.
| | IGWO | GWO | GWO-LHS | GWO-SBX | GWO-OL |
| IGWO | — | 2.44 × 10^−3 | 4.88 × 10^−3 | 2.69 × 10^−2 | 0.470 |
| GWO | 2.44 × 10^−3 | — | 0.470 | 0.176 | 1.22 × 10^−2 |
| GWO-LHS | 4.88 × 10^−3 | 0.470 | — | 6.40 × 10^−2 | 6.40 × 10^−2 |
| GWO-SBX | 2.69 × 10^−2 | 0.176 | 6.40 × 10^−2 | — | 0.301 |
| GWO-OL | 0.470 | 1.22 × 10^−2 | 6.40 × 10^−2 | 0.301 | — |
Table 7. The Results of Different Algorithms on the CEC2022 Test (Dim = 10).
| Function | IGWO (Best / Avg / Std) | GWO (Best / Avg / Std) | hGWOA (Best / Avg / Std) | RSMGWO (Best / Avg / Std) |
| F1 | 3.13 × 10^2 / 3.48 × 10^2 / 36.0 | 4.03 × 10^2 / 1.22 × 10^3 / 1.24 × 10^3 | 3.02 × 10^2 / 8.37 × 10^2 / 8.72 × 10^2 | 1.84 × 10^3 / 4.91 × 10^3 / 2.12 × 10^3 |
| F2 | 4.06 × 10^2 / 4.09 × 10^2 / 1.82 | 4.02 × 10^2 / 4.24 × 10^2 / 25.2 | 4.00 × 10^2 / 4.27 × 10^2 / 36.6 | 4.17 × 10^2 / 4.26 × 10^2 / 9.90 |
| F3 | 6.00 × 10^2 / 6.00 × 10^2 / 0.208 | 6.00 × 10^2 / 6.01 × 10^2 / 1.20 | 6.00 × 10^2 / 6.02 × 10^2 / 3.44 | 6.10 × 10^2 / 6.16 × 10^2 / 5.20 |
| F4 | 8.04 × 10^2 / 8.13 × 10^2 / 8.01 | 8.06 × 10^2 / 8.10 × 10^2 / 3.12 | 8.14 × 10^2 / 8.33 × 10^2 / 12.1 | 8.45 × 10^2 / 8.61 × 10^2 / 13.7 |
| F5 | 9.00 × 10^2 / 9.01 × 10^2 / 0.727 | 9.00 × 10^2 / 9.05 × 10^2 / 13.1 | 9.01 × 10^2 / 1.01 × 10^3 / 1.20 × 10^2 | 9.60 × 10^2 / 1.08 × 10^3 / 1.81 × 10^2 |
| F6 | 2.00 × 10^3 / 4.73 × 10^3 / 2.96 × 10^3 | 3.39 × 10^3 / 6.43 × 10^3 / 2.12 × 10^3 | 2.21 × 10^3 / 4.60 × 10^3 / 2.16 × 10^3 | 2.79 × 10^4 / 4.85 × 10^5 / 4.05 × 10^5 |
| F7 | 2.00 × 10^3 / 2.02 × 10^3 / 7.42 | 2.02 × 10^3 / 2.03 × 10^3 / 7.94 | 2.02 × 10^3 / 2.03 × 10^3 / 8.80 | 2.04 × 10^3 / 2.05 × 10^3 / 10.8 |
| F8 | 2.20 × 10^3 / 2.22 × 10^3 / 10.6 | 2.22 × 10^3 / 2.23 × 10^3 / 2.09 | 2.22 × 10^3 / 2.22 × 10^3 / 1.85 | 2.23 × 10^3 / 2.23 × 10^3 / 2.14 |
| F9 | 2.53 × 10^3 / 2.53 × 10^3 / 0.389 | 2.53 × 10^3 / 2.55 × 10^3 / 24.6 | 2.53 × 10^3 / 2.54 × 10^3 / 20.2 | 2.53 × 10^3 / 2.55 × 10^3 / 27.8 |
| F10 | 2.50 × 10^3 / 2.52 × 10^3 / 46.4 | 2.50 × 10^3 / 2.57 × 10^3 / 58.3 | 2.50 × 10^3 / 2.56 × 10^3 / 63.4 | 2.50 × 10^3 / 2.52 × 10^3 / 54.6 |
| F11 | 2.60 × 10^3 / 2.69 × 10^3 / 1.25 × 10^2 | 2.91 × 10^3 / 2.97 × 10^3 / 92.2 | 2.60 × 10^3 / 2.74 × 10^3 / 1.48 × 10^2 | 2.76 × 10^3 / 2.80 × 10^3 / 18.0 |
| F12 | 2.86 × 10^3 / 2.86 × 10^3 / 1.42 | 2.86 × 10^3 / 2.86 × 10^3 / 0.981 | 2.86 × 10^3 / 2.88 × 10^3 / 21.7 | 2.86 × 10^3 / 2.86 × 10^3 / 0.888 |
Table 8. Wilcoxon Test of Different Algorithms.
| | IGWO | GWO | hGWOA | RSMGWO |
| IGWO | — | 2.44 × 10^−3 | 2.69 × 10^−2 | 1.47 × 10^−3 |
| GWO | 2.44 × 10^−3 | — | 0.677 | 0.110 |
| hGWOA | 2.69 × 10^−2 | 0.677 | — | 4.25 × 10^−2 |
| RSMGWO | 1.47 × 10^−3 | 0.110 | 4.25 × 10^−2 | — |
Table 9. The Results of Different Global Optimization Methods on the CEC2022 Test (Dim = 10).
| Function | IGWO (Best / Avg / Std) | DE (Best / Avg / Std) | CMA-ES (Best / Avg / Std) |
| F1 | 3.01 × 10^2 / 3.40 × 10^2 / 35.1 | 7.63 × 10^2 / 1.59 × 10^3 / 4.76 × 10^2 | 3.00 × 10^2 / 9.50 × 10^2 / 8.42 × 10^2 |
| F2 | 4.00 × 10^2 / 4.10 × 10^2 / 14.2 | 4.00 × 10^2 / 4.04 × 10^2 / 3.14 | 4.00 × 10^2 / 4.11 × 10^2 / 7.21 |
| F3 | 6.00 × 10^2 / 6.00 × 10^2 / 0.142 | 6.00 × 10^2 / 6.00 × 10^2 / 7.12 × 10^−2 | 6.00 × 10^2 / 6.00 × 10^2 / 1.27 × 10^−4 |
| F4 | 8.02 × 10^2 / 8.09 × 10^2 / 4.36 | 8.11 × 10^2 / 8.13 × 10^2 / 2.86 | 8.01 × 10^2 / 8.03 × 10^2 / 1.75 |
| F5 | 9.00 × 10^2 / 9.08 × 10^2 / 14.0 | 9.00 × 10^2 / 9.00 × 10^2 / 0.344 | 9.00 × 10^2 / 9.00 × 10^2 / 0.00 |
| F6 | 2.22 × 10^3 / 5.35 × 10^3 / 2.33 × 10^3 | 1.81 × 10^3 / 2.17 × 10^3 / 5.04 × 10^2 | 1.87 × 10^3 / 3.72 × 10^3 / 1.91 × 10^3 |
| F7 | 2.02 × 10^3 / 2.03 × 10^3 / 7.96 | 2.00 × 10^3 / 2.00 × 10^3 / 2.50 | 2.02 × 10^3 / 2.04 × 10^3 / 45.1 |
| F8 | 2.20 × 10^3 / 2.22 × 10^3 / 8.77 | 2.21 × 10^3 / 2.22 × 10^3 / 5.74 | 2.22 × 10^3 / 2.24 × 10^3 / 37.1 |
| F9 | 2.53 × 10^3 / 2.53 × 10^3 / 12.8 | 2.53 × 10^3 / 2.53 × 10^3 / 1.68 | 2.54 × 10^3 / 2.57 × 10^3 / 38.3 |
| F10 | 2.50 × 10^3 / 2.53 × 10^3 / 54.8 | 2.40 × 10^3 / 2.41 × 10^3 / 24.6 | 2.50 × 10^3 / 2.55 × 10^3 / 55.4 |
| F11 | 2.60 × 10^3 / 2.70 × 10^3 / 1.46 × 10^2 | 2.90 × 10^3 / 3.11 × 10^3 / 1.79 × 10^2 | 2.60 × 10^3 / 2.87 × 10^3 / 94.9 |
| F12 | 2.86 × 10^3 / 2.86 × 10^3 / 0.916 | 2.86 × 10^3 / 2.87 × 10^3 / 0.936 | 2.86 × 10^3 / 2.87 × 10^3 / 0.942 |
Table 10. Wilcoxon Test of Different Global Optimization Methods.
| | IGWO | DE | CMA-ES |
| IGWO | — | 0.424 | 0.151 |
| DE | 0.424 | — | 0.470 |
| CMA-ES | 0.151 | 0.470 | — |
Table 11. Results of Various Situations in a Single Process.
| Case | Inspecting Spare Parts 1 | Inspecting Spare Parts 2 | Inspecting Finished Products | Disassembling Unqualified Finished Products | Total Profit (the Opposite of Total Cost) |
| Case 1 | 0.16% | 14.84% | 100.00% | 100.00% | 10,916 |
| Case 2 | 19.71% | 1.10% | 100.00% | 100.00% | 8324 |
| Case 3 | 30.52% | 0.07% | 100.00% | 100.00% | 10,700 |
| Case 4 | 4.95% | 37.22% | 100.00% | 100.00% | 9045 |
| Case 5 | 3.70% | 6.14% | 100.00% | 100.00% | 11,145 |
| Case 6 | 22.90% | 79.66% | 12.16% | 18.52% | 12,026 |
Table 12. Results of Situation in a Multi-process.
| Decision | IGWO | GWO | hGWOA | RSMGWO |
| Inspecting spare parts 1 | 39.94% | 58.13% | 47.00% | 55.80% |
| Inspecting spare parts 2 | 21.16% | 30.53% | 70.03% | 3.06% |
| Inspecting spare parts 3 | 27.00% | 14.67% | 60.98% | 39.60% |
| Inspecting spare parts 4 | 48.57% | 6.13% | 67.83% | 2.44% |
| Inspecting spare parts 5 | 22.59% | 11.01% | 26.16% | 63.23% |
| Inspecting spare parts 6 | 5.13% | 80.00% | 91.08% | 22.03% |
| Inspecting spare parts 7 | 22.12% | 17.70% | 74.66% | 45.08% |
| Inspecting spare parts 8 | 34.64% | 9.27% | 10.47% | 81.59% |
| Inspecting semi-finished products 1 | 40.72% | 6.73% | 89.79% | 33.46% |
| Inspecting semi-finished products 2 | 28.53% | 54.90% | 38.25% | 15.25% |
| Inspecting semi-finished products 3 | 27.37% | 12.63% | 53.49% | 49.28% |
| Dismantling semi-finished products 1 | 16.09% | 50.90% | 47.25% | 0.45% |
| Dismantling semi-finished products 2 | 1.19% | 11.85% | 61.62% | 87.11% |
| Dismantling semi-finished products 3 | 65.60% | 79.23% | 79.91% | 9.90% |
| Inspecting finished products | 100.00% | 100.00% | 100.00% | 100.00% |
| Dismantling of unqualified finished products | 62.65% | 70.13% | 88.61% | 56.66% |
| Total profit (the opposite of total cost) | 43,800 | 42,800 | 42,800 | 43,400 |
Table 13. Estimated defective rate at different confidence levels.

| Confidence Level | 15% | 35% | 55% | 75% |
|---|---|---|---|---|
| Part 1 | [0.000, 0.038] | [0.000, 0.050] | [0.000, 0.066] | [0.000, 0.090] |
| Part 2 | [0.000, 0.038] | [0.000, 0.050] | [0.000, 0.066] | [0.000, 0.090] |
| Part 3 | [0.000, 0.038] | [0.000, 0.050] | [0.000, 0.066] | [0.000, 0.090] |
| Part 4 | [0.000, 0.038] | [0.000, 0.050] | [0.000, 0.066] | [0.000, 0.090] |
| Part 5 | [0.000, 0.038] | [0.000, 0.050] | [0.000, 0.066] | [0.000, 0.090] |
| Part 6 | [0.000, 0.038] | [0.000, 0.050] | [0.000, 0.066] | [0.000, 0.090] |
| Part 7 | [0.000, 0.038] | [0.000, 0.050] | [0.000, 0.066] | [0.000, 0.090] |
| Part 8 | [0.000, 0.038] | [0.000, 0.050] | [0.000, 0.066] | [0.000, 0.090] |
| Semi-finished product 1 | [0.100, 0.199] | [0.100, 0.228] | [0.100, 0.267] | [0.100, 0.322] |
| Semi-finished product 2 | [0.100, 0.199] | [0.100, 0.228] | [0.100, 0.267] | [0.100, 0.322] |
| Semi-finished product 3 | [0.100, 0.167] | [0.100, 0.188] | [0.100, 0.215] | [0.100, 0.255] |
| Finished product | [0.100, 0.519] | [0.100, 0.564] | [0.100, 0.620] | [0.100, 0.692] |
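The intervals in Table 13 widen as the confidence level rises, as expected for confidence bounds learned from sampling inspection. The exact estimator is not given in this excerpt; a minimal sketch of one standard choice, a one-sided exact binomial (Clopper–Pearson) upper bound on the defect rate, is shown below. The sample size `n` and defect count `k` are hypothetical inputs, not values from the paper:

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def upper_defect_bound(k: int, n: int, confidence: float, tol: float = 1e-9) -> float:
    """One-sided Clopper-Pearson upper bound on the defect rate:
    the largest p at which observing <= k defects in n samples
    still has probability at least 1 - confidence."""
    alpha = 1.0 - confidence
    if k >= n:
        return 1.0
    lo, hi = k / n, 1.0
    # binom_cdf is decreasing in p, so bisect for the root of cdf(p) = alpha.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For zero observed defects the bound has the closed form 1 − (1 − confidence)^(1/n), so the bound grows with the confidence level, matching the widening intervals in Table 13.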
Table 14. Average Decision Results at Different Confidence Levels.

| Confidence Level | 15% | 35% | 55% | 75% |
|---|---|---|---|---|
| Inspecting spare parts 1 | 59.43% | 13.61% | 25.20% | 22.16% |
| Inspecting spare parts 2 | 62.92% | 52.89% | 15.08% | 15.04% |
| Inspecting spare parts 3 | 55.71% | 7.48% | 59.86% | 39.56% |
| Inspecting spare parts 4 | 34.29% | 25.45% | 67.40% | 2.89% |
| Inspecting spare parts 5 | 55.82% | 25.27% | 65.29% | 16.39% |
| Inspecting spare parts 6 | 88.68% | 30.75% | 81.28% | 15.60% |
| Inspecting spare parts 7 | 74.23% | 33.93% | 34.52% | 31.65% |
| Inspecting spare parts 8 | 96.22% | 39.92% | 30.52% | 15.25% |
| Inspecting semi-finished products 1 | 8.58% | 13.90% | 20.93% | 83.82% |
| Inspecting semi-finished products 2 | 14.73% | 42.25% | 28.72% | 5.22% |
| Inspecting semi-finished products 3 | 4.56% | 76.79% | 20.49% | 4.52% |
| Dismantling semi-finished products 1 | 30.07% | 10.35% | 60.80% | 37.94% |
| Dismantling semi-finished products 2 | 46.45% | 43.32% | 57.60% | 62.97% |
| Dismantling semi-finished products 3 | 21.13% | 1.48% | 60.16% | 48.44% |
| Inspecting finished products | 100.00% | 100.00% | 100.00% | 100.00% |
| Dismantling of unqualified finished products | 97.46% | 71.98% | 48.19% | 35.39% |
| Total profit (negative of total cost) | 43,400 | 41,600 | 41,200 | 43,600 |
Table 15. Average Decision Results for Different Finished Product Selling Prices.

| Finished Product Price | −40% | −20% | 0% | 20% | 40% |
|---|---|---|---|---|---|
| Inspecting spare parts 1 | 44.27% | 33.77% | 39.94% | 36.15% | 59.86% |
| Inspecting spare parts 2 | 30.81% | 19.91% | 21.16% | 11.93% | 36.28% |
| Inspecting spare parts 3 | 46.46% | 73.99% | 27.00% | 42.43% | 66.98% |
| Inspecting spare parts 4 | 76.88% | 13.91% | 48.57% | 45.34% | 34.99% |
| Inspecting spare parts 5 | 56.89% | 41.53% | 22.59% | 54.54% | 43.67% |
| Inspecting spare parts 6 | 19.30% | 45.74% | 5.13% | 24.29% | 60.02% |
| Inspecting spare parts 7 | 22.13% | 61.12% | 22.12% | 35.46% | 2.36% |
| Inspecting spare parts 8 | 76.96% | 73.98% | 34.64% | 63.71% | 97.83% |
| Inspecting semi-finished products 1 | 36.67% | 82.37% | 40.72% | 0.00% | 4.35% |
| Inspecting semi-finished products 2 | 55.19% | 34.79% | 28.53% | 20.05% | 0.00% |
| Inspecting semi-finished products 3 | 22.41% | 16.57% | 27.37% | 53.53% | 13.64% |
| Dismantling semi-finished products 1 | 22.02% | 12.13% | 16.09% | 35.62% | 34.83% |
| Dismantling semi-finished products 2 | 4.07% | 31.73% | 1.19% | 84.45% | 96.11% |
| Dismantling semi-finished products 3 | 64.97% | 50.49% | 65.60% | 94.79% | 52.10% |
| Inspecting finished products | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% |
| Dismantling of unqualified finished products | 61.13% | 59.66% | 62.65% | 7.30% | 0.00% |
| Total profit (negative of total cost) | 7920 | 24,600 | 43,800 | 59,640 | 78,480 |
Table 16. Average Decision Results with Different Swap Losses.

| Swap Losses | −20% | −10% | 0% | 10% | 20% |
|---|---|---|---|---|---|
| Inspecting spare parts 1 | 78.75% | 47.15% | 39.94% | 40.38% | 59.43% |
| Inspecting spare parts 2 | 3.31% | 55.16% | 21.16% | 63.62% | 62.92% |
| Inspecting spare parts 3 | 38.68% | 74.12% | 27.00% | 18.60% | 55.71% |
| Inspecting spare parts 4 | 88.42% | 67.84% | 48.57% | 6.96% | 34.29% |
| Inspecting spare parts 5 | 12.65% | 66.59% | 22.59% | 70.70% | 55.82% |
| Inspecting spare parts 6 | 14.57% | 69.37% | 5.13% | 19.51% | 88.68% |
| Inspecting spare parts 7 | 12.89% | 56.46% | 22.12% | 22.46% | 74.23% |
| Inspecting spare parts 8 | 84.60% | 36.44% | 34.64% | 55.62% | 96.22% |
| Inspecting semi-finished products 1 | 7.49% | 23.91% | 40.72% | 0.31% | 8.58% |
| Inspecting semi-finished products 2 | 5.26% | 20.02% | 28.53% | 11.85% | 14.73% |
| Inspecting semi-finished products 3 | 3.26% | 36.58% | 27.37% | 7.54% | 4.56% |
| Dismantling semi-finished products 1 | 81.99% | 32.15% | 16.09% | 62.91% | 30.07% |
| Dismantling semi-finished products 2 | 53.18% | 32.48% | 1.19% | 17.30% | 46.45% |
| Dismantling semi-finished products 3 | 50.22% | 8.19% | 65.60% | 29.49% | 21.13% |
| Inspecting finished products | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% |
| Dismantling of unqualified finished products | 14.04% | 19.61% | 62.65% | 41.70% | 97.46% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Gan, W.; Zhou, X.; Wu, W.; Xu, C.-A. Research on the Optimization of Uncertain Multi-Stage Production Integrated Decisions Based on an Improved Grey Wolf Optimizer. Biomimetics 2025, 10, 775. https://doi.org/10.3390/biomimetics10110775
