Article

Intelligent Workforce Scheduling in Manufacturing: An Integrated Optimization Framework Using Genetic Algorithm, Monte Carlo Simulation, and Taguchi Method

1 Department of Industrial Engineering, Sakarya University, Sakarya 54050, Türkiye
2 Department of Computer Engineering, Sakarya University, Sakarya 54050, Türkiye
* Author to whom correspondence should be addressed.
Systems 2026, 14(1), 26; https://doi.org/10.3390/systems14010026
Submission received: 4 October 2025 / Revised: 17 December 2025 / Accepted: 22 December 2025 / Published: 25 December 2025
(This article belongs to the Special Issue Scheduling and Optimization in Production and Transportation Systems)

Abstract

Small and medium-sized enterprises (SMEs) account for a substantial share of industrial production, yet their operational performance is frequently constrained by delivery delays caused by inefficient workforce scheduling and task sequencing. These limitations reduce competitiveness, particularly in project-based manufacturing environments where task heterogeneity and multi-skill variability are prominent. To address this challenge, this study develops an artificial intelligence-based workforce planning framework tailored to capital-constrained manufacturing settings. The proposed hybrid system integrates a Genetic Algorithm (GA), Monte Carlo Simulation (MCS), and the Taguchi method to generate robust, uncertainty-aware labor assignments. The framework is validated through 18-month deployments in two manufacturing facilities with differing levels of technological maturity, demonstrating consistent improvements in operational outcomes. In addition, selected weekly instances of the deterministic core were solved with exact mixed-integer linear programming (MILP) solvers to assess the optimality gap and confirm consistent solution quality. Across the deployments, the system achieved 13% and 15% reductions in task completion times. The resulting GA–MCS–Taguchi pipeline runs efficiently on standard SME hardware, requires only short historical performance windows for calibration, and exhibits high user adoption in real industrial settings, indicating strong operational viability and practical deployability.

1. Introduction

Project-based manufacturing involves multiple distinct projects with precedence constraints, multi-mode activities, and a limited pool of multi-skilled staff, which has motivated a research focus on multi-skill resource-constrained scheduling problems [1]. Industrial domains such as heavy machinery and aerospace add large instance sizes and numerous real-world extensions that challenge pure exact methods and motivate simulation-based and hybrid solution approaches [2,3,4]. Despite significant investments in digitalization and automation on the shop floor, workforce planning is often still managed manually. Unlike traditional manufacturing scheduling, project-based environments face multi-project complexity, with simultaneous projects competing for shared resources. Multi-skilled workers possess varied and frequently overlapping competencies. Uncertainty in task durations and resource availability is a further critical factor for manufacturing companies, whose planning horizons can range from weeks to months. Complex task dependencies within and between projects create precedence constraints. Despite these complex conditions, many manufacturing companies lag behind in using technology for daily labor planning and scheduling. Industry 4.0 systems provide fast and reliable automation and analytics, yet supervisors still rely on manual schedules, spreadsheets, and trial-and-error adjustments to organize work [5,6]. These approaches are even more challenging in project-type production environments, where each order has unique characteristics, non-repetitive task sequences, and varying skill requirements. Therefore, this study examines a project-type production company and investigates an artificial intelligence-based approach to workforce scheduling [7,8].
One practical drawback of manual labor planning is its time cost: each production supervisor spends approximately 72 minutes per day on it. At the same time, resource utilization and on-time delivery rates were recorded at approximately 68% and 71%, respectively [9,10]. These levels damage customer relationships and increase unit costs, yet managers often fail to recognize labor planning as the source of the problem and a target for improvement. Labor planning is even more challenging in small and medium-sized enterprises (SMEs) due to limited buffers and weaker economies of scale. Some large companies have adopted Enterprise Resource Planning (ERP) systems with advanced planning and scheduling modules, albeit at high cost.
In contrast, SMEs often rely on manual planning. This study therefore aims to present a solution that SMEs can implement, since SMEs account for about half of the GDP of industrialized nations and employ more than 60% of the manufacturing workforce [11].
This study proposes an artificial intelligence-based method that addresses the limitations of the manual approach at minimal cost. Machine learning and transformer architectures, the most common AI approaches, have been reported to deliver 15–30% improvements over traditional heuristics and to reduce computation time by 50–80% on benchmark instances [12,13]. However, machine learning has seen limited adoption in manufacturing because it often requires extensive training data, GPU or distributed computing infrastructure, and specialized machine learning expertise [14].
Given these limitations, this paper introduces a genetic algorithm (GA)-based optimization framework adapted for small and medium-sized enterprises (SMEs). Three complementary techniques with proven industrial reliability are integrated. The first is the GA, an artificial intelligence algorithm that provides global search in a discrete, multimodal solution space without gradient or convexity assumptions [15,16]. For the problem addressed, a dataset was created from short historical windows of production-site performance data collected regularly over 4 to 8 weeks. These data are used to calibrate the underlying performance distributions, thereby avoiding reliance on multi-year records or large labeled datasets. Second, Monte Carlo Simulation (MCS) quantifies distributional outcomes by capturing stochastic worker performance, task-duration variability, and disruptions. Third, Taguchi orthogonal-array experiments determine parameter settings that are robust to the variability observed in the simulation model and the current production process; through a small number of structured experiments, the method identifies settings that maintain performance under environmental variation. The proposed approach can be applied to all standard tasks and is presented to production managers in a clear, interpretable format through a custom interface. Deployment was completed in 2 to 3 weeks at two sample facilities, and performance was maintained under uncertainty. The focus of the study is practical effectiveness and applicability rather than algorithmic innovation. To the best of our knowledge, this is the first study in the literature to present an AI-supported combined solution framework for workforce scheduling and planning.
The original contribution of this study is that, instead of developing complex metaheuristic designs for theoretical innovation, it offers a practical optimization framework by integrating established methods from the literature in a way suitable for SMEs, and this framework is empirically validated in real manufacturing facilities.
With this methodology, completion times for tasks assigned to employees were reduced by 13–15%, as verified over 18 months at two facilities that manufacture project-type machinery. Selected deterministic instances were solved with exact MILP solvers to provide an optimality baseline, while full-scale stochastic instances were handled by the proposed GA–MCS–Taguchi pipeline within practical runtimes. As explained in Section 3, an open-source implementation has been released to support replication and transfer. The solution targets SMEs with moderate digital maturity, characterized by regular collection of production and timekeeping data, regularly updated skill matrices, and access to basic industrial engineering or operations expertise. The findings therefore apply primarily to this segment rather than to all firms.
The effectiveness of the framework depends on three prerequisites: (i) available information on task prioritization, (ii) measurable performance variability between workers, and (iii) management commitment to data-driven scheduling. In the proposed framework, ‘intelligence’ refers to the combination of adaptive metaheuristic search, uncertainty-aware evaluation, and deployment-oriented design that supports planning managers in complex multi-skill production environments. Environments with highly unpredictable task arrivals or substantial but unmeasurable performance heterogeneity are less suitable for optimization-based scheduling, since the underlying models cannot reliably capture task loads or worker capabilities in such cases. Figure 1 presents the integrated structural framework developed for this study.
The remainder of the study is organized as follows: literature review, methodological framework, results and performance analysis, followed by discussion and conclusions.

2. Literature Review

Workforce scheduling in project-based manufacturing has evolved from a niche research area into a critical operational capability with substantial practical impact. The literature demonstrates clear pathways from theoretical optimization to industrial implementation, although significant gaps remain in scalability under uncertainty, human factors, and long-term integration and adoption studies. In project-type production environments, workforce planning with multi-skill requirements becomes an even more complex problem under precedence and resource constraints [7,8,17]. Practical reviews have repeatedly pointed out the gap between planning theory and routine industrial use, especially in dynamic environments [18,19]. These persistent gaps suggest that method availability alone is insufficient without deployment-oriented designs tailored to operational realities. Furthermore, capacity utilization has remained around 77%, indicating room for scheduling-driven efficiency gains [20]. Because this shortfall most strongly affects delivery dates, reported delivery benchmarks vary widely, with world-class performance targets far above typical practice [21].
MILP models are widely used to integrate project planning with workforce allocation, typically minimizing cost and time objectives. These models can generate optimal solutions for small and medium-sized instances, and trade-offs among objectives can be handled through various weighting schemes. However, solution times grow rapidly with problem size, which limits the practical use of MILP models in real-time industrial applications.
Although exact LP and MILP methods are powerful on modest instance sizes, the decomposition techniques they require are complex to implement and demand significant programming resources. Metaheuristics and hybrid algorithms, by contrast, offer scalability and flexibility on large instances that exact methods cannot provide [22]. Memetic approaches, which combine GA with local search, outperform greedy searches on large instances. Their key features are (i) preserving diversity through population-based search, (ii) using adaptive operators such as crossover, and (iii) refining solutions by coupling global GA search with local search [23]. Another hybrid is GA + SA (genetic algorithm combined with simulated annealing), which mitigates the GA's tendency to stall in local optima through SA's probabilistic acceptance of worse moves. Such hybrids balance diversity, counter premature convergence, and refine search strategies according to solution quality [24].
There are implementation challenges in capturing the plant-specific rules documented in empirical studies, such as shift constraints, competency requirements, and headcount limits, which often necessitate custom-tailored LP or hybrid solutions; MILP alone may not suffice without customization [25,26]. Accurate process-time estimates require skill matrices and availability data, but these data are often missing or outdated. Frequent operational disruptions, such as worker absenteeism, equipment failures, and changing work priorities, require planners to reschedule quickly. Planning managers may also resist “black box” optimization systems, making transparency and explainability critical in planning.
Critical success factors identified in the literature and in practice, together with remaining research gaps, call for combining optimization with simulation. The accurate and rapid results achieved in such studies build user trust and justify continued investment in improving key performance indicators (KPIs) [2,3]. Stochastic formulations and data-driven reallocation also address processing-time variability and disruptions; SA and matheuristics are practical approaches [3]. A significant gap in the existing literature is the widespread assumption that workers are interchangeable within skill categories, coupled with limited attention to individual preferences, job satisfaction, team dynamics, collaboration effectiveness, fatigue-related productivity changes, and learning or forgetting effects. Overcoming these limitations requires incorporating behavioral models into scheduling optimization, considering employee well-being and work–life balance, optimizing team composition beyond basic skill matching, and validating productivity assumptions with empirical evidence in real industrial settings. Our study contributes a scalable solution that offers a practical, ready-to-use application with openly shared source code, incorporating probabilistic factors and comprehensive multi-skill structures adaptable to large-scale environments, thus addressing the limitations identified in the literature. To our knowledge, no studies have been published that combine genetic algorithms, Monte Carlo simulation, and the Taguchi method in a unified methodological approach to workforce planning. This study addresses this gap by using an AI-based GA, the Taguchi method for parameter determination, and MCS to model and evaluate uncertainty more effectively.
Genetic Algorithms (GAs) have been shown to be effective for discrete, constraint-rich workforce scheduling, offering population-based global search and multi-objective handling [27,28,29]. Adaptive crossover strategies and job-based encoding schemes have further improved GA performance in scheduling contexts [30,31], while artificial immune systems and ordered flow-shop variants have extended the metaheuristic toolkit [32,33,34,35]. The remaining research gap indicates that explicit uncertainty assessment should be integrated with the search itself rather than deferred to retrospective sensitivity analyses.
Monte Carlo Simulation (MCS) has been utilized to quantify stochasticity in productivity, task times, and availability; thus, distributional results and robustness can be examined rather than simple point estimates [36,37,38]. Although GA and MCS have been combined in specific scheduling and maintenance contexts, integrated GA–MCS approaches to workforce scheduling have remained comparatively sparse or narrowly confined [39,40,41,42,43,44,45]. This lack of exploration is particularly limiting in situations where variability significantly undermines deterministic schedules [46]. In contrast, stochastic programming and robust optimization can impose substantial modeling and computational burdens under high-dimensional uncertainty, limiting feasibility in the context of SMEs.
The suggested approach utilizes the Taguchi layer as a systematic, resource-efficient parameter-tuning mechanism, supplanting ad hoc trial-and-error and enabling the selection of robust GA configurations through a limited number of controlled tests. Using orthogonal arrays and signal-to-noise analysis, Taguchi determines parameter values that achieve both high average performance and minimal variability in stochastic situations, thus offering a pragmatically robust configuration appropriate for SME environments. Taguchi's orthogonal arrays and signal-to-noise analysis provide robustness against noise factors while sharply reducing the experimental burden, which aligns well with SME constraints [47,48,49,50,51]. Multi-response Taguchi designs have also been applied to manufacturing line balancing to simultaneously control multiple performance criteria, illustrating the method's suitability for jointly addressing station count, workforce size, and productivity in real industrial settings [52]. Although Taguchi strategies have been used to tune GAs for scheduling, applications have remained fragmented and seldom coupled with explicit stochastic evaluation [53,54,55]. Hence, positioning Taguchi as an algorithmic parameter-design layer on top of stochastic evaluation directly targets environmental drift without inflating tuning cost.
Meanwhile, recent studies on intelligent manufacturing scheduling consistently emphasize reducing the number of evaluated configurations, for example in automated guided vehicle routing heuristics, solid-wood production GA schedulers, digital-twin-enabled monitoring and intelligent scheduling, metaheuristic production sequencing in tire-mixing operations, remanufacturing design optimization, and simulation–optimization schemes in additive manufacturing and Industry 5.0 decision support, because fully exploring the decision space is computationally and operationally impractical in real plants [56,57,58,59,60]. Adaptive GA and hybrid simulation–optimization frameworks aim to achieve robust parameter settings within limited evaluation budgets, much like orthogonal-array designs that avoid exhaustive experimentation. Evidence supports the use of orthogonal arrays in the Taguchi methodology: conducting nine experiments instead of 27 for three three-level factors significantly reduces effort while maintaining robustness, which is crucial under tight resource constraints [61,62].
The proposed approach integrates GA, MCS, and the Taguchi method to provide a scalable, robust intelligent optimization framework that operates effectively under stochastic conditions and multi-skill requirements. The Taguchi layer replaces traditional trial-and-error parameter tuning with a systematic, resource-efficient design strategy that identifies parameter settings delivering both high performance and stability. To demonstrate methodological soundness, a deterministic formulation was benchmarked on representative subsets using exact MILP solvers, so that the heuristic outputs could be compared against optimal references. The overall real-world problem, however, was solved by the intelligent workforce scheduling framework developed in this study, since the solution times of deterministic methods grow prohibitively under uncertainty. The resulting method not only addresses the algorithmic limitations noted in the literature but also delivers a deployable, interface-ready solution whose open-source implementation supports reproducibility and industrial uptake.

3. Methods

3.1. Mathematical Formulation and Notation

3.1.1. Problem Statement

A finite planning horizon is considered with known sets of workers $W = \{1, \dots, n\}$ and tasks $T = \{1, \dots, m\}$. Each task $j \in T$ has a base duration $d_j > 0$ and may require $r_j \in \mathbb{Z}_+$ workers (coverage requirement). When applicable, task precedence relations form a directed acyclic graph. Each worker $i \in W$ has a capacity $C_i$ on the horizon, binary qualification indicators $q_{ij} \in \{0, 1\}$ for eligibility on task $j$, competency multipliers $c_{ij} \in [0.5, 2.0]$ (with $c_{ij} = 1$ as a baseline), and empirical performance scores $p_{ij} \in [0, 1]$.
Stochastic effective durations are modeled using truncated-normal multipliers, as specified in (7) below, which captures variability in worker–task execution times. The decision variable $x_{ij} \in \{0, 1\}$ indicates whether worker $i$ is assigned to task $j$. The goal is to obtain balanced, competency-aligned assignments under coverage, capacity, and qualification constraints while explicitly incorporating stochastic duration effects in the evaluation.
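As a concrete illustration, the coverage, capacity, and qualification constraints of this problem statement can be checked on a toy instance; all names and values below are hypothetical and not taken from the case facilities:

```python
# Hypothetical toy instance: 3 workers, 3 tasks (values illustrative only).
workers = [0, 1, 2]
tasks = [0, 1, 2]
d = {0: 4.0, 1: 6.0, 2: 3.0}                      # base durations d_j
r = {0: 1, 1: 2, 2: 1}                            # coverage requirements r_j
C = {0: 10.0, 1: 12.0, 2: 8.0}                    # worker capacities C_i
q = {(i, j): 1 for i in workers for j in tasks}   # everyone qualified here

def feasible(x):
    """Check coverage, capacity, and qualification for an assignment x,
    a dict mapping (worker, task) -> 0/1."""
    # Coverage: each task j receives exactly r_j workers.
    if any(sum(x[i, j] for i in workers) != r[j] for j in tasks):
        return False
    # Capacity: total assigned base duration per worker within C_i.
    if any(sum(x[i, j] * d[j] for j in tasks) > C[i] for i in workers):
        return False
    # Qualification: only eligible workers may be assigned.
    return all(x[i, j] <= q[i, j] for i in workers for j in tasks)

x = {(i, j): 0 for i in workers for j in tasks}
x[0, 0] = x[0, 1] = x[1, 1] = x[2, 2] = 1
print(feasible(x))  # True: coverage met, worker 0's load is 4 + 6 = 10 <= 10
```

Note that feasibility is checked here against base durations; the stochastic durations of (7) enter only in the Monte Carlo evaluation, not in the constraint set.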

3.1.2. Notation System

The mathematical framework establishes a comprehensive optimization model for task–worker assignment problems, as summarized in Table 1.
The decision architecture centers on binary assignment variables ($x_{ij} \in \{0, 1\}$) linking workers ($W$) to tasks ($T$), with competency factors ($c_{ij} \in [0.5, 2.0]$) providing quantitative skill assessments. Performance modeling incorporates both deterministic and stochastic elements through base durations ($d_j$), performance scores ($p_{ij}$), and stochastic duration variations ($\tilde{d}_{ij}$). Resource constraints are captured through capacity limitations ($C_i$), qualification indicators ($q_{ij}$), and coverage requirements ($r_j$), creating a multi-dimensional feasibility space.
The mathematical formulation focuses on the core worker–task assignment problem within the broader digital scheduling system. Although subsequent sections describe simulation, Taguchi-based parameter tuning, web interface, and ROI analysis, these elements are layered on top of the assignment model to enable deployment in SMEs, rather than constituting separate optimization problems. In contrast to classical deterministic assignment and workforce scheduling models, which typically minimize a single cost or makespan objective, the proposed formulation combines (i) competency-adjusted workload, (ii) load balance, and (iii) competency mismatch in a multi-objective structure, while explicitly incorporating pairwise worker–task stochastic durations via truncated-normal multipliers. This integrated structure is tightly coupled with the GA–MCS solver, where the tri-factor suitability score shapes initialization and repair operators, rather than remaining a purely ex-post performance indicator.

3.1.3. Problem Formulation

The core challenge is to achieve efficient, balanced task assignments that align with employee competencies, workload capacities, and performance histories. This is formulated as a multi-objective optimization problem, where $x_{ij} \in \{0, 1\}$ represents the assignment of worker $i$ to task $j$:
$$\min f_1 = \max_{i \in W} \sum_{j=1}^{m} c_{ij}\, x_{ij}\, d_j, \tag{1}$$

$$\min f_2 = \max_{i \in W} \sum_{j=1}^{m} x_{ij}\, d_j \;-\; \min_{i \in W} \sum_{j=1}^{m} x_{ij}\, d_j, \tag{2}$$

$$\min f_3 = \sum_{i=1}^{n} \sum_{j=1}^{m} (1 - p_{ij})\, x_{ij}. \tag{3}$$

Subject to:

$$\sum_{i=1}^{n} x_{ij} = r_j, \quad \forall j \in T, \tag{4}$$

$$\sum_{j=1}^{m} x_{ij}\, d_j \le C_i, \quad \forall i \in W, \tag{5}$$

$$x_{ij} \le q_{ij}, \quad \forall i \in W,\; \forall j \in T, \tag{6}$$

$$\tilde{d}_{ij} = d_j \cdot \max\!\bigl(0.7,\; \min\bigl(1.3,\; \mathcal{N}(\mu_{ij}, \sigma_{ij}^2)\bigr)\bigr). \tag{7}$$
In the empirical implementation, the parameters $\mu_{ij}$ and $\sigma_{ij}$ are calibrated from historical completion-time data. For each worker–task pair $(i, j)$ with recorded executions, the ratios $\rho_{ijk} = \hat{d}_{ijk} / d_j$ between the completion time $\hat{d}_{ijk}$ of execution $k$ and the corresponding base duration $d_j$ are computed, and the mean and standard deviation of $\{\rho_{ijk}\}_k$ serve as empirical estimates of $\mu_{ij}$ and $\sigma_{ij}$, respectively. When only a limited history is available for a given pair, the estimates are shrunk toward the task-level mean and the pooled standard deviation across workers to avoid unstable parameter values. The truncated-normal bounds in (7) enforce minimum and maximum effective speeds relative to the baseline duration, preventing unrealistically optimistic or pessimistic realizations during the MCS.
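The calibration and the truncated-normal sampling of (7) can be sketched as follows. This is a simplified illustration: the fallback to task-level defaults below stands in for the paper's shrinkage estimator, and all thresholds and sample values are assumptions:

```python
import random
import statistics

def calibrate(ratio_history, task_mean=1.0, pooled_sd=0.1, min_obs=3):
    """Estimate (mu_ij, sigma_ij) from observed completion ratios
    rho = actual_duration / base_duration. With fewer than min_obs
    observations, fall back to pooled defaults (a crude stand-in for
    the shrinkage described in the text)."""
    if len(ratio_history) < min_obs:
        return task_mean, pooled_sd
    return statistics.mean(ratio_history), statistics.stdev(ratio_history)

def sample_duration(d_j, mu, sigma, rng=random):
    """Draw an effective duration per Eq. (7): a normal multiplier
    clipped to [0.7, 1.3] times the base duration."""
    m = max(0.7, min(1.3, rng.gauss(mu, sigma)))
    return d_j * m

# Hypothetical history of completion ratios for one worker-task pair.
mu, sigma = calibrate([1.05, 0.95, 1.10, 1.00])
random.seed(0)
dur = sample_duration(6.0, mu, sigma)
print(0.7 * 6.0 <= dur <= 1.3 * 6.0)  # always True due to the truncation bounds
```

The clipping guarantees that every sampled scenario stays within 70–130% of the base duration, exactly as the bounds in (7) require.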
The deterministic core of the formulation constitutes a 0–1 mixed-integer assignment model expressible in standard form, thereby rendering it solvable by commercial exact solvers such as CPLEX or Gurobi for small to medium-sized instances. The proposed framework focuses on the GA–MCS–Taguchi solution strategy for realistically large, uncertainty-aware instances arising in the case facilities. Systematic benchmarking against commercial solvers in larger stochastic instances is positioned as future work.
Objective (1) is a makespan surrogate, minimizing the maximum competency-adjusted workload across workers. Objective (2) addresses workload balance by reducing the difference between the maximum and minimum worker loads. Objective (3) targets competency alignment. Constraints (4)–(6) enforce coverage, capacity, and qualification, while (7) specifies the stochastic duration model used during the MCS.
A tri-factor suitability score is used for worker–task matching in the implementation: $\mathrm{Suitability}_{ij} = 0.5 \times \mathrm{Competency}_{ij} + 0.3 \times \mathrm{Experience}_{ij} + 0.2 \times \mathrm{Efficiency}_{ij}$, where each component is normalized to $[0, 1]$ prior to aggregation. The three-component structure aligns with established skill-based assignment and multi-criteria workforce evaluation practices, which typically treat technical competency, accumulated experience, and realized efficiency as distinct yet complementary dimensions. The specific weights (0.5, 0.3, 0.2) were obtained in joint workshops with production and HR managers at the two case facilities and stress-tested by sensitivity analysis. Alternative weight combinations within ±0.1 of these values did not significantly alter the GA's recommended assignments or the aggregate performance indicators; the selected weights are therefore both managerially interpretable and numerically robust. The suitability score is used to bias the initial population and repair operators toward high-compatibility assignments, thereby embedding domain knowledge directly into the search dynamics.
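The weighted aggregation above is a one-liner in practice; the component values below are hypothetical and assumed pre-normalized to $[0, 1]$:

```python
def suitability(competency, experience, efficiency,
                weights=(0.5, 0.3, 0.2)):
    """Tri-factor suitability score; inputs are assumed already
    normalized to [0, 1] as described in the text."""
    w_c, w_x, w_e = weights
    return w_c * competency + w_x * experience + w_e * efficiency

# Illustrative worker-task pair: strong competency, moderate experience.
print(round(suitability(0.8, 0.6, 0.9), 2))  # 0.76
```

In the GA, such scores would rank candidate workers per task so that initialization and repair favor high-suitability assignments rather than uniform random ones.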
Relative to classical deterministic workforce assignment models that minimize a single cost or makespan objective under fixed task durations, the present formulation integrates three explicit objectives while incorporating pairwise worker–task stochastic durations via truncated normal multipliers [27,32]. The tri-factor suitability score is embedded directly into GA initialization and repair operators rather than serving solely as an ex-post diagnostic, thereby coupling the mathematical model with domain-informed search dynamics.
Building on this stochastic specification, an MCS module is employed to propagate worker–task duration uncertainty through the candidate schedules. For each configuration, the module reports an aggregate risk score, the probability of delay, and indicators of performance stability and range, together with design-wise summaries of average performance, risk, and delay. These outputs provide a quantitative basis for comparing alternative designs and identifying worker–task configurations that remain robust under variability rather than only under nominal conditions.

3.2. Integrated Optimization Methodology

Table 2 provides a system-level overview of the integrated GA–MCS–Taguchi framework.
Table 3 illustrates the performance integration matrix, demonstrating a weighted competency assessment model for task–worker optimization.

3.2.1. Optimization Algorithm Configuration

The detailed configuration of the GA, MCS, and Taguchi design parameters used in all computational experiments is summarized in Table 4.
The parameter ranges explored in Table 4 were selected to be consistent with the configurations reported in GA-based scheduling and allocation studies [27,43,53], where medium-sized populations and moderate genetic pressure are recommended for discrete combinatorial problems. These choices also align with the guideline ranges discussed in standard GA and evolutionary-computing references for discrete combinatorial optimization [63,64]. Using these ranges informed by the literature, the final parameter values were selected using the Taguchi L9 design, with the signal-to-noise (S/N) metric used as the robustness measure during stochastic MCS. This two-step process ensures that the configuration space remains grounded in prior empirical evidence while allowing data-driven refinement within the specific industrial context examined.
In the Taguchi experiments, the response variable is defined as the average total completion time of a representative job portfolio under a fixed number of GA–MCS runs with different random seeds. For each design point in the L9 array, the GA–MCS solver is run multiple times, and the resulting completion times are summarized using the “smaller-is-better” S/N formulation, where higher S/N values indicate low average completion time combined with low variability across runs. The GA configuration with the highest S/N index is therefore interpreted as the most robust setting, in the sense that it provides short schedules while remaining insensitive to stochastic variation in execution times and random initialization [65].
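The smaller-is-better S/N criterion described above can be computed directly. The two response sets below are hypothetical design points, not measured values from the study:

```python
import math

def sn_smaller_is_better(responses):
    """Taguchi smaller-is-better S/N ratio over R replicated responses:
    S/N = -10 * log10( (1/R) * sum(y^2) ). Higher values indicate both
    a low mean and low variability."""
    r = len(responses)
    return -10.0 * math.log10(sum(y * y for y in responses) / r)

# Two hypothetical design points: the one with lower mean AND lower
# spread of completion times obtains the higher S/N ratio.
print(sn_smaller_is_better([100, 102, 98]) >
      sn_smaller_is_better([105, 120, 95]))  # True
```

Because the mean square penalizes both location and dispersion, a design point with erratic completion times loses to a slightly slower but stable one, which is exactly the robustness property exploited by the L9 selection step.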
The L9 orthogonal array primarily supports efficient estimation of main effects; interaction and nonlinear effects among GA parameters are captured only coarsely. The S/N ratio condenses mean performance and variability into a single scalar, thereby simplifying robustness assessment but obscuring explicit mean–variance trade-offs [47,48,66]. These limitations are accepted as pragmatic consequences of restricted experimental budgets in SME contexts; dual-response designs that separately model mean and variance for multiple KPIs are positioned as future extensions.
Operationally, the GA–MCS–Taguchi algorithm follows the two-level structure summarized in Algorithms 1 and 2. The Taguchi layer explores combinations of GA parameters using the L9 orthogonal array. It identifies a robust configuration using the S/N criterion, while the inner GA–MCS solver evaluates each configuration on a fixed job portfolio under stochastic durations.
Algorithm 1 Integrated GA–MCS–Taguchi optimization procedure
1. Initialize the response table for the Taguchi experiments.
2. For each experiment $e$ in the L9 orthogonal array:
   (a) Set the GA parameters $(\mathrm{pop}_e, p_e^{\mathrm{cross}}, p_e^{\mathrm{mut}})$ according to design $e$.
   (b) For each replication $r = 1, \dots, R$, run the GA–MCS solver (Algorithm 2) with $(\mathrm{pop}_e, p_e^{\mathrm{cross}}, p_e^{\mathrm{mut}})$ and record the response $y_{e,r}$ (mean completion time).
   (c) Compute the Taguchi signal-to-noise (S/N) ratio for experiment $e$ from $\{y_{e,r}\}_{r=1}^{R}$ using the smaller-is-better formulation.
3. Select the experiment $\hat{e}$ with the highest S/N ratio and fix the GA parameters to $(\mathrm{pop}_{\hat{e}}, p_{\hat{e}}^{\mathrm{cross}}, p_{\hat{e}}^{\mathrm{mut}})$.
4. Run the GA–MCS solver once more with the tuned parameters on the target planning horizon to obtain the final workforce schedule.
Algorithms 1 and 2 present pseudo-code for the implemented procedure rather than line-by-line software instructions. Low-level details such as chromosome encoding, repair heuristics, and data access are omitted for brevity; these follow standard practices in GA-based scheduling and are fully specified in the open-source implementation.
Algorithm 2 GA–MCS solver for workforce scheduling (single run)
1.
Generate an initial population P(0) of feasible chromosomes using constraints (4)–(6), and set x_best to the best individual in P(0).
2.
For each generation g = 1, …, G:
(a)
For each chromosome x ∈ P(g−1):
i.
For each Monte Carlo iteration s = 1, …, S, sample d̃_ij^(s) from the truncated-normal duration model (7) and evaluate the objectives f_1^(s)(x), f_2^(s)(x), and f_3^(s)(x) as in (1)–(3).
ii.
Aggregate the scenarios to obtain f̄_1(x), f̄_2(x), and f̄_3(x), and compute the scalar fitness F(x) via (8).
(b)
Select parents from P(g−1) using tournament selection with elitism.
(c)
Apply order crossover with probability p cross and swap mutation with probability p mut to construct offspring, followed by feasibility repair if necessary.
(d)
Form the new population P(g) and update x_best if a better individual is found.
3.
Return the best chromosome x_best and its fitness F(x_best) as the GA–MCS solution.
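The scenario-sampling step of Algorithm 2 can be sketched as below. This is a generic rejection sampler for a truncated normal duration model; the mean, spread, and truncation bounds are illustrative assumptions rather than the calibrated values of model (7).

```python
import random

def sample_truncated_normal(mean, sd, lo, hi, rng):
    """Rejection-sample a normal variate restricted to [lo, hi]."""
    while True:
        d = rng.gauss(mean, sd)
        if lo <= d <= hi:
            return d

rng = random.Random(42)
# Illustrative duration model: nominal 60 min, 15% spread, truncated to [0.5x, 1.5x].
scenarios = [sample_truncated_normal(60.0, 9.0, 30.0, 90.0, rng) for _ in range(1000)]
mean_duration = sum(scenarios) / len(scenarios)
print(round(mean_duration, 1))  # close to the nominal 60 min
```

In the solver, one such draw is made per task–worker pair and Monte Carlo iteration s, and the objective values are averaged over the S scenarios before fitness is assigned.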

3.2.2. Fitness Function and Performance Evaluation

The integrated fitness function combines multiple objectives with validated weightings:
$$F = w_1 \cdot \frac{1}{f_1 + \epsilon} + w_2 \cdot \frac{1}{f_2 + \epsilon} + w_3 \cdot \frac{\sum_{i,j} p_{ij}\, x_{ij}}{\sum_{i,j} x_{ij}},$$
where w_1 = 0.6, w_2 = 0.25, w_3 = 0.15, and ε = 0.001. The GA uses this scalar fitness during selection and replacement. Monte Carlo scenarios are used to evaluate f_1 and f_2 under stochastic durations before aggregation, while the third term captures the expected suitability contribution of the active assignments.
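As a minimal illustration of how the fitness function combines its three terms, the following sketch evaluates F for two hypothetical assignments that differ only in suitability. The aggregated objective values and the suitability matrix p are invented for the example.

```python
EPS = 0.001
W1, W2, W3 = 0.6, 0.25, 0.15  # weights as stated in the text

def scalar_fitness(f1_bar, f2_bar, p, x):
    """Combine aggregated objectives with the mean suitability of active assignments."""
    assigned = [(i, j) for (i, j), v in x.items() if v == 1]
    suitability = sum(p[ij] for ij in assigned) / len(assigned)
    return W1 / (f1_bar + EPS) + W2 / (f2_bar + EPS) + W3 * suitability

# Hypothetical example: two schedules with equal makespan/balance, different suitability.
p = {(0, 0): 0.9, (1, 1): 0.8, (0, 1): 0.4, (1, 0): 0.3}
x_good = {(0, 0): 1, (1, 1): 1, (0, 1): 0, (1, 0): 0}
x_poor = {(0, 0): 0, (1, 1): 0, (0, 1): 1, (1, 0): 1}

f_good = scalar_fitness(400.0, 0.12, p, x_good)
f_poor = scalar_fitness(400.0, 0.12, p, x_poor)
print(f_good > f_poor)  # higher suitability yields higher fitness
```

Because the first two terms are reciprocals, shorter makespans and smaller imbalance increase F, while the third term rewards assigning better-matched workers, exactly the trade-off the weights encode.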

3.2.3. Deterministic Core Formulation and Exact Solver Validation

The preceding stochastic multi-objective formulation is grounded in a deterministic core that admits exact optimization via commercial mixed-integer programming (MIP) solvers such as CPLEX or Gurobi. To establish the mathematical soundness of the GA heuristic and verify performance against exact methods, this section presents the equivalent deterministic formulation, demonstrates conditions under which exact optimality is achievable, and documents the validation methodology by which the heuristic GA solutions are benchmarked against provably optimal assignments.
Single-Task Deterministic Reduction
Consider a deterministic variant of the assignment problem, obtained by fixing all stochastic duration multipliers to their nominal values (i.e., d̃_ij = d_j) and reducing the multi-objective structure to a single-criterion assignment problem. In this simplified setting, the task is to select r_j workers from a candidate pool W to maximize total suitability while respecting qualification constraints, which corresponds to a standard 0–1 binary integer program. Let x_i ∈ {0, 1} indicate the assignment of worker i to the task, with the objective:
$$\max \; Z = \sum_{i \in W} S_i \cdot x_i,$$
where S_i is the suitability score for worker i, calculated using the tri-factor weighting scheme introduced in Section 3.1.2:
$$S_i = w_{\mathrm{comp}} \cdot \mathrm{Comp}_i + w_{\mathrm{exp}} \cdot \mathrm{Exp}_i + w_{\mathrm{eff}} \cdot \mathrm{Eff}_i.$$
The components Comp_i, Exp_i, and Eff_i are each normalized to the interval [0, 1], and the weights (w_comp = 0.5, w_exp = 0.3, w_eff = 0.2) are those empirically validated in the two case facilities.
The constraints enforce exact coverage and qualification:
$$\sum_{i \in W} x_i = r_j,$$
$$x_i \le q_i \quad \forall i \in W,$$
$$x_i \in \{0, 1\} \quad \forall i \in W.$$
This formulation is a cardinality-constrained binary optimization problem, a classical variant of the assignment problem well-suited to standard MILP solvers.
Problem Characteristics and Solver Applicability
The deterministic assignment problem is known to be NP-hard in its general multi-task, multi-objective form [67,68]. However, for small-to-medium problem instances, specifically those with |W| ≤ 50 workers and |T| ≤ 20 tasks, modern MILP solvers can efficiently explore the solution space via branch-and-bound, yielding provably optimal solutions within seconds to minutes on standard computing hardware. The two case facilities examined here present instance sizes that fall favorably within this regime: Facility A operates with approximately 100 workers and 15–25 tasks per week, while Facility B has 85 workers and 10–20 tasks per week. For subproblems decomposed at the level of individual tasks or small task clusters, which is operationally natural in project-based manufacturing, the effective problem sizes are reduced further, making them highly tractable for exact methods.
Commercial solvers such as CPLEX and Gurobi employ advanced presolve, cutting plane, and heuristic warm-start techniques that make them substantially faster than basic branch-and-bound, though open-source alternatives such as CBC (via the PuLP Python interface) also provide adequate performance for SME-scale instances [69].
Reduced Deterministic MIP Core
The full GA–MCS–Taguchi framework operates on a multi-task, multi-role, and stochastic-duration formulation in which each design code is associated with role-specific headcounts, capacity constraints, and Monte Carlo–based risk measures. In contrast, the deterministic model in (9)–(13) is deliberately restricted to a minimal core that is amenable to exact solution by commercial MIP solvers. The multi-role assignment structure is collapsed into a single pooled task, and role-wise constraints are replaced by the cardinality constraint ∑_{i∈W} x_i = r_j, which fixes the overall team size. Stochastic duration multipliers are fixed at their nominal values, and additional operational constraints (such as time windows, capacity limits, or explicit risk terms) are omitted from this verification step. This constraint reduction is consistent with standard practice in hybrid optimization: the goal is not to reproduce the entire stochastic multi-objective problem, but to test whether the GA's selection mechanism can recover (or closely approximate) the optimal solution of the deterministic core when evaluated with the same suitability scores.
GA–Gurobi Verification with Workforce Data
To ensure that deterministic verification remains grounded in real data, the benchmark instances are constructed directly from the workforce suitability matrices embedded in the data set. Each such matrix records 0–1 scaled suitability/performance values for a common pool of named workers across multiple rows and is exported and interpreted as a worker–score matrix. For every worker i, the entries in the corresponding column are averaged and rescaled to obtain a deterministic suitability score S_i consistent with (10), yielding the vector used in (9). Each instance then selects r_j = 6 workers from a pool W of 24 employees, mirroring the single-task reduction in (11). The resulting single-task models are encoded in PuLP and solved exactly by Gurobi or the open-source CBC solver, depending on the solver configuration. In parallel, a lightweight GA is run on the same data: each chromosome represents a fixed-size team of six workers, fitness is defined as the team's total suitability ∑_{i∈W} S_i x_i, and tournament selection, crossover, and mutation operators are applied analogously to the production GA. In the verification experiments, the GA uses the same order of magnitude for its configuration as in the main computational study (population size 50, 50 generations). The best GA solution x_i^GA and its objective value Z^GA are compared with the MIP optimum Z^* to quantify the heuristic's performance on these workforce-based deterministic instances.
Reproducible Python Implementation
Algorithm 3 summarizes the PuLP implementation that underpins this verification workflow. The modeling layer constructs one binary decision variable x_i per worker and enforces the cardinality constraint (11) exactly as in (9)–(13); the only change required to solve the instance with Gurobi is to switch the solver object passed to model.solve(). This makes the verification portable: a user with access to Gurobi can reproduce the precise branch-and-bound trace, whereas other readers can rerun the same script with CBC and still obtain the optimal benchmark needed to compute the GA gap.
Algorithm 3 PuLP model solved with Gurobi or CBC
1.
Read the workforce suitability matrix from datasets.xlsx and compute S_i for each worker.
2.
Build binary decision variables x i for all i W .
3.
Maximize ∑_{i∈W} S_i x_i using the suitability scores from (10).
4.
Enforce the team-size constraint ∑_{i∈W} x_i = r_j as in (11).
5.
If a Gurobi executable is available, set the solver to pulp.GUROBI_CMD(msg=False); otherwise, set the solver to pulp.PULP_CBC_CMD(msg=False).
6.
Call model.solve(solver) and extract the optimal assignments x_i^*.
7.
Compare the optimal objective Z^* with the GA objective Z^GA using (14).
Optimality Gap Assessment
The relative optimality gap is defined as:
$$\mathrm{Gap} = \frac{Z^{*} - Z^{\mathrm{GA}}}{Z^{*}} \times 100\%.$$
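Because the reduced core (9)–(13) couples workers only through the team-size constraint, its exact optimum equals the sum of the r_j largest suitability scores among qualified workers. The sketch below exploits this closed form to produce the benchmark Z* without a MIP solver and then evaluates the gap for a candidate team; the scores are synthetic stand-ins for the dataset matrices.

```python
def exact_optimum(scores, qualified, r):
    """Exact optimum of the cardinality-constrained core: sum of top-r qualified scores."""
    eligible = sorted((s for s, q in zip(scores, qualified) if q), reverse=True)
    return sum(eligible[:r])

def optimality_gap(z_star, z_ga):
    """Relative optimality gap in percent, as in the equation above."""
    return (z_star - z_ga) / z_star * 100.0

# Synthetic instance: 24 workers, select r_j = 6, all qualified.
scores = [round(0.3 + 0.7 * ((7 * i) % 24) / 23, 3) for i in range(24)]
qualified = [True] * 24
z_star = exact_optimum(scores, qualified, 6)

# A candidate team; here it contains the six best workers, emulating the reported GA outcome.
best_team = sorted(range(24), key=lambda i: scores[i], reverse=True)[:6]
z_ga = sum(scores[i] for i in best_team)
print(optimality_gap(z_star, z_ga))  # 0.0
```

In the paper's workflow, Z* is instead obtained from the PuLP/Gurobi model of Algorithm 3; the sorting shortcut applies only to this single-constraint reduction and would not extend to the multi-role formulation.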
Across the set of workforce matrices extracted from the dataset spreadsheet, the exact solver and the GA produce identical objective values for all 16 benchmark instances (i.e., Z^* = Z^GA and Gap = 0% in every case), and the GA recovers exactly the same team composition as the MIP model for each instance. This indicates that, for these representative workforce scenarios and the chosen GA configuration, the selection and recombination operators are capable of recovering the deterministic optimum identified by the MIP solver. These results should be interpreted as a verification of the deterministic core of the framework, rather than as a guarantee that the full stochastic GA–MCS–Taguchi system attains a globally optimal solution in all settings. More generally, the presence of multiple optimal or near-optimal teams, a consequence of the discrete nature of the problem and the existence of workers with similar competency profiles, suggests that the solution space contains broad plateaus of high-quality assignments and that the GA's robustness in repeatedly locating such teams is a practically desirable property.
Connection to the Full GA–MCS–Taguchi Framework
Because the full stochastic multi-objective formulation (Equations (1)–(3)) is not solved to optimality by a classical deterministic MIP, we first verify the deterministic assignment core. Table 5 reports this verification: in 16 benchmark instances, the GA matches the exact MIP optimum (Z^GA = Z^*), producing a 0.00% gap and recovering an identical team in every case. This result indicates that the GA representation and feasibility handling are consistent with the deterministic model; the subsequent MCS and Taguchi layers then incorporate duration uncertainty and parameter robustness, respectively.
This hierarchical decomposition aligns with established practices in hybrid optimization: verifying the correctness of the search mechanism on tractable sub-instances, then extending the verified logic into more complex stochastic and multi-objective settings. The approach provides both methodological transparency (stakeholders understand the basis for the GA’s quality) and practical assurance (the heuristic’s solutions are known to be near-optimal or optimal on simplified variants of the full problem).

3.3. System Architecture and Implementation

The system architecture encompasses multiple integrated components. The core architecture consists of five primary modules that work together to deliver workforce scheduling capabilities:
  • Optimization engine: Integrated GA–MCS–Taguchi solver with parallel processing.
  • Database layer: PostgreSQL with a schema for historical performance and competency data.
  • Web interface: Django-based responsive design with real-time updates.
  • API gateway: RESTful services for integration with external systems.
  • Monitoring dashboard: Real-time performance analytics and system health monitoring.
The GA implementation employs permutation-based encoding with repair mechanisms to handle constraint violations. Elitist tournament selection maintains the top 10% of solutions between generations. Crossover operations use order crossover with an 80% probability, and mutation is implemented through swap mutation with adaptive rates ranging from 5% to 15%.
Monte Carlo integration uses multi-threaded simulation across 4–8 cores. Variance reduction techniques, including antithetic variates and control variates, are used to improve convergence rates, with real-time monitoring of the coefficient of variation.
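The permutation operators described above can be sketched as follows. This is a generic order-crossover (OX) and swap-mutation implementation for illustration, not the production code, and it omits the feasibility-repair step.

```python
import random

def order_crossover(p1, p2, rng):
    """OX: copy a random slice from parent 1, fill remaining genes in parent-2 order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]
    fill = [g for g in p2 if g not in child[a:b + 1]]      # preserve parent-2 order
    empty = [i for i in range(n) if child[i] is None]
    for i, g in zip(empty, fill):
        child[i] = g
    return child

def swap_mutation(perm, p_mut, rng):
    """Swap two random positions with probability p_mut (adaptive 5-15% in the paper)."""
    perm = perm[:]
    if rng.random() < p_mut:
        i, j = rng.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

rng = random.Random(7)
parent1, parent2 = list(range(10)), [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
child = swap_mutation(order_crossover(parent1, parent2, rng), 0.15, rng)
print(sorted(child) == list(range(10)))  # offspring remains a valid permutation
```

OX is the standard choice for permutation encodings because both operators preserve the permutation property, so the repair mechanism is only needed for domain constraints (qualification, team size), not for encoding validity.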
Figure 2 summarizes the structured implementation of the Taguchi method within the integrated optimization workflow.
Together, these components are instantiated in a web-based implementation that exposes the optimization engine to end users through five tightly integrated modules. As summarized in Table 6, the task management, employee management, optimization, results dashboard, and mobile interface modules link the core functionality with explicit performance requirements and user-facing benefits. Figure 3 illustrates the employee–task decision environment in the deployed system, including the categorization of workers by experience, efficiency, and competency level, as well as the presentation of quantitative compatibility scores and optimized assignments in the interface. This enables planners to inspect, adjust, and deploy schedules within their routine workflows. The figure demonstrates the deployability and human-in-the-loop interaction that are central to the contribution.

3.4. Risk-Adjusted Return-on-Investment (ROI) Metric

To capture the economic impact of deployment in a manner consistent with the uncertainty-aware evaluation, a risk-adjusted return on investment (ROI) metric is used. Let B ˜ denote the conservative estimate of annual monetary benefits, obtained from the lower bounds of the benefit categories reported in the case study, and let C denote the corresponding annualized implementation and operating cost. The risk-adjusted ROI is defined as
$$\mathrm{ROI}_{\mathrm{risk}} = \frac{\tilde{B} - C}{C}.$$
In this formulation, B̃ aggregates direct labor savings, delay cost reductions, and gains in management efficiency on an annual basis, while C combines software development, integration, training, and maintenance costs into a single annualized figure. The metric is dimensionless and can be interpreted as the net return per unit of cost under conservative benefit assumptions. ROI is adopted here because it provides a compact and familiar managerial indicator for comparing the economic value of the proposed optimization system against alternative digital initiatives or capital projects, thereby linking the technical performance analysis to standard financial decision criteria [66]. Numerical estimates and category-level decompositions are reported in the Results section.

4. Results and Performance Analysis

The framework was fully validated through systematic tests in two distinct manufacturing environments, thereby testing both scalability and robustness.
Facility A: Mature project-based manufacturing environment with 100 employees organized into 15 specialized teams, handling 100 concurrent projects with varying complexity levels. Comprehensive historical performance data spanning 18 months provided the foundation for developing the competency matrix and calibrating the stochastic duration model.
Facility B: Manufacturing environment with 85 employees representing a typical small-to-medium enterprise context with limited historical data (6 months) and manual planning systems requiring 85 to 90 min daily for schedule development. This setting serves as a test bed for deployment under data-constrained and low digital maturity conditions.

4.1. ANOVA-Based Performance Validation

ANOVA is used as the primary inferential tool because it evaluates multiple conditions and factors simultaneously by partitioning the observed variance into systematic between-group components and residual error; this is the standard approach in industrial experimentation for quantifying whether an observed performance pattern can reasonably be attributed to experimental factors rather than random noise [65]. As summarized in Table 7, the one-way ANOVAs conducted separately for Facility A, Facility B, and the combined sample reveal statistically significant between-group differences in the focal outcome (all p-values < 0.01). The effect size indices η² indicate that a non-trivial share of the variance is attributable to group membership rather than random fluctuation. The observed statistical power exceeds 0.98 in all models, suggesting a low risk of Type II error. Standard assumptions of normality, homoscedasticity, and independence were satisfied, and mean square values are reported for completeness.
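For readers reproducing entries of the kind reported in Table 7, the following sketch shows how the F statistic and the effect size η² = SS_between / SS_total are obtained in a one-way layout; the two groups of completion times are synthetic, not the facilities' data.

```python
def one_way_anova(groups):
    """Return (F, eta_squared) for a one-way layout given a list of groups."""
    all_obs = [y for g in groups for y in g]
    n, k = len(all_obs), len(groups)
    grand = sum(all_obs) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_total = sum((y - grand) ** 2 for y in all_obs)
    ss_within = ss_total - ss_between
    f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
    return f_stat, ss_between / ss_total

# Synthetic pre/post task completion times (minutes) with a clear mean shift.
pre = [62, 65, 61, 66, 64, 63]
post = [55, 54, 57, 53, 56, 55]
f_stat, eta_sq = one_way_anova([pre, post])
print(round(f_stat, 1), round(eta_sq, 2))
```

Here the large η² mirrors the interpretation in the text: most of the observed variance is attributable to group membership (pre vs. post) rather than residual noise.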

4.2. Integrated Performance Metrics

The complete performance validation with significance analysis is shown in Table 8.
As presented in Table 8, the integrated performance analysis consolidates the key operational metrics across both facilities and evaluates them against pre-specified managerial thresholds. These thresholds were defined ex ante, in consultation with facility managers, to represent the minimum improvements that would justify a process change given historical variability and perceived implementation effort (e.g., at least an 8% reduction in task duration and a 60% decrease in planning time). In all cases, the estimated improvements are statistically significant (p < 0.001) with medium-to-large effect sizes, and the thresholds for duration, delay reduction, workload balance, and reduction in planning time are comfortably exceeded. The bootstrap confidence intervals are narrow, indicating precise and stable estimates. User adoption levels greater than 80% confirm that these gains are achieved under routine operating conditions rather than limited pilot use.

4.3. Sensitivity to GA Configuration

As the population size increases from 50 to 150, load balance (Figure 4a) improves steadily, reaching a peak value of 0.88 at a population size of 150. Concurrently, the total time (Figure 4b) decreases monotonically, with the lowest value observed at the same population size (399 min), indicating that the selected GA configuration offers a favorable trade-off between solution quality and computational cost.

4.4. ROI and Benefit Realization

The economic impact of the system was evaluated using the risk-adjusted ROI metric defined in (15). Table 9 summarizes the annual benefit components for both facilities, together with a conservative estimate used for ROI calculation.
Using the conservative combined annual benefit B̃ = $192,000 and the corresponding annualized implementation and operating cost C, substitution into (15) yields a risk-adjusted annual ROI of approximately 2.48 (248%), which implies a benefit-to-cost ratio of about 3.5:1 under conservative assumptions, i.e., the conservative benefit estimate is roughly three and a half times the annualized cost. In practical terms, this indicates that every monetary unit invested in the system is fully recovered and generates an additional 1.48 units of net benefit per year. These financial gains are consistent with the operational improvements reported in Table 8, where reductions in task duration, delivery delays, and planning time translate directly into labor and delay cost savings.
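The arithmetic can be reproduced as follows. The annualized cost C is back-computed from the reported figures (B̃ = $192,000, ROI ≈ 248%) and is therefore an assumption for illustration, not a value taken from the cost tables.

```python
def risk_adjusted_roi(benefit_conservative, annualized_cost):
    """Risk-adjusted ROI of Eq. (15): net return per unit of annualized cost."""
    return (benefit_conservative - annualized_cost) / annualized_cost

B = 192_000.0   # conservative combined annual benefit (from the case study)
C = 55_200.0    # ASSUMED annualized cost, implied by the reported ~248% ROI
roi = risk_adjusted_roi(B, C)
print(round(roi, 2), round(B / C, 1))  # ~2.48 ROI, ~3.5:1 benefit-to-cost ratio
```

Note the relationship used for the back-calculation: the benefit-to-cost ratio equals 1 + ROI_risk, so a 248% ROI corresponds to roughly 3.5 units of benefit per unit of cost.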

4.5. Pre- and Post-Implementation Performance Profiles

The pre- and post-implementation performance profiles summarized in Table 10 indicate that the overall score distribution remains stable while exhibiting a slight but systematic improvement. Key indicators such as the aggregate performance score (50.4% to 50.6%) and risk score (49.2% to 49.1%) show small shifts in central tendency accompanied by reduced dispersion. The combination of modest improvements and stable delay probabilities suggests that gains in operational KPIs are realized without destabilizing the underlying performance profiles. The narrow separation between pre- and post-implementation indicators further implies that a small set of outliers does not drive the observed improvements but instead reflects incremental, systematic gains across the portfolio of projects.

4.6. Taguchi-Based Parameter Optimization

Table 11 summarizes the results of the Taguchi analysis, reporting, for each design code, the optimized completion time, the relative improvement with respect to the baseline, and a feasibility assessment. Across all designs, the integrated optimum obtained from the signal-to-noise analysis corresponds to an objective value of 535.19, indicating a substantial reduction in completion times under the selected factor levels. The resulting parameter combination for population size, crossover rate, and mutation rate is consistent with the configuration reported in Table 4, confirming that the adopted GA settings are supported by systematic design-of-experiments evidence rather than ad hoc adjustment.
In addition, the user interface enables planners to trigger GA-based optimization and review outputs at the design-code level. A brief description of the method and the “Start GA” control is provided at the top. Under “Optimization Results”, each code (e.g., M-AYK-011, M-CKM-012) is reported in two blocks: (i) “Research Workers” summarizing the current context (“Limited”, “Quality”, “Line”), and (ii) “Alternative Workers” listing recommended workers with “Service: suitable” labels and “Applicability (%)” scores. This computational system applies principles of evolutionary computation to workforce scheduling, maximizing collective productivity through multi-parameter fitness evaluation while maintaining transparency for planners, who can inspect and adjust assignments as needed.

5. Discussion

The empirical results indicate that the proposed GA–MCS–Taguchi framework yields consistent improvements in project-based manufacturing. Across the two validation facilities, task completion times and delivery delays are reduced, and planning time drops by more than four-fifths, with all key effects statistically significant and operationally meaningful. These improvements are achieved alongside a conservative annual risk-adjusted ROI of roughly 248%, indicating that the system is not only algorithmically effective but also economically and organizationally viable under operating conditions.
From a theoretical standpoint, the study contributes to workforce scheduling and hybrid optimization in several directions. First, the multi-objective formulation and associated fitness function explicitly couple competency-adjusted makespan, workload balance, and competency alignment, providing a structured way to handle tradeoffs that are often treated informally in practice. Second, the integration of GA, MCS, and Taguchi design demonstrably outperforms the constituent components in isolation. The metaheuristic search explores the combinatorial space of assignments, the stochastic evaluation propagates uncertainty in execution times, and the robust parameter design stabilizes algorithmic performance without requiring exhaustive tuning. This notion further extends standard optimization criteria by emphasizing that sustained use and integration into day-to-day routines are necessary conditions for realizing the full value of algorithmic improvements. In this study, deployment-adjusted effectiveness is operationalized through joint observation of (i) statistically significant improvements in the core KPIs, (ii) sustained user adoption above 80% in operation, and (iii) conservative risk-adjusted ROI based on lower-bound benefit estimates reported in Table 9.
The results indicate that advanced optimization can coexist with transparency and operational usability. The skills matrix, along with the web-based interfaces, allows planners and supervisors to review assignments instead of simply accepting “black box” recommendations. The substantial decrease in planning time, along with a more balanced job distribution and decreased delays, indicates that the framework can free up managerial capacity for higher-level decision-making while simultaneously improving workplace performance. The deployment-adjusted perspective provides a pragmatic framework for assessing future digital initiatives: strategies that excel in simulations yet fail to sustain adoption would be explicitly penalized.
These contributions, however, should be viewed in light of several limitations that restrict generalizability. First, the empirical validation is based on two case facilities rather than a statistically representative sample of SMEs. The study therefore supports analytic generalization, demonstrating that the proposed framework can be applied under specific conditions, rather than making population-wide inferences about all SMEs. The validation period, although it extends up to 18 months of operation, may not fully capture long-term dynamics such as structural changes in product mix, learning effects, or major demand shocks. The empirical focus on project-based manufacturing limits direct applicability to process industries, where continuous flows, different bottleneck structures, and alternative performance metrics may require a non-trivial reformulation of the model.
A second limitation relates to the type of SMEs for which the framework is immediately suitable. Both facilities are located in the same geographic area, share generally similar regulatory and institutional conditions, and have moderate digital maturity: routinely collected order and timekeeping data, an up-to-date skill matrix, and at least one staff member with basic industrial engineering or operations expertise [70]. Recent surveys in Europe indicate that 73% of SMEs reach basic digital intensity and 95% maintain broadband internet access, yet only 22% provide ICT training to staff [71,72]. OECD data further reveal persistent adoption gaps between SMEs and large firms in sophisticated technologies such as enterprise resource planning, customer relationship management, and big data analytics [73]. Therefore, the results should not be seen as evidence that the framework can be implemented in all SMEs, especially not in micro-enterprises or organizations with highly fragmented data and very low levels of digitalization. Instead, the findings suggest that for SMEs with these minimum capabilities, the GA–MCS–Taguchi system can be deployed on standard hardware in a short time frame and produce significant performance improvements.
A further limitation is that systematic large-scale benchmarking against commercial exact solvers was not conducted in realistic stochastic instances. While the deterministic core formulation is standard and solvable for small cases, a comprehensive comparison on larger problem sizes is left for future research.
These constraints delineate clear opportunities for future research, including larger, multi-site studies with stratified samples of SMEs across various sectors, such as process and service industries, and at different levels of digital sophistication, as well as long-term evaluations conducted over extended periods. Adapting the model to settings with limited data and low digital maturity, for example, by using surrogate models, simplified metrics, or hybrid manual and algorithmic procedures, is a crucial step toward greater inclusiveness. Additionally, exploring alternative versions of the concept of deployment-adjusted effectiveness, such as incorporating learning curves, fairness considerations, or resilience indicators, constitutes a promising avenue for advancing both the generalizability and equity of outcomes.

6. Conclusions

The results show that labor scheduling in project-based manufacturing can be effectively addressed with a hybrid approach that combines metaheuristic optimization, stochastic evaluation, and robust parameter design within a multi-criteria decision-making framework. In the system proposed in this study, GA, MCS, and the Taguchi method were used together, and a modular web-based application was built around this hybrid method. This approach enables the application of advanced optimization in industrial environments while maintaining transparency and control, and allows production personnel to oversee the process. Although the study was conducted in a project-based enterprise, the approach can be adapted to other production environments with multi-skilled workforces and labor constraints.
The results from the two production facilities show statistically consistent and operationally significant improvements in task duration, delivery performance, and planning effort, achieved alongside high user adoption and a strong risk-adjusted return on investment. These results demonstrate that the framework not only improves solution quality but also systematically improves daily operational performance and decision-making processes when integrated into a broader digital transformation program.
Beyond the context of workforce planning, the proposed approach points to a more general design pattern that brings optimization research closer to industrial application. It combines multi-objective modeling of operational trade-offs, explicit representation of uncertainty through simulation, systematic and efficient adjustment of algorithm parameters, and a focus on distribution and application quality in field conditions rather than headline technical metrics. As production systems become more human-centric, data-intensive, and operationally variable, the need for such integrated and ready-to-deploy designs will increase to translate advanced analytics into lasting gains on the shop floor. Future work could extend this research line by integrating machine learning components into the framework, which would strengthen uncertainty prediction and support adaptable, time-sensitive decision updates in complex labor environments.

Author Contributions

Conceptualization, B.D. (Berrin Denizhan); methodology, B.F. and B.Ö.; software, M.E.E.; validation, B.F., M.E.E. and B.Ö.; formal analysis, B.F. and B.D. (Bengisu Derya); investigation, B.F., M.E.E., B.Ö. and B.D. (Bengisu Derya); writing, B.F., M.E.E., B.Ö. and B.D. (Bengisu Derya); editing, E.Y.; supervision, B.D. (Berrin Denizhan). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets generated and analyzed during the current study are available in the GitHub repository at https://github.com/meferbas/Workforce-Scheduling-AI (accessed on 4 December 2025).

Acknowledgments

This work was carried out under the 2209 Industry-Oriented Research Project Support Programme for Undergraduate Students, TÜBİTAK.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Karam, A.; Attia, E.-A.; Duquenne, P. A MILP model for an integrated project scheduling and multi-skilled workforce allocation with flexible working hours. IFAC-PapersOnLine 2017, 50, 13964–13969. [Google Scholar] [CrossRef]
  2. Bobek, A.; Imondi, C.; Shott, T.; Toobaei, M. Heterogeneous project scheduling for optimal six-sigma cost reduction using linear programing. In Proceedings of PICMET’12: Technology Management for Emerging Technologies, Vancouver, BC, Canada, 29 July–2 August 2012; pp. 2405–2413. Available online: https://ieeexplore.ieee.org/document/6304257 (accessed on 4 December 2025).
  3. Angelidis, E.; Bohn, D.; Rose, O. A simulation tool for complex assembly lines with multiskilled resources. In Proceedings of the 2013 Winter Simulation Conference, Washington, DC, USA, 8–11 December 2013. [Google Scholar] [CrossRef][Green Version]
  4. Chen, J.C.; Chen, Y.Y.; Chen, T.L.; Lin, Y.H. Multi-project scheduling with multi-skilled workforce assignment considering uncertainty and learning effect for large-scale equipment manufacturer. Comput. Ind. Eng. 2022, 169, 108240. [Google Scholar] [CrossRef]
  5. Gerekli, M.S.; Turan, A.H.; Gök, M.; Kocaoğlu, B. Digital transformation in SMEs: Barriers, drivers, and roadmap. Procedia Comput. Sci. 2021, 181, 737–745. [Google Scholar] [CrossRef]
  6. Gezgin, A.Y.; Arıcıoğlu, M.A. Industry 4.0 and management 4.0: Examining the impact of environmental, cultural, and technological changes. Sustainability 2025, 17, 3601. [Google Scholar] [CrossRef]
  7. Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness; W.H. Freeman: San Francisco, CA, USA, 1979; ISBN 978-0-7167-1045-5. [Google Scholar]
  8. Pinedo, M.L. Scheduling: Theory, Algorithms, and Systems, 5th ed.; Springer International Publishing: Cham, Switzerland, 2016; ISBN 978-3-319-26580-3. [Google Scholar] [CrossRef]
  9. MESA International. MESA Model: A Framework for Smarter Manufacturing. Available online: https://mesa.org/topics-resources/mesa-model/ (accessed on 30 September 2024).
  10. Silva, J.; Ávila, P.; Patrício, L.; Sá, J.C.; Ferreira, L.P.; Bastos, J.; Castro, H. Improvement of planning and time control in the project management of a metalworking industry—Case study. Procedia Comput. Sci. 2022, 196, 288–295. [Google Scholar] [CrossRef]
  11. World Economic Forum. Future of Jobs Report 2025; World Economic Forum: Geneva, Switzerland, 2025; Available online: https://reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf (accessed on 29 September 2025).
  12. Khadivi, M.; Charter, T.; Yaghoubi, M.; Jalayer, M.; Ahang, M.; Shojaeinasab, A.; Najjaran, H. Deep reinforcement learning for machine scheduling: Methodology, the state-of-the-art, and future directions. Comput. Ind. Eng. 2025, 201, 110856. [Google Scholar] [CrossRef]
  13. Liu, X.; Chen, X.; Chau, V.; Musiał, J.; Błażewicz, J. Flexible job shop scheduling problem using graph neural networks and reinforcement learning. Comput. Oper. Res. 2025, 182, 107139. [Google Scholar] [CrossRef]
  14. Lei, K.; Guo, P.; Wang, Y.; Zhang, J.; Meng, X.; Qian, L. Large-scale dynamic scheduling for flexible job-shop with random arrivals of new jobs by hierarchical reinforcement learning. IEEE Trans. Ind. Inform. 2023, 20, 1007–1018. [Google Scholar] [CrossRef]
  15. Çiftçi, F.S.; Taşkıran, A.; Bulak, M.E. How to achieve sustainable emergency management? A case study for Istanbul city with a stochastic approach. Comput. Ind. Eng. 2025, 209, 111456. [Google Scholar] [CrossRef]
  16. Denizhan, B.; Yıldırım, E.; Akkan, Ö. An order-picking problem in a medical facility using genetic algorithm. Processes 2025, 13, 22. [Google Scholar] [CrossRef]
  17. Johnson, S.M. Optimal two- and three-stage production schedules with setup times included. Nav. Res. Logist. Q. 1954, 1, 61–68. [Google Scholar] [CrossRef]
  18. MacCarthy, B.L.; Liu, J. Addressing the gap in scheduling research: A review of optimization and heuristic methods in production scheduling. Int. J. Prod. Res. 1993, 31, 59–79. [Google Scholar] [CrossRef]
  19. Harjunkoski, I.; Maravelias, C.T.; Bongers, P.; Castro, P.M.; Engell, S.; Grossmann, I.E.; Hooker, J.; Méndez, C.; Sand, G.; Wassick, J. Scope for industrial applications of production scheduling models and solution methods. Comput. Chem. Eng. 2014, 62, 161–193. [Google Scholar] [CrossRef]
  20. Board of Governors of the Federal Reserve System. Industrial Production and Capacity Utilization: G.17 Statistical Release. Available online: https://www.federalreserve.gov/releases/g17/current/ (accessed on 17 September 2025).
  21. FourKites. Ocean Shipping Report: 2024 Trends and Challenges. Available online: https://www.fourkites.com/resources/ocean-shipping-report/ (accessed on 30 September 2025).
  22. Torba, R.; Dauzère-Pérès, S.; Yugma, C.; Gallais, C.; Pouzet, J. Solving a real-life multi-skill resource-constrained multi-project scheduling problem. Ann. Oper. Res. 2024, 338, 69–114. [Google Scholar] [CrossRef]
  23. Yannibelli, V.; Amandi, A. Project Scheduling: A Memetic Algorithm with Diversity-Adaptive Components that Optimizes the Effectiveness of Human Resources. Polibits 2015, 52, 93–103. [Google Scholar] [CrossRef]
  24. Li, J.; Liu, Q.; Li, X.; Gao, L. An efficient problem-specific evolutionary algorithm for flexible job shop scheduling problem with specific workers in highly customised manufacturing systems. Int. J. Prod. Res. 2025, 63, 7238–7259. [Google Scholar] [CrossRef]
  25. Tiwari, V.; Patterson, J.H.; Mabert, V.A. Scheduling projects with heterogeneous resources to meet time and quality objectives. Eur. J. Oper. Res. 2009, 193, 780–790. [Google Scholar] [CrossRef]
  26. Kolter, M.; Grunow, M.; Kolisch, R.; Stäblein, T. Strategic workforce and project planning for engineering automotive production systems: Tackling the transition to electric vehicles. Int. J. Prod. Res. 2025, 63, 1105–1125. [Google Scholar] [CrossRef]
  27. Bierwirth, C.; Mattfeld, D.C. Production scheduling with Genetic Algorithm. Evol. Comput. 1999, 7, 1–17. [Google Scholar] [CrossRef]
  28. Bredael, D.; Vanhoucke, M. A GA with resource buffers for the resource-constrained multi-project scheduling problem. Eur. J. Oper. Res. 2024, 315, 19–34. [Google Scholar] [CrossRef]
  29. Chen, R.; Gu, D.; Liang, C.; Jiang, L. A multi-skilled staff scheduling and team configuration optimisation model for artificial intelligence project portfolio considering competence development and innovation-driven. Int. J. Prod. Res. 2024, 62, 7763–7792. [Google Scholar] [CrossRef]
  30. Ono, I.; Yamamura, M.; Kobayashi, S. A genetic algorithm for job-shop scheduling problems using job-based order crossover. In Proceedings of the IEEE International Conference on Evolutionary Computation, Nagoya, Japan, 20–22 May 1996; pp. 547–552. [Google Scholar] [CrossRef]
  31. Algethami, H.; Martínez-Gavara, A.; Landa-Silva, D. Adaptive multiple crossover genetic algorithm to solve workforce scheduling and routing problem. J. Heuristics 2019, 25, 753–792. [Google Scholar] [CrossRef]
  32. Cai, X.; Li, K.N. A genetic algorithm for scheduling staff of mixed skills under multi-criteria. Eur. J. Oper. Res. 2000, 125, 359–369. [Google Scholar] [CrossRef]
  33. Atay, Y.; Kodaz, H. Optimization of job shop scheduling problems using modified clonal selection algorithm. Turk. J. Electr. Eng. Comput. Sci. 2014, 22, 1528–1539. [Google Scholar] [CrossRef]
  34. Yıldırım, E.; Denizhan, B. A two-echelon pharmaceutical supply chain optimization via genetic algorithm. In Lecture Notes in Mechanical Engineering; Springer: Singapore, 2022; pp. 77–87. [Google Scholar] [CrossRef]
  35. Çubukçuoğlu, A.; Karacan, I.; Ceylan, Z.; Bulkan, S. Minimizing Makespan in Ordered Flow Shop Scheduling Using a Robust Genetic Algorithm. Processes 2025, 13, 1583. [Google Scholar] [CrossRef]
  36. Bastian, N.D.; Lunday, B.J.; Fisher, C.B.; Hall, A.O. Models and methods for workforce planning under uncertainty: Optimizing U.S. Army cyber branch readiness and manning. Omega 2020, 92, 102171. [Google Scholar] [CrossRef]
  37. Al-Araidah, O.; Kremer, G.E.O.; Günay, E.E.; Chu, C. A Monte Carlo simulation to estimate fatigue allowance for female order pickers in high traffic manual picking systems. Int. J. Prod. Res. 2020, 59, 4711–4722. [Google Scholar] [CrossRef]
  38. Yalçınkaya, M.; Birgören, B. Estimating confidence lower bounds of Weibull lower percentiles with small samples in material reliability analysis. Pamukkale Univ. Muh. Bilim Derg. 2020, 26, 184–194. [Google Scholar] [CrossRef]
  39. Marseguerra, M.; Zio, E. Optimizing maintenance and repair policies via genetic algorithms and Monte Carlo simulation. Reliab. Eng. Syst. Saf. 2000, 68, 249–261. [Google Scholar] [CrossRef]
  40. Marseguerra, M.; Zio, E.; Podofillini, L. Condition-based maintenance optimization by means of genetic algorithms and Monte Carlo simulation. Reliab. Eng. Syst. Saf. 2002, 77, 151–165. [Google Scholar] [CrossRef]
  41. Yoshitomi, Y.; Yamaguchi, R. A genetic algorithm and the Monte Carlo method for stochastic job-shop scheduling. Int. Trans. Oper. Res. 2003, 10, 577–596. [Google Scholar] [CrossRef]
  42. Magalhães-Mendes, J. A genetic algorithm for the job shop scheduling with a new local search using the Monte Carlo method. In Proceedings of the 10th WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases (AIKED), Cambridge, UK, 20–22 February 2011; pp. 38–43. Available online: https://www.wseas.us/e-library/conferences/2011/Cambridge/AIKED/AIKED-02.pdf (accessed on 30 September 2025).
  43. Candan, G.; Yazgan, H.R. Genetic algorithm parameter optimisation using Taguchi method for a flexible manufacturing system scheduling problem. Int. J. Prod. Res. 2015, 53, 897–915. [Google Scholar] [CrossRef]
  44. Denizhan, B.; Gürbüz, F.; Öztürk, C. Managing production process in a pet resin industry using data mining and genetic programming. Int. J. Ind. Eng. 2022, 29, 607–617. [Google Scholar] [CrossRef]
  45. Mondal, S.; Singh, R. Optimizing cybersecurity budgets in financial networks: A comparative study of genetic algorithms and trust-region methods. Preprint 2025. [Google Scholar] [CrossRef]
  46. Kazemzadeh, N.; Ryan, S.M.; Hamzeei, M. Robust optimization vs. stochastic programming incorporating risk measures for unit commitment with uncertain variable renewable generation. Energy Syst. 2017, 10, 517–541. [Google Scholar] [CrossRef]
  47. Taguchi, G. Introduction to Quality Engineering: Designing Quality into Products and Processes; Asian Productivity Organization: Tokyo, Japan, 1986; ISBN 9283310845. [Google Scholar]
  48. Yang, W.H.P.; Tarng, Y.S. Design optimization of cutting parameters for turning operations based on the Taguchi method. J. Mater. Process. Technol. 1998, 84, 122–129. [Google Scholar] [CrossRef]
  49. Toksoy, M.S. A clustering-based simulated annealing algorithm with Taguchi method for the discrete ordered median problem. Sak. Univ. J. Sci. 2022, 26, 169–184. [Google Scholar] [CrossRef]
  50. Rashid, K. Optimize the Taguchi method, the signal-to-noise ratio, and the sensitivity. Int. J. Stat. Appl. Math. 2023, 8, 64–70. [Google Scholar] [CrossRef]
  51. Zhujani, F.; Todorov, G.; Kamberov, K.; Abdullahu, F. Mathematical modeling and optimization of machining parameters in CNC turning process of Inconel 718 using the Taguchi method. J. Eng. Res. 2025, 13, 320–330. [Google Scholar] [CrossRef]
  52. Yazgan, H.R.; Beypınar, İ.; Boran, S.; Ocak, C. A new algorithm and multi-response Taguchi method to solve line balancing problem in an automotive industry. Int. J. Adv. Manuf. Technol. 2011, 57, 379–392. [Google Scholar] [CrossRef]
  53. Tsai, J.T.; Liu, T.K.; Ho, W.H.; Chou, J.H. An improved genetic algorithm for job-shop scheduling problems using Taguchi-based crossover. Int. J. Adv. Manuf. Technol. 2008, 38, 987–998. [Google Scholar] [CrossRef]
  54. Himmetoglu, S.; Aydogan, E.K.; Özcan, F.; Karahan, O.; Atiş, C.D. Rough-AHP and MOORA-based Taguchi optimization for mixture proportion of building concrete. Politek. Derg. 2023, 26, 1307–1317. [Google Scholar] [CrossRef]
  55. Song, L.; Xu, Z.; Wang, C.; Su, J. A new decision method of flexible job shop rescheduling based on WOA-SVM. Systems 2023, 11, 59. [Google Scholar] [CrossRef]
  56. Potočnik, P.; Jeromen, A.; Govekar, E. Genetic Algorithm-Based Framework for Optimization of Laser Beam Path in Additive Manufacturing. Metals 2024, 14, 410. [Google Scholar] [CrossRef]
  57. Yang, J.; Zheng, Y.; Wu, J. Towards Sustainable Production: An Adaptive Intelligent Optimization Genetic Algorithm for Solid Wood Panel Manufacturing. Sustainability 2024, 16, 3785. [Google Scholar] [CrossRef]
  58. Yang, J.; Zheng, Y.; Wu, J.; Wang, Y.; He, J.; Tang, L. Enhancing Manufacturing Excellence with Digital-Twin-Enabled Operational Monitoring and Intelligent Scheduling. Appl. Sci. 2024, 14, 6622. [Google Scholar] [CrossRef]
  59. Günay, E.E.; Ramadani, R.; Mundiwala, M.; Kashef, A.; Ma, J.; Hu, C.; Kremer, P.; Kremer, G.E. Design Improvement for Facilitating Transmission Control Unit Remanufacturing. In Proceedings of the ASMEs 2025 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Boston, MA, USA, 17–20 August 2025; pp. 1–10. [Google Scholar] [CrossRef]
  60. Yıldırım, E.; Denizhan, B. Comparative Study of Application of Production Sequencing and Scheduling Problems in Tire Mixing Operations with ADAM, Grey Wolf Optimizer, and Genetic Algorithm. Systems 2025, 13, 998. [Google Scholar] [CrossRef]
  61. Wu, C.; Xiao, Y.; Zhu, X. Research on Optimization Algorithm of AGV Scheduling for Intelligent Manufacturing Company: Taking the Machining Shop as an Example. Processes 2023, 11, 2606. [Google Scholar] [CrossRef]
  62. Elbasheer, M.; Longo, F.; Mirabelli, G.; Solina, V. Flexible Symbiosis for Simulation Optimization in Production Scheduling: A Design Strategy for Adaptive Decision Support in Industry 5.0. J. Manuf. Mater. Process. 2024, 8, 275. [Google Scholar] [CrossRef]
  63. Gen, M.; Cheng, R. Genetic Algorithms and Engineering Optimization; Wiley: New York, NY, USA, 2000; ISBN 978-0471315315. [Google Scholar]
  64. Eiben, A.E.; Smith, J.E. Introduction to Evolutionary Computing, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar] [CrossRef]
  65. Montgomery, D.C. Design and Analysis of Experiments, 10th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2019; ISBN 978-1-119-49244-3. [Google Scholar]
  66. Malempati, M. Developing end-to-end intelligent finance solutions through AI and cloud integration. Int. J. Sci. Res. 2021, 10(12), 1602–1615. [Google Scholar] [CrossRef]
  67. Karp, R.M. Reducibility among combinatorial problems. In Complexity of Computer Computations; Miller, R.E., Thatcher, J.W., Eds.; Springer: Boston, MA, USA, 1972; pp. 85–103. [Google Scholar] [CrossRef]
  68. Papadimitriou, C.H.; Steiglitz, K. Combinatorial Optimization: Algorithms and Complexity; Prentice Hall: Englewood Cliffs, NJ, USA, 1982; ISBN 978-0131524620. Available online: https://api.semanticscholar.org/CorpusID:265900001 (accessed on 4 December 2025).
  69. Dass, P.L.; Nair, S.V.; Kurien, G.P.; Chandar, S.K. A systematic literature network analysis approach to assess the topology of modern-era supply chain risk management research. Int. J. Ind. Syst. Eng. 2025, 50, 106–145. [Google Scholar] [CrossRef]
  70. OECD. The Digital Transformation of SMEs; OECD Studies on SMEs and Entrepreneurship; OECD Publishing: Paris, France, 2021. [Google Scholar] [CrossRef]
  71. Schulze Brock, P.; Lagüera González, J.; Di Bella, L.; Katsinis, A. SMEs Performance Review 2025; Publications Office of the European Union: Luxembourg, 2025; p. JRC141865. [Google Scholar] [CrossRef]
  72. Eurostat. Digitalisation in Europe–2025 Edition; Publications Office of the European Union: Luxembourg, 2025; Available online: https://ec.europa.eu/eurostat/web/interactive-publications/digitalisation-2025 (accessed on 4 December 2025).
  73. OECD. SMEs Digitalisation for Competitiveness: 2025 OECD D4SMEs Survey—Policy Highlights; Organisation for Economic Co-operation and Development (OECD): Paris, France, 2025; Available online: https://www.oecd.org/content/dam/oecd/en/networks/oecd-digital-for-smes-global-initiative/D4SME-2025-Policy-Highlights.pdf (accessed on 4 December 2025).
Figure 1. Proposed integrated optimization framework for workforce scheduling.
Figure 2. Structured implementation of the Taguchi method.
Figure 3. Web-based implementation interface and user experience design. Last names are masked with *.
Figure 4. The impact of varying population sizes on two key performance metrics.
Table 1. Mathematical notation.

Symbol | Description | Domain
W = {1, 2, …, n} | Set of workers | n ∈ Z+
T = {1, 2, …, m} | Set of tasks | m ∈ Z+
x_ij | Binary assignment variable (worker i to task j) | {0, 1}
c_ij | Competency factor for worker i on task j | [0.5, 2.0]
d_j | Base duration of task j | R+
p_ij | Performance score of worker i on task j | [0, 1]
r_j | Resource requirement (coverage) for task j | Z+
C_i | Capacity of worker i over the horizon | R+
q_ij | Qualification indicator | {0, 1}
d̃_ij | Stochastic (effective) duration | R+
μ_ij | Expected performance baseline | R+
σ_ij | Performance standard deviation | R+
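The notation in Table 1 can be made concrete with a small feasibility check. The following is an illustrative sketch only, not the authors' implementation: the instance data, the `is_feasible` helper, and the exact constraint forms (exact coverage, competency-scaled workload) are assumptions consistent with the table's definitions.

```python
# Hypothetical sketch of the Table 1 notation: verify that a binary
# assignment x[i][j] satisfies coverage r_j, qualification q_ij, and
# capacity C_i, with workload scaled by the competency factor c_ij.

def is_feasible(x, d, c, r, C, q):
    n, m = len(x), len(x[0])
    # Coverage: each task j needs exactly r_j assigned workers.
    for j in range(m):
        if sum(x[i][j] for i in range(n)) != r[j]:
            return False
    # Qualification: a worker may only take tasks they are qualified for.
    if any(x[i][j] > q[i][j] for i in range(n) for j in range(m)):
        return False
    # Capacity: effective workload d_j / c_ij must not exceed C_i.
    for i in range(n):
        load = sum(x[i][j] * d[j] / c[i][j] for j in range(m))
        if load > C[i]:
            return False
    return True

# Tiny invented instance: 2 workers, 2 tasks, one worker per task.
x = [[1, 0], [0, 1]]
d = [48.0, 33.0]              # base durations in minutes
c = [[1.2, 0.8], [0.9, 1.1]]  # competency factors in [0.5, 2.0]
r = [1, 1]
C = [60.0, 60.0]
q = [[1, 1], [1, 1]]
print(is_feasible(x, d, c, r, C, q))  # True
```

Worker 1's effective load here is 48/1.2 = 40 min, below the 60 min capacity; tightening either capacity would make the same assignment infeasible.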
Table 2. Integrated system overview.

Problem Component | System Requirement | Solution Method | Technical Implementation
Manual task assignment | Automatic optimization | GA | Population-based search with 100 individuals
Competency mismatch | Skill-based matching | Suitability matrix | Weighted scoring: competency (50%), experience (30%), efficiency (20%)
Performance uncertainty | Stochastic modeling | MCS | 10,000 iterations with truncated normal distributions
Parameter sensitivity | Robust optimization | Taguchi method | L9 orthogonal array with S/N ratio analysis
Interface complexity | User-friendly system | Django web framework | Responsive design with PostgreSQL backend
Decision support | Data-driven insights | Multi-criteria optimization | Weighted objectives: duration (60%), balance (25%), alignment (15%)
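The two weighted scores in Table 2 can be sketched directly from their stated weights. The helper names and the sample inputs below are hypothetical; only the weights come from the table.

```python
# Sketch of the weighted scores in Table 2 (function names and inputs are
# invented for illustration; the weights are those reported in the table).

def suitability(competency, experience, efficiency):
    """Worker-task suitability: competency 50%, experience 30%, efficiency 20%."""
    return 0.50 * competency + 0.30 * experience + 0.20 * efficiency

def objective(duration_score, balance_score, alignment_score):
    """Multi-criteria objective: duration 60%, balance 25%, alignment 15%."""
    return 0.60 * duration_score + 0.25 * balance_score + 0.15 * alignment_score

print(round(suitability(0.9, 0.7, 0.8), 3))  # 0.82
print(round(objective(0.8, 0.6, 0.7), 3))    # 0.735
```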
Table 3. Task–worker performance matrix with statistical parameters.

Task | Worker A | Worker B | Worker C | Worker D | Mean Duration | Std Dev
1 | 0.85 | 0.64 | 0.78 | 0.31 | 48 min | 6.2
2 | 0.47 | 0.90 | 0.56 | 0.40 | 33 min | 4.7
3 | 0.92 | 0.71 | 0.62 | 0.58 | 57 min | 8.9
4 | 0.60 | 0.83 | 0.49 | 0.36 | 25 min | 3.1
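The Monte Carlo side of the framework can be illustrated by sampling effective durations for a task such as Task 1 above (base duration 48 min) under a truncated normal performance multiplier with μ = 1.0, σ = 0.15, and bounds [0.7, 1.3], as specified in Table 4. This is a minimal sketch, assuming simple rejection sampling; it is not the authors' implementation.

```python
# Monte Carlo sketch of the uncertainty model (illustrative, not the
# authors' code): effective duration = base duration x a multiplier drawn
# from a truncated normal with mu = 1.0, sigma = 0.15, bounds [0.7, 1.3].
import random
import statistics

def simulate_duration(base_minutes, iterations=10_000, seed=7):
    rng = random.Random(seed)
    samples = []
    for _ in range(iterations):
        # Rejection sampling keeps each draw inside the [0.7, 1.3] bounds.
        while True:
            m = rng.gauss(1.0, 0.15)
            if 0.7 <= m <= 1.3:
                break
        samples.append(base_minutes * m)
    return statistics.mean(samples), statistics.stdev(samples)

mean_d, sd_d = simulate_duration(48)  # Task 1 base duration from Table 3
print(round(mean_d, 1), round(sd_d, 1))
```

Because the truncation is symmetric about μ = 1.0, the simulated mean stays near the 48 min base, while the spread is somewhat smaller than 48 × 0.15 due to the clipped tails.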
Table 4. Algorithm parameters and optimization settings.

Component | Parameter | Value | Taguchi Opt. | S/N (dB) | Final Config.
GA | Population size | 100–200 | L9 array testing | 4.8 | 150 (optimal)
GA | Generations | 300 | Fixed | — | 300
GA | Crossover rate | 0.75–0.85 | 3 levels | 3.2 | 0.80
GA | Mutation rate | 0.05–0.15 | 3 levels | 2.1 | 0.10
GA | Selection | Tournament | Fixed | — | Tournament (k = 4)
Monte Carlo | Iterations | 10,000 | Fixed | — | 10,000
Monte Carlo | Distribution | Truncated normal | Fixed | — | μ = 1.0, σ = 0.15
Monte Carlo | Bounds | [0.7, 1.3] | Fixed | — | Performance limits
Taguchi | Array type | L9 (3^3) | Systematic | — | 9 experiments
Taguchi | Factors | 3 | Fixed | — | A, B, C
Taguchi | Levels | 3 each | Fixed | — | Low, medium, high
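For a completion-time response, Taguchi analysis typically uses the smaller-the-better signal-to-noise ratio; the sketch below shows that standard formula. The replicate values are invented for illustration, and this is not claimed to reproduce the S/N (dB) column above, which reports factor effects from the L9 experiments.

```python
# Smaller-the-better Taguchi S/N ratio (standard formula; the replicate
# completion times below are hypothetical).
import math

def sn_smaller_the_better(values):
    # S/N = -10 * log10(mean of squared responses), in decibels.
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

print(round(sn_smaller_the_better([320.8, 317.7, 316.4]), 2))  # -50.06
```

A less negative S/N corresponds to smaller (better) completion times, which is the direction the parameter search favors.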
Table 5. Exact solver vs. GA results for 16 workforce-based benchmark instances derived from the industrial dataset. For each instance, the worker pool size |W|, required team size r_j, optimal MIP objective Z*, GA objective Z_GA, relative gap, and whether the GA recovers the exact same team as the MIP model are reported. In this verification experiment, the GA uses a population size of 50 and 50 generations, matching the order of magnitude used in the main computational study.

Instance | |W| | r_j | Z* (MIP) | Z_GA | Gap (%) | Team Formation
1 | 24 | 6 | 320.79 | 320.79 | 0.00 | Exact match
2 | 24 | 6 | 317.71 | 317.71 | 0.00 | Exact match
3 | 24 | 6 | 316.41 | 316.41 | 0.00 | Exact match
4 | 24 | 6 | 314.44 | 314.44 | 0.00 | Exact match
5 | 24 | 6 | 320.21 | 320.21 | 0.00 | Exact match
6 | 24 | 6 | 331.17 | 331.17 | 0.00 | Exact match
7 | 24 | 6 | 316.04 | 316.04 | 0.00 | Exact match
8 | 24 | 6 | 327.50 | 327.50 | 0.00 | Exact match
9 | 24 | 6 | 321.59 | 321.59 | 0.00 | Exact match
10 | 24 | 6 | 328.46 | 328.46 | 0.00 | Exact match
11 | 24 | 6 | 316.69 | 316.69 | 0.00 | Exact match
12 | 24 | 6 | 328.25 | 328.25 | 0.00 | Exact match
13 | 24 | 6 | 324.54 | 324.54 | 0.00 | Exact match
14 | 24 | 6 | 327.36 | 327.36 | 0.00 | Exact match
15 | 24 | 6 | 325.88 | 325.88 | 0.00 | Exact match
16 | 24 | 6 | 325.18 | 325.18 | 0.00 | Exact match
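The verification idea behind Table 5 can be sketched on a toy instance: enumerate every r_j-sized team exactly (a stand-in for the MIP solver) and report the relative gap of a candidate solution. The scores, the additive team objective, and the pool size below are assumptions for illustration only.

```python
# Toy sketch of the Table 5 verification (hypothetical scores and a simple
# additive objective; not the paper's actual MIP model): enumerate all
# C(|W|, r_j) teams exactly and compute the relative optimality gap.
from itertools import combinations

def best_team(scores, r):
    team = max(combinations(range(len(scores)), r),
               key=lambda t: sum(scores[i] for i in t))
    return team, sum(scores[i] for i in team)

def relative_gap(z_candidate, z_star):
    return 100.0 * abs(z_candidate - z_star) / z_star

scores = [53.4, 52.9, 52.7, 52.4, 53.3, 55.2, 52.6, 54.5]  # invented pool
team, z_star = best_team(scores, 6)
print(team, round(z_star, 2))        # (0, 1, 2, 4, 5, 7) 322.0
print(relative_gap(z_star, z_star))  # 0.0 -- the "0.00% gap, exact match" case
```

At the Table 5 scale (|W| = 24, r_j = 6) there are C(24, 6) = 134,596 teams, still small enough for exact enumeration, which is what makes the 0.00% gaps verifiable.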
Table 6. Implementation modules and performance specifications.

Module | Core Functionality | Performance Req. | User Benefits
Task management | Dynamic task creation, priority assignment, deadline tracking | <1 s response, 1000+ concurrent tasks | 85% reduction in planning time
Employee management | Skill matrix, performance history, competency updates | Real-time updates, trend analysis | Improved decision-making
Optimization engine | One-click optimization, scenario analysis, constraints | <2 s for 100+ tasks, 25 users | Proactive workload management
Results dashboard | Multi-format reporting, visualization, export | Interactive charts, PDF/Excel | Enhanced risk assessment
Mobile interface | High feature accessibility on mobile | Responsive design, offline mode | Field-accessible scheduling
Table 7. Complete ANOVA results with effect sizes and power.

Facility | F-Stat | df | MS | p-Value | η² | Cohen's f | Power
Facility A | F(3, 96) = 12.42 | 3, 96 | 1247.8 | 0.002 * | 0.279 | 0.62 | 0.998
Facility B | F(3, 81) = 8.67 | 3, 81 | 892.4 | 0.008 ** | 0.243 | 0.57 | 0.987
Combined | F(3, 181) = 15.33 | 3, 181 | 1598.2 | <0.001 *** | 0.203 | 0.51 | >0.999
Note: *** p < 0.001, ** p < 0.01, * p < 0.05; MS = Mean Square.
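The effect sizes in Table 7 are internally consistent: partial η² and Cohen's f can be recovered from each F statistic and its degrees of freedom using the standard identities η² = df1·F / (df1·F + df2) and f = √(η² / (1 − η²)). A quick check for Facility A:

```python
# Consistency check for Table 7 using standard effect-size identities.
import math

def eta_squared(f_stat, df1, df2):
    """Partial eta-squared recovered from an F statistic and its dfs."""
    return (f_stat * df1) / (f_stat * df1 + df2)

def cohens_f(eta2):
    """Cohen's f from eta-squared."""
    return math.sqrt(eta2 / (1.0 - eta2))

# Facility A: F(3, 96) = 12.42 gives eta^2 ~ 0.280 and f ~ 0.62, matching
# the reported 0.279 and 0.62 up to rounding.
e2 = eta_squared(12.42, 3, 96)
print(round(e2, 3), round(cohens_f(e2), 2))
```

The same identities reproduce Facility B (η² ≈ 0.243, f ≈ 0.57) from F(3, 81) = 8.67.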
Table 8. Integrated performance analysis.

Metric | Facility A | Facility B | Statistical Test | Effect Size | Threshold | Result
Task duration reduction | 13.2% [10.8–15.6] | 15.1% [12.4–17.8] | t(98) = 12.47, p < 0.001 | d = 1.24 | ≥8% | Both significant
Delivery delay reduction | 64.0% [58.3–69.7] | 60.2% [54.1–66.3] | t(84) = 18.92, p < 0.001 | d = 2.14 | ≥25% | Both significant
Workload balance | 0.17 [0.13–0.21] | 0.19 [0.14–0.24] | t(91) = 7.83, p < 0.001 | d = 0.82 | ≥0.10 | Both significant
Planning time reduction | 85.9% [81.2–90.6] | 83.3% [78.8–87.8] | t(89) = 24.17, p < 0.001 | d = 3.18 | ≥60% | Both significant
User adoption | 89% | 83% | N/A | — | ≥70% | Exceeded
Table 9. Risk-adjusted benefit analysis.

Benefit Category | Facility A Impact | Facility B Impact | Conservative Estimate
Direct labor savings | $68/worker/day | $61/worker/day | $45/worker/day
Delay cost reduction | $2400/month | $2200/month | $1600/month
Management efficiency | $180/manager/day | $165/manager/day | $120/manager/day
Combined annual benefits | $312,000 | $276,000 | $192,000
Table 10. Comparison of pre- and post-implementation performance indicators.

Metric | Pre-Implementation | Post-Implementation | Change/Interpretation
Aggregate performance score | 50.4% | 50.6% | +0.2 percentage points; slight improvement in central tendency
Risk score | 49.2% | 49.1% | −0.1 percentage points; marginal reduction in aggregate risk
Possibility of delay | 24.1% | 24.1% | Approximately unchanged; delay risk remains stable
Mean performance based on design | 85.1% | 85.5% | +0.4 percentage points; higher expected design-level performance
Performance range (min–max) | 31.0–70.2% | 30.8–70.2% | Slightly wider lower tail with unchanged upper bound
Table 11. Taguchi-based optimization outcomes by design code.

Design Code | Optimized Time (min) | Improvement (%) | Evaluation
M-AYK-011 | 628.40 | 16.20 | Possible
M-CKM-012 | 561.65 | 40.73 | Possible
M-D90K-017 | 563.20 | 27.83 | Possible
M-DUK-015 | 702.81 | 26.84 | Possible
M-KAM-001 | 631.10 | 20.95 | Possible
M-KK-016 | 540.10 | 39.15 | Possible
M-KKK-002 | 536.07 | 31.96 | Possible
M-MOD2-014 | 635.13 | 30.08 | Possible
M-MOD3-008 | 509.70 | 30.04 | Possible

Share and Cite

Denizhan, B.; Yıldırım, E.; Fındıklı, B.; Erbaş, M.E.; Öz, B.; Derya, B. Intelligent Workforce Scheduling in Manufacturing: An Integrated Optimization Framework Using Genetic Algorithm, Monte Carlo Simulation, and Taguchi Method. Systems 2026, 14, 26. https://doi.org/10.3390/systems14010026
