Article

Integration of Hybrid Machine Learning and Multi-Objective Optimization for Enhanced Turning Parameters of EN-GJL-250 Cast Iron

1 Electromechanical Department, Institute of Applied Sciences and Techniques, University of Constantine 1, Constantine 25017, Algeria
2 Mechanical Department, Institute of Applied Sciences and Techniques, University of Constantine 1, Constantine 25017, Algeria
3 Laboratory of Mechanics and Materials Development, Department of Civil Engineering, University of Djelfa, Djelfa 17000, Algeria
4 Department of Mechanical Engineering, College of Engineering, Imam Mohammad Ibn Saud Islamic University, Riyadh 11432, Saudi Arabia
5 LMS Laboratory, 8 Mai 1945 University, Guelma 24000, Algeria
6 Department of Mechanical Engineering, Faculty of Sciences and Technology, University of Bordj Bou Arreridj, Bordj Bou Arreridj 34033, Algeria
* Author to whom correspondence should be addressed.
Crystals 2025, 15(3), 264; https://doi.org/10.3390/cryst15030264
Submission received: 6 February 2025 / Revised: 8 March 2025 / Accepted: 10 March 2025 / Published: 12 March 2025

Abstract
This study aims to optimize the turning parameters for EN-GJL-250 grey cast iron using hybrid machine learning techniques integrated with multi-objective optimization algorithms. The experimental design focused on evaluating the impact of cutting tool type, testing three tools: uncoated and coated silicon nitride (Si3N4) ceramic inserts and coated cubic boron nitride (CBN). Key cutting parameters such as depth of cut (ap), feed rate (f), and cutting speed (Vc) were varied to examine their effects on surface roughness (Ra), cutting force (Fr), and power consumption (Pc). The results showed that the coated Si3N4 tool achieved the best surface finish, with minimal cutting force and power consumption, while the uncoated Si3N4 and CBN tools performed slightly worse. Advanced optimization models including improved grey wolf optimizer–deep neural networks (DNN-IGWOs), genetic algorithm–deep neural networks (DNN-GAs), and deep neural network–extended Kalman filters (DNN-EKF) were compared with traditional methods like Support Vector Machines (SVMs), Decision Trees (DTs), and Levenberg–Marquardt (LM). The DNN-EKF model demonstrated exceptional predictive accuracy with an R2 value of 0.99. The desirability function (DF) method identified the optimal machining parameters for the coated Si3N4 tool: ap = 0.25 mm, f = 0.08 mm/rev, and Vc = 437.76 m/min. At these settings, Fr ranged between 46.424 and 47.405 N, Ra remained around 0.520 µm, and Pc varied between 386.518 W and 392.412 W. The multi-objective grey wolf optimization (MOGWO) further refined these parameters to minimize Fr, Ra, and Pc. This study demonstrates the potential of integrating machine learning and optimization techniques to significantly enhance manufacturing efficiency.

1. Introduction

Metal cutting is a fundamental manufacturing process that employs techniques such as turning, milling, and drilling to shape metal components with precision [1,2]. These processes transform raw metal into intricate parts essential across various industries [3,4]. Among these techniques, turning is particularly crucial for producing high-quality components, as it delivers superior dimensional accuracy [5,6] and an exceptionally smooth surface finish [7,8], making it central to the production of cylindrical components with precise dimensions and polished surfaces. This capability makes turning a fundamental element of the automotive, aerospace, and heavy machinery sectors [9,10,11].
Optimizing cutting parameters in metal turning operations is critical for modern manufacturing, as it improves productivity, efficiency, and product quality. However, identifying the most impactful parameters remains complex and time-consuming, often relying on operator experience [12,13]. Factors such as cutting speed, depth of cut, and feed rate have a direct impact on output quality and production costs [14,15]. Therefore, optimizing these parameters is essential for improving the efficiency and effectiveness of the turning process. Achieving optimal cutting parameters involves balancing surface roughness, tool wear, and productivity considerations [16]. Manufacturers must optimize machining parameters to produce high-quality, consistent turned components. Enhanced dimensional accuracy and a smooth surface finish [17,18] boost product performance and reliability [19,20], while reduced variability and rework contribute to lower production costs and increased customer satisfaction [21,22].
Surface roughness and tool wear are critical factors in hard-turning operations, as they directly impact the quality and performance of machined components [23,24]. Factors affecting surface roughness have been studied extensively by both academic researchers and industry practitioners. For instance, Gaitonde et al. investigated machinability aspects in the hard turning of AISI D2 steel using wiper ceramic inserts, achieving improved surface roughness (from 2.5 µm to 1.1 µm) and a 25% reduction in tool wear [25]. Chinchanikar and Choudhury studied the machinability of hardened AISI 4340 steel at different hardness levels using coated carbide tools; optimal performance was found at a hardness of 45 HRC, with a cutting speed of 200 m/min and a feed rate of 0.1 mm/rev, resulting in a better surface finish and longer tool life [26].
Extensive research has also been conducted to identify the optimal settings for achieving desired outcomes, including other influential factors such as tool geometry, tool materials, cutting forces, lubrication, and vibration. Sadredine A. and Nouredine O. conducted an experimental study on the combined effect of tool geometry parameters on surface roughness, cutting forces, and vibrations, demonstrating that the optimal geometries identified reduced cutting forces, particularly the tangential force, and improved surface roughness [27]. Another study, by Pengfei T. et al., involved cutting experiments on IN718 under dry and MQL (minimum quantity lubrication) conditions. MQL improved surface finish and reduced tool wear, with solution heat-treated IN718 showing 56% higher durability and 6.1% lower surface roughness than aged IN718 [28].
Researchers face a significant challenge in optimizing cutting conditions to make the turning process more cost-effective. They use various statistical techniques such as ANOVA and multiple regression analysis to develop predictive models that help in optimizing the turning process [29,30]. Additionally, experimental techniques like Taguchi methods, factorial design, Box–Behnken design (BBD), central composite design (CCD), Response Surface Methodology (RSM) and grey relational analysis (GRA) are employed to create accurate models predicting the outcomes of different cutting parameters across various materials. For instance, Kechagias et al. compared Full Factorial and Taguchi designs for machinability prediction in the turning of a titanium alloy. The study demonstrated that the Taguchi method offers robust and cost-effective solutions with fewer experimental runs, making it a preferable choice over the Full Factorial Design for this application [31]. Similarly, Stamenković et al. conducted a comparative study of Full Factorial, Face Central Composite, and Box–Behnken designs for optimizing hempseed oil extraction by n-hexane. Their findings indicated that while all three designs were effective, Box–Behnken and CCD provided more efficient and resource-effective approaches compared to FFD [32]. In another example, Kouahla et al. utilized RSM and GRA approaches to assess surface roughness, tool vibration, productivity, and cutting power during the machining of Inconel 718 using PVD-coated carbide tools. This study offered a comprehensive optimization strategy for complex machining processes [33].
Optimization techniques are essential for enhancing the efficiency and effectiveness of the turning process, as demonstrated by studies focused on high-speed turning and cermet-based cutting [34,35]. Additionally, advanced optimization approaches have been shown to improve tool life and reduce energy consumption, thereby boosting overall productivity in manufacturing applications [36,37]. Genetic algorithms (GAs) have been widely used to determine optimal cutting parameters [38]. Additionally, advanced methods such as optimization desirability functions, optimization with artificial neural networks (ANNs), deep learning (DL), mono-objective optimization, and multi-objective optimization have shown higher accuracy and efficiency in optimizing machining processes. Mia et al. conducted both mono-objective and multi-objective optimizations of performance factors in high-pressure coolant-assisted turning of Ti-6Al-4V. Their study highlighted the effectiveness of these techniques, achieving optimal cutting speeds of 100 m/min and feed rates of 0.1 mm/rev, which significantly enhanced machining performance [39]. Also, Laouissi et al. examined the optimization of cutting parameters in turning grey cast iron with silicon nitride ceramic tools. By employing RSM, GA, and ANNs, their study demonstrated improvements in surface roughness from 1.8 µm to 0.8 µm and an increase in material removal rate (MRR) by 25% [40].
Furthering the exploration of advanced techniques, Laouissi et al. used ANN-MOALO for multi-response optimization during eco-friendly machining of EN-GJL-250 cast iron. Their findings emphasized the benefits of combining these techniques for sustainable manufacturing processes, achieving a 15% reduction in energy consumption and a 20% improvement in tool life [41]. In another significant study, Nouioua et al. utilized an artificial neural network-based GWO algorithm for multi-response optimization in high machining performance with minimum quantity lubrication. The study demonstrated the algorithm’s efficiency in reducing cutting forces by 12% and enhancing surface finish by 30% [42]. Using advanced techniques, Safi et al. conducted a comparative analysis of WASPAS, MOORA, GRA, and DEAR optimization techniques in the turning of cold work tool steel. Their research underscored the importance of selecting appropriate optimization methods to improve tool wear and surface topography, achieving an 18% reduction in tool wear and a 22% reduction in surface roughness [43]. Moreover, Nouioua et al. (2022) evaluated various algorithms (MOGWO, MOALO, MOSSA, MOVO) in green machining to enhance the turning performance of X210Cr12 steel. Their comprehensive analysis provided valuable insights into the comparative effectiveness of these algorithms, resulting in a 25% improvement in machining efficiency and a 20% reduction in environmental impact [44].
This research endeavors to explore the intricate relationship between cutting parameters and machining performance during the turning of EN-GJL-250 cast iron, using silicon nitride (Si3N4) ceramic and cubic boron nitride (CBN) tools. By integrating advanced machine learning techniques, specifically deep neural networks (DNNs) combined with an extended Kalman filter, with sophisticated multi-objective optimization algorithms such as the MOGWO, this study aims to optimize key machining outputs, including surface roughness, cutting force, and power consumption. Additionally, the research incorporates a comprehensive 3D surface roughness analysis to further understand the effects of different tool materials and machining conditions. Through a systematic approach that includes experimental analysis, predictive modeling, and optimization, this work seeks to provide a robust framework for improving machining efficiency and product quality in industrial applications.

2. Materials and Methods

This study investigates the machining performance of EN-GJL-250 grey cast iron (400 mm length, 90 mm diameter), chosen for its excellent castability, machinability, and wear resistance. Turning operations were conducted on a TOS TRENCIN lathe (SN40C model), with a spindle speed range of 50–2000 rpm, a power rating of 7.5 kW, and a maximum cutting diameter of 400 mm. Two silicon nitride (Si3N4) ceramic inserts were tested: one uncoated Si3N4 (m1) and one coated Si3N4 (m2), both conforming to ISO CNMG 120408 standards [45], with a honed edge radius of 0.02 mm. These inserts were mounted on an ISO PCLNR 2525M12 [45] tool holder, and dry machining was employed to eliminate coolant interference, ensuring precise control over cutting parameters.
The cutting forces were measured using a Kistler 9257B dynamometer (Kistler Group, Winterthur, Switzerland) with an accuracy of ±0.5%, providing real-time acquisition of the resultant cutting force (Fr), defined as the vector sum of force components along the orthogonal axes (x, y, and z) during machining. Accurate measurement of Fr is critical, as it plays a significant role in determining tool wear, surface quality, and power consumption (Pc) in turning operations. Surface roughness (Ra) was measured using a Surftest 201 roughness tester (Mitutoyo Corporation, Kawasaki, Japan) with a resolution of 0.01 µm. Three measurements were recorded for each machined surface and averaged to ensure reliable and consistent results.
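The resultant force and averaged roughness described above are simple computations; a minimal sketch follows, in which the numeric values are illustrative, not measured data from the study:

```python
import math

def resultant_force(fx: float, fy: float, fz: float) -> float:
    """Resultant cutting force Fr as the vector sum of the x, y, z components."""
    return math.sqrt(fx**2 + fy**2 + fz**2)

def mean_roughness(readings: list[float]) -> float:
    """Average of repeated Ra readings, as done for each machined surface."""
    return sum(readings) / len(readings)

# Illustrative component values (N) and Ra readings (µm).
fr = resultant_force(30.0, 25.0, 18.0)
ra = mean_roughness([0.52, 0.50, 0.54])
```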
This study employed a hybrid L81 experimental design to enhance the predictive modeling of machining parameters. The primary dataset (L54), which explores depth of cut (ap), feed rate (f), and cutting speed (Vc), was supplemented with an L27 reference dataset from Chihaoui et al. [46,47]. This reference dataset investigates cubic boron nitride (CBN) tools (m3) over an extended parameter range (ap = 0.3–0.9 mm, f = 0.08–0.2 mm/rev, Vc = 273–546 m/min), enabling a comparative analysis of ceramic (m1, m2) and CBN (m3) tools under overlapping conditions.
The integration of these datasets significantly broadens the parameter space, enhancing model robustness, predictive accuracy, and generalizability across different machining conditions [48]. Table 1 summarizes the input factors and their corresponding levels, while Table 2 presents the complete experimental results (Rows 1–54). Additionally, the Chihaoui et al. [46] dataset, detailed in Appendix A, further strengthens the training of intelligent models, improving optimization reliability and ensuring a more comprehensive representation of machining performance.
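The dataset integration can be pictured as a simple concatenation of run records; a hypothetical sketch with illustrative field names and only a few of the 81 runs:

```python
# Hypothetical sketch of merging the primary 54-run dataset with the 27-run
# reference dataset into one hybrid L81 design; records are illustrative.
primary = [  # uncoated/coated Si3N4 runs (54 in the study; two shown here)
    {"tool": "m1", "ap": 0.25, "f": 0.08, "Vc": 260.0},
    {"tool": "m2", "ap": 0.50, "f": 0.14, "Vc": 530.0},
]
reference = [  # CBN (m3) runs from Chihaoui et al. (27 in the study; one shown)
    {"tool": "m3", "ap": 0.30, "f": 0.08, "Vc": 546.0},
]
hybrid = primary + reference  # 54 + 27 = 81 runs in the full study
tools = sorted({run["tool"] for run in hybrid})
```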
The experimental configuration employed in this study is comprehensively depicted in Figure 1. This figure provides a detailed visualization of the setup, capturing the precise arrangement and interaction of the various components and parameters integral to the research.

3. Results and Discussion

This study evaluates the turning performance of EN-GJL-250 grey cast iron using three different tool types under varied cutting conditions, focusing on ap, f, and Vc. The hybrid dataset in Table 2, consisting of 81 runs across three tool types (uncoated Si3N4 (m1), coated Si3N4 (m2), and cubic boron nitride (m3)), reveals significant variations in cutting force (Fr), surface roughness (Ra), and power consumption (Pc) across the tested tools and cutting parameters. Fr ranged from 39.04 N to 536.65 N, Ra from a minimum of 0.45 µm to a maximum of 3.28 µm, and Pc from 292.4 W to 4325.81 W.
The coated Si3N4 tool (m2) consistently outperformed the other tools in terms of surface finish, recording the lowest Ra values. For instance, at ap = 0.5 mm, f = 0.08 mm/rev, and Vc = 530 m/min, m2 achieved an Ra of 0.45 µm, outperforming both m1 (Ra = 0.70 µm) and m3 (Ra = 0.79 µm). Similarly, at ap = 0.25 mm, f = 0.08 mm/rev, and Vc = 530 m/min, m2 achieved an Ra of 0.64 µm, an improvement over its previous Ra of 0.72 µm. This highlights m2's capability to deliver superior surface quality, particularly at higher speeds and lower feed rates.
The cubic boron nitride (CBN) tool (m3) exhibited the lowest cutting forces overall, particularly at high speeds. For example, at ap = 0.3 mm, f = 0.08 mm/rev, and Vc = 546 m/min, m3 recorded an Fr of 39.04 N, the lowest among all tools. This performance can be attributed to m3's high hardness and excellent thermal conductivity, which reduce tool–workpiece adhesion and cutting resistance. However, m3 produced higher Ra values than m2, particularly at higher feed rates (e.g., Ra = 3.28 µm at ap = 0.6 mm, f = 0.14 mm/rev, and Vc = 273 m/min).
Feed rate and depth of cut significantly influenced cutting forces and power consumption. For example, for m1 at ap = 0.75 mm and f = 0.2 mm/rev, increasing Vc from 260 m/min to 530 m/min reduced Fr by approximately 8.6% (from 536.65 N to 490.58 N) but increased Pc by 85.3% (from 2333.46 W to 4325.81 W). Higher cutting speeds generally improved surface finish (lower Ra), likely due to the reduced formation of built-up edges.
Figure 2 presents a detailed analysis of surface roughness (Ra), represented by the arithmetic mean height (Sa), for the uncoated Si3N4 tool (m1) and the coated Si3N4 tool (m2). Three-dimensional surface plots illustrate the influence of feed rate on Sa at f = 0.08, 0.14, and 0.2 mm/rev. For m1, Sa increased from 0.3735 µm at 0.08 mm/rev to 1.480 µm at 0.2 mm/rev, underscoring the significant impact of feed rate on surface quality. In contrast, m2 maintained consistently lower Ra values across all feed rates, confirming its superior performance in achieving high-quality finishes.
Overall, these results emphasize the importance of optimizing cutting parameters to balance energy consumption, surface quality, and tool longevity. The coated Si3N4 tool (m2) emerges as the best tool for achieving high-quality surface finishes while maintaining reasonable power consumption. The uncoated Si3N4 tool (m1) and the CBN tool (m3) excel in minimizing cutting forces, with m3 being particularly effective at high speeds. However, this analysis provides only a preliminary assessment. Deeper analysis, including predictive modeling and a more comprehensive examination of the interplay between ap, Fr, and Pc, is essential for a complete characterization and comparison of the tool materials.

3.1. Analysis of Variance (ANOVA)

The Analysis of Variance (ANOVA) shown in Table 3, conducted on machining parameters, provides important insights into the influence of various factors on cutting force (Fr), Ra, and Pc. These findings are crucial for optimizing machining operations to improve efficiency and product quality. The ANOVA results show that the overall model is highly significant (p < 0.0001), indicating that the chosen parameters substantially impact cutting force. Tool type (m), depth of cut (ap), feed rate (f), and cutting speed (Vc) have significant effects, as evidenced by low p-values (<0.0001). For example, depth of cut (ap) has an F-value of 473.07. Significant interactions, such as that between tool type and depth of cut (m × ap), with an F-value of 32.13, highlight complex, non-additive effects. However, the interaction between tool type and cutting speed (m × Vc) and the quadratic components (ap², f², Vc²) do not have a substantial impact, suggesting that higher-order effects are less important in predicting cutting force.
The model remains highly significant for surface roughness (Ra), with feed rate (f) and cutting speed (Vc) showing substantial effects, as indicated by F-values of 339.55 and 69.76, respectively. Interestingly, depth of cut (ap) has no significant impact on surface roughness (p = 0.6895), implying that other parameters have a greater influence. The interaction between tool type and feed rate (m × f) is statistically significant, with an F-value of 9.46, indicating that tool-specific feed-rate adjustments are necessary to optimize surface quality.
Cutting speed (Vc) and depth of cut (ap) significantly impact power consumption (Pc), with high F-values of 201.75 and 633.32, respectively. The significant interaction between tool type and depth of cut (m × ap), indicated by an F-value of 45.92, underscores the need for precise adjustments to maximize power efficiency. The quadratic terms ap² and f² are insignificant, suggesting that linear effects dominate.
In addition, the ANOVA analysis highlights the substantial influence of machining parameters on cutting force, surface roughness, and power consumption. Optimizing feed rate and cutting speed is crucial for managing surface roughness, while tool type, depth of cut, and cutting speed must be carefully controlled to regulate cutting force and power consumption. The polynomial regression models detailed in this analysis effectively illustrate the complex relationships between the response variables (Fr, Ra, Pc) and the independent variables (ap, f, Vc) across each tool. These models capture not only the direct effects of each variable but also the interactions and quadratic terms, offering a comprehensive view of how changes in the independent variables influence the outcomes for each response [49,50]. The following Equations (1)–(9) present in detail the formulation of the quadratic components.
  • Full models of uncoated Si3N4 ceramic
Fr = −58.9544 + 423.767·ap + 1400.76·f − 0.332168·Vc + 1602.08·ap·f − 0.067894·ap·Vc − 0.92117·f·Vc − 58.2565·ap² − 2325.87·f² + 0.000479·Vc²  (1)
Ra = 1.96622 − 0.186784·ap + 4.05736·f − 0.00753·Vc + 0.012919·ap·f + 0.001021·ap·Vc − 0.004308·f·Vc + 0.101161·ap² + 12.5·f² + 8.10169e−06·Vc²  (2)
Pc = −394.089 + 165.977·ap + 3038.18·f − 1.22086·Vc + 10,288.6·ap·f + 5.53479·ap·Vc + 11.9582·f·Vc − 135.279·ap² − 18,107.7·f² + 0.000834·Vc²  (3)
  • Full models of coated Si3N4 ceramic
Fr = −29.3575 + 273.427·ap + 1484.09·f − 0.403099·Vc + 1602.08·ap·f − 0.067894·ap·Vc − 0.92117·f·Vc − 58.2565·ap² − 2325.87·f² + 0.000479·Vc²  (4)
Ra = 1.62546 − 0.913451·ap + 6.78884·f − 0.0068991·Vc + 0.012919·ap·f + 0.001021·ap·Vc − 0.004308·f·Vc + 0.101161·ap² + 12.5·f² + 8.10169e−06·Vc²  (5)
Pc = 238.315 − 807.17·ap + 2951.05·f − 2.62479·Vc + 10,288.6·ap·f + 5.53479·ap·Vc + 11.9582·f·Vc − 135.279·ap² − 18,107.7·f² + 0.000834·Vc²  (6)
  • Full models of CBN
Fr = 41.9437 + 59.6433·ap + 702.866·f − 0.318582·Vc + 1602.08·ap·f − 0.067894·ap·Vc − 0.921174·f·Vc − 58.2565·ap² − 2325.87·f² + 0.000479·Vc²  (7)
Ra = 3.11991 − 0.558012·ap + 9.01309·f − 0.010042·Vc + 0.012919·ap·f + 0.001021·ap·Vc − 0.004307·f·Vc + 0.101161·ap² + 12.5·f² + 8.10169e−06·Vc²  (8)
Pc = 1529.02 − 2257.01·ap − 1608.55·f − 4.26283·Vc + 10,288.6·ap·f + 5.53479·ap·Vc + 11.9582·f·Vc − 135.279·ap² − 18,107.7·f² + 0.000834·Vc²  (9)
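As an illustration of how these full quadratic models are used, the sketch below evaluates the coated-Si3N4 force model at one cutting condition. The coefficient signs were reconstructed, so the numbers should be treated as illustrative rather than authoritative:

```python
def fr_coated(ap: float, f: float, vc: float) -> float:
    """Cutting force Fr (N) from the full quadratic model for coated Si3N4 (m2).

    Coefficients follow the model above; signs are reconstructed, so this is
    an illustrative sketch, not the definitive fitted model.
    """
    return (-29.3575 + 273.427*ap + 1484.09*f - 0.403099*vc
            + 1602.08*ap*f - 0.067894*ap*vc - 0.92117*f*vc
            - 58.2565*ap**2 - 2325.87*f**2 + 0.000479*vc**2)

# At the optimum reported later (ap = 0.25 mm, f = 0.08 mm/rev,
# Vc = 437.76 m/min) the model lands near the measured 46-47 N range.
fr_opt = fr_coated(0.25, 0.08, 437.76)
```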
The statistical analysis of the models for cutting force (Fr), surface roughness (Ra), and power consumption (Pc) shown in Table 4 reveals robust and consistent performance across all outputs, underscoring the models' reliability and utility in optimizing machining processes. For cutting force (Fr), the model demonstrates a high degree of accuracy, accounting for 94.73% of the variation (R2 = 0.9473). The adjusted R2 value of 0.93, while slightly lower, still indicates a strong model fit that remains robust even when considering the number of predictors involved. The predicted R2 of 0.93 further confirms the model's predictive capability, although there is a slight reduction compared to the adjusted R2, which is expected in model validation. Additionally, the Adequate Precision value of 32.01, which is well above the recommended threshold of 4, suggests a strong signal-to-noise ratio, affirming the model's reliability for exploring and navigating the design space.
In terms of surface roughness (Ra), the model also performs well, capturing 91.62% of the variability (R2 = 0.92). The adjusted R2 of 0.89 is slightly lower, reflecting a conservative adjustment for model complexity, yet it still indicates a good fit. The predicted R2 of 0.8709, while somewhat lower, maintains strong predictive power, which is essential for validating the model’s practical application. The Adequate Precision value of 27.7523 further supports the model’s reliability, indicating that the model’s predictions are based on a substantial signal that outweighs the noise, ensuring accurate predictions of surface roughness under varied conditions.
The power consumption (Pc) model exhibits exceptional performance, with an R2 value of 0.9648, explaining 96.48% of the variation in power consumption. This high level of explanatory power is complemented by an adjusted R2 of 0.9554 and a predicted R2 of 0.9420, both of which confirm the model’s strong predictive accuracy and fit. The Adequate Precision value of 45.8444, significantly higher than the minimum acceptable level, indicates an extremely strong signal-to-noise ratio, reinforcing the model’s robustness and dependability for predicting power consumption. This makes the model particularly valuable for optimizing energy efficiency in machining processes.
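The fit statistics discussed above follow the standard definitions; a small illustrative helper (the function names are ours, not from the paper):

```python
def r2(y, y_hat):
    """Coefficient of determination R2 = 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def adjusted_r2(y, y_hat, n_params):
    """Adjusted R2, penalizing the number of model parameters."""
    n = len(y)
    return 1.0 - (1.0 - r2(y, y_hat)) * (n - 1) / (n - n_params - 1)
```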
Figure 3 presents the Normal Probability Plots of residuals for cutting force (Fr), surface roughness (Ra), and power consumption (Pc). In the Fr plot, nearly all residuals align closely with the reference line, except for one conspicuous outlier. This deviation may indicate an extreme experimental condition, a measurement anomaly, or the presence of unmodeled factors affecting the response variable.
To enhance the distribution of residuals and achieve variance stabilization, a Box–Cox analysis was performed on the original responses. Figure 4 displays the residuals vs. predicted plots for the transformed responses of Fr, Ra, and Pc, showing marked improvements. For Fr, applying the natural logarithm transformation, ln(Fr), yields residuals symmetrically centered around zero, with a Box–Cox LN(ResidualSS) value of approximately 10.02 at the optimal lambda. For Ra, the inverse square-root transformation, 1/√(Ra), produces a more homogeneous spread of residuals and minimizes heteroscedasticity, with an optimal LN(ResidualSS) near 0.776. In the case of Pc, transforming the data using ln(Pc + 0.5) addresses potential issues with near-zero values and results in a clustering of residuals within the expected range, supported by an LN(ResidualSS) value of about 13.75.
Figure 5 presents the updated Normal Probability Plots of externally studentized residuals for the transformed responses of cutting force (Fr), surface roughness (Ra), and power consumption (Pc). These plots illustrate that the chosen transformations, namely ln(Fr), 1/√(Ra), and ln(Pc + 0.5), enhance the normality and homoscedasticity of the residuals compared to the untransformed models. In the ln(Fr) plot, most points align closely with the reference line, indicating that the natural logarithm transformation effectively reduces skewness and stabilizes the variance for cutting force data. Although the largest externally studentized residual (approximately 6.285) suggests a potential outlier or high-leverage point, the overall distribution is significantly improved.
Similarly, the 1/√(Ra) plot shows a more uniform spread of residuals, confirming that the inverse square-root transformation effectively mitigates heteroscedasticity in the surface roughness data. Despite a single negative outlier near −3.5, the majority of residuals remain well distributed along the diagonal. Finally, in the ln(Pc + 0.5) plot, most points cluster around the reference line, verifying that adding a small constant and applying a logarithmic transformation helps manage low power consumption values. Although an outlier near 8.372 is still present, it does not detract from the overall improvement in the Pc distribution. Overall, these transformations yield models with enhanced normality and reduced variance-related issues, thereby improving the accuracy and reliability of the reduced regression models.
Based on these results, reduced regression models for predicting Fr, Ra, and Pc were developed using the transformed data (as detailed in Equations (10)–(18)). These models offer a more robust and statistically sound representation of the relationships between the input parameters and the response variables. By ensuring normality and homoscedasticity in the residuals, the predictive accuracy and reliability of these models are significantly enhanced, supporting more valid inferences about the underlying processes.
  • Reduced models of uncoated Si3N4 ceramic
ln(Fr) = 3.33055 + 4.06039·ap + 10.33·f − 0.000707·Vc − 1.93332·ap² − 21.0451·f²  (10)
1/√(Ra) = 0.862798 − 0.12787·ap − 1.43384·f + 0.001983·Vc − 0.003027·f·Vc − 1.35202e−06·Vc²  (11)
ln(Pc) = 4.15575 + 4.08713·ap + 10.2159·f + 0.001876·Vc − 1.96075·ap² − 20.6405·f²  (12)
  • Reduced models of coated Si3N4 ceramic
ln(Fr) = 2.81703 + 4.06039·ap + 12.0424·f − 0.000707·Vc − 1.93332·ap² − 21.0451·f²  (13)
1/√(Ra) = 1.06084 − 0.230225·ap − 3.23908·f + 0.00186·Vc − 0.003027·f·Vc − 1.35202e−06·Vc²  (14)
ln(Pc) = 3.64435 + 4.08713·ap + 11.9162·f + 0.001876·Vc − 1.96075·ap² − 20.6405·f²  (15)
  • Reduced models of CBN
ln(Fr) = 2.22593 + 4.06039·ap + 11.3048·f − 0.000707·Vc − 1.93332·ap² − 21.0451·f²  (16)
1/√(Ra) = 0.508438 − 0.017837·ap − 1.60714·f + 0.00234·Vc − 0.003027·f·Vc − 1.35202e−06·Vc²  (17)
ln(Pc) = 3.04993 + 4.08713·ap + 11.195·f + 0.001876·Vc − 1.96075·ap² − 20.6405·f²  (18)
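Because the reduced models predict transformed responses, predictions must be back-transformed before use. A sketch for the coated-Si3N4 tool, with coefficient signs reconstructed and therefore illustrative:

```python
import math

def predict_fr(ap: float, f: float, vc: float) -> float:
    """Back-transform the ln(Fr) reduced model: Fr = exp(ln_fr)."""
    ln_fr = (2.81703 + 4.06039*ap + 12.0424*f - 0.000707*vc
             - 1.93332*ap**2 - 21.0451*f**2)
    return math.exp(ln_fr)

def predict_ra(ap: float, f: float, vc: float) -> float:
    """Back-transform the 1/sqrt(Ra) reduced model: Ra = 1 / y**2."""
    inv_sqrt = (1.06084 - 0.230225*ap - 3.23908*f + 0.00186*vc
                - 0.003027*f*vc - 1.35202e-06*vc**2)
    return 1.0 / inv_sqrt**2

def predict_pc(ap: float, f: float, vc: float) -> float:
    """Back-transform the ln(Pc + 0.5) reduced model: Pc = exp(y) - 0.5."""
    ln_pc = (3.64435 + 4.08713*ap + 11.9162*f + 0.001876*vc
             - 1.96075*ap**2 - 20.6405*f**2)
    return math.exp(ln_pc) - 0.5
```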

3.2. Prediction of Surface Roughness Parameters Using DNN

3.2.1. Optimization of the DNN Architecture

The architectural design of a deep neural network (DNN), which includes the arrangement of neurons into layers and the connections between them, is a critical factor in determining the network’s performance. This study aimed to optimize several key parameters integral to the network architecture, such as the network size (i.e., the number of layers and the number of neurons within each layer), the optimization algorithm utilized, and the activation functions employed. The size of the neural network, defined by its depth (number of layers) and breadth (number of nodes per layer), is a pivotal design consideration. The network’s size directly influences its ability to model and capture intricate relationships within the data. In this investigation, a genetic algorithm was employed to optimize the network size, facilitating the identification of the most effective combination of layers and nodes to enhance the overall performance of the neural network.
The optimization algorithm plays a crucial role in the training process, as it adjusts the network’s weights and biases to minimize prediction error. This study conducted a thorough evaluation of various optimization algorithms, including trainlm, trainbr, trainbfg, traincgb, traincgf, traincgp, traingd, traingda, traingdm, traingdx, trainoss, trainrp, and trainscg, to determine the most effective algorithmic combination for optimizing network performance.
Additionally, the activation functions, which determine how the weighted sum of inputs is transformed into the output of a neuron, were also subjected to optimization. The selection of appropriate activation functions is vital, as it affects the network's ability to model non-linear relationships. By systematically optimizing the activation functions alongside the network architecture and training algorithms, this study sought to identify the optimal configuration that maximizes the predictive accuracy and generalization capability of the DNN. The detailed optimization parameters are presented in Table 5.
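The architecture search described above can be sketched as a tiny genetic loop. The fitness function below is a placeholder (in the study it would be the validation error of a trained DNN), and all names are illustrative:

```python
import random

random.seed(0)
LAYER_CHOICES = [4, 8, 16, 32]  # candidate neurons per hidden layer

def fitness(arch: tuple) -> float:
    # Placeholder objective: prefer a moderate total network size.
    # In practice this would train a DNN and return -validation_error.
    return -abs(sum(arch) - 40)

def mutate(arch: tuple) -> tuple:
    """Randomly resize one hidden layer (the mutation operator)."""
    arch = list(arch)
    arch[random.randrange(len(arch))] = random.choice(LAYER_CHOICES)
    return tuple(arch)

def evolve(generations: int = 30, pop_size: int = 10) -> tuple:
    pop = [tuple(random.choice(LAYER_CHOICES) for _ in range(3))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                               # selection
        children = [mutate(random.choice(parents)) for _ in parents]  # mutation
        pop = parents + children                                     # elitism
    return max(pop, key=fitness)

best = evolve()  # e.g. a 3-hidden-layer architecture such as (8, 16, 16)
```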

3.2.2. Improved Grey Wolf Optimizer (IGWO)

The Improved Grey Wolf Optimizer (IGWO) represents an advanced and refined optimization algorithm, inspired by the intricate social dynamics and collective hunting strategies observed in grey wolf packs. Conceived as an enhancement of the original Grey Wolf Optimizer (GWO) by Mirjalili et al. [51], the IGWO seeks to address and overcome certain limitations inherent in its predecessor while preserving its biologically inspired framework. This algorithm meticulously replicates the cooperative hunting behavior characteristic of grey wolves, where social hierarchy plays a pivotal role in the success of the hunt. The IGWO operates by conceptualizing a population of virtual wolves, each one symbolizing a candidate solution within the expansive search space of the optimization problem. The initial population is generated with random positions within this search space, effectively simulating the initial diversity of potential solutions. Such randomness is strategically designed to ensure that the algorithm initiates with a broad spectrum of possible solutions, thereby enhancing its capacity to explore various regions of the search space.
A salient feature of the IGWO is the explicit incorporation of a social hierarchy among the wolves. Each wolf is systematically ranked according to its performance, with the most dominant wolves exerting a more pronounced influence on the search for optimal solutions. This hierarchy mirrors the natural leadership structure observed in wolf packs, where the roles of alpha, beta, and delta wolves are distinctly defined in the hunting process. The IGWO meticulously balances two essential phases: exploration and exploitation. During the exploration phase, wolves systematically investigate new regions within the search space, ensuring that the algorithm does not become ensnared in local optima. This phase is indispensable for uncovering a diverse array of potential solutions. Conversely, in the exploitation phase, the algorithm shifts its focus towards refining and enhancing the quality of existing solutions, leveraging the insights gained during the exploration phase. This deliberate balance between exploration and exploitation is crucial for achieving effective optimization.
To further augment its performance, the IGWO introduces a suite of sophisticated evolutionary operations that dynamically adjust the positions of the wolves based on their hierarchical standing and individual performance metrics. These mechanisms enable the algorithm to fine-tune its search process, allowing it to adapt in real time to the landscape of the problem at hand. The IGWO is inherently designed for adaptive convergence, meaning it can iteratively modify its behavior to maximize search efficiency and progressively converge towards optimal solutions. This adaptability permits the algorithm to respond dynamically to the evolving requirements of the problem as the optimization process advances, thereby enhancing its efficacy over time.
The optimization process within the IGWO continues until a predefined stopping criterion is satisfied. This criterion may be defined by a maximum number of iterations, an acceptable level of performance, or another user-specified condition. By integrating these advanced features, the IGWO emerges as a robust and versatile tool for tackling complex optimization problems, effectively leveraging the natural intelligence embedded within grey wolf social behavior to guide its search process [52]. The flowchart depicted in Figure 6 succinctly encapsulates the procedural flow of the IGWO algorithm, offering a visual representation of its intricate operation.
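For concreteness, the alpha/beta/delta-guided position update that the IGWO inherits from the base GWO can be sketched as follows. This is a minimal Python illustration with a linearly decreasing exploration coefficient; the specific improvements of the IGWO variant are not reproduced here.

```python
import numpy as np

def gwo_minimize(f, bounds, n_wolves=20, n_iter=100, seed=0):
    """Minimal sketch of the base GWO update that IGWO builds on.
    bounds: array of shape (dim, 2) with [low, high] per variable."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(n_wolves, dim))   # random initial pack
    for t in range(n_iter):
        fit = np.array([f(x) for x in X])
        order = np.argsort(fit)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 * (1 - t / n_iter)        # decreases 2 -> 0: explore -> exploit
        for i in range(n_wolves):
            X_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - X[i])       # distance to leader
                X_new += leader - A * D             # pull toward leader
            X[i] = np.clip(X_new / 3.0, lo, hi)     # average of three pulls
    fit = np.array([f(x) for x in X])
    best = X[np.argmin(fit)]
    return best, f(best)
```

Early iterations (|A| > 1 possible) favor exploration of new regions; as a shrinks, the pack converges on the leaders, which is the exploitation phase described above.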

3.2.3. Genetic Algorithms for Neural Network Optimization

The optimization of neural network architectures is a critical task in machine learning, as the network structure plays a pivotal role in model performance. Traditional gradient-based optimization techniques, such as stochastic gradient descent (SGD), have limitations in exploring the complex search space of neural network hyperparameters and topologies. In this context, genetic algorithms (GAs) have emerged as a promising approach to tackle the neural network optimization problem. Genetic algorithms are a class of metaheuristic optimization methods inspired by the principles of natural selection and evolutionary biology. These algorithms operate by maintaining a population of candidate solutions, which undergo iterative modification through genetic operators like selection, crossover, and mutation. The goal is to evolve the population towards optimal or near-optimal solutions over successive generations [53].
The application of genetic algorithms to neural network optimization has several advantages. Firstly, GAs can effectively explore the high-dimensional search space of neural network architectures, including the number of layers, the number of neurons per layer, the choice of activation functions, and other crucial hyperparameters [54]. This flexibility allows GAs to uncover neural network configurations that may not be easily accessible through gradient-based methods. Additionally, genetic algorithms can handle both numerical and categorical optimization variables, making them well suited for the discrete nature of certain neural network design choices, such as the selection of layer types or activation functions. This adaptability enables GAs to optimize a broader range of neural network architectures compared to traditional techniques. Furthermore, genetic algorithms exhibit inherent parallelism, as the evaluation of candidate solutions within the population can be conducted independently. This property is particularly beneficial for the optimization of computationally expensive neural networks, as the process can be efficiently distributed across multiple computing resources. Several studies have demonstrated the superiority of genetic algorithms over conventional optimization methods in neural network design. For instance, research has shown that GA-optimized convolutional neural networks can outperform their SGD-trained counterparts in image classification tasks by a significant margin [54]. Similarly, the application of GAs has led to the discovery of novel neural network architectures that exhibit enhanced regression performance, as observed in our own research.
The first component of the GA algorithm presented in Figure 7 is the coding principle, which entails the translation of each point within the state space into a corresponding data structure. This step typically follows the mathematical modeling phase of the problem under consideration. The choice of coding schema is critical to the effectiveness of the genetic algorithm. While early implementations predominantly employed binary coding, contemporary applications increasingly favor real-value encoding, especially in domains that require the optimization of continuous variables [53].
The second component is the population initialization mechanism. This mechanism is responsible for generating an initial population of individuals that is sufficiently diverse to serve as a viable foundation for subsequent generational evolution. The composition of the initial population is of paramount importance, as it exerts a significant influence on the convergence rate towards the global optimum. In scenarios where the problem domain is not well understood, it is imperative that the initial population be distributed uniformly across the entire search space to ensure comprehensive exploration. The third component is the fitness function, which assigns a quantitative evaluation to each individual within the population. This function serves as a criterion for selection, guiding the replication of superior individuals across generations. The fitness function is instrumental in steering the algorithm towards progressively optimal solutions. The fourth component involves the operators for population diversification, particularly crossover and mutation operators. These operators are crucial for maintaining genetic diversity within the population across generations. The crossover operator facilitates the recombination of genetic material from parent individuals, while the mutation operator introduces random alterations, thereby ensuring the exhaustive exploration of the state space and preventing premature convergence on suboptimal solutions. The final component pertains to the dimensioning parameters, which encompass the size of the population, the total number of generations, or alternative stopping criteria, as well as the probabilities associated with the application of crossover and mutation operators.
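The five components above can be sketched in a compact real-coded GA. The snippet below is illustrative only (truncation selection and arithmetic crossover are one common choice, not necessarily the operators used in this study): real-value encoding, uniform initialization, fitness-based selection, crossover, bounded mutation, and the dimensioning parameters as arguments.

```python
import random

def ga_minimize(fitness, bounds, pop_size=30, n_gen=60,
                p_cross=0.9, p_mut=0.1, seed=0):
    """Minimal real-coded GA sketch (minimization).
    bounds: list of (low, high) per variable."""
    rng = random.Random(seed)
    # population initialization: uniform over the whole search space
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            if rng.random() < p_cross:               # arithmetic crossover
                w = rng.random()
                child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            else:
                child = list(p1)
            for j, (lo, hi) in enumerate(bounds):    # bounded random mutation
                if rng.random() < p_mut:
                    child[j] = rng.uniform(lo, hi)
            children.append(child)
        pop = elite + children                       # elitism preserves the best
    best = min(pop, key=fitness)
    return best, fitness(best)
```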

3.2.4. Hybrid Algorithms

Hybrid algorithms like improved grey wolf optimizer–deep neural networks (DNN-IGWOs) and genetic algorithm–deep neural networks (DNN-GAs) combine the strengths of evolutionary optimization with deep learning to enhance model performance. These hybrid approaches are particularly effective in optimizing the architecture and hyperparameters of neural networks, including the number of layers, nodes per layer, learning algorithms, and activation functions [55,56]. By leveraging IGWOs or GAs, the search for optimal configurations becomes more efficient and robust, allowing the DNN to achieve better generalization and predictive accuracy. The flowchart presented in Figure 8 visually outlines the process, starting from the initialization of neural network architectures, passing through training, evaluation, and optimization stages, and converging on the best-performing architecture using the IGWO and GA approaches.
The DNN-IGWO hybrid algorithm integrates the improved grey wolf optimizer with a deep neural network. The process begins by encoding the DNN’s parameters, such as the number of layers, nodes, learning algorithm, and activation functions, into a population of candidate solutions. The IGWO algorithm then simulates the hunting behavior of wolves, where the alpha, beta, and delta wolves guide the optimization process by exploring and exploiting the search space. The IGWO’s social hierarchy helps in balancing exploration and exploitation, ensuring that the DNN configuration is optimized effectively. The DNN-GA hybrid algorithm employs a genetic algorithm to optimize the structure and hyperparameters of a deep neural network. The process starts with encoding DNN parameters into chromosomes, where each gene represents a specific hyperparameter like the number of layers, nodes, or learning rates. The GA then iterates through generations, using selection, crossover, and mutation operations to evolve the population towards optimal solutions. The fitness function used by both the IGWO and the GA can be defined as:
OBJ = (RMSE + MAE) / (R² + 1)
In this context, RMSE refers to the Root Mean Square Error, MAE stands for Mean Absolute Error, and R² denotes the coefficient of determination.
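Reading the composite objective as OBJ = (RMSE + MAE)/(R² + 1), it can be computed directly from a model's predictions; the Python sketch below is a minimal illustration (lower values indicate a better candidate architecture).

```python
import numpy as np

def obj_fitness(y_true, y_pred):
    """Composite fitness OBJ = (RMSE + MAE) / (R^2 + 1); lower is better."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err**2))
    mae = np.mean(np.abs(err))
    ss_res = np.sum(err**2)
    ss_tot = np.sum((y_true - y_true.mean())**2)
    r2 = 1.0 - ss_res / ss_tot          # coefficient of determination
    return (rmse + mae) / (r2 + 1.0)
```

A perfect model (RMSE = MAE = 0, R² = 1) scores OBJ = 0, so minimizing OBJ jointly drives the errors down and R² up.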

3.2.5. Kalman Filter with Artificial Deep Neural Network (DNN-EKF)

The integration of the extended Kalman filter (EKF) with a deep neural network (DNN) formulates a robust framework, denoted as the DNN-EKF algorithm. This hybrid algorithm is designed to recursively optimize the weights and biases of the neural network, leveraging the EKF’s ability to estimate and manage uncertainties in model parameters. The process begins by initializing the neural network’s parameters W0 and b0, which are obtained from a trained DNN network and then utilized to generate output predictions y(t) for a given input x(t). The EKF component of the algorithm ensures the adaptive adjustment of these parameters by employing a recursive estimation process, systematically refining the model through a series of prediction, measurement, and update phases.
In the prediction phase, the state vector W(t) and error covariance matrix P(t) are forecasted based on the transition model and previous state estimates [57,58]. The measurement phase involves computing the innovation vector ν(t), representing the discrepancy between the predicted and observed outputs. This innovation guides the update phase, wherein the Kalman gain K(t) is calculated to adjust the weights, ensuring the network’s parameters converge towards the optimal solution as new data are introduced. Mathematically, these phases can be expressed as follows:
  • for predictions:
Ŵ(t|t−1) = f(W(t−1)) + q(t−1)
P(t|t−1) = A(t) P(t−1) Aᵀ(t) + Q(t−1)
  • for measurements:
ν(t) = y(t) − H(t) Ŵ(t|t−1)
S(t) = H(t) P(t|t−1) Hᵀ(t) + R(t)
  • for updates:
K(t) = P(t|t−1) Hᵀ(t) S⁻¹(t)
Ŵ(t|t) = Ŵ(t|t−1) + K(t) ν(t)
P(t|t) = (I − K(t) H(t)) P(t|t−1)
Here, A(t) is the state transition matrix, H(t) the observation matrix, Q(t) the process noise covariance, and R(t) the measurement noise covariance. This systematic, recursive approach ensures that the neural network continuously adapts its weights and biases to minimize prediction errors, making the DNN-EKF particularly effective in dynamic environments where data are subject to variability and uncertainty [59,60]. The accompanying flowchart (Figure 9) visually depicts the iterative nature of the DNN-EKF algorithm, highlighting the cyclical interaction between the neural network and the Kalman filter as it converges towards optimal parameter estimates.
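A single predict–measure–update cycle of this recursion can be sketched in Python as follows. The sketch assumes an identity state transition for the weight vector (A = I, f the identity, a common simplification in which the weights follow a random walk whose uncertainty is captured by Q); it is an illustration of the equations, not the paper's implementation.

```python
import numpy as np

def ekf_step(W, P, y, H, Q, R):
    """One EKF cycle for weight estimation (identity transition assumed).
    W: weight estimate (n,), P: covariance (n, n),
    y: observation (m,), H: observation matrix (m, n),
    Q: process noise (n, n), R: measurement noise (m, m)."""
    # prediction: random-walk model for the weights
    W_pred = W.copy()
    P_pred = P + Q
    # measurement: innovation and its covariance
    v = y - H @ W_pred
    S = H @ P_pred @ H.T + R
    # update: Kalman gain, corrected state and covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    W_new = W_pred + K @ v
    P_new = (np.eye(len(W)) - K @ H) @ P_pred
    return W_new, P_new
```

Each call shrinks the innovation, so repeated cycles drive the estimated weights toward values consistent with the observed outputs.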

3.2.6. Performance Criteria for Optimizing DNN Structures

The evaluation and optimization of the deep neural network (DNN) structures using genetic algorithms (GAs) and improved grey wolf optimization (IGWO) were based on a comprehensive set of performance criteria. These criteria, presented in Table 6, ensure a robust assessment of the predictive capabilities and efficiency of the developed models.
A fundamental measure in this evaluation is the Root Mean Square Error (RMSE), which is a widely used measure of the differences between the values predicted by a model and the actual values observed. It provides a quadratic mean of the residuals, offering insight into the model’s accuracy. A lower RMSE value indicates a better fit of the model to the data. The Mean Absolute Percentage Error (MAPE %) expresses the accuracy of the model as a percentage. It is calculated as the average of the absolute percentage errors between the predicted and actual values. MAPE is particularly useful for understanding the prediction accuracy in relative terms. A lower MAPE percentage signifies higher predictive accuracy.
In contrast, the Mean Absolute Error (MAE) measures the average magnitude of the errors in a set of predictions without considering their direction. It is the mean of the absolute differences between the predicted and actual values. A lower MAE value indicates that the model predictions are close to the actual values, reflecting high accuracy.
Beyond these error-based measures, the coefficient of determination (R2) is a statistical measure that explains the proportion of the variance in the dependent variable that is predictable from the independent variables. It ranges from 0 to 1, with values closer to 1 indicating a higher explanatory power of the model.
While these statistical measures assess prediction accuracy, optimization methodologies necessitate an overarching metric to guide performance improvements. The Objective Function Value (OBJ) is a critical parameter in optimization problems, representing the value of the function that the optimization algorithm aims to minimize or maximize. In this context, it is used to evaluate the overall performance of the DNN model, combining various performance metrics into a single value for optimization purposes.
Lastly, the Scatter Index (SI) is a dimensionless measure of the variability of model predictions relative to the observed data. It is calculated as the ratio of the RMSE to the mean of the observed values. A lower SI value indicates that the model predictions are less scattered and more consistent with the actual data.
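The criteria above follow directly from their textbook definitions and can be computed together; the Python sketch below is a minimal illustration (OBJ is taken here as the same composite used for the GA/IGWO fitness, an assumption on our part).

```python
import numpy as np

def performance_criteria(y_true, y_pred):
    """RMSE, MAPE (%), MAE, R^2, Scatter Index, and a composite OBJ."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err**2))
    mape = 100.0 * np.mean(np.abs(err / y_true))   # assumes y_true != 0
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err**2) / np.sum((y_true - y_true.mean())**2)
    si = rmse / np.mean(y_true)                    # Scatter Index
    obj = (rmse + mae) / (r2 + 1.0)
    return dict(RMSE=rmse, MAPE=mape, MAE=mae, R2=r2, SI=si, OBJ=obj)
```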

3.2.7. DNN Model Results

The optimization of DNN architectures using IGWO and GAs has yielded distinct structures tailored for predicting cutting parameters in the turning process. The performance of each optimized architecture was assessed based on the criteria of cutting force (Fr), surface roughness (Ra), and cutting power (Pc). The results of the optimization are encapsulated in Table 7 and Table 8, which detail the optimal architectures obtained through the application of IGWO and the GA, respectively. These tables offer a comprehensive overview of the architectural parameters, including the number of layers and nodes, the selection of activation functions, and the learning algorithms employed in each case. The differences in the architectures underscore the unique strengths of each optimization method in tailoring the DNN structure to the specific requirements of the machining process, thereby enhancing the predictive accuracy for each performance metric under investigation.
The successful integration of deep neural networks (DNNs) with the EKF algorithm hinges on the initialization of the network’s weight matrices. The initial DNN architecture, as detailed in Table 9, was instrumental in determining the initial weight matrices and bias vectors. These parameters were subsequently introduced into the EKF algorithm. The DNN structure was carefully selected to optimize the predictive accuracy for the cutting parameters in the turning process, ensuring a robust foundation for the EKF’s adaptive weight estimation.
Figure 10 presents the regression plots for the predictive models developed using DNN-IGWO, DNN-GA, and DNN-EKF methods, compared against traditional methods including Support Vector Machine (SVM), Decision Tree (DT), and a standard DNN trained using the Levenberg–Marquardt (LM) algorithm. These plots illustrate the predictive accuracy for all data subsets: training, testing, and validation. For an ideal predictive model, the data points should align closely with the reference line y = x , indicating perfect correlation between the observed and predicted values. As demonstrated in Figure 10, the regression plots for the DNN-EKF and DNN-IGWO models show that the observed and predicted values for all three outputs (Fr, Ra, and Pc) align precisely along the reference line. This indicates a high level of predictive accuracy and model performance.
In contrast, the DNN-GA model also exhibits a strong alignment with the reference line; however, some data points, particularly those predicting the cutting force (Fr), deviate slightly from the ideal line. This suggests that while the DNN-GA model performs well, it is not as accurate as the DNN-EKF and DNN-IGWO models for certain outputs. On the other hand, the regression plots for the SVM, DT, and LM models reveal a significant deviation of data points from the reference line. This indicates lower predictive accuracy and suggests that these traditional methods are less effective in modeling the complex relationships inherent in the dataset compared to the optimized DNN approaches. The superior performance of the DNN-EKF and DNN-IGWO models can be attributed to their advanced optimization techniques, which enhance the neural network’s capability to accurately predict cutting parameters. The results highlight the potential of these optimized DNN methods to outperform conventional machine learning techniques in the predictive modeling of turning cutting parameters.
The bar charts of the Scatter Index (SI) presented in Figure 11 for various models reveal significant differences in predictive performance across cutting parameters (Fr, Ra, Pc) based on predefined performance thresholds. The DNN-EKF model stands out, achieving SI values below 0.1 across all outputs, categorizing its performance as excellent. This demonstrates the model’s robustness, accuracy, and reliability in predicting cutting parameters. The DNN-GA model, with SI values ranging from 0.0129 to 0.03246, also shows excellent performance, though slightly higher SI values in testing and validation suggest moderate stability. The DNN-IGWO model presents a more varied performance, with SI values ranging from 0.00881 in training (excellent) to 0.16234 in testing and validation (well), indicating potential overfitting and less consistent accuracy. Traditional methods, such as DT, LM, and SVM, exhibit SI values significantly higher than 0.3 across all parameters and data subsets, indicating poor performance and substantial prediction errors.
The spider plots presented in Figure 12 comparing the performance indices (MAD, RMSE, MAPE, R2, OBJ) for the predictive models (DNN-EKF, DNN-GA, DNN-IGWO, DT, LM, SVM) reveal significant disparities in their effectiveness for different outputs (Fr, Ra, Pc). The DNN-EKF model consistently achieves the lowest error metrics and the highest R2 values across all outputs, indicating its exceptional predictive accuracy and reliability, with performance consistently rated as excellent (SI < 0.1). The DNN-GA model, while performing well with slightly higher error metrics and slightly lower R2 values, maintains a strong performance classified as excellent to well (SI < 0.2), demonstrating robust predictive capabilities. The DNN-IGWO model shows variability in its performance, with errors increasing notably in testing and validation phases, leading to an overall performance that is well to fair (SI between 0.1 and 0.3). In stark contrast, traditional models such as DT, LM, and SVM exhibit significantly higher error metrics and lower R2 values, indicating poor performance (SI > 0.3).
For instance, the SVM model has the highest MAD and RMSE values, and the lowest R2, highlighting its inadequacy in accurately predicting the cutting parameters. The DT and LM models, though marginally better than the SVM, still show substantial prediction errors and lower goodness of fit. These results underscore the superior efficacy of advanced optimization techniques like DNN-EKF in developing highly accurate and reliable predictive models for machining processes, significantly outperforming conventional machine learning methods. This analysis emphasizes the importance of using sophisticated AI-driven methods to achieve optimal predictive performance in industrial applications.
The Taylor plots provide a visual representation of the standard deviations and correlation coefficients for different predictive models (DNN-EKF, DNN-GA, DNN-IGWO, DT, LM, SVM) across various outputs (Fr, Ra, Pc). These metrics are essential for understanding the models’ accuracy and consistency in capturing the variability and correlation of the predicted values with the actual data. As illustrated in Figure 13, the performance of the predictive models for the outputs Fr, Ra, and Pc was assessed using Taylor plots. For Fr, the DNN-EKF model exhibits a correlation coefficient of 0.9999 and a standard deviation of 127.07, closely aligning with the reference point and indicating superior predictive accuracy. Similarly, DNN-GA and DNN-IGWO show strong performance with correlation coefficients of 0.9994 and 0.9958, and standard deviations of 126.63 and 124.76, respectively.
In contrast, traditional models such as DT, LM, and SVM have lower correlation coefficients (0.9048, 0.9476, and 0.9416) and higher standard deviations (112.25, 104.93, and 35.25). For Ra, DNN-EKF achieves a near-perfect correlation coefficient of 0.99988 and a standard deviation of 0.61521, outperforming DNN-GA (0.99784, 0.61838) and DNN-IGWO (0.99781, 0.62850). Traditional models like DT, LM, and SVM show lower performance with correlation coefficients of 0.95837, 0.84575, and 0.84441 and standard deviations of 0.61761, 0.44539, and 0.50581, respectively. For Pc, DNN-EKF again demonstrates exceptional performance with a correlation coefficient of 0.99989 and a standard deviation of 866.00, followed closely by DNN-GA (0.99947, 864.46) and DNN-IGWO (0.99650, 857.86). Traditional models (DT, LM, SVM) exhibit poorer performance with correlation coefficients of 0.81422, 0.88673, and 0.89648 and standard deviations of 648.20, 604.18, and 37.23.
These observations from the Taylor plots validate the statistical results, highlighting the superior performance of the DNN-EKF, DNN-GA, and DNN-IGWO models in accurately predicting machining parameters compared to conventional methods.

3.3. Multi-Objective Optimization of Cutting Parameters

In this section, we explore the optimization of various machining outputs using two distinct methodologies, the desirability function (DF) method and advanced multi-objective optimization algorithms, namely multi-objective grey wolf optimization (MOGWO). The aim is to identify the optimal combination of the cutting parameters tool type (m), depth of cut (ap), feed rate (f), and cutting speed (Vc) that yield the most favorable outcomes for cutting force (Fr), surface roughness (Ra), and power consumption (Pc).
Prior to exploring the optimization outcomes, it is essential to examine the baseline performance of each output variable in relation to the input parameters. This examination is presented through a series of 3D plots (Figure 14), where the outputs cutting force (Fr), surface roughness (Ra), and cutting power (Pc) are analyzed against the input variables. For each output, three distinct figures are provided, with each figure encompassing three 3D plots. These plots correspond to the performance of different tool materials: coated silicon nitride (Si3N4) ceramic, uncoated silicon nitride (Si3N4) ceramic, and cubic boron nitride (CBN).
These visualizations are instrumental in mapping the initial response surface, offering critical insights into the influence of individual input parameters on the outputs. This foundational analysis is pivotal in guiding the subsequent optimization process, as it highlights the regions of parameter space where optimization efforts are most likely to yield significant improvements.

3.3.1. Desirability Function Method (DF)

The Desirability Function (DF) method is a multi-objective optimization technique that transforms multiple response variables into individual desirability functions, each ranging from 0 (undesirable) to 1 (desirable) [61,62]. These individual functions are then combined into a composite desirability index, typically using the geometric mean. This method allows for the simultaneous optimization of various, often conflicting, criteria by providing a single measure of overall desirability.
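Since all three responses here are to be minimized, the relevant transform is the smaller-the-better desirability, combined through a geometric mean. The Python sketch below illustrates the idea; the weight exponent and response bounds are placeholders, not values from this study.

```python
import numpy as np

def desirability_minimize(y, y_min, y_max, weight=1.0):
    """Smaller-the-better desirability d(y): 1 at y_min, 0 at y_max,
    with the exponent 'weight' shaping the curve in between."""
    d = (y_max - y) / (y_max - y_min)
    return np.clip(d, 0.0, 1.0) ** weight

def composite_desirability(ds):
    """Geometric mean of the individual desirabilities (equal importance)."""
    ds = np.asarray(ds, dtype=float)
    return float(np.prod(ds) ** (1.0 / len(ds)))
```

Because the composite is a geometric mean, any single response with d = 0 drives the overall desirability to 0, which is what forces the optimizer toward settings that are acceptable for all responses simultaneously.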
The results of the desirability function analysis provide a clear overview of the optimal machining parameters aimed at minimizing cutting force (Fr), surface roughness (Ra), and power consumption (Pc). The analysis identified ten different solutions, all yielding a high desirability score of 0.978, indicating that the parameter settings are nearly optimal for achieving minimal outputs.
As shown in Table 10, these ten solutions consistently use the same tool type (m = 2), depth of cut (ap = 0.250 mm), and feed rate (f = 0.080 mm/rev), with slight variations in cutting speed (Vc) ranging between approximately 431 and 445 m/min. Despite these variations, the predicted outputs remain effectively minimized. Cutting force (Fr) is predicted to be between 46.424 and 47.405 N, surface roughness (Ra) is consistently around 0.520 µm (with a minor variation of 0.521 µm in one case), and power consumption (Pc) varies slightly between 386.518 W and 392.412 W. These consistent results across the identified parameter settings indicate a robust model capable of maintaining minimal output values.
Figure 15 offers a comprehensive visual representation of the DF analysis outcomes by integrating a composite desirability contour plot, individual contour plots, and interaction plots. The composite desirability plot, with its clear color gradient, highlights regions where the desirability index approaches 1, indicating optimal parameter combinations—specifically, a cutting speed of approximately 437 m/min, a feed rate of 0.080 mm/rev, and a depth of cut of 0.250 mm. Similarly, the individual contour plots for cutting force (Fr), surface roughness (Ra), and power consumption (Pc) demonstrate that the regions of minimized output values align with those of highest desirability; for instance, Fr averages around 46.9 N, Ra remains near 0.520 µm, and Pc is approximately 389 W.
Furthermore, the interaction plots provide additional insight into the effects of machining parameters (ap, f, and Vc) when using a coated tool, revealing that cutting speed and feed rate are critical in achieving the desired minimization, while variations in depth of cut have a less significant impact within the tested range. Together, these visualizations reinforce the DF analysis by confirming that specific ranges of cutting speed and feed rate effectively minimize the outputs, thereby validating the robustness and practical relevance of our multi-objective optimization framework for industrial machining applications.

3.3.2. Optimization of DNN-EKF Prediction Model Using MOGWO

Optimization is fundamental to enhancing the performance and accuracy of predictive models. The combination of a deep neural network with an extended Kalman filter (DNN-EKF) has demonstrated remarkable predictive capabilities across diverse applications. However, its full potential can be realized by integrating advanced multi-objective optimization techniques. This section focuses on optimizing the DNN-EKF prediction model using the multi-objective grey wolf optimization (MOGWO) algorithm, which mimics the social hunting behavior of wolves to efficiently explore and exploit complex search spaces. The MOGWO algorithm is renowned for its ability to balance conflicting objectives, making it a robust solution for improving predictive model performance. Its adaptability and convergence efficiency have made it a popular choice in various engineering applications [63]. Figure 16 provides a schematic representation of the algorithm. The optimization results presented in Figure 17 illustrate the significant interplay between tool types, cutting conditions, and performance metrics, such as cutting force (Fr), surface roughness (Ra), and cutting power (Pc). These findings provide critical insights for selecting optimal machining parameters that enhance manufacturing efficiency and product quality.
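The archive of non-dominated solutions that MOGWO maintains rests on Pareto dominance, which for the three minimization objectives (Fr, Ra, Pc) can be expressed compactly; the sketch below is a generic illustration of that relation, not the MOGWO implementation itself.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if a is no worse
    in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors,
    e.g. (Fr, Ra, Pc) tuples."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```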

Single-Objective Optimization Analysis of Fr, Ra, and Pc

The single-objective optimization process for cutting force (Fr) yielded an optimal value of 43.6906 N when using cubic boron nitride (CBN) inserts. The best machining parameters included a depth of cut of 0.4379 mm, a feed rate of 0.0800 mm/rev, and a cutting speed of 546 m/min. CBN’s superior hardness and thermal stability proved effective in reducing the forces exerted on the tool, contributing to extended tool life and improved machining efficiency. For applications that prioritize minimizing cutting forces, the use of CBN combined with moderate cutting depths and high cutting speeds emerges as the most effective strategy.
The optimization for surface roughness (Ra) achieved an optimal value of 0.4109 µm using uncoated silicon nitride (Si3N4) ceramic inserts. The optimal parameters were a depth of cut of 0.6005 mm, a feed rate of 0.0800 mm/rev, and a cutting speed of 546 m/min. The sharp cutting edges of uncoated Si3N4 ceramic inserts contributed to a finer surface finish by minimizing material deformation during cutting. This result suggests that uncoated Si3N4 ceramics are preferable for achieving high-quality surface finishes, especially when operating at higher depths of cut while maintaining moderate feed rates and high cutting speeds.
For cutting power (Pc), the optimal value obtained was 229.4923 W, achieved using coated silicon nitride (Si3N4) ceramic inserts. The best parameters included a depth of cut of 0.2500 mm, a feed rate of 0.0985 mm/rev, and a cutting speed of 260 m/min. The coating on Si3N4 inserts enhanced wear resistance and reduced friction, thereby lowering the required cutting power. This demonstrates the efficacy of coated ceramics in energy-efficient machining, particularly under conditions involving lower depths of cut and cutting speed, combined with slightly higher feed rates.
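The cutting power values reported throughout this work are consistent with the standard mechanical power relation Pc = Fr · Vc / 60 (Pc in W for Fr in N and Vc in m/min). The short check below applies this relation to two measured rows of Table 2; it is a consistency sketch, not part of the authors' pipeline.

```python
def cutting_power(fr_newton, vc_m_per_min):
    """Mechanical cutting power in watts: P = F * v, with v converted to m/s."""
    return fr_newton * vc_m_per_min / 60.0

# Two measured rows from Table 2: (Fr [N], Vc [m/min], measured Pc [W])
rows = [(124.65, 260, 541.61), (56.18, 530, 498.78)]
for fr, vc, pc_meas in rows:
    pc_est = cutting_power(fr, vc)
    rel_err = abs(pc_est - pc_meas) / pc_meas
    print(f"estimated {pc_est:.1f} W vs measured {pc_meas} W "
          f"(deviation {100 * rel_err:.1f}%)")
```

Both estimates agree with the measurements to well within 1%, which explains why Pc tracks Fr and Vc so closely in the optimization results.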

Bi-Objective Optimization Analysis of Output Parameters

The bi-objective optimization analysis highlighted the trade-offs between different performance metrics. The compromise analysis between cutting force (Fr) and cutting power (Pc) revealed optimal ranges of 43 < Fr < 60 N and 272.9452 < Pc < 949.1732 W. The recommended parameters involved tool types 2 (coated Si3N4) and 3 (CBN), with depths of cut ranging from 0.25 to 0.5579 mm, feed rates from 0.08 to 0.0944 mm/rev, and cutting speeds between 260 and 546 m/min. This analysis indicated that coated Si3N4 and CBN inserts are effective in reducing both cutting forces and power consumption, with careful management of the depth of cut and feed rate proving essential.
Similarly, the analysis between cutting force (Fr) and surface roughness (Ra) identified optimal ranges of 43 < Fr < 60 N and 0.293 < Ra < 0.671 µm. The recommended parameters included tool types 2 (coated Si3N4) and 3 (CBN), depths of cut between 0.25 and 0.5579 mm, feed rates ranging from 0.08 to 0.0944 mm/rev, and cutting speeds varying from 260 to 546 m/min. The results indicated that coated Si3N4 and CBN inserts were effective in balancing cutting force and surface quality, emphasizing the need for precise control over machining parameters.
The balance between surface roughness (Ra) and cutting power (Pc) optimization revealed optimal ranges of 0.54 < Ra < 0.6 µm and 322.5609 < Pc < 850.4057 W. The ideal parameters were tool types 2 (coated Si3N4) and 3 (CBN), with depths of cut from 0.25 to 0.5979 mm, feed rates from 0.08 to 0.1027 mm/rev, and cutting speeds between 285.4510 and 546 m/min. This analysis highlighted the effectiveness of coated Si3N4 and CBN inserts in achieving a balance between surface quality and energy efficiency, with a focus on fine-tuning cutting speed and depth of cut to optimize both parameters.

Simultaneous Optimization Analysis of Fr, Ra, and Pc

In the simultaneous optimization of Fr, Ra, and Pc, the interdependent relationships between these parameters became evident. The optimal ranges were 43 < Fr < 60 N, 0.293 < Ra < 0.671 µm, and 272.9452 < Pc < 949.1732 W. The analysis suggested that tool types 2 (coated Si3N4) and 3 (CBN), combined with depths of cut between 0.25 and 0.5579 mm, feed rates from 0.08 to 0.0944 mm/rev, and cutting speeds between 260 and 546 m/min, provided the best compromise for balancing these three outputs.
Overall, the optimization results underscore the importance of selecting appropriate machining parameters and tool materials to achieve desirable outcomes in manufacturing processes. The MOGWO algorithm demonstrated its effectiveness in navigating complex optimization landscapes, offering valuable insights for machining applications that require a balance between performance metrics.
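The Pareto fronts in Figure 17 consist of non-dominated solutions: points for which no other candidate is at least as good in all three objectives and strictly better in at least one. A minimal filter implementing that definition for the (Fr, Ra, Pc) minimization problem is sketched below; the candidate values are illustrative, not the study's solution set.

```python
def dominates(a, b):
    """True if solution a dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Illustrative (Fr [N], Ra [um], Pc [W]) candidates
candidates = [
    (43.7, 0.67, 273.0),   # low force and power, rougher surface
    (56.2, 0.52, 389.0),   # balanced compromise
    (60.0, 0.41, 949.0),   # best finish, highest power
    (58.0, 0.60, 950.0),   # worse than the compromise in every objective
]
front = pareto_front(candidates)
print(front)
```

Only the first three candidates survive the filter; the fourth is dominated by the balanced compromise, which is exactly the pruning MOGWO's archive performs at every iteration.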

3.3.3. Experimental Validation of Optimized Machining Parameters

To verify the effectiveness of the optimized machining parameters obtained through the DNN-EKF model, we conducted experimental validation, and the results are summarized in Table 11. In solution 1, using tool type m = 2 (coated Si3N4) with a depth of cut (ap) of 0.25 mm, a feed rate (f) of 0.08 mm/rev, and a cutting speed (Vc) of 530 m/min, the measured outputs were a cutting force (Fr) of 56.18 N, a surface roughness (Ra) of 0.64 µm, and a power consumption (Pc) of 492.55 W. In solution 2, employing tool type m = 3 (CBN) with a higher depth of cut (0.6 mm), the same feed rate of 0.08 mm/rev, and a cutting speed of 546 m/min, the results showed a lower cutting force of 42.80 N and reduced power consumption of 389.48 W, although the surface roughness increased slightly to 0.81 µm.
These experimental results closely match the predicted values, confirming that the DNN-EKF model effectively identifies optimal cutting conditions that minimize cutting force and power consumption while maintaining an acceptable surface finish. This validation underscores the robustness of our approach and demonstrates its potential for applications in industrial precision machining.

4. Conclusions

This study presents a comprehensive evaluation of machining parameters for EN-GJL-250 grey cast iron machined with uncoated and coated Si3N4 ceramic and CBN tools, integrating advanced optimization techniques with deep learning to enhance predictive accuracy and process performance. We employed a hybrid L81 experimental design that combined the experimental (L54) and reference (L27) datasets, and applied Response Surface Methodology (RSM) for model development alongside advanced optimization models (DNN-IGWO, DNN-GA, and DNN-EKF) and traditional methods (SVM, DT, and LM). This approach allowed us to diversify our dataset and refine our modeling strategy. Our key findings are as follows:
- The coated Si3N4 tool (m2) consistently outperformed both the uncoated Si3N4 (m1) and the CBN (m3) tools, achieving an exceptional surface finish with a Ra as low as 0.45 µm.
- ANOVA of the RSM model confirmed robust and statistically significant relationships between machining parameters and performance outputs, with high coefficients of determination (R2 of 0.94 for cutting force, 0.92 for surface roughness, and 0.95 for power consumption).
- The DNN-EKF model demonstrated outstanding predictive performance, with a Scatter Index of 0.03 and correlation coefficients exceeding 0.98, thereby outperforming traditional methods such as SVM, Decision Trees, and LM-trained networks, as well as models developed using RSM.
- Multi-objective optimization using the desirability function and multi-objective grey wolf optimization (MOGWO) effectively balanced the conflicting objectives of minimizing cutting force, surface roughness, and power consumption. The analysis identified near-optimal machining settings for the coated Si3N4 tool, specifically a feed rate of 0.08 mm/rev, a cutting speed of 530 m/min, and a depth of cut between 0.25 and 0.56 mm.
- The integration of these advanced optimization methods with deep learning establishes a robust, transferable predictive framework that enhances machining efficiency and product quality, while being adaptable to a variety of industrial applications.
Overall, our findings demonstrate that combining advanced optimization techniques with deep neural network models significantly improves the control and performance of machining processes, thereby laying the groundwork for more energy-efficient and high-quality industrial manufacturing.
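The Scatter Index quoted in the conclusions is commonly defined as the RMSE normalized by the mean of the measured values; together with R2, it can be computed as sketched below. The arrays are illustrative cutting-force values, not the study's full dataset.

```python
import math

def r_squared(measured, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

def scatter_index(measured, predicted):
    """RMSE normalized by the mean of the measured values."""
    rmse = math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted))
                     / len(measured))
    return rmse / (sum(measured) / len(measured))

# Illustrative Fr values (N): measurements vs. hypothetical model predictions
fr_meas = [124.65, 107.41, 101.78, 157.08, 136.31]
fr_pred = [123.9, 108.2, 102.5, 155.6, 137.0]
print(round(r_squared(fr_meas, fr_pred), 4),
      round(scatter_index(fr_meas, fr_pred), 4))
```

A model matching the measurements this closely yields R2 near 1 and a Scatter Index well below the 0.03 reported for the DNN-EKF model.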

Author Contributions

Conceptualization, Y.K., H.B. and A.L.; methodology, Y.K. and H.B.; software, Y.K.; validation, H.B., M.A.Y., S.A., R.K. and A.L.; formal analysis, H.B. and O.R.; investigation, Y.K. and A.L.; resources, M.A.Y. and A.L.; data curation, Y.K., O.R. and A.L.; writing—original draft preparation, Y.K., O.R. and Y.C.; writing—review and editing, H.B., S.A., R.K., M.A.Y. and A.L.; visualization, Y.K., H.B., Y.C. and A.L.; supervision, H.B., M.A.Y. and A.L.; project administration, S.A. and R.K.; funding acquisition, S.A. and R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2503).

Data Availability Statement

The authors confirm that the data supporting the findings of this study are available within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Si3N4     Silicon nitride
CBN       Cubic boron nitride
ap        Depth of cut
f         Feed rate
Vc        Cutting speed
Ra        Surface roughness
Fr        Cutting force
Pc        Power consumption
DNN-IGWO  Improved grey wolf optimizer–deep neural network
DNN-GA    Genetic algorithm–deep neural network
DNN-EKF   Deep neural network–extended Kalman filter
SVM       Support Vector Machines
DT        Decision Trees
LM        Levenberg–Marquardt
DF        Desirability Function
MOGWO     Multi-objective grey wolf optimization

Appendix A

Table A1. Results of Fr, Ra, and Pc according to the cutting conditions obtained from Ref. [46].

No.  m  ap (mm)  f (mm/rev)  Vc (m/min)  Fr (N)   Ra (µm)  Pc (W)
1    3  0.3      0.08        273         63.9     1.35     292.4
2    3  0.6      0.08        273         84.85    1.32     386.04
3    3  0.9      0.08        273         143.41   1.28     648.76
4    3  0.3      0.14        273         69.91    2.64     319.56
5    3  0.6      0.14        273         133.26   3.28     598.57
6    3  0.9      0.14        273         199      2.37     903.97
7    3  0.3      0.2         273         103.28   3.03     468.75
8    3  0.6      0.2         273         182.64   2.68     832.14
9    3  0.9      0.2         273         239.17   2.79     1078.33
10   3  0.3      0.08        382         50.33    1.04     318.6
11   3  0.6      0.08        382         75.65    1.06     478.06
12   3  0.9      0.08        382         121.02   1.06     766.03
13   3  0.3      0.14        382         62.14    1.57     393.67
14   3  0.6      0.14        382         117.13   1.57     743.29
15   3  0.9      0.14        382         184.55   1.61     1161.11
16   3  0.3      0.2         382         90.17    2.1      570.92
17   3  0.6      0.2         382         171.25   2.15     1091.21
18   3  0.9      0.2         382         223.67   2.17     1413.75
19   3  0.3      0.08        546         39.04    0.7      354.83
20   3  0.6      0.08        546         65.98    0.8      607.74
21   3  0.9      0.08        546         109.99   0.79     994.59
22   3  0.3      0.14        546         55.4     1.16     505.81
23   3  0.6      0.14        546         106.41   1.2      961.86
24   3  0.9      0.14        546         173.43   1.38     1583.91
25   3  0.3      0.2         546         66       2.05     600.81
26   3  0.6      0.2         546         158.71   2.04     1461.69
27   3  0.9      0.2         546         214.90   2.05     1934.67

References

  1. Dogra, M.; Sharma, V.S.; Sachdeva, A.; Suri, N.M.; Dureja, J.S. Tool Wear, Chip Formation and Workpiece Surface Issues in CBN Hard Turning: A Review. Int. J. Precis. Eng. Manuf. 2010, 11, 341–358. [Google Scholar] [CrossRef]
  2. Akinwekomi, A.D.; Lawal, A.I. Neural Network-Based Model for Predicting Particle Size of AZ61 Powder During High-Energy Mechanical Milling. Neural Comput. Appl. 2021, 33, 17611–17619. [Google Scholar] [CrossRef]
  3. Saha, S.; Mondal, A.K.; Čep, R.; Joardar, H.; Haldar, B.; Kumar, A.; Alsalah, N.A.; Ataya, S. Multi-Response Optimization of Electrochemical Machining Parameters for Inconel 718 via RSM and MOGA-ANN. Machines 2024, 12, 335. [Google Scholar] [CrossRef]
  4. Kumar, R.; Kumar, A.; Kant, L.; Prasad, A.; Bhoi, S.; Meena, C.S.; Singh, V.P.; Ghosh, A. Experimental and RSM-Based Process-Parameters Optimisation for Turning Operation of EN36B Steel. Materials 2022, 16, 339. [Google Scholar] [CrossRef]
  5. Abdelrazek, A.H.; Choudhury, I.A.; Nukman, Y.; Kazi, S.N. Metal Cutting Lubricants and Cutting Tools: A Review on the Performance Improvement and Sustainability Assessment. Int. J. Adv. Manuf. Technol. 2020, 106, 4221–4245. [Google Scholar] [CrossRef]
  6. Zhuang, K.; Fu, C.; Weng, J.; Hu, C. Cutting Edge Microgeometries in Metal Cutting: A Review. Int. J. Adv. Manuf. Technol. 2021, 116, 2045–2092. [Google Scholar] [CrossRef]
  7. Altınsoy, Ş.; Üllen, N.B.; Ersoy, M.; Can, D. Machining Performance of Uncoated and Carbide Coated Cutting Inserts in Ti6Al4V Turning: An Experimental and Numerical Approach. J. Mater. Eng. Perform. 2024, 1–19. [Google Scholar] [CrossRef]
  8. Tan, L.; Yao, C.; Li, X.; Fan, Y.; Cui, M. Effects of Machining Parameters on Surface Integrity When Turning Inconel 718. J. Mater. Eng. Perform. 2022, 31, 4176–4186. [Google Scholar] [CrossRef]
  9. Sales, W.F.; Schoop, J.; da Silva, L.R.R.; Machado, Á.R.; Jawahir, I.S. A Review of Surface Integrity in Machining of Hardened Steels. J. Manuf. Process 2020, 58, 136–162. [Google Scholar] [CrossRef]
  10. Shihab, S.K.; Khan, Z.A.; Mohammad, A.; Siddiquee, A.N. A Review of Turning of Hard Steels Used in Bearing and Automotive Applications. Prod. Manuf. Res. 2014, 2, 24–49. [Google Scholar] [CrossRef]
  11. Kundrák, J.; Karpuschewski, B.; Gyani, K.; Bana, V. Accuracy of Hard Turning. J. Mater. Process Technol. 2008, 202, 328–338. [Google Scholar] [CrossRef]
  12. Panda, A.; Sahoo, A.K.; Kumar, R.; Das, D. A Concise Review of Uncertainty Analysis in Metal Machining. Mater. Today Proc. 2020, 26, 1734–1739. [Google Scholar] [CrossRef]
  13. Touati, S.; Mekhilef, S. Statistical Analysis of Surface Roughness in Turning Based on Cutting Parameters and Tool Vibrations with Response Surface Methodology (RSM). Matér. Tech. 2017, 105, 401. [Google Scholar] [CrossRef]
  14. Tosun, N.; Ozler, L. Optimisation for Hot Turning Operations with Multiple Performance Characteristics. Int. J. Adv. Manuf. Technol. 2004, 23, 777–782. [Google Scholar] [CrossRef]
  15. Kara, F.; Aslantas, K.; Çiçek, A. ANN and Multiple Regression Method-Based Modelling of Cutting Forces in Orthogonal Machining of AISI 316L Stainless Steel. Neural Comput. Appl. 2015, 26, 237–250. [Google Scholar] [CrossRef]
  16. Chetan; Ghosh, S.; Venkateswara Rao, P. Application of Sustainable Techniques in Metal Cutting for Enhanced Machinability: A Review. J. Clean. Prod. 2015, 100, 17–34. [Google Scholar] [CrossRef]
  17. Ming, W.; Shen, F.; Zhang, G.; Liu, G.; Du, J.; Chen, Z. Green Machining: A Framework for Optimization of Cutting Parameters to Minimize Energy Consumption and Exhaust Emissions During Electrical Discharge Machining of Al 6061 and SKD 11. J. Clean. Prod. 2021, 285, 124889. [Google Scholar] [CrossRef]
  18. Pimenov, D.Y.; Mia, M.; Gupta, M.K.; Machado, Á.R.; Pintaude, G.; Unune, D.R.; Khanna, N.; Khan, A.M.; Tomaz, Í.; Wojciechowski, S.; et al. Resource Saving by Optimization and Machining Environments for Sustainable Manufacturing: A Review and Future Prospects. Renew. Sustain. Energy Rev. 2022, 166, 112660. [Google Scholar] [CrossRef]
  19. Saez, M.; Barton, K.; Maturana, F.; Tilbury, D.M. Modeling Framework to Support Decision Making and Control of Manufacturing Systems Considering the Relationship Between Productivity, Reliability, Quality, and Energy Consumption. J. Manuf. Syst. 2022, 62, 925–938. [Google Scholar] [CrossRef]
  20. Li, B.; Tian, X.; Zhang, M. Modeling and Multi-Objective Optimization Method of Machine Tool Energy Consumption Considering Tool Wear. Int. J. Precis. Eng. Manuf.-Green. Technol. 2022, 9, 127–141. [Google Scholar] [CrossRef]
  21. Sharma, S.; Das, P.P.; Ladakhi, T.Y.; Pradhan, B.B.; Phipon, R. Performance Evaluation and Parametric Optimization of Turning Operation of Ti6Al-4V Alloy Under Dry and Minimum Quantity Lubrication Cutting Environments. J. Mater. Eng. Perform. 2023, 32, 5353–5364. [Google Scholar] [CrossRef]
  22. Singh, J.; Gill, S.S.; Mahajan, A. Experimental Investigation and Optimizing of Turning Parameters for Machining of Al7075-T6 Aerospace Alloy for Reducing the Tool Wear and Surface Roughness. J. Mater. Eng. Perform. 2023, 33, 8745–8756. [Google Scholar] [CrossRef]
  23. Abellán-Nebot, J.V.; Vila Pastor, C.; Siller, H.R. A Review of the Factors Influencing Surface Roughness in Machining and Their Impact on Sustainability. Sustainability 2024, 16, 1917. [Google Scholar] [CrossRef]
  24. Mou, W.; Zhu, S. Vibration, Tool Wear and Surface Roughness Characteristics in Turning of Inconel 718 Alloy with Ceramic Insert Under LN2 Machining. J. Braz. Soc. Mech. Sci. Eng. 2020, 42, 1–12. [Google Scholar] [CrossRef]
  25. Gaitonde, V.N.; Karnik, S.R.; Figueira, L.; Paulo Davim, J. Machinability Investigations in Hard Turning of AISI D2 Cold Work Tool Steel with Conventional and Wiper Ceramic Inserts. Int. J. Refract. Met. Hard Mater. 2009, 27, 754–763. [Google Scholar] [CrossRef]
  26. Chinchanikar, S.; Choudhury, S.K. Investigations on Machinability Aspects of Hardened AISI 4340 Steel at Different Levels of Hardness Using Coated Carbide Tools. Int. J. Refract. Met. Hard Mater. 2013, 38, 124–133. [Google Scholar] [CrossRef]
  27. Abainia, S.; Ouelaa, N. Experimental Study of the Combined Influence of the Tool Geometry Parameters on the Cutting Forces and Tool Vibrations. Int. J. Adv. Manuf. Technol. 2015, 79, 1127–1138. [Google Scholar] [CrossRef]
  28. Tian, P.; He, L.; Zhou, T.; Du, F.; Zou, Z.; Zhou, X. Experimental Characterization of the Performance of MQL-Assisted Turning of Solution Heat-Treated and Aged Inconel 718 Alloy. Int. J. Adv. Manuf. Technol. 2023, 125, 3839–3851. [Google Scholar] [CrossRef]
  29. Gao, H.; Ma, B.; Singh, R.P.; Yang, H. Areal Surface Roughness of AZ31B Magnesium Alloy Processed by Dry Face Turning: An Experimental Framework Combined with Regression Analysis. Materials 2020, 13, 2303. [Google Scholar] [CrossRef]
  30. Dahbi, S.; Ezzine, L.; El Moussami, H. Modeling of Cutting Performances in Turning Process Using Multiple Regression Method. Int. J. Eng. Res. Afr. 2017, 29, 54–69. [Google Scholar] [CrossRef]
  31. Kechagias, J.D.; Aslani, K.E.; Fountas, N.A.; Vaxevanidis, N.M.; Manolakos, D.E. A Comparative Investigation of Taguchi and Full Factorial Design for Machinability Prediction in Turning of a Titanium Alloy. Measurement 2020, 151, 107213. [Google Scholar] [CrossRef]
  32. Stamenković, O.S.; Kostić, M.D.; Radosavljević, D.B.; Veljković, V.B. Comparison of Box-Behnken, Face Central Composite and Full Factorial Designs in Optimization of Hempseed Oil Extraction by n-Hexane: A Case Study. Period. Polytech. Chem. Eng. 2018, 62, 359–367. [Google Scholar] [CrossRef]
  33. Kouahla, I.; Yallese, M.A.; Belhadi, S.; Safi, K.; Nouioua, M. Tool Vibration, Surface Roughness, Cutting Power, and Productivity Assessment Using RSM and GRA Approach During Machining of Inconel 718 with PVD-Coated Carbide Tool. Int. J. Adv. Manuf. Technol. 2022, 122, 1835–1856. [Google Scholar] [CrossRef]
  34. Awale, A.; Inamdar, K. Multi-Objective Optimization of High-Speed Turning Parameters for Hardened AISI S7 Tool Steel Using Grey Relational Analysis. J. Braz. Soc. Mech. Sci. Eng. 2020, 42, 1–17. [Google Scholar] [CrossRef]
  35. Zhao, X.; Du, X.; Xu, F.; Zuo, D.; Li, Z.; Lu, W.; Zhang, Q. Cutting Parameters Optimization and Cutting Performance of Ti(C,N)-Based Cermets by Reactive Hot-Pressing from Co–Ti–C–BN System in Dry Turning Austenitic Stainless Steels. J. Braz. Soc. Mech. Sci. Eng. 2023, 45, 1–11. [Google Scholar] [CrossRef]
  36. Zhao, Y.; Cui, L.; Sivalingam, V.; Sun, J. Understanding Machining Process Parameters and Optimization of High-Speed Turning of NiTi SMA Using Response Surface Method (RSM) and Genetic Algorithm (GA). Materials 2023, 16, 5786. [Google Scholar] [CrossRef]
  37. Myśliwiec, P.; Kubit, A.; Szawara, P. Optimization of 2024-T3 Aluminum Alloy Friction Stir Welding Using Random Forest, XGBoost, and MLP Machine Learning Techniques. Materials 2024, 17, 1452. [Google Scholar] [CrossRef]
  38. Vukelic, D.; Simunovic, K.; Kanovic, Z.; Saric, T.; Tadic, B.; Simunovic, G. Multi-Objective Optimization of Steel AISI 1040 Dry Turning Using Genetic Algorithm. Neural Comput. Appl. 2021, 33, 12445–12475. [Google Scholar] [CrossRef]
  39. Mia, M.; Khan, M.A.; Rahman, S.S.; Dhar, N.R. Mono-Objective and Multi-Objective Optimization of Performance Parameters in High Pressure Coolant Assisted Turning of Ti-6Al-4V. Int. J. Adv. Manuf. Technol. 2017, 90, 109–118. [Google Scholar] [CrossRef]
  40. Laouissi, A.; Yallese, M.A.; Belbah, A.; Belhadi, S.; Haddad, A. Investigation, Modeling, and Optimization of Cutting Parameters in Turning of Gray Cast Iron Using Coated and Uncoated Silicon Nitride Ceramic Tools. Based on ANN, RSM, and GA Optimization. Int. J. Adv. Manuf. Technol. 2019, 101, 523–548. [Google Scholar] [CrossRef]
  41. Laouissi, A.; Nouioua, M.; Yallese, M.A.; Abderazek, H.; Maouche, H.; Bouhalais, M.L. Machinability Study and ANN-MOALO-Based Multi-Response Optimization During Eco-Friendly Machining of EN-GJL-250 Cast Iron. Int. J. Adv. Manuf. Technol. 2021, 117, 1179–1192. [Google Scholar] [CrossRef]
  42. Nouioua, M.; Laouissi, A.; Yallese, M.A.; Khettabi, R.; Belhadi, S. Multi-Response Optimization Using Artificial Neural Network-Based GWO Algorithm for High Machining Performance with Minimum Quantity Lubrication. Int. J. Adv. Manuf. Technol. 2021, 116, 3765–3778. [Google Scholar] [CrossRef]
  43. Safi, K.; Yallese, M.A.; Belhadi, S.; Mabrouki, T.; Laouissi, A. Tool Wear, 3D Surface Topography, and Comparative Analysis of GRA, MOORA, DEAR, and WASPAS Optimization Techniques in Turning of Cold Work Tool Steel. Int. J. Adv. Manuf. Technol. 2022, 121, 701–721. [Google Scholar] [CrossRef]
  44. Nouioua, M.; Laouissi, A.; Brahami, R.; Blaoui, M.M.; Hammoudi, A.; Yallese, M.A. Evaluation of: MOSSA, MOALO, MOVO and MOGWO Algorithms in Green Machining to Enhance the Turning Performances of X210Cr12 Steel. Int. J. Adv. Manuf. Technol. 2022, 120, 2135–2150. [Google Scholar] [CrossRef]
  45. ISO 5610-1:2010; Holders with Rectangular Shanks for Cutting Inserts, Part 1—General Review, Correlation, and Parameter Definition. ISO: Geneva, Switzerland, 2010.
  46. Chihaoui, S.; Yallese, M.A.; Belhadi, S.; Belbah, A.; Safi, K.; Haddad, A. Coated CBN Cutting Tool Performance in Green Turning of Gray Cast Iron EN-GJL-250: Modeling and Optimization. Int. J. Adv. Manuf. Technol. 2021, 113, 3643–3665. [Google Scholar] [CrossRef]
  47. Chihaoui, S. Evaluation Des Performances Des Outils En Cbn Revêtus Lors De L’usinage À Sec De La Fonte Grise—Approche Statistique Et Optimisation Multi Objectifs. Ph.D. Thesis, 8 Mai 1945 University, Guelma, Algeria, 2021. [Google Scholar]
  48. Kellouche, Y.; Tayeh, B.A.; Chetbani, Y.; Zeyad, A.M.; Mostafa, S.A. Comparative Study of Different Machine Learning Approaches for Predicting the Compressive Strength of Palm Fuel Ash Concrete. J. Build. Eng. 2024, 88, 109187. [Google Scholar] [CrossRef]
  49. Abidi, A.; Ben Salem, S.; Yallese, M.A. Machining Quality of High Speed Helical Milling of Carbon Fiber Reinforced Plastics. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2021, 236, 1049–1066. [Google Scholar] [CrossRef]
  50. Del Pino, G.G.; Bezazi, A.; Boumediri, H.; Kieling, A.C.; Garcia, S.D.; Torres, A.R.; De Souza Soares, R.; De Macêdo Neto, J.C.; Dehaini, J.; Panzera, T.H. Optimal Tensile Properties of Biocomposites Made of Treated Amazonian Curauá Fibres Using Taguchi Method. Mater. Res. 2021, 24, e20210326. [Google Scholar] [CrossRef]
  51. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  52. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An Improved Grey Wolf Optimizer for Solving Engineering Problems. Expert. Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  53. Elsayed, S.M.; Sarker, R.A.; Essam, D.L. A New Genetic Algorithm for Solving Optimization Problems. Eng. Appl. Artif. Intell. 2014, 27, 57–69. [Google Scholar] [CrossRef]
  54. Yuan, Y.; Wang, W.; Pang, W. A Genetic Algorithm with Tree-Structured Mutation for Hyperparameter Optimisation of Graph Neural Networks. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation, CEC 2021, Krakow, Poland, 28 June–1 July 2021; pp. 482–489. [Google Scholar] [CrossRef]
  55. Touati, S.; Boumediri, H.; Karmi, Y.; Chitour, M.; Boumediri, K.; Zemmouri, A.; Moussa, A.; Fernandes, F. Performance Analysis of Steel W18CR4V Grinding Using RSM, DNN-GA, KNN, LM, DT, SVM Models, and Optimization via Desirability Function and MOGWO. Heliyon 2025, 11, e42640. [Google Scholar] [CrossRef]
  56. Reffas, O.; Boumediri, H.; Karmi, Y.; Kahaleras, M.S.; Bousba, I.; Aissa, L. Statistical Analysis and Predictive Modeling of Cutting Parameters in EN-GJL-250 Cast Iron Turning: Application of Machine Learning and MOALO Optimization. Int. J. Adv. Manuf. Technol. 2025, 137, 1991–2009. [Google Scholar] [CrossRef]
  57. Gaytan, A.; Begovich-Mendoza, O.; Arana-Daniel, N. Training of Convolutional Neural Networks for Image Classification with Fully Decoupled Extended Kalman Filter. Algorithms 2024, 17, 243. [Google Scholar] [CrossRef]
  58. Sum, J.; Leung, C.S.; Young, G.H.; Kan, W.K. On the Kalman Filtering Method in Neural-Network Training and Pruning. IEEE Trans. Neural Netw. 1999, 10, 161–166. [Google Scholar] [CrossRef]
  59. Crassidis, J.L.; Junkins, J.L. Optimal Estimation of Dynamic Systems; Chapman and Hall/CRC: Boca Raton, FL, USA, 2012. [Google Scholar]
  60. Haykin, S. Neural Networks: A Comprehensive Foundation by Simon Haykin. Knowl. Eng. Rev. 1999, 13, 409–412. [Google Scholar]
  61. Touati, S.; Ghelani, L.; Zemmouri, A.; Boumediri, H. Optimization of Gas Carburizing Treatment Parameters of Low Carbon Steel Using Taguchi and Grey Relational Analysis (TA-GRA). Int. J. Adv. Manuf. Technol. 2022, 120, 7937–7949. [Google Scholar] [CrossRef]
  62. Touati, S.; Boumediri, H.; Mansouri, K.; Chitour, M.; Reffas, O.; Karmi, Y.; Boumediri, K.; Zemmouri, A. Optimization of Cutting Conditions for the Metallic Surfaces of 50CrNi3Mn Alloy Steel Using Box-Behnken Design, ANOVA, and Desirability Function (Box-ANOVA-DF). J. Nano-Electron. Phys. 2025, 17, 1004. [Google Scholar] [CrossRef]
  63. Mirjalili, S.; Saremi, S.; Mirjalili, S.M.; Coelho, L.D.S. Multi-Objective Grey Wolf Optimizer: A Novel Algorithm for Multi-Criterion Optimization. Expert. Syst. Appl. 2016, 47, 106–119. [Google Scholar] [CrossRef]
Figure 1. Comprehensive illustration of the experimental configuration utilized in this study.
Figure 2. Three-dimensional surface roughness topography for various feed rate values: (a) Uncoated Si3N4, (b) Coated Si3N4.
Figure 3. Normal Probability Plots of residuals for Fr, Ra, and Pc.
Figure 4. Box–Cox plots for optimal power transformations of Fr, Ra, and Pc.
Figure 5. Adjusted Normal Probability Plots of transformed residuals for Fr, Ra, and Pc.
Figure 6. IGWO algorithm flowchart.
Figure 7. GAs algorithm flowchart.
Figure 8. Hybrid algorithm flowchart (DNN-GA and DNN-IGWO).
Figure 9. DNN-EKF algorithm flowchart.
Figure 10. Comparison between measured and predicted Fr, Ra, and Pc using proposed optimization models (DNN-IGWO, DNN-GA, and DNN-EKF) and traditional methods (SVM, DT, and LM).
Figure 11. Comparison of SI performance between measured and predicted Fr, Ra, and Pc using proposed optimization models (DNN-IGWO, DNN-GA, and DNN-EKF) and traditional methods (SVM, DT, and LM).
Figure 12. Spider plot comparison between measured and predicted Fr, Ra, and Pc using proposed optimization models (DNN-IGWO, DNN-GA, and DNN-EKF) and traditional methods (SVM, DT, and LM).
Figure 13. Taylor diagrams for Fr, Ra, and Pc of various predictive models.
Figure 14. Surface plots of cutting force (Fr), surface roughness (Ra), and cutting power (Pc) as a function of ap, f, and Vc.
Figure 15. Contour plots and optimization profiles for optimizing cutting parameters using the desirability function method.
Figure 16. Schematic representation of the multi-objective grey wolf optimization (MOGWO) algorithm.
Figure 17. Pareto fronts for MO optimization of (a) Ra vs. Fr; (b) Pc vs. Fr; (c) Pc vs. Ra; and (d) Ra vs. Fr vs. Pc.
Table 1. Input factors and levels of experimental dataset.

Parameter      Symbol       Level 1  Level 2  Level 3
Type of tools  m            1        2        -
Depth of cut   ap (mm)      0.25     0.5      0.75
Feed rate      f (mm/rev)   0.08     0.14     0.2
Cutting speed  Vc (m/min)   260      370      530
Table 2. Results of Fr, Ra, and Pc according to the cutting conditions.

No.  m  ap (mm)  f (mm/rev)  Vc (m/min)  Fr (N)   Ra (µm)  Pc (W)
1    1  0.25     0.08        260         124.65   0.82     541.61
2    1  0.25     0.08        370         107.41   0.77     664.44
3    1  0.25     0.08        530         101.78   0.72     896.11
4    1  0.25     0.14        260         157.08   1.29     680.57
5    1  0.25     0.14        370         136.31   0.91     839.77
6    1  0.25     0.14        530         133.06   0.83     1182.64
7    1  0.25     0.2         260         172.87   1.53     745.43
8    1  0.25     0.2         370         161.60   1.39     1000.93
9    1  0.25     0.2         530         158.15   1.18     1397.22
10   1  0.5      0.08        260         202.30   0.75     877.18
11   1  0.5      0.08        370         169.46   0.73     1052.04
12   1  0.5      0.08        530         147.73   0.70     1301.71
13   1  0.5      0.14        260         323.52   1.35     1411.42
14   1  0.5      0.14        370         314.48   0.88     1947.25
15   1  0.5      0.14        530         291.86   0.79     2580.65
16   1  0.5      0.2         260         364.11   1.78     1583.53
17   1  0.5      0.2         370         346.99   1.43     2130.87
18   1  0.5      0.2         530         333.84   1.27     2942.99
19   1  0.75     0.08        260         327.16   0.93     1416.84
20   1  0.75     0.08        370         312.20   0.85     1921.57
21   1  0.75     0.08        530         297.35   0.71     2622.97
22   1  0.75     0.14        260         445.56   1.31     1929.74
23   1  0.75     0.14        370         442.81   1.21     2733.56
24   1  0.75     0.14        530         421.40   1.06     3725.98
25   1  0.75     0.2         260         536.65   1.79     2333.46
26   1  0.75     0.2         370         515.17   1.58     3189.26
27   1  0.75     0.2         530         490.58   1.40     4325.81
28   2  0.25     0.08        260         70.85    0.86     307.49
29   2  0.25     0.08        370         69.36    0.72     427.16
30   2  0.25     0.08        530         56.18    0.64     498.78
31   2  0.25     0.14        260         116.01   1.16     500.32
32   2  0.25     0.14        370         110.93   0.91     683.06
33   2  0.25     0.14        530         103.91   0.83     919.44
34   2  0.25     0.2         260         156.50   1.86     677.29
35   2  0.25     0.2         370         145.78   1.76     895.13
36   2  0.25     0.2         530         143.20   1.66     1277.12
37   2  0.5      0.08        260         158.40   0.55     685.28
38   2  0.5      0.08        370         148.26   0.52     917.54
39   2  0.5      0.08        530         140.39   0.45     1241.25
40   2  0.5      0.14        260         215.05   0.88     935.83
41   2  0.5      0.14        370         202.97   0.81     1252.28
42   2  0.5      0.14        530         189.85   0.77     1669.62
43   2  0.5      0.2         260         530.60   1.65     2305.69
44   2  0.5      0.2         370         253.60   1.65     1581.76
45   2  0.5      0.2         530         236.05   1.65     2094.45
46   2  0.75     0.08        260         211.71   0.62     928.85
47   2  0.75     0.08        370         219.77   0.54     1359.93
48   2  0.75     0.08        530         196.56   0.47     1749.49
49   2  0.75     0.14        260         349.67   0.87     1505.57
50   2  0.75     0.14        370         350.78   0.80     2155.17
51   2  0.75     0.14        530         318.01   0.77     2812.83
52   2  0.75     0.2         260         424.48   1.53     1835.91
53   2  0.75     0.2         370         378.36   1.50     2321.90
54   2  0.75     0.2         530         382.82   1.43     3387.45
Table 3. ANOVA analysis for Fr, Ra, and Pc.

Response  Source     Sum of Squares  df  Mean Square   F-Value  p-Value   Remarks
Fr        Model      1.215 × 10⁶     17  71,469.11     66.61    <0.0001   Significant
          A-m        4.234 × 10⁵     2   2.117 × 10⁵   197.29   <0.0001   Significant
          B-ap       5.076 × 10⁵     1   5.076 × 10⁵   473.07   <0.0001   Significant
          C-f        2.097 × 10⁵     1   2.097 × 10⁵   195.48   <0.0001   Significant
          D-Vc       16,719.35       1   16,719.35     15.58    0.0002    Significant
          AB         68,954.79       2   34,477.40     32.13    <0.0001   Significant
          AC         22,730.86       2   11,365.43     10.59    0.0001    Significant
          AD         1334.32         2   667.16        0.6218   0.5402    Not Significant
          BC         23,839.14       1   23,839.14     22.22    <0.0001   Significant
          BD         221.47          1   221.47        0.2064   0.6512    Not Significant
          CD         2043.71         1   2043.71       1.90     0.1724    Not Significant
          B²         324.03          1   324.03        0.3020   0.5846    Not Significant
          C²         1261.97         1   1261.97       1.18     0.2823    Not Significant
          D²         1277.98         1   1277.98       1.19     0.2793    Not Significant
          Residual   67,595.95       63  1072.95
          Cor Total  1.283 × 10⁶     80
Ra        Model      28.11           17  1.65          40.53    <0.0001   Significant
          A-m        8.16            2   4.08          99.99    <0.0001   Significant
          B-ap       0.0066          1   0.0066        0.1611   0.6895    Not Significant
          C-f        13.85           1   13.85         339.55   <0.0001   Significant
          D-Vc       2.85            1   2.85          69.76    <0.0001   Significant
          AB         0.2971          2   0.1485        3.64     0.0319    Significant
          AC         0.7721          2   0.3860        9.46     0.0003    Significant
          AD         1.73            2   0.8629        21.15    <0.0001   Significant
          BC         1.550 × 10⁻⁶    1   1.550 × 10⁻⁶  0.0000   0.9951    Not Significant
          BD         0.0501          1   0.0501        1.23     0.2719    Not Significant
          CD         0.0447          1   0.0447        1.10     0.2993    Not Significant
          B²         0.0010          1   0.0010        0.0239   0.8775    Not Significant
          C²         0.0365          1   0.0365        0.8935   0.3482    Not Significant
          D²         0.3654          1   0.3654        8.96     0.0039    Significant
          Residual   2.57            63  0.0408
          Cor Total  30.68           80
Pc        Model      5.757 × 10⁷     17  3.386 × 10⁶   101.70   <0.0001   Significant
          A-m        1.823 × 10⁷     2   9.116 × 10⁶   273.77   <0.0001   Significant
          B-ap       2.109 × 10⁷     1   2.109 × 10⁷   633.32   <0.0001   Significant
          C-f        8.355 × 10⁶     1   8.355 × 10⁶   250.89   <0.0001   Significant
          D-Vc       6.718 × 10⁶     1   6.718 × 10⁶   201.75   <0.0001   Significant
          AB         3.058 × 10⁶     2   1.529 × 10⁶   45.92    <0.0001   Significant
          AC         8.721 × 10⁵     2   4.361 × 10⁵   13.10    <0.0001   Significant
          AD         1.464 × 10⁶     2   7.320 × 10⁵   21.98    <0.0001   Significant
          BC         9.832 × 10⁵     1   9.832 × 10⁵   29.53    <0.0001   Significant
          BD         1.472 × 10⁶     1   1.472 × 10⁶   44.20    <0.0001   Significant
          CD         3.444 × 10⁵     1   3.444 × 10⁵   10.34    0.0021    Significant
          B²         1747.23         1   1747.23       0.0525   0.8196    Not Significant
          C²         76,489.60       1   76,489.60     2.30     0.1346    Not Significant
          D²         3871.05         1   3871.05       0.1163   0.7343    Not Significant
          Residual   2.098 × 10⁶     63  33,299.08
          Cor Total  5.967 × 10⁷     80
Table 4. Statistical metrics for model performance in predicting the outputs Fr, Ra, and Pc.

Output  Std. Dev.  Mean     C.V. %  R²    Adjusted R²  Predicted R²  Adeq. Precision
Fr      32.76      206.44   15.87   0.95  0.93         0.93          32.01
Ra      0.2020     1.30     15.57   0.92  0.89         0.87          27.75
Pc      182.48     1313.48  13.89   0.97  0.96         0.94          45.84
Table 5. DNN optimization parameters.

Hidden layers: min 1, max 10
Hidden layer size: min 1, max 10

Learning algorithms:
trainlm: Levenberg–Marquardt backpropagation
trainbr: Bayesian regularization backpropagation
trainbfg: BFGS quasi-Newton backpropagation
traincgb: Conjugate gradient backpropagation with Powell–Beale restarts
traincgf: Conjugate gradient backpropagation with Fletcher–Reeves updates
traincgp: Conjugate gradient backpropagation with Polak–Ribière updates
traingd: Gradient descent backpropagation
traingda: Gradient descent with adaptive learning rate backpropagation
traingdm: Gradient descent with momentum
traingdx: Gradient descent with momentum and adaptive learning rate backpropagation
trainoss: One-step secant backpropagation
trainrp: Resilient backpropagation (RPROP)
trainscg: Scaled conjugate gradient backpropagation

Activation functions:
compet: Competitive transfer function
elliotsig: Elliot sigmoid transfer function
hardlim: Positive hard limit transfer function
hardlims: Symmetric hard limit transfer function
logsig: Logarithmic sigmoid transfer function
netinv: Inverse transfer function
poslin: Positive linear transfer function
purelin: Linear transfer function
radbas: Radial basis transfer function
radbasn: Normalized radial basis transfer function
satlin: Positive saturating linear transfer function
satlins: Symmetric saturating linear transfer function
softmax: Soft max transfer function
tansig: Symmetric sigmoid (hyperbolic tangent) transfer function
tribas: Triangular basis transfer function
Table 6. Error criteria formulas.

Criteria   Formula
RMSE       $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_{m}-y_{p}\right)^{2}}$
MAPE (%)   $\mathrm{MAPE} = \frac{100}{n}\sum_{i=1}^{n}\left|\frac{y_{m}-y_{p}}{y_{m}}\right|$
MAE        $\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_{m}-y_{p}\right|$
R²         $R^{2} = 1-\frac{\sum_{i=1}^{n}\left(y_{m}-y_{p}\right)^{2}}{\sum_{i=1}^{n}\left(y_{m}-\bar{y}\right)^{2}}$
OBJ        $\mathrm{OBJ} = \frac{\mathrm{RMSE}+\mathrm{MAE}}{R^{2}+1}$
SI         $\mathrm{SI} = \frac{\mathrm{RMSE}}{\bar{y}}$

where $y_{m}$: experimental value; $y_{p}$: predicted value; $\bar{y}$: average of the experimentally determined values; and $n$: number of experiments.
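The Table 6 criteria are straightforward to compute. A minimal sketch, assuming the standard coefficient-of-determination form of R² (the extracted formula is ambiguous); the sample values at the end are illustrative, not from the paper's dataset:

```python
from math import sqrt

def error_criteria(y_m, y_p):
    """Compute the Table 6 criteria for measured (y_m) and predicted (y_p) values."""
    n = len(y_m)
    y_bar = sum(y_m) / n
    rmse = sqrt(sum((m - p) ** 2 for m, p in zip(y_m, y_p)) / n)
    mae = sum(abs(m - p) for m, p in zip(y_m, y_p)) / n
    mape = 100 / n * sum(abs((m - p) / m) for m, p in zip(y_m, y_p))
    ss_res = sum((m - p) ** 2 for m, p in zip(y_m, y_p))
    ss_tot = sum((m - y_bar) ** 2 for m in y_m)
    r2 = 1 - ss_res / ss_tot                 # coefficient of determination
    obj = (rmse + mae) / (r2 + 1)            # composite objective criterion
    si = rmse / y_bar                        # scatter index
    return {"RMSE": rmse, "MAPE": mape, "MAE": mae, "R2": r2, "OBJ": obj, "SI": si}

# Illustrative measured/predicted Fr values (not taken from the study):
scores = error_criteria([124.65, 107.41, 101.78], [120.0, 110.0, 100.0])
```

A perfect model gives RMSE = MAE = 0 and R² = 1, hence OBJ = 0; lower OBJ and SI therefore indicate a better fit, which is how the hybrid DNN models are ranked.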
Table 7. Optimal parameters of DNNs obtained with IGWO.

Output  DNN Model  Hidden Layers  Layer Sizes        Learning Algorithm  Activation Functions
Fr      DNN-IGWO   6              5, 10, 2, 9, 5, 3  trainbr             logsig, softmax, tribas, netinv, purelin, netinv
Ra      DNN-IGWO   2              7, 7               trainbr             radbas, radbas
Pc      DNN-IGWO   3              8, 3, 4            trainbr             radbas, elliotsig, radbas
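The architectures in Tables 7 and 8 are found by metaheuristic search over the Table 5 space. As background for how a grey-wolf-type search works, here is a minimal sketch of the plain GWO of Mirjalili et al. on a toy continuous objective; the paper's improved variant (IGWO) and its discrete hyperparameter encoding are not reproduced here:

```python
import random

def gwo(objective, dim, lo, hi, n_wolves=20, n_iter=200, seed=1):
    """Plain grey wolf optimizer: the three best wolves (alpha, beta, delta)
    pull the rest of the pack toward the current optimum."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(n_iter):
        wolves.sort(key=objective)
        alpha, beta, delta = wolves[:3]
        a = 2 * (1 - t / n_iter)            # control parameter decays 2 -> 0
        for i in range(n_wolves):
            new_pos = []
            for d in range(dim):
                est = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a      # large |A|: explore; small: exploit
                    C = 2 * r2
                    est += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                new_pos.append(min(hi, max(lo, est / 3)))  # average of leaders, clamped
            wolves[i] = new_pos
    return min(wolves, key=objective)

# Toy usage: minimize the sphere function in 3 dimensions.
best = gwo(lambda x: sum(v * v for v in x), dim=3, lo=-5.0, hi=5.0)
```

For the DNN search, the objective would instead decode each wolf's position into a layer count, layer sizes, training algorithm, and activations, train the network, and return a validation error such as the OBJ criterion of Table 6.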
Table 8. Optimal parameters of DNNs obtained with GA.

Output  DNN Model  Hidden Layers  Layer Sizes  Learning Algorithm  Activation Functions
Fr      DNN-GA     3              4, 9, 10     trainbr             radbas, elliotsig, radbas
Ra      DNN-GA     2              8, 10        trainbr             elliotsig, elliotsig
Pc      DNN-GA     3              8, 3, 4      trainbr             radbas, elliotsig, radbas
Table 9. Initial DNN architecture for EKF algorithm initialization.

Output  DNN Model  Hidden Layers  Layer Sizes  Learning Algorithm  Activation Functions
Fr      DNN-EKF    2              3, 8         trainlm             tansig, tansig
Ra      DNN-EKF    2              7, 10        trainlm             tansig, tansig
Pc      DNN-EKF    2              7, 4         trainlm             tansig, tansig
Table 10. Solutions found for desirability function.

No.  m  ap     f      Vc (m/min)  Fr (N)  Ra (µm)  Pc (W)   Desirability
1    2  0.250  0.080  437.762     46.908  0.520    389.322  0.978
2    2  0.250  0.080  438.086     46.885  0.520    389.467  0.978
3    2  0.250  0.080  437.681     46.915  0.520    389.289  0.978
4    2  0.250  0.080  438.364     46.865  0.520    389.597  0.978
5    2  0.250  0.080  435.778     47.059  0.520    388.446  0.978
6    2  0.250  0.080  440.297     46.723  0.520    390.456  0.978
7    2  0.250  0.080  433.456     47.237  0.520    387.421  0.978
8    2  0.250  0.080  442.554     46.564  0.521    391.483  0.978
9    2  0.250  0.080  431.367     47.405  0.520    386.518  0.978
10   2  0.250  0.080  444.596     46.424  0.521    392.412  0.978
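Each Table 10 solution is scored by the composite desirability: every response is a smaller-is-better objective, so an individual desirability maps its value from 1 (at or below the lower bound) to 0 (at or above the upper bound), and the composite D is the geometric mean. A minimal sketch; the bounds below are assumed from the observed extremes of Table 2, not stated by the paper, so the resulting D only approximates the reported 0.978:

```python
def desirability_min(y, low, high):
    """Smaller-is-better desirability: 1 at/below `low`, 0 at/above `high`."""
    if y <= low:
        return 1.0
    if y >= high:
        return 0.0
    return (high - y) / (high - low)

def composite(ds):
    """Composite desirability: geometric mean of the individual values."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Solution no. 1 of Table 10, with bounds assumed from the Table 2 extremes:
d_fr = desirability_min(46.908, low=56.18, high=536.65)     # Fr (N)
d_ra = desirability_min(0.520, low=0.45, high=1.86)         # Ra (um)
d_pc = desirability_min(389.322, low=307.49, high=4325.81)  # Pc (W)
D = composite([d_fr, d_ra, d_pc])
```

Because the predicted Fr falls below the smallest measured value, its individual desirability saturates at 1, and D is dominated by the Ra and Pc terms.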
Table 11. Experimental validation of the optimized machining parameters.

No.  m  ap (mm)  f (mm/rev)  Vc (m/min)  Fr (N)  Ra (µm)  Pc (W)
1    2  0.25     0.08        530         56.18   0.64     492.55
2    3  0.6      0.08        546         42.80   0.81     389.48
Karmi, Y.; Boumediri, H.; Reffas, O.; Chetbani, Y.; Ataya, S.; Khan, R.; Yallese, M.A.; Laouissi, A. Integration of Hybrid Machine Learning and Multi-Objective Optimization for Enhanced Turning Parameters of EN-GJL-250 Cast Iron. Crystals 2025, 15, 264. https://doi.org/10.3390/cryst15030264
