Article

Application of Machine Learning Method for Hardness Prediction of Metal Materials Fabricated by 3D Selective Laser Melting

1 Faculty of Information Studies, Novo Mesto, Ljubljanska Cesta 31a, 8000 Novo Mesto, Slovenia
2 Faculty of Mechanical Engineering, University of Ljubljana, Aškerčeva Cesta 6, 1000 Ljubljana, Slovenia
3 Institute of Mechanical Science, Faculty of Mechanics, Vilnius Gediminas Technical University, Sauletekio al. 11, LT-10223 Vilnius, Lithuania
4 Faculty of Mechanical Engineering, Casimir Pulaski Radom University, Stasieckiego Str. 54, 26-600 Radom, Poland
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(23), 12832; https://doi.org/10.3390/app152312832
Submission received: 31 October 2025 / Revised: 30 November 2025 / Accepted: 2 December 2025 / Published: 4 December 2025
(This article belongs to the Special Issue Applications of Artificial Intelligence in Industrial Engineering)

Featured Application

The presented results and methodology are useful when the selective laser melting technique is used to fabricate components with specific hardness requirements. The ability to predict the hardness of SLM specimens is crucial for avoiding the production of defective elements, thereby increasing efficiency and decreasing waste.

Abstract

In this article, models for the prediction of the surface hardness of SLM specimens are presented. In the experiments, EOS Maraging Steel MS1 was processed on an EOS M 290 3D printer via selective laser melting (SLM). To predict the hardness of the SLM specimens, several machine learning methods were applied: genetic programming, a neural network, multiple regression, k-nearest neighbors, a support vector machine, logistic regression, and random forest. Fractal geometry was used to characterize the complexity of SLM-shaped microstructures. It was found that fractal geometry combined with machine learning techniques greatly improved our comprehension of the intricacies of surface analysis and provided highly efficient predictions. All the applied algorithms exhibited predictability above 90%, with the best average result of 98.7% for genetic programming.

1. Introduction

Additive manufacturing (AM) technologies are revolutionizing the automotive, aviation, and space industries [1]. Practically every company active in the automotive sector is investing in 3D printing technologies, and the worldwide AM market is expected to fabricate components and end products worth ca. US$2 trillion by 2030 [2]. Among these methods, selective laser melting (SLM) has become an extremely popular topic, with over 2300 papers published by Elsevier in 2024 alone. SLM is one of the most widely used AM techniques for the production of ready-to-use metallic components [3]. It is a powder bed fusion process in which high-energy lasers selectively melt metallic powder particles, fusing them with the previous layer.
Maraging steel 3D-printed components are used in very demanding applications, which require improved performance and enhanced yield strength. This material offers an excellent combination of strength and ductility, improved weldability, and high corrosion resistance [4]. Many researchers have attempted to characterize the behavior of additively manufactured steel components under loading conditions and to compare it with that of conventionally produced counterparts [5]. For instance, since the corrosion resistance of SLM-produced maraging steel was negatively correlated with pore size, an appropriate heat treatment process was proposed, and it was demonstrated that the pore size was sensitive to elevated temperatures [6]. Patil et al. [7] employed a Box–Behnken design response surface methodology (RSM) to evaluate the effects of critical process parameters, including layer thickness, laser power, scan speed, and hatch spacing, on the relative density of SLM-produced maraging steel, as well as its micro-hardness, tensile strength, and surface roughness. Hong et al. [8] found that microhardness HV decreased with increasing volumetric energy density (VED) and the respective relative density in porous specimens, while it increased with increasing VED in fully dense specimens. Another report presented a tribocorrosion model for SLM maraging steel, taking into account the effect of grain size on wear processes and corrosion synergy [9]. The results proved that reduced laser beam power resulted in a material with a more fine-grained structure and higher hardness, with better resistance to abrasive wear, while its corrosion resistance appeared worse compared to that of the material with larger grains. Other process parameters were investigated as well, including, among others, the layer-by-layer build strategy, time homogenization, and the effects of thermal treatment and building platform preheating [10].
There are numerous reports on the properties of SLM-fabricated maraging steel components [11]. For instance, Marciniak et al. [12] as well as Branco et al. [13] examined the fatigue behavior of the material under both constant- and variable-amplitude loads. Other reports [14] demonstrated that the microcracks and pores that appeared during the SLM process affected the mechanical characteristics of the 3D-printed structures, which was especially important in the case of tribological functional surfaces or functions related to heat transfer.
Numerical simulations of the SLM process have emerged to reduce the costs of experimental trials with various 3D-printed materials. In particular, some researchers focused on the formation process and metallurgical defects, including numerical simulations of thermodynamic processes, prediction of microstructure evolution, and mechanical performance [15]. Kolomy et al. [16] examined the compressive yield strength and microhardness of SLM-fabricated high-strength maraging steel and obtained a satisfactory correlation between simulations and the observed experimental results.
Machine learning algorithms were found useful for SLM process optimization [17] and for the prediction of mechanical properties in metal additive manufacturing [18]. Machine learning is a data analysis technique that allows a computer to read large amounts of data, learn patterns hidden within the data, and acquire rules for judging unknown data [19]. Machine learning involves training on data to build a model that a computer can use to classify test data and, ultimately, real-world data. Barrionuevo et al. [20] proposed a machine learning-aided interpretable model, featuring gradient-boosting techniques, extreme gradient-boosting regression, and AdaBoost, for the prediction of the microhardness of SLM-fabricated alloys and metal-based composites. A comprehensive review of the in situ monitoring systems and machine learning algorithms used for quality assurance in laser powder bed fusion systems was published by Taherkhani et al. [21]. They found that, among the classifiers using different feature extraction methods, fractal methods were reported to achieve the highest accuracy. In the study [22], the authors presented a fatigue life prediction mapping model framework for rib-to-deck welds (RTDWs) in orthotropic steel bridge decks (OSBDs). They used a limited dataset of 27 experimental fatigue tests. The framework included a comparison of four machine learning approaches, highlighting the superior prediction accuracy and robustness of a Gaussian Variational Bayes Network (GVBN). The analysis revealed the sensitivity of RTDW fatigue life to various parameters.
The concept of fractal geometry, introduced by Mandelbrot, mirrors the complex shapes of nature that Euclidean geometry has neglected [23]. Fractals are structures that have the property of so-called self-similarity, i.e., parts of the system have similar shapes at different degrees of magnification. Individual, easy-to-identify, scale-independent fractal parameters were found useful for characterization of the surface topography of SLM-produced steel, along with standard parameters for the fracture surface [24]. Song et al. [25] studied the cutting mechanisms and proposed a prediction model of a high-speed dry milling process based on specific cutting energy distribution, in which a 3D fractal dimension quantitatively reflected the surface integrity. However, to the best of our knowledge, fractal geometry principles have not been applied to create models for the prediction of other characteristics of SLM-fabricated maraging steel components.
It is widely accepted that data-driven prediction is crucial for process parameter optimization in laser metal additive manufacturing [26]. To implement smart manufacturing, it is necessary to develop effective models and methods to enhance quality prediction, streamline product scheduling, conduct proactive failure detection, and perform predictive maintenance [27]. Continuous, real-time monitoring of AM processes, detection and prediction of defects, and, in turn, cost-effectiveness, quality assurance, and productivity improvement require reliable tools [28]. The present contribution proposes such a tool, tested for the SLM process, which is also transferable to other applications.
In this work, we propose a method for predicting the surface hardness of metal materials produced by 3D selective laser melting, based on machine learning algorithms and the application of fractal geometry. It was presumed that adding fractal parameters to the process models would predict HV better than process modeling alone. Thus, partial replacement of expensive post-process microstructure measurements with an image-based complexity metric could be expected.

2. Materials and Methods

2.1. Experimental Work and Material Preparation

The EOS M 290 system (EOS GmbH, Krailing, Germany) was used for the preparation of the SLM-fabricated maraging steel specimens. Its Yb-fiber laser provides a high-quality laser beam with a focus diameter of ca. 100 µm and, consequently, excellent detail resolution across the extensive range of validated materials available on the market [29]. The specimens were made of EOS Maraging Steel MS1 delivered by EOS GmbH (Krailing, Germany). The chemical composition corresponded to the following classifications: 18Ni300 (AISI), 1.2709 (EN), or X3NiCoMoTi 18-9-5 (DIN) [30]. According to the specification, the particle size of the powder was 15–65 μm. Seventeen cubic specimens with a volume of 1 cm3 were prepared and labeled S1 to S17. For each specimen, hardness was measured at 10 different points, and the average hardness value was calculated.
The available published reports indicated that MS1 material exhibited consistent and predictable mechanical properties, and its performance met the quality requirements [31]. Moreover, no significant anisotropy in the mechanical properties was found in thin-walled elements printed in different directions [32]. Maraging steel is a special class of high-strength stainless steel, which is age-hardened via dispersive precipitation, obtaining high strength during the second phase of the heat treatment [33]. However, different studies provided different data on the optimum heat treatment (aging) conditions, suggesting some skepticism about the best aging parameters, both in terms of strength and hardness values and in terms of the time–temperature combination needed to attain ductility and toughness [34]. Thus, it was decided to analyze the specimens without additional heat treatment, directly after SLM fabrication.

2.2. Methodology of Structure Characterization

Figure 1 presents an example of the specimen’s microstructure. It is a typical solidification substructure described in many publications, e.g., [35,36]. The melt pool boundaries (MPBs) are visible, marking the tracks left along the profiles of the melt pools during SLM. During the process, the area under the leading edge of the laser beam is subject to faster cooling, producing the semi-elliptical shape of the MPBs.
The dimensions of observed melt pools are a result of numerous factors, such as spot distance, laser beam size, laser power, exposure time, as well as building strategy [35]. The microstructure of SLM specimens is very complex and requires advanced characterization methodologies to understand the process–microstructure–property relations [37]. The classical Euclidean geometry does not provide necessary tools, so it was decided to apply fractal geometry for complexity characterization of the microstructure of SLM-fabricated specimens.

2.2.1. Fractals

In fractal geometry, two concepts are important: self-similarity or self-affinity, and fractal dimension [38]. Generally speaking, understanding how something scales allows us to infer the scaling rule, or fractal dimension FD. Every kind of fractal analysis is predicated on a fractal dimension FD of some kind. FDs come in a variety of forms, but they may all be grouped together into one type: measures of complexity. The formal focus of this concept is the link between N, the number of pieces, and the scale ε utilized to obtain the new pieces. It can be written as follows [39]:
N ∝ ε^(−FD).
The respective fractal dimensions were calculated from 2D images of the microstructure of SLM-produced specimens.
Hurst analysis, a fractal-based, scale-invariant approach for analyzing long-term time series data, provides a quantitative means of evaluating temporal scale in a time series. The Hurst parameter H is used to quantify long memory in a time series, i.e., autocorrelation that declines with increasing lag between values [40]. To calculate the fractal dimension FD, the Hurst parameter H was determined according to the methodology described in [41].
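The relation FD = 2 − H used below can be illustrated with a minimal rescaled-range (R/S) sketch on a synthetic grey-level profile. This is only an illustration under stated assumptions (a 1-D profile, dyadic chunk sizes); the exact methodology of [41] may differ in detail.

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent H by rescaled-range (R/S) analysis.

    The series is split into chunks of decreasing size; for each size the
    mean R/S statistic is computed, and H is the slope of log(R/S)
    versus log(chunk size).
    """
    series = np.asarray(series, dtype=float)
    n = len(series)
    sizes, rs_means = [], []
    size = n
    while size >= min_chunk:
        rs_vals = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            z = np.cumsum(chunk - chunk.mean())   # cumulative deviate series
            r = z.max() - z.min()                 # range of the deviate series
            s = chunk.std()                       # standard deviation of the chunk
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            sizes.append(size)
            rs_means.append(np.mean(rs_vals))
        size //= 2
    # H is the slope of the log-log regression line
    h, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return h

rng = np.random.default_rng(0)
profile = rng.normal(size=1024)   # synthetic stand-in for a grey-level profile
H = hurst_rs(profile)
FD = 2.0 - H                      # fractal dimension of a profile from a 2D image
```

For uncorrelated noise the estimate falls near H ≈ 0.5, giving FD ≈ 1.5; persistent (smoother) profiles yield higher H and lower FD.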

2.2.2. Modeling

For modeling of the surface hardness of the SLM specimens, we used intelligent system techniques (ISTs), including genetic programming, a neural network, multiple regression, k-nearest neighbors, a support vector machine, logistic regression, and random forest. The proposed model reflected only those aspects that correlated with the 2D surface image and the selected models. The model did not replace volumetric inspection.
Genetic programming (GP) [42] extends the genotype of genetic algorithms (GAs) to handle structural expressions and applies them to program generation, learning, inference, concept formation, etc. The attempt to apply the idea of GP to AI for problem solving is called evolutionary learning. GP extends the GA method to handle graph structures, especially tree structures, as shown in Figure 2. Generally, tree structures can be described as LISP S-expressions, so GP often handles LISP programs as genotypes. In addition, GP operators are defined for tree structures, and selection and reproduction are repeated. The operators change the structure of the program little by little, so that more suitable programs survive and the desired (optimal) program can be searched for. In the tree structure shown in Figure 2, nodes with branches below them are called non-terminal symbols (function symbols) (+, progn, incf, etc.), while nodes without branches below them are called terminal symbols (constants, variables, etc.).
The following evolutionary parameters were selected to process the simulated evolutions with the GP algorithm:
  • Maximum number of generations: 100;
  • Population size: 500;
  • Reproduction probability: 0.5;
  • Crossover probability: 0.6;
  • Maximum permissible depth in creation of the population: 6;
  • Maximum permissible depth after crossover of two organisms: 10;
  • Smallest permissible depth of organisms in generating new organisms: 2.
Genetic operations of reproduction and crossover were used. To select the organisms, the tournament method with a tournament size of 6 was used.
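As an illustration, tournament selection with the listed parameters might be sketched as follows. The integer population and the error function are purely hypothetical stand-ins for GP trees and their fitness; the configuration dictionary merely mirrors the parameter list above.

```python
import random

def tournament_select(population, fitness, k=6):
    """Pick the fittest of k randomly drawn organisms (tournament selection)."""
    contestants = random.sample(population, k)
    # Lower fitness is better here (an error-based measure)
    return min(contestants, key=fitness)

# Hypothetical configuration mirroring the evolutionary parameters listed above
GP_CONFIG = {
    "max_generations": 100,
    "population_size": 500,
    "p_reproduction": 0.5,
    "p_crossover": 0.6,
    "max_depth_initial": 6,
    "max_depth_after_crossover": 10,
    "min_depth_new": 2,
    "tournament_size": 6,
}

population = list(range(20))   # toy stand-in for a population of GP trees
best = tournament_select(population,
                         fitness=lambda x: abs(x - 7),
                         k=GP_CONFIG["tournament_size"])
```

In a full GP run, the selected organisms would then undergo reproduction or crossover according to the listed probabilities, generation after generation.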
A neural network (NN) [43,44] is a machine learning model that mimics the nerve cells of the human brain, called “neurons,” using mathematical constructs known as artificial neurons. Because NNs can recognize and learn complex data patterns, such as voice and images, they have been attracting attention in recent years as a technology that supports artificial intelligence (AI). Neural networks are known as the basic technology of deep learning. A typical neural network consists of the following three layers: (1) input layer, (2) hidden layer, and (3) output layer. The learning method known as the “perceptron,” which was used prior to deep learning, had few or no hidden layers and was limited to simple information processing. However, the establishment of deep learning, which uses multiple hidden layers in neural networks, has made it possible to perform more complex information processing. Neural networks are important in a wide range of fields because they can tackle problems that are difficult to solve with conventional algorithms, ensuring high accuracy and efficiency. Nowadays, AI enhanced by neural networks is used in a variety of fields, such as machine translation, stock price prediction, classification of defective products, object recognition for autonomous driving, and cancer prediction in the medical field [45]. In particular, the accuracy of image or voice recognition sometimes exceeds that of humans. Figure 3 presents the structure of a neural network.
In the research, the following parameters of the neural network were used:
  • Number of neurons per hidden layer: 100;
  • Activation function: 100;
  • Solver: SGD, Alpha 0.0001;
  • Max iterations: 200.
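A hedged sketch of such a network with scikit-learn’s MLPRegressor follows. The “Activation function” entry above appears garbled, so ReLU (the library default) is assumed here, and the data are synthetic stand-ins for the process parameters and hardness.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
X = rng.uniform(size=(17, 3))    # synthetic stand-ins for X1, X2, X3
y = 350 + 30 * X[:, 0]           # synthetic hardness-like target in HV

# Parameters mirroring the listed settings; activation assumed to be ReLU
model = MLPRegressor(hidden_layer_sizes=(100,),
                     activation="relu",
                     solver="sgd",
                     alpha=1e-4,
                     max_iter=200,
                     random_state=0)
model.fit(X, y)
pred = model.predict(X)          # one predicted hardness value per specimen
```

With only 17 specimens and 200 iterations, convergence warnings are to be expected; the sketch only shows how the listed settings map onto a concrete model.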
Multiple regression analysis refers to a regression analysis that has multiple explanatory variables (independent variables) [46,47]. Regression analysis is a statistical method for estimating the relationship between explanatory variables and dependent variables; explanatory variables are variables that affect, or are expected to affect, the results when examining causal relationships. Multiple regression analysis allows for making well-founded predictions about items for which data have not yet been obtained. For example, there are many factors that affect sales at a retail store, such as location (distance from a station, etc.), sales floor size, and number of products. Multiple regression analysis makes it possible to analyze which of these factors has the greatest influence. Multiple regression analysis can also be understood as a multivariate analysis, i.e., a statistical method that analyzes the associations between data with two or more variables. In this research, the following parameters of multiple regression were used: Lasso regression [48] (L1 regularization) 0.5:0.5. An example of multiple regression is shown in Figure 4.
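A minimal sketch of L1-regularized multiple regression with scikit-learn follows, assuming the listed strength corresponds to a regularization parameter alpha of 0.5; the data are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.uniform(size=(17, 3))                      # stand-ins for X1, X2, X3
y = 360 + 10 * X[:, 0] - 5 * X[:, 1] + 8 * X[:, 2] # synthetic hardness target

# L1-regularized multiple regression; alpha = 0.5 assumed from the listed setting
model = Lasso(alpha=0.5)
model.fit(X, y)
coef = model.coef_   # one coefficient per explanatory variable
```

The L1 penalty shrinks weak coefficients toward zero, which on small datasets like this one also acts as a simple form of variable selection.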
The k-nearest neighbors (k-NN) algorithm is a non-parametric supervised learning classifier that uses proximity to make classifications or predictions about groupings of individual data points [49]. It is one of the most popular and simplest classification and regression methods used in machine learning today. The k-NN algorithm can be used for both regression and classification tasks, but it is typically used as a classification algorithm operating on the assumption that similar points are near each other. For classification problems, class labels are assigned based on a majority vote, i.e., the label that is most frequently represented around a particular data point is used. This is technically a “relative majority”, but the term “majority vote” is more commonly used in the literature. The difference between these terms is that a majority vote technically requires more than 50% of the votes, which mainly applies when there are only two categories, not when there are multiple categories. For example, if there are four categories, 50% of the votes are not necessarily needed to reach a conclusion about a class; a class label can be assigned with more than 25% of the votes. Regression problems use similar concepts, but in this case the average of the k nearest neighbors is taken to make a prediction of the value. The main difference is that classification is used for discrete values, while regression is used for continuous values. However, before classification can be performed, a distance must be defined; the most commonly used is the Euclidean distance. For the present research, the following parameters of k-NN were used: number of neighbors 8, Euclidean metric, and uniform weights. Figure 5 presents classification by the nearest neighbors (k-NN).
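The listed k-NN settings can be sketched as follows with scikit-learn; the hardness regression variant (averaging the 8 nearest specimens) is shown, on synthetic stand-in data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(17, 3))   # stand-ins for X1, X2, X3
y = 350 + 30 * X[:, 2]          # synthetic hardness target in [350, 380)

# Settings as listed: 8 neighbors, Euclidean metric, uniform weights
knn = KNeighborsRegressor(n_neighbors=8, metric="euclidean", weights="uniform")
knn.fit(X, y)

# The prediction is the plain average hardness of the 8 nearest specimens
pred = knn.predict(X[:1])
```

Because the prediction is an average of observed targets, it always stays within the range of the training hardness values.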
A support vector machine (SVM) [50] is one of the most famous algorithms in machine learning. In today’s world, where AI technology is developing and spreading rapidly, support vector machines are useful for applications such as highly accurate prediction, detection, and identification, and they also contribute to improving work efficiency and productivity. The “support vector” in SVM refers to the data points that lie closest to the best-fit line dividing the data. Support vectors play a major role in classification tasks with SVM: defining the support vectors clarifies which data serve as the basis for the dividing line, and once this basis is determined, the class is predicted based on which side the target data falls on. One of the goals of SVM is to maximize the distance between the support vectors and the dividing line, thereby improving the accuracy of classification. Machine learning can be divided into three categories, (1) supervised learning, (2) unsupervised learning, and (3) reinforcement learning, depending on the type of data and the situation. Support vector machines are primarily used in “classification,” a supervised learning method that recognizes and predicts after learning from labeled data. In particular, binary classification using two options is one of the areas where support vector machines excel, and they can also handle multi-class classification, which uses multiple binary classification algorithms. Although less common than classification, they can also be applied to “regression” (prediction of values), a type of supervised learning, a representative example being support vector regression (SVR). The following parameters of SVM were used in the present study:
  • SVM, Cost (C): 1.00;
  • Regression loss epsilon: 0.1;
  • Kernel: RBF;
  • Optimization parameters: Numerical tolerance: 0.001;
  • Iteration limit: 100.
Figure 6 illustrates the idea of the support vector machine (SVM).
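Since hardness is a continuous quantity, the regression variant (SVR) fits this task; a minimal sketch with the listed settings, on synthetic stand-in data, might look as follows.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
X = rng.uniform(size=(17, 3))   # stand-ins for X1, X2, X3
y = 350 + 30 * X[:, 0]          # synthetic hardness target

# Settings as listed: RBF kernel, C = 1.00, epsilon = 0.1,
# numerical tolerance 0.001, iteration limit 100
svr = SVR(kernel="rbf", C=1.0, epsilon=0.1, tol=0.001, max_iter=100)
svr.fit(X, y)
pred = svr.predict(X)
```

With an iteration limit of 100, the solver may stop before full convergence on some datasets; the sketch only shows how the listed parameters map onto the model.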
Logistic regression analysis is a statistical method for predicting the probability of one of two outcomes, “yes” or “no,” based on several factors. The logistic regression model can be used in a variety of fields, including business marketing strategies, medicine, finance, and meteorology [51,52]. The predicted values calculated using logistic regression are useful for the prediction of future responses and relevant measures. A logistic regression model can predict the probability of a binary outcome (target variable), such as “pass/fail” or “accept/reject,” and a probability can take values only between 0 and 1. In normal regression analysis, however, the predicted value may fall outside this range, such as −0.2 or 1.2. Logistic regression therefore uses a transformation that keeps the predicted probability between 0 and 1. In the present study, the parameters of LR were as follows: regularization type Ridge (L2), and strength C = 1. Figure 7 presents an example of the logistic regression diagram.
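Because logistic regression needs a binary target, one way to apply it to hardness is to binarize the measurements, e.g., “harder than the median specimen” versus not. The following sketch uses this hypothetical binarization on synthetic data, with the listed Ridge (L2) regularization and C = 1.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.uniform(size=(17, 3))    # stand-ins for X1, X2, X3
hardness = 350 + 30 * X[:, 0]    # synthetic HV values

# Hypothetical binarization: 1 = harder than the median specimen, 0 = otherwise
y = (hardness > np.median(hardness)).astype(int)

# Settings as listed: Ridge (L2) regularization with strength C = 1
clf = LogisticRegression(penalty="l2", C=1.0)
clf.fit(X, y)
proba = clf.predict_proba(X)[:, 1]   # predicted probabilities, within [0, 1]
```

Unlike ordinary regression, every predicted probability here is guaranteed to lie between 0 and 1, which is exactly the property discussed above.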
Random forest (RF) [53] is a popular algorithm in machine learning, trademarked by Leo Breiman and Adele Cutler. It combines the output from multiple decision trees to arrive at a single result. Its ease of use and flexibility in addressing both classification and regression problems have led to its adoption. RF achieves high prediction accuracy by combining multiple decision trees and is a promising option for improving a system’s performance. RF is a type of ensemble learning, which refers to building a stronger learner by combining multiple weak learners. In a random forest, by combining many decision trees, the weaknesses of individual trees are compensated for, and the overall prediction accuracy is improved. The decision tree construction process is randomly varied to generate a diverse collection of trees.
The settings of RF in the present research were as follows:
  • Number of trees: 10;
  • Number of attributes considered at each split: 5;
  • Growth control: Do not split subsets smaller than 5.
Figure 8 presents an example of a random forest.
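The listed RF settings map onto a scikit-learn sketch as follows. The toy data use five attributes so that the listed “5 attributes per split” is valid; the subset-size rule is approximated by min_samples_split. All data are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.uniform(size=(17, 5))   # toy set with 5 attributes
y = 350 + 30 * X[:, 0]          # synthetic hardness target in [350, 380)

# Settings as listed: 10 trees, 5 attributes considered at each split,
# and no splitting of subsets smaller than 5 samples
rf = RandomForestRegressor(n_estimators=10,
                           max_features=5,
                           min_samples_split=5,
                           random_state=0)
rf.fit(X, y)
pred = rf.predict(X)   # each prediction is an average over the 10 trees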
In the present study, the abovementioned algorithms were combined in Orange Data Mining (University of Ljubljana, Slovenia), a free visual programming software package [54]; version 3.39.0 was used. This open-source data visualization tool can be installed in a variety of ways, the simplest being the standalone installer from the official website. Orange is an effective tool for analyzing and visualizing data, observing data flow, and increasing productivity. Figure 9 presents the diagram of hardness modeling with different machine learning methods in Orange.

2.2.3. Training and Validation Methodology

The procedure for training and validation was performed as follows. The dataset was divided, with 80% allocated for training the model and the remaining 20% set aside for testing.
In order to assess the performance of the proposed machine learning models, the k-fold cross-validation technique was used [55]. The value k = 2 was set, resulting in 2-fold cross-validation. The dataset was randomly shuffled and split into two sets, d0 and d1, of equal size. The model then underwent training on d0 and validation on d1, followed by training on d1 and validation on d0.
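The 2-fold scheme described above can be sketched as follows; the estimator and data are illustrative stand-ins, and each fold serves once as the training set and once as the validation set.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
X = rng.uniform(size=(17, 3))   # stand-ins for X1, X2, X3
y = 350 + 30 * X[:, 0]          # synthetic hardness target

# Shuffle the data, then split it into two folds (d0 and d1)
kf = KFold(n_splits=2, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in kf.split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))  # R^2 on the held-out fold
```

Averaging the two held-out scores gives a single cross-validated performance estimate.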
The hyperparameters were tuned using the grid search method [56], which is suitable for research on rather small datasets.
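A minimal grid search sketch follows; the search space and estimator are hypothetical examples (tuning the number of neighbors of a k-NN regressor), and the 2-fold setting matches the cross-validation described above.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(6)
X = rng.uniform(size=(17, 3))   # stand-ins for X1, X2, X3
y = 350 + 30 * X[:, 0]          # synthetic hardness target

# Hypothetical search space: every combination is tried with 2-fold CV
param_grid = {"n_neighbors": [2, 4, 8]}
search = GridSearchCV(KNeighborsRegressor(), param_grid, cv=2)
search.fit(X, y)
best_k = search.best_params_["n_neighbors"]
```

On a dataset of 17 specimens, such an exhaustive search remains cheap, which is why grid search suits small datasets.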

3. Results and Discussion

Table 1 presents the parameters of SLM specimen fabrication and the measured hardness HV. In the first column, the specimens are denoted S1 to S17. The second column represents the laser power in W, labeled X1. The third column represents the laser speed in mm/s, labeled X2. The fourth column, labeled X3, represents the complexity of the SLM specimen, expressed as its fractal dimension FD. In essence, FD is a scaling rule describing how a pattern’s detail changes with the scale at which it is considered. To obtain the complexity parameter, the Hurst exponent method was used [57]. We calculated the fractal dimension as FD = 2 − H for 2D objects and FD = 3 − H for 3D objects, where H is the Hurst parameter calculated according to the methodology described in detail in [41]. Table 2 presents the values of hardness predicted by the different algorithms.
The genetic programming model for hardness of SLM specimen can be written as follows:
X 1 + 0.324359 X 1 X 3 1 X 2 0.104362 X 1 0.0351499 X 1 + 0.113721 X 2 0.675641 X 1 X 3 X 3 2.083 X 3 ( X 1 + 0.113721 X 2 + 0.324359 X 1 X 3 + 0.324359 X 1 + 0.113721 X 2 + 0.648719 X 1 X 3 4.083 X 3 X 3 + X 3 X 1 X 1 X 2 X 1 2.083 X 3 + X 3 ) + X 2 / ( 0.0368864 X 1 X 3 + 1 X 1 0.0338509 X 1 X 3 + 0.0338509 0.113721 X 2 X 1 X 3 + 3.083 X 3 ( 0.113721 X 2 X 1 0.113721 X 2 0.675641 X 1 X 3 4.083 X 3 X 1 X 3 + 0.324359 X 1 X 3 X 3 + 0.0338509 0.113721 X 2 X 1 X 3 + 3.083 X 3 0.104362 X 1 + 0.113721 X 2 X 1 X 3 + 5.166 X 3 ) )
The lowest average hardness of 350.8 HV was exhibited by specimen S6, while the highest, 382.6 HV, was exhibited by specimen S7. Figure 10 shows diagrams of the experimental hardness values of the SLM specimens, labeled RD, compared with predictions from different models, as follows: GP (genetic programming), MR (multiple regression), RF (random forest), NN (neural network), k-NN (k-nearest neighbors), SVM (support vector machine), and LR (logistic regression). The statistics of predictability are collected in Table 2, specifying the average, median, minimal, maximal, and range values of hardness HV predicted for 17 specimens.
The results shown in Table 3 demonstrated predictability between 99.9% (the best single result, for GP, RF, and NN) and 90.9% (the worst single results, for RF and k-NN). Except for the MR model, all average values are smaller than the respective medians.
Even if a prediction is correct in terms of average values and variability, their distribution may differ considerably. Thus, the linear relationships between the measured and predicted values as shown in Table 3 also have to be examined. Table 4 presents the correlation between measured results and predictions in terms of Pearson’s correlation coefficient. For comparison, distances between the measured and predicted values were calculated in terms of percentage errors with respect to the average values of hardness. Moreover, since mean square error (MSE) is a crucial tool for assessment of prediction accuracy in statistical modeling, it was also added to Table 4.
Figure 11 shows a graphical representation of data with equal Pearson’s correlation coefficients for the measured and predicted hardness. For example, depending on the particular IST method, such coefficients can be 0.5643 or 0.3883 for the hardness values predicted by GP and MR, respectively. Comparable coefficients would imply that two predicting methods are similarly successful. At the same time, the diagram illustrates how certain characteristics may go unnoticed if the analysis is restricted to a single statistic.
The second row in Table 4 represents the errors in terms of distances between the measured and predicted values, calculated as percentage errors with respect to the average values of hardness. It is evident that, in most of the cases, all the different algorithms were able to provide valid predictions. The percentage error of each method was calculated as the absolute difference between the predicted and measured data divided by the measured value; these errors were then averaged, and the prediction accuracy was obtained as the complement of the average error. Thus, the GP model exhibited a prediction accuracy of 98.7%, the MR model 97.8%, and the RF model 96.0%. The NN model showed a prediction accuracy of 96.8%, the k-NN model 95.6%, the SVM model 95.8%, and the LR model 97.0%.
The third row in Table 4 presents the mean square error (MSE), which provides a clear evaluation of model performance. It was calculated as the average squared difference between predicted and actual values. Due to its squared nature, MSE is very helpful for identifying models that avoid major prediction errors. The performance of the GP and MR models can be considered better than that of the LR, SVM, k-NN, NN, and RF models, since they exhibited lower MSE.
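The two error measures described above can be computed as follows; the hardness values in this sketch are purely illustrative and are not taken from Table 1.

```python
import numpy as np

# Illustrative measured and predicted HV values (not the actual Table 1 data)
measured = np.array([365.2, 371.4, 358.9, 382.6, 350.8])
predicted = np.array([363.7, 372.0, 360.1, 380.9, 352.2])

# Per-specimen percentage error; accuracy is the complement of the mean error
errors = np.abs(predicted - measured) / measured * 100.0
accuracy = 100.0 - errors.mean()

# Mean square error penalizes large individual deviations more heavily
mse = np.mean((predicted - measured) ** 2)
```

Because the squared differences dominate the MSE, a model with one large miss scores worse than a model with several small ones, even at equal average error.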
The results for all averages and medians are higher than those reported recently for WC-based composites, where the gradient-boosting decision tree (GBDT) and backpropagation neural network (BPNN) models reached good performance, exhibiting 0.940 and 0.913, respectively [58]. Undoubtedly, the best performance was that of GP, with an average of 98.7% and a median of 99.3%, while RF and NN exhibited much lower averages and medians, even though they both reached the best single result of 99.9%. Moreover, MR and LR can be considered to perform equally: though MR exhibited an average predictability higher than that of LR by 0.7%, its median was lower by 0.3%. Presumably, GP outperformed the other methods because of differences in the way it searches for solutions, represents knowledge, and handles constraints. Basically, GP shows good results on small datasets, while other methods often need larger amounts of data.
It can be assumed that Pearson’s correlation can be conveniently adopted not only for a quick evaluation of data, but also for the validation of predictions. Information on hardness can be extracted from micrographs of the microstructure directly, through a conventional process of image analysis and macro-indicators, without deeper metallurgical investigations. With the application of the proposed models, 3D printing production time can be saved, which means savings in costs and resources and an increase in competitiveness. Especially at the design and implementation stage, testing more than 100 samples with different process parameters would require a lot of time as well as additional material and expense. A significant reduction in the number of necessary samples, together with the application of the models for hardness prediction, would contribute to the economic efficiency and quality of the product.
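The Pearson coefficient reported for the GP model in Table 4 can be checked directly against the values in Tables 1 and 2 (a minimal sketch; the variable names are ours):

```python
import math

# Measured hardness (Table 1) and GP predictions (Table 2), specimens S1-S17.
measured = [354.7, 362.7, 376.0, 356.7, 370.8, 350.8, 382.6, 351.8,
            374.2, 373.2, 358.7, 380.2, 380.5, 366.5, 374.6, 366.8, 363.4]
gp_pred = [355.6, 365.8, 377.5, 357.3, 366.0, 367.3, 380.9, 352.7,
           373.8, 367.2, 362.5, 377.2, 350.5, 364.0, 370.6, 369.6, 365.6]

n = len(measured)
mean_m = sum(measured) / n
mean_p = sum(gp_pred) / n
# Pearson r: covariance normalized by the product of standard deviations.
cov = sum((m - mean_m) * (p - mean_p) for m, p in zip(measured, gp_pred))
var_m = sum((m - mean_m) ** 2 for m in measured)
var_p = sum((p - mean_p) ** 2 for p in gp_pred)
r = cov / math.sqrt(var_m * var_p)
print(f"Pearson r (measured vs. GP): {r:.4f}")  # ≈ 0.5643, matching Table 4
```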
The presented methodology is transferable to other settings, groups, or situations, as the models can be retrained on other datasets, and its applicability is not limited to additive manufacturing or mechanical engineering. Although the numerical results themselves cannot be directly transferred to other applications, they can be useful when investigating predictive models with small sample sizes, where access to larger databases is difficult for financial or other reasons.

4. Conclusions

A comparative analysis of models used for the prediction of surface hardness of SLM specimens provided interesting and valuable results. The specimens, made of EOS Maraging Steel MS1 using the EOS M 290 3D printer, exhibited hardness from 350.8 HV up to 382.6 HV. Combining fractal geometry with machine learning techniques greatly improved the surface analysis and the efficiency of the predictions. Prediction of the hardness of SLM specimens produced at different processing parameters was tested for the following algorithms: genetic programming, neural network, multiple regression, k-nearest neighbors, support vector machine, logistic regression, and random forest. The combination of fractal analysis with machine learning methods ensured high average predictability (above 95%) for all the tested algorithms. In particular, the best results were demonstrated by the GP method, with an average of 98.7% and a median of 99.3%. The second-best results were exhibited by MR and LR, with averages and medians of 97% or above. Thus, high predictability of the structural properties provided a more accurate and efficient method of quality control.
This study not only marked an important advance in the field of additive manufacturing, but also laid the groundwork for further research into other materials and properties, such as fracture toughness or elastic modulus. Further development of the analytical methods is also possible.

Author Contributions

Conceptualization, R.Š.; methodology, R.Š.; software, M.B.; validation, M.R. and Z.S.; formal analysis, M.B. and M.R.; investigation, R.Š. and Z.S.; resources, M.B.; data curation, Z.S.; writing—original draft preparation, M.B.; writing—review and editing, R.Š., M.R., and Z.S.; visualization, M.R.; supervision, R.Š.; project administration, M.B.; funding acquisition, R.Š. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI    Artificial intelligence
AM    Additive manufacturing
BPNN  Backpropagation neural networks
FD    Fractal dimension
GA    Genetic algorithms
GBDT  Gradient boosting decision tree
GP    Genetic programming
GVBN  Gaussian Variational Bayes Network
IST   Intelligent system techniques
kNN   k-nearest neighbors
LR    Logistic regression
MR    Multiple regression
MPB   Melt pool boundaries
MSE   Mean square error
NN    Neural network
OSBD  Orthotropic steel bridge decks
RF    Random forest
RSM   Response surface methodology
RTDW  Rib-to-deck welds
SLM   Selective laser melting
SVM   Support vector machine
VED   Volumetric energy density

References

1. Wawryniuk, Z.; Brancewicz-Steinmetz, E.; Sawicki, J. Revolutionizing transportation: An overview of 3D printing in aviation, automotive, and space industries. Int. J. Adv. Manuf. Technol. 2024, 134, 3083–3105.
2. Prashar, G.; Vasudev, H.; Bhuddhi, D. Additive manufacturing: Expanding 3D printing horizon in industry 4.0. Int. J. Interact. Des. Manuf. 2023, 17, 2221–2235.
3. Lashgari, H.R.; Ferry, M.; Li, S. Additive manufacturing of bulk metallic glasses: Fundamental principle, current/future developments and applications. J. Mater. Sci. Technol. 2022, 119, 131–149.
4. Król, M.; Snopiński, P.; Czech, A. The phase transitions in selective laser-melted 18-NI (300-grade) maraging steel. J. Therm. Anal. Calorim. 2020, 142, 1011–1018.
5. Silva, T.; Silva, F.; Xavier, J.; Gregório, A.; Reis, A.; Rosa, P.; Konopík, P.; Rund, M.; Jesus, A. Mechanical Behaviour of Maraging Steel Produced by SLM. Procedia Struct. Integr. 2021, 34, 45–50.
6. Zhao, Z.; Dong, C.; Kong, D.; Wang, L.; Ni, X.; Zhang, L.; Wu, W.; Zhu, L.; Li, X. Influence of pore defects on the mechanical property and corrosion behavior of SLM 18Ni300 maraging steel. Mater. Charact. 2021, 182, 111514.
7. Patil, V.V.; Mohanty, C.P.; Prashanth, K.G. Selective laser melting of a novel 13Ni400 maraging steel: Material characterization and process optimization. J. Mater. Res. Technol. 2023, 27, 3979–3995.
8. Hong, S.H.; Ha, S.Y.; Song, G.; Cho, J.; Kim, K.B.; Park, H.J.; Kang, G.C.; Park, J.M. Correlation between micro-to-macro mechanical properties and processing parameters on additive manufactured 18Ni-300 maraging steels. J. Alloys Compd. 2023, 960, 171031.
9. Wieczorek, D.; Ulbrich, D.; Stachowiak, A.; Bartkowski, D.; Bartkowska, A.; Petru, J.; Hajnyš, J.; Popielarski, P. Mechanical, corrosion and tribocorrosion resistance of additively manufactured Maraging C300 steel. Tribol. Int. 2024, 195, 109604.
10. Zetková, I.; Thurnwald, P.; Bohdan, P.; Trojan, K., Jr.; Čapek, J.; Ganev, N.; Zetek, M.; Kepka, M.; Kepka, M.; Houdková, Š. Improving of mechanical properties of printed maraging steel. Procedia Struct. Integr. 2024, 54, 256–263.
11. Tyczyński, P.; Siemiątkowski, Z.; Bąk, P.; Warzocha, K.; Rucki, M.; Szumiata, T. Performance of Maraging Steel Sleeves Produced by SLM with Subsequent Age Hardening. Materials 2020, 13, 3408.
12. Marciniak, Z.; Branco, R.; Macek, W.; Malça, C. Fatigue behaviour of SLM maraging steel under variable-amplitude loading. Procedia Struct. Integr. 2024, 56, 131–137.
13. Branco, R.; Silva, J.; Martins Ferreira, J.; Costa, J.D.; Capela, C.; Berto, F.; Santos, L.; Antunes, F.V. Fatigue behaviour of maraging steel samples produced by SLM under constant and variable amplitude loading. Procedia Struct. Integr. 2019, 22, 10–16.
14. Wang, H.; Deng, D.; Zhai, Z.; Yao, Y. Laser-processed functional surface structures for multi-functional applications—A review. J. Manuf. Process. 2024, 116, 247–283.
15. Wang, X.; Lu, Q.; Zhang, P.; Yan, H.; Shi, H.; Sun, T.; Zhou, K.; Chen, K. A review on the simulation of selective laser melting AlSi10Mg. Opt. Laser Technol. 2024, 174, 110500.
16. Kolomy, S.; Jopek, M.; Sedlak, J.; Benc, M.; Zouhar, J. Study of dynamic behaviour via Taylor anvil test and structure observation of M300 maraging steel fabricated by the selective laser melting method. J. Manuf. Process. 2024, 125, 283–294.
17. Liu, Q.; Wu, H.; Paul, M.J.; He, P.; Peng, Z.; Gludovatz, B.; Kruzic, J.J.; Wang, C.H.; Li, X. Machine-learning assisted laser powder bed fusion process optimization for AlSi10Mg: New microstructure description indices and fracture mechanisms. Acta Mater. 2020, 201, 316–328.
18. Akbari, P.; Zamani, M.; Mostafaei, A. Machine learning prediction of mechanical properties in metal additive manufacturing. Addit. Manuf. 2024, 91, 104320.
19. Sarker, I.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. 2021, 2, 160.
20. Barrionuevo, G.O.; Walczak, M.; Ramos-Grez, J.; Sánchez-Sánchez, X. Microhardness and wear resistance in materials manufactured by laser powder bed fusion: Machine learning approach for property prediction. CIRP J. Manuf. Sci. Technol. 2023, 43, 106–114.
21. Taherkhani, K.; Ero, O.; Liravi, F.; Toorandaz, S.; Toyserkani, E. On the application of in-situ monitoring systems and machine learning algorithms for developing quality assurance platforms in laser powder bed fusion: A review. J. Manuf. Process. 2023, 99, 848–897.
22. Zhang, H.; Deng, Y.; Chen, F.; Luo, Y.; Xiao, X.; Lu, N.; Liu, Y.; Deng, Y. Fatigue life prediction for orthotropic steel bridge decks welds using a Gaussian variational bayes network and small sample experimental data. Reliab. Eng. Syst. Saf. 2025, 264 Pt B, 111406.
23. González Morales, S.R.; Yamada, K.M. Cell and matrix dynamics in branching morphogenesis. In Principles of Tissue Engineering, 5th ed.; Lanza, R., Langer, R., Vacanti, J.P., Atala, A., Eds.; Academic Press: London, UK, 2020; pp. 217–235.
24. Macek, W.; Branco, R.; Podulka, P.; Kopec, M.; Zhu, S.-P.; Costa, J.D. A brief note on entire fracture surface topography parameters for 18Ni300 maraging steel produced by LB-PBF after LCF. Eng. Fail. Anal. 2023, 153, 107541.
25. Song, Y.; Cao, H.; Qu, D.; Yi, H.; Kang, X.; Huang, X.; Zhou, J.; Yan, C. Surface integrity optimization of high speed dry milling UD-CF/PEEK based on specific cutting energy distribution mechanisms effected by impact and size effect. J. Manuf. Process. 2022, 79, 731–744.
26. Wu, Z.; Li, C.; Zhang, C.; Han, B.; Wang, Z.; Fan, W.; Xu, Z. Process parameter optimisation method based on data-driven prediction model and multi-objective optimisation for the laser metal deposition manufacturing process monitoring. Comput. Ind. Eng. 2025, 204, 111108.
27. Leng, J.; Li, R.; Xie, J.; Zhou, X.; Li, X.; Liu, Q.; Chen, X.; Shen, W.; Wang, L. Federated learning-empowered smart manufacturing and product lifecycle management: A review. Adv. Eng. Inform. 2025, 65 Pt A, 103179.
28. Bhandarkar, V.V.; Broteen Das, B.; Tandon, P. Real-time remote monitoring and defect detection in smart additive manufacturing for reduced material wastage. Measurement 2025, 252, 117362.
29. EOS M 290. Available online: https://www.eos.info/metal-solutions/metal-printers/eos-m-290#key-features (accessed on 11 March 2025).
30. EOS Maraging Steel MS1 M400-4: Material Data Sheet. Available online: https://www.eos.info/metal-solutions/metal-materials/data-sheets/mds-eos-maragingsteel-ms1 (accessed on 11 March 2025).
31. Owsiński, R.; Miozga, R.; Łagoda, A.; Kurek, M. Mechanical Properties of X3NiCoMoTi 18-9-5 Produced via Additive Manufacturing Technology—Numerical and Experimental Study. Adv. Sci. Technol. Res. J. 2024, 18, 45–61.
32. Bochnia, J.; Kozior, T.; Zyz, J. The Mechanical Properties of Direct Metal Laser Sintered Thin-Walled Maraging Steel (MS1) Elements. Materials 2023, 16, 4699.
33. Tian, J.; Zhou, G.; Wang, W.; Hu, Q.; Jiang, Z.; Yang, K. Understanding the effect of cobalt on the precipitation hardening behavior of the maraging stainless steel. J. Mater. Res. Technol. 2023, 27, 6719–6728.
34. Mooney, B.; Kourousis, K.I. A Review of Factors Affecting the Mechanical Properties of Maraging Steel 300 Fabricated via Laser Powder Bed Fusion. Metals 2020, 10, 1273.
35. Swain, S.; Datta, S.; Roy, T. Microstructure and Mechanical Property Characterization of Additively Manufactured Maraging Steel 18Ni(300) Built Part. J. Mater. Eng. Perform. 2025; in press.
36. Zhang, L.; Wang, M.; Li, H.; Li, Q.; Liu, J. Influence of layer thickness and heat treatment on microstructure and properties of selective laser melted maraging stainless steel. J. Mater. Res. Technol. 2024, 33, 3911–3927.
37. Durmaz, A.R.; Müller, M.; Lei, B.; Thomas, A.; Britz, D.; Holm, E.A.; Eberl, C.; Mücklich, F.; Gumbsch, P. A deep learning approach for complex microstructure inference. Nat. Commun. 2021, 12, 6272.
38. Larsson, C. Self-Similarity, Fractality, and Chaos. In 5G Networks; Larsson, C., Ed.; Academic Press: London, UK, 2018; pp. 67–102.
39. Young, B.K.; Kovacs, K.D.; Adelman, R.A. Fractal Dimension Analysis of Widefield Choroidal Vasculature as Predictor of Stage of Macular Degeneration. Transl. Vis. Sci. Technol. 2020, 9, 22.
40. Gómez-Águila, A.; Trinidad-Segovia, J.E.; Sánchez-Granero, M.A. Improvement in Hurst exponent estimation and its application to financial markets. Financ. Innov. 2022, 8, 86.
41. Babič, M.; Kokol, P.; Guid, N.; Panjan, P. A new method for estimating the Hurst exponent H for 3D objects. Mater. Technol. 2014, 48, 203–208.
42. Kovačič, M.; Zupanc, A.; Župerl, U.; Brezočnik, M. Reducing scrap in long rolled round steel bars using Genetic Programming after ultrasonic testing. Adv. Prod. Eng. Manag. 2024, 19, 435–442.
43. Prieto, A.; Prieto, B.; Ortigosa, E.M.; Ros, E.; Pelayo, F.; Ortega, J.; Rojas, I. Neural networks: An overview of early research, current frameworks and new challenges. Neurocomputing 2016, 214, 242–268.
44. Martínez, F.S.; Casas-Roma, J.; Subirats, L.; Parada, R. Spiking neural networks for autonomous driving: A review. Eng. Appl. Artif. Intell. 2024, 138 Pt B, 109415.
45. Shanthi, D.; Madhuravani, B.; Kumar, A. (Eds.) Handbook of Artificial Intelligence; Bentham Publishers: Singapore, 2023.
46. Arkes, J. Regression Analysis: A Practical Introduction, 2nd ed.; Routledge: New York, NY, USA, 2023.
47. Miller, H.N.; LaFave, S.; Marineau, L.; Stephens, J.; Thorpe, R.J. The impact of discrimination on allostatic load in adults: An integrative review of literature. J. Psychosom. Res. 2021, 146, 110434.
48. Henry, A.; Nagaraj, N. Augmented regression models using neurochaos learning. Chaos Solitons Fractals 2025, 201 Pt 2, 117213.
49. Halder, R.K.; Uddin, M.N.; Uddin, M.A.; Aryal, S.; Khraisat, A. Enhancing K-nearest neighbor algorithm: A comprehensive review and performance analysis of modifications. J. Big Data 2024, 11, 113.
50. Abe, S. Support Vector Machines for Pattern Classification; Springer: London, UK, 2005.
51. Pochiraju, B.; Kollipara, H.S.S. Statistical Methods: Regression Analysis. In Essentials of Business Analytics; Pochiraju, B., Seshadri, S., Eds.; Springer: Cham, Switzerland, 2019; pp. 179–246.
52. Powell, R.T. Computational precision therapeutics and drug repositioning. In Comprehensive Precision Medicine; Ramos, K.S., Ed.; Elsevier: Amsterdam, The Netherlands, 2024; Volume 1, pp. 57–74.
53. Salman, H.A.; Kalakech, A.; Steiti, A. Random Forest Algorithm Overview. Babylon. J. Mach. Learn. 2024, 2024, 69–79.
54. Dobesova, Z. Evaluation of Orange data mining software and examples for lecturing machine learning tasks in geoinformatics. Comput. Appl. Eng. Educ. 2024, 32, e22735.
55. Anandan, B.; Manikandan, M. Machine learning approach with various regression models for predicting the ultimate tensile strength of the friction stir welded AA 2050-T8 joints by the K-Fold cross-validation method. Mater. Today Commun. 2023, 34, 105286.
56. Ogunsanya, M.; Isichei, J.; Desai, S. Grid search hyperparameter tuning in additive manufacturing processes. Manuf. Lett. 2023, 35 (Supplement), 1031–1042.
57. Salcedo-Sanz, S.; Casillas-Pérez, D.; Del Ser, J.; Casanova-Mateo, C.; Cuadra, L.; Piles, M.; Camps-Valls, G. Persistence in complex systems. Phys. Rep. 2022, 957, 1–73.
58. Ren, H.; Wang, K.; Xu, K.; Lou, M.; Kan, G.; Jia, Q.; Li, C.; Xiao, X.; Chang, K. Machine learning-assisted prediction of mechanical properties in WC-based composites with multicomponent alloy binders. Compos. Part B Eng. 2025, 299, 112389.
Figure 1. Microstructure of the SLM specimen.
Figure 2. A simple tree program of GP.
Figure 3. A neural network.
Figure 4. Multiple regression.
Figure 5. Nearest neighbor (kNN) algorithm.
Figure 6. Support vector machine (SVM) principle.
Figure 7. Logistic regression.
Figure 8. Random forest (RF).
Figure 9. Modeling hardness with different methods of machine learning in the Orange software.
Figure 10. Experimental and predicted values of hardness.
Figure 11. A graphical representation of data with equal Pearson’s correlation coefficients for measured and predicted values of hardness. The line represents a trend for RF model.
Table 1. Parameters of SLM process and the measured hardness HV of the specimens.

Specimen   X1 Power, W   X2 Speed, mm/s   X3 FD   Hardness HV
S1         320           1000             2.423   354.7
S2         320           1150             2.424   362.7
S3         320           1300             2.430   376.0
S4         270           850              2.418   356.7
S5         270           1000             2.431   370.8
S6         270           1150             2.434   350.8
S7         270           1300             2.439   382.6
S8         220           700              2.460   351.8
S9         220           850              2.450   374.2
S10        220           1000             2.439   373.2
S11        220           1150             2.485   358.7
S12        220           1300             2.456   380.2
S13        170           700              2.404   380.5
S14        170           850              2.327   366.5
S15        170           1000             2.450   374.6
S16        170           1150             2.380   366.8
S17        170           1300             2.441   363.4
Table 2. Predicted hardness HV of the respective specimens from different algorithms.

Specimen   GP      MR      RF      NN      k-NN    SVM
S1         355.6   364.5   376.0   356.7   376.0   380.0
S2         365.8   363.5   376.0   376.0   376.0   376.0
S3         377.5   366.3   362.7   354.7   363.4   362.7
S4         357.3   361.9   380.0   354.7   362.7   380.0
S5         366.0   364.1   373.0   354.7   354.7   380.0
S6         367.3   364.4   382.6   376.0   382.6   380.0
S7         380.9   368.7   370.8   376.0   350.8   380.0
S8         352.7   361.3   380.0   380.0   374.2   380.0
S9         373.8   365.4   356.7   351.8   373.0   351.8
S10        367.2   368.0   374.6   380.0   374.2   380.0
S11        362.5   371.2   374.2   363.4   380.0   380.0
S12        377.2   372.6   358.7   363.4   363.4   363.4
S13        350.5   365.9   366.8   366.5   351.8   374.2
S14        364.0   368.9   366.8   366.8   380.0   380.0
S15        370.6   369.5   373.0   363.4   366.5   373.0
S16        369.6   375.8   380.0   366.5   380.0   380.0
S17        365.6   371.3   380.0   374.6   382.6   380.0
Table 3. Predictability reached by different algorithms.

          GP        MR      RF        NN        k-NN      SVM     LR
Average   0.987     0.977   0.960     0.968     0.955     0.957   0.970
Max       0.999 *   0.998   0.999 *   0.999 *   0.997     0.996   0.994
Min       0.922     0.961   0.909 **  0.920     0.909 **  0.917   0.928
Range     0.077     0.037   0.090     0.079     0.087     0.079   0.066
Median    0.993     0.976   0.963     0.969     0.957     0.963   0.979

* The best single result. ** The worst single result.
Table 4. Correlation between the measured results and predictions.

                      GP       MR       RF        NN        k-NN      SVM       LR
Pearson coefficient   0.5643   0.3883   −0.7215   −0.1587   −0.6591   −0.4889   −0.0406
Error (distances)     1.30%    2.20%    3.90%     3.20%     4.40%     4.20%     3.00%
MSE                   70.737   78.800   273.908   202.963   329.272   290.238   184.532

Share and Cite

MDPI and ACS Style

Babič, M.; Šturm, R.; Rucki, M.; Siemiątkowski, Z. Application of Machine Learning Method for Hardness Prediction of Metal Materials Fabricated by 3D Selective Laser Melting. Appl. Sci. 2025, 15, 12832. https://doi.org/10.3390/app152312832
