Article

Phase Prediction of High-Entropy Alloys by Integrating Criterion and Machine Learning Recommendation Method

1 School of Information and Electrical Engineering, Hebei University of Engineering, Handan 056038, China
2 School of Mechanical Engineering, Dalian University of Technology, Dalian 116024, China
3 Industrial and Manufacturing Systems Engineering, Kansas State University, Manhattan, KS 66506, USA
* Authors to whom correspondence should be addressed.
Materials 2022, 15(9), 3321; https://doi.org/10.3390/ma15093321
Submission received: 30 March 2022 / Revised: 28 April 2022 / Accepted: 28 April 2022 / Published: 5 May 2022

Abstract

The comprehensive properties of high-entropy alloys (HEAs) are highly dependent on their phases. Although a large number of machine learning (ML) algorithms have been successfully applied to the phase prediction of HEAs, the accuracies of different ML algorithms on the same dataset vary significantly. Therefore, selecting an efficient ML algorithm would significantly reduce the number and cost of experiments. In this work, phase prediction of HEAs (PPH) is proposed by integrating a criterion and a machine learning recommendation method (MLRM). First, a meta-knowledge table based on the characteristics of HEAs and the performance of candidate algorithms is established, and meta-learning based on this table is adopted to recommend an algorithm with desirable accuracy. Secondly, an MLRM based on improved meta-learning is engineered to recommend a more desirable algorithm for phase prediction. Finally, considering the poor interpretability and generalization of single ML algorithms, a PPH combining the advantages of the MLRM and the criterion is proposed to improve the accuracy of phase prediction. The PPH is validated on 902 samples from 12 datasets, including 405 quinary HEAs, 359 senary HEAs, and 138 septenary HEAs. The experimental results show that the PPH achieves better performance than the traditional meta-learning method. The average prediction accuracy of PPH for all, quinary, senary, and septenary HEAs is 91.6%, 94.3%, 93.1%, and 95.8%, respectively.

Graphical Abstract

1. Introduction

High-entropy alloys are composed of multiple (not less than five) main elements [1,2]. They typically possess high hardness, high strength, high temperature-softening resistance, superior wear resistance, and corrosion resistance [3]. HEAs have broad application prospects in the nuclear power industry, biochemistry, chemical industry, etc. [4]. The phases of HEAs mainly include solid solution (SS), intermetallic compound (IM), solid solution and intermetallic compound (SS+IM), and amorphous phase (AM) [5,6,7]. Because these phases are key factors determining the performance of materials, the accurate prediction of the phases in HEAs is crucial for material design [8].
In recent decades, many phase prediction methods for HEAs have been applied in the materials field. However, the number of possible element combinations in HEAs is much larger than in single-principal-element alloys, so predicting their phases is more difficult. The traditional trial-and-error method can detect the phases of HEAs but has low efficiency, long cycle times, and high cost [9]. Density functional theory and the calculation of phase diagrams method (CALPHAD) are other approaches to predicting the phases of HEAs; both are inefficient and carry a heavy computational burden [10]. To address these issues, parameter methods based on criteria have been researched. Yang [5] proposed the parameter $\Omega$ and pointed out that SS is easily formed when $\Omega \ge 1.1$ and $\delta \le 6.6\%$, where $\delta$ is the mean square deviation of the atomic sizes of all elements in a multi-principal alloy. Similarly, Zhang et al. [11] confirmed that SS is easily formed if $\Omega \ge 1.1$ and $\delta \le 6.6\%$. Guo [12] concluded that ∆Hmix and ∆Smix can determine SS formation, where ∆Hmix is the enthalpy of mixing and ∆Smix is the entropy of mixing. Tan [13] demonstrated that SS is formed under the same conditions as in the literature [5]. However, the above-mentioned parameter methods have limited applicability outside these criteria. In addition, establishing such criteria requires a heavy workload and considerable time.
Some scholars have predicted the phases of HEAs by ML algorithms. These ML algorithms can establish the mapping relationship between the input parameters and phases of HEAs with extensive training [14,15,16,17,18]. Islam et al. [19] classified the phases of HEAs as SS, IM, and AM with a multi-layer neural network. Huang et al. [20] predicted the phases of HEAs by K-nearest neighbor (KNN), support vector machine (SVM), and artificial neural network (ANN). Qu et al. [21] established a universal SVM-based method to predict the phases of HEAs. Li et al. [22] adopted SVM with cross validation to distinguish the phases of HEAs, achieving better accuracy than CALPHAD. Although many ML algorithms have been applied to predict the phases of HEAs and have accomplished desirable results, the accuracies of different ML algorithms on the same dataset vary considerably. The famous no-free-lunch (NFL) theorem in the machine learning field [23] shows that there is no 'general algorithm' that can solve all problems once and for all. If material designers can select the ML algorithm with the most desirable accuracy, the number of experiments, the time required, and the cost will all be significantly reduced. The issue of algorithm selection is still challenging and time-consuming for material designers.
Therefore, the selection of an appropriate algorithm has attracted great interest. Khan et al. [24] reviewed the meta-learning algorithm, which can recommend a desirable algorithm. Aguiar et al. [25] adopted meta-learning to select the most suitable image segmentation ML algorithm and obtained desirable results. Chu et al. [26] proposed an adaptive recommendation model based on meta-learning that maps the relationship between the performance of algorithms and datasets so as to recommend ideal algorithms for different datasets. Cui [27] proposed a general meta-modeling recommendation system based on meta-learning that can automatically recommend desirable algorithms for researchers. The meta-learner is a key component that considerably affects the accuracy of meta-learning. Pimentel et al. [28] adopted KNN as the meta-learner in meta-learning to recommend a suitable ML algorithm for a new dataset. Ferrari et al. [29] adopted KNN to recommend algorithms in meta-learning and achieved good results. However, KNN performs poorly when noise points are present and when the useful neighbor information of each sample is not considered. Song et al. [30] proposed a decremental instance selection for KNN regression (DISKR) and achieved better experimental results than with KNN alone. Zhang [31] designed a shelly nearest neighbor (SNN) algorithm that considers the information of the left and right nearest neighbors of the test sample and obtains a better result than KNN. Although single data-driven models have achieved success in several fields, they have poor interpretability and generalization and require a large number of experimental samples. Additionally, literature on meta-learning for the phase prediction of HEAs is still scarce.
Some scholars have carried out research combining physical models and data-driven models. These models benefit from the advantage of the powerful learning ability of data-driven models and the strong interpretability and generalization of physical models. Such models have achieved desirable results in a wide range of areas. Lv et al. [32] proposed a novel prediction model constituted by the mechanism method and ML model to forecast the liquid steel temperature in a ladle furnace. Later on, Lv et al. [33] proposed a novel steel temperature prediction model based on an ML algorithm and first-principle method. Hou et al. [34] proposed a framework composed of the hard division model and prediction model that combines the mechanism model and ML model.
However, the literature rarely addresses how to recommend an ideal algorithm for HEAs. In this paper, a PPH constituted by MLRM and criterion of SS is proposed to provide desirable results for material designers. First, a meta-knowledge table with meta-features of datasets and accuracies of algorithms is established, and meta-learning is adopted to recommend an ideal algorithm for material designers. Secondly, an MLRM based on improved meta-learning is proposed to recommend a more desirable algorithm. Finally, the PPH based on criterion of SS and the MLRM is proposed to predict the phases of HEAs. Compared with other ML algorithms in the HEA field, the method proposed in this paper can recommend an ML algorithm with ideal accuracy for material designers, which can reduce the burden on material designers in selecting algorithms, effectively improve the prediction accuracy of phases, and greatly reduce the experimental time and cost.

2. Preliminaries

2.1. Research Background of HEAs

HEAs have excellent properties that are considerably affected by their phase structures [35]. The phases of HEAs include SS, IM, SS+IM, and AM. Some scholars have adopted the parameter method to determine the phase formation of HEAs in the materials field [5,11,12,13].
To date, the main parameters of HEAs include $\delta$, $\Delta H_{mix}$, $\Omega$, $\Delta\chi$, and $\Delta S_{mix}$. The formulae for these parameters are given in Equations (1)–(5) [11,12]:

$\delta = \sqrt{\sum_{i=1}^{n} c_i \left(1 - r_i/\bar{r}\right)^2}$ (1)

where $\delta$ is the mean square deviation of the atomic sizes of all elements in the multi-principal alloy, $c_i$ is the atomic percentage of the $i$-th element, $r_i$ is the atomic radius of the $i$-th principal element, and $\bar{r} = \sum_{i=1}^{n} c_i r_i$ is the weighted average atomic radius of all principal elements.

$\Delta H_{mix} = \sum_{i=1,\, i<j}^{n} 4 H_{ij} c_i c_j$ (2)

where $\Delta H_{mix}$ is the enthalpy of mixing; $H_{ij}$ is the mixing enthalpy of the binary liquid alloy of the $i$-th and $j$-th elements; and $c_i$ and $c_j$ are the atomic percentages of the $i$-th and $j$-th elements, respectively.

$\Delta S_{mix} = -R \sum_{i=1}^{n} c_i \ln c_i$ (3)

where $\Delta S_{mix}$ is the entropy of mixing; $c_i$ is the atomic percentage of the $i$-th element; and $R$ is the gas constant, with a value of 8.314 J·K−1·mol−1.

$\Omega = \dfrac{T_m \Delta S_{mix}}{\left|\Delta H_{mix}\right|}$ (4)

where $\Omega$ is the number of states of the multi-principal-element alloy system molecules, and $T_m = \sum_{i=1}^{n} c_i T_{mi}$ is the weighted average melting point of all elements.

$\Delta\chi = \sqrt{\sum_{i=1}^{n} c_i \left(\chi_i - \bar{\chi}\right)^2}$ (5)

where $\Delta\chi$ is the electronegativity difference of the HEA system, $\chi_i$ is the electronegativity of the $i$-th constituent element, and $\bar{\chi} = \sum_{i=1}^{n} c_i \chi_i$.
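For illustration, the sketch below evaluates Equations (1)–(5) for a single composition. It is a minimal example, not code from the paper: the function name `hea_parameters` is hypothetical, and real atomic radii, melting points, electronegativities, and pairwise mixing enthalpies $H_{ij}$ must be supplied from published tables.

```python
# Minimal sketch of Equations (1)-(5); element property values are assumed inputs.
import numpy as np

R_GAS = 8.314  # gas constant, J K^-1 mol^-1

def hea_parameters(c, r, tm, chi, h_mix_pairs):
    """c: atomic fractions; r: atomic radii; tm: melting points;
    chi: electronegativities; h_mix_pairs: matrix of binary mixing enthalpies H_ij."""
    c, r, tm, chi = map(np.asarray, (c, r, tm, chi))
    r_bar = np.sum(c * r)
    delta = np.sqrt(np.sum(c * (1.0 - r / r_bar) ** 2))          # Eq. (1)
    n = len(c)
    d_h = sum(4.0 * h_mix_pairs[i][j] * c[i] * c[j]
              for i in range(n) for j in range(i + 1, n))        # Eq. (2)
    d_s = -R_GAS * np.sum(c * np.log(c))                          # Eq. (3)
    t_m = np.sum(c * tm)
    omega = t_m * d_s / abs(d_h)                                  # Eq. (4)
    chi_bar = np.sum(c * chi)
    d_chi = np.sqrt(np.sum(c * (chi - chi_bar) ** 2))             # Eq. (5)
    return delta, d_h, d_s, omega, d_chi
```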

2.2. Meta-Learning

Meta-learning is utilized to solve the problem of algorithm selection [36,37]. The meta-learning method establishes the relationship between different datasets and the performance of algorithms to recommend the most appropriate algorithm. A schematic diagram of meta-learning is shown in Figure 1.
In Figure 1, the data library contains N datasets, D1, D2, …, DN. The selection of meta-features significantly affects the performance of meta-learning; simple, statistical, and information-theoretic meta-features can reflect the characteristics of the datasets [24,26,29,38]. In Figure 1, there are E candidate algorithms. The meta-knowledge table is constructed from the meta-features and the performance of the candidate algorithms. The meta-learner is a key part of meta-learning: it is trained on the meta-knowledge table, taking the meta-features of the datasets as input variables and the performance of the algorithms as output variables. The KNN method is often used as the meta-learner in the literature [28,29]. The meta-model is the trained meta-learner, which maps the relationship between the meta-features of the datasets and the performance of the candidate algorithms. The algorithm steps for meta-learning are as follows (a minimal code sketch is given after the list):
Step 1: Compute the meta-features of each dataset.
Step 2: Train each algorithm on each dataset to evaluate its performance.
Step 3: Construct the meta-knowledge table with the meta-features of datasets and the performance of candidate algorithms.
Step 4: Train the meta-learner based on the meta-knowledge table to map the relationship between meta-features and performance of the candidate algorithms. The trained meta-learner is also called the meta-model.
Step 5: Compute the meta-features of the new dataset and predict the performance of candidate algorithms on the new dataset by the meta-model. Select the algorithm with the highest predictive performance as the recommended algorithm.
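The following sketch illustrates Steps 1–5 with scikit-learn, assuming KNN regression as the meta-learner as in [28,29]. The helper `compute_meta_features` and the reduced candidate set are hypothetical placeholders, not the paper's implementation.

```python
# Sketch of the meta-learning recommendation loop (Steps 1-5).
# `datasets` is a list of (X, y) pairs; `compute_meta_features(X)` is a
# user-supplied function returning a fixed-length meta-feature vector.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

candidates = {"DT": DecisionTreeClassifier(), "SVM": SVC()}  # illustrative subset

def build_meta_knowledge(datasets, compute_meta_features):
    meta_X, meta_y = [], []
    for X, y in datasets:
        meta_X.append(compute_meta_features(X))                        # Step 1
        meta_y.append([cross_val_score(clf, X, y, cv=5).mean()         # Step 2
                       for clf in candidates.values()])
    return np.array(meta_X), np.array(meta_y)                          # Step 3

def recommend(meta_X, meta_y, X_new, compute_meta_features, k=3):
    meta_learner = KNeighborsRegressor(n_neighbors=k).fit(meta_X, meta_y)   # Step 4
    pred_acc = meta_learner.predict([compute_meta_features(X_new)])[0]      # Step 5
    return list(candidates)[int(np.argmax(pred_acc))]
```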

2.3. Shelly Nearest Neighbor

The SNN is a neighbor-instance selection method for classification and regression problems. Given an instance, its shelly nearest neighbors are the nearest neighbors that form a shell encapsulating the instance [39]. The SNN method considers the left and right nearest neighbors of an instance on each attribute in a given dataset [31]. The specific steps of the SNN algorithm are as follows [40].
Let $F = \{f_1, f_2, \ldots, f_R\}$ be the set of $R$ class labels and $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$ be the dataset consisting of $N$ instances, where $x_i$ is a vector of $M$ attributes and $y_i \in F$ is the label of the $i$-th instance, $x_i$.
For the $j$-th attribute ($1 \le j \le M$), the left nearest neighbors of a query instance, $x_t$, within $D$ are the instances whose value on the $j$-th attribute is smaller than that of $x_t$ but larger than those of the rest. The left nearest neighbors of $x_t$ on the $j$-th attribute are defined as follows (Equation (6)):

$x_t^{-}(D, j) = \left\{ x_i \in D \mid x_{kj} \le x_{ij} \le x_{tj},\ 1 \le i, k \le N \right\}$ (6)

where $x_{ij}$ is the $j$-th attribute value of $x_i$. According to this definition, if $x_{tj}$ is not the smallest value, $x_t$ has at least one left nearest neighbor, $x_i \in D$, on the $j$-th attribute, such that $x_{ij} \le x_{tj}$, whereas $x_{kj} \le x_{ij}$ for the remaining instances, $x_k$, within $D$. Based on Equation (6), the left nearest neighbors of $x_t$ over all attributes within $D$ are represented as Equation (7):

$x_t^{-}(D) = \bigcup_{j=1}^{M} x_t^{-}(D, j)$ (7)

In a similar way, the right nearest neighbors of $x_t$ over all attributes within $D$ are represented as Equation (8):

$x_t^{+}(D) = \bigcup_{j=1}^{M} x_t^{+}(D, j)$ (8)

where $x_t^{+}(D, j)$ is the set of right nearest neighbors of $x_t$ with respect to the $j$-th attribute, as shown in Equation (9):

$x_t^{+}(D, j) = \left\{ x_i \in D \mid x_{kj} \ge x_{ij} \ge x_{tj},\ 1 \le i, k \le N \right\}$ (9)

The SNN of $x_t$ within $D$ consists of its left and right nearest neighbors, as shown in Equation (10):

$SNN(x_t) = x_t^{+}(D) \cup x_t^{-}(D)$ (10)
Generally speaking, there are about 2 × M shelly nearest neighbors for xt if D has M attributes. For ease of understanding, the SNN of the query instance is shown in Figure 2.
Figure 2 shows a diagram of instances in a two-dimensional distribution. The red triangle is the query instance, marked as Query. The blue quadrangles are the neighbor instances of the query instance, which are selected by the SNN. $x_t$ is the query instance. The right nearest neighbor set, $x_t^{+}(D, j)$, is composed of all instances in dataset $D$ whose values are greater than or equal to $x_{tj}$. The left nearest neighbor set, $x_t^{-}(D, j)$, is composed of all instances in dataset $D$ whose values are less than or equal to $x_{tj}$. The total number of dimensions is $M = 2$, where $j = 1$ represents the horizontal axis variable and $j = 2$ represents the vertical axis variable. $x_{t1}$ and $x_{t2}$ represent the horizontal and vertical axis values of the query instance, respectively.
The selection process of the neighbor instances of $x_t$ is as follows: First, the horizontal axis value and the vertical axis value of all instances in the dataset, $D$, are compared with $x_{t1}$ and $x_{t2}$, respectively. Secondly, if the value of the horizontal axis variable of an instance is greater or less than $x_{t1}$, that instance joins the instance set $x_t^{+}(D, 1)$ or $x_t^{-}(D, 1)$; if the value of the vertical axis variable of an instance is greater or less than $x_{t2}$, that instance joins the instance set $x_t^{+}(D, 2)$ or $x_t^{-}(D, 2)$. Thirdly, the left and right nearest neighbors of the query instance $x_t$ along the horizontal axis are $x_1$ and $x_2$, which are the maximum in $x_t^{-}(D, 1)$ and the minimum in $x_t^{+}(D, 1)$, respectively. Fourthly, the instance with the maximum value in $x_t^{-}(D, 2)$ is $x_3$, and the instance with the minimum value in $x_t^{+}(D, 2)$ is the same instance, $x_3$; thus, the left and right nearest neighbor of $x_t$ along the vertical axis is the same instance, $x_3$. The shelly nearest neighbor set of the query instance $x_t$ therefore includes instances $x_1$, $x_2$, and $x_3$.
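A minimal NumPy sketch of the selection defined by Equations (6)–(10) is given below; the function name and the attribute-matrix representation are assumptions for illustration.

```python
# Shelly-nearest-neighbor selection: for each attribute j, keep the instances
# closest to the query from the left (values <= x_t[j]) and from the right
# (values >= x_t[j]), then take the union over all attributes (Eq. (10)).
import numpy as np

def shelly_nearest_neighbors(X, x_t):
    """X: (N, M) training attributes; x_t: (M,) query. Returns sorted row indices."""
    snn = set()
    for j in range(X.shape[1]):
        left = np.where(X[:, j] <= x_t[j])[0]
        if left.size:                                   # Eq. (6): left NN on attribute j
            snn.update(left[X[left, j] == X[left, j].max()])
        right = np.where(X[:, j] >= x_t[j])[0]
        if right.size:                                  # Eq. (9): right NN on attribute j
            snn.update(right[X[right, j] == X[right, j].min()])
    return sorted(snn)
```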

2.4. Decremental Instance Selection for KNN Regression

Decremental instance selection for KNN regression was first proposed by Song [30]. DISKR is an effective instance selection algorithm for KNN regression that removes outliers and the samples in the training set that have little effect on KNN.
The KNN regressor is learned by comparing a given test instance with the training set. Let $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$ be the training set, where $x_i = (x_{i1}, x_{i2}, \ldots, x_{iM})$ is the $i$-th instance with $M$ attributes, $y_i$ is the output of $x_i$, and $N$ is the number of instances. When a query instance, $x_t$, is given, the distance, $d_i$, between $x_t$ and each instance, $x_i$, in $D$ is calculated first, and then the $d_i$ are sorted in ascending order. The first $k$ instances whose $d_i$ rank ahead are selected as the $k$ nearest neighbors of $x_t$, and the predicted output, $\hat{y}_t$, of $x_t$ is the average of the $y_i$ of the $k$ nearest neighbors.
The specific steps of DISKR are as follows:
Input: dataset $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$; the parameter $\theta$.
Output: the subset $S \subseteq D$.
Step 1: Remove outliers. If $P_D(x_i) = |y_i - \hat{y}_i| > (1 - \theta)\, y_i$, the instance $x_i$ is treated as an outlier and removed; otherwise, it is retained. Here, $y_i$ is the output of $x_i$, and $\hat{y}_i$ is the predicted output of $x_i$.
Step 2: Sort the remaining instances after removing outliers. Sort the instances, $x_i$, by the absolute difference, $P_D(x_i) = |y_i - \hat{y}_i|$, in descending order.
Step 3: Delete instances with little effect on the KNN regressor. The effect of $x_i$ can be estimated by the change in the performance of KNN over $D$ and $D \setminus \{x_i\}$. The training error is used to approximately estimate the performance of KNN, expressed as the residual sum of squares (RSS).
$R_{bf}(x_i)$ is the RSS on $D$, and $R_{af}(x_i)$ is the RSS on $D \setminus \{x_i\}$, i.e., the training set without $x_i$. $R_{bf}(x_i)$ and $R_{af}(x_i)$ are shown in Equations (11) and (12):

$R_{bf}(x_i) = \sum_{x_j \in D \setminus \{x_i\}} \left(y_j - \hat{y}_j\right)^2$ (11)

$R_{af}(x_i) = \sum_{x_j \in D \setminus \{x_i\}} \left(y_j - \hat{y}_j'\right)^2$ (12)

where $\hat{y}_j$ and $\hat{y}_j'$ ($1 \le j \le N$, $j \ne i$) are the outputs predicted by KNN trained on $D$ and on $D \setminus \{x_i\}$, respectively.
The effect, $E(x_i)$, of $x_i$ on KNN is represented as Equation (13):

$E(x_i) = R_{af}(x_i) - R_{bf}(x_i)$ (13)

After an instance, $x_i$, is removed, the following rule is adopted to avoid a significant negative change in the performance of the regressor, as shown in Equation (14):

$E(x_i) \le \theta\, R_{bf}(x_i)$ (14)

where $\theta \in (0, 1)$ is the significance coefficient.
Step 4: Output the subset $S$; the samples remaining in $S$ are the more relevant samples, obtained after removing outliers and points that have little effect on KNN.
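The sketch below is a compact approximation of these steps using scikit-learn's KNN regressor. It simplifies the published DISKR (for example, the residuals used for sorting are computed once before removal), and the function name `diskr` is hypothetical.

```python
# Simplified DISKR sketch: remove outliers whose prediction error exceeds
# (1 - theta) * |y_i|, then drop instances whose removal does not degrade the
# KNN regressor by more than theta * R_bf (Equations (11)-(14)).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def diskr(X, y, theta=0.05, k=5):
    X, y = np.asarray(X, float), np.asarray(y, float)
    knn = KNeighborsRegressor(n_neighbors=k)
    # Leave-one-out predictions used for outlier detection and sorting.
    y_hat = np.array([
        knn.fit(np.delete(X, i, 0), np.delete(y, i)).predict(X[i:i + 1])[0]
        for i in range(len(y))
    ])
    keep = np.abs(y - y_hat) <= (1 - theta) * np.abs(y)       # Step 1: remove outliers
    X, y, resid = X[keep], y[keep], np.abs(y - y_hat)[keep]
    selected = list(range(len(y)))
    for i in np.argsort(-resid):                              # Step 2: largest residual first
        rest = [j for j in selected if j != i]
        if len(rest) < k:
            break
        # Step 3: compare RSS before removal (trained on current subset) and after.
        r_bf = np.sum((y[rest] - knn.fit(X[selected], y[selected]).predict(X[rest])) ** 2)
        r_af = np.sum((y[rest] - knn.fit(X[rest], y[rest]).predict(X[rest])) ** 2)
        if r_af - r_bf <= theta * r_bf:                       # Eq. (14)
            selected = rest
    return X[selected], y[selected]                           # Step 4: subset S
```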

3. Methodology

In this section, the phase prediction of high-entropy alloys (PPH) method, which integrates the machine learning recommendation method and a criterion, is proposed to predict the phases of HEAs.

3.1. Machine Learning Recommendation Method

An MLRM based on improved meta-learning is proposed. The schematic diagram in Figure 3 illustrates the MLRM, which can recommend an ideal algorithm for material designers.
As shown in Figure 3, the data library, $L = \{D_1, D_2, \ldots, D_N\}$, contains $N$ datasets, where $D_i$ is the $i$-th dataset of $L$. The parameters $\Delta S_{mix}$, $\Delta H_{mix}$, $\delta$, $\Delta\chi$, and $\Omega$ of each sample in each dataset, $D_i$, are calculated by the parameter method [11,12]. The meta-features should reflect the characteristics of the datasets. The meta-feature set, $MF = \{mf_1, mf_2, \ldots, mf_F\}$, includes $\overline{\Delta S_{mix}}$, $\overline{\Delta H_{mix}}$, $\bar{\delta}$, $\overline{\Delta\chi}$, $\bar{\Omega}$, $\sigma^2_{\Delta S_{mix}}$, $\sigma^2_{\Delta H_{mix}}$, $\sigma^2_{\delta}$, $\sigma^2_{\Delta\chi}$, and $\sigma^2_{\Omega}$, i.e., the mean value and the variance of $\Delta S_{mix}$, $\Delta H_{mix}$, $\delta$, $\Delta\chi$, and $\Omega$ in each dataset. The candidate algorithm set, $C = \{C_1, C_2, \ldots, C_M\}$, includes the decision tree (DT), KNN, SVM, random forest (RF), and bagging. The meta-knowledge table describes the relationship between the values of the meta-features (MF) and the accuracies of the candidate algorithms (CA).
The new meta-learner based on SNN and DISKR is proposed to improve the performance of meta-learning and is called improved SNN and DISKR (ISD). Therefore, an MLRM based on improved meta-learning is proposed. The meta-model is the trained meta-learner, ISD.
The steps of the MLRM are as follows:
Input: data library $L = \{D_1, D_2, \ldots, D_N\}$; candidate algorithm set $C = \{C_1, C_2, \ldots, C_M\}$; new dataset $D_{new}$.
Output: recommended algorithm.
Step 1: Compute the parameters $\Delta S_{mix}$, $\Delta H_{mix}$, $\delta$, $\Delta\chi$, and $\Omega$ of each sample in each dataset, $D_i$, by Equations (1)–(5) in Section 2.1.
Step 2: Construct the meta-features and calculate their values: $\overline{\Delta S_{mix}}$, $\overline{\Delta H_{mix}}$, $\bar{\delta}$, $\overline{\Delta\chi}$, $\bar{\Omega}$, $\sigma^2_{\Delta S_{mix}}$, $\sigma^2_{\Delta H_{mix}}$, $\sigma^2_{\delta}$, $\sigma^2_{\Delta\chi}$, and $\sigma^2_{\Omega}$.
Step 3: Based on the values of the meta-features and the real classification results on each dataset, train the candidate algorithms DT, KNN, SVM, RF, and bagging in the candidate algorithm set $C = \{C_1, C_2, \ldots, C_M\}$ to evaluate their accuracies.
Step 4: Establish the meta-knowledge table from the meta-features of the datasets and the accuracies of the candidate algorithms. The values of the meta-features are used as the input variables (MF), and the accuracies of the candidate algorithms are used as the output variables (CA).
Step 5: Train the new meta-learner, ISD, on the input variables, MF, and output variables, CA, from Step 4, obtaining the trained meta-learner ISD, also known as the meta-model.
Step 6: Compute the parameters $\Delta S_{mix}$, $\Delta H_{mix}$, $\delta$, $\Delta\chi$, and $\Omega$ of each sample in the new dataset, $D_{new}$, by the parameter method, and then compute the values of its meta-features, $MF_{new}$: $\overline{\Delta S_{mix}}$, $\overline{\Delta H_{mix}}$, $\bar{\delta}$, $\overline{\Delta\chi}$, $\bar{\Omega}$, $\sigma^2_{\Delta S_{mix}}$, $\sigma^2_{\Delta H_{mix}}$, $\sigma^2_{\delta}$, $\sigma^2_{\Delta\chi}$, and $\sigma^2_{\Omega}$.
Step 7: Input the meta-features, $MF_{new}$, of the new dataset, $D_{new}$, into the meta-model. The meta-model is realized as follows.
(1) The nearest neighbor set, $SNN(MF_{new})$, for the meta-features, $MF_{new}$, of the new dataset, $D_{new}$, is obtained by the SNN method in Section 2.3 based on the meta-knowledge table.
(2) The subset, $S$, is obtained by DISKR in Section 2.4; it contains the samples remaining in the meta-knowledge table after removing outliers and points that have little effect on KNN. The nearest neighbor set, $DI(MF_{new})$, for the meta-features, $MF_{new}$, is formed by the first $k$ samples of $S$ with the smallest distance in DISKR.
(3) Obtain the nearest neighbor set, $SD(MF_{new})$, for the meta-features, $MF_{new}$, of the new dataset, $D_{new}$, by ISD, as shown in Equation (15):

$SD(MF_{new}) = SNN(MF_{new}) \cap DI(MF_{new})$ (15)

where the nearest neighbor set, $SD(MF_{new})$, is the intersection of $SNN(MF_{new})$ and $DI(MF_{new})$.
Step 8: Output the average accuracies of the candidate algorithms over $SD(MF_{new})$. Select the algorithm with the highest average accuracy as the recommended algorithm.
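A sketch of the ISD meta-model (Steps 7 and 8) is given below, reusing the hypothetical `shelly_nearest_neighbors` and `diskr` helpers sketched in Sections 2.3 and 2.4. Running DISKR against the mean candidate accuracy and falling back to the union when the intersection in Equation (15) is empty are simplifying assumptions, not details from the paper.

```python
# ISD recommendation sketch: intersect SNN and DISKR-based neighbor sets of the
# new dataset's meta-features, then average candidate accuracies (Step 8).
import numpy as np

def isd_recommend(meta_X, meta_acc, mf_new, algo_names, theta=0.05, k=3):
    """meta_X: (N, F) meta-features; meta_acc: (N, M) candidate accuracies."""
    meta_X, meta_acc, mf_new = map(np.asarray, (meta_X, meta_acc, mf_new))
    snn_idx = set(shelly_nearest_neighbors(meta_X, mf_new))           # SNN(MF_new)
    # DISKR filtering, run here against the mean candidate accuracy for simplicity.
    Xs, _ = diskr(meta_X, meta_acc.mean(axis=1), theta=theta, k=k)
    kept = np.flatnonzero((meta_X[:, None] == Xs[None]).all(-1).any(1))
    dist = np.linalg.norm(meta_X[kept] - mf_new, axis=1)
    di_idx = set(kept[np.argsort(dist)[:k]].tolist())                 # DI(MF_new)
    sd_idx = (snn_idx & di_idx) or (snn_idx | di_idx)                 # Eq. (15); union as fallback
    mean_acc = meta_acc[sorted(sd_idx)].mean(axis=0)                  # Step 8: average accuracies
    return algo_names[int(np.argmax(mean_acc))]
```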

3.2. Phase Prediction of HEAs

The criterion of the SS phase has been verified by a large number of experiments in the materials field. According to this criterion, if $\delta$ and $\Omega$ fall within the range $\delta \le 6.6\%$ and $\Omega \ge 1.1$, the SS phase is easily formed [5,11,13]. The decision process of the criterion is easy to understand, saves computation time, and has strong interpretability and generalization. In this paper, the criterion of SS is integrated into the phase prediction of HEAs.
Combining the advantages of strong learning ability of MLRM and strong interpretability and generalization of the criterion of SS, the phase prediction of HEAs is proposed by integrating criterion and MLRM. The PPH is a serial model. First, the criterion of the SS phase is used to determine the phases of the test dataset, T. Secondly, when the remaining samples cannot be determined by the criterion of SS, the MLRM is adopted to recommend the appropriate ML algorithm for the remaining samples. Finally, the recommended algorithm is used to predict the phase of HEAs.
A schematic diagram of the PPH is shown in Figure 4.
In Figure 4, the blue frame is the criterion of SS, and the green frame is the MLRM. The specific details of the PPH are illustrated as follows:
Input: test dataset $T = \{t_1, t_2, \ldots, t_N\}$.
Output: phases of the samples in $T$.
Step 1: Compute the features $\delta$ and $\Omega$ of the samples in the test dataset, $T$, and use the criterion of SS to judge their phases. The dataset $P = \{p_1, p_2, \ldots, p_J\}$ is composed of the samples satisfying $\delta \le 6.6\%$ and $\Omega \ge 1.1$; these samples are judged to be SS.
Step 2: The remaining samples of the test dataset, $T$, constitute the dataset $A = \{a_1, a_2, \ldots, a_Q\}$, that is, $A = T - P$. Because the criterion of SS cannot judge the samples in dataset $A$, the machine learning recommendation method is adopted to predict their phases.
Step 3: For dataset $A$, compute the parameters of each sample by Equations (1)–(5) in Section 2.1 and compute the values of the meta-features of dataset $A$. Then, input the values of the meta-features into the meta-model.
Step 4: Predict the accuracies of the candidate algorithms with the meta-model, and select an ideal algorithm as the recommended algorithm for dataset $A$.
Step 5: Use the recommended algorithm to predict the phases of dataset $A$.
Step 6: Output the phases of the test dataset, $T$, by combining the results of dataset $P$ and dataset $A$.
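A minimal sketch of this serial decision is shown below; `recommended_clf` stands for whatever classifier the MLRM returned (assumed already trained), and the function name is hypothetical.

```python
# PPH serial decision (Steps 1-6): samples satisfying the SS criterion are
# labelled directly; the rest are passed to the classifier recommended by MLRM.
# `delta` is in percent and `omega` is dimensionless.
import numpy as np

def pph_predict(delta, omega, X_features, recommended_clf):
    delta, omega = np.asarray(delta), np.asarray(omega)
    phases = np.empty(len(delta), dtype=object)
    is_ss = (delta <= 6.6) & (omega >= 1.1)        # criterion of SS (Step 1)
    phases[is_ss] = "SS"                           # dataset P
    if (~is_ss).any():                             # dataset A = T - P (Steps 2-5)
        phases[~is_ss] = recommended_clf.predict(np.asarray(X_features)[~is_ss])
    return phases                                  # Step 6: combined output
```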

4. Results and Discussions

In order to verify the effectiveness of the proposed PPH, numerous experiments are carried out. The accuracy of the proposed method is compared with several traditional ML algorithms, including DT, KNN, SVM, RF, and bagging [41,42,43]. The parameters of the algorithms used in these experiments are as follows. In DT, the hyperparameter 'maximum depth' is set to 25. The hyperparameter 'k' of KNN is set to 5. The support vector classification (SVC) method is used for SVM. In bagging, the hyperparameter 'number of weak learners' is set to 11. In the DISKR component of ISD, the hyperparameter $\theta$ is set to 0.05. All other parameters of the ML algorithms above are left at their default values.
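Under the assumption that the candidates are configured through scikit-learn, the settings above correspond to the following sketch (parameter names are scikit-learn's; only the values are taken from the text):

```python
# Candidate classifiers with the stated hyperparameters; all other settings default.
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier

candidates = {
    "DT": DecisionTreeClassifier(max_depth=25),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(),                                   # support vector classification
    "RF": RandomForestClassifier(),
    "Bagging": BaggingClassifier(n_estimators=11),  # 11 weak learners
}
```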
The experiments are carried out with the sklearn library of Python (version 3.7.4; Python was created by Guido van Rossum in Amsterdam, the Netherlands). The hardware is a Lenovo laptop with an Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz and 16.0 GB of RAM (Lenovo Group Limited, Beijing, China).

4.1. Overview of Experimental Datasets

In our study, the 12 HEAs datasets were collected from several papers [4,6,7,10,14,44,45,46,47,48,49]. For convenience, the 12 datasets are sequentially marked by D1, D2, D3, D4, D5, D6, D7, D8, D9, D10, D11, and D12. The 12 datasets contain 902 samples, including 405 quinary HEAs, 359 senary HEAs, and 138 septenary HEAs. The phases of 902 samples include SS, IM, SS+IM, and AM. The quantity of SS, IM, SS+IM, and AM of the quinaries, senaries, and septenaries in 12 datasets is shown in Figure 5. As shown in Figure 5, the number of SS is the largest, and the number of IM is the least among the 12 datasets. Figure 5 shows that each dataset contains different quantities of quinaries, senaries, and septenaries.
The 12 datasets include 24 elements: Al, Ag, Be, Ce, Co, Cr, Cu, Dy, Fe, Gd, Hf, Mg, Mn, Mo, Nb, Ni, Pd, Si, Sn, Ta, Ti, V, Y, and Zr. The number of occurrences of each element in each dataset is shown in Figure 6. As shown in Figure 6, the occurrence counts of different elements vary. The elements Al, Co, Cr, Cu, Fe, Ni, and Ti appear most frequently, and the element Gd appears only once in the 12 datasets.
The maximum and minimum values of the five input parameters, $\Delta S_{mix}$, $\Delta H_{mix}$, $\delta$, $\Delta\chi$, and $\Omega$, are shown in Table 1.
A snapshot of part of the samples in the HEA dataset, in the Pandas data frame format, is shown in Table 2. In Table 2, the first column is the number of the HEA, and the second column gives its name. Columns three to seven are the parameters of the HEAs, i.e., $\Delta S_{mix}$, $\Delta H_{mix}$, $\delta$, $\Delta\chi$, and $\Omega$. The final column is the phase of the HEA. Table 2 shows a snapshot of the first six samples taken from all HEAs.
The correlation between any two parameters in Table 2 plays an important role in the phase prediction of HEAs, so the Pearson correlation coefficient, $C_{ab}$, is selected to describe the correlations of the HEA parameters, as shown in Equation (16) [14]:

$C_{ab} = \dfrac{1}{n-1} \sum_{i=1}^{n} \dfrac{(a_i - \bar{a})(b_i - \bar{b})}{s_a s_b}$ (16)

where $a_i$ and $b_i$ are the sample values of parameters $a$ and $b$, respectively; $\bar{a}$ and $\bar{b}$ are the mean values of parameters $a$ and $b$, respectively; $s_a$ and $s_b$ are the standard deviations of parameters $a$ and $b$, respectively; and $n$ is the number of samples. If $|C_{ab}| = 1$, parameters $a$ and $b$ are completely correlated, whereas values near zero indicate that they are essentially unrelated. The correlation among the 5 parameters of the 12 datasets is shown in Figure 7.
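Equation (16) is the coefficient computed by pandas' DataFrame.corr with the Pearson method; the short sketch below illustrates this on a few rows taken from Table 2 (values rounded for brevity).

```python
import pandas as pd

# Illustrative values from the first three rows of Table 2 (rounded).
df = pd.DataFrame({
    "dSmix": [13.15, 9.10, 14.78],
    "dHmix": [-15.41, -13.24, -22.76],
    "delta": [5.23, 9.36, 5.88],
    "dChi":  [0.038, 0.298, 0.120],
    "Omega": [1.68, 0.77, 1.09],
})
corr = df.corr(method="pearson")   # 5 x 5 matrix of C_ab values
print(corr.round(2))
```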
In Figure 7, the value of the correlation coefficient between the two corresponding parameters is shown in each grid cell. Color intensity is proportional to the correlation coefficient. On the left side of the correlogram, the legend shows the relationship between the correlation coefficients and the corresponding colors: the darker the blue, the higher the correlation; the darker the orange, the lower the correlation. The range of $C_{ab}$ in the 12 datasets is between −0.88 and 0.96. Among them, ∆Hmix and Ω in D5 have the highest correlation.

4.2. Comparison between Meta-Learning and Traditional ML Algorithms

The meta-learning based on meta-knowledge table can recommend an ideal algorithm for material designers so as to predict the phases of HEAs. In order to validate the performance of meta-learning, the experimental results of meta-learning are compared with the DT, KNN, SVM, RF, and bagging [41,42,43]. The fivefold cross-validation method is also carried out 20 times to avoid the overfitting problem of ML algorithms [50]. The comparison experimental results between meta-learning and traditional ML algorithms for all, quinaries, senaries, and septenaries are shown in Figure 8. In Figure 8, the experimental results of all, quinaries, senaries, and septenaries between meta-learning and other traditional ML algorithms are shown in (a), (b), (c), and (d), respectively. In Figure 8a, “all” represents samples containing quinaries, senaries, and septenaries. The horizontal axis represents the ID of these datasets, and the vertical axis represents the accuracies of meta-learning and traditional ML algorithms. The red line with pentagonal stars represents the accuracy of algorithms recommended by meta-learning. The black line with squares shows the accuracy of DT. The navy line with circles shows the accuracy of KNN. The blue line with upward triangles represents the accuracy of SVM. The pink line with downward triangles shows the accuracy of RF, and the olive line with diamonds represents the accuracy of bagging. Based on the results presented in Figure 8, the algorithm with the highest accuracy can be recommended by meta-learning on some datasets.
In order to facilitate the reader’s observation, the results of all, quinaries, senaries, and septenaries are shown in Table 3, Table 4, Table 5 and Table 6. The bold part of the tables is the algorithm with the highest accuracy on the corresponding dataset.
In Table 3, the first column is the ID of the 12 datasets, and the second to seventh columns are the accuracies of DT, KNN, SVM, RF, bagging, and meta-learning, respectively. As shown in Table 3, meta-learning recommends the algorithm with the highest accuracy in D9, D11, and D12; in all three cases the best algorithm is bagging, with accuracies of 0.983, 0.995, and 0.912, respectively, for all. In D4, D5, and D7, although meta-learning does not recommend the highest-accuracy algorithm, it recommends the one with the second highest accuracy, which is bagging in all three cases, with accuracies of 0.833, 0.871, and 0.911, respectively, for all.
It can be seen from Table 4 that meta-learning recommends the algorithm with the highest accuracy in D2, D3, D6, D7, D9, and D12, and the accuracies are 0.927, 0.894, 0.994, 0.898, 0.968, 0.853, respectively, for quinaries. Meta-learning recommends the algorithm with the second highest accuracy in D1 and D11; the algorithms with the second highest accuracy are bagging and RF in D1 and D11, respectively.
As shown in Table 5, meta-learning recommends the algorithm with the highest accuracy in D1, D3, D6, and D10 for senaries, and the accuracies are 0.872, 0.900, 0.972, 0.817, respectively. Meta-learning recommends the algorithm with the second highest accuracy in D2 for senaries; the algorithm with the second highest accuracy is bagging in D2.
In Table 6, the algorithm with the highest accuracy is recommended by meta-learning in D2, D3, and D12 for septenaries; the algorithms with the highest accuracy are RF, RF, and RF in D2, D3, and D12, respectively. Meta-learning recommends the algorithm with the second highest accuracy in D4 and D7 for septenaries; the accuracies are 0.969 and 0.964, respectively.
Meta-learning recommends the algorithm with the highest accuracy on some datasets. The reason should be that meta-learning can mine the relationship between the mathematical statistical characteristics of the HEA datasets and the performance of the algorithms. The mathematical statistical characteristics of the datasets include the mean value and variance of the parameters presented in Section 2.1. It is similarly reported in the literature of other fields that meta-learning can recommend a desirable algorithm in most cases [24,27]. The experimental results also show that RF and bagging have higher accuracies than DT, KNN, and SVM [41,42,43]. The reason could be that RF and bagging are both ensemble algorithms, which are composed of multiple classifiers and integrate all the classification results.
However, meta-learning cannot recommend the algorithm with the highest accuracy on some datasets. The reasons could be as follows: firstly, the meta-learner is a crucial part of the meta-learning method, and the meta-learner here is KNN, whose performance is not ideal among ML algorithms. Secondly, KNN does not fully consider the information of the left and right nearest neighbors of every sample on every attribute. Finally, the meta-learner, KNN, does not consider how to exclude noise samples and make up for missing information, which influences the decision among different algorithms. If the performance of the meta-learner can be improved, the recommendation of meta-learning will be more accurate.

4.3. Comparison between MLRM and Meta-Learning

To alleviate the problem presented in Section 4.2, a novel MLRM method is proposed based on improved meta-learning. In order to validate the performance of MLRM, a comparison between MLRM and meta-learning is shown in Figure 9. In Figure 9, the accuracies of MLRM and meta-learning of all, quinaries, senaries, and septenaries are shown in (a), (b), (c), and (d), respectively. The horizontal axis is the ID of these datasets, and the vertical axis is the accuracy. The red line with circles shows the accuracy of algorithms recommended by MLRM. The black line with squares represents the accuracy of algorithms recommended by meta-learning. It can be seen from Figure 9 that all experimental results of MLRM are better than those of meta-learning.
In order to facilitate the reader's observation, the rankings of the algorithms recommended by meta-learning and MLRM, as well as the real rankings, are listed in Table 7, Table 8, Table 9 and Table 10. The accuracies of the optimal algorithms recommended by meta-learning and MLRM, as well as the accuracies of the real optimal algorithms, are listed in Table 11, Table 12, Table 13 and Table 14.
In Table 7, the first column is the ID of the datasets. The second and third columns are the rankings of the algorithms recommended by meta-learning and MLRM, respectively, and the fourth column is the real algorithm ranking, labeled TRUE. The top two algorithms recommended by meta-learning and MLRM are listed in columns 2 and 3, and the fourth column gives the actual top two algorithms; in each cell, the first and second algorithm names are the highest- and second-highest-accuracy algorithms, respectively. In Table 11, the first column is the ID of the datasets, the second and third columns are the accuracies of the optimal algorithms recommended by meta-learning and MLRM, and the fourth column is the accuracy of the real optimal algorithm. As seen in Table 7 and Table 11, the MLRM can recommend the algorithm with the highest accuracy in all 12 datasets for all. However, meta-learning only recommends the algorithm with the highest accuracy in D9, D11, and D12; the second highest in D4, D5, and D7; and neither the highest nor the second highest accuracy algorithm in the other datasets.
According to Table 8 and Table 12, meta-learning only recommends the algorithm with the highest accuracy in D2, D3, D6, D7, D9, and D12; and the second highest accuracy in D1 and D11. The MLRM can recommend the algorithm with the highest accuracy in every dataset for quinaries.
As shown in Table 9 and Table 13, MLRM recommends the algorithm with the highest accuracy for senaries. However, meta-learning only recommends the algorithm with the highest accuracy in D1, D3, D6, and D10; and the second highest accuracy in D2. The quantity of ideal algorithms recommended by meta-learning is less than that recommended by MLRM.
As shown in Table 10 and Table 14, meta-learning only recommends the algorithm with the highest accuracy in D2, D3, and D12; and the second highest accuracy in D4 and D7. The MLRM recommends the algorithm with the highest accuracy in 12 datasets for septenaries.
In summary, MLRM can recommend the algorithm with the highest accuracy in every dataset, whereas meta-learning can only recommend the algorithm with the highest accuracy on several datasets. The reasons may be that the meta-learner in traditional meta-learning is KNN, and the meta-learner in MLRM is ISD based on DISKR and SNN. KNN performs poorly when there are noise points and does not consider the useful neighbor information of each sample. In other fields, the DISKR in MLRM has been shown to remove noise points and reduce the size of the datasets to improve performance [30]. The SNN in MLRM is adopted to obtain more reliable and useful left and right neighbor information [31].

4.4. Comparison between PPH and MLRM

A PPH is proposed based on the criterion of SS and MLRM. The criterion of SS is as follows: if the $\delta$ and $\Omega$ of a sample fall within the range $\delta \le 6.6\%$ and $\Omega \ge 1.1$, the phase of the sample is recognized as SS. To better illustrate the criterion of SS, a scatter plot based on the $\delta$–$\Omega$ coordinate system is shown in Figure 10, which shows the distribution of SS and non-SS HEAs. In Figure 10, the horizontal axis is $\delta$ (%), and the vertical axis is $\Omega$. The horizontal dashed line is $\Omega = 1.1$, the vertical dashed line is $\delta = 6.6\%$, and the upper left corner bounded by the dashed lines represents the region $\delta \le 6.6\%$ and $\Omega \ge 1.1$. The red circles represent samples whose phase is SS, and the blue triangles represent samples whose phase is non-SS. Figure 10 shows that the phases of most samples in the upper left corner are SS in each dataset, and most SS samples fall within the range $\delta \le 6.6\%$ and $\Omega \ge 1.1$. The experimental results show that the criterion $\delta \le 6.6\%$ and $\Omega \ge 1.1$ works well for SS phase determination. Therefore, the criterion of SS is integrated into the PPH to improve the phase prediction accuracy of HEAs.
In order to verify the performance of the PPH, a set of comparison experiments between PPH and MLRM are carried out. The comparison of the results is shown in Figure 11. In Figure 11, the accuracies of PPH and MLRM in all, quinaries, senaries, and septenaries are shown in (a), (b), (c), and (d), respectively. The horizontal axis is the ID of these datasets, and the vertical axis is the accuracy. The red line with circles shows the accuracy of PPH. The black line with squares represents the accuracy of algorithms recommended by MLRM. As shown in Figure 11, PPH has higher prediction accuracy than MLRM in all, quinaries, senaries, and septenaries.
The specific results are listed in Table 15. The first column is the ID of datasets. Column 2, column 4, column 6, and column 8 are the accuracies of MLRM in all, quinaries, senaries, and septenaries, respectively. Column 3, column 5, column 7, and column 9 are the accuracies of PPH in all, quinaries, senaries, and septenaries, respectively. NULL represents no accuracy of MLRM and PPH. From Table 15, it can be concluded that the accuracy of PPH is higher than that of MLRM. The reason could be that PPH combines the advantages of the criterion of SS and MLRM. The MLRM is a pure ML method, and it does not make full use of the professional knowledge in the material field. The criterion of SS has strong interpretability and generalization and requires only a small amount of training samples. Therefore, the prediction accuracy of PPH is higher than that of MLRM [33,34,51].

5. Conclusions

In order to predict the phases of HEAs, in this paper we propose the phase prediction of HEAs by integrating a criterion and the machine learning recommendation method. First, a meta-knowledge table based on the characteristics of HEAs and the performance of candidate algorithms is established, and the experiments show that meta-learning can recommend an algorithm with ideal accuracy on some datasets for material designers. Second, in order to guide material designers to select an algorithm with higher accuracy, an MLRM based on SNN and DISKR is proposed. The experimental results show that the recommendation accuracy of the MLRM is higher than that of meta-learning based on KNN for all, quinary, senary, and septenary HEAs. Third, the PPH consisting of the criterion of SS and the MLRM is proposed. Compared with other ML algorithms in the HEA field, the experimental results show that the PPH achieves better performance than the traditional meta-learning method. The average prediction accuracy of the PPH for all, quinary, senary, and septenary HEAs is 91.6%, 94.3%, 93.1%, and 95.8%, respectively. The method proposed in this paper can reduce the burden on material designers in selecting algorithms, effectively improve the prediction accuracy of HEA phases, and considerably reduce the experimental time and cost. In addition, the PPH can also provide a research foundation for other material fields.

Author Contributions

Conceptualization, S.H. and Y.L.; methodology, S.H. and Y.L.; formal analysis, M.B. and D.L.; validation, S.H. and Y.L.; data curation, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, M.B., M.S., W.L., C.W., H.T., and D.L.; supervision, S.H.; funding acquisition, S.H., M.B., W.L., C.W., and D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (No. 52175455), the National Science Foundation (Award No. 1943445), the Science and Technology Innovation Fund of Dalian (No. 2020JJ26GX040), the Equipment Pre-research Foundation (No. 80923010401), the Fundamental Research Funds for the Central Universities (No. DUT20JC19), Key research and development planning project of Hebei Province (21350101D), the Natural Science Foundation of Hebei Province of China (A2020402013), the Iron and Steel Joint Foundation of Hebei Province (E2020402016), and Handan City Science and Technology Research and Development Projects (19422101008-27).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Manzoor, A.; Pandey, S.; Chakraborty, D.; Phillpot, S.R.; Aidhy, D.S. Entropy contributions to phase stability in binary random solid solutions. NPJ Comput. Mater. 2018, 4, 1–10. [Google Scholar] [CrossRef]
  2. Gorniewicz, D.; Przygucki, H.; Kopec, M.; Karczewski, K.; Jozwiak, S. TiCoCrFeMn (BCC + C14) High-Entropy Alloy Multiphase Structure Analysis Based on the Theory of Molecular Orbitals. Materials 2021, 14, 5285. [Google Scholar] [CrossRef] [PubMed]
  3. Liu, L.; Paudel, R.; Liu, Y.; Zhao, X.L.; Zhu, J.C. Theoretical and Experimental Studies of the Structural, Phase Stability and Elastic Properties of AlCrTiFeNi Multi-Principle Element Alloy. Materials 2020, 13, 4353. [Google Scholar] [CrossRef] [PubMed]
  4. Miracle, D.B.; Senkov, O.N. A critical review of high entropy alloys and related concepts. Acta Mater. 2017, 122, 448–511. [Google Scholar] [CrossRef] [Green Version]
  5. Yang, X.; Zhang, Y. Prediction of high-entropy stabilized solid-solution in multi-component alloys. Mater. Chem. Phys. 2012, 132, 233–238. [Google Scholar] [CrossRef]
  6. Senkov, O.N.; Miracle, D.B. A new thermodynamic parameter to predict formation of solid solution or intermetallic phases in high entropy alloys. J. Alloys Compd. 2016, 658, 603–607. [Google Scholar] [CrossRef] [Green Version]
  7. Gao, M.C.; Zhang, C.; Gao, P.; Zhang, F.; Ouyang, L.Z.; Widom, M.; Hawk, J.A. Thermodynamics of concentrated solid solution alloys. Curr. Opin. Solid State Mater. Sci. 2017, 21, 238–251. [Google Scholar] [CrossRef]
  8. Ding, Q.Q.; Zhang, Y.; Chen, X.; Fu, X.Q.; Chen, D.K.; Chen, S.J.; Gu, L.; Wei, F.; Bei, H.B.; Gao, Y.F.; et al. Tuning element distribution, structure and properties by composition in high-entropy alloys. Nature 2019, 574, 223–227. [Google Scholar] [CrossRef]
  9. Li, Z.M.; Pradeep, K.G.; Deng, Y.; Raabe, D.; Tasan, C.C. Metastable high-entropy dual-phase alloys overcome the strength-ductility trade-off. Nature 2016, 534, 227–230. [Google Scholar] [CrossRef]
  10. Ye, Y.F.; Wang, Q.; Lu, J.; Liu, C.T.; Yang, Y. High-entropy alloy: Challenges and prospects. Mater. Today 2016, 19, 349–362. [Google Scholar] [CrossRef]
  11. Zhang, Y.; Yang, X.; Liaw, P.K. Alloy Design and Properties Optimization of High-Entropy Alloys. JOM 2012, 64, 830–838. [Google Scholar] [CrossRef]
  12. Guo, S.; Liu, C.T. Phase stability in high entropy alloys: Formation of solid-solution phase or amorphous phase. Prog. Nat. Sci. Mater. Int. 2011, 21, 433–446. [Google Scholar] [CrossRef] [Green Version]
  13. Tan, Y.M.; Li, J.S.; Tang, Z.W.; Wang, J.; Kou, H.C. Design of high-entropy alloys with a single solid-solution phase: Average properties vs. their variances. J. Alloys Compd. 2018, 742, 430–441. [Google Scholar] [CrossRef]
  14. Dai, D.B.; Xu, T.; Wei, X.; Ding, G.T.; Xu, Y.; Zhang, J.C.; Zhang, H.R. Using machine learning and feature engineering to characterize limited material datasets of high-entropy alloys. Comput. Mater. Sci. 2020, 175, 1–6. [Google Scholar] [CrossRef]
  15. Zhou, Z.; Zhou, Y.; He, Q.; Ding, Z.; Li, F.; Yang, Y. Machine learning guided appraisal and exploration of phase design for high entropy alloys. NPJ Comput. Mater. 2019, 5, 1–9. [Google Scholar] [CrossRef] [Green Version]
  16. Pei, Z.; Yin, J.; Hawk, J.A.; Alman, D.E.; Gao, M.C. Machine-learning informed prediction of high-entropy solid solution formation: Beyond the Hume-Rothery rules. NPJ Comput. Mater. 2020, 6, 1–8. [Google Scholar] [CrossRef]
  17. Klimenko, D.; Stepanov, N.; Li, J.; Fang, Q.; Zherebtsov, S. Machine Learning-Based Strength Prediction for Refractory High-Entropy Alloys of the Al-Cr-Nb-Ti-V-Zr System. Materials 2021, 14, 7213. [Google Scholar] [CrossRef]
  18. Dai, W.; Zhang, L.; Fu, J.; Chai, T.; Ma, X. Dual-Rate Adaptive Optimal Tracking Control for Dense Medium Separation Process Using Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4202–4216. [Google Scholar] [CrossRef]
  19. Islam, N.; Huang, W.J.; Zhuang, H.L.L. Machine learning for phase selection in multi-principal element alloys. Comput. Mater. Sci. 2018, 150, 230–235. [Google Scholar] [CrossRef]
  20. Huang, W.J.; Martin, P.; Zhuang, H.L.L. Machine-learning phase prediction of high-entropy alloys. Acta Mater. 2019, 169, 225–236. [Google Scholar] [CrossRef]
  21. Qu, N.; Chen, Y.C.; Lai, Z.H.; Liu, Y.; Zhu, J.C. The phase selection via machine learning in high entropy alloys. Procedia Manuf. 2019, 37, 299–305. [Google Scholar] [CrossRef]
  22. Liu, H.; Chen, J.; Hissel, D.; Su, H. Remaining useful life estimation for proton exchange membrane fuel cells using a hybrid method. Appl. Energy 2019, 237, 910–919. [Google Scholar] [CrossRef]
  23. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Search. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  24. Khan, I.; Zhang, X.C.; Rehman, M.; Ali, R. A Literature Survey and Empirical Study of Meta-Learning for Classifier Selection. IEEE Access 2020, 8, 10262–10281. [Google Scholar] [CrossRef]
  25. Aguiar, G.J.; Mantovani, R.G.; Mastelini, S.M.; de Carvalho, A.C.P.F.L.; Campos, G.F.C.; Junior, S.B. A meta-learning approach for selecting image segmentation algorithm. Pattern Recognit. Lett. 2019, 128, 480–487. [Google Scholar] [CrossRef]
  26. Chu, X.H.; Cai, F.L.; Cui, C.; Hu, M.Q.; Li, L.; Qin, Q.D. Adaptive recommendation model using meta-learning for population-based algorithms. Inf. Sci. 2019, 476, 192–210. [Google Scholar] [CrossRef]
  27. Cui, C.; Hu, M.Q.; Weir, J.D.; Wu, T. A recommendation system for meta-modeling: A meta-learning based approach. Expert Syst. Appl. 2016, 46, 33–44. [Google Scholar] [CrossRef] [Green Version]
  28. Pimentel, B.A.; de Carvalho, A.C.P.L.F. A new data characterization for selecting clustering algorithms using meta-learning. Inf. Sci. 2019, 477, 203–219. [Google Scholar] [CrossRef]
  29. Ferrari, D.G.; de Castro, L.N. Clustering algorithm selection by meta-learning systems: A new distance-based problem characterization and ranking combination methods. Inf. Sci. 2015, 301, 181–194. [Google Scholar] [CrossRef]
  30. Song, Y.S.; Liang, J.Y.; Lu, J.; Zhao, X.W. An efficient instance selection algorithm for k nearest neighbor regression. Neurocomputing 2017, 251, 26–34. [Google Scholar] [CrossRef]
  31. Zhang, S.C. Shell-neighbor method and its application in missing data imputation. Appl. Intell. 2010, 35, 123–133. [Google Scholar] [CrossRef]
  32. Lv, W.; Mao, Z.Z.; Yuan, P.; Jia, M.X. Multi-kernel learnt partial linear regularization network and its application to predict the liquid steel temperature in ladle furnace. Knowl. Based Syst. 2012, 36, 280–287. [Google Scholar] [CrossRef]
  33. Lv, W.; Mao, Z.Z.; Yuan, P.; Jia, M.X. Pruned Bagging Aggregated Hybrid Prediction Models for Forecasting the Steel Temperature in Ladle Furnace. Steel Res. Int. 2014, 85, 405–414. [Google Scholar] [CrossRef]
  34. Hou, S.; Liu, J.H.; Lv, W. Flotation Height Prediction under Stable and Vibration States in Air Cushion Furnace Based on Hard Division Method. Math. Probl. Eng. 2019, 2019, 1–14. [Google Scholar] [CrossRef] [Green Version]
  35. Chang, X.J.; Zeng, M.Q.; Liu, K.L.; Fu, L. Phase Engineering of High-Entropy Alloys. Adv. Mater. 2020, 32, 1–22. [Google Scholar] [CrossRef]
  36. Lemke, C.; Budka, M.; Gabrys, B. Metalearning: A survey of trends and technologies. Artif. Intell. Rev. 2015, 44, 117–130. [Google Scholar] [CrossRef] [Green Version]
  37. Arjmand, A.; Samizadeh, R.; Dehghani Saryazdi, M. Meta-learning in multivariate load demand forecasting with exogenous meta-features. Energy Effic. 2020, 13, 871–887. [Google Scholar] [CrossRef]
  38. Smith-Miles, K.A. Cross-disciplinary perspectives on meta-learning for algorithm selection. ACM Comput. Surv. 2009, 41, 1–25. [Google Scholar] [CrossRef]
  39. Zhang, S.C. Parimputation: From Imputation and Null-Imputation to Partially Imputation. IEEE Intell. Inform. Bull. 2008, 9, 32–38. [Google Scholar]
  40. Liu, H.W.; Wu, X.D.; Zhang, S.C. Neighbor selection for multilabel classification. Neurocomputing 2016, 182, 187–196. [Google Scholar] [CrossRef]
  41. Zhang, Y.; Wen, C.; Wang, C.X.; Antonov, S.; Xue, D.Z.; Bai, Y.; Su, Y.J. Phase prediction in high entropy alloys with a rational selection of materials descriptors and machine learning models. Acta Mater. 2020, 185, 528–539. [Google Scholar] [CrossRef]
  42. Tamal, M.; Alshammari, M.; Alabdullah, M.; Hourani, R.; Alola, H.A.; Hegazi, T.M. An integrated framework with machine learning and radiomics for accurate and rapid early diagnosis of COVID-19 from Chest X-ray. Expert Syst. Appl. 2021, 180, 1–8. [Google Scholar] [CrossRef] [PubMed]
  43. Wani, M.A.; Roy, K.K. Development and validation of consensus machine learning-based models for the prediction of novel small molecules as potential anti-tubercular agents. Mol. Divers. 2021, 1–12. [Google Scholar] [CrossRef]
  44. Toda-Caraballo, I.; Rivera-Díaz-del-Castillo, P.E.J. A criterion for the formation of high entropy alloys based on lattice distortion. Intermetallics 2016, 71, 76–87. [Google Scholar] [CrossRef]
  45. Leong, Z.Y.; Huang, Y.H.; Goodall, R.; Todd, I. Electronegativity and enthalpy of mixing biplots for High Entropy Alloy solid solution prediction. Mater. Chem. Phys. 2018, 210, 259–268. [Google Scholar] [CrossRef]
  46. Poletti, M.G.; Battezzati, L. Electronic and thermodynamic criteria for the occurrence of high entropy alloys in metallic systems. Acta Mater. 2014, 75, 297–306. [Google Scholar] [CrossRef]
  47. King, D.J.M.; Middleburgh, S.C.; McGregor, A.G.; Cortie, M.B. Predicting the formation and stability of single phase high-entropy alloys. Acta Mater. 2016, 104, 172–179. [Google Scholar] [CrossRef]
  48. Andreoli, A.F.; Orava, J.; Liaw, P.K.; Weber, H.; de Oliveira, M.F.; Nielsch, K.; Kaban, I. The elastic-strain energy criterion of phase formation for complex concentrated alloys. Materialia 2019, 5, 1–12. [Google Scholar] [CrossRef]
  49. Peng, Y.T.; Zhou, C.Y.; Lin, P.; Wen, D.Y.; Wang, X.D.; Zhong, X.Z.; Pan, D.H.; Que, Q.; Li, X.; Chen, L.; et al. Preoperative Ultrasound Radiomics Signatures for Noninvasive Evaluation of Biological Characteristics of Intrahepatic Cholangiocarcinoma. Acad. Radiol. 2020, 27, 785–797. [Google Scholar] [CrossRef]
  50. Jung, Y. Multiple predicting K-fold cross-validation for model selection. J. Nonparametr. Stat. 2017, 30, 197–215. [Google Scholar] [CrossRef]
  51. Hou, S.; Zhang, X.; Dai, W.; Han, X.; Hua, F. Multi-Model- and Soft-Transition-Based Height Soft Sensor for an Air Cushion Furnace. Sensors 2020, 20, 926. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Schematic diagram of meta-learning.
Figure 2. SNN of the query instance.
Figure 3. Schematic diagram of MLRM.
Figure 4. Schematic diagram of phase prediction frame of HEAs.
Figure 5. Quantity of SS, IM, SS+IM, and AM in the quinaries, senaries, and septenaries. (a) D1, (b) D2, (c) D3, (d) D4, (e) D5, (f) D6, (g) D7, (h) D8, (i) D9, (j) D10, (k) D11, and (l) D12.
Figure 6. Statistics of the occurrence time of each element in 12 datasets. (a) D1, (b) D2, (c) D3, (d) D4, (e) D5, (f) D6, (g) D7, (h) D8, (i) D9, (j) D10, (k) D11, (l) D12.
Figure 6. Statistics of the occurrence time of each element in 12 datasets. (a) D1, (b) D2, (c) D3, (d) D4, (e) D5, (f) D6, (g) D7, (h) D8, (i) D9, (j) D10, (k) D11, (l) D12.
Materials 15 03321 g006
Figure 7. Heatmap of Pearson correlation coefficient matrix among 5 parameters in 12 datasets.
Figure 7. Heatmap of Pearson correlation coefficient matrix among 5 parameters in 12 datasets.
Materials 15 03321 g007
Figure 8. Accuracy comparison between meta-learning and traditional ML algorithms. (a) all, (b) quinaries, (c) senaries, and (d) septenaries.
Figure 8. Accuracy comparison between meta-learning and traditional ML algorithms. (a) all, (b) quinaries, (c) senaries, and (d) septenaries.
Materials 15 03321 g008
Figure 9. Accuracy comparison between MLRM and meta-learning. (a) all, (b) quinaries, (c) senaries, and (d) septenaries.
Figure 9. Accuracy comparison between MLRM and meta-learning. (a) all, (b) quinaries, (c) senaries, and (d) septenaries.
Materials 15 03321 g009
Figure 10. Scatter plot based on the δ Ω coordinate system. (a) D1, (b) D2, (c) D3, (d) D4, (e) D5, (f) D6, (g) D7, (h) D8, (i) D9, (j) D10, (k) D11, (l) D12.
Figure 10. Scatter plot based on the δ Ω coordinate system. (a) D1, (b) D2, (c) D3, (d) D4, (e) D5, (f) D6, (g) D7, (h) D8, (i) D9, (j) D10, (k) D11, (l) D12.
Materials 15 03321 g010
Figure 11. Accuracy comparison between PPH and MLRM. (a) all, (b) quinaries, (c) senaries, and (d) septenaries.
Figure 11. Accuracy comparison between PPH and MLRM. (a) all, (b) quinaries, (c) senaries, and (d) septenaries.
Materials 15 03321 g011
Table 1. Maximum values and minimum values of the five parameters.

Number | Parameter | Maximum Value | Minimum Value | Parameter Description
1 | ΔSmix | 16.18 | 7.78 | thermodynamic parameter
2 | ΔHmix | 17.12 | −48.64 | chemical parameter
3 | δ | 35.30 | 0.21 | electronic parameter
4 | Δχ | 23.10 | 0.04 | electronic parameter
5 | Ω | 283.50 | 0.37 | chemical thermodynamic parameter
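For orientation, the five parameters in Table 1 are conventionally computed from the molar fractions c_i of the constituent elements. The formulas below follow the standard definitions used in the HEA literature and are given as a reference sketch; the paper's own Methods section governs the exact values reported here.

```latex
% Conventional parameter definitions (R: gas constant; c_i: molar fraction;
% r_i: atomic radius; \chi_i: Pauling electronegativity; (T_m)_i: melting point
% of element i; \Delta H_{ij}^{mix}: binary mixing enthalpy of the i–j pair;
% \delta is usually reported as a percentage).
\begin{align}
  \Delta S_{mix} &= -R \sum_{i=1}^{n} c_i \ln c_i,\\
  \Delta H_{mix} &= \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} 4\, \Delta H_{ij}^{mix}\, c_i c_j,\\
  \delta &= \sqrt{\sum_{i=1}^{n} c_i \left(1 - r_i/\bar{r}\right)^2},
      \qquad \bar{r} = \sum_{i=1}^{n} c_i r_i,\\
  \Delta \chi &= \sqrt{\sum_{i=1}^{n} c_i \left(\chi_i - \bar{\chi}\right)^2},
      \qquad \bar{\chi} = \sum_{i=1}^{n} c_i \chi_i,\\
  \Omega &= \frac{T_m\, \Delta S_{mix}}{\left|\Delta H_{mix}\right|},
      \qquad T_m = \sum_{i=1}^{n} c_i\, (T_m)_i.
\end{align}
```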
Table 2. Pandas snapshot of the first six samples.

Number | HEAs | ΔSmix | ΔHmix | δ (%) | Δχ | Ω | Phase
0 | AlCr0.5NbTiV | 13.150000 | −15.410000 | 5.230000 | 0.037647 | 1.680000 | SS
1 | Mg65Cu15Ag5Pd5Gd10 | 9.100000 | −13.240000 | 9.360000 | 0.298062 | 0.770000 | AM
2 | AlCoCrFeNiSi0.6 | 14.778118 | −22.755102 | 5.877203 | 0.120010 | 1.090691 | SS+IM
3 | CoFeMnTiVZr0.4 | 14.585020 | −16.049383 | 8.088626 | 0.165697 | 1.692345 | IM
4 | Ti0.2CoCrFeNiCuAl0.5 | 15.445251 | −4.148969 | 0.210000 | 0.118750 | 6.318158 | SS
5 | Al0.5CoCrCuFeNiTi0.8 | 15.995280 | −10.100000 | 5.800000 | 0.137280 | 2.725683 | SS+IM
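Since Table 2 is presented as a pandas snapshot, the following minimal sketch shows how such a preview could be generated. The file name hea_dataset.csv and the column labels are illustrative assumptions, not the authors' actual data file.

```python
# Minimal sketch (assumed file name and column labels) for previewing the
# HEA dataset the way Table 2 does.
import pandas as pd

columns = ["HEAs", "dSmix", "dHmix", "delta(%)", "dChi", "Omega", "Phase"]
df = pd.read_csv("hea_dataset.csv", usecols=columns)

# First six samples, as in Table 2.
print(df.head(6))

# Value ranges of the five parameters, for comparison with Table 1.
print(df[["dSmix", "dHmix", "delta(%)", "dChi", "Omega"]].agg(["min", "max"]))
```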
Table 3. Accuracy comparison between meta-learning and traditional ML algorithms in all.

ID | DT | KNN | SVM | RF | Bagging | Meta-Learning
D1 | 0.816 ± 0.012 | 0.849 ± 0.008 | 0.835 ± 0.007 | 0.839 ± 0.010 | 0.837 ± 0.005 | 0.837 ± 0.005
D2 | 0.932 ± 0.003 | 0.941 ± 0.022 | 0.904 ± 0.009 | 0.949 ± 0.013 | 0.944 ± 0.020 | 0.932 ± 0.003
D3 | 0.856 ± 0.008 | 0.873 ± 0.019 | 0.769 ± 0.013 | 0.873 ± 0.003 | 0.861 ± 0.002 | 0.861 ± 0.002
D4 | 0.811 ± 0.005 | 0.813 ± 0.012 | 0.803 ± 0.017 | 0.838 ± 0.009 | 0.833 ± 0.014 | 0.833 ± 0.014
D5 | 0.884 ± 0.030 | 0.853 ± 0.016 | 0.853 ± 0.011 | 0.857 ± 0.012 | 0.871 ± 0.030 | 0.871 ± 0.030
D6 | 0.957 ± 0.009 | 0.945 ± 0.004 | 0.891 ± 0.007 | 0.989 ± 0.007 | 0.985 ± 0.008 | 0.957 ± 0.009
D7 | 0.918 ± 0.010 | 0.890 ± 0.027 | 0.842 ± 0.005 | 0.900 ± 0.008 | 0.911 ± 0.009 | 0.911 ± 0.009
D8 | 0.873 ± 0.025 | 0.857 ± 0.017 | 0.760 ± 0.023 | 0.839 ± 0.011 | 0.872 ± 0.006 | 0.839 ± 0.011
D9 | 0.960 ± 0.003 | 0.967 ± 0.006 | 0.860 ± 0.009 | 0.978 ± 0.010 | 0.983 ± 0.010 | 0.983 ± 0.010
D10 | 0.787 ± 0.009 | 0.711 ± 0.019 | 0.784 ± 0.010 | 0.712 ± 0.012 | 0.766 ± 0.026 | 0.766 ± 0.026
D11 | 0.943 ± 0.006 | 0.967 ± 0.016 | 0.833 ± 0.022 | 0.988 ± 0.002 | 0.995 ± 0.002 | 0.995 ± 0.002
D12 | 0.912 ± 0.004 | 0.898 ± 0.012 | 0.832 ± 0.009 | 0.898 ± 0.015 | 0.912 ± 0.003 | 0.912 ± 0.003
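Table 3 (and the arity-specific Tables 4–6 that follow) reports cross-validated accuracies (mean ± standard deviation) of the five candidate algorithms and of meta-learning. The snippet below is a minimal sketch of how the per-algorithm part of such a table could be reproduced with scikit-learn; the 5-fold split, the default hyperparameters, and the column names reused from the previous sketch are assumptions, not the authors' exact configuration.

```python
# Minimal sketch (assumed data file, columns, and CV settings) for computing
# per-algorithm cross-validated accuracies as in Table 3.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier

df = pd.read_csv("hea_dataset.csv")  # hypothetical file name, as above
X = df[["dSmix", "dHmix", "delta(%)", "dChi", "Omega"]].values
y = df["Phase"].values

candidates = {
    "DT": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "RF": RandomForestClassifier(random_state=0),
    "Bagging": BaggingClassifier(random_state=0),
}

# Report mean ± standard deviation of the cross-validated accuracy per algorithm.
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```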
Table 4. Accuracy comparison between meta-learning and traditional ML algorithms in quinaries.

ID | DT | KNN | SVM | RF | Bagging | Meta-Learning
D1 | 0.920 ± 0.005 | 0.920 ± 0.003 | 0.800 ± 0.002 | 0.960 ± 0.027 | 0.920 ± 0.002 | 0.920 ± 0.002
D2 | 0.893 ± 0.015 | 0.893 ± 0.012 | 0.887 ± 0.009 | 0.927 ± 0.005 | 0.927 ± 0.004 | 0.927 ± 0.004
D3 | 0.840 ± 0.005 | 0.881 ± 0.015 | 0.786 ± 0.017 | 0.894 ± 0.010 | 0.866 ± 0.010 | 0.894 ± 0.010
D4 | 0.733 ± 0.034 | 0.819 ± 0.011 | 0.795 ± 0.035 | 0.826 ± 0.034 | 0.810 ± 0.008 | 0.810 ± 0.008
D6 | 0.945 ± 0.025 | 0.956 ± 0.005 | 0.911 ± 0.005 | 0.994 ± 0.002 | 0.991 ± 0.001 | 0.994 ± 0.002
D7 | 0.898 ± 0.007 | 0.831 ± 0.004 | 0.895 ± 0.003 | 0.898 ± 0.005 | 0.876 ± 0.002 | 0.898 ± 0.005
D8 | 0.914 ± 0.004 | 0.881 ± 0.007 | 0.779 ± 0.016 | 0.881 ± 0.003 | 0.914 ± 0.006 | 0.881 ± 0.003
D9 | 0.905 ± 0.027 | 0.950 ± 0.011 | 0.800 ± 0.022 | 0.968 ± 0.006 | 0.968 ± 0.012 | 0.968 ± 0.006
D10 | 0.905 ± 0.004 | 0.850 ± 0.008 | 0.900 ± 0.003 | 0.891 ± 0.027 | 0.888 ± 0.007 | 0.891 ± 0.027
D11 | 0.938 ± 0.006 | 0.900 ± 0.024 | 0.567 ± 0.027 | 0.970 ± 0.009 | 0.978 ± 0.005 | 0.970 ± 0.009
D12 | 0.808 ± 0.025 | 0.833 ± 0.023 | 0.759 ± 0.029 | 0.853 ± 0.022 | 0.853 ± 0.033 | 0.853 ± 0.022
Table 5. Accuracy comparison between meta-learning and traditional ML algorithms in senaries.

ID | DT | KNN | SVM | RF | Bagging | Meta-Learning
D1 | 0.822 ± 0.015 | 0.855 ± 0.008 | 0.830 ± 0.014 | 0.847 ± 0.007 | 0.872 ± 0.006 | 0.872 ± 0.006
D2 | 0.927 ± 0.009 | 0.893 ± 0.013 | 0.893 ± 0.009 | 0.893 ± 0.020 | 0.893 ± 0.006 | 0.893 ± 0.006
D3 | 0.884 ± 0.008 | 0.884 ± 0.011 | 0.884 ± 0.018 | 0.900 ± 0.014 | 0.900 ± 0.013 | 0.900 ± 0.013
D4 | 0.824 ± 0.012 | 0.899 ± 0.017 | 0.889 ± 0.005 | 0.839 ± 0.010 | 0.838 ± 0.017 | 0.839 ± 0.010
D6 | 0.971 ± 0.010 | 0.833 ± 0.009 | 0.833 ± 0.011 | 0.964 ± 0.017 | 0.972 ± 0.012 | 0.972 ± 0.012
D7 | 0.897 ± 0.007 | 0.903 ± 0.002 | 0.903 ± 0.013 | 0.886 ± 0.008 | 0.899 ± 0.008 | 0.899 ± 0.008
D10 | 0.683 ± 0.023 | 0.700 ± 0.036 | 0.700 ± 0.010 | 0.750 ± 0.032 | 0.817 ± 0.006 | 0.817 ± 0.006
D12 | 0.910 ± 0.020 | 0.890 ± 0.007 | 0.890 ± 0.009 | 0.935 ± 0.005 | 0.880 ± 0.027 | 0.880 ± 0.027
Table 6. Accuracy comparison between meta-learning and traditional ML algorithms in septenaries.

ID | DT | KNN | SVM | RF | Bagging | Meta-Learning
D2 | 0.855 ± 0.031 | 0.900 ± 0.005 | 0.900 ± 0.007 | 0.918 ± 0.022 | 0.911 ± 0.013 | 0.918 ± 0.022
D3 | 0.806 ± 0.016 | 0.680 ± 0.008 | 0.720 ± 0.005 | 0.806 ± 0.007 | 0.806 ± 0.011 | 0.806 ± 0.007
D4 | 0.959 ± 0.009 | 0.660 ± 0.028 | 0.747 ± 0.012 | 0.969 ± 0.013 | 0.970 ± 0.005 | 0.969 ± 0.013
D7 | 0.957 ± 0.015 | 0.650 ± 0.019 | 0.717 ± 0.019 | 0.964 ± 0.009 | 0.968 ± 0.018 | 0.964 ± 0.009
D8 | 0.925 ± 0.033 | 0.900 ± 0.023 | 0.800 ± 0.035 | 0.860 ± 0.018 | 0.844 ± 0.017 | 0.860 ± 0.018
D12 | 0.867 ± 0.027 | 0.876 ± 0.034 | 0.876 ± 0.031 | 0.933 ± 0.022 | 0.933 ± 0.033 | 0.933 ± 0.022
Table 7. Recommendation results between MLRM and meta-learning in all.

ID | Meta-Learning | MLRM | TRUE
D1 | Bagging, KNN | KNN, RF | KNN, RF
D2 | DT, Bagging | RF, Bagging | RF, Bagging
D3 | Bagging, RF | RF, Bagging | RF, KNN
D4 | Bagging, DT | RF, Bagging | RF, Bagging
D5 | Bagging, RF | DT, Bagging | DT, Bagging
D6 | DT, Bagging | RF, Bagging | RF, Bagging
D7 | Bagging, RF | DT, Bagging | DT, Bagging
D8 | RF, Bagging | DT, Bagging | DT, Bagging
D9 | Bagging, DT | Bagging, RF | Bagging, RF
D10 | Bagging, RF | DT, Bagging | DT, SVM
D11 | Bagging, RF | Bagging, RF | Bagging, RF
D12 | Bagging, RF | Bagging, RF | Bagging, DT
Table 8. Recommendation results between MLRM and meta-learning in quinaries.

ID | Meta-Learning | MLRM | TRUE
D1 | Bagging, RF | RF, Bagging | RF, Bagging
D2 | Bagging, RF | Bagging, RF | Bagging, RF
D3 | RF, Bagging | RF, Bagging | RF, KNN
D4 | Bagging, RF | RF, Bagging | RF, KNN
D6 | RF, Bagging | RF, Bagging | RF, Bagging
D7 | RF, Bagging | RF, Bagging | RF, DT
D8 | RF, Bagging | DT, Bagging | DT, Bagging
D9 | RF, Bagging | RF, Bagging | RF, Bagging
D10 | RF, Bagging | DT, RF | DT, SVM
D11 | RF, Bagging | Bagging, RF | Bagging, RF
D12 | RF, Bagging | RF, Bagging | RF, Bagging
Table 9. Recommendation results between MLRM and meta-learning in senaries.

ID | Meta-Learning | MLRM | TRUE
D1 | Bagging, RF | Bagging, KNN | Bagging, KNN
D2 | Bagging, RF | DT, Bagging | DT, Bagging
D3 | Bagging, RF | Bagging, RF | Bagging, RF
D4 | RF, Bagging | KNN, SVM | KNN, SVM
D6 | Bagging, RF | Bagging, RF | Bagging, DT
D7 | Bagging, RF | KNN, SVM | KNN, SVM
D10 | Bagging, DT | Bagging, RF | Bagging, RF
D12 | Bagging, RF | RF, DT | RF, DT
Table 10. Recommendation results between MLRM and meta-learning in septenaries.

ID | Meta-Learning | MLRM | TRUE
D2 | RF, Bagging | RF, Bagging | RF, Bagging
D3 | RF, Bagging | RF, Bagging | RF, Bagging
D4 | RF, Bagging | Bagging, RF | Bagging, RF
D7 | RF, Bagging | Bagging, RF | Bagging, RF
D8 | RF, Bagging | DT, RF | DT, KNN
D12 | RF, DT | RF, Bagging | RF, Bagging
Table 11. Accuracy comparison between MLRM and meta-learning in all.

ID | Meta-Learning | MLRM | TRUE
D1 | 0.837 ± 0.005 | 0.849 ± 0.008 | 0.849 ± 0.008
D2 | 0.932 ± 0.003 | 0.949 ± 0.013 | 0.949 ± 0.013
D3 | 0.861 ± 0.002 | 0.873 ± 0.003 | 0.873 ± 0.003
D4 | 0.833 ± 0.014 | 0.838 ± 0.009 | 0.838 ± 0.009
D5 | 0.871 ± 0.030 | 0.884 ± 0.030 | 0.884 ± 0.030
D6 | 0.957 ± 0.009 | 0.989 ± 0.007 | 0.989 ± 0.007
D7 | 0.911 ± 0.009 | 0.918 ± 0.010 | 0.918 ± 0.010
D8 | 0.839 ± 0.011 | 0.873 ± 0.025 | 0.873 ± 0.025
D9 | 0.983 ± 0.010 | 0.983 ± 0.010 | 0.983 ± 0.010
D10 | 0.766 ± 0.026 | 0.787 ± 0.009 | 0.787 ± 0.009
D11 | 0.995 ± 0.002 | 0.995 ± 0.002 | 0.995 ± 0.002
D12 | 0.912 ± 0.003 | 0.912 ± 0.003 | 0.912 ± 0.003
Table 12. Accuracy comparison between MLRM and meta-learning in quinaries.

ID | Meta-Learning | MLRM | TRUE
D1 | 0.920 ± 0.002 | 0.960 ± 0.027 | 0.960 ± 0.027
D2 | 0.927 ± 0.004 | 0.927 ± 0.004 | 0.927 ± 0.004
D3 | 0.894 ± 0.010 | 0.894 ± 0.010 | 0.894 ± 0.010
D4 | 0.810 ± 0.008 | 0.826 ± 0.034 | 0.826 ± 0.034
D6 | 0.994 ± 0.002 | 0.994 ± 0.002 | 0.994 ± 0.002
D7 | 0.898 ± 0.005 | 0.898 ± 0.005 | 0.898 ± 0.005
D8 | 0.881 ± 0.003 | 0.914 ± 0.004 | 0.914 ± 0.004
D9 | 0.968 ± 0.006 | 0.968 ± 0.006 | 0.968 ± 0.006
D10 | 0.891 ± 0.027 | 0.905 ± 0.004 | 0.905 ± 0.004
D11 | 0.970 ± 0.009 | 0.978 ± 0.005 | 0.978 ± 0.005
D12 | 0.853 ± 0.022 | 0.853 ± 0.022 | 0.853 ± 0.022
Table 13. Accuracy comparison between MLRM and meta-learning in senaries.

ID | Meta-Learning | MLRM | TRUE
D1 | 0.872 ± 0.006 | 0.872 ± 0.006 | 0.872 ± 0.006
D2 | 0.893 ± 0.006 | 0.927 ± 0.009 | 0.927 ± 0.009
D3 | 0.900 ± 0.013 | 0.900 ± 0.013 | 0.900 ± 0.013
D4 | 0.839 ± 0.010 | 0.899 ± 0.017 | 0.899 ± 0.017
D6 | 0.972 ± 0.012 | 0.972 ± 0.012 | 0.972 ± 0.012
D7 | 0.899 ± 0.008 | 0.903 ± 0.002 | 0.903 ± 0.002
D10 | 0.817 ± 0.006 | 0.817 ± 0.006 | 0.817 ± 0.006
D12 | 0.880 ± 0.027 | 0.935 ± 0.005 | 0.935 ± 0.005
Table 14. Accuracy comparison between MLRM and meta-learning in septenaries.

ID | Meta-Learning | MLRM | TRUE
D2 | 0.918 ± 0.022 | 0.918 ± 0.022 | 0.918 ± 0.022
D3 | 0.806 ± 0.007 | 0.806 ± 0.007 | 0.806 ± 0.007
D4 | 0.969 ± 0.013 | 0.970 ± 0.005 | 0.970 ± 0.005
D7 | 0.964 ± 0.009 | 0.968 ± 0.018 | 0.968 ± 0.018
D8 | 0.860 ± 0.018 | 0.925 ± 0.033 | 0.925 ± 0.033
D12 | 0.933 ± 0.022 | 0.933 ± 0.022 | 0.933 ± 0.022
Table 15. Accuracy comparison between PPH and MLRM in all, quinaries, senaries, and septenaries.

ID | All (MLRM) | All (PPH) | Quinaries (MLRM) | Quinaries (PPH) | Senaries (MLRM) | Senaries (PPH) | Septenaries (MLRM) | Septenaries (PPH)
D1 | 0.849 ± 0.008 | 0.859 ± 0.005 | 0.960 ± 0.027 | 1.000 ± 0.000 | 0.872 ± 0.006 | 0.889 ± 0.008 | NULL | NULL
D2 | 0.949 ± 0.013 | 0.959 ± 0.007 | 0.927 ± 0.004 | 0.958 ± 0.021 | 0.927 ± 0.009 | 0.955 ± 0.014 | 0.918 ± 0.022 | 0.941 ± 0.009
D3 | 0.873 ± 0.003 | 0.884 ± 0.011 | 0.894 ± 0.010 | 0.917 ± 0.013 | 0.900 ± 0.013 | 0.915 ± 0.009 | 0.806 ± 0.007 | 0.890 ± 0.010
D4 | 0.838 ± 0.009 | 0.862 ± 0.013 | 0.826 ± 0.034 | 0.834 ± 0.016 | 0.899 ± 0.017 | 0.922 ± 0.007 | 0.970 ± 0.005 | 0.987 ± 0.005
D5 | 0.884 ± 0.030 | 0.886 ± 0.009 | NULL | NULL | NULL | NULL | NULL | NULL
D6 | 0.989 ± 0.007 | 0.997 ± 0.002 | 0.994 ± 0.002 | 0.999 ± 0.001 | 0.972 ± 0.012 | 1.000 ± 0.000 | NULL | NULL
D7 | 0.918 ± 0.010 | 0.929 ± 0.004 | 0.898 ± 0.005 | 0.913 ± 0.005 | 0.903 ± 0.002 | 0.932 ± 0.013 | 0.968 ± 0.018 | 0.986 ± 0.006
D8 | 0.873 ± 0.025 | 0.903 ± 0.020 | 0.914 ± 0.004 | 0.966 ± 0.004 | NULL | NULL | 0.925 ± 0.033 | 1.000 ± 0.000
D9 | 0.983 ± 0.010 | 0.994 ± 0.003 | 0.968 ± 0.006 | 0.997 ± 0.002 | NULL | NULL | NULL | NULL
D10 | 0.787 ± 0.009 | 0.797 ± 0.002 | 0.905 ± 0.004 | 0.921 ± 0.012 | 0.817 ± 0.006 | 0.882 ± 0.025 | NULL | NULL
D11 | 0.995 ± 0.002 | 1.000 ± 0.000 | 0.978 ± 0.005 | 1.000 ± 0.000 | NULL | NULL | NULL | NULL
D12 | 0.912 ± 0.003 | 0.920 ± 0.004 | 0.853 ± 0.022 | 0.868 ± 0.011 | 0.935 ± 0.005 | 0.950 ± 0.019 | 0.933 ± 0.022 | 0.941 ± 0.008
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
