Article

Advanced Machine Learning Methods for the Prediction of the Optical Parameters of Tellurite Glasses

by Fahimeh Ahmadi 1, Mohsen Hajihassani 2, Tryfon Sivenas 3, Stefanos Papanikolaou 4,5 and Panagiotis G. Asteris 3,*
1 Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
2 Department of Engineering, Urmia University, Urmia 5756151818, Iran
3 Computational Mechanics Laboratory, School of Pedagogical and Technological Education, 12243 Athens, Greece
4 NOMATEN Centre of Excellence, National Center for Nuclear Research, ul. A. Soltana 7, Swierk, 05-400 Otwock, Poland
5 Department of Nuclear Science & Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
* Author to whom correspondence should be addressed.
Technologies 2025, 13(6), 211; https://doi.org/10.3390/technologies13060211
Submission received: 8 March 2025 / Revised: 20 April 2025 / Accepted: 19 May 2025 / Published: 25 May 2025

Abstract: This study evaluates the predictive performance of advanced machine learning models, including DeepBoost, XGBoost, CatBoost, RF, and MLP, in estimating the Ω2, Ω4, and Ω6 parameters based on a comprehensive set of input variables. Among the models, DeepBoost consistently demonstrated the best performance across the training and testing phases. For the Ω2 prediction, DeepBoost achieved an R2 of 0.974 and accuracy of 99.895% in the training phase, with corresponding values of 0.971 and 99.902% in the testing phase. In comparison, XGBoost ranked second with an R2 of 0.929 and accuracy of 99.870% during testing. For Ω4, DeepBoost achieved a training phase R2 of 0.955 and accuracy of 99.846%, while the testing phase results included an R2 of 0.945 and accuracy of 99.951%. Similar trends were observed for Ω6, where DeepBoost obtained near-perfect training phase results (R2 = 0.997, accuracy = 99.968%) and testing phase performance (R2 = 0.994, accuracy = 99.946%). These findings are further supported by violin plots and correlation analyses, underscoring DeepBoost’s superior predictive reliability and generalization capabilities. This work highlights the importance of model selection in predictive tasks and demonstrates the potential of machine learning for capturing complex relationships in data.

1. Introduction

The Judd–Ofelt (JO) theory has received a great deal of interest due to its wide range of applications in materials science and chemistry, along with the numerous academic questions it raises. Such uses include solid-state lasers [1,2], thermal sensors [3,4], optical amplifiers, upconversion [5], and diverse biological contexts [6,7]. One of the main uses of the JO theory in these applications is to provide a description of the optical properties of materials [8]. These properties may include characteristics such as transition probability, branching ratio, and emission cross-section.
The JO theory provides significant insights into the structure of glass and the environment of the rare earth (RE), since the values of the parameters Ωt (t = 2, 4, and 6) are sensitive to variations in the RE site symmetry and in the RE-O covalency. The Ω2 parameter describes the ligand field asymmetry of the local RE environment [9,10,11] and/or is proportional to the degree of RE-O bond covalency [12]. In contrast, the JO intensity parameters Ω4 and Ω6 reflect bulk properties of the glass matrix, such as its viscosity and dielectric behavior [13,14].
Although the JO theory exhibits mathematical elegance and physical utility, it presents a challenging framework not only in its understanding but also in its use. Similar to numerous theories in this field, it requires a sufficient depth of knowledge of solid-state physics and quantum mechanics. Additionally, the glass matrix under consideration for the calculation of the JO intensity parameters and its subsequent characterizations are highly specialized. Consequently, the combination of restrictions on the materials’ preparation, the measurement methods, the subsequent calculations, and the final interpretation makes JO theory elegant but frequently unapproachable. In this respect, given the breadth of applications that have emerged despite the many limitations, it stands to reason that a more accessible method for obtaining the same or similar information would have very important implications for many scientific activities: for instance, predicting the three JO parameters in the absence of spectral measurements and the corresponding mathematical calculations.
Predicting the relationship between composition and properties plays a crucial role in the development of novel compositions. Developing physics-based models for predicting the properties of glasses remains a significant challenge that needs to be addressed. An alternative method to address these challenges is to employ data-based modeling methods, including machine learning (ML) [9,10,15]. These techniques rely on accessible data to develop models that capture the hidden trends in the relationships between input and output. In the field of material informatics, ML is employed for various applications, including the development of interatomic potentials [11,16,17], the prediction of novel materials and composites [18,19], the prediction of the composition–property relationship [10,15,20,21], and the development of the energy landscape [22]. Specifically, ML has been successfully used in oxide glasses for predicting a wide range of equilibrium and nonequilibrium composition–property relationships, including the liquidus temperature [9], solubility [20], glass transition temperature [15], stiffness [23], and dissolution kinetics [10].
This research leverages powerful machine learning models, including DeepBoost, XGBoost, CatBoost, Random Forest (RF), and a multilayer perceptron (MLP), to estimate the JO parameters in Er3+-doped tellurite glasses. Er3+-doped tellurite glasses have received a great deal of interest in recent years because of their optical and chemical properties [24]. Their high linear and nonlinear refractive indices, relatively low-phonon-energy spectra, low Te-O bonding strength, chemical durability, and low glass transition temperatures make them good candidates for fiber laser and 1.5 μm broadband optical amplifier applications [24].
The experimental oscillator strengths ($f_{exp}$) of the f-f induced electric dipole transitions of the various absorption bands are determined by measuring the integral area of the corresponding absorption transitions using the Judd–Ofelt theory [13,14] and the following equation:

$$f_{exp} = \frac{2.303\,mc^2}{N\pi e^2}\int \varepsilon(\nu)\,d\nu = 4.318\times 10^{-9}\int \varepsilon(\nu)\,d\nu \quad (1)$$
where m and e are the electron mass and the electron charge, respectively; c is the velocity of light; N is Avogadro’s number; and ν is the transition energy (in cm−1). The oscillator strengths ($f_{cal}$) for each absorption transition of the rare-earth ions within the 4f configuration were calculated through the following equation:
$$f_{cal} = \frac{8\pi^2 mc\nu}{3h(2J+1)}\,\frac{(n^2+2)^2}{9n} \times \sum_{t=2,4,6} \Omega_t \left|\left\langle \Psi J \right\| U^{(t)} \left\| \Psi' J' \right\rangle\right|^2 \quad (2)$$
where n is the refractive index; J is the total angular momentum of the ground state; $\Omega_t$ (t = 2, 4, and 6) are the Judd–Ofelt intensity parameters, which are used to characterize the metal–ligand bond in the host matrix; and $\|U^{(t)}\|^2$ are the squared reduced matrix elements of the unit tensor operator. The squared reduced matrix elements $\|U^{(t)}\|^2$ for the present work were obtained from the reported literature [10].
The JO intensity parameters are host-dependent and play a vital role in investigating the glass structure and transition rates of the RE ion energy levels. The Ω2 JO parameter is related to the covalency and symmetry of the ligand field around the rare-earth ions [15]. The Ω4 and Ω6 parameters explore bulk properties like viscosity, the dielectric constant, and the vibronic transitions around the rare-earth ions [9].
Traditional physics-based models, such as those derived from the Judd–Ofelt theory, have been extensively used to predict the optical parameters of rare-earth-doped glasses. These models rely heavily on detailed knowledge of the material’s atomic structure and the interactions between the rare-earth ions and the glass matrix. While they provide valuable insights into the optical properties of the materials, the process often involves complex calculations and assumptions that may not capture the full range of material behaviors, particularly in heterogeneous or poorly characterized systems.
In contrast, ML models, such as DeepBoost, XGBoost, and CatBoost, offer the advantage of data-driven predictions that can account for complex, nonlinear relationships in the data without relying on predefined physical models. These models excel at handling large, multi-dimensional datasets, which may be difficult to interpret using traditional physics-based approaches. Our study demonstrates that ML models, particularly DeepBoost, outperform conventional methods in terms of predictive accuracy and computational efficiency, making them a promising alternative for predicting the optical parameters in materials science. Moreover, ML approaches require fewer domain-specific assumptions, making them applicable to a broader range of materials and conditions where conventional models may not be easily adapted. While traditional models remain invaluable for understanding fundamental principles, ML methods complement them by providing more flexible, scalable, and efficient solutions for predicting material properties.

2. Experimental Procedure

While substantial progress has been made in leveraging machine learning techniques for predictive modeling, significant gaps remain in the systematic evaluation and application of advanced algorithms like DeepBoost, XGBoost, and CatBoost for specific parameters such as Ω2, Ω4, and Ω6. Existing studies often rely on traditional modeling approaches or simpler machine learning models, which fail to capture the intricate nonlinear relationships present in complex datasets with the same level of accuracy and robustness. Moreover, the current literature lacks a detailed comparative analysis of these advanced boosting algorithms in the context of multi-parameter prediction tasks. This creates uncertainty regarding their relative strengths, limitations, and applicability to real-world scenarios. Additionally, most studies do not provide a comprehensive assessment of computational efficiency alongside predictive performance, an essential aspect for the practical deployment of machine learning models in industrial and scientific domains. This study addresses these gaps by introducing a rigorous evaluation framework for these algorithms, emphasizing both accuracy and computational efficiency. By doing so, it provides critical insights into their suitability for modeling Ω2, Ω4, and Ω6, offering a novel contribution to the field and setting the stage for further advancements in machine learning-driven predictive analytics.
In this study, a significant portion of the scientific literature related to the experimental calculation of the three JO parameters (Ω2, Ω4, and Ω6) in erbium-doped tellurite glasses was examined. The final review involved 26 scientific papers, which corresponded to 70 unique types of tellurite glasses doped with erbium [25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50]. The corresponding JO parameters and the percentage of oxide compositions for each glass were determined using stoichiometry (Table 1 summarizes the collected and synthesized data).

3. Research Significance

This study is significant as it advances the application of cutting-edge machine learning models—DeepBoost, XGBoost, and CatBoost—in accurately predicting the Ω2, Ω4, and Ω6 parameters, which are crucial in various scientific and industrial domains. The research provides a comprehensive comparison of these algorithms, highlighting their ability to model complex, nonlinear relationships with high precision. By achieving R2 values of up to 0.997 and error metrics such as RMSE and MAPE at remarkably low levels, this study sets a benchmark for predictive modeling in the field. The findings demonstrate the transformative potential of these models in fields like materials science, structural engineering, and environmental management, where accurate parameter predictions are critical for optimizing designs, processes, and resource utilization. Furthermore, the detailed evaluation methodology presented here establishes a framework for future research aiming to adopt advanced machine learning techniques for predictive analytics, fostering a more data-driven and efficient approach to problem-solving. By addressing computational efficiency and prediction reliability, this work also contributes to enhancing real-world applicability, bridging the gap between theoretical advancements and practical implementation in data-driven domains.

4. Data Presentation

This research investigates a substantial segment of the scientific literature concerning the experimental determination of the three JO parameters (Ω2, Ω4, and Ω6) in RE-doped tellurite glasses. The concluding review encompassed scholarly articles [25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50], which related to 70 unique varieties of Er3+-doped tellurite glasses. The relevant JO parameters and the percentage of oxide compositions for each glass were established using stoichiometry.
The dataset used in this study includes a comprehensive set of input and output parameters relevant to the analysis of chemical compositions and their effects on the output indices (Ω2, Ω4, and Ω6). Table 1 provides a detailed summary of the descriptive statistics for all the parameters involved. For each parameter, the key statistical metrics are reported, including the mean, median, standard deviation, and minimum and maximum values. Among the input parameters, the oxide compositions (e.g., TeO2, SrO, P2O5, CaO, and CaF2) and other chemical compounds (e.g., K2O, Bi2O3, and TiO2) show considerable variability, reflecting the diverse chemical nature of the dataset. For example, TeO2 exhibits a wide range of values, with a mean of 46.483 and a standard deviation of 23.523, indicating significant variation across the samples. Similarly, other components, such as SrO and P2O5, have skewed distributions, as evidenced by their median values being notably different from the mean. The maximum values of certain parameters, such as P2O5 (35) and B2O3 (79.5), demonstrate that some samples contain extraordinarily high concentrations of specific compounds, which may influence the output indices significantly.

The output parameters (Ω2, Ω4, and Ω6) represent specific indices calculated based on the input compositions. These indices display unique statistical characteristics. For instance, Ω2 has a mean of 5.937 and a standard deviation of 2.457, suggesting moderate variability. In contrast, Ω4 and Ω6 show lower mean values of 1.847 and 1.590, respectively, with relatively smaller standard deviations. These indices provide a quantitative measure of the system’s behavior, which is further analyzed in the correlation matrix (Figure 1).

The descriptive statistics serve as a foundation for the subsequent correlation and modeling analysis. By understanding the variability and distribution of the input and output parameters, researchers can better assess the relationships and interactions within the dataset, ultimately enhancing the interpretability of the findings.
The correlation matrix depicted in Figure 1 illustrates the relationships between the input parameters and the output indices (Ω2, Ω4, and Ω6). The matrix displays the Pearson correlation coefficients, which quantify the linear relationship between pairs of variables. Values closer to 1 or −1 indicate stronger positive or negative correlations, respectively, while values near zero suggest weak or no correlations.

Several key patterns can be observed from the matrix. The parameter TeO2 shows a moderate negative correlation with Ω2 (−0.353) and a weak negative correlation with Ω6 (−0.081), indicating that higher concentrations of TeO2 might slightly reduce these indices. In contrast, SrO exhibits a strong positive correlation with Ω2 (0.711) and a moderate positive correlation with Ω6 (0.591), suggesting its significant influence on these outputs. Interestingly, Bi2O3 is also strongly correlated with Ω2 (0.674) and Ω6 (0.605), highlighting its potential role in determining the system’s characteristics.

Some input parameters demonstrate notable interdependence. For example, CaF2 and SrO are strongly positively correlated (0.709), as are MgO and K2O (0.743). These relationships may indicate underlying chemical or physical interactions between these compounds. Additionally, the weak or negative correlations observed between some parameters, such as B2O3 and ZnO (−0.307), suggest minimal interaction or opposing trends.

The output indices Ω2, Ω4, and Ω6 exhibit distinct correlations with the input parameters. Ω2 shows significant positive relationships with several variables, including SrO, Bi2O3, and CaF2, while Ω4 demonstrates strong positive correlations with CdF2 and moderate negative correlations with MgO and ZnO. Ω6, on the other hand, is positively influenced by Bi2O3 and SrO but shows weaker interactions with many other parameters. Figure 1 is critical for identifying the dominant factors influencing the output indices and serves as a guide for further modeling and analysis. The insights derived from the correlation matrix provide a valuable foundation for predictive modeling, enabling the identification of the most impactful parameters and their interactions.
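For reference, a Pearson matrix such as that in Figure 1 can be reproduced in a few lines of pandas, assuming the dataset is loaded as a DataFrame whose columns are the oxide fractions and the three Ω indices (the file and column names below are illustrative, not those of the study):

```python
# Minimal sketch: Pearson correlation matrix of inputs and outputs,
# assuming `df` holds one row per glass with oxide and Omega columns.
import pandas as pd

# df = pd.read_csv("tellurite_glasses.csv")   # hypothetical file name
# corr = df.corr(method="pearson")            # coefficients in [-1, 1]
# print(corr.loc["SrO", "Omega2"])            # e.g., reported as 0.711
```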

5. Methods

5.1. Multilayer Perceptron (MLP)

A multilayer perceptron (MLP) is a form of artificial neural network (ANN) made up of numerous successive layers of neurons: an input layer, one or more hidden layers, and an output layer. Each layer is fully connected to the layer that follows it. Owing to their capacity to represent complex and nonlinear interactions between data points, MLPs are widely used for supervised learning tasks such as classification and regression [51]. MLPs extract nonlinear features with ease because of the nonlinear activation functions in the hidden layers; this capability enables them to represent complicated data and map it to a higher-dimensional space. As shown in Equation (3), the training approach for a multilayer perceptron makes use of the backpropagation algorithm, which refines a loss function by adjusting the weights in accordance with the gradients produced via error propagation. For the purposes of this investigation, the sigmoid function (Equation (4)) and the loss function (Equation (5)) were applied.
$$w \leftarrow w - \eta \frac{\partial L}{\partial w} \quad (3)$$

$$\sigma(x) = \frac{1}{1 + e^{-x}} \quad (4)$$

$$L = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\right] \quad (5)$$
The activation functions used in the hidden layers are Rectified Linear Units (ReLUs), chosen for their ability to efficiently capture nonlinearities and improve the model’s convergence speed. The output layer employs a linear activation function, as this is a regression problem in which continuous values are predicted. To prevent overfitting and enhance generalization, several regularization techniques were employed; in particular, dropout regularization with a rate of 0.2 was applied to the hidden layers to encourage the model to generalize better. The model was trained using the backpropagation algorithm with the Adam optimizer, which is known for its efficiency in terms of both memory and computation. The learning rate was set to 0.001, and the model was trained for 500 epochs with early stopping to avoid overfitting. The mean squared error (MSE) loss function was used to optimize the model’s predictions.
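As an illustration, the configuration described above (ReLU hidden layers, linear output, dropout of 0.2, Adam with a learning rate of 0.001, MSE loss, and 500 epochs with early stopping) could be assembled as in the following sketch. The paper does not name its framework, so TensorFlow/Keras is assumed here, and the hidden-layer widths and early-stopping patience are illustrative values, not reported settings:

```python
# Hypothetical sketch of the MLP described in the text; layer sizes
# and the early-stopping patience are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def build_mlp(n_features: int) -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(64, activation="relu"),   # hidden widths are illustrative
        layers.Dropout(0.2),                   # dropout rate stated in the text
        layers.Dense(32, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(1, activation="linear"),  # linear output for regression
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss="mse")                  # MSE loss, lr = 0.001 per the text
    return model

# X_train, y_train: normalized compositions and one JO parameter (e.g., Omega_2)
# model = build_mlp(X_train.shape[1])
# model.fit(X_train, y_train, epochs=500, validation_split=0.2,
#           callbacks=[keras.callbacks.EarlyStopping(patience=20,
#                                                    restore_best_weights=True)])
```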

5.2. Extreme Gradient Boosting (XGBoost)

XGBoost is an ensemble-type machine learning algorithm. To produce a stronger and more accurate forecast, the ensemble combines a number of basic regression models, known as decision trees. Boosting is the technique of fitting these numerous models sequentially [52]: a series of rudimentary models, or “weak learners”, is trained one after the other, with the idea that each model may learn from the mistakes of its predecessors. These basic models are constructed with a single or double branch. The final prediction considers all of the basic models by aggregating their forecasts. With its many hyperparameters that can be adjusted for a personalized fit, this model excels at handling complicated and huge datasets. The concept is put into action using the XGBRegressor class of the XGBoost package [53]. The model is defined as follows:
$$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i) \quad (6)$$
Here, $\hat{y}_i$ denotes the predicted value for the i-th sample, $f_k$ represents the prediction of the k-th tree for sample $x_i$, and K is the total number of trees included in the model. With each additional tree that is built, the accuracy of the prediction steadily increases. The model optimizes the objective function $L(\phi)$, which reduces the prediction error:
$$L(\phi) = \sum_{i=1}^{n} l\left(y_i, \hat{y}_i\right) + \sum_{k=1}^{K} \Omega\left(f_k\right) \quad (7)$$
where the regularization term and the loss function are represented by $\Omega(f_k)$ and $l(y_i, \hat{y}_i)$, respectively. The loss function measures the difference between the actual value, denoted by $y_i$, and the predicted value, denoted by $\hat{y}_i$. The regularization term is responsible for controlling the complexity of the model in order to avoid overfitting. The formula for the regularization term is presented as follows:
$$\Omega(f_k) = \Upsilon T + \frac{1}{2}\gamma \sum_{j=1}^{T} w_j^2 \quad (8)$$
In this equation, Υ and γ are regularization parameters, T is the number of leaf nodes, and $w_j$ is the weight of the j-th leaf.
The hyperparameter tuning process for the XGBoost model was performed to optimize its performance and ensure good generalization to unseen data. The key hyperparameters were tuned, including the learning rate (η), which controls the step size during optimization, with values tested in the range of 0.01 to 0.1; the maximum depth (max_depth) of each tree, ranging from 3 to 10, which controls model complexity; the number of estimators (n_estimators), tested from 50 to 500, which defines the number of boosting rounds; the subsample ratio, ranging from 0.5 to 1.0, which dictates the fraction of samples used for fitting each individual tree; and the colsample_bytree parameter, controlling the fraction of features used for each tree, optimized in the range of 0.3 to 1.0. The optimization was carried out using grid search, where we exhaustively tested combinations of these hyperparameters within predefined grids. The best configuration was selected based on the model’s performance on the validation set, where the mean squared error (MSE) was used as the objective function to minimize. To further prevent overfitting and enhance model performance, early stopping was implemented during training, with the training halting if the performance on the validation set did not improve for 100 consecutive rounds. In this model, the optimal set of hyperparameters obtained through the grid search resulted in the best model configuration for predicting the optical parameters of the tellurite glasses.
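The grid search described above could be set up as in the following sketch. The text specifies only the search ranges, the MSE objective, and the 100-round early stopping, so the grid steps within those ranges, the five-fold cross-validation, and the variable names below are assumptions:

```python
# Illustrative sketch of the XGBoost grid search; grid step values
# within the stated ranges and the CV scheme are assumptions.
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

param_grid = {
    "learning_rate": [0.01, 0.05, 0.1],    # eta, stated range 0.01-0.1
    "max_depth": [3, 5, 7, 10],            # stated range 3-10
    "n_estimators": [50, 100, 250, 500],   # boosting rounds, 50-500
    "subsample": [0.5, 0.75, 1.0],         # row sampling, 0.5-1.0
    "colsample_bytree": [0.3, 0.6, 1.0],   # feature sampling, 0.3-1.0
}

search = GridSearchCV(
    XGBRegressor(objective="reg:squarederror", random_state=42),
    param_grid,
    scoring="neg_mean_squared_error",  # MSE used as the objective to minimize
    cv=5,
)
# search.fit(X_train, y_train)
# best_model = search.best_estimator_
# The 100-round early stopping described in the text would additionally be
# supplied via a held-out eval_set during fitting.
```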

5.3. Random Forest Regressor (RF)

The Random Forest is a method that generates several decision trees using random subsets of the features from the training set. A random portion of the training data and a random subset of the predictor variables are fed into each decision tree during training. To determine the final forecast, each decision tree generates its own prediction, and the final output is computed by averaging the predictions of all the trees [54]. The essence of the Random Forest is that any single decision tree may be subject to bias or mistakes; taken as a whole, however, the trees can provide a more accurate forecast than any one tree could produce on its own. To further enhance the model’s generalizability and decrease overfitting, each tree is trained with random features and a random subset of the training data [55]. This model can be created in Python 3.12.7 using the RandomForestRegressor module from the sklearn.ensemble package, as sketched below.
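A minimal sketch of this setup follows; the text does not report the tree count or the feature-subset rule, so the hyperparameter values here are assumptions:

```python
# Minimal Random Forest sketch; hyperparameter values are illustrative.
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(
    n_estimators=200,      # number of trees (assumed)
    max_features="sqrt",   # random subset of predictors per split
    bootstrap=True,        # each tree sees a random sample of the data
    random_state=42,
)
# rf.fit(X_train, y_train)
# y_pred = rf.predict(X_test)   # average of the per-tree predictions
```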

5.4. CatBoost

CatBoost is an open-source, high-performance Gradient Boosting algorithm for categorical data [56]. It is capable of handling categorical features directly, which eliminates the need for preprocessing techniques such as label encoding or one-hot encoding [57]. CatBoost is a useful strategy for smaller datasets since it employs statistical methods and target-based encoding to decrease the amount of overfitting that occurs. In addition, it functions well with its default settings, which reduces the need for hyperparameter customization [58]. By default, CatBoost produces one thousand symmetric binary trees of depth six. Because the automatic learning rate is derived from the characteristics of the training dataset and the number of iterations, automatic learning is the most efficient method; training may be sped up by increasing the learning rate and decreasing the number of iterations. A model depth of (6, 8, and 10), a learning rate of (0.01, 0.1, and 0.2), and model iterations of (100 and 200) were the hyperparameters used to train the CatBoost model and obtain the desired parameters.
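The stated grid (depth 6/8/10, learning rate 0.01/0.1/0.2, iterations 100/200) could be searched, for example, with CatBoost’s built-in helper; the use of grid_search and of five folds is our assumption rather than a reported detail:

```python
# Sketch of the CatBoost hyperparameter grid stated in the text.
from catboost import CatBoostRegressor

model = CatBoostRegressor(loss_function="RMSE", verbose=False)
grid = {
    "depth": [6, 8, 10],
    "learning_rate": [0.01, 0.1, 0.2],
    "iterations": [100, 200],
}
# CatBoost ships its own grid-search helper:
# results = model.grid_search(grid, X=X_train, y=y_train, cv=5)
```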

6. Model Evaluation

Model evaluation is a critical step in the development and implementation of predictive models, as it provides a comprehensive assessment of their performance and reliability. The primary objective of this process is to determine the model’s ability to generalize effectively to unseen data while ensuring that it meets the desired accuracy and robustness criteria. Evaluating models is essential to identify the most suitable algorithm for a specific problem, especially in complex predictive tasks where multiple models, such as MLP, CatBoost, XGBoost, RF, and DeepBoost, are employed.
In this study, the evaluation process involved the calculation of several performance metrics for each model during both the training and testing phases. Metrics such as the Coefficient of Determination (R2), Variance Accounted For (VAF), the a-20 index, the Performance Index (PI), and accuracy were employed to evaluate the models based on the literature’s suggestions [59,60,61,62,63,64]. Each metric provides unique insights into the models’ performance. For instance, R2 measures the proportion of variance explained by the model, while VAF indicates the degree to which the predicted values align with the observed values. The a-20 index evaluates the percentage of predictions falling within an acceptable range of deviation, and PI combines multiple aspects of prediction accuracy into a single measure. Lastly, accuracy reflects the overall correctness of the predictions.
By evaluating the models across these diverse metrics, this study aims to identify the optimal predictive algorithm for forecasting the Ω2, Ω4, and Ω6 parameters. Such a comprehensive evaluation is not only essential for selecting the best-performing model but also for understanding the strengths and weaknesses of each algorithm, thereby enabling informed decisions in future applications. This rigorous approach ensures that the selected model provides reliable and accurate predictions, which are crucial for addressing the underlying research objectives effectively. The statistical indices used in this study can be formulated as follows [60,62,63,64,65,66,67,68,69,70,71,72]:
$$R^2 = \frac{\sum_{i=1}^{n}\left(m_{\Omega_i} - \bar{m}_{\Omega}\right)^2 - \sum_{i=1}^{n}\left(m_{\Omega_i} - P_{\Omega_i}\right)^2}{\sum_{i=1}^{n}\left(m_{\Omega_i} - \bar{m}_{\Omega}\right)^2} \quad (9)$$

$$VAF = \left(1 - \frac{\operatorname{var}\left(m_{\Omega_i} - P_{\Omega_i}\right)}{\operatorname{var}\left(m_{\Omega_i}\right)}\right) \times 100 \quad (10)$$

$$a\text{-}20\ \mathrm{index} = \frac{m20}{n} \quad (11)$$

$$PI = R^2 + 0.01 \times VAF - RMSE \quad (12)$$

$$ACC = 100 - \frac{100}{n} \times \sum_{i=1}^{n} \frac{\left|m_{\Omega_i} - P_{\Omega_i}\right|}{\left(m_{\Omega_i} + P_{\Omega_i}\right)/2} \quad (13)$$
where n signifies the number of data points; m20 is the number of samples for which the ratio of measured to predicted value lies between 0.80 and 1.20; and $m_{\Omega_i}$, $\bar{m}_{\Omega}$, and $P_{\Omega_i}$ are, respectively, the measured values, the average of the measured values, and the predicted values of $\Omega_t$ (t = 2, 4, and 6) [67,69,70,73].
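For concreteness, Equations (9)–(13) can be computed as in the following sketch; the a-20 membership rule (measured-to-predicted ratio within [0.80, 1.20]) follows the common definition of the index and is stated here as an assumption, and RMSE inside PI is the usual root-mean-squared error:

```python
# Hedged NumPy implementation of the evaluation metrics (9)-(13);
# variable names are ours, not the paper's.
import numpy as np

def evaluate(measured: np.ndarray, predicted: np.ndarray) -> dict:
    resid = measured - predicted
    rmse = np.sqrt(np.mean(resid ** 2))
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    r2 = 1.0 - np.sum(resid ** 2) / ss_tot            # Equation (9)
    vaf = (1.0 - np.var(resid) / np.var(measured)) * 100.0  # Equation (10)
    ratio = measured / predicted
    a20 = np.mean((ratio >= 0.8) & (ratio <= 1.2))    # Equation (11), assumed rule
    pi = r2 + 0.01 * vaf - rmse                       # Equation (12)
    acc = 100.0 - (100.0 / len(measured)) * np.sum(
        np.abs(resid) / ((measured + predicted) / 2.0))  # Equation (13)
    return {"R2": r2, "VAF": vaf, "a20": a20, "PI": pi, "ACC": acc}
```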
Underpinning the success of this evaluation is the preprocessing step of data normalization, which plays a fundamental role in ensuring the models’ reliability and comparability. Normalization adjusts the scale of input features to prevent variables with larger ranges from disproportionately influencing the learning process. This step is particularly critical given the diversity of input features utilized for predicting Ω2, Ω4, and Ω6.
In this study, the min–max normalization technique was employed, which scales each feature to a range of [0, 1] using the following formula:
$$x_i^{norm} = \frac{x_i - x_{min}}{x_{max} - x_{min}} \quad (14)$$
Here, xi represents the original data point, while xmin and xmax denote the minimum and maximum values of the respective feature. This transformation ensures that all features contribute equally to the training process, enhancing the models’ convergence rates and reducing computational inefficiencies.
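Equation (14) corresponds to scikit-learn’s MinMaxScaler; fitting the scaler on the training split alone, as sketched below, is standard practice to avoid data leakage, though the text does not state this detail:

```python
# Min-max scaling per Equation (14); fitting on the training split only
# is an assumption (standard practice to avoid leakage).
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0, 1))
# X_train_norm = scaler.fit_transform(X_train)
# X_test_norm = scaler.transform(X_test)   # reuse the training min/max
```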
The normalization process is particularly vital for models such as MLP and DeepBoost, where the scale of inputs significantly impacts the optimization of weights. Moreover, while tree-based models like CatBoost, XGBoost, and RF are less sensitive to feature scaling, normalization was applied uniformly across all the models to ensure consistency and fairness in the evaluation process.
By incorporating normalization, this study guarantees that the comparative analysis of model performance remains unbiased and that the chosen model delivers robust and accurate predictions of Ω2, Ω4, and Ω6. This preprocessing step further underscores the rigor and methodological soundness of the evaluation framework.
To ensure a robust evaluation of the predictive models, the dataset was partitioned into two distinct subsets: a training set and a testing set. This division is a fundamental practice in machine learning to assess the model’s performance on unseen data and to avoid overfitting, where the model performs well on training data but poorly on new data.
In this study, 80% of the available data, corresponding to 56 samples, was allocated to the training set. The training set is used to fit the models, allowing them to learn the underlying patterns and relationships in the data. The remaining 20%, consisting of 14 samples, was designated as the testing set. The testing set serves as an independent dataset to evaluate the model’s generalization ability, providing an unbiased estimate of its predictive performance. Although the data can be partitioned according to various schemes (e.g., 60/40, 70/30, 80/20, and 90/10), in this study the chosen partitioning ratio was selected based on recommendations from the literature [74,75,76,77,78,79,80].
The data partitioning was carried out using a random sampling method to ensure that both subsets represent the overall distribution of the dataset. This approach minimizes the risk of introducing selection bias, which could compromise the reliability of the evaluation. Additionally, care was taken to maintain the integrity of the dataset by ensuring that no overlap occurred between the training and testing sets.
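A minimal sketch of this 80/20 random split (56 training and 14 testing samples out of 70) is given below; the random seed is an assumption:

```python
# Sketch of the random 80/20 split described above.
from sklearn.model_selection import train_test_split

# X: (70, n_features) oxide compositions; y: one JO parameter (e.g., Omega_2)
# X_train, X_test, y_train, y_test = train_test_split(
#     X, y, test_size=0.2, random_state=42, shuffle=True)
```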
By adopting this partitioning strategy, this study guarantees that the models are rigorously evaluated under realistic conditions. The separate evaluation on the testing set provides critical insights into each model’s ability to predict the Ω2, Ω4, and Ω6 parameters accurately and consistently, further reinforcing the validity of the performance comparison.
To gain a deeper understanding of the relationships between the input parameters and the optical properties (Ω2, Ω4, and Ω6), a sensitivity analysis was conducted using the Cosine Amplitude Method (CAM). This method quantifies the strength of the relationship between pairs of effective parameters and their influence on the output variables (Ωt). The CAM employs the following equation:
$$r_{ij} = \frac{\sum_{k=1}^{m} x_{ik}\, x_{jk}}{\sqrt{\sum_{k=1}^{m} x_{ik}^2 \cdot \sum_{k=1}^{m} x_{jk}^2}} \quad (15)$$
where $r_{ij}$ expresses the strength of the relationship between the input $x_i$ and the output $x_j$.
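A hedged implementation of Equation (15) for one output vector, with our own variable names, is as follows:

```python
# Cosine Amplitude Method (CAM), Equation (15): strength r_ij between
# each input column of X and the output vector y.
import numpy as np

def cam_strengths(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    num = X.T @ y                                          # sum_k x_ik * x_jk
    den = np.sqrt((X ** 2).sum(axis=0) * (y ** 2).sum())   # normalizing term
    return num / den                                       # one r_ij per input
```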
The sensitivity analysis conducted using the Cosine Amplitude Method (CAM) provides valuable insights into the relative importance of the input parameters in influencing the output optical properties, specifically Ω2, Ω4, and Ω6. As shown in Figure 2, the results indicate that certain parameters have a stronger effect on the prediction of these optical properties. For instance, TeO2 demonstrates a strong influence on Ω2, with a higher sensitivity value indicating that changes in the TeO2 concentration have a notable impact on the optical behavior of the material. Similarly, parameters like B2O3 and CaF2 are shown to significantly affect Ω4 and Ω6, with their contributions being more pronounced in predicting these parameters. Other parameters, such as ZnO and Na2O, while still influential, have a relatively weaker effect on the outputs.

7. Results and Discussion

The statistical performance of the models for predicting Ω2 is detailed in Table 2 and Table 3. These tables reveal distinct trends in the accuracy and reliability of each model during the training and testing phases.
During the training phase, DeepBoost emerged as the most effective model, achieving the highest R2 value of 0.974, indicating that it explains 97.4% of the variance in the training data. This was corroborated by its VAF score of 96.704, further emphasizing its robust fitting ability. DeepBoost’s Performance Index (PI) of 1.297 and accuracy of 99.895 demonstrate its ability to make precise predictions. Additionally, the a-20 index, which reflects the proportion of predictions falling within 20% of the observed values, was the highest for DeepBoost (0.944), showcasing its reliability in practical scenarios.
Other models, while competitive, lagged behind DeepBoost. XGBoost achieved the second-highest R2 (0.931) and VAF (92.344), but its PI (1.142) and accuracy (99.889) were slightly lower. Similarly, CatBoost and RF both had R2 values of 0.920, but their a-20 indices (0.907 for CatBoost and 0.889 for RF) were lower than that of DeepBoost. MLP, while showing decent performance (R2 = 0.907), had the lowest PI (1.055) and accuracy (99.877), indicating relatively less precise predictions.
The testing phase results reveal that DeepBoost maintained its superior performance. It achieved an R2 value of 0.971, VAF of 96.282, and PI of 1.108, all significantly higher than those of the other models. Its accuracy of 99.902 and a-20 index of 0.929 further underscored its strong generalization ability.
In contrast, other models displayed varying degrees of decline in their performance. XGBoost demonstrated relatively strong results, with an R2 of 0.929 and a PI of 0.722, but its accuracy (99.870) and a-20 index (0.786) were notably lower than those of DeepBoost. CatBoost performed moderately well, achieving an R2 of 0.887 and a PI of 0.605. MLP and RF showed the weakest generalization ability, with R2 values of 0.869 and 0.905, respectively, and lower a-20 indices of 0.786.
As shown in Table 3, DeepBoost ranked first across both the training and testing phases, achieving the best total rate of 49. The consistent performance of DeepBoost reflects its ability to balance accuracy and reliability. In contrast, MLP ranked last with a total rate of 12, suggesting its limited effectiveness for predicting Ω2.
Table 4 highlights the strong performance of DeepBoost in predicting Ω4. It achieved the highest R2 (0.955), VAF (95.171), and PI (1.674) during the training phase, indicating its exceptional ability to model the data. Its accuracy of 99.846 and a-20 index of 0.741 reinforce its reliability.
Other models, while competitive, demonstrated weaker performances. XGBoost (R2 = 0.929, VAF = 92.857) and RF (R2 = 0.919, VAF = 91.429) followed DeepBoost, but their PI values (1.555 and 1.512, respectively) were notably lower. CatBoost achieved moderate results (R2 = 0.910, VAF = 90.786), while MLP ranked lowest, with an R2 of 0.899 and a PI of 1.427.
In the testing phase, DeepBoost continued to dominate, achieving an R2 of 0.945, VAF of 93.992, and PI of 1.787. Its accuracy of 99.951 and a-20 index of 1.000 signify its excellent generalization performance.
Other models showed varying levels of success. XGBoost and RF achieved relatively high R2 values (0.911 and 0.897, respectively) and competitive accuracy scores (99.945 and 99.941). However, their PI and a-20 indices were lower than those of DeepBoost. CatBoost displayed moderate performance, while MLP again showed the weakest results, with an R2 of 0.867 and a PI of 1.495.
As shown in Table 5, DeepBoost ranked first with a total rate of 46, significantly outperforming other models. MLP ranked last with a total rate of 11, reinforcing its limited capability in predicting Ω4.
The training phase results for the Ω6 predictions, shown in Table 6, reveal that DeepBoost excelled with near-perfect values for all the indicators. It achieved an R2 of 0.997, VAF of 99.681, and PI of 1.948, alongside an accuracy of 99.968 and an a-20 index of 1.000. These metrics highlight its ability to capture the underlying relationships in the data.
Other models displayed good but less impressive performances. RF followed with an R2 of 0.953 and a VAF of 94.049, but its PI (1.738) and a-20 index (0.815) were notably lower. XGBoost and CatBoost achieved similar R2 values (0.934 and 0.939, respectively), but their lower PI values (1.701 and 1.695) and a-20 indices (0.870 and 0.815) limited their competitiveness. MLP showed the weakest results, with an R2 of 0.927 and a PI of 1.654.
In the testing phase, DeepBoost maintained its superiority, achieving an R2 of 0.994, VAF of 99.323, and PI of 1.870. Its accuracy of 99.946 and a-20 index of 1.000 confirmed its exceptional generalization ability.
Other models showed declines in performance compared to the training phase. RF (R2 = 0.949) and XGBoost (R2 = 0.920) followed DeepBoost, but their PI and a-20 indices were lower. CatBoost and MLP exhibited moderate results, with R2 values of 0.924 and 0.919, respectively.
Table 7 confirms that DeepBoost achieved the top rank with a total rate of 50. MLP, despite its reasonable accuracy, ranked last with a total rate of 13, highlighting its comparatively weaker predictive performance.
The analyses across Ω2, Ω4, and Ω6 consistently identify DeepBoost as the most effective model. Its exceptional performance in both the training and testing phases underscores its ability to handle complex datasets and provide reliable predictions. Conversely, MLP ranked lowest for all the targets, demonstrating limited utility in this context.
The findings validate the evaluation framework and highlight the importance of model selection in predictive tasks, offering significant implications for similar studies and practical applications.
Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 illustrate the correlation between the measured and predicted values of the target parameters (Ω2, Ω4, and Ω6) for both the training and testing phases. These plots provide a visual assessment of the predictive accuracy of the developed models. In Figure 3 and Figure 4, the measured versus predicted values for Ω2 during the training and testing phases are shown. The data points cluster tightly around the 45-degree line, indicating a strong correlation and minimal deviation between the observed and predicted values. This highlights the models’ reliability in predicting Ω2. Figure 5 and Figure 6 depict the correlation for Ω4 during the training and testing phases, respectively. While the alignment of data points with the 45-degree line remains strong, there is a slight increase in the dispersion of points in the testing phase, reflecting the inherent challenges of generalization to unseen data. Similarly, Figure 7 and Figure 8 display the correlation for Ω6 in the training and testing phases. The near-perfect alignment of the data points with the diagonal line, particularly for the DeepBoost model, confirms its exceptional predictive capability. The consistency across the training and testing phases further validates the robustness of the developed models. These figures collectively emphasize the effectiveness of the proposed methodologies in capturing the complex relationships between the input variables and the target parameters, making them suitable for practical applications. It should be mentioned that the dashed line in these figures represents the linear regression fit between predicted and measured values.
Violin plots are a robust visualization tool that combines the features of a box plot and a kernel density plot. They provide a comprehensive representation of the distribution of a dataset by showing both the central tendencies and the variability of the data. The plot displays a mirrored density curve, highlighting the data’s distribution shape, while an internal box plot indicates key statistical metrics such as the median and interquartile range (IQR). This visualization is particularly useful for comparing multiple models, as it allows for a detailed assessment of the spread, skewness, and potential outliers within the predictions. By evaluating the width and shape of the violin plot, one can infer the consistency and reliability of each model’s performance.
Figure 9, Figure 10 and Figure 11 illustrate violin plots of the developed models for predicting Ω2, Ω4, and Ω6, respectively, in both the training (left) and testing (right) phases. These plots provide a comparative analysis of the distribution and variability of the predictions across the different models, offering insights into their consistency and robustness.
As detailed in Table 8, we employed the Kruskal–Wallis H test—a distribution-free analogue of one-way ANOVA—to determine whether DeepBoost’s lower error metrics represent genuine improvements over competing models for each target variable (Ω2, Ω4, and Ω6) on both the training (n = 56) and test (n = 14) splits. Every “omnibus” comparison produced a p-value below the 0.05 threshold (Ω2: Htrain = 1.77, p = 0.0078 and Htest = 8.94, p = 0.0063; Ω4: Htrain = 9.06, p = 0.0069 and Htest = 4.03, p = 0.0043; and Ω6: Htrain = 79.55, p = 0.0217 and Htest = 9.31, p = 0.0342), confirming that at least one model’s error distribution differs significantly across the methods in every scenario. In all six cases, DeepBoost attained both the smallest median absolute deviation—Ω2: 0.52 (training), 0.62 (test); Ω4: 0.19, 0.08; and Ω6: 0.03, 0.12—and the lowest RMSE—Ω2: 0.61, 0.83; Ω4: 0.23, 0.10; and Ω6: 0.05, 0.12—demonstrating not only numerical superiority but statistical distinctness from its peers. The particularly large H statistic for the Ω6 training underscores an especially pronounced effect, while the significant yet more moderate H values on the test splits highlight DeepBoost’s consistent advantage even with smaller sample sizes.
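For reference, the omnibus test in Table 8 corresponds to scipy.stats.kruskal applied to the per-model absolute-error samples for one target and split; the variable names below are illustrative:

```python
# Sketch of the Kruskal-Wallis H test described above, applied to
# absolute-error samples of each model for one target and split.
from scipy.stats import kruskal

# errors_* : arrays of |measured - predicted| for each model
# H, p = kruskal(errors_deepboost, errors_xgboost, errors_catboost,
#                errors_rf, errors_mlp)
# print(f"H = {H:.2f}, p = {p:.4f}")  # p < 0.05 -> error distributions differ
```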
While the predictive performance of the models, particularly DeepBoost, has been thoroughly evaluated in this study, it is equally important to consider the trade-off between model accuracy and computational cost. In real-world applications, the choice of model often depends not only on its predictive power but also on its computational efficiency, especially when dealing with large datasets or time-sensitive tasks.
DeepBoost, for instance, achieved the highest accuracy across all the parameters, but its computational cost was higher compared to simpler models such as the Random Forest (RF) and a multilayer perceptron (MLP). Although DeepBoost demonstrated superior performance, its training time and resource requirements may limit its use in scenarios where rapid predictions are essential or computational resources are constrained.
On the other hand, models like XGBoost and RF, while slightly less accurate than DeepBoost, offer a better trade-off in terms of computational efficiency, making them suitable for real-time applications or situations with limited computational resources. These models require less training time and can be deployed more easily in industrial settings where quick predictions are needed.
Therefore, the selection of an appropriate model should take into account not only its accuracy but also its computational cost. In practice, if time and resource constraints are critical, simpler models with faster training times and less computational demand may be preferred, even if this results in a slight decrease in predictive accuracy. Conversely, when prediction accuracy is paramount and computational resources are available, more complex models like DeepBoost may be the best choice.
While this study emphasizes the computational efficiency and predictive performance of the advanced machine learning models used (e.g., DeepBoost, XGBoost, and CatBoost), we recognize that incorporating domain knowledge, such as ab initio data, into the modeling process has become a key trend in recent research. Recent studies, such as Zhang et al. [81], have demonstrated the potential of hybrid neural networks that combine machine learning techniques with physics-based insights, such as NN potentials, to predict material properties more accurately and efficiently. These models integrate first-principles data with machine learning algorithms, enabling a deeper understanding of material behavior and improving predictive performance.
In addition to the application of machine learning in materials science, recent studies have shown the potential of hybrid models in image processing. For example, in a study conducted by Zhang et al. [82], a hybrid neural network approach was used to efficiently predict key parameters such as pore pressure and temperature in fire-loaded concrete structures by leveraging a combination of autoencoders and fully connected neural networks. This work demonstrates the value of using images to represent complex material behaviors and to extract the key features for predictive modeling. Similarly, this study discusses the integration of image data with neural networks to enhance the analysis and prediction of concrete properties under extreme conditions, providing valuable parallels to the image-based data representations used in our study.
In comparison, our approach relies purely on data-driven machine learning models, without integrating domain-specific knowledge, which may limit the accuracy and interpretability of the results in complex systems like tellurite glasses. While our models perform well in terms of predictive accuracy and computational efficiency, integrating domain knowledge from the physical properties of materials could further enhance their performance. Future work could explore hybrid ML approaches, incorporating ab initio simulations or first-principles data, to improve the generalization capability of our models for complex material systems.

8. Conclusions

This study provides a comprehensive evaluation of advanced machine learning models for predicting the Ω2, Ω4, and Ω6 parameters. Among the five models analyzed, DeepBoost consistently outperformed its counterparts across all the targets and metrics. For Ω2, DeepBoost achieved the highest training phase R2 (0.974) and accuracy (99.895%), maintaining superior performance during testing with an R2 of 0.971 and accuracy of 99.902%. Similar trends were observed for the Ω4 and Ω6 predictions, where DeepBoost consistently achieved the highest R2 values (0.955 for Ω4 and 0.997 for Ω6 during training and 0.945 for Ω4 and 0.994 for Ω6 during testing) and the highest accuracy scores (99.951% for Ω4 and 99.946% for Ω6 in testing). In contrast, MLP showed the weakest performance, with the lowest R2 values and total ranking scores for all the targets. The violin plots and measured versus predicted value analyses further confirmed the superior consistency and reliability of DeepBoost, making it the most suitable model for practical applications. These results underscore the critical role of advanced machine learning in solving complex prediction problems and highlight the effectiveness of DeepBoost in capturing intricate data relationships. This research sets the stage for leveraging these models in similar domains and provides a robust framework for model evaluation and selection. In addition, while this study focused on Er3+-doped glasses, future research should include other RE ions to enhance generalizability.

Author Contributions

F.A.: Writing—review & editing, Writing—original draft, Supervision, Resources, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. M.H.: Writing—review & editing, Writing—original draft, Visualization, Validation, Software, Methodology, Investigation. T.S.: Writing—review & editing, Writing—original draft, Visualization, Validation, Software, Methodology, Investigation. S.P.: Writing—review & editing, Writing—original draft, Visualization, Validation, Methodology, Investigation. P.G.A.: Writing—review & editing, Writing—original draft, Validation, Supervision, Software, Methodology, Investigation, Conceptualization. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be made available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Moizan, V.; Nazabal, V.; Troles, J.; Houizot, P.; Adam, J.-L.; Doualan, J.-L.; Moncorgé, R.; Smektala, F.; Gadret, G.; Pitois, S. Er3+-Doped GeGaSbS Glasses for Mid-IR Fibre Laser Application: Synthesis and Rare Earth Spectroscopy. Opt. Mater. 2008, 31, 39–46. [Google Scholar] [CrossRef]
  2. Lalla, E.A.; Rodríguez-Mendoza, U.R.; Lozano-Gorrín, A.D.; Sanz-Arranz, A.; Rull, F.; Lavín, V. Nd3+-Doped TeO2–PbF2–AlF3 Glasses for Laser Applications. Opt. Mater. 2016, 51, 35–41. [Google Scholar] [CrossRef]
  3. Lalla, E.A.; León-Luis, S.F.; Monteseguro, V.; Pérez-Rodríguez, C.; Cáceres, J.M.; Lavín, V.; Rodríguez-Mendoza, U.R. Optical Temperature Sensor Based on the Nd3+ Infrared Thermalized Emissions in a Fluorotellurite Glass. J. Lumin. 2015, 166, 209–214. [Google Scholar] [CrossRef]
  4. León-Luis, S.F.; Rodríguez-Mendoza, U.R.; Martín, I.R.; Lalla, E.; Lavín, V. Effects of Er3+ Concentration on Thermal Sensitivity in Optical Temperature Fluorotellurite Glass Sensors. Sens. Actuators B Chem. 2013, 176, 1167–1175. [Google Scholar] [CrossRef]
  5. Qin, G.; Qin, W.; Wu, C.; Huang, S.; Zhang, J.; Lu, S.; Zhao, D.; Liu, H. Enhancement of Ultraviolet Upconversion in Yb3+ and Tm3+ Codoped Amorphous Fluoride Film Prepared by Pulsed Laser Deposition. J. Appl. Phys. 2003, 93, 4328–4330. [Google Scholar] [CrossRef]
  6. Lourenço, A.V.S.; Kodaira, C.A.; Ramos-Sanchez, E.M.; Felinto, M.C.F.C.; Goto, H.; Gidlund, M.; Malta, O.L.; Brito, H.F. Luminescent Material Based on the [Eu(TTA)3(H2O)2] Complex Incorporated into Modified Silica Particles for Biological Applications. J. Inorg. Biochem. 2013, 123, 11–17. [Google Scholar] [CrossRef]
  7. Legendziewicz, J.; Oczko, G.; Wiglusz, R.; Amirkhanov, V. Correlation between Spectroscopic Characteristics and Structure of Lanthanide Phosphoro-Azo Derivatives of β-Diketones. J. Alloys Compd. 2001, 323, 792–799. [Google Scholar] [CrossRef]
  8. Lalla, E.A.; Konstantinidis, M.; De Souza, I.; Daly, M.G.; Martín, I.R.; Lavín, V.; Rodríguez-Mendoza, U.R. Judd-Ofelt Parameters of RE3+-Doped Fluorotellurite Glass (RE3+ = Pr3+, Nd3+, Sm3+, Tb3+, Dy3+, Ho3+, Er3+, and Tm3+). J. Alloys Compd. 2020, 845, 156028. [Google Scholar] [CrossRef]
  9. Mauro, J.C.; Tandia, A.; Vargheese, K.D.; Mauro, Y.Z.; Smedskjaer, M.M. Accelerating the Design of Functional Glasses through Modeling. Chem. Mater. 2016, 28, 4267–4277. [Google Scholar] [CrossRef]
  10. Krishnan, N.M.A.; Mangalathu, S.; Smedskjaer, M.M.; Tandia, A.; Burton, H.; Bauchy, M. Predicting the Dissolution Kinetics of Silicate Glasses Using Machine Learning. J. Non. Cryst. Solids 2018, 487, 37–45. [Google Scholar] [CrossRef]
  11. Chen, C.; Deng, Z.; Tran, R.; Tang, H.; Chu, I.-H.; Ong, S.P. Accurate Force Field for Molybdenum by Machine Learning Large Materials Data. Phys. Rev. Mater. 2017, 1, 43603. [Google Scholar] [CrossRef]
  12. Tanabe, S.; Ohyagi, T.; Soga, N.; Hanada, T. Compositional Dependence of Judd-Ofelt Parameters of Er3+ Ions in Alkali-Metal Borate Glasses. Phys. Rev. B 1992, 46, 3305. [Google Scholar] [CrossRef]
  13. Krupke, W.F. Optical Absorption and Fluorescence Intensities in Several Rare-Earth-Doped Y2O3 and LaF3 Single Crystals. Phys. Rev. 1966, 145, 325. [Google Scholar] [CrossRef]
  14. Lakshminarayana, G.; Yang, R.; Mao, M.; Qiu, J. Spectral Analysis of RE3+ (RE = Sm, Dy, and Tm): P2O5–Al2O3–Na2O Glasses. Opt. Mater. 2009, 31, 1506–1512. [Google Scholar] [CrossRef]
  15. Cassar, D.R.; de Carvalho, A.C.; Zanotto, E.D. Predicting Glass Transition Temperatures Using Neural Networks. Acta Mater. 2018, 159, 249–256. [Google Scholar] [CrossRef]
  16. Dragoni, D.; Daff, T.D.; Csányi, G.; Marzari, N. Achieving DFT Accuracy with a Machine-Learning Interatomic Potential: Thermomechanics and Defects in Bcc Ferromagnetic Iron. Phys. Rev. Mater. 2018, 2, 13808. [Google Scholar] [CrossRef]
  17. Mocanu, F.C.; Konstantinou, K.; Lee, T.H.; Bernstein, N.; Deringer, V.L.; Csányi, G.; Elliott, S.R. Modeling the Phase-Change Memory Material, Ge2Sb2Te5, with a Machine-Learned Interatomic Potential. J. Phys. Chem. B 2018, 122, 8998–9006. [Google Scholar] [CrossRef]
  18. Bassman Oftelie, L.; Rajak, P.; Kalia, R.K.; Nakano, A.; Sha, F.; Sun, J.; Singh, D.J.; Aykol, M.; Huck, P.; Persson, K. Active Learning for Accelerated Design of Layered Materials. npj Comput. Mater. 2018, 4, 74. [Google Scholar] [CrossRef]
  19. Gopakumar, A.M.; Balachandran, P.V.; Xue, D.; Gubernatis, J.E.; Lookman, T. Multi-Objective Optimization for Materials Discovery via Adaptive Design. Sci. Rep. 2018, 8, 3738. [Google Scholar] [CrossRef]
  20. Brauer, D.S.; Rüssel, C.; Kraft, J. Solubility of Glasses in the System P2O5–CaO–MgO–Na2O–TiO2: Experimental and Modeling Using Artificial Neural Networks. J. Non. Cryst. Solids 2007, 353, 263–270. [Google Scholar] [CrossRef]
  21. Deringer, V.L.; Caro, M.A.; Jana, R.; Aarva, A.; Elliott, S.R.; Laurila, T.; Csányi, G.; Pastewka, L. Computational Surface Chemistry of Tetrahedral Amorphous Carbon by Combining Machine Learning and Density Functional Theory. Chem. Mater. 2018, 30, 7438–7445. [Google Scholar] [CrossRef]
  22. Scherbela, M.; Hörmann, L.; Jeindl, A.; Obersteiner, V.; Hofmann, O.T. Charting the Energy Landscape of Metal/Organic Interfaces via Machine Learning. Phys. Rev. Mater. 2018, 2, 43803. [Google Scholar] [CrossRef]
  23. Yang, K.; Xu, X.; Yang, B.; Cook, B.; Ramos, H.; Krishnan, N.M.A.; Smedskjaer, M.M.; Hoover, C.; Bauchy, M. Predicting the Young’s Modulus of Silicate Glasses Using High-Throughput Molecular Dynamics Simulations and Machine Learning. Sci. Rep. 2019, 9, 8739. [Google Scholar] [CrossRef]
  24. Sudo, S. Optical Fiber Amplifiers: Materials, Devices, and Applications; Artech House: Washington, DC, USA, 1997; ISBN 0890068097. [Google Scholar]
  25. Selvaraju, K.; Vijaya, N.; Marimuthu, K.; Lavin, V. Composition Dependent Spectroscopic Properties of Er3+-doped Boro-tellurite Glasses. Phys. Status Solidi 2013, 210, 607–615. [Google Scholar] [CrossRef]
  26. Yusof, N.N.; Ghoshal, S.K.; Azlan, M.N. Optical Properties of Titania Nanoparticles Embedded Er3+-Doped Tellurite Glass: Judd-Ofelt Analysis. J. Alloys Compd. 2017, 724, 1083–1092. [Google Scholar] [CrossRef]
  27. Madhu, A.; Srinatha, N. Structural and Spectroscopic Studies on the Concentration Dependent Erbium Doped Lithium Bismuth Boro Tellurite Glasses for Optical Fiber Applications. Infrared Phys. Technol. 2020, 107, 103300. [Google Scholar] [CrossRef]
  28. Rolli, R.; Gatterer, K.; Wachtler, M.; Bettinelli, M.; Speghini, A.; Ajo, D. Optical Spectroscopy of Lanthanide Ions in ZnO–TeO2 Glasses. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2001, 57, 2009–2017. [Google Scholar] [CrossRef]
  29. Yanmin, Y.; Baojiu, C.; Cheng, W.; Guozhong, R.; Xiaojun, W. Investigation of Modification Effect of B2O3 Component on Optical Spectroscopy of Er3+ Doped Tellurite Glasses. J. Rare Earths 2007, 25, 31–35. [Google Scholar] [CrossRef]
  30. Ren, F.; Mei, Y.; Gao, C.; Zhu, L.; Lu, A. Thermal Stability and Judd-Ofelt Analysis of Optical Properties of Er3+-Doped Tellurite Glasses. Trans. Nonferrous Met. Soc. China 2012, 22, 2021–2026. [Google Scholar]
  31. Sazali, E.S.; Sahar, M.R.; Rohani, M.S. Optical Investigation of Erbium Doped Lead Tellurite Glass: Judd-Ofelt Analysis. Mater. Today Proc. 2015, 2, 5241–5245. [Google Scholar] [CrossRef]
  32. Gomes, J.F.; Lima, A.; Sandrini, M.; Medina, A.N.; Steimacher, A.; Pedrochi, F.; Barboza, M.J. Optical and Spectroscopic Study of Erbium Doped Calcium Borotellurite Glasses. Opt. Mater. 2017, 66, 211–219. [Google Scholar] [CrossRef]
  33. Pan, Z.; Morgan, S.H.; Dyer, K.; Ueda, A.; Liu, H. Host-dependent Optical Transitions of Er3+ Ions in Lead–Germanate and Lead-tellurium-germanate Glasses. J. Appl. Phys. 1996, 79, 8906–8913. [Google Scholar] [CrossRef]
  34. Sajna, M.S.; Thomas, S.; Mary, K.A.A.; Joseph, C.; Biju, P.R.; Unnikrishnan, N.V. Spectroscopic Properties of Er3+ Ions in Multicomponent Tellurite Glasses. J. Lumin. 2015, 159, 55–65. [Google Scholar] [CrossRef]
  35. Nandi, P.; Jose, G. Spectroscopic Properties of Er3+ Doped Phospho-Tellurite Glasses. Phys. B Condens. Matter 2006, 381, 66–72. [Google Scholar] [CrossRef]
  36. Nandi, P.; Jose, G. Erbium Doped Phospho-Tellurite Glasses for 1.5 μm Optical Amplifiers. Opt. Commun. 2006, 265, 588–593. [Google Scholar] [CrossRef]
  37. Gaafar, M.S.; Marzouk, S.Y. Judd–Ofelt Analysis of Spectroscopic Properties of Er3+ Doped TeO2-BaO-ZnO Glasses. J. Alloys Compd. 2017, 723, 1070–1078. [Google Scholar] [CrossRef]
  38. Luo, Y.; Zhang, J.; Sun, J.; Lu, S.; Wang, X. Spectroscopic Properties of Tungsten–Tellurite Glasses Doped with Er3+ Ions at Different Concentrations. Opt. Mater. 2006, 28, 255–258. [Google Scholar] [CrossRef]
  39. Mahraz, Z.A.S.; Sahar, M.R.; Ghoshal, S.K.; Dousti, M.R. Concentration Dependent Luminescence Quenching of Er3+-Doped Zinc Boro-Tellurite Glass. J. Lumin. 2013, 144, 139–145. [Google Scholar] [CrossRef]
  40. Dai, S.; Zhang, J.; Yu, C.; Zhou, G.; Wang, G.; Hu, L. Effect of Hydroxyl Groups on Nonradiative Decay of Er3+: 4I13/2 → 4I15/2 Transition in Zinc Tellurite Glasses. Mater. Lett. 2005, 59, 2333–2336. [Google Scholar] [CrossRef]
  41. Rayappan, I.A.; Selvaraju, K.; Marimuthu, K. Structural and Luminescence Investigations on Sm3+ Doped Sodium Fluoroborate Glasses Containing Alkali/Alkaline Earth Metal Oxides. Phys. B Condens. Matter 2011, 406, 548–555. [Google Scholar] [CrossRef]
  42. Rodin, N.L.A.; Sahar, M.R. Erbium Doped Sodium Magnesium Boro-Tellurite Glass: Stability and Judd-Ofelt Analysis. Mater. Chem. Phys. 2018, 216, 177–185. [Google Scholar] [CrossRef]
  43. Lakshmi, Y.A.; Swapna, K.; Reddy, K.S.R.K.; Venkateswarlu, M.; Mahamuda, S.; Rao, A.S. Structural, Optical and NIR Studies of Er3+ Ions Doped Bismuth Boro Tellurite Glasses for Luminescence Materials Applications. J. Lumin. 2019, 211, 39–47. [Google Scholar] [CrossRef]
  44. Rolli, R.; Montagna, M.; Chaussedent, S.; Monteil, A.; Tikhomirov, V.K.; Ferrari, M. Erbium-Doped Tellurite Glasses with High Quantum Efficiency and Broadband Stimulated Emission Cross Section at 1.5 μm. Opt. Mater. 2003, 21, 743–748. [Google Scholar] [CrossRef]
  45. Jlassi, I.; Elhouichet, H.; Ferid, M.; Barthou, C. Judd–Ofelt Analysis and Improvement of Thermal and Optical Properties of Tellurite Glasses by Adding P2O5. J. Lumin. 2010, 130, 2394–2401. [Google Scholar] [CrossRef]
  46. Benmadani, Y.; Kermaoui, A.; Chalal, M.; Khemici, W.; Kellou, A.; Pelle, F. Erbium Doped Tellurite Glasses with Improved Thermal Properties as Promising Candidates for Laser Action and Amplification. Opt. Mater. 2013, 35, 2234–2240. [Google Scholar] [CrossRef]
  47. Mahraz, Z.A.S.; Sahar, M.R.; Ghoshal, S.K. Near-Infrared up-Conversion Emission from Erbium Ions Doped Amorphous Tellurite Media: Judd-Ofelt Evaluation. J. Alloys Compd. 2018, 740, 617–625. [Google Scholar] [CrossRef]
  48. Bilir, G.; Mustafaoglu, N.; Ozen, G.; DiBartolo, B. Characterization of Emission Properties of Er3+ Ions in TeO2–CdF2–WO3 Glasses. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2011, 83, 314–321. [Google Scholar] [CrossRef]
  49. Coelho, J.; Azevedo, J.; Hungerford, G.; Hussain, N.S. Luminescence and Decay Trends for NIR Transition (4I13/2 → 4I15/2) at 1.5 μm in Er3+-Doped LBT Glasses. Opt. Mater. 2011, 33, 1167–1173. [Google Scholar] [CrossRef]
  50. Balda, R.; Al-Saleh, M.; Miguel, A.; Fdez-Navarro, J.M.; Fernández, J. Spectroscopy and Frequency Upconversion of Er3+ Ions in Fluorotellurite Glasses. Opt. Mater. 2011, 34, 481–486. [Google Scholar] [CrossRef]
  51. Taud, H.; Mas, J.-F. Multilayer Perceptron (MLP). In Geomatic Approaches for Modeling Land Change Scenarios; Springer: Berlin/Heidelberg, Germany, 2018; pp. 451–455. [Google Scholar]
  52. Chen, T.; He, T. Xgboost: Extreme Gradient Boosting. R Lect. 2014. [Google Scholar]
  53. Pérez Cortés, S.A.; Contreras Moreno, E.H.; Flores Páez, H.; Hurtado Cruz, J.P.; Jarufe Troncoso, J.A. Predictive Model for Water Consumption in a Copper Mineral Concentrator Plant Located in a Desert Area Using Machine Learning. Water 2024, 17, 15. [Google Scholar] [CrossRef]
  54. Rodrigo, J.A. Random Forest Con Python. Cienc. De Datos 2020, 10. [Google Scholar]
  55. He, Y.; Chen, C.; Li, B.; Zhang, Z. Prediction of Near-Surface Air Temperature in Glacier Regions Using ERA5 Data and the Random Forest Regression Method. Remote Sens. Appl. Soc. Environ. 2022, 28, 100824. [Google Scholar] [CrossRef]
  56. Prokhorenkova, L.; Gusev, G.; Vorobev, A.; Dorogush, A.V.; Gulin, A. CatBoost: Unbiased Boosting with Categorical Features. arXiv 2017, arXiv:1706.09516. [Google Scholar]
  57. Dorogush, A.V.; Ershov, V.; Gulin, A. CatBoost: Gradient Boosting with Categorical Features Support. arXiv 2018, arXiv:1810.11363. [Google Scholar]
  58. Ibragimov, B.; Gusev, G. Minimal Variance Sampling in Stochastic Gradient Boosting. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2019. [Google Scholar]
  59. Hosseini, S.; Poormirzaee, R.; Hajihassani, M. Application of Reliability-Based Back-Propagation Causality-Weighted Neural Networks to Estimate Air-Overpressure Due to Mine Blasting. Eng. Appl. Artif. Intell. 2022, 115, 105281. [Google Scholar] [CrossRef]
  60. Hosseini, S.; Poormirzaee, R.; Hajihassani, M. An Uncertainty Hybrid Model for Risk Assessment and Prediction of Blast-Induced Rock Mass Fragmentation. Int. J. Rock Mech. Min. Sci. 2022, 160, 105250. [Google Scholar] [CrossRef]
  61. Hosseini, S.; Poormirzaee, R.; Hajihassani, M.; Kalatehjari, R. An ANN-Fuzzy Cognitive Map-Based Z-Number Theory to Predict Flyrock Induced by Blasting in Open-Pit Mines. Rock Mech. Rock Eng. 2022, 55, 4373–4390. [Google Scholar] [CrossRef]
  62. Wang, Q.; Qi, J.; Hosseini, S.; Rasekh, H.; Huang, J. ICA-LightGBM Algorithm for Predicting Compressive Strength of Geo-Polymer Concrete. Buildings 2023, 13, 2278. [Google Scholar] [CrossRef]
  63. Lawal, A.I.; Hosseini, S.; Kim, M.; Ogunsola, N.O.; Kwon, S. Prediction of Factor of Safety of Slopes Using Stochastically Modified ANN and Classical Methods: A Rigorous Statistical Model Selection Approach. Nat. Hazards 2023, 120, 2035–2056. [Google Scholar] [CrossRef]
  64. Hosseini, S.; Poormirzaee, R.; Gilani, S.-O.; Jiskani, I.M. A Reliability-Based Rock Engineering System for Clean Blasting: Risk Analysis and Dust Emissions Forecasting. Clean Technol. Environ. Policy 2023, 25, 1903–1920. [Google Scholar] [CrossRef]
  65. Hosseini, S.; Mousavi, A.; Monjezi, M.; Khandelwal, M. Mine-to-Crusher Policy: Planning of Mine Blasting Patterns for Environmentally Friendly and Optimum Fragmentation Using Monte Carlo Simulation-Based Multi-Objective Grey Wolf Optimization Approach. Resour. Policy 2022, 79, 103087. [Google Scholar] [CrossRef]
  66. Wang, X.; Hosseini, S.; Jahed Armaghani, D.; Tonnizam Mohamad, E. Data-Driven Optimized Artificial Neural Network Technique for Prediction of Flyrock Induced by Boulder Blasting. Mathematics 2023, 11, 2358. [Google Scholar] [CrossRef]
  67. Hosseini, S.; Pourmirzaee, R. Green Policy for Managing Blasting Induced Dust Dispersion in Open-Pit Mines Using Probability-Based Deep Learning Algorithm. Expert Syst. Appl. 2023, 240, 122469. [Google Scholar] [CrossRef]
  68. Kamran, M.; Chaudhry, W.; Taiwo, B.O.; Hosseini, S.; Rehman, H. Decision Intelligence-Based Predictive Modelling of Hard Rock Pillar Stability Using K-Nearest Neighbour Coupled with Grey Wolf Optimization Algorithm. Processes 2024, 12, 783. [Google Scholar] [CrossRef]
  69. Zhou, J.; Su, Z.; Hosseini, S.; Tian, Q.; Lu, Y.; Luo, H.; Xu, X.; Chen, C.; Huang, J. Decision Tree Models for the Estimation of Geo-Polymer Concrete Compressive Strength. Math. Biosci. Eng. 2024, 21, 1413–1444. [Google Scholar] [CrossRef] [PubMed]
  70. Hosseini, S.; Javanshir, S.; Sabeti, H.; Tahmasebizadeh, P. Mathematical-Based Gene Expression Programming (GEP): A Novel Model to Predict Zinc Separation from a Bench-Scale Bioleaching Process. J. Sustain. Metall. 2023, 9, 1601–1619. [Google Scholar] [CrossRef]
  71. Hosseini, S.; Khatti, J.; Taiwo, B.O.; Fissha, Y.; Grover, K.S.; Ikeda, H.; Pushkarna, M.; Berhanu, M.; Ali, M. Assessment of the Ground Vibration during Blasting in Mining Projects Using Different Computational Approaches. Sci. Rep. 2023, 13, 18582. [Google Scholar] [CrossRef]
  72. Zhao, J.; Hosseini, S.; Chen, Q.; Armaghani, D.J. Super Learner Ensemble Model: A Novel Approach for Predicting Monthly Copper Price in Future. Resour. Policy 2023, 85, 103903. [Google Scholar] [CrossRef]
  73. Hosseini, S.; Mousavi, A.; Monjezi, M. Prediction of Blast-Induced Dust Emissions in Surface Mines Using Integration of Dimensional Analysis and Multivariate Regression Analysis. Arab. J. Geosci. 2022, 15, 163. [Google Scholar] [CrossRef]
  74. Hosseini, S.; Jodeiri Shokri, B.; Mirzaghorbanali, A.; Nourizadeh, H.; Entezam, S.; Motallebiyan, A.; Entezam, A.; McDougall, K.; Karunasena, W.; Aziz, N. Predicting Axial-Bearing Capacity of Fully Grouted Rock Bolting Systems by Applying an Ensemble System. Soft Comput. 2024, 28, 10491–10518. [Google Scholar] [CrossRef]
  75. Hosseini, S.; Entezam, S.; Jodeiri Shokri, B.; Mirzaghorbanali, A.; Nourizadeh, H.; Motallebiyan, A.; Entezam, A.; McDougall, K.; Karunasena, W.; Aziz, N. Predicting Grout’s Uniaxial Compressive Strength (UCS) for Fully Grouted Rock Bolting System by Applying Ensemble Machine Learning Techniques. Neural Comput. Appl. 2024, 36, 18387–18412. [Google Scholar] [CrossRef]
  76. Taiwo, B.O.; Hosseini, S.; Fissha, Y.; Kilic, K.; Olusola, O.A.; Chandrahas, N.S.; Li, E.; Akinlabi, A.A.; Khan, N.M. Indirect Evaluation of the Influence of Rock Boulders in Blasting to the Geohazard: Unearthing Geologic Insights Fused with Tree Seed Based LSTM Algorithm. Geohazard Mech. 2024, 2, 244–257. [Google Scholar] [CrossRef]
  77. Zhang, Z.; Hosseini, S.; Monjezi, M.; Yari, M. Extension of Reliability Information of Z-Numbers and Fuzzy Cognitive Map: Development of Causality-Weighted Rock Engineering System to Predict and Risk Assessment of Blast-Induced Rock Size Distribution. Int. J. Rock Mech. Min. Sci. 2024, 178, 105779. [Google Scholar] [CrossRef]
  78. Kahraman, E.; Hosseini, S.; Taiwo, B.O.; Fissha, Y.; Jebutu, V.A.; Akinlabi, A.A.; Adachi, T. Fostering Sustainable Mining Practices in Rock Blasting: Assessment of Blast Toe Volume Prediction Using Comparative Analysis of Hybrid Ensemble Machine Learning Techniques. J. Saf. Sustain. 2024, 1, 75–88. [Google Scholar] [CrossRef]
  79. Hosseini, S.; Gordan, B.; Kalkan, E. Development of Z Number-Based Fuzzy Inference System to Predict Bearing Capacity of Circular Foundations. Artif. Intell. Rev. 2024, 57, 146. [Google Scholar] [CrossRef]
  80. Esangbedo, M.O.; Taiwo, B.O.; Abbas, H.H.; Hosseini, S.; Sazid, M.; Fissha, Y. Enhancing the Exploitation of Natural Resources for Green Energy: An Application of LSTM-Based Meta-Model for Aluminum Prices Forecasting. Resour. Policy 2024, 92, 105014. [Google Scholar] [CrossRef]
  81. Zhang, Y.-W.; Sorkin, V.; Aitken, Z.H.; Politano, A.; Behler, J.; Thompson, A.P.; Ko, T.W.; Ong, S.P.; Chalykh, O.; Korogod, D. Roadmap for the Development of Machine Learning-Based Interatomic Potentials. Model. Simul. Mater. Sci. Eng. 2025, 33, 23301. [Google Scholar] [CrossRef]
  82. Zhang, Y.; Gao, Z.; Wang, X.; Liu, Q. Predicting the Pore-Pressure and Temperature of Fire-Loaded Concrete by a Hybrid Neural Network. Int. J. Comput. Methods 2022, 19, 2142011. [Google Scholar] [CrossRef]
Figure 1. The correlation of the effective parameters and Ωt (t = 2, 4, and 6).
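An analysis along the lines of Figure 1 can be reproduced in a few lines of Python. The sketch below computes Pearson correlations between the composition inputs and the three Judd–Ofelt targets; the file name jo_dataset.csv, the column names, and the choice of Pearson correlation are assumptions for illustration, not the authors' actual data file or method.

```python
import pandas as pd

# Hypothetical file and column names standing in for the collected dataset.
df = pd.read_csv("jo_dataset.csv")

targets = ["Omega2", "Omega4", "Omega6"]
inputs = [c for c in df.columns if c not in targets]

# Pearson correlation of each input (oxide content) with each target,
# i.e., the quantities visualized in Figure 1.
corr = df.corr().loc[inputs, targets]
print(corr.round(3))
```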
Figure 2. The importance of each input parameter and its impact on Ωt (t = 2, 4, and 6).
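The caption does not restate how the importances in Figure 2 were computed; as one plausible reconstruction, the sketch below ranks the inputs using the impurity-based importances of a random forest. The estimator choice and the placeholder dataset are assumptions, not the authors' confirmed procedure.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("jo_dataset.csv")  # placeholder, as in the previous sketch
targets = ["Omega2", "Omega4", "Omega6"]
inputs = [c for c in df.columns if c not in targets]

# Impurity-based importances for one target; repeat for Omega4 and Omega6.
rf = RandomForestRegressor(n_estimators=500, random_state=42)
rf.fit(df[inputs], df["Omega2"])
ranking = sorted(zip(inputs, rf.feature_importances_), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```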
Figure 3. Correlation of measured and predicted Ω2 in the training phase.
Figure 4. Correlation of measured and predicted Ω2 in the testing phase.
Figure 5. Correlation of measured and predicted Ω4 in the training phase.
Figure 6. Correlation of measured and predicted Ω4 in the testing phase.
Figure 7. Correlation of measured and predicted Ω6 in the training phase.
Figure 8. Correlation of measured and predicted Ω6 in the testing phase.
Figure 9. Violin plot of the developed models for predicting Ω2 in both the training (left) and testing (right) phases.
Figure 10. Violin plot of the developed models for predicting Ω4 in both the training (left) and testing (right) phases.
Figure 11. Violin plot of the developed models for predicting Ω6 in both the training (left) and testing (right) phases.
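Violin plots such as those in Figures 9–11 can be drawn directly with matplotlib. The following minimal sketch uses synthetic residuals purely to illustrate the layout; the model names come from this study, but the error magnitudes are invented stand-ins, not the paper's results.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
models = ["MLP", "CatBoost", "XGBoost", "RF", "DeepBoost"]
# Synthetic stand-ins for the per-sample residuals of each model.
residuals = [rng.normal(0.0, s, 56) for s in (0.9, 0.8, 0.7, 0.75, 0.4)]

fig, ax = plt.subplots(figsize=(7, 4))
ax.violinplot(residuals, showmedians=True)
ax.set_xticks(range(1, len(models) + 1))
ax.set_xticklabels(models)
ax.set_ylabel("Prediction residual")
ax.set_title("Residual distribution per model (illustrative)")
plt.tight_layout()
plt.show()
```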
Table 1. Descriptive statistics of the effective parameters and Ωt (t = 2, 4, and 6).

| Type | Parameter | Mean | Median | Standard Deviation | Minimum | Maximum |
|---|---|---|---|---|---|---|
| Input | TeO2 | 46.483 | 45 | 23.523 | 0 | 80 |
| | SrO | 0.857 | 0 | 2.82 | 0 | 10 |
| | P2O5 | 1.786 | 0 | 7.325 | 0 | 35 |
| | CaO | 1.404 | 0 | 5.314 | 0 | 25.9 |
| | CaF2 | 1.571 | 0 | 3.666 | 0 | 10 |
| | K2O | 1.929 | 0 | 5.057 | 0 | 15 |
| | Bi2O3 | 0.963 | 0 | 3.002 | 0 | 15 |
| | TiO2 | 0.014 | 0 | 0.064 | 0 | 0.4 |
| | B2O3 | 23.377 | 28.25 | 24.307 | 0 | 79.5 |
| | Li2O | 2.361 | 0 | 6.457 | 0 | 25 |
| | CdF2 | 0.757 | 0 | 3.205 | 0 | 18 |
| | WO3 | 4.679 | 0 | 11.462 | 0 | 39.92 |
| | ZnO | 5.271 | 0 | 7.437 | 0 | 20 |
| | MgO | 3.357 | 0 | 6.063 | 0 | 15 |
| | Na2O | 4.193 | 0 | 6.032 | 0 | 19 |
| | Er2O3 | 1.119 | 1 | 1.332 | 0.01 | 10 |
| Output | Ω2 | 5.937 | 5.98 | 2.457 | 1.95 | 11.99 |
| | Ω4 | 1.847 | 1.645 | 0.958 | 0.171 | 5.39 |
| | Ω6 | 1.590 | 1.62 | 0.747 | 0.37 | 3.54 |
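Summary statistics like those in Table 1 follow directly from pandas. This is a minimal sketch assuming the dataset sits in a hypothetical jo_dataset.csv with one numeric column per parameter.

```python
import pandas as pd

df = pd.read_csv("jo_dataset.csv")  # placeholder dataset
# One row per parameter, one column per statistic, as in Table 1.
table1 = df.agg(["mean", "median", "std", "min", "max"]).T.round(3)
print(table1)
```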
Table 2. The calculated statistical indicators for the developed models for the prediction of Ω2.

| Model | Training R2 | Training VAF (%) | Training PI | Training Accuracy (%) | Training a-20 | Testing R2 | Testing VAF (%) | Testing PI | Testing Accuracy (%) | Testing a-20 |
|---|---|---|---|---|---|---|---|---|---|---|
| MLP | 0.907 | 90.382 | 1.055 | 99.877 | 0.852 | 0.869 | 86.708 | 0.389 | 99.830 | 0.786 |
| CatBoost | 0.920 | 90.743 | 1.103 | 99.886 | 0.907 | 0.887 | 88.681 | 0.605 | 99.874 | 0.714 |
| XGBoost | 0.931 | 92.344 | 1.142 | 99.889 | 0.852 | 0.929 | 87.468 | 0.722 | 99.870 | 0.786 |
| RF | 0.920 | 91.455 | 1.224 | 99.895 | 0.889 | 0.905 | 87.800 | 0.262 | 99.839 | 0.786 |
| DeepBoost | 0.974 | 96.704 | 1.297 | 99.895 | 0.944 | 0.971 | 96.282 | 1.108 | 99.902 | 0.929 |
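R2, VAF, and the a-20 index reported in Tables 2, 4, and 6 have standard definitions, which the sketch below implements. The PI formula used here (R2 + 0.01·VAF − RMSE) is one definition common in the literature and is an assumption, since the paper's formula section is not reproduced above; the percentage "Accuracy" column is omitted because its exact definition cannot be recovered from the tables alone.

```python
import numpy as np

def indicators(y_true, y_pred):
    """Common regression indicators; the PI formula is an assumed convention,
    and the a-20 index assumes nonzero measured values."""
    y, yh = np.asarray(y_true, float), np.asarray(y_pred, float)
    r2 = 1.0 - np.sum((y - yh) ** 2) / np.sum((y - y.mean()) ** 2)
    vaf = (1.0 - np.var(y - yh) / np.var(y)) * 100.0   # variance accounted for, %
    rmse = np.sqrt(np.mean((y - yh) ** 2))
    a20 = np.mean((yh / y >= 0.8) & (yh / y <= 1.2))   # fraction within +/-20%
    pi = r2 + 0.01 * vaf - rmse                        # assumed PI definition
    return {"R2": r2, "VAF": vaf, "PI": pi, "a-20": a20}

# Toy example:
print(indicators([5.9, 6.1, 4.8, 7.2], [5.8, 6.3, 4.9, 7.0]))
```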
Table 3. Rating the statistical indicators to select the best developed model for the prediction of Ω2.

| Model | Training R2 | Training VAF | Training PI | Training Accuracy | Training a-20 | Testing R2 | Testing VAF | Testing PI | Testing Accuracy | Testing a-20 | Total Rate | Model Rank |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MLP | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 1 | 2 | 12 | 5 |
| CatBoost | 2 | 2 | 2 | 2 | 4 | 2 | 4 | 3 | 4 | 1 | 26 | 4 |
| XGBoost | 4 | 4 | 3 | 3 | 1 | 4 | 2 | 4 | 3 | 2 | 30 | 2 |
| RF | 3 | 3 | 4 | 5 | 3 | 3 | 3 | 1 | 2 | 2 | 29 | 3 |
| DeepBoost | 5 | 5 | 5 | 4 | 5 | 5 | 5 | 5 | 5 | 5 | 49 | 1 |
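Tables 3, 5, and 7 apply a simple rank-sum selection scheme: for each indicator column, the five models receive scores from 1 (worst) to 5 (best), and the scores are summed over all ten columns. A minimal sketch of one column, using the testing-phase R2 values of Table 2:

```python
import numpy as np

models = ["MLP", "CatBoost", "XGBoost", "RF", "DeepBoost"]
r2_test = np.array([0.869, 0.887, 0.929, 0.905, 0.971])  # Table 2, testing R2

# Score 1..5 with the best value scoring 5 (double-argsort rank trick;
# ties, which the tables score equally, would need extra handling).
scores = np.argsort(np.argsort(r2_test)) + 1
for m, s in zip(models, scores):
    print(f"{m}: {s}")
# Repeating this for every indicator column and summing gives the Total Rate;
# DeepBoost's total of 49 out of a possible 50 places it first for Omega2.
```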
Table 4. The calculated statistical indicators for the developed models for the prediction of Ω4.

| Model | Training R2 | Training VAF (%) | Training PI | Training Accuracy (%) | Training a-20 | Testing R2 | Testing VAF (%) | Testing PI | Testing Accuracy (%) | Testing a-20 |
|---|---|---|---|---|---|---|---|---|---|---|
| MLP | 0.899 | 89.810 | 1.427 | 99.785 | 0.704 | 0.867 | 77.344 | 1.495 | 99.917 | 0.929 |
| CatBoost | 0.910 | 90.786 | 1.488 | 99.817 | 0.611 | 0.882 | 83.421 | 1.593 | 99.936 | 1.000 |
| XGBoost | 0.929 | 92.857 | 1.555 | 99.816 | 0.778 | 0.911 | 88.812 | 1.697 | 99.945 | 1.000 |
| RF | 0.919 | 91.429 | 1.512 | 99.831 | 0.704 | 0.897 | 87.325 | 1.663 | 99.941 | 1.000 |
| DeepBoost | 0.955 | 95.171 | 1.674 | 99.846 | 0.741 | 0.945 | 93.992 | 1.787 | 99.951 | 1.000 |
Table 5. Rating the statistical indicators to select the best developed model for the prediction of Ω4.

| Model | Training R2 | Training VAF | Training PI | Training Accuracy | Training a-20 | Testing R2 | Testing VAF | Testing PI | Testing Accuracy | Testing a-20 | Total Rate | Model Rank |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MLP | 1 | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 11 | 5 |
| CatBoost | 2 | 2 | 2 | 3 | 1 | 2 | 2 | 2 | 2 | 2 | 20 | 4 |
| XGBoost | 4 | 4 | 4 | 2 | 5 | 4 | 4 | 4 | 4 | 2 | 37 | 2 |
| RF | 3 | 3 | 3 | 4 | 2 | 3 | 3 | 3 | 3 | 2 | 29 | 3 |
| DeepBoost | 5 | 5 | 5 | 5 | 4 | 5 | 5 | 5 | 5 | 2 | 46 | 1 |
Table 6. The calculated statistical indicators for the developed models for the prediction of Ω6.

| Model | Training R2 | Training VAF (%) | Training PI | Training Accuracy (%) | Training a-20 | Testing R2 | Testing VAF (%) | Testing PI | Testing Accuracy (%) | Testing a-20 |
|---|---|---|---|---|---|---|---|---|---|---|
| MLP | 0.927 | 91.299 | 1.654 | 99.847 | 0.833 | 0.919 | 90.801 | 1.508 | 99.823 | 0.714 |
| CatBoost | 0.939 | 93.495 | 1.695 | 99.867 | 0.815 | 0.924 | 92.435 | 1.585 | 99.872 | 0.857 |
| XGBoost | 0.934 | 93.322 | 1.701 | 99.887 | 0.870 | 0.920 | 90.303 | 1.515 | 99.867 | 0.857 |
| RF | 0.953 | 94.049 | 1.738 | 99.880 | 0.815 | 0.949 | 94.783 | 1.650 | 99.876 | 0.786 |
| DeepBoost | 0.997 | 99.681 | 1.948 | 99.968 | 1.000 | 0.994 | 99.323 | 1.870 | 99.946 | 1.000 |
Table 7. Rating the statistical indicators to select the best developed model for the prediction of Ω6.

| Model | Training R2 | Training VAF | Training PI | Training Accuracy | Training a-20 | Testing R2 | Testing VAF | Testing PI | Testing Accuracy | Testing a-20 | Total Rate | Model Rank |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MLP | 1 | 1 | 1 | 1 | 3 | 1 | 2 | 1 | 1 | 1 | 13 | 5 |
| CatBoost | 3 | 3 | 2 | 2 | 1 | 3 | 3 | 3 | 3 | 3 | 26 | 3 |
| XGBoost | 2 | 2 | 3 | 4 | 4 | 2 | 1 | 2 | 2 | 3 | 25 | 4 |
| RF | 4 | 4 | 4 | 3 | 1 | 4 | 4 | 4 | 4 | 2 | 34 | 2 |
| DeepBoost | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 50 | 1 |
Table 8. Kruskal–Wallis H test summary of the median absolute error |Ω̂ − Ω| and RMSE across the models for the targets Ω2, Ω4, and Ω6.

| Target | Data Split | n | Kruskal–Wallis H | p-Value | Minimum Median Absolute Error | Minimum RMSE |
|---|---|---|---|---|---|---|
| Ω2 | Training | 56 | 1.77 | 0.78 | 0.52 | 0.61 |
| Ω2 | Test | 14 | 8.94 | 0.063 | 0.62 | 0.83 |
| Ω4 | Training | 56 | 9.06 | 0.06 | 0.19 | 0.23 |
| Ω4 | Test | 14 | 4.03 | 0.40 | 0.08 | 0.10 |
| Ω6 | Training | 56 | 79.55 | 2.17 × 10⁻¹⁶ | 0.03 | 0.05 |
| Ω6 | Test | 14 | 9.31 | 0.054 | 0.12 | 0.12 |
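The Kruskal–Wallis H test in Table 8 checks whether the per-sample absolute errors of the five models share the same median. With scipy this is a one-liner; the error arrays below are synthetic stand-ins (n = 14 per model, as in the test split), not the study's residuals.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)
# Synthetic absolute-error samples for five models (illustration only).
abs_errors = [np.abs(rng.normal(0.0, s, 14)) for s in (0.9, 0.8, 0.7, 0.75, 0.4)]

H, p = kruskal(*abs_errors)
print(f"H = {H:.2f}, p = {p:.3g}")  # small p -> at least one model's errors differ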