Article

Core Loss Prediction Model of High-Frequency Sinusoidal Excitation Based on Artificial Neural Network

School of Mechanical Engineering, Yangzhou University, Yangzhou 225000, China
*
Author to whom correspondence should be addressed.
Magnetochemistry 2025, 11(11), 93; https://doi.org/10.3390/magnetochemistry11110093
Submission received: 6 September 2025 / Revised: 16 October 2025 / Accepted: 23 October 2025 / Published: 25 October 2025

Abstract

The magnitude of core loss is a crucial factor affecting the efficiency of power converters. Because the mechanism of core loss is complex, its influencing factors are diverse, and materials are strongly coupled with operating conditions, traditional core loss prediction models struggle to achieve high-precision prediction. Based on the Artificial Neural Network (ANN), this paper investigates core loss under high-frequency sinusoidal excitation. The core loss training data are processed using a logarithmic transformation, and an ANN core loss prediction model is established with temperature, frequency, and magnetic flux density as features. The results show that, compared with non-logarithmic processing, logarithmic transformation of the data effectively improves the prediction accuracy (PA) of the ANN model. Within the ±10% error range, the maximum PA of the ANN prediction model reaches 98.48%, and the minimum Mean Absolute Percentage Error (MAPE) is as low as 2.58%. In addition, a comparison with the Steinmetz Equation (SE) and K-nearest neighbor (KNN) prediction models reveals that, for four materials, within the ±10% error range of the true core loss values, the minimum PA of the ANN model is 93.33% with an average of 95.38%; the minimum PA of the KNN model is 43.94% with an average of 62.07%; and the minimum PA of the SE model is 14.91% with an average of 19.83%. Furthermore, the MAPE of the ANN model remains within 5%.

1. Introduction and Literature Review

1.1. Research Background

With the development of third-generation power semiconductor technology, high frequency, high power density, and high reliability have become the development directions of power converter products. To achieve high efficiency and power density, in addition to a feasible design of the electrical parameters of magnetic components, the loss of magnetic components also needs to be low. The loss of magnetic components consists of winding loss and core loss. Winding loss in copper conductors can be accurately determined through electromagnetic finite element simulation [1]. Core loss, however, is the power loss generated by magnetic materials under high-frequency alternating magnetic flux; it is a complex phenomenon whose basic physical theory has not yet been fully established. Factors such as excitation waveform, frequency, stress, temperature, and material properties all affect the generation of core loss [2].

1.2. Literature Review

Regarding core loss prediction, extensive research has been conducted to date, falling mainly into two directions: (1) analytical models for core loss prediction and (2) data-driven core loss prediction models.
In the research on analytical models for core loss prediction, Charles P. Steinmetz first proposed the Steinmetz Equation (SE) model describing core loss in 1892 [3]; it is an empirical formula that calculates the total energy loss of a magnetic material, expressed as power loss per unit volume. However, the SE model ignores the influence of other factors, making high-precision prediction difficult. To address this, other scholars have made many modifications to the SE model; the most well-known are the Modified Steinmetz Equation (MSE) model [4], the Generalized Steinmetz Equation (GSE) [5], and the improved Generalized Steinmetz Equation (iGSE) [6]. In recent years, scholars have continued to refine these equations and propose new analytical models. ChengBo Li first proposed a novel model for predicting core loss based on the new vector magnetic circuit theory [7]. Based on the SE model, Thomas Guillod constructed an improved Generalized Composite Calculation method (iGCC) [8]. Asier Arruti established the Composite improved Generalized Steinmetz Equation (ciGSE) based on the traditional improved Generalized Steinmetz Equation (iGSE) [9].
In the research on data-driven core loss prediction models, Dixant Bikal Sapkota established a core loss prediction model using a Long Short-Term Memory (LSTM) network and studied its prediction accuracy for ten types of materials. The results showed that the average error for the ten considered materials was less than 7%, and the 95th percentile error was less than 23% [10]. Zhengzhao Li proposed a core loss prediction model combining the Fast Fourier Transform (FFT) and a feedforward neural network, and used multi-objective optimization to determine the optimal combination of hyperparameters; the optimized neural network outperformed traditional empirical methods in terms of accuracy [11]. The core loss estimation accuracy of the multi-layer perceptron (MLP) surrogate model constructed by Minwook Choi reached 91.78%, which was 17% higher than that of the SE model [12]. Deqiu Yang adopted a method of conducting three wiring tests (primary winding, secondary winding, and primary-secondary series connection) on a two-winding high-frequency transformer to extract the equivalent resistance, and combined it with an Artificial Neural Network (ANN) to predict the loss of the excitation inverter. This achieved high-precision modeling of high-frequency transformer loss and accurate measurement of the transformer’s equivalent resistance under different operating points [13]. Daniel Santamargarita proposed a scheme for online monitoring of the maximum temperature and loss of medium-frequency transformers using an ANN trained by finite element simulation, achieving high-precision online monitoring of maximum temperature and loss, with estimation errors both lower than 2% [14]. Navid Rasekh established a neural-network-aided loss map based on an ANN and studied the loss estimation of inductors and high-frequency transformers under multiple operating conditions. The results showed that the average relative error of the total loss was 6.61% for inductors and 3.48% for high-frequency transformers [15]. Thomas Guillod trained a dual-regression multi-layer perceptron ANN and used it to construct magnetic and thermal models, studying the magnetic properties, thermal properties, and multi-objective optimization of inductors under multi-variable coupling. The results showed that the calculation deviation of the model was less than 3% [16]. Giovanni Di Nuzzo adopted an adaptive artificial neural network method to establish multi-data-driven models for predicting the conduction loss and switching loss of SiC MOSFETs. The final results showed that the model’s conduction loss prediction error at 25 °C was 3.83%, and the switching loss prediction error for 25 mm² chips was 3.92% [17]. Junyun Deng used a Knowledge-Aware Artificial Neural Network initialized by an analytical model to establish a loss modeling framework suitable for planar magnetic components. The results showed that training with only 200 core loss samples or 50 winding loss samples could achieve most core loss errors less than 5% and most winding loss errors less than 3% [18]. In a follow-up study, Junyun Deng adopted a Knowledge-Aware Artificial Neural Network that integrates analytical loss expressions into a feedforward ANN, allowing the neural network to learn the residual between the predicted values of the analytical model and the true values.
On this basis, a modeling framework suitable for high-frequency core loss was established, and the effects of different analytical loss models, weight functions, and measurement errors in the training data on the performance of the Knowledge-Aware Neural Network (KANN) were studied. The results showed that this method could achieve high-precision prediction with only a small amount of training data, with an average error as low as 1.7–1.8% [19]. Bima Nugraha Sanusi established a DC-biased core loss prediction model suitable for high-frequency scenarios (500 kHz–3 MHz) based on the ANN; the average prediction errors for PC50 and PC200 materials were 7.2% and 6.7%, respectively [20]. Xiaobing Shen proposed a Deep Neural Network (DNN) method for core loss estimation, realizing high-precision prediction of core loss [21].
In summary, while analytical modeling approaches for core loss prediction can intuitively reflect the impact of various parameters on magnetic loss, they suffer from an over-reliance on empirical parameters. Empirical parameters in the SE model and its derivatives—such as k1, α1, and β1—exhibit high material specificity (for instance, Asier Arruti’s ciGSE [9] requires re-fitting parameters for different materials). This leads to poor generalizability across various core materials and often necessitates extensive experimental data [21]. Furthermore, many models primarily focus on frequency and magnetic flux density while overlooking factors like temperature and the extreme-value characteristics of core loss data, resulting in limited prediction accuracy under complex operating conditions.
In recent years, data-driven core loss prediction models have made significant progress. Neural network algorithms, capable of establishing complex nonlinear models while integrating multi-source features and generalizing to new operating conditions, are widely applied in core loss prediction research. However, many data-driven models neglect data distribution optimization. Studies such as Dixant Bikal Sapkota’s LSTM model [10] and Minwook Choi’s MLP model [12] fail to address extreme values in core loss data, which may cause gradient abnormalities during model training and reduce prediction stability. Additionally, some data-driven studies lack systematic comparison with classical models, only validating the model’s accuracy without conducting horizontal comparisons against classical analytical models or traditional machine learning models.
This paper constructs a core loss prediction model under high-frequency sinusoidal excitation based on an Artificial Neural Network (ANN). The data preprocessing pipeline applies a logarithmic transformation to the target variable followed by feature standardization, which aligns with recent trends in machine learning research for magnetic systems. As shown in recent studies by Xiaoyan Shen [22], Vasily A. Milyutin [23], and other researchers, normalization and feature encoding can effectively stabilize training and enhance prediction accuracy. The logarithmic transformation suppresses outliers and high-loss samples, enabling the ANN to learn smoother correlations between the physical variables and core loss. This addresses the neglect of data distribution in existing data-driven models, effectively mitigates the interference of extreme values with ANN training, and enhances the model’s ability to learn features from samples with different loss levels. To compensate for the insufficient model comparison in existing research, a comprehensive comparison of three models (ANN, KNN, and SE) is conducted on four core materials to fully validate the superiority of the proposed ANN model in high-frequency sinusoidal excitation scenarios.

2. Data Sources and Processing

2.1. Data Source

The research data on core loss are derived from Appendix 1 of Problem C in the 2024 China Postgraduate Mathematical Contest in Modeling (CPMCM). The data cover four materials: Material 1, Material 2, Material 3, and Material 4. For each material, the dataset contains core loss (y) under sinusoidal excitation at different temperatures (T), frequencies (F), and magnetic flux densities (Bm). For all four materials, the frequencies are above 50 kHz, which corresponds to high-frequency excitation. The amount of data is shown in Table 1.
For magnetic flux density, the file contains data of 1024 sampling points (sampled at equal intervals within one cycle). The maximum magnetic flux density is used as the research feature, and there is no missing data in the dataset. It should be noted that the CPMCM 2024 dataset includes only four magnetic core materials. Although this limitation constrains the generalization range, it ensures data uniformity and measurement consistency across materials, providing a reliable benchmark for model comparison.
To observe the data distribution, taking Material 1 as an example, a data distribution plot of the core loss of Material 1 was generated, as shown in Figure 1.
As shown in Figure 1, the core loss values of Material 1 vary regularly with the sample number. The maximum core loss is 1,223,675.08 W/m³ and the minimum is 684.05 W/m³, a difference of approximately 1.2 × 10⁶ W/m³, indicating a pronounced extreme-value problem that requires further treatment.

2.2. Data Processing

For an ANN, the existence of extreme values directly distorts the original distribution characteristics of the data. Neural networks update weights through backpropagation, and extreme values can cause gradient anomalies: for instance, the loss function increases sharply because of extreme values, which interferes with weight optimization and may even prevent the model from converging, leading to a significant decline in prediction accuracy. Methods for handling extreme values fall into two categories: removing or adjusting outliers after identifying them, and transformations that retain all samples. Since removing identified extreme values from the present dataset would discard valid information, shrink the sample size, or bias the distribution, a logarithmic transformation is applied to the core loss (y) data:
$$ y_{\log} = \ln(y + 1) \tag{1} $$
where $y_{\log}$ is the transformed core loss value and $y$ is the original core loss value. The processed core loss data for Material 1 are shown in Figure 2.
Figure 1 shows that the original core loss fluctuates drastically, with an enormous gap between high and low values. Such a distribution will cause the model to focus more on samples with high values and ignore the patterns of samples with low values during training. Figure 2 shows that the impact of extreme values is compressed by logarithmic transformation: the magnitude of high-loss samples on the vertical axis is significantly reduced, and the gap with low-loss samples is narrowed, enabling the model to learn the characteristics of samples with different loss levels more evenly.
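As a concrete illustration, Equation (1) and its inverse (used later in Equation (6)) map onto numpy's log1p and expm1 functions, which are numerically stable. This is a minimal sketch; the sample values below are placeholders spanning the range reported for Material 1, not the actual CPMCM data.

```python
import numpy as np

# Placeholder core loss values spanning several orders of magnitude (W/m^3)
y = np.array([684.05, 5.2e3, 8.7e4, 1223675.08])

# Equation (1): y_log = ln(y + 1); log1p is numerically stable for small y
y_log = np.log1p(y)

# Equation (6): inverse transform back to the real scale
y_restored = np.expm1(y_log)

print(y_log)       # compressed range, roughly 6.5 to 14.0
print(y_restored)  # recovers the original values up to float precision
```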

2.3. Correlation Analysis

Correlation analysis measures the degree of association between data features, and the input features of the ANN can be selected based on its results. Feature selection is a crucial part of building an ANN; its purpose is to identify, from the original feature set, the subset most relevant to the task objective. Reasonable feature selection reduces redundancy and noise, lowers computational cost, improves the learning efficiency of the model, and at the same time enhances generalization ability and prediction stability. The research dataset has three candidate input features: temperature, frequency, and magnetic flux density. The Spearman correlation coefficient is used to analyze the correlation between these three input features and core loss, and a heatmap is drawn as shown in Figure 3.
In Figure 3, the correlation coefficients indicate the strength of the monotonic relationship among variables. The correlation coefficient between magnetic flux density and core loss is 0.85, while the coefficients of frequency and temperature with core loss are 0.11 and −0.06, respectively. There is a strong positive correlation between core loss and magnetic flux density. Although the correlations of temperature and frequency with core loss are weaker, they still have non-negligible impacts. Therefore, temperature, frequency, and magnetic flux density are selected as input features, with core loss as the target variable.
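As an illustration, this Spearman screening can be reproduced with pandas; the file name and column labels below are hypothetical stand-ins for the CPMCM data layout.

```python
import pandas as pd

# Hypothetical file and column names; the actual CPMCM layout may differ.
df = pd.read_csv("material1.csv")  # columns: T, F, Bm, y

# Spearman rank correlation of each input feature with core loss
corr = df[["T", "F", "Bm", "y"]].corr(method="spearman")
print(corr["y"])  # e.g., Bm ~ 0.85, F ~ 0.11, T ~ -0.06, as in Figure 3
```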

3. ANN Core Loss Prediction Model

This paper adopts a multi-layer perceptron (MLP)-type Artificial Neural Network, whose structure is shown in Figure 4. It consists of an input layer, two hidden layers, and an output layer [16]. The input layer contains three features: temperature, frequency, and magnetic flux density. The output layer has a single output variable, core loss, and therefore contains one neuron.
The core formula system of ANN revolves around three major links: forward propagation (calculating predicted values), loss function (measuring prediction errors), and backpropagation (updating parameters). Features need to be standardized before input:
$$ m_0 = \frac{X - \mu}{\sigma} \tag{2} $$
where $X$ is the input feature matrix, $m_0$ is the standardized feature output, $\mu$ is the feature mean, and $\sigma$ is the feature standard deviation. The symbols that appear subsequently are defined in Table 2.
Among them, l = 1, 2, and 3 correspond to the connections from the input layer to Hidden Layer 1, from Hidden Layer 1 to Hidden Layer 2, and from Hidden Layer 2 to the output layer, respectively.
Forward propagation is the process in which data flows from the input layer to the output layer, and feature mapping is achieved through linear transformation and nonlinear activation. The calculation from the standardized output features to the hidden layer is as follows:
$$ Z_1 = W_1 m_0 + b_1, \qquad m_1 = \tanh(Z_1) \tag{3} $$
The calculation from the first hidden layer to the second hidden layer is a key step in the transition of features from preliminary extraction to in-depth abstraction, and the specific calculation is as follows:
$$ Z_2 = W_2 m_1 + b_2, \qquad m_2 = \tanh(Z_2) \tag{4} $$
The output m2 of the second hidden layer flows into the output layer, where these abstract features are converted into specific predicted values. The calculation of the output layer is as follows:
$$ \hat{y}_{\log} = W_3 m_2 + b_3 \tag{5} $$
where $\hat{y}_{\log}$ is the predicted value on the logarithmic scale. To restore the real scale, the logarithmic-scale prediction is converted back as follows:
$$ \hat{y} = e^{\hat{y}_{\log}} - 1 \tag{6} $$
where $\hat{y}$ is the predicted value on the real scale.
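As an illustration, Equations (2)–(6) can be collapsed into a single numpy forward pass. This is a sketch under the assumption that fitted parameters are available; the weight matrices are stored transposed relative to Equations (3)–(5) so that a batch of samples can be processed as rows.

```python
import numpy as np

def forward(X, params, mu, sigma):
    """Forward pass implementing Eqs. (2)-(6).

    X      : (n_samples, 3) raw features [T, F, Bm]
    params : dict with weights W1 (3x26), W2 (26x13), W3 (13x1)
             and biases b1, b2, b3
    mu, sigma : per-feature mean and standard deviation of the training set
    """
    m0 = (X - mu) / sigma                            # Eq. (2): standardization
    m1 = np.tanh(m0 @ params["W1"] + params["b1"])   # Eq. (3): hidden layer 1
    m2 = np.tanh(m1 @ params["W2"] + params["b2"])   # Eq. (4): hidden layer 2
    y_log_hat = m2 @ params["W3"] + params["b3"]     # Eq. (5): output layer
    return np.expm1(y_log_hat).ravel()               # Eq. (6): back to real scale
```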
The loss function measures the difference between the model’s predictions and the true values and provides a clear optimization objective (minimizing the loss) for parameter updates. The loss function used in the ANN combines the Mean Squared Error (MSE) with a regularization term:
$$ \mathrm{Loss} = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2 + \frac{\xi}{2} \left( \|W_1\|^2 + \|W_2\|^2 + \|W_3\|^2 \right) \tag{7} $$
The first term of Equation (7) is the Mean Squared Error (MSE), and the second is the regularization term. $\hat{y}_i$ denotes the model’s real-scale prediction for the $i$-th sample, and $y_i$ is the true value of the $i$-th sample. $\|W_l\|^2$ denotes the squared norm of the $l$-th weight matrix, and $\xi$ is the regularization coefficient.
Parameter update is the specific learning process of the ANN. Based on the gradient of the loss function, it adjusts weights and biases to reduce the loss [19]. The core formula for parameter update is as follows:
$$ \theta_{t+1} = \theta_t - \eta \nabla_{\theta_t} \mathrm{Loss} \tag{8} $$
where $\theta$ generically denotes the model parameters, such as $W_1$, $b_1$, $W_2$, and $b_2$; $t$ is the iteration index; and $\eta$ is the learning rate, which controls the step size: if it is too large, the optimizer may overshoot the loss minimum, while if it is too small, convergence is slow. $\nabla_{\theta_t} \mathrm{Loss}$ is the gradient of the loss function with respect to $\theta_t$, which determines the direction and magnitude of the weight and bias adjustments. The gradient is obtained by the backpropagation (BP) algorithm, which applies the chain rule backward from the output layer to the input layer.
The termination condition for the gradient descent algorithm in this paper is that the loss function does not decrease for 20 consecutive iterations. Meanwhile, to prevent infinite training, the maximum number of iterations is set to 500. The hyperparameters of the ANN in this paper—including the optimizer, learning rate, batch size, and neuron configuration—are determined through an empirical trial-and-error method. Specifically, while keeping other hyperparameters unchanged, each hyperparameter is adjusted individually, and multiple candidate values are tested to identify the configuration that achieves the lowest validation loss and stable convergence. The Adam optimizer is selected because it exhibits faster and more stable convergence compared with SGD and RMSProp. The learning rate is tested within the range of 0.001–0.01, and 0.005 is finally chosen. The batch size is adjusted among 16, 32, and 64, with 32 selected in the end. For the neuron configuration, multiple combinations are evaluated; the results show that the network with two hidden layers (containing 26 and 13 neurons, respectively) achieves the lowest validation error without overfitting.
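One plausible scikit-learn realization of this configuration is sketched below. The regularization strength alpha and the random seed are assumptions (the paper does not report them), and scikit-learn's early_stopping monitors a held-out validation fraction rather than the raw training loss, so this approximates, rather than reproduces, the stated stopping rule.

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Two hidden layers of 26 and 13 tanh neurons, Adam optimizer, learning rate
# 0.005, batch size 32, at most 500 iterations, and early stopping after 20
# iterations without improvement, matching the settings reported above.
ann = TransformedTargetRegressor(
    regressor=make_pipeline(
        StandardScaler(),                  # Eq. (2): feature standardization
        MLPRegressor(
            hidden_layer_sizes=(26, 13),
            activation="tanh",
            solver="adam",
            learning_rate_init=0.005,
            batch_size=32,
            alpha=1e-4,                    # regularization coefficient xi; value assumed
            max_iter=500,
            early_stopping=True,
            n_iter_no_change=20,
            random_state=42,               # seed assumed
        ),
    ),
    func=np.log1p, inverse_func=np.expm1,  # Eqs. (1) and (6)
)
# ann.fit(X_train, y_train); ann.predict() then returns real-scale core loss.
```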
Three evaluation metrics are used to evaluate the ANN model, which are the Mean Absolute Percentage Error (MAPE), Coefficient of Determination (R2), and an additional metric named prediction accuracy (PA).
Mean Absolute Percentage Error (MAPE) represents the average of the ratios of the errors between predicted values and true values to true values, and can reflect the magnitude of the relative error of the prediction results.
$$ \mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\hat{y}_i - y_i}{y_i} \right| \times 100\% \tag{9} $$
where $\hat{y}_i$ is the predicted value, $y_i$ is the true value, and $n$ is the total number of predictions. The larger the MAPE, the greater the difference between the predicted and actual values, and the lower the accuracy.
R2 measures the degree of fit of the regression model to the data, that is, the proportion of the variation in the target variable that the model can explain.
$$ R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} \tag{10} $$
where $\bar{y}$ is the mean of the true values. The closer $R^2$ is to 1, the better the model fits the data.
Prediction accuracy (PA) is the fraction of predictions that fall within an error interval centered on each true value, which gives an intuitive view of the model’s performance under a given accuracy requirement. Here the error interval is [0.9 × true value, 1.1 × true value].
$$ \mathrm{PA} = \frac{k}{n} \times 100\% \tag{11} $$
where $k$ is the number of predicted values falling within the corresponding interval. The higher the PA, the more predictions are concentrated around the true values.
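The three metrics of Equations (9)–(11) map directly onto a few lines of numpy; a minimal sketch, assuming real-scale prediction and truth arrays of equal length:

```python
import numpy as np

def evaluate(y_true, y_pred, band=0.10):
    """Return MAPE (Eq. 9), R^2 (Eq. 10), and PA (Eq. 11)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mape = np.mean(np.abs((y_pred - y_true) / y_true)) * 100.0
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    # PA: share of predictions inside [0.9 * true, 1.1 * true]
    pa = np.mean(np.abs(y_pred - y_true) <= band * np.abs(y_true)) * 100.0
    return mape, r2, pa
```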

4. Results Comparison

4.1. Comparison with Other Models

The data of the four materials are divided into a training set, a validation set, and a test set in proportions of 70%, 15%, and 15%, respectively. The training set is the main dataset from which the model learns the data patterns; the validation set is used to evaluate performance during training and to tune the hyperparameters accordingly; the test set is used to evaluate the model’s generalization ability once training and optimization are complete.
It should be noted that although traditional k-fold cross-validation was not performed within a single dataset, the proposed Artificial Neural Network (ANN) model underwent independent training and testing on four different magnetic materials separately. This design provides a form of material-based external validation, which can effectively evaluate the model’s robustness across different material domains. The predictive performance (MAPE, R2, and PA) observed among four materials further confirms the reliability of the adopted validation strategy.
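A minimal sketch of the 70/15/15 division described above using scikit-learn, assuming a feature matrix X (temperature, frequency, peak flux density) and target vector y for one material; the random seed is an assumption, as the paper does not report one.

```python
from sklearn.model_selection import train_test_split

# 70/15/15 split: carve off 30% first, then halve it into validation and test.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, random_state=42)
```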
First, the accuracy of the ANN core loss prediction model is compared with and without logarithmic transformation of the research data. The prediction results for the core loss of the four materials are shown in Table 3.
Table 3 shows that for the ANN model using logarithmic data processing, the maximum PA and minimum MAPE are 93.33% and 4.56%, respectively; for the ANN model without logarithmic transformation processing, the maximum PA and minimum MAPE are 74.44% and 8.11%, respectively. For the four materials, the PA of the ANN model with logarithmic processing is higher than that of the model without logarithmic processing, and MAPE is also smaller than that of the model without logarithmic processing. Logarithmic transformation of the data can improve the accuracy of the ANN model.
To further verify the prediction effect of the ANN core loss prediction model, the Steinmetz Equation (SE) model and K-nearest neighbor (KNN) algorithm model are used to predict core loss for comparison, with the main evaluation indicators for comparison being MAPE, R2, and PA.
The SE model is a classic core loss prediction model. Under sinusoidal excitation, the core loss calculation formula of the SE model is
$$ \hat{y}_{SE} = k_1 \cdot f^{\alpha_1} \cdot B_m^{\beta_1} \tag{12} $$
where $\hat{y}_{SE}$ is the core loss; $f$ is the frequency; $B_m$ is the peak magnetic flux density; and $k_1$, $\alpha_1$, and $\beta_1$ are coefficients fitted from experimental data, generally with $1 < \alpha_1 < 3$ and $2 < \beta_1 < 3$. The formula indicates that the core loss per unit volume (core loss density) depends on power functions of the frequency $f$ and the peak magnetic flux density $B_m$. For the four materials, the fitted coefficients are shown in Table 4.
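The paper does not specify how the Table 4 coefficients were fitted; one standard approach, sketched below, is ordinary least squares in log space, since taking the logarithm of Equation (12) yields a model linear in ln k1, α1, and β1 (inputs are assumed to be numpy arrays).

```python
import numpy as np

def fit_steinmetz(f, Bm, y):
    """Fit k1, alpha1, beta1 of Eq. (12) by least squares in log space:
    ln(y) = ln(k1) + alpha1 * ln(f) + beta1 * ln(Bm)."""
    A = np.column_stack([np.ones_like(f), np.log(f), np.log(Bm)])
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]   # k1, alpha1, beta1

def predict_steinmetz(f, Bm, k1, alpha1, beta1):
    return k1 * f**alpha1 * Bm**beta1          # Eq. (12)
```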
KNN is an instance-based regression model. When predicting new input features, KNN does not need to construct an explicit model; instead, it finds the K-nearest neighbors to the input features and uses the weighted average of their target values as the prediction result. KNN regression can adapt to local feature changes but is sensitive to high-dimensional data. KNN predicts results through the weighted average, where the weight can be defined as the reciprocal of the distance between samples, calculated as follows:
$$ \hat{y}_{KNN} = \frac{\sum_{i=1}^{K} \dfrac{y_i}{d(x, x_i)}}{\sum_{i=1}^{K} \dfrac{1}{d(x, x_i)}} \tag{13} $$
In core loss prediction, $x_i$ are the $K$ training-set input features closest to the new input feature $x$, $y_i$ are the core losses corresponding to these input features, $1/d(x, x_i)$ is the weight assigned to $y_i$ (the greater the distance, the smaller the weight), and $\hat{y}_{KNN}$ is the distance-weighted average of the neighbors’ losses.
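Equation (13) corresponds to distance-weighted KNN regression as provided by scikit-learn; a minimal sketch follows, where the neighbor count K = 5 is an assumption, since the paper does not report the value used.

```python
from sklearn.neighbors import KNeighborsRegressor

# Distance-weighted KNN regression (Eq. 13): each neighbor's core loss is
# weighted by the reciprocal of its distance to the query point.
knn = KNeighborsRegressor(n_neighbors=5, weights="distance")  # K = 5 assumed
knn.fit(X_train, y_train)
y_pred_knn = knn.predict(X_test)
```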
For the four types of materials, the calculation results of three models (ANN, KNN, and SE) are shown in Figure 5 and Table 5.
The core loss plots for the four materials in Figure 5 show that the predictions of each model follow the variation trend of the true core loss well, with the ANN model’s predictions closest to the true values. The scatter plots show that the predictions of all three models lie around the ideal fitting line, with the ANN model’s predictions relatively closest to it. This indicates that, among the three models, the ANN model has the best prediction performance.
Table 5 shows that, for Material 1, the ANN model achieves the highest PA of 96.27%, while the SE model has the lowest PA at only 15.26%. The R2 values of all models are above 0.9, which can capture the overall variation trend of the data well. However, only the MAPE of the ANN model is within 5%. For Material 2, the accuracy of the KNN model is significantly improved compared with that for Material 1, with a PA of 83.64% and MAPE reduced to 6.62%. Although the prediction accuracy of the SE model is improved, it is still lower than those of the other two models. The MAPE, R2, and PA of the ANN model are 4.56%, 0.9791, and 93.33%, respectively, which are better than those of the KNN and SE models. For Material 3, the ANN model still performs the best among the three models in terms of prediction accuracy and data fitting degree, with a PA of 93.42% and MAPE of 4.28%. For Material 4, the accuracy of the KNN model decreases compared with those for Material 2 and Material 3, with a PA of 43.94% and MAPE of 15.65%. In contrast, the accuracy of the ANN model is improved compared with those for the previous three materials, achieving a PA of 98.48%, R2 of 0.9988, and MAPE of only 2.58%. In addition, the KNN model exhibits better MAPE performance for Material 2 compared with the other materials. As shown in Figure 5a,c,e,f, the data distribution of Material 2 is more uniform, with significantly less clustering imbalance among the sample points than in the other materials. The denser and smoother distribution of Material 2 allows the distance-based local interpolation of the KNN model to perform more effectively. In contrast, for the other three materials with less uniform data distributions, the distance-weighted interpolation of KNN cannot adequately capture the coupled nonlinear relationships among features, resulting in decreased prediction accuracy.

4.2. Interpretability Analysis

For the interpretability analysis of the model, Material 1 is taken as an example. The SHapley Additive exPlanations (SHAP) method is adopted to explore how the input features contribute to the core loss predictions. First, background data were randomly sampled from the standardized training set; an explainer was then initialized with KernelExplainer, which is compatible with the MLP regression model under the scikit-learn framework. Taking the model’s prediction function and the background data as inputs, the explainer encapsulates the model’s prediction logic for interpretation; it was then used to calculate SHAP values for the features of the standardized test set. The SHAP values quantify the contribution of each feature dimension of each sample to the model output, and the relationships between features and core loss predictions are presented from three perspectives: global feature importance, single-feature impact trends, and single-sample local explanations. The global parameter importance for Material 1 is shown in Figure 6.
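A minimal sketch of this SHAP workflow, assuming a fitted scikit-learn MLP named mlp and standardized feature arrays X_train_std and X_test_std from the earlier steps; the background-sample size of 100 is an assumption.

```python
import numpy as np
import shap

# Background set: a small random sample of standardized training points.
rng = np.random.default_rng(0)
background = X_train_std[rng.choice(len(X_train_std), size=100, replace=False)]

# KernelExplainer wraps the fitted model's prediction function.
explainer = shap.KernelExplainer(mlp.predict, background)
shap_values = explainer.shap_values(X_test_std)  # shape (n_samples, 3): T, F, Bm

# Summary plot corresponding to Figure 6 (global feature importance).
shap.summary_plot(shap_values, X_test_std, feature_names=["T", "F", "Bm"])
```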
Figure 6 shows the impact of each feature on the output of the core loss prediction model. The features are ranked in descending order of their mean absolute SHAP values as follows: peak magnetic flux density (Bm), frequency (F), and temperature (T). This indicates that the peak magnetic flux density has the most significant impact on the model’s predictions, followed by frequency, while temperature has the relatively weakest impact.
In Figure 6, red represents feature values with high magnitudes, and blue represents those with low magnitudes. From the correlation between feature values and SHAP values, high magnitudes of peak magnetic flux density and frequency (the red regions) correspond to positive SHAP values, which promote an increase in the predicted core loss; low magnitudes (the blue regions) correspond to negative SHAP values, which inhibit an increase in the predicted core loss.
In contrast, the SHAP values corresponding to high and low temperature values are scattered across both positive and negative ranges, indicating that temperature affects core loss in a more complex pattern. Nevertheless, high temperature values predominantly correspond to negative SHAP values (inhibiting an increase in the predicted core loss), while low temperature values predominantly correspond to positive SHAP values (promoting an increase in the predicted core loss).

4.3. Result Analysis

To gain a more intuitive understanding of the prediction error distribution and to evaluate the robustness and reliability of the models, residual histograms of the ANN model’s predictions were plotted in Figure 7 to show how the residuals are distributed relative to the zero-error line. In addition, Table 6 summarizes the mean prediction residual of each model across the four materials to analyze their systematic deviations.
In Figure 7, the residuals of the ANN model’s predictions for four materials are concentrated near the zero-error line and exhibit an approximately symmetric distribution, indicating that the ANN model has no significant systematic bias. The narrow and sharp shapes of the histograms suggest small residual variances and stable prediction performance. Among them, Material 4 shows the most concentrated residual distribution, implying the highest prediction consistency, while Materials 1–3 present slightly wider residual distributions, which can be attributed to the greater data dispersion of these materials.
As shown in Table 6, the ANN model exhibits relatively small mean residuals for all four materials (within ±1300 W/m³), indicating that its predictions have no significant systematic bias. In contrast, the SE model shows large fluctuations in mean residuals across materials, with a clear overestimation for Material 3 and an underestimation trend for the others. The KNN model presents positive mean residuals across all materials, suggesting an overall tendency to overestimate.
A comprehensive analysis of the prediction results for the three models across four materials indicates that the ANN model performs best in terms of prediction accuracy and bias control. Within the ±10% error range of the true values in the test set, the minimum PA of the ANN model is 93.33%, with an average of 95.38%, whereas the average PA values of KNN and SE models are 62.07% and 19.83%, respectively. From a statistical perspective, the ANN model not only maintains small mean residuals but also demonstrates good stability across different materials. Its error distribution is approximately symmetric and centered around zero, suggesting that the combination of logarithmic transformation and nonlinear fitting effectively reduces heteroscedasticity and enhances the generalization stability of the model.
In addition, it is worth noting from Table 6 that the KNN model shows a smaller mean residual than the ANN model does for Material 3, but its PA value is lower. This occurs because the mean residual may approach zero when positive and negative errors offset each other, and thus a smaller mean residual does not necessarily imply higher prediction accuracy. To comprehensively evaluate the predictive performance of a model, multiple indicators should be considered together rather than relying solely on the mean residual.

5. Conclusions

Using data from Problem C of the 2024 China Postgraduate Mathematical Contest in Modeling, this paper studies core loss under high-frequency sinusoidal excitation and builds a core loss prediction model based on the ANN method. By applying a logarithmic transformation to the core loss data in the training set fed to the ANN, the prediction of core loss under high-frequency sinusoidal excitation is improved.
A comparison between the ANN models with and without logarithmic transformation shows that, after logarithmic transformation of the core loss (taking Material 1 as an example), PA increases from 57.35% to 96.27% and MAPE decreases from 14.27% to 3.86%. Further comparison of the three prediction models (ANN, KNN, and SE) on the four materials reveals that the prediction accuracy of the ANN is higher than those of the other two models. Within the ±10% error range of the true values in the test set, the average PA of the ANN’s predictions for the four materials reaches 95.38%, the average R² reaches 0.9873, the highest MAPE is 4.56%, and the lowest MAPE is only 2.58%. This confirms the better performance of the ANN model with logarithmic transformation of the training data in core loss prediction.
The ANN model proposed in this paper can complete training in approximately 1.49 s under an ordinary computer environment (Intel i9 CPU, 32 GB memory), with a peak memory usage of about 0.19 MB—indicating that the model has a low computational burden. This provides a practical data-driven method for predicting core loss. In practical applications, the trained model can be integrated into the core design process as a fast surrogate model to assist in material selection and performance evaluation, or combined with finite element simulation to reduce the computational cost in the design optimization process.
Finally, the dataset used here includes only three input parameters: magnetic flux density waveform, frequency, and temperature. Future work will focus on expanding the dataset to include a broader range of materials and excitation conditions, testing the model under non-sinusoidal waveforms, and exploring transfer learning techniques to improve adaptability across different magnetic materials. These efforts will further enhance the generalization and practical applicability of the proposed approach.

Author Contributions

Conceptualization, C.L.; methodology, C.L.; software, F.M.; validation, F.M. and J.Z.; formal analysis, F.M.; investigation, Z.Z.; resources, C.L.; data curation, C.L. and F.M.; writing—original draft preparation, C.L. and F.M.; writing—review and editing, C.L. and F.M.; visualization, J.Z. and Z.Z.; supervision, C.L.; project administration, C.L.; funding acquisition, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Jiangsu Provincial Postgraduate Research and Practice Innovation Program Fund (SJCX24_2216).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Restrictions apply to the availability of these data. Data were obtained from Problem C of the “Huawei Cup” 21st China Postgraduate Mathematical Modeling Competition and are available on the official website of the China Postgraduate Mathematical Contest in Modeling with the permission of the organizer.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shi, H.; Jin, Z. Multi-Condition Magnetic Core Loss Prediction and Magnetic Component Performance Optimization Based on Improved Deep Forest. IEEE Access 2025, 13, 82261–82277. [Google Scholar] [CrossRef]
  2. Barg, S.; Barg, S.; Bertilsson, K. A Review on the Empirical Core Loss Models for Symmetric Flux Waveforms. IEEE Trans. Power Electron. 2024, 40, 1609–1621. [Google Scholar] [CrossRef]
  3. Steinmetz, C.P. On the law of hysteresis. Proc. IEEE 1984, 72, 197–221. [Google Scholar] [CrossRef]
  4. Reinert, J.; Brockmeyer, A.; De Doncker, R.W. Calculation of losses in ferro-and ferrimagnetic materials based on the modified Steinmetz equation. IEEE Trans. Ind. Appl. 2001, 37, 1055–1061. [Google Scholar] [CrossRef]
  5. Li, J.; Abdallah, T.; Sullivan, C.R. Improved calculation of core loss with nonsinusoidal waveforms. In Proceedings of the 2001 IEEE Industry Applications Conference. 36th IAS Annual Meeting (Cat. No. 01CH37248), Chicago, IL, USA, 30 September–4 October 2001; Volume 4, pp. 2203–2210. [Google Scholar] [CrossRef]
  6. Venkatachalam, K.; Sullivan, C.R.; Abdallah, T.; Tacca, H. Accurate prediction of ferrite core loss with nonsinusoidal waveforms using only Steinmetz parameters. In Proceedings of the 2002 IEEE Workshop on Computers in Power Electronics, Mayaguez, PR, USA, 3–4 June 2002; Proceedings. pp. 36–41. [Google Scholar] [CrossRef]
  7. Li, C.; Cheng, M.; Qin, W.; Wang, Z.; Ma, X.; Wang, W. Analytical loss model for magnetic cores based on vector magnetic circuit theory. IEEE Open J. Power Electron. 2024, 5, 1659–1670. [Google Scholar] [CrossRef]
  8. Guillod, T.; Lee, J.S.; Li, H.; Wang, S.; Chen, M.; Sullivan, C.R. Calculation of ferrite core losses with arbitrary waveforms using the composite waveform hypothesis. In Proceedings of the 2023 IEEE Applied Power Electronics Conference and Exposition (APEC), Orlando, FL, USA, 19–23 March 2023; pp. 1586–1593. [Google Scholar] [CrossRef]
  9. Arruti, A.; Anzola, J.; Pérez-Cebolla, F.J.; Aizpuru, I.; Mazuela, M. The composite improved generalized steinmetz equation (ciGSE): An accurate model combining the composite waveform hypothesis with classical approaches. IEEE Trans. Power Electron. 2023, 39, 1162–1173. [Google Scholar] [CrossRef]
  10. Sapkota, D.B.; Neupane, P.; Joshi, M.; Khan, S. Deep learning model for enhanced power loss prediction in the frequency domain for magnetic materials. IET Power Electron. 2024, 1–12. [Google Scholar] [CrossRef]
  11. Li, Z.; Wang, L.; Liu, R.; Mirzadarani, R.; Luo, T.; Lyu, D.; Niasar, M.G.; Qin, Z. A data-driven model for power loss estimation of magnetic materials based on multi-objective optimization and transfer learning. IEEE Open J. Power Electron. 2024, 5, 605–617. [Google Scholar] [CrossRef]
  12. Choi, M.; Park, S.; Jang, E.; Ouk, M.; Park, K.; Lee, S.; Noh, G. Fabrication-specific simulation of Mn-Zn ferrite core-loss for machine learning-based surrogate modeling with limited experimental data. IEEE Trans. Power Electron. 2024, 40, 1519–1531. [Google Scholar] [CrossRef]
  13. Yang, D.; Wang, B.; Shao, S.; Zhang, J. High-Frequency Transformer Loss Measurement and Modeling: A DC Loss Method. IEEE Trans. Power Electron. 2024, 40, 5635–5645. [Google Scholar] [CrossRef]
  14. Santamargarita, D.; Molinero, D.; Bueno, E.; Marrón, M.; Vasić, M. On-line monitoring of maximum temperature and loss distribution of a medium frequency transformer using artificial neural networks. IEEE Trans. Power Electron. 2023, 38, 15818–15828. [Google Scholar] [CrossRef]
  15. Rasekh, N.; Wang, J.; Yuan, X. Artificial neural network aided loss maps for inductors and transformers. IEEE Open J. Power Electron. 2022, 3, 886–898. [Google Scholar] [CrossRef]
  16. Guillod, T.; Papamanolis, P.; Kolar, J.W. Artificial neural network (ANN) based fast and accurate inductor modeling and design. IEEE Open J. Power Electron. 2020, 1, 284–299. [Google Scholar] [CrossRef]
  17. Di Nuzzo, G.; Pai, A.P.; Su, Y. Adaptive artificial neural networks for power loss prediction in SiC MOSFETs. In Proceedings of the 2024 IEEE 10th Electronics System-Integration Technology Conference (ESTC), Berlin, Germany, 11–13 September 2024; pp. 1–8. [Google Scholar] [CrossRef]
  18. Deng, J.; Wang, W.; Venugopal, P.; Popovic, J.; Rietveld, G. Knowledge-aware artificial neural network for loss modeling of planar magnetic components. In Proceedings of the 2022 IEEE Energy Conversion Congress and Exposition (ECCE), Detroit, MI, USA, 9–13 October 2022; pp. 1–6. [Google Scholar] [CrossRef]
  19. Deng, J.; Wang, W.; Ning, Z.; Venugopal, P.; Popovic, J.; Rietveld, G. High-frequency core loss modeling based on knowledge-aware artificial neural network. IEEE Trans. Power Electron. 2023, 39, 1968–1973. [Google Scholar] [CrossRef]
  20. Sanusi, B.N.; Zambach, M.; Frandsen, C.; Beleggia, M.; Jørgensen, A.M.; Ouyang, Z. Investigation and modeling of DC bias impact on core losses at high frequency. IEEE Trans. Power Electron. 2023, 38, 7444–7458. [Google Scholar] [CrossRef]
  21. Shen, X.; Wouters, H.; Martinez, W. Deep neural network for magnetic core loss estimation using the magnet experimental database. In Proceedings of the 2022 24th European Conference on Power Electronics and Applications (EPE’22 ECCE Europe), Hanover, Germany, 5–9 September 2022; pp. 1–8. [Google Scholar]
  22. Shen, X.; Zhong, H.; Wu, H.; Mao, Y.; Han, R. Bi-objective optimization of magnetic core loss and magnetic energy transfer of magnetic element based on a hybrid model integrating GAN and NSGA-II. Int. J. Electr. Power 2025, 170, 110834. [Google Scholar] [CrossRef]
  23. Milyutin, V.A.; Bureš, R.; Faberova, M.; Birčáková, Z.; Molčanová, Z.; Kunca, B. Machine learning assisted optimization of soft magnetic properties in ternary Fe–Si–Al alloys. J. Mater. Res. Technol. 2024, 29, 5060–5073. [Google Scholar] [CrossRef]
Figure 1. Distribution chart of unprocessed core loss data.
Figure 2. Data chart of core loss after logarithmic transformation.
Figure 3. Correlation matrix of the input features (temperature, frequency, and peak magnetic flux density) and core loss.
Figure 4. Architecture of the proposed ANN used for core loss prediction.
Figure 5. Comparison of core loss of different models ((a) core loss of Material 1; (b) model scatter plot of Material 1; (c) core loss of Material 2; (d) model scatter plot of Material 2; (e) core loss of Material 3; (f) model scatter plot of Material 3; (g) core loss of Material 4; (h) model scatter plot of Material 4).
Figure 6. Global parameter importance summary plot for Material 1.
Figure 7. Residual histograms of ANN model for four materials ((a) Material 1; (b) Material 2; (c) Material 3; (d) Material 4).
Table 1. The total amount of data provided for the four materials.

Material      Data Volume
Material 1    1067
Material 2    1097
Material 3    1010
Material 4    880
Table 2. Symbol definitions.

Symbol    Meaning
Z_l       Weighted sum of the l-th layer
m_l       Activation output of the l-th layer
W_l       Weight matrix of the l-th layer
b_l       Bias vector of the l-th layer
Table 3. Comparison of two data processing methods for the ANN model.

Material      Evaluation Indicator    Logarithmic Transformation    Without Logarithmic Transformation
Material 1    MAPE                    3.86%                         14.27%
              R²                      0.9849                        0.9963
              PA                      96.27%                        57.35%
Material 2    MAPE                    4.56%                         11.08%
              R²                      0.9791                        0.9945
              PA                      93.33%                        62.45%
Material 3    MAPE                    4.28%                         9.53%
              R²                      0.9865                        0.9980
              PA                      93.42%                        69.48%
Material 4    MAPE                    2.58%                         8.11%
              R²                      0.9988                        0.9952
              PA                      98.48%                        74.44%
Table 4. Fitting parameters of the SE model for the four materials.

Material      k1       α1       β1
Material 1    1.614    1.418    2.432
Material 2    0.632    1.495    2.280
Material 3    0.713    1.521    2.4151
Material 4    0.254    1.623    2.479
Table 5. Prediction results for each material.

Material      Evaluation Indicator    ANN Model    KNN Model    SE Model
Material 1    MAPE                    3.86%        16.18%       37.40%
              R²                      0.9849       0.9608       0.9428
              PA                      96.27%       50.31%       14.91%
Material 2    MAPE                    4.56%        6.62%        47.92%
              R²                      0.9791       0.9473       0.9318
              PA                      93.33%       83.64%       22.42%
Material 3    MAPE                    4.28%        9.81%        34.43%
              R²                      0.9865       0.9462       0.9498
              PA                      93.42%       70.39%       23.03%
Material 4    MAPE                    2.58%        15.65%       32.88%
              R²                      0.9988       0.9357       0.9303
              PA                      98.48%       43.94%       18.94%
Table 6. Residual summary for the different models (mean residual, W/m³).

Material      ANN         KNN        SE
Material 1    948.98      3734.31    −3329.16
Material 2    1280.33     4618.93    −2372.79
Material 3    729.82      226.50     5004.48
Material 4    384.42      1719.50    −1562.34