Thermal Behavior Modeling Based on BP Neural Network in Keras Framework for Motorized Machine Tool Spindles

This paper presents the development and evaluation of neural network models using a small input–output dataset to predict the thermal behavior of a high-speed motorized spindle. Different multi-output neural regression models were developed and evaluated in Keras, currently one of the most popular deep learning frameworks. The ANN models were developed and evaluated considering the following: the influence of the topology (the number of hidden layers and of neurons within them), the learning parameter, and the validation technique. The neural network was then simulated using a dataset that was completely unknown to the network. The ANN model was used for analyzing the effect of working conditions on the thermal behavior of the motorized grinding spindle. The prediction accuracy of the ANN model for the spindle thermal behavior ranged from 95% to 98%. The results show that an ANN model built from a small dataset can accurately predict the temperature of the spindle under different working conditions. In addition, the analysis showed a very strong effect of the coolant type on the spindle unit temperature, particularly for intensive cooling with water.


Introduction
The importance of machine tool manufacturing in the metal machining industry is evident today, and it has a twofold effect on the industry. First, machine tool manufacturing as a part of the industry affects the position and the importance of the industry through its level of development and manufacturing output. Second, machine tool manufacturing creates the means of the working process for the metal-cutting industry, which, in turn, increases its total efficiency. In today's period of industrial growth, the application of the high-speed motorized spindle has significantly increased machining productivity and reduced manufacturing cost. However, its high speed and compact structure impose some negative effects on spindle thermal behavior. Heat generation from spindle motors and bearings generally influences the temperature rise of a motorized spindle unit in operation, resulting in errors, which are the main reasons for the reduction in machine tools' precision and accuracy. The thermal behavior caused by the temperature rise is one of the main causes of the inaccuracy of machine tools. Thermal errors account for 60 to 70% of total machine tool errors [1]. On the other hand, approximately 75% of machined workpieces' geometrical errors occur due to temperature influence [2]. The heat generated in the bearings and motor during the operation of high-speed spindles is often considerably high, so active cooling is required. With the correct choice of coolant and coolant flow, it is possible to influence the temperature rise and thereby reduce errors due to heat load. For active cooling of the housing, water with an anticorrosive additive or a special oil is used. The application of cooling oil is particularly popular in warm and humid Asian areas, due to the oil's lower susceptibility to microbial contamination. Air–oil mist is commonly used for lubricating the bearings of such spindles. Earlier studies have modeled spindle thermal behavior with the feed-forward (FF) network, which has good approximation ability and globally optimal performance. Cui et al.
[33] proposed a model based on the five-point method using the multiple linear regression (MLR) method, BP, and RBF neural networks to establish the thermal error prediction of the motorized spindle. Lv et al. [34] improved the prediction accuracy and generalization ability of ANNs by using a generalized RBF neural network modeling method and applied it to the thermal error modeling of the spindle housing. Zhang and Fu [35,36] improved the accuracy of the RBF neural network, developing the thermal error prediction model. They applied the genetic algorithm, particle swarm algorithm, and chicken flock algorithm to optimize the important parameters (hidden layer and output layer weights) of the RBF neural network. Dynamic neural network models have better robustness in modeling spindle thermal behavior even though temperatures change and the thermo-elastic process varies nonlinearly under different working conditions [37]. Kang et al. [38] proposed a modified method that combined a feed-forward neural network (FNN) and hybrid filters for the prediction of thermal deformation in a machine tool. The hybrid filter consists of linear regression (LR), moving average (MA), and autoregression (AR) components. Outputs from the filter serve as inputs to the FF network, which estimates the static and dynamic relationships between the temperature distributions and thermal deformations.
In addition to the widely used neural network models discussed above, other neural networks have been studied and applied to the analysis of spindle thermal behavior. Yang et al. [39] proposed a modified Elman network (EN) for determining the thermal errors of the spindle based on FEA simulation. Li et al. [40] optimized the weights and thresholds of the Elman neural network by using the sparrow search algorithm to predict thermal errors in motorized spindles. Zhang [41] proposed a serial grey neural network (SGNN) and a parallel grey neural network (PGNN) to predict the thermal error. Abdulshahed et al. [42] proposed a methodology for thermal error compensation using a grey neural network model with convolution integral, optimized by particle swarm optimization. Limitations and challenges in applying ML techniques, along with possible strategies to address them, including computational cost, large extrapolation errors, data availability, and the iteration with experiments, were presented by Qian and Yang [43]. Raza et al. [44] presented an unsupervised machine learning algorithm (including random forest, least absolute shrinkage and selection operator (LASSO) regression, and feed-forward neural networks) that can automatically classify and rationalize chemical trends in PFAS structures.
Thermal errors occur primarily due to temperature rise and temperature differences, which are the consequences of heat sources in the machine tool and changes in environmental conditions. Usually, the temperature variables measured by multiple sensors are taken as the input, and the thermal errors of the machine tool are the output of the NN model. The thermal and mechanical behavior of the spindle is a function of the cooling system. An inadequate coolant or flow leads to an increase in the temperature and thermal expansion of the spindle elements. This causes an increase in thermal deformations as well as a decrease in the energy efficiency of the machine tool.
This paper presents BP neural networks with the Adam optimization algorithm for the prediction of the temperature of a motorized spindle unit under different input conditions. The Adam optimization algorithm is implemented instead of the commonly used stochastic gradient descent (SGD) algorithm. Several neural network models have been developed, varying the number of hidden layers and the number of neurons within them. Spindle speed, coolant type, motor coolant flow, and bearing coolant flow were used as input parameters. Temperatures measured in three different areas of the spindle unit were considered the output parameters. After network learning and training, the temperatures of the motorized spindle unit in the different areas were accurately fitted and predicted. The dataset used for the ANN development was obtained experimentally under various operating conditions. The experiment was conducted on the basis of the Box-Wilson central composite design, with levels and ranges for three numerical (quantitative) factors and one categorical (qualitative) factor.

Experimental Setup
This paper investigates the effects of four factors on the temperature of a motorized spindle unit: the number of revolutions (n), the motor coolant flow (Qm), the bearing coolant flow (Qb), and the coolant type (H). Table 1 shows these factors and their levels. Figure 1a shows the experimental setup for measuring temperatures. Collected data were used to train and simulate neural networks. The tests were performed on a high-speed motorized spindle GMN TSSV 100-90000 (Figure 1b). The spindle was mounted with two pairs of high-precision angular contact bearings, the front bearing pair with EX 12 7C1 DUL SNFA and the rear bearing pair with EX 10 7C1 DUL SNFA, each mounted in a "tandem" arrangement so that the entire bearing set formed an "O" arrangement. The front bearings had a lock-ring preload, while the rear bearings had a constant (spring) preload. During the experiment, the external ambient temperature was 21 °C.
The motorized spindle (2) was connected to the frequency regulator Nidec HS 72 (1), which allowed for the desired RPM. The Acrylic Flow Meter 6A01 (5) was mounted to measure the amount of oil mist on the bearings at all times. The measurement of the flow of the cooling fluid for the motor was carried out with an Integral Flowmeter AXF (7) according to the principle of the Coriolis effect. The temperature of the cooling fluid was kept constant at 22 °C by using a heat exchanger placed in the tank (6). To measure the temperature of the spindle unit, thermocouples, an infrared thermometer, and a thermal imager were used. Three K-type thermocouples (T1, T2, and T3) were placed on the housing near the front and rear bearings and near the stator of the motor (Figure 1a,b). The temperature obtained from these thermocouples was collected by an NI USB 6281 acquisition system and then sent to a computer for processing and monitoring of results every second. The output from the acquisition system was continuously monitored and analyzed in Matlab R16b software. An infrared thermometer was used to record the temperature of the outlet coolant from the spindle unit. To record the distribution of the temperature fields, monitor the temperatures of the entire experimental rig, and cross-check the thermocouples and infrared thermometer, a Thermo Pro TP TP8S thermal imager (Wuhan Guide Infrared Co., Ltd., Wuhan, China) was used. Table 2 shows the characteristics of the equipment used.

Experimental Plan
The goal of this experiment was to determine the temperature rise on the spindle unit for different operation conditions. Using the Box-Wilson central composite design, the experiments were reduced to 40 runs. The central composite matrix shown in Appendix A (Table A1) contains 40 rows, representing the number of experimental runs.
In this research, levels and ranges were applied for three numerical (quantitative) factors: RPM, coolant flow, and oil mist flow. In addition, a categorical (qualitative) factor was applied: the type of cooling, where either oil or water was used. For the central composite design, the parameter α, the distance of the axial points from the design center, was 1, so that each numerical factor had three levels. The experiment was divided into two blocks. The role of the blocks was to reduce or eliminate the variability caused by interference factors that may have affected the response but were not directly related to a design factor. For this experiment, six replications were conducted at the center point for each level of the categorical factor, that is, two replicates for each level of the categorical factor in each block. For the center point of the design, standard working conditions were applied. Columns 8, 9, and 10 in Appendix A (Table A1) show the results of the measured temperatures (responses) for the conducted experiments at all three points, i.e., near the front bearing, at the stator, and near the rear bearing.

Analysis of Experimental Data
Data analysis was performed using the Seaborn, Scikit-learn, NumPy, and Pandas machine learning libraries and the Keras framework for Python. Various techniques were used for data visualization, such as heatmaps and scatterplots. Since the string values in column H (Appendix A, Table A1) are not appropriate for data analysis, the values in the column were converted into Boolean (0/1) values using the replace method from the Pandas library. Linear relationships between variables can be shown and quantified with a correlation matrix. The correlation matrix is a square matrix that measures the linear dependence between pairs of attributes. The correlation coefficients range from −1 to 1, where two attributes have a perfect positive correlation if r = 1, no correlation if r = 0, and a perfect negative correlation if r = −1. On the basis of the above analysis, the comprehensive correlation between the temperature and input variables was obtained, as shown in Figure 2. Figure 2a shows a correlation matrix with the correlation coefficients for all input variables. The correlations between the output variables and the input variable H (coolant type) were 0.93, 0.888, and 0.91, indicating that they were strongly positively correlated (Figure 2a). The correlations between the output variables and the input variable n (spindle speed) were positive and had values of 0.3, 0.4, and 0.34, indicating that a correlation existed but was not as significant as in the previous case. The correlations between the output variables and the input variables Qm and Qb (black highlighted cells in the correlation matrix) were negligible, indicating that this relationship was not noticeable. The correlations between the output variables T1, T2, and T3 were positive and had values of 0.98 and 0.99. It can be seen from Figure 2a that the type of coolant had the biggest influence on motor-spindle temperatures. When water cooling of the housing was used, a large amount of heat was transferred by the water.
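The preprocessing and correlation steps described above can be sketched as follows. The miniature DataFrame is illustrative stand-in data, not the Appendix A measurements, and the 0/1 encoding of H is an assumed choice made so that the warmer oil-cooled runs correlate positively with the temperatures, as in Figure 2a:

```python
import pandas as pd

# Illustrative stand-in data (not the Appendix A measurements); columns follow
# the paper's factors (n, Qm, Qb, H) and responses (T1, T2, T3).
df = pd.DataFrame({
    "n":  [30000, 40000, 30000, 40000, 70000, 70000],
    "Qm": [3, 7, 7, 3, 5, 5],
    "Qb": [40, 60, 40, 60, 50, 50],
    "H":  ["oil", "oil", "water", "water", "oil", "water"],
    "T1": [36.0, 37.5, 25.0, 26.5, 40.0, 28.0],
    "T2": [38.0, 39.0, 25.5, 27.0, 42.0, 29.0],
    "T3": [36.5, 38.0, 25.2, 26.8, 41.0, 28.5],
})

# String labels in column H are not suitable for correlation analysis, so they
# are mapped to 0/1 with pandas' replace method (oil = 1 is an assumed encoding).
df["H"] = df["H"].replace({"oil": 1, "water": 0})

# Square correlation matrix: Pearson r in [-1, 1] for every attribute pair.
corr = df.corr()
print(corr.round(2))
```

A heatmap of `corr` (e.g., `seaborn.heatmap(corr, annot=True)`) then reproduces a Figure 2a-style plot.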
Water-based cooling was generally more effective due to water's higher specific heat capacity (4.2 kJ/(kg·K) at 20 °C for water vs. 1.9 kJ/(kg·K) for special cooling oil). If the correlation matrix is observed separately for the cases of cooling with oil and cooling with water, the influence of certain input variables changes significantly. The correlation between the input and output variables for oil cooling was positive and had a value of 0.93 (Figure 2b). The small correlation factor between the oil flow and temperature indicates that the oil flow did not significantly affect the spindle unit temperature change.
For the case of water-cooling, correlations between output variables and input variable n (spindle speed) were positive and had values of 0.79, 0.82, and 0.71. Correlations between Q m flow, as an input variable, and output variables had values of 0.2, 0.24, and 0.098, which indicates that correlation was not high but existed (Figure 2c). Therefore, the water flow had an impact on the temperature change of the spindle unit.

BP Neural Network Modeling
This paper also considered the influence of the network topology and the learning parameter on the model performances. Several models of neural networks have been developed where the number of hidden layers and neurons within varies. Neural network models have one, two, or three hidden layers. The number of neurons in the hidden layer varies from 2 to 10.
The Adam optimization algorithm is used for updating the network weights on the training data instead of the commonly used SGD algorithm. The Adam algorithm uses two components, momentum and an adaptive learning rate, to converge faster and update the network weights efficiently. The momentum update can be expressed mathematically as

$$\nu_t = \gamma\,\nu_{t-1} + \eta\,\nabla_{\theta} J(\theta), \qquad \theta = \theta - \nu_t,$$

where θ is a network parameter (weight, bias, . . .), η is the learning rate, J is the function to optimize, γ is a constant, ν_{t−1} is the update at the past timestep, and ν_t is the update at the current timestep.
Adaptive learning rates can be observed as adjustments, i.e., a reduction of the learning rate during the training phase. The mathematical expression is given in the following equation:

$$E[g^2]_t = \beta\,E[g^2]_{t-1} + (1-\beta)\,g_t^2, \qquad \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{E[g^2]_t} + \epsilon}\,g_t,$$

where E[g²]_t is an exponentially decaying average of squared gradients, β has the recommended setting of 0.9, ε is a small smoothing constant, θ_{t+1} denotes the resulting (updated) parameters, and g_t is the gradient at timestep t.
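As a rough illustration of how the two components interact, the following NumPy sketch applies the full Adam update (first- and second-moment estimates with bias correction, Keras-style default β values) to a toy quadratic objective. This is a didactic sketch, not the paper's training code:

```python
import numpy as np

def adam_minimize(grad, theta0, lr=0.05, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=500):
    """Minimize a function, given its gradient, with the Adam update rule."""
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)  # first moment: the momentum term
    v = np.zeros_like(theta)  # second moment: drives the adaptive learning rate
    for t in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g      # decaying mean of gradients
        v = beta2 * v + (1 - beta2) * g**2   # decaying mean of squared gradients
        m_hat = m / (1 - beta1**t)           # bias correction for early steps
        v_hat = v / (1 - beta2**t)
        theta -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Toy objective J(theta) = ||theta - target||^2 with gradient 2*(theta - target).
target = np.array([1.0, -2.0])
theta = adam_minimize(lambda th: 2.0 * (th - target), theta0=[0.0, 0.0])
print(theta)  # converges toward [1, -2]
```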

Impact of Network Architecture on Model Performance
The activation function for the hidden layers is ReLU, and the linear activation function is used for the output layer. The number of epochs is set at 10,000, but the early stopping function interrupts training once the model performance stops improving on the validation dataset. First, for model selection and evaluation, the hold-out method is used. R-squared and RMSE are used as quantitative measures of model performance for each output, and the average value is considered. Table 3 shows that network performance does not improve significantly by increasing the number of hidden layers and neurons within them. A smaller value of the learning parameter improves the performance of a neural network that has more than one hidden layer but slows down the training at the same time; however, it should be taken into account that the differences in the results of the individual networks are negligibly small. It is important to note that in this case, the dataset is divided randomly, which means that during the next training, the results will likely differ from the results shown in Table 3.
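Under the ingredients stated above (ReLU hidden layers, linear output, Adam, early stopping, random hold-out split), a minimal Keras sketch of one tested topology (4-8-8-3) might look as follows. The data here are synthetic stand-ins for the 40 experimental samples, and the epoch count is reduced from the paper's 10,000 for brevity:

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 4)).astype("float32")              # stand-ins for n, Qm, Qb, H
y = (X @ rng.uniform(size=(4, 3)) + 25.0).astype("float32")  # stand-ins for T1, T2, T3

# 4-8-8-3 topology: two ReLU hidden layers, linear output layer.
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(3, activation="linear"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")

# validation_split=0.3 gives a 28/12 train/validation hold-out split; early
# stopping halts training once val_loss stops improving.
early = keras.callbacks.EarlyStopping(monitor="val_loss", patience=30,
                                      restore_best_weights=True)
model.fit(X, y, epochs=500, validation_split=0.3, callbacks=[early], verbose=0)
print(model.predict(X, verbose=0).shape)  # (40, 3)
```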

Impact of Cross-Validation Technique on Model Performance
Building a machine learning model requires two things: feeding the model an initial dataset for training and then providing unseen data to the model to evaluate its performance. The stability and performance of the model highly depend on which data belong to the training and validation sets. Although a large number of evaluation techniques exist, this paper examines and compares the impact of two validation techniques on model performance:

1. Holdout validation technique;
2. KFold cross-validation technique.

Holdout (Figure 3) is the simplest of all validation techniques widely used in machine learning. Holdout implies that the entire dataset is divided, most often randomly, into two sets. Usually, data are divided in a ratio of 2/3 of the data to the training set and 1/3 to the test set. When handling a large dataset, data can be divided in ratios of 60/40, 70/30, 80/20, or even 90/10. An advantage of this technique is that it is fast because training executes only once. As a drawback, randomly splitting the initial dataset can lead to high variance over repeated training, so the model accuracy will not be consistent. The way to obtain a more robust model using the holdout method is to repeat training several times using different random seeds; after k repetitions, the average performance is computed.

KFold cross-validation (Figure 4) is another method for estimating model performance. This method systematically creates and evaluates multiple models on different subsets of the data. In most cases, cross-validation is used for estimating and evaluating the effectiveness of different hyperparameters. The KFold validation technique is less common for neural networks because it is more expensive than the holdout technique (i.e., it is time-consuming). When the dataset is large enough, one validation set is enough, and there is usually no need for KFold.
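A minimal sketch of the KFold mechanics described above, written in plain NumPy rather than with scikit-learn's `KFold` class (which provides the same behavior): each sample lands in exactly one validation fold, and a model is trained once per fold.

```python
import numpy as np

def kfold_indices(n_samples, k, shuffle=True, seed=0):
    """Yield (train_idx, val_idx) pairs for k folds: every sample appears
    in exactly one validation fold and in k-1 training folds."""
    idx = np.arange(n_samples)
    if shuffle:
        np.random.default_rng(seed).shuffle(idx)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# 40 experimental runs, 4 folds: each model trains on 30 samples
# and is validated on the remaining 10.
sizes = [(len(tr), len(va)) for tr, va in kfold_indices(40, 4)]
print(sizes)  # [(30, 10), (30, 10), (30, 10), (30, 10)]
```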


Choosing the Number of Folds for Cross-Validation
This section of the research analyzes the influence of the fold size on the model performance. Fold sizes from k = 2 to k = n were used to assess and evaluate their effects on the prediction errors. The last case is the leave-one-out cross-validation technique, or LOOCV. LOOCV represents the cross-validation case where the number of folds equals the dataset size, and just one sample is held out for validation.
The most common measures of regression model fit, R-squared and RMSE, were used to estimate model performance. The root mean squared error can be expressed by the formula

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2},$$

where ŷ_i are the predicted values and y_i is the dependent regression variable. Since the training set has 40 samples, 2-fold cross-validation estimates the model performance over a training set of 20, 4-fold cross-validation over a training set of 30, and so on, while LOOCV estimates the model performance over a training set of 39. Figures 5 and 6 show the impact of the KFold size on bias and variance for the five tested network architectures. Figure 5 shows that leave-one-out cross-validation does not lead to significantly lower bias than KFold. Increasing K slightly improves the variance for almost all tested network architectures (except the 4-2-3 architecture), as can be seen in Figure 6. This can be explained in the following way: for smaller K values, the training sets become smaller; as a result, the model is less stable, which is indicated by the higher value of the variance. Table 4 shows that the neural network topology 4-8-8-3 has the lowest average bias and variance, and it will be considered the most favorable.
The topology of the adopted neural network for further analysis is shown in Figure 7.
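The relationship between the fold count and the effective training-set size for the 40-sample dataset can be tabulated directly, as a quick arithmetic check of the numbers quoted above (2-fold trains on 20 samples, 4-fold on 30, LOOCV on 39):

```python
n = 40  # experimental runs

# Training-set size for k-fold cross-validation: each fold holds out
# n // k samples, so each model trains on the remaining n - n // k.
for k in (2, 4, 5, 8, 10, 40):
    train_size = n - n // k
    label = "LOOCV" if k == n else f"{k}-fold"
    print(f"{label:>7}: train on {train_size}, validate on {n // k}")
```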

Results and Discussion
Figure 8 (based on [25]) shows the procedure of neural network building and simulation using an experimentally obtained dataset. In total, 40 samples were used for neural network model building, of which 28 samples were used for training and 12 for validation of the network. Following that, the neural network was simulated using 14 sets, which were completely unknown to the network.

Verification of ANN Model
Network characteristics were estimated based on the value of the root mean square error (RMSE) for the simulation dataset and the correlation coefficient (R). After comparing the absolute error of the experimentally obtained output and the simulation output, an assessment of the possibility of using neural networks trained with small datasets was produced (Table 5). The relative error was calculated by the equation

$$\delta = \frac{\left|T_{\text{exp}} - T_{\text{pred}}\right|}{T_{\text{exp}}} \cdot 100\%,$$

where T_exp is the measured temperature and T_pred is the temperature predicted by the network. The regression plot for the simulation is shown in Figure 9.

Figure 10 shows the change in the temperature rise depending on the number of revolutions for different oil flows (Qm) through the housing. By increasing the number of revolutions by 25%, the temperatures of the spindle unit increased by 63% at an oil flow of Qm = 3 L/min, i.e., by 65% at an oil flow of 7 L/min. However, by increasing the flow from Qm = 3 L/min to Qm = 7 L/min, the temperatures T1 and T3 decreased by 2% at n = 30,000 rpm, or by 1% at n = 70,000 rpm, while the temperature T2 decreased by 1.7% at n = 30,000 rpm and 3% at n = 70,000 rpm.

The temperature change depending on the number of revolutions for the considered water flow (Qm) through the housing is shown in Figure 11. Increasing the number of revolutions from n = 30,000 rpm to n = 40,000 rpm (25%) increased the temperature difference (∆T) at the considered points by 50% at a water flow of Qm = 3 L/min, i.e., by 55% at a water flow of Qm = 7 L/min. However, increasing the flow from 3 L/min to 7 L/min reduced the temperatures T1 and T3 by 12% at n = 30,000 rpm, or by 13% at n = 70,000 rpm, while the temperature T2 decreased by 10% at n = 30,000 rpm and 16% at n = 70,000 rpm.

When cooling the housing with oil, the maximum temperature increase ∆T2 was approximately 16 °C at a flow rate of Qm = 3 L/min. By increasing the flow to Qm = 7 L/min, ∆T2 decreased by 1 °C, while ∆T1 decreased from 14.5 °C to 13 °C, and ∆T3 decreased from 14.6 °C to 14 °C. On the other hand, when cooling the housing with water, increasing the flow from 3 L/min to 7 L/min decreased the temperature ∆T1 by 4 °C, with a simultaneous decrease in the temperature ∆T2 from 3.8 °C to 1.2 °C and the temperature ∆T3 from 3.9 °C to 1.3 °C. Therefore, the coolant flow had a greater effect on the temperatures of the spindle unit elements when cooling the housing (stator) with water, which is similar to the analysis of the experimental test results presented in Section 2. With water cooling, the temperature ∆T2 was lower by approximately 12 °C than with oil cooling, while the temperatures ∆T1 and ∆T3 were lower by approximately 11 °C, which is also consistent with the experimental testing.
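A sketch of the relative-error calculation between experiment and simulation; the temperature values here are hypothetical, not taken from Table 5:

```python
import numpy as np

# Hypothetical measured vs. ANN-predicted temperatures [°C] at T1, T2, T3.
t_exp  = np.array([36.2, 41.8, 35.9])
t_pred = np.array([35.6, 42.5, 36.4])

# Relative error, in percent, of the prediction against the measurement.
rel_err = np.abs(t_exp - t_pred) / t_exp * 100.0
print(np.round(rel_err, 2))  # [1.66 1.67 1.39]
```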

Conclusions
The major advantage of this approach is that it enables the results of ANN models to be easily integrated with FEM models or digital twins, especially for any further analyses of the influence of temperature on the spindle unit behavior.
Eighteen different neural network models were evaluated, whereby the hold-out method was used for model selection and evaluation. It can be seen that:
• Network performance does not improve significantly by increasing the number of hidden layers and neurons within them;
• A smaller value of the learning parameter improves the performance of a neural network with more than one hidden layer but significantly slows down training;
• All models have high variance since the dataset was split randomly.
Five different neural network models were estimated using the KFold cross-validation technique. It can be seen that:
• The model with the 4-8-8-3 network topology has the lowest RMSE value, which indicates that this model has the best performance;
• Increasing the fold number has no significant impact on the bias but slightly improves the variance.
The hold-out method is preferred when the dataset is large and can be a good validator for building the initial model. This method takes less computational power and requires less time to run.

When handling small datasets, cross-validation is more desirable than the hold-out method since the model is trained on multiple folds. This provides a more reliable indicator of how the model acts on unseen data. For datasets of up to 40 input/output values with multiple outputs, the LOOCV method provides the lowest bias and variance, and this model can be considered more reliable than models using lower KFold values.
Through these findings, the history of the temperature distribution on the spindle can be learned, and a suitable coolant and flow for the motorized spindle unit can be chosen to minimize temperature rise and thermal expansion. The temperature results obtained by means of the ANN model make it possible to indicate the best solution and to quantitatively assess the improvement in the thermal properties of the high-speed motorized spindle. By choosing the appropriate coolant and flow rate, the energy efficiency of the machine tool is increased, while the temperature and the errors due to heat load are reduced.

Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest:
The authors declare no conflict of interest.