Comparison of Heat Demand Prediction Using Wavelet Analysis and Neural Network for a District Heating Network

Abstract: Short-Term Load Prediction (STLP) is an important part of energy planning. STLP is based on the analysis of historical data such as outdoor temperature, heat load, heat consumer configuration, and the seasons. This research aims to forecast heat consumption during the winter heating season. By preprocessing and analyzing the data, we can determine the patterns in the data. The results of the data analysis make it possible to form learning algorithms for an artificial neural network (ANN). The biggest disadvantage of an ANN is the lack of precise guidelines for architectural design. Another disadvantage is the presence of false information in the analyzed training data. False information is the result of errors in measuring, collecting, and transferring data. Usually, trial-and-error techniques are used to determine the number of hidden nodes. To compare prediction accuracy, several models have been proposed, including a conventional ANN and a wavelet ANN. In this research, the influence of different learning algorithms was also examined. The main differences were the training time and number of epochs. To improve the quality of the raw data and remove false information, the research uses the technology of normalizing raw data. The normalization was based on the Z-score of the data and determination of the energy-entropy ratio. The purpose of this research was to compare the accuracy of various data processing and neural network training algorithms suitable for use in data-driven (black-box) modeling. For this research, we used a software application created in the MATLAB environment. The app uses wavelet transforms to compare different heat demand prediction methods. The use of several wavelet transforms with various wavelet functions allowed us to determine the best algorithm and method for predicting heat production. The results of the research show the need to normalize the raw data using wavelet transforms.
The sequence of steps involves the following milestones: normalization of the initial data; wavelet analysis employing quantitative criteria (energy, entropy, and the energy-entropy ratio); optimization of ANN training with the information energy-entropy ratio; ANN training with different training algorithms; and evaluation of the obtained outputs using statistical methods. The developed application can serve as a control tool for dispatchers during planning.


Introduction
The application of new, progressive technologies to process control is the key to increasing productivity, quality, reliability, and safety [1]. Modern process control means predictive control in the management process, which can be achieved by implementing artificial intelligence in the control process together with other data processing methods. This article deals with the problem of short-term (1 h ahead) prediction of energy consumption and planning in heat production, comparing the effectiveness of different methods and algorithms.
Planning is the first phase and is often the most important in many areas. In the area of heat production, the amount of energy produced depends on several variables.

Wavelet Theory and Multiresolution Analysis
Thanks to progress in computer science, there are many cases in which data from time-dependent processes in the physical world are processed by computer systems [8]. There are many algorithms for the better understanding of these processes. The best known and most used is the Fourier transform (FT). For computer processing, FT is often paired with the Gabor transform, S-transform, Hilbert transform, or wavelet transform. In addition to the previously mentioned transformation methods, empirical mode decomposition (EMD) or ensemble empirical mode decomposition (EEMD) is used to solve similar signal processing and surface reconstruction problems [9][10][11]. Wavelet transform and EMD/EEMD are relatively new. The main scope of this contribution is to use the wavelet transformation for signal processing and prediction, and to improve the resolution of the data from the controlled process.
Within the management of production processes, a common mathematical task is the prediction of the future state of a system based on known, recorded data. The wavelet transform is not primarily intended as a forecasting technique; it transforms a signal into different levels of resolution and localizes a process in time and frequency. Even so, there are many papers in which it is used as part of the prediction process.
As we can see from the available literature, wavelet transformation is often used with other technologies, mainly in the process of prediction based on a known system state, data, and parameters from the past. The methods used for the computation of future state differ, including different types of neural network, Nonlinear Least Squares Autoregressive Moving Average (NLS-ARMA) [23,25], high-dimension space mapping [26,27], fractal prediction [28], grey prediction [29], etc. For example, Elarabi et al. [30] proposed an approach that uses both DCT (Discrete Cosine Transform) and DWT (Discrete Wavelet Transform) to enhance the intraprediction phase of H.264/AVC standard. The algorithm presented in their research is designed for video processing software and, according to the authors, extends the benefits of the wavelet-based compression technique to the speed of the FSF algorithm and forms an intraprediction algorithm that ensures a 51% drop in bit rate while keeping the same visual quality and peak signal-to-noise ratio (PSNR) of the original H.264/AVC intraprediction algorithm. A very interesting approach joining wavelet decomposition and adaptive neuro-fuzzy inference system (ANFIS) for ship roll forecasting is described in the work of Li et al. [31].
Stefenon et al. [32] presented an approach to predict the failure of insulators in electric power systems. In their work, they used a hybrid approach with wavelets and neural networks to process data from the ultrasonic scan. Another use of the wavelet method is described in the work of Prabhakar et al. [33]. They describe a combination of Fast Fourier transform (FFT) together with Discrete Wavelet Transform and Discrete Shearlet Transform for predicting surface roughness by milling. The predicted results of the hybrid model were better than the individual transform. The combination of wavelet transform and neural network for prediction is described by Zhang et al. [34]. The core of the article is the creation of a power forecasting model based on dendritic neuron networks in combination with wavelet transform. For decomposing input data in the proposed model, Mallat's algorithm, which is a fast Discrete Wavelet Transform (DWT), is used. The results show that, with the help of wavelet decomposition, together with various types of neural networks, the prediction process is faster and better compared to the results obtained by the other three conventional models for almost every error criterion.
Another case of joining neural networks with DWT is described in [35]. The article describes a hybrid prediction system that consists of a Long Short-Term Memory (LSTM) neural network and a wavelet module. The wavelet transform is used to decompose the data into a set of subseries, which appears to be very effective. El-Hendawi and Wang [36] describe using a full wavelet packet transform model together with a neural network. Their proposed model is able to predict the electrical load. The model consists of a wavelet packet transform module that decomposes a series of high-frequency components into subseries of high and low frequencies. The subseries are fed into neural networks, and the outputs of each neural network are reconstructed into the forecasted load. The described approach is not sensitive to various conditions such as different day types (e.g., weekend, weekday, or holiday) or months. The authors also state that the model can reduce the MAPE error by 20% compared to the conventional approach.
Similar problems are solved in [37][38][39]. Tayab et al. [39] propose using wavelet transform with classical feed-forward neural network for short-term forecasting of electricity load demand. Prediction in their work consists of using the best-basis stationary wavelet packet transform. The authors used a Harris hawks optimization to optimize the feedforward neural network weights. The hybrid model achieved a more than 60% decrease in MAPE compared to SVM and a classical backpropagation neural network. More sophisticated methods are described by Liu et al. [37]. The article predicts wind speed in a wind power generation plant. The highlight is a conjunction of advanced neural network models with wavelet packet decomposition (WPD). The authors developed a new hybrid model to predict wind speed. The model is based on a WPD, convolutional neural network (CNN), and convolutional long short-term memory network (CNNLSTM). In the developed WPD-CNNLSTM-CNN model, the WPD is used to decompose the original wind speed time series into various subseries. CNN with a 1D convolution operator is employed to predict the obtained high-frequency subseries and CNNLSTM is employed to complete the prediction of the low-frequency subseries.
The same or a very similar approach to signal processing via wavelet transformation is used in the work of Farhadi et al. [40]. They used a proven procedure of data processing, like other authors. This means that data from the manufacturing process retrieved via piezoelectric sensors and microphones are processed with a combination of wavelet transform and neural network. All calculations are processed by a program written in MATLAB software. The type of neural network used is multilayer perceptron with a backpropagation learning method.
Moreover, wavelet transform, in combination with other algorithms, can be used to filter out noise or interference signals on the premise of ensuring an undistorted original and adding the data prediction function in the data denoising process. Feng et al. [41] described a wavelet-based Kalman smoothing approach for oil well testing during the data processing stage. To improve the dynamic prediction and data resolution, similar approaches with the combination of Kalman prediction and wavelet transform can be found in [42,43]. There is the possibility to use a combination of DWT and other data filters. For example, the most often used method is the common and simple moving average (MA) method [44,45].
It is clear that the wavelet method (DWT) can be widely used for computer data processing in many applications and many scientific fields, together with different technologies, but mainly neural networks. The theory of wavelet transformation is well described in many works [46][47][48][49]. The DWT is considered a linear transformation for which wavelets are discretely sampled. Multiresolution analysis (MRA) and filter bank reconstruction are properties that confirm the wide range of applicability of DWT [50]. The basis functions are derived from a mother wavelet ψ(x) by factors of dilation and translation [51]:

ψ_{a,b}(x) = (1/√a) · ψ((x − b)/a), (1)

where a is the dilation factor and b represents the translation factor. The continuous wavelet transform of a function f(x) can be expressed as follows:

W(a, b) = (1/√a) ∫ f(x) ψ*((x − b)/a) dx, (2)

where * represents the conjugate operator. The basis functions in Equation (1) are redundant when a and b are continuous, so it is possible to discretize a and b to form an orthonormal basis. One way of discretizing a and b is to let a = 2^p and b = 2^p·q, so that the basis functions become

ψ_{p,q}(x) = 2^(−p/2) · ψ(2^(−p)x − q), (3)

where p and q are integers. Then, the wavelet transform in Equation (2) can be expressed as follows:

W(p, q) = 2^(−p/2) ∫ f(x) ψ*(2^(−p)x − q) dx, (4)

where p and q are set to be integers, and it is possible to call Equation (4) a wavelet series. It is clear from this representation that the wavelet transform contains both spatial and frequency information. The wavelet transform is based on the concept of multiresolution analysis: the signal is decomposed into a series of subsignals and their associated detailed signals at different resolution levels. Generally, these subseries are called approximations (low frequencies) and details (high frequencies). The smooth subsignal (approximation) at level m can be reconstructed from the (m + 1)th level smooth subsignal and the associated (m + 1)th detailed signal. MATLAB software is often used for mathematical analysis [52]. Thanks to its number of functions and features, it is suitable for wavelet decomposition, too; the wavelet problem is well handled by the Wavelet Toolbox.
The principle of the DWT algorithm is depicted in Figure 1. After employing the DWT, the procedure comprises log₂(N) steps. From signal s, the very first step produces two sets of coefficients: approximation coefficients cA1 and detail coefficients cD1. After convolution of the signal s with the lowpass filter LoD and the highpass filter HiD, dyadic downsampling is applied. After obtaining approximation cA1 and detail cD1, the decomposition and downsampling procedure continues until the required number of steps is reached. The downsampling procedure is presented in Figure 2. The decomposition scheme was developed by Mallat [12]. DWT is commonly used for fast signal extraction [34]. The decomposition process is iterative, which means that the approximation component after each iteration is further decomposed into lower-resolution components.
After the decomposition of the input signal s, we have the low-frequency coefficient cA1 and the high-frequency coefficient cD1 (see Figure 3). After the next decomposition of cA1, high-frequency component cD2 and low-frequency component cA2 are gained. If the decomposition process continues to level j = 3, in the last step coefficients cA3 and cD3 are obtained from cA2. Coefficient cA1 can be reconstructed by cA2 and cD2; similarly, cA2 can be reconstructed by cA3 and cD3. The wavelet transform is often used for data preparation for predictive systems with a neural network. In many cases, the type of neural network selected is very simple; oftentimes, a feedforward backpropagation neural network is used.
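The cascade in Figures 1-3 can be illustrated with the simplest wavelet, Haar (often denoted db1). The following is a minimal pure-Python sketch of one decomposition and reconstruction step, not the MATLAB Wavelet Toolbox routines used in this research:

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: filter with the low/high-pass pair
    and downsample by two, yielding approximation (cA) and detail (cD)."""
    s = 1 / math.sqrt(2)
    cA = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    cD = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return cA, cD

def haar_idwt(cA, cD):
    """Inverse step: upsample and reconstruct the original samples."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(cA, cD):
        out += [(a + d) * s, (a - d) * s]
    return out

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
cA1, cD1 = haar_dwt(signal)          # level-1 approximation and detail
cA2, cD2 = haar_dwt(cA1)             # iterate on the approximation (Mallat)
reconstructed = haar_idwt(cA1, cD1)  # perfect reconstruction of the input
```

Iterating `haar_dwt` on each successive approximation is exactly the cascade of Figure 3: cA1 can be rebuilt from cA2 and cD2, and so on up the tree.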

Artificial Neural Networks
An artificial neural network can be understood as a parallel processor that tends to preserve the knowledge gained through learning for future use. It represents an artificial model of the human brain [53]. Knowledge is acquired by neurons during supervised learning, where the relationships between inputs and outputs are mapped. These relationships are created by interconnected neurons in the hidden layers. In the hidden layer, neurons capture and extract features from the inputs. The numbers of hidden layers and neurons may differ. Often, the number is selected by a trial-and-error approach, but optimization algorithms can also be used. Input-output relationships are regulated by weights, which are determined and adjusted during the learning process. The most common learning algorithm is backpropagation (BP). BP consists of two phases: forward and backward. During the forward phase, the response to the inputs is calculated; in the backward phase, the error between the response of the network and the desired outputs is calculated. The calculated error is used for adjusting the weights between inputs and outputs [54].
The efficiency of the three training algorithms was investigated in this study.

Scaled Conjugate Gradient
The principle of the BP algorithm is based on calculating and adjusting the weights in the steepest descent direction, which is time-consuming. The scaled conjugate gradient (SCG) algorithm uses a different (conjugate) search direction at each iteration instead of the steepest descent direction, which results in faster convergence. In the SCG algorithm, the line search technique used to determine the step size is omitted and replaced by a quadratic approximation of the error function (Hessian matrix) together with the trust region approach from LM. This algorithm requires more iterations to converge but fewer computations per iteration [55].

Levenberg-Marquardt
The Levenberg-Marquardt (LM) algorithm was proposed to approach second-order training speed without computing the Hessian matrix. The LM algorithm offers a tradeoff between the benefits of the Gauss-Newton method and the steepest descent method. To compute the connection weights w_k, LM uses the Hessian matrix approximation H ≈ JᵀJ and updates the weights as

w_{k+1} = w_k − [JᵀJ + µI]⁻¹ Jᵀe,

where J is the Jacobian matrix (first-order derivatives of the errors), I is the unit matrix, e is the vector of network errors, and µ is a scalar parameter that controls the algorithm. If µ = 0, the algorithm behaves as the Gauss-Newton method; for a high value of µ, the algorithm approaches the steepest descent method [56].
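As an illustration, one LM update can be written out for the simplest possible model, y = w·x, with a single scalar weight. This is a hedged sketch of the update rule only, not the paper's network training code; with µ = 0 it reproduces a pure Gauss-Newton step, and a large µ shrinks the step toward a gradient-descent-like move:

```python
def lm_step(w, xs, ys, mu):
    """One Levenberg-Marquardt update for the scalar model y = w * x.
    The errors are e_i = y_i - w * x_i, so the Jacobian entries are
    J_i = de_i/dw = -x_i, and the update is w - (J^T J + mu)^(-1) J^T e."""
    J = [-x for x in xs]
    e = [y - w * x for x, y in zip(xs, ys)]
    JtJ = sum(j * j for j in J)              # scalar Hessian approximation
    Jte = sum(j * ei for j, ei in zip(J, e))
    return w - Jte / (JtJ + mu)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]    # exact fit at w = 2
w_gn = lm_step(0.0, xs, ys, mu=0.0)           # Gauss-Newton-like full step
w_sd = lm_step(0.0, xs, ys, mu=1000.0)        # large mu: tiny, cautious step
```

For this linear model, the µ = 0 step lands exactly on w = 2 in one iteration, while the large-µ step barely moves, mirroring the two regimes described above.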

BFGS Quasi-Newton Backpropagation
The BFGS algorithm was proposed by Broyden, Fletcher, Goldfarb, and Shanno. Newton's method is based on computing the Hessian matrix; the BFGS algorithm is also based on Newton's method but uses an approximation of the Hessian matrix. The connection weights w_k are then updated via the following equation:

w_{k+1} = w_k − H⁻¹g,

where H is the (approximated) Hessian matrix (second-order derivatives of the errors) and g is the gradient. The BFGS algorithm is computationally more demanding and requires more storage due to computing and storing the approximation of the Hessian matrix [57].

Dataset Overview
The input data were provided by the local heating plant, which has more than 300 substations. For our experiments, we used historical load data from 1 November 2014 to 31 March 2015; for that period, we have a mixture of data from various types of weather (see Figure 4). This dataset is used for training and testing. The total load consists of domestic hot water and the hot water used for central heating. The samples were logged every 10 min. Weather data were also collected from this heating plant. The units for load are MW and for temperature °C. Data were downloaded from SCADA in *.xlsx format and then converted into a suitable format for further processing. A large heat load drop at the beginning of the series was caused by the beginning of the heating season. The statistical characteristics of the data used are presented in Table 1. The scatter plot in Figure 5 shows the temperature and load dependency, as well as possible outliers. The presence of outliers can be explained by the fact that, during warmer days, the studied heat plant did not use its maximum power; during warmer periods, plants with less power were used.
Data preprocessing is necessary because all these data come from an industrial process, which is often full of errors such as missing data, noisy data, offset, etc. Missing data were substituted by the average of the neighboring values P_{t−1} and P_{t+1}. Data normalization is a necessary step to avoid node saturation, which could negatively affect the training phase. For data normalization, we used Z-score normalization:

Y_i = (X_i − X̄)/σ,

where Y_i is the normalized value, X_i is the actual value, X̄ is the arithmetic mean, and σ is the standard deviation.
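The two preprocessing steps, gap filling and Z-score normalization (the equivalent of MATLAB's mapstd), can be sketched as follows; the load values here are illustrative, not the plant data:

```python
import math

def fill_missing(values):
    """Replace a missing sample (None) with the average of its neighbors."""
    out = list(values)
    for i, v in enumerate(out):
        if v is None:
            out[i] = (out[i - 1] + out[i + 1]) / 2
    return out

def zscore(values):
    """Z-score normalization: Y_i = (X_i - mean) / std (population std)."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((x - mean) ** 2 for x in values) / len(values))
    return [(x - mean) / std for x in values]

load = fill_missing([12.0, 14.0, None, 18.0, 16.0])  # gap filled with 16.0
normalized = zscore(load)                             # mean 0, std 1
```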

Mother Wavelet Selection Criteria
Many past papers in the field of heat demand prediction applied the wavelet transform during the data preprocessing phase to predict heat consumption. The drawback of these studies is that the researchers chose the mother wavelet empirically: db4 [58], Morlet [59]. There is an unwritten rule that Daubechies (db) mother wavelets, especially the low orders db2-db4, are the most suitable for load forecasting [60,61]. However, each signal has a unique characteristic, and therefore we cannot rely on empirical selection. In this section, we deal with quantitative methods to select the appropriate mother wavelet. The investigated quantitative methods have different criteria for determining the appropriate mother wavelet; therefore, it is necessary to make a trade-off between criteria.

Maximum Energy Criteria
The base wavelet that maximizes the energy of the wavelet coefficients represents the most appropriate wavelet for the analyzed signal (for a more detailed explanation, see [62]):

E_energy(s) = Σ_{i=1..N} |wt(s, i)|², (8)

where N is the number of wavelet coefficients and wt(s, i) corresponds to the wavelet coefficients.

Minimum Shannon Entropy
The base wavelet that minimizes the entropy of the wavelet coefficients represents the most appropriate wavelet for the analyzed signal (for a more detailed explanation, see [62]):

E_entropy(s) = −Σ_{i=1..N} p_i · log₂(p_i), (9)

where p_i is the energy probability distribution of the wavelet coefficients, defined as p_i = |wt(s, i)|² / E_energy.

Energy-to-Shannon Entropy Ratio
The base wavelet that produces the maximum energy-to-Shannon entropy ratio is selected as the most appropriate wavelet for the analyzed signal [62]:

R(s) = E_energy(s) / E_entropy(s),

where E_energy and E_entropy are calculated using Equations (8) and (9). In this section, we compared 25 different wavelets from three wavelet families, namely Daubechies, Symlets, and Coiflets (Figure 6). We also included the Haar wavelet, which is often denoted db1. The best wavelet was selected based on the criteria mentioned above. For the first criterion, maximum energy, the best wavelet was db1 (Haar). Based on the second criterion, minimum entropy, the most suitable wavelet was sym10. This conclusion contradicts the first criterion; to avoid such a conflict, it is necessary to find a trade-off between the criteria. Based on the maximum energy-entropy ratio, the db5 wavelet produced the highest value, which means that the db5 wavelet is the optimal mother wavelet for our purposes.
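The three criteria reduce to a few lines of code. A sketch, assuming the coefficient vector wt(s, ·) for each candidate mother wavelet has already been computed:

```python
import math

def wavelet_energy(coeffs):
    """Equation (8): sum of squared wavelet coefficients."""
    return sum(c * c for c in coeffs)

def shannon_entropy(coeffs):
    """Equation (9): Shannon entropy of the energy distribution p_i."""
    total = wavelet_energy(coeffs)
    probs = [c * c / total for c in coeffs if c != 0.0]
    return -sum(p * math.log2(p) for p in probs)

def energy_entropy_ratio(coeffs):
    """Trade-off criterion: the candidate mother wavelet with the
    largest ratio (db5 in this study) is selected."""
    return wavelet_energy(coeffs) / shannon_entropy(coeffs)

# A peaky coefficient set concentrates energy in few coefficients,
# so it scores a higher energy-to-entropy ratio than a flat one:
flat = [1.0, 1.0, 1.0, 1.0]
peaky = [2.0, 0.1, 0.1, 0.1]
```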

Decomposition Level Selection
A suitable decomposition level is as important as selecting the mother wavelet. Too high a number of decomposition levels might cause information loss, high computational effort, etc. [63]. Several past papers used the empirical equation L = log₂(N), where N is the series length, to determine the decomposition level [64]; in our case, this would give 14 levels. The difference between the raw heat load series and the decomposed series is clearly visible after approximation at level 11. We set the maximum decomposition level to 11; this decomposition level also accommodates all possible mother wavelet candidates [65]:

L_max = ⌊log₂(N / (l_w − 1))⌋, (12)

where N is the series length and l_w is the wavelet filter size.
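Both rules are easy to check numerically. A sketch, where the dataset length is approximated as five months (151 days) of 10-min samples and the db5 filter length is 2 × 5 = 10:

```python
import math

def max_decomposition_level(n_samples, filter_length):
    """Maximum useful DWT level for series length N and filter size lw,
    as in Equation (12): floor(log2(N / (lw - 1)))."""
    return int(math.log2(n_samples / (filter_length - 1)))

n = 151 * 24 * 6                  # ~1 Nov-31 Mar at 10-min sampling
simple_rule = int(math.log2(n))   # L = log2(N): 14 levels
db5_limit = max_decomposition_level(n, 10)  # db5 filter length 10: 11 levels
```

The two values match the levels quoted in the text: 14 for the simple rule and 11 as the maximum practical level for db5.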
We focused on a deeper analysis of the decomposed signal via the method proposed by Sang [66]. This method is used to identify the true and noisy components at each decomposition level. Comparing the energy of the raw series with that of a referenced noise series identifies components that are close to or inside the confidence interval of the referenced noise series. In Figure 7, we can clearly identify the components (D1-D3) that are likely to be noise. We suppose that the components above level D4 are true components of the signal. From this analysis, we propose two suitable decompositions, at level 6 and level 9, with the db5 mother wavelet.

Building WANN and ANN Models
For an accurate forecast of heat consumption, it is necessary to make a detailed analysis of the variables that influence heat consumption. Generally, energy consumption depends on many factors, such as social and climate parameters, type of consumers, etc. Heat demand strongly depends on the outdoor temperature and other climate factors such as humidity, wind speed, and so on. Past research papers confirm that the strongest influence on demand is outdoor temperature [67]. That fact is also proven by a correlation analysis between heat load and outdoor temperature, where the correlation coefficient is −0.78, which can be considered a strong relationship. Another good predictor is historical load: it is likely that consumption the following day at the same time will be similar to the consumption the day before. Significant lags were determined by autocorrelation analysis. Heat load consumption also depends on the time of day and the day of the week; these factors can also increase the prediction accuracy. Table 2 shows the selected input variables for WANN.
Hour: to capture the cyclical behavior of the series, the hour variable was encoded via a sine and cosine transform.
Weekend: 1 represents weekends and 0 represents weekdays.
Day of the week (DoW): determines the day of the week, where Mondays are marked as 1 and Sundays as 7.
Temperature T(t): the temperature values at lags t and t−144 (24 h earlier at the 10-min sampling rate). Because of small differences in temperature, the average of the temperatures at t−1 and t−2 is used.
Lagged load L(t): autocorrelation analysis was used to select the most relevant historical consumption. A window of length 1008 (one week) was considered for selection. Selected lags are listed in Table 3.
Table 3. Selected lags and proposed WANN model structure.
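The cyclical hour encoding and the calendar inputs from Table 2 can be sketched as follows (a minimal illustration; the variable names are ours, not the paper's):

```python
import math

def encode_hour(hour):
    """Sine/cosine encoding so that hour 23 and hour 0 end up adjacent
    in feature space, unlike the raw integer values."""
    angle = 2 * math.pi * hour / 24
    return math.sin(angle), math.cos(angle)

def encode_day(day_of_week):
    """Weekend flag (1 = weekend) plus DoW input, Monday = 1 ... Sunday = 7."""
    return (1 if day_of_week >= 6 else 0), day_of_week

h0, h23, h12 = encode_hour(0), encode_hour(23), encode_hour(12)
dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
# dist(h0, h23) is small, dist(h0, h12) is large: midnight neighbors
# are close, opposite times of day are far apart
```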

Determining a suitable number of hidden neurons and hidden layers is crucial in ANN modeling. Too many hidden neurons can cause overfitting; to avoid it, it is crucial to select an appropriate number of neurons in the hidden layer h. Unfortunately, there is no equation to compute the number of hidden neurons. In most cases, the appropriate number is determined by trial and error. In our research, the number of hidden neurons was determined by a rule of thumb: the number of hidden neurons is two-thirds the size of the input layer [67]. The parameters of the proposed models are presented in Table 4. The proposed methodology is depicted in Figure 8 and consists of the following steps:
1. Load input raw data;
2. Decompose load data using DWT into N subseries of details and approximations;
3. Create an input matrix from selected features;
6. Divide the processed data into training and testing sets;
7. Train and test until the error starts to increase, then stop training;
10. Reconstruct predicted outputs and reconstruct the signal X_rec = D1 + ... + Dn + An;
11. Denormalize outputs using the reverse mapstd function;
12. Validate the proposed models on a new dataset.
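The "train and test until the error starts to increase" step is a standard early-stopping rule. A generic sketch with placeholder train_step/validate callbacks (our stand-ins, not the paper's MATLAB training routines):

```python
def train_with_early_stopping(train_step, validate, max_epochs=1000, patience=2):
    """Keep training while the validation error falls; stop once it has
    failed to improve for `patience` consecutive epochs."""
    best_err, best_epoch, waited = float("inf"), 0, 0
    for epoch in range(1, max_epochs + 1):
        train_step()
        err = validate()
        if err < best_err:
            best_err, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch, best_err

# Simulated validation errors: falling until epoch 4, then rising.
errors = iter([5.0, 4.0, 3.0, 2.0, 3.0, 4.0, 5.0])
result = train_with_early_stopping(lambda: None, lambda: next(errors))
# training stops shortly after the minimum at epoch 4
```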

Results and Discussion
The results of the proposed models for 1 h ahead (10 min sampling interval) are presented in this section. In this research, three FFBP ANNs were proposed, and three different learning algorithms, Levenberg-Marquardt (LM), BFGS quasi-Newton (BFG), and Scaled Conjugate Gradient (SCG), were tested. The ANN models were compared with WANNs. The experiments were performed in the MATLAB R2020a environment on a laptop with an i7 3.00 GHz CPU and 16 GB memory.

Evaluation Metrics
To evaluate the accuracy of the proposed models, the following metrics were used: mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE), defined as follows:

MAE = (1/n) Σ_{i=1..n} |Y_i − Ŷ_i|
MAPE = (100%/n) Σ_{i=1..n} |(Y_i − Ŷ_i)/Y_i|
RMSE = √((1/n) Σ_{i=1..n} (Y_i − Ŷ_i)²)

where n is the number of samples, Y_i is the actual value, and Ŷ_i is the predicted value. Smaller values represent better prediction accuracy.
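The three metrics, written out as code (a straightforward sketch of the standard definitions; the sample values are illustrative, not plant data):

```python
import math

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error (in %); assumes no zero actuals."""
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))

def rmse(actual, predicted):
    """Root mean square error; penalizes large deviations more than MAE."""
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [10.0, 20.0, 30.0]
predicted = [11.0, 19.0, 33.0]
```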

WANN and ANN Prediction Comparison
The accuracy of the proposed WANN and ANN models was validated on a dataset from February 2018. This dataset was not included in training and testing the models. Figures 9 and 10 show the prediction error of the decomposed series with different learning algorithms. Upon visual inspection, it is noticeable that the prediction accuracy for the wavelet details D1, D2, and D3 differs greatly from the other details, with significant errors compared to them. The MAPE was 322.93% for LM, and SCG and BFG produced MAPE over 370%. This is caused by the high-frequency components (i.e., noise), although these sub-bands also contain some useful, predictable features. As stated in Section 4.2, sub-bands D1-D3 contain noise and could be omitted. The error rate decreases rapidly after D4. A higher error rate is also clearly visible in the A9 series, where BFG and SCG produce much worse predictions than the LM algorithm. Both algorithms have a tendency to overestimate the load, with total MAPEs of 0.135% (SCG), 0.112% (BFG), and 0.017% (LM). In general, the WANN model shows a good ability to capture the features of the decomposed raw data. The error analysis from Figure 10 indicates that the suitable decomposition level is 6. The predicted sub-bands D1, ..., Dn, An were reconstructed to obtain the total heat consumption, which is presented in Figure 11.
The accuracy of the WANNs' prediction is better than that of the conventional ANN: in every experiment, the WANN models outperform the conventional ANN model. In each case, the LM algorithm produced the highest accuracy. The MAPE for the conventional ANN LM was 1.75%, for WANN LM6 it was 0.359%, and for WANN LM9 it was 0.362%; the difference in accuracy between WANN LM6 and WANN LM9 is negligible. Other values are presented in Table 5 and Figure 12. It is worth mentioning that the proposed WANN models in each configuration were able to improve accuracy in every aspect. Percentage improvements are listed in Table 6, which demonstrates the importance of employing DWT in the data preprocessing stage. Table 6 shows that the improvement had a decreasing tendency after decomposition level 6 for the BFG and SCG algorithms. The largest percentage decrease in accuracy was produced by the SCG algorithm in each evaluation metric; the drop was 51% for MAE. However, even with these decreases, the overall accuracy remained higher than that of the conventional ANNs. The LM algorithm produced the biggest improvement in every metric: compared to the conventional model, the improvement was 79%. The LM algorithm showed no further improvement after decomposition level 6.

Conclusions
This research paper dealt with the prediction of heat consumption. Several models were created that can predict heat consumption with varying accuracy, and some important findings were identified during this research. The mother wavelet was chosen based on a quantitative criterion, the energy-entropy ratio; according to Figure 6, the most suitable mother wavelet was db5. From the presented results, it is clear that the suitable decomposition level is 6. The accuracy of the reconstructed signals shows that models with decomposition level 6 have better results than models with decomposition level 9 (Figures 10 and 11). We can also state that some details could be omitted during signal reconstruction. Several models were created for the purpose of finding the most appropriate training algorithm. The presented results show that models trained with the LM algorithm outperform the other models (Table 5). Calculation of the error metrics MAPE, RMSE, and MAE proved that the LM training algorithm offered the best results for all models. This research also compared the effectiveness of employing the wavelet transform during data preprocessing. In each case, the WANN models predicted heat consumption with significantly higher accuracy than the ANNs, and significant differences in accuracy were achieved in every WANN model. The LM algorithm produced the highest accuracy among both WANN and ANN models; compared to the conventional model (1.75% MAPE), the improvement was nearly fivefold (0.36% MAPE). The highest error was produced by the BFG algorithm in both cases. We can state that a combination of wavelet decomposition and ANN can significantly improve prediction performance. An outcome of this research is also a MATLAB GUI application that could be used by dispatchers.
In future research, there are several opportunities to improve the models, e.g., proposing and testing other ANN architectures such as Elman RNNs, optimizing the number of hidden neurons with PSO, reducing the input parameters with PCA, and proposing models with a longer forecasting period (12 h ahead, 24 h ahead).

Funding: This research was funded by Mladý výskumník, "Návrh neurónovej siete na predikciu spotreby tepla"; the Scientific Grant Agency of the Ministry of Education, Science, Research and Sport of the Slovak Republic and the Slovak Academy of Sciences, grant number VEGA 1/0272/18, "Holistic approach of knowledge discovery from production data in compliance with Industry 4.0 concept"; and the Scientific Grant Agency of the Ministry of Education, Science, Research and Sport of the Slovak Republic and the Slovak Academy of Sciences, grant number 1/0232/18, "Using the methods of multiobjective optimization in production processes control."