Article

Experimental Study of an Approximate Method for Calculating Entropy-Optimal Distributions in Randomized Machine Learning Problems

1 Federal Research Center “Computer Science and Control” of Russian Academy of Sciences, 44/2 Vavilova, 119333 Moscow, Russia
2 Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, 65 Profsoyuznaya, 117997 Moscow, Russia
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(11), 1821; https://doi.org/10.3390/math13111821
Submission received: 27 April 2025 / Revised: 26 May 2025 / Accepted: 28 May 2025 / Published: 29 May 2025

Abstract

This paper is devoted to an experimental study of the integral approximation method in entropy optimization problems arising from the application of the Randomized Machine Learning method. Entropy-optimal probability density functions contain normalizing integrals of multivariate exponential functions; as a result, when these distributions are computed in the course of solving an optimization problem, efficient computation of these integrals must be ensured. We investigate an approach based on the approximation of the integrand functions and apply it, using a symbolic computation mechanism, to several problem configurations with model and real data and linear static models. The computational studies were carried out under the same conditions, with the same initial data and the same hyperparameter values of the models used. They demonstrate the workability and efficiency of the proposed approach in Randomized Machine Learning problems based on linear static models.

1. Introduction

Randomized Machine Learning is a new scientific area based on the concept of artificial randomization of model parameters. This approach transforms a traditional model with fixed parameters into a model with random parameters. Instead of searching for specific parameter values, the model learns to find their probability distributions during training. These distributions are chosen to maximize their entropy while maintaining the moments of the model output.
One of the key advantages of this method is its independence from the real properties of the input data. This method does not require testing hypotheses about the normality of the data or studying other statistical characteristics of the data. The distributions obtained as a result of training are determined by the conditions of maximum entropy, which corresponds to the most unfavorable scenario of the investigated process characterized by the maximum degree of uncertainty. These properties of the entropy estimation method date back to the works of Boltzmann [1], Jaynes [2,3], Shannon [4]. A significant contribution to the further development of this concept was made by the studies of Kapur [5], Golan [6,7] and a number of other researchers.
An important feature of the method is the ability to simultaneously find entropy-optimal distributions of noise components together with optimal distributions of model parameters. This distinguishes this method from traditional approaches, where we have to make assumptions about the characteristics of the noise and the data.
The main theoretical foundations of the method are presented in [8], and by now, sufficient experience has been accumulated in its successful application to various problems of forecasting, classification, missing data recovery, and a number of others. Nevertheless, the experience of its practical application has revealed some properties that require further research. One such property is the necessity to compute multivariate integrals in the process of solving the main optimization problem of the method, which in practice is solved by an iterative numerical method. Numerical integration in most software frameworks is performed by methods based on quadrature formulas, effective software implementations of which exist for dimensionalities not higher than three. In order to solve the described problem, it was proposed to use the approximation of the exponents in the integrand functions by Taylor (Maclaurin) series.
Thus, the present work is aimed at further development of the Randomized Machine Learning (RML) method in the direction of effective approaches to the implementation of its computational component, in particular, an experimental study of the practical implementation and effectiveness of such approaches using synthetic and real data.
With this in mind, this paper is organized as follows: Section 2 formulates the problem this paper is aimed at, Section 3 describes the proposed materials and methods, Section 4 presents the results obtained and the discussion, and Section 5 concludes the paper.

2. Problem Statement

The application of the methodology of Randomized Machine Learning leads to the problem of optimizing the information entropy of the distributions of parameters and noise of the trained model.
Within the framework of this methodology, we consider models in the “input-output” form, in which the transformation of input into output is realized in the general case by a nonlinear function or functional.
It follows from such a general description that the transformation of input to output can be realized by both numerical and functional objects. In particular, one can consider a model as taking a continuous function as input and giving a function as output as well.
On the other hand, in the context of the study of processes functioning in time (often understood in a generalized sense), a productive approach is based on observing the state of the process at certain time instants, thus forming a set of state observations. It is important here that the observations do not occur continuously, so there is a countable set of observation points.
A natural way to describe the transformation of input into output is a parameterized functional description. The parameters of such a transformation are proposed to be tuned (trained) using the data at the observation points, which play the role of “real data” in this procedure.
The transformation of input into output, and, accordingly, the model, will be called static if its previous states are not needed to form the output, i.e., the next state, of the model. If previous states, in particular p previous states, are necessary, then a dynamic coupling is realized.
In this paper, we consider linear static models. Despite their simplicity, they are used in most statistical methods as well as within such scientific and applied areas as Data Mining, Machine Learning, and Artificial Intelligence due to their structural simplicity, interpretability, and availability of a large number of methods for working with them.
Consider a model with n inputs x_1, ..., x_n and one output y, the transformation between which is realized by the function y = z(x, a), where x = (x_1, ..., x_n) is the vector of model inputs and a = (a_1, ..., a_d) is the vector of parameters. The parameters are assumed to be random and of interval type with joint probability distribution P(a):

a_k \in A_k = [a_k^-, a_k^+], \quad k = \overline{1, d}.    (1)

In this paper, we consider linear models, and hence the function z can be represented in the form of a generalized dot product

z(x, a) = \sum_{i=1}^{n} a_i x_i.    (2)

It is assumed that the state of the process under study contains errors that determine its stochastic component. This randomness is taken into account in modeling by means of additive noise at the model output, which is also of interval type with the distribution q(ξ):

v_j = y_j + \xi_j = z(x_j, a) + \xi_j, \quad \xi_j \in [\xi_j^-, \xi_j^+], \quad j = \overline{1, m}.    (3)
Randomized Machine Learning is based on the idea of estimating model parameters simultaneously with measurement noise under the assumption that the parameters of the model under study are random, and their estimates resulting from the method are not their values or intervals but their probability distributions. This goal is achieved by solving the problem of maximizing the entropy of the distributions of the model parameters and noise subject to balance conditions on the moments of the model output with the observed real data ŷ:

H(P, Q) = -\int_A P(a) \ln P(a)\, da - \int_{\Xi} Q(\xi) \ln Q(\xi)\, d\xi \rightarrow \max,

\int_A P(a)\, da = 1, \qquad \int_{\Xi} Q(\xi)\, d\xi = 1,

\int_A z(x_j, a) P(a)\, da + \int_{\Xi_j} \xi\, q_j(\xi)\, d\xi = \hat{y}_j, \quad j = \overline{1, m}.
Entropy-optimal distributions (their probability density functions) of parameters and noises, depending on Lagrange multipliers, are given by the following expressions:

P^*(a, \lambda) = \frac{\exp\left(-\sum_{j=1}^{m} \lambda_j F(x_j, a)\right)}{\int_A \exp\left(-\sum_{j=1}^{m} \lambda_j F(x_j, a)\right) da},    (4)

q_j^*(\xi, \lambda) = \frac{\exp(-\lambda_j \xi)}{\int_{\Xi_j} \exp(-\lambda_j \xi)\, d\xi}, \quad j = \overline{1, m},    (5)

where \lambda = \{\lambda_j\} is the vector of Lagrange multipliers.
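For later pointwise comparison of the distributions, the density (4) can be evaluated on a grid of the parameter box once the Lagrange multipliers are known. The following is a minimal Python sketch of such an evaluation for a two-dimensional linear model; it is our own illustration (the experiments in this paper are implemented in MATLAB), and the input matrix X and the multipliers lam are placeholder values.

```python
# Sketch (not the authors' code): evaluate the entropy-optimal parameter PDF (4)
# on a grid of a 2D parameter box A = [-1, 1]^2 for given Lagrange multipliers.
import numpy as np
from scipy.integrate import trapezoid

def pdf_on_grid(lam, X, grid_1d):
    """Evaluate P*(a, lam) for a linear model z(x, a) = x . a on a Cartesian grid."""
    a1, a2 = np.meshgrid(grid_1d, grid_1d, indexing='ij')      # grid over (a_1, a_2)
    z = np.tensordot(X, np.stack([a1, a2]), axes=1)            # z_j(a) on the grid, shape (m, G, G)
    expo = np.exp(-np.tensordot(lam, z, axes=1))               # exp(-sum_j lam_j z_j(a))
    norm = trapezoid(trapezoid(expo, grid_1d, axis=1), grid_1d, axis=0)  # integral over A
    return expo / norm

# hypothetical usage with placeholder inputs and multipliers
X = np.array([[0.2, 0.7], [0.5, 0.1], [0.9, 0.4]])
lam = np.array([0.5, -0.3, 1.0])
grid = np.linspace(-1.0, 1.0, 101)
P = pdf_on_grid(lam, X, grid)    # values of P*(a, lam) on the grid
```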
To determine the Lagrange multipliers, it is necessary to solve the following equation:
\frac{\int_A F(x_j, a) \exp\left(-\sum_{k=1}^{m} \lambda_k F(x_k, a)\right) da}{\int_A \exp\left(-\sum_{k=1}^{m} \lambda_k F(x_k, a)\right) da} + \frac{\int_{\Xi_j} \xi \exp(-\lambda_j \xi)\, d\xi}{\int_{\Xi_j} \exp(-\lambda_j \xi)\, d\xi} = \hat{y}_j, \quad j = \overline{1, m}.    (6)
Although the system (6) is solved numerically, any numerical method for such problems has an iterative character where, at each iteration, it is required to calculate the value of the left-hand side of the equations for the current value of the optimized variable. In the case of the system under consideration, this operation is significantly complicated by the presence of the integral components.
Numerical computation of multivariate integrals is a challenging problem in general, which is relatively efficiently solved on most computing platforms using quadrature methods for dimensionalities of three or less. For higher dimensionality, it is necessary to involve Monte Carlo methods, the accuracy of which is inversely proportional to the square root of the number of trials (random grid points), which is further aggravated by the higher dimensionality, requiring an increase in the number of points to achieve the required accuracy.
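To make this concrete, the fully numerical evaluation of the left-hand side of (6) for a low-dimensional linear model can be sketched as follows. This is our own Python illustration of the quadrature-based variant, not the authors' MATLAB implementation; the inputs X, the observations y_hat, the parameter box, and the noise interval are placeholder values.

```python
# A minimal sketch of the quadrature-based ("num") variant for a linear static
# model with D = 2 parameters, assuming a_k in [-1, 1] and xi_j in [-0.3, 0.3].
import numpy as np
from scipy import integrate, optimize

X = np.array([[0.2, 0.7],      # inputs x_j at the observation points (example values)
              [0.5, 0.1],
              [0.9, 0.4]])
y_hat = np.array([0.3, 0.2, 0.5])          # observed outputs (example values)
A = [(-1.0, 1.0), (-1.0, 1.0)]             # parameter box A
XI = (-0.3, 0.3)                           # noise interval

def lhs(lam):
    """Left-hand side of the balance equations (6) for a given lambda."""
    def weight(*a):
        return np.exp(-np.dot(lam, X @ np.array(a)))
    denom, _ = integrate.nquad(weight, A)
    out = np.empty(len(lam))
    for j in range(len(lam)):
        num_j, _ = integrate.nquad(lambda *a: (X[j] @ np.array(a)) * weight(*a), A)
        qn, _ = integrate.quad(lambda xi: xi * np.exp(-lam[j] * xi), *XI)   # noise numerator
        qd, _ = integrate.quad(lambda xi: np.exp(-lam[j] * xi), *XI)        # noise normalizer
        out[j] = num_j / denom + qn / qd
    return out

def residual(lam):
    return lhs(lam) - y_hat

lam_star = optimize.root(residual, x0=np.ones(len(y_hat))).x   # initial lambda = 1
print(lam_star)
```

Every evaluation of residual here requires recomputing the multivariate integrals by quadrature, which is exactly the cost that grows prohibitively with the dimensionality of the parameter space.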
To address this problem, it was proposed in [9] to approximate the exponents by a Taylor (Maclaurin) series, which is possible due to the analyticity of the exponential function; in particular, the exponents under the integrals in expressions (5) and (6) are approximated by the Maclaurin series of the exponential of a scalar variable f with a finite number of terms. In the framework of this paper, we study the case with four terms of the series:

\exp(-f) \approx \sum_{k=0}^{3} \frac{(-1)^k}{k!} f^k = 1 - f + \frac{f^2}{2} - \frac{f^3}{6}.

We believe that the choice of four terms of the Taylor series is optimal in terms of accuracy and computational performance at the current stage of this study.
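As a simple sanity check (ours, not taken from the paper), the four-term truncation can be compared with the exact exponential on a sample range of arguments:

```python
# Compare exp(-f) with its four-term Maclaurin truncation on a sample range of f.
import math
import numpy as np

def exp_neg_approx(f, terms=4):
    """Truncated Maclaurin series of exp(-f): sum_{k=0}^{terms-1} (-f)^k / k!."""
    return sum((-f) ** k / math.factorial(k) for k in range(terms))

f = np.linspace(0.0, 1.0, 5)
print(np.c_[np.exp(-f), exp_neg_approx(f)])   # exact vs. four-term approximation
```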

3. Materials and Methods

The purpose of this paper is to experimentally investigate the practical feasibility and efficiency of the proposed approach by applying the RML method to various linear models with three numerical implementation variants: fully numerical (hereafter denoted as numeric or num); exact (exact or exp), based on the exact calculation of the exponents; and approximate (approximate or app).
The first variant realizes the numerical computation of the corresponding integrals using methods based on quadrature formulas, and its application is possible for parameter dimensionalities not higher than 3.
The second and third variants are based on the direct computation of the integrals included in the corresponding expressions. This approach relies on the symbolic computation mechanism implemented in many software platforms, such as MATLAB and Python. This mechanism allows expressions to be formed and manipulated in a manner similar to how a human would. Of course, its functionality has certain limitations that make it practically impossible to manipulate bulky expressions due to the lack of computational resources, such as memory. In this regard, it is particularly relevant to verify the realizability of this approach for the studied problem.
In view of the above, by the exact (but still numerical) solution we mean the computation of the corresponding integrals using symbolic computations; a similar principle is applied to the approximate solution, where the corresponding expressions of the integrand functions implement the Taylor series expansion with a given number of terms.
The study of this approach will be carried out on two groups of models with corresponding data. The first group uses artificially created data generated by a model with specified characteristics, which makes it possible to obtain a variety of concrete data realizations. The second group uses real data related to the task of forecasting the area of thermokarst lakes, whose probabilistic characteristics are unknown. In all cases, the data are scaled to the interval [0, 1] to avoid error accumulation and overflow in the computations.
The purpose of the experimental study is to establish the closeness of the solutions obtained for Equation (6), as well as the closeness of the obtained entropy-optimal distributions (4) and (5).

3.1. Model Data

The model data to be used in the following are generated by a linear static model of order p of the form
y[n] = \sum_{k=1}^{p} a_k x_k[n] + \varepsilon[n], \quad n = \overline{1, N},    (7)

where y is the model output, x_k are the inputs, ε is random noise with a standard normal distribution, n is the time index, and a_k are the parameters.
Under the assumption that the model realizes some process, observations of its development at some time points n give a set of real data that will be used to train the model. For the purposes of this study, in accordance with the model, we generate different datasets for given values of the parameter vector a and different values of the model order p and the number of observation points N; the inputs x_k are generated randomly with a uniform distribution on the interval [0, 10].
Since the purpose of this paper is to investigate the realizability of the proposed method for a different number of data points and dimensionalities of the integration space, we further use the notation D for the dimensionality of the parameter vector (in the context of the model (7) D = p ), and the reference to the corresponding data configuration is dDnN.
We generate the data for the study for D = 2 , 3 , 4 , 5 and N = 5 , 10 , 15 . Examples of some generated data configurations are shown in Figure 1, Figure 2 and Figure 3, where the graphs labeled real show the generated data and the graphs labeled model show the model trajectory obtained by the least squares for the model used.
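A possible way to generate one such configuration is sketched below; this is our own illustration rather than the authors' generator, and the function name generate_config, the seed handling, and the min-max scaling details are assumptions.

```python
# Sketch of generating a dDnN configuration for the linear static model (7):
# inputs uniform on [0, 10], standard normal noise, result scaled to [0, 1].
import numpy as np

def generate_config(a, N, seed=0):
    """Generate one dDnN configuration: returns inputs X (N x D) and outputs y (N,)."""
    rng = np.random.default_rng(seed)
    D = len(a)
    X = rng.uniform(0.0, 10.0, size=(N, D))          # inputs x_k[n]
    y = X @ np.asarray(a) + rng.standard_normal(N)   # model output with noise
    # scale inputs and output to [0, 1] to avoid error accumulation and overflow
    X = (X - X.min(0)) / (X.max(0) - X.min(0))
    y = (y - y.min()) / (y.max() - y.min())
    return X, y

X, y = generate_config(a=[0.5, -1.2], N=10)          # a "d2n10"-type configuration
```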

3.2. Real Data

As a model of a real process with corresponding data, we consider the model of the thermokarst lake area in Western Siberia, considered in a number of publications [10,11,12] devoted to the study of this object; the importance and relevance of its study stem from the fact that thermokarst lakes are a massive source of greenhouse gases. In the present work, this object and the related data are of interest and chosen by us because, despite the structural simplicity of the model used, the data on which it has to be trained are characterized by two main circumstances: their small amount and large errors of a nature unknown in the probabilistic sense. These circumstances are related to the fact that some data are collected from only a few points in vast, inaccessible areas, while others are obtained by methods of analyzing low-resolution satellite images, resulting in a small amount of noisy data.
The model considered here is a linear static model with two inputs, which establishes the relationship between the area S and two natural factors, precipitation R and temperature T, whose data are annually averaged. We use two variants of it, with and without the free parameter a_3, in order to use the same data, namely
S[n] = a_1 T[n] + a_2 R[n] + a_3.    (8)
Similar to the model data, we consider three data configurations, lakes_d2n5, lakes_d2n10, and lakes_d3n5, obtained by selecting the appropriate number of consecutive points from the available dataset. The real data for the two configurations are shown in Table 1 and Figure 4 and Figure 5, where R is scaled to [ 0 , 1 ] .

3.3. Entropy Maximization Procedure

The use of symbolic calculation mechanisms in practice involves the necessity of separating work with numeric and symbolic objects within a single program code. Otherwise, there may be problems with the accuracy of calculations and/or their speed [13]. In most implementations of symbolic computations, it is recommended to convert all numeric values that are included in symbolic expressions into a symbolic representation, or to substitute them instead of the corresponding symbols at the final stage of computation, when the numeric result of some expression is required.
The problems considered in this paper are characterized by the fact that the integrand functions include the values of inputs at the observation points x j , as well as Lagrange multipliers λ , and integration is required to be performed over the parameters a . It should be taken into account that the solution of the problem (6) is performed by an iterative numerical optimization method, which leads to the need to calculate integrals for each new value of the optimized variable, i.e.,  λ .
Taking these circumstances into account, before starting the optimization, it is necessary to form symbolic expressions for the corresponding parts of the function to be optimized, namely for the integrals in the expression (6). In this case, to reduce the amount of symbolic calculations, these expressions are formed in several steps:
  • The symbolic vectors for the variables λ and a and the matrix of inputs X = {x_j}, j = \overline{1, m}, are formed;
  • All values of the inputs are converted to symbolic form, and a set (vector) z of symbolic representations of the functions z_j(a) = z(x_j, a) defined by (2) is formed;
  • The symbolic representation of the dot product (λ, z) is computed;
  • Two symbolic representations are then computed at once, one for the exact solution and another for the approximate solution, by substituting the expression from the previous step into the corresponding symbolic expression, i.e., exp(−(λ, z)) or its truncated series.
    The symbolic expression of the integrand function of the form z_j exp(−(λ, z)) is calculated in the same way.
This procedure allows us to obtain the expressions of the required integrand functions depending only on λ , thus performing cumbersome and relatively resource-intensive symbolic calculations once before starting the numerical solution of Equation (6). Inside the optimized function, the current values of λ are substituted into the corresponding symbolic expressions, and then the values of integrals are calculated, which are no longer resource-intensive.
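As an illustration of this procedure, the following SymPy sketch (our own; the implementation in this paper uses the MATLAB Symbolic Math Toolbox) forms the symbolic normalizing integrals once as functions of λ and then converts them into fast numeric functions for use inside the optimized function. The dimensions and input values are placeholders.

```python
# SymPy sketch of the symbolic precomputation: build the exact and approximate
# integrands, integrate over the parameter box once, and keep functions of lambda.
import sympy as sp

m, D = 3, 2                                          # observation points, parameter dimension
lam = sp.symbols(f'lam0:{m}')                        # symbolic Lagrange multipliers
a = sp.symbols(f'a0:{D}')                            # symbolic model parameters
X = sp.Matrix([[0.2, 0.7], [0.5, 0.1], [0.9, 0.4]])  # inputs converted to symbolic form

z = [sum(X[j, i] * a[i] for i in range(D)) for j in range(m)]   # z_j(a) = z(x_j, a), Eq. (2)
f = sum(lam[j] * z[j] for j in range(m))                        # dot product (lambda, z)

exact_integrand = sp.exp(-f)
approx_integrand = sum((-f) ** k / sp.factorial(k) for k in range(4))   # four-term series

box = [(ai, -1, 1) for ai in a]                      # parameter box A = [-1, 1]^D
norm_exact = sp.integrate(exact_integrand, *box)     # depends only on lambda
norm_approx = sp.integrate(sp.expand(approx_integrand), *box)

# turn the symbolic results into fast numeric functions of lambda for the solver
norm_exact_fn = sp.lambdify(lam, norm_exact, 'numpy')
norm_approx_fn = sp.lambdify(lam, norm_approx, 'numpy')
print(norm_exact_fn(1.0, 1.0, 1.0), norm_approx_fn(1.0, 1.0, 1.0))
```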

3.4. Comparison

We propose to evaluate the feasibility and efficiency of our approach, taking into account its application to RML problems, by comparing the resulting Lagrange multipliers λ , as well as the resulting entropy-optimal parameter distributions (their probability density functions).
We compare the Lagrange multipliers obtained as a result of solving the corresponding problems for different configurations by two metrics: absolute Δ and relative δ error in the L 2 norm:
\Delta = \| x - y \|, \qquad \delta = \frac{\| x - y \|}{\| x \|},
where the relative error is calculated relative to the exact value.
It should be noted that in the case of comparing the multipliers, there are no serious problems, because here we only need to compare vectors in the chosen metric. In the case of comparing distributions, however, certain difficulties may arise due to their dimensionality. Here, we propose to estimate the difference of the distributions pointwise by calculating the mean squared error (MSE) or the mean absolute percentage error (MAPE):

\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} (p_i - \hat{p}_i)^2, \qquad \mathrm{MAPE} = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{p_i - \hat{p}_i}{\hat{p}_i} \right|,

where \hat{p} is the “exact” value, p is the “model” value, and N is the number of points.
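For completeness, a direct transcription of these formulas into code (a hedged helper sketch, not the paper's implementation) could look as follows:

```python
# Helper functions for the comparison metrics above.
import numpy as np

def abs_rel_error(x, y):
    """Absolute and relative L2 errors between two multiplier vectors; x is the reference."""
    d = np.linalg.norm(np.asarray(x) - np.asarray(y))
    return d, d / np.linalg.norm(x)

def mse_mape(p_hat, p):
    """Pointwise MSE and MAPE between the 'exact' PDF values p_hat and the 'model' values p."""
    p_hat, p = np.asarray(p_hat), np.asarray(p)
    return np.mean((p - p_hat) ** 2), np.mean(np.abs((p - p_hat) / p_hat))
```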

4. Results and Discussion

The implementation of all computational experiments was carried out in MATLAB version 9.7 (2019b) on a Windows x64 platform equipped with an Intel 3.00 GHz 8-core i7-9700F CPU and 32 GB of RAM. The Trust-Region-Dogleg algorithm [14] implemented in the Optimization Toolbox [15] was used to solve the system of Equations (6) with default options and initial λ = 1, and symbolic calculations were implemented using the Symbolic Math Toolbox [13].
The Randomized Machine Learning problem was solved in all experiments for a_k ∈ [−1, 1] and ξ_j ∈ [−0.3, 0.3], and the initial value for the corresponding optimization problems was λ_0 = 1.

4.1. Model Data

The obtained values of Lagrange multipliers for different configurations are contained in Table 2, Table 3 and Table 4, and the values of metrics are in Table 5.
From the given data, we can see that for each value of dimension D in all three variants of the problem solution, numerical, exact, and approximate, the Lagrange multipliers obtained as a result of numerical optimization of the corresponding Equation (6) are close. This is evidenced by the values of absolute and relative errors, the latter of which do not exceed 5%.
We also observe the deviation of the solution obtained in the approximate variant from the exact one, as well as from the numerical one, while the latter two are closer to each other. This situation is quite expected and is explained by the approximation using a Taylor series with only four terms. Increasing the number of terms of the series will predictably increase the accuracy, i.e., bring the solution closer to the exact one in the metric used, but will significantly increase the load on computational resources and degrade performance.
As for the dependence of the accuracy of the obtained solutions (in the sense of the used metric) on the dimensionality and the number of observation points, the obtained data did not reveal any significant dependence between these parameters. The order of the obtained errors keeps its stability in all variants of data configurations.
Let us now turn to the analysis of the obtained entropy-optimal distributions, which are the goal of RML. Figure 6 shows the bivariate distributions obtained in the configuration d2n10, and Table 6 shows the error values between the corresponding distributions for all investigated configurations.
The given graphs show the structural closeness of the distributions (PDF functions) in the exact and approximate cases, and the values of the errors between all three variants of the solution allow us to state that this closeness is observed for the numerical solution as well.
It should be noted that, unlike the solutions to the optimization problem, the errors between the distributions indicate some dependence on the dimensionality and the number of observation points: some increase in the relative error with an increasing number of observation points is noticeable for all the dimensions investigated, and an increase in the error with increasing dimensionality for the same number of observation points can also be observed. All this can be explained by the fact that both the number of observation points and the dimensionality determine the value of the integrand function in the corresponding expressions and, as a consequence, the integral of it. Due to the approximation of the integrand, the error increases, especially when the dimensionality increases.
Nevertheless, the obtained data show a relatively low level of error, which is acceptable for the problems on which the RML method is oriented, namely for problems with a small amount of noisy data.

4.2. Real Data

The obtained values of Lagrange multipliers for the real data configurations are contained in Table 7 and Table 8, the values of metrics are in Table 9,  Figure 7 presents the entropy-optimal distributions obtained in the lakes_d2n10 configuration, and Table 10 summarizes the error values between the corresponding distributions.
In general, in the case of the real data, whose probabilistic characteristics, unlike those of the artificial data, are unknown, we can draw the same conclusions, namely, low errors both in the solution of the optimization problem and between the obtained distributions.

4.3. Performance

The performance of the algorithms used was evaluated when solving the problems on the model data. Table 11 shows the solution time of the optimization problem, including the computation of the corresponding integrals. The time_* columns show the time needed to solve the problem, and the next three columns show the ratio of the time of the corresponding solution to the numerical one (columns % exp and % app) and of the approximate solution to the exact one (column % exp_app).
In general, the obtained performance results show the expected speedup of the solution when symbolic computations are used, since the symbolic expressions, which depend only on λ, are computed once before starting the optimization, whereas the fully numerical solution requires computing the values of the integrals for each value of λ, i.e., at each iteration, which significantly increases the amount of computation.
The computation of the approximate solution is expectedly slower than that of the exact one, which is due to the structurally more complex expressions: they are sums of several terms of the series that need to be raised to a power. Although the arguments of these expressions are simple sums, the expressions themselves contain a much larger number of terms and factors, resulting in the need to build and manipulate large special data structures in memory. As a result, this leads to an increase in the time required to obtain a solution.
It should also be noted that some timing results for the approximate variants differ from those for the exact variants. This is due to the properties of the software platform used and the time required to build the corresponding program objects in memory. Such a process, called “warming up”, is standard for many software systems, including those using some type of parallelism.

5. Conclusions

The aim of this work was to experimentally investigate an approach to the efficient computation of the integral components in problems arising in the application of the Randomized Machine Learning method. Entropy-optimal distributions (their probability densities) contain normalizing integrals of multivariate exponential functions; as a result, when computing these distributions in the process of solving an optimization problem, it is necessary to ensure efficient computation of these integrals. We investigated an approach based on the approximation of the integrand functions by a Taylor series with four terms. The research methodology involved solving several problems with appropriate data configurations in two groups, based on artificially generated model data and real data. The workability and efficiency of the proposed approach were evaluated by the accuracy (closeness) of the solutions of the optimization problem for the Lagrange multipliers, as well as by the closeness of the obtained entropy-optimal distributions. The computational studies were carried out under identical conditions, with the same initial conditions and hyperparameter values of the models. They showed the workability and efficiency of our proposed approach in problems based on linear static models.

Author Contributions

Conceptualization, Y.S.P., A.Y.P., and Y.A.D.; data curation, Y.A.D. and I.V.S.; methodology, Y.S.P., A.Y.P., and Y.A.D.; software, A.Y.P., Y.A.D., and I.V.S.; supervision, Y.S.P.; writing—original draft, A.Y.P. and Y.A.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The reported study was performed in the Research Laboratory of Young Scientists “Technologies for Analysis and Controllable Text Generation”.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Boltzmann, L. On connection between the second law of mechanical theory of heat and probability theory in heat equilibrium theorems. In Boltzmann L.E. Selected Proceedings; Shlak, L.S., Ed.; Classics of Science, Nauka: Moscow, Russia, 1984. [Google Scholar]
  2. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  3. Jaynes, E.T. Probability Theory: The Logic of Science; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  4. Shannon, C.E. Communication theory of secrecy systems. Bell Labs Tech. J. 1949, 28, 656–715. [Google Scholar] [CrossRef]
  5. Kapur, J.N. Maximum-Entropy Models in Science and Engineering; John Wiley & Sons: Hoboken, NJ, USA, 1989. [Google Scholar]
  6. Golan, A.; Judge, G.; Miller, D. Maximum Entropy Econometrics: Robust Estimation with Limited Data; John Wiley & Sons: New York, NY, USA, 1996. [Google Scholar]
  7. Golan, A. Foundations of Info-Metrics: Modeling, Inference, and Imperfect Information; Oxford University Press: Oxford, UK, 2018. [Google Scholar]
  8. Popkov, Y.S.; Popkov, A.Y.; Dubnov, Y.A. Entropy Randomization in Machine Learning; Chapman and Hall/CRC: New York, NY, USA, 2023. [Google Scholar] [CrossRef]
  9. Popkov, Y.S. Analytic method for solving one class of nonlinear equations. Dokl. Math. 2024, 110, 404–407. [Google Scholar] [CrossRef]
  10. Polischuk, V.Y.; Muratov, I.N.; Polischuk, Y.M. Problemi modelirovania prostranstvennoy strukturi polei termokarsovykh ozer v zone vechnoy merzloty na osnove sputnikovykh snimkov. Vestnik Yugorskogo Gosudarstvennogo Universiteta 2018, 88–100. [Google Scholar]
  11. Polischuk, V.Y.; Muratov, I.N.; Kuprianov, M.A.; Polischuk, Y.M. Modelirovanie polei termokarsovykh ozer v zone vechnoy merzloty na osnove geoimitacionnogo podhoda i sputnikovykh snimkov. Mat. Zametki SVFU 2020, 27, 101–114. [Google Scholar]
  12. Polischuk, V.Y.; Kuprianov, M.A.; Polischuk, Y.M. Analiz vzaimosvyazi izmeneniy klimata i dinamiki termokarsovykh ozer v arkticheskoy zone Taymyra. Sovrem. Probl. Distancionnogo Zondirovania Zemli Kosmosa 2021, 16. [Google Scholar]
  13. The MathWorks Inc. Symbolic Math Toolbox, version 9.4 (R2022b); The MathWorks Inc.: Natick, MA, USA, 2022. [Google Scholar]
  14. Nocedal, J.; Wright, S. Numerical Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  15. The MathWorks Inc. Optimization Toolbox, version 9.4 (R2022b); The MathWorks Inc.: Natick, MA, USA, 2022. [Google Scholar]
Figure 1. Model data d2n15.
Figure 2. Model data d3n15.
Figure 3. Model data d4n15.
Figure 4. Data lakes_d2n10.
Figure 5. Data lakes_d3n5.
Figure 6. Entropy-optimal distributions for d2n10.
Figure 7. Entropy-optimal distributions for lakes_d2n10.
Table 1. Data for the thermokarst lake area model.

n     S         T         R
1     0.0589    −1.7027   512.32
2     0.0417    −0.6765   586.16
3     −0.0381   1.7985    672.80
4     −0.0551   0.9089    542.66
5     0.0045    0.3306    593.23
6     0.0710    0.8634    770.21
7     0.0059    −0.3165   584.41
8     −0.0245   0.3509    565.19
9     −0.0492   −0.2508   630.14
10    −0.0198   −0.1259   569.59
Table 2. Lagrange multipliers for N = 5.

            D = 2                             D = 3                             D = 4
n    λ_num     λ_exp     λ_app      λ_num     λ_exp     λ_app      λ_exp     λ_app
1    10.0935   10.0935   10.1601    4.1464    4.1464    4.0935     2.2523    2.2115
2    −0.1498   −0.1498   −0.0668    0.4291    0.4291    0.4675     −0.3431   −0.3263
3    −7.6378   −7.6378   −7.5596    −2.7148   −2.7148   −2.6683    −1.4237   −1.3946
4    1.5304    1.5304    1.5611     −0.8247   −0.8247   −0.8744    0.3936    0.3710
5    4.8865    4.8865    4.8571     2.1866    2.1866    2.2110     0.5608    0.5584
Table 3. Lagrange multipliers for N = 10.

            D = 2                             D = 3                             D = 4
n    λ_num     λ_exp     λ_app      λ_num     λ_exp     λ_app      λ_exp     λ_app
1    1.9824    1.9824    1.9701     2.5940    2.5940    2.6092     7.4418    7.4860
2    −3.9151   −3.9151   −3.8723    −1.6401   −1.6401   −1.6142    −3.6777   −3.6638
3    −0.0437   −0.0437   −0.0121    −2.2928   −2.2928   −2.2815    0.8187    0.8201
4    1.1921    1.1921    1.1855     −3.1355   −3.1355   −3.1026    −4.8527   −4.8176
5    3.4034    3.4034    3.4127     0.6183    0.6183    0.6096     0.0411    0.0255
6    0.9645    0.9645    0.9723     −0.6940   −0.6940   −0.7100    −3.8295   −3.8492
7    5.5373    5.5373    5.5689     8.4054    8.4054    8.3623     13.2703   13.1382
8    −2.4072   −2.4072   −2.4011    −0.5980   −0.5980   −0.5665    0.4042    0.4397
9    −0.9228   −0.9228   −0.9140    −2.3018   −2.3018   −2.2880    −4.8702   −4.8555
10   4.6359    4.6359    4.6289     6.0753    6.0753    6.1452     6.2115    6.2761
Table 4. Lagrange multipliers for N = 15.

            D = 2                                D = 3                             D = 4
n    λ_num      λ_exp      λ_app       λ_num     λ_exp     λ_app      λ_exp     λ_app
1    0.4423     0.4423     0.5153      0.0115    0.0115    0.0248     0.8708    0.8679
2    2.0200     2.0200     2.2460      0.9041    0.9041    0.9150     3.6841    3.6869
3    0.1533     0.1533     0.2645      0.4829    0.4829    0.4845     0.3460    0.3537
4    −13.5778   −13.5778   −13.0830    −2.7422   −2.7422   −2.7215    −3.9408   −3.9386
5    −2.6339    −2.6339    −2.6399     1.2425    1.2425    1.2579     4.3772    4.3783
6    −1.4540    −1.4540    −1.5169     −5.1839   −5.1839   −5.1746    −4.7839   −4.7754
7    3.8081     3.8081     3.8006      3.1229    3.1229    3.1276     10.7363   10.7277
8    2.0824     2.0824     2.2865      −1.9384   −1.9384   −1.9294    −3.6504   −3.6508
9    −1.0257    −1.0257    −0.9212     −6.4825   −6.4825   −6.4556    −5.8717   −5.8722
10   0.1655     0.1655     0.2821      −1.5453   −1.5453   −1.5353    −5.0077   −5.0017
11   6.0120     6.0120     5.7956      0.9259    0.9259    0.9138     3.0417    3.0579
12   −0.6004    −0.6004    −0.3676     1.5446    1.5446    1.5645     2.1969    2.1960
13   1.5065     1.5065     1.6531      2.2866    2.2866    2.2895     3.5983    3.6093
14   −2.5354    −2.5354    −2.5823     5.8602    5.8602    5.8757     0.9660    0.9752
15   4.8436     4.8436     4.9078      8.5578    8.5578    8.5747     11.0155   11.0052
Table 5. Lagrange multipliers errors.

                        D = 2                D = 3                D = 4
x, y       N       Δ        δ          Δ        δ          Δ        δ
num, exp           10⁻¹⁰    10⁻¹¹      10⁻¹⁰    10⁻¹¹
num, app   5       0.1387   0.0102     0.0975   0.0177
exp, app           0.1387   0.0102     0.0975   0.0177     0.0575   0.0207
num, exp           10⁻¹⁰    10⁻¹¹      10⁻⁸     10⁻¹⁰
num, app   10      0.0658   0.0069     0.1019   0.0087
exp, app           0.0658   0.0069     0.1019   0.0087     0.1647   0.0088
num, exp           10⁻⁸     10⁻⁹       10⁻⁸     10⁻⁹
num, app   15      0.7160   0.0423     0.0552   0.0038
exp, app           0.7160   0.0423     0.0552   0.0038     0.0290   0.0014
Table 6. Distribution errors.

                        D = 2                D = 3                D = 4
x, y       N       MSE      MAPE       MSE      MAPE       MSE      MAPE
num, exp           10⁻²⁰    10⁻¹⁰      10⁻²²    10⁻¹⁰
num, app   5       0.0054   0.1057     0.0006   0.0773
exp, app           0.0054   0.1057     0.0006   0.0773     0.0001   0.0458
num, exp           10⁻²¹    10⁻¹⁰      10⁻²¹    10⁻¹⁰
num, app   10      0.0023   0.0741     0.0020   0.1284
exp, app           0.0023   0.0741     0.0020   0.1284     0.0005   0.1193
num, exp           10⁻¹⁷    10⁻⁹       10⁻²¹    10⁻¹⁰
num, app   15      0.1153   0.3594     0.0026   0.1477
exp, app           0.1153   0.3594     0.0026   0.1477     0.0002   0.0808
Table 7. Lagrange multipliers for N = 5.

            D = 2                             D = 3
n    λ_num     λ_exp     λ_app      λ_num     λ_exp     λ_app
1    0.2428    0.2428    0.2428     0.0389    0.0389    0.0389
2    −0.0916   −0.0916   −0.0916    −0.0839   −0.0839   −0.0839
3    −0.1409   −0.1409   −0.1409    −0.0763   −0.0763   −0.0763
4    0.8354    0.8354    0.8354     0.3860    0.3860    0.3860
5    −0.1144   −0.1144   −0.1144    −0.2233   −0.2233   −0.2233
Table 8. Lagrange multipliers for N = 10.

            D = 2                             D = 3
n    λ_num     λ_exp     λ_app      λ_num     λ_exp     λ_app
1    −0.3466   −0.3466   −0.3466    −0.8513   −0.8513   −0.8513
2    −0.3934   −0.3934   −0.3934    −0.6373   −0.6373   −0.6373
3    0.3349    0.3349    0.3349     0.0529    0.0529    0.0529
4    1.1272    1.1272    1.1272     0.1335    0.1335    0.1335
5    −0.0739   −0.0739   −0.0739    −0.5377   −0.5377   −0.5377
6    −1.9918   −1.9918   −1.9918    −1.2822   −1.2822   −1.2822
7    0.4521    0.4521    0.4521     0.0992    0.0992    0.0992
8    0.7408    0.7408    0.7408     0.0669    0.0669    0.0669
9    2.5395    2.5395    2.5396     2.4964    2.4964    2.4964
10   1.0634    1.0634    1.0634     0.5461    0.5461    0.5461
Table 9. Lagrange multipliers errors.

                        D = 2                  D = 3
x, y       N       Δ         δ          Δ         δ
num, exp           10⁻¹⁰     10⁻¹⁰      10⁻¹³     10⁻¹³
num, app   5       10⁻⁷      10⁻⁷       10⁻⁶      10⁻⁶
exp, app           2·10⁻⁷    2·10⁻⁷     2·10⁻⁶    4·10⁻⁷
num, exp           10⁻⁹      10⁻¹⁰      10⁻¹⁰     10⁻¹⁰
num, app   10      2·10⁻⁵    5·10⁻⁶     5·10⁻⁶    5·10⁻⁶
exp, app           2·10⁻⁵    5·10⁻⁶     2·10⁻⁵    5·10⁻⁶
Table 10. Distribution errors.

                        D = 2                  D = 3
x, y       N       MSE       MAPE       MSE       MAPE
num, exp           10⁻²⁴     10⁻¹²      10⁻²⁹     10⁻¹⁴
num, app   5       10⁻¹²     5·10⁻⁶     10⁻¹²     2·10⁻⁵
exp, app           10⁻¹²     5·10⁻⁶     10⁻¹²     2·10⁻⁵
num, exp           10⁻²²     10⁻¹¹      10⁻²⁴     10⁻¹¹
num, app   10      10⁻¹²     4·10⁻⁶     10⁻¹⁰     10⁻⁴
exp, app           10⁻¹²     4·10⁻⁶     10⁻¹⁰     10⁻⁴
Table 11. Performance indicators, seconds.

N    D    time_num   time_exp   time_app   % exp   % app   % exp_app
5    2    20         6          6          0.28    0.28    1.00
10   2    88         35         42         0.39    0.47    1.20
15   2    297        110        121        0.37    0.41    1.10
5    3    752        38         18         0.05    0.02    0.48
10   3    4240       112        100        0.03    0.02    0.89
15   3    12,346     333        654        0.03    0.05    1.96
5    4               19         35                         1.88
10   4               247        380                        1.54
15   4               696        4005                       5.75