Article

Entropy-Randomized Forecasting of Stochastic Dynamic Regression Models

by Yuri S. Popkov 1,2,3,*, Alexey Yu. Popkov 1, Yuri A. Dubnov 1,4 and Dimitri Solomatine 5

1 Federal Research Center “Computer Science and Control” of Russian Academy of Sciences, Moscow 119333, Russia
2 Institute of Control Sciences of Russian Academy of Sciences, Moscow 117997, Russia
3 Department of Software Engineering, ORT Braude College, Carmiel 2161002, Israel
4 National Research University “Higher School of Economics”, Moscow 101000, Russia
5 IHE Delft Institute for Water Education, 2601 Delft, The Netherlands
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(7), 1119; https://doi.org/10.3390/math8071119
Submission received: 20 May 2020 / Revised: 6 July 2020 / Accepted: 6 July 2020 / Published: 8 July 2020
(This article belongs to the Special Issue Machine Learning and Data Mining in Pattern Recognition)

Abstract:
We propose a new forecasting procedure based on randomized hierarchical dynamic regression models with random parameters, measurement noises and a random input. We develop the technology of entropy-randomized machine learning, which includes estimating the characteristics of a dynamic regression model and testing the model by generating ensembles of predicted trajectories through sampling of the entropy-optimal probability density functions of the model parameters and measurement noises. The density functions are determined at the learning stage by solving the constrained maximization problem of an information entropy functional subject to empirical balances with real data. The proposed procedure is applied to the randomized forecasting of the daily electrical load in a regional power system. We construct a two-layer dynamic model of the daily electrical load: one layer describes the dependence of electrical load on ambient temperature, while the other simulates the stochastic, quasi-fluctuating temperature dynamics.

1. Introduction

Due to the steadily increasing storage resources and computational power of computers, huge amounts of data can be accumulated and stored, in both raw and digitized formats. An immediate question then arises: what should be done with these data besides storing them? Extracting new knowledge from data is an appealing answer, and the concepts of Data Mining (DM) [1,2], Big Data (BD) [3] and Data Science (DS) [4] were formulated and developed by researchers accordingly.
The tempting goal of extracting new knowledge from data inevitably leads to verbal or formal (mathematical) modeling of the “expected” knowledge. Any such model has predictive properties, which can be realized only when the values of its quantitative characteristics (parameters) are known. Data are a fundamental component of the three concepts above: they are used to estimate the characteristics of a model via machine learning (ML) procedures, which in turn allows extracting new knowledge.
Unlike DM, BD, and DS, the concept of ML has a rich history of over 70 years as well as vast experience in solving numerous problems. The first publication in this field of research dates back to 1957; see [5]. The notion of empirical risk, a key element of ML procedures, was introduced in 1970 in the monograph [6]. The method of potential functions for classification and recognition problems was also presented in 1970 in another monograph [7]. The modern concept of ML is based on the deterministic parametrization of models and estimates using data sets with postulated properties. The quality of estimation is characterized by empirical risk functions, and their minimization gives optimal estimates [8,9].
As a rule, the real problems solved by ML procedures are immersed in an uncertain environment. The data are acquired with inevitable errors, omissions or low reliability, while the design and parametrization of models is a non-formalizable, subjective process that depends on the individual knowledge of a researcher. Therefore, in mass applications of ML procedures, the level of uncertainty is quite high.
All these circumstances indicate that uncertainty must somehow be compensated for. A general trend here is to describe parametrized models and data stochastically. This means that the model parameters are assumed to be random (appropriately randomized) and the data are assumed to contain random errors. Machine learning procedures with these properties belong to the class of randomized machine learning (RML) procedures. Their difference from conventional ML procedures is that optimal estimates are constructed not for the parameters themselves but for the probability density functions (PDFs) of the random parameters and the PDFs of the worst-case random errors in the data. In entropy-based RML procedures, the functional of generalized information entropy [10] is used as the optimality criterion for these estimates.
The core of an RML procedure is a parametrized predictive model designed for simulating the temporal (or spatiotemporal) evolution of the process under study. Therefore, such a model belongs to the class of dynamic models. Parametric dynamic regression models (PDRMs) are the most widespread representatives of this class; in them, the current state of the model is determined by its past states on a certain time interval [11,12]. A formal image of PDRMs is difference equations, in the general case of the $p$-th order [13]. Most applications are described by linear PDRMs. In particular, they naturally occur in many problems of macroeconomic modeling and forecasting, e.g., time series analysis of economic indices [14], adequacy analysis of PDRMs [15], and prediction of exchange rates [16]. Linear PDRMs are effective enough for short-term forecasting, yet they cause significant errors on long forecasting horizons. Therefore, attempts to improve forecasts by introducing various nonlinearities into PDRMs are quite natural. The monograph [17] is dedicated to a general approach to the formation and use of nonlinear PDRMs. However, applications require a more “personalized” approach to choosing the most useful and effective nonlinearity. On this pathway it seems fruitful, e.g., to forecast exchange rates using logistic and exponential nonlinearities [18], or to predict the daily electrical load of a power system using periodic autoregressive models [19] or multidimensional time series [20].
Since forecasting is performed under uncertainty, the resulting errors caused by some unaccounted factors are often compensated, if possible, by assigning some probabilities to forecasts [21,22]. The most common approach is the use of Bayes’ theorem on posterior probability. Let a parametrized conditional probability density function of data and an a priori probability density function of parameters be specified; then their normalized product will determine the posterior probability density function of the parameters under fixed data. Fundamental problems in this field of investigations are connected with the structural choice of the conditional and prior PDFs. Typically, Gaussian PDFs or their mixture are selected, and the mixture weights are estimated using retrospective data [23,24,25]. A similar approach was adopted in applications: population genetics [26], where the method of numerical approximation of posterior PDFs was developed; the interaction between the financial sector of the economy and the labor market [27], where the Metropolis–Hastings algorithm was used to estimate the parameters of the above PDFs; population dynamics [28], where a hierarchy of Bayesian models was designed to predict fertility, mortality and migration rates. Probabilistic forecasts are constructed by other methods, taking into account the specifics of applications. In meteorology, retrospective weather forecasts are accumulated for estimating the PDFs; subsequently, these PDFs are used for short-term forecasting [29,30,31,32]. A rather interesting procedure is to form a probabilistic forecast as a mixture of forecasts obtained by different methods [33].
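For reference, the posterior construction just described can be written compactly (our notation, not that of the cited works):
$p(\theta \mid D) = \dfrac{p(D \mid \theta)\, p(\theta)}{\int p(D \mid \theta')\, p(\theta')\, d\theta'},$
where $p(D \mid \theta)$ is the parametrized conditional PDF of the data $D$ and $p(\theta)$ is the prior PDF of the parameters; the denominator is the normalizing constant.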
In this paper, we propose a fundamentally different forecasting method—the so-called entropy-randomized forecasting (ERF). In accordance with this method, an ensemble of random forecasts is generated by a predictive dynamic regression model (PDRM) with random input and parameters. The corresponding probabilistic characteristics, namely the probability density functions, are determined using the entropy randomized machine learning procedure. The ensembles of forecasting trajectories are constructed by the sampling of the entropy-optimal PDFs.
The proposed method is adopted for randomized prediction of the daily electrical load of a regional power system. A hierarchical randomized dynamic regression model that describes the dependence of the load on the ambient temperature is constructed. The temporal evolution of the ambient temperature is represented by an oscillatory second-order dynamic regression model with a random parameter and a random input. The results of randomized learning of this model on the GEFCom2014 dataset [34] are given. A randomized forecasting technology is suggested and its adequacy is investigated depending on the length of forecasting horizon.

2. Procedure of Entropy-Randomized Forecasting

Randomization, i.e., imparting artificial and rationally organized random properties to naturally nonrandom events, indicators or methods, is a fairly common technique that yields a positive effect. There are many examples in various fields of science, management and economics: randomized numerical optimization methods [35,36]; mixed (random) strategies of trading on a stock exchange [37]; randomized forecasting of population dynamics [38]; vibration control of industrial processes [39]. As a result of randomization, nonrandom objects gain artificial stochastic properties with probabilistic characteristics that are optimal in a chosen sense. The question of appropriate quantitative characteristics of optimality has always been controversial and ambiguous; it requires arguments that somehow reflect the important specifics of the randomized object. In particular, a fundamental feature of forecasting procedures is uncertainty in the data, the predictive models, the methods for generating forecasts, etc.
In what follows, information entropy [40] will be used as a characteristic of uncertainty. In the works [41,42,43], it was demonstrated using the first law of thermodynamics that entropy is a natural functional for describing processes of universal evolution. Moreover, in accordance with the second law of thermodynamics, entropy maximization determines the best state of an evolutionary process under the worst-case external disturbance (maximum uncertainty). Another useful quality of information entropy concerns measurement errors and other types of errors, which are important attributes of data: when such errors are treated in terms of information entropy, the probabilistic characteristics of the noises exerting the worst-case impact on forecasting procedures can be estimated in explicit form.
The technology of entropy-randomized forecasting consists of the following stages. At the first stage, a predictive randomized model (PRM) of the studied object is formed and parametrized. A PRM transforms real data into a model output. In the general case, these transformations are assumed to be dynamic, i.e., the model output observed at a time instant n depends on the states observed over some past interval. The PRM parameters are assumed to be random variables of the interval type, and their probabilistic properties are characterized by the corresponding PDFs.
The second stage of the technology under consideration—randomized machine learning (more specifically, its entropy version)—is intended to estimate the PDFs. At this stage, the estimates of the PDFs are calculated using learning data sets and also a learning algorithm in the form of a functional entropy-linear programming problem.
At the third stage, the optimized PRM (with the entropy-optimal PDFs) is tested using a test data set and accepted quantitative characteristics of the quality of learning. The optimized PRM actually generates an ensemble of random trajectories, vectors, or events with the entropy-optimal values of their parameters.
The learned and tested PRMs serve for forecasting. In this case, the ensembles of random forecasted trajectories generated by the entropy-optimal PRMs are used to calculate their numerical characteristics such as mean trajectories, variance curves, median trajectories, the PDF evolution of forecasted trajectories, etc.
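The following Python skeleton is our own compact illustration of how these stages fit together; all function names and numbers are hypothetical placeholders, not the authors' software:

```python
import numpy as np

# Hypothetical three-stage ERF pipeline; the stage functions are placeholders
# standing in for the learning, testing and forecasting machinery of the paper.
rng = np.random.default_rng(0)

def learn_pdfs(train):
    """Stage 2 stand-in: would solve the entropy problem for the PDFs;
    here it just records the data needed by the later stages."""
    return {"train": train}

def generate_ensemble(pdfs, horizon, n_traj=100):
    """Stages 3-4 stand-in: sample an ensemble of random trajectories."""
    base = pdfs["train"].mean()
    return base + rng.normal(0.0, 0.05, size=(n_traj, horizon))

data = np.linspace(0.4, 0.6, 48)
pdfs = learn_pdfs(data[:24])                  # stage 2: learning
test_ens = generate_ensemble(pdfs, 24)        # stage 3: testing ensemble
err = np.sum((test_ens.mean(0) - data[24:]) ** 2) / (
      np.sum(test_ens.mean(0) ** 2) + np.sum(data[24:] ** 2))
print("test error:", round(err, 4))
forecast_ens = generate_ensemble(pdfs, 72)    # final stage: forecast ensemble
print("mean forecast trajectory:", forecast_ens.mean(axis=0)[:3])
```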

3. Randomized Dynamic Regression Models with Random Input and Parameters

Randomized dynamic regression models (RDRMs) form a class of dynamic models with random parameters that describe a parametrized dependence of the object’s state at a given time instant on external factors and its states at some past time instants.
The structures of such models are designed on the basis of existing knowledge and hypotheses about the properties of an object, which often turn out to be rather inaccurate. Moreover, the external factors themselves change over time and therefore must be predicted in order to model the object's dynamics. Reliable information on the actually measured impacts driving the temporal evolution of external factors is often unavailable. All this indicates the presence of uncertainty, both in the development and in the further use of such models. In [10], a method for reducing the influence of uncertainty based on the randomization of models (including the class of RDRMs) was proposed. In the latter case, this method extends the idea of randomization to the modeling of external factors and their evolution.
The structure of the RDRM is shown in Figure 1. It consists of a model of the main object (RDRM-O) with random parameters $a \in R^p$ and a model of external factors (RDRM-F) with random parameters $b \in R^s$ and a random input $\zeta \in R^q$. The states of the object and of its model belong to the vector space $R^m$: $\hat{x}[n]$ are the state vectors of the object and $x[n] \in R^m$ are the state vectors of RDRM-O. The external factors are characterized by the vector $\hat{y}[n] \in R^q$, while the changes in the state of RDRM-F over time are described by the vector $y[n] \in R^q$. The variable $n$ denotes discrete time taking integer values on the interval $\mathcal{L} = [n^-, n^+]$.
Consider the linear version of RDRM-O. Its state $x[n]$ at a time instant $n$ changes under the influence of $p$ retrospective states $x[n-1], \dots, x[n-p]$ and measurable external factors $z[n] \in R^q$. The corresponding equation has the form
$x[n] = X(n, p)\,A(p) + A^{(p+1)}\,z[n],$
with the following notations:
  • $A(p) = \left[A_1, \dots, A_p\right]$ as the block column vector of parameters, where $A_i$ is a random matrix of dimensions $(m \times m)$ with elements of the interval type, i.e., $A_i \in \mathcal{A}_i = [A_i^-, A_i^+]$, $i = \overline{1, p}$;
  • $A^{(p+1)}$ as a matrix of dimensions $(m \times q)$ with random elements of the interval type, i.e., $A^{(p+1)} \in \mathcal{A}^{(p+1)} = [A^{(p+1)-}, A^{(p+1)+}]$;
  • $X(n, p) = \left[x[n-1], \dots, x[n-p]\right]$ as the block row vector of $p$ retrospective states.
The probabilistic properties of the block vector $A(p)$ and the matrix $A^{(p+1)}$ are characterized by a joint PDF $P(A(p))$ and a PDF $F(A^{(p+1)})$, respectively.
The state of RDRM-O is assumed to be measurable at each time instant $n$ and to contain an additive noise $\mu[n]$:
$v[n] = x[n] + \mu[n].$
The random vectors $\mu[n]$ are of the interval type, i.e.,
$\mu[n] \in \mathcal{M}_n = [\mu_n^-, \mu_n^+],$
with a PDF $M_n(\mu[n])$. The noise vectors at different time instants are assumed to be statistically independent.
Consider the linear version of RDRM-F, which has a similar structure described by the equation
$y[n] = Y(n, s)\,B(s) + \zeta[n]$
with the following notations:
  • $B(s) = \left[B_1, \dots, B_s\right]$ as a block column vector formed by matrices $B_i$ of dimensions $(q \times q)$ with random elements of the interval type, i.e., $B_i \in \mathcal{B}_i = [B_i^-, B_i^+]$, $i = \overline{1, s}$;
  • $Y(n, s) = \left[y[n-1], \dots, y[n-s]\right]$ as a block row vector.
The probabilistic properties of the parameters are characterized by a continuously differentiable PDF $W(B(s))$.
The random input vector $\zeta[n]$ is of the interval type, i.e.,
$\zeta[n] \in \mathcal{E}_n = [\zeta_n^-, \zeta_n^+],$
with a continuously differentiable PDF $Q_n(\zeta[n])$. The vectors $\zeta[n]$ at different time instants are statistically independent.
By analogy with RDRM-O, the state of RDRM-F is assumed to be measurable at each time instant $n$ and to contain an additive noise $\xi[n]$:
$z[n] = y[n] + \xi[n].$
The random vectors $\xi[n]$ are of the interval type, i.e.,
$\xi[n] \in \Xi_n = [\xi_n^-, \xi_n^+],$
with a continuously differentiable PDF $G_n(\xi[n])$. The noise vectors at different time instants are assumed to be statistically independent.
Thus, in the RDRM (RDRM-O and RDRM-F), the unknown characteristics are the PDFs $P(A(p))$, $F(A^{(p+1)})$ and $W(B(s))$ of the model parameters and the PDFs $M_n(\mu[n])$, $Q_n(\zeta[n])$ and $G_n(\xi[n])$ of the measurement noises, $n \in \mathcal{L}$.
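To make the interplay of the two layers concrete, here is a toy forward simulation (our sketch; the dimensions, intervals and the single random draw of the parameters are arbitrary choices, not taken from the paper):

```python
import numpy as np

# RDRM-F generates the external factor y[n]; RDRM-O consumes its noisy
# measurement z[n]. Parameters are drawn once from toy intervals.
rng = np.random.default_rng(0)

m, q, p, s = 2, 1, 2, 2          # state dim, factor dim, model orders
A = [rng.uniform(-0.5, 0.5, (m, m)) for _ in range(p)]   # random matrices A_i
A_in = rng.uniform(0.0, 1.0, (m, q))                     # random A^(p+1)
B = [rng.uniform(-0.9, 0.9, (q, q)) for _ in range(s)]   # random matrices B_i

x_hist = [np.zeros(m), np.zeros(m)]   # x[n-1], x[n-2]
y_hist = [np.ones(q), np.ones(q)]     # y[n-1], y[n-2]

for n in range(10):
    zeta = rng.uniform(-0.1, 0.1, q)                 # random input of RDRM-F
    y = sum(B[i] @ y_hist[i] for i in range(s)) + zeta
    z = y + rng.uniform(-0.1, 0.1, q)                # noisy factor measurement
    x = sum(A[i] @ x_hist[i] for i in range(p)) + A_in @ z
    v = x + rng.uniform(-0.1, 0.1, m)                # noisy state measurement
    y_hist = [y] + y_hist[:-1]
    x_hist = [x] + x_hist[:-1]
    print(n, v)
```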

4. Models of Learning Data Sets

The desired PDFs (see the previous section) are estimated using learning data sets that are obtained on a learning interval $n \in \mathcal{L} = [n^-, n^+]$ and are consistent with the RDRM.
Consider RDRM-O. On the learning interval,
$x[n^-] = X(n^-, p)\,A(p) + A^{(p+1)}\,z[n^-],$
$x[n^- + 1] = X(n^- + 1, p)\,A(p) + A^{(p+1)}\,z[n^- + 1],$
$\vdots$
$x[n^+] = X(n^+, p)\,A(p) + A^{(p+1)}\,z[n^+].$
The observable states of RDRM-O on the learning interval $\mathcal{L}$ represent the collection of vectors
$v[n] = x[n] + \mu[n], \quad n = \overline{n^-, n^+}.$
Hence, the learning data set consists of the data on the retrospective states of the object,
$\hat{X}(n^-, p),\; \hat{X}(n^- + 1, p),\; \dots,\; \hat{X}(n^+, p),$
and the data on the observable current states,
$\hat{v}[n^-], \dots, \hat{v}[n^+], \qquad \hat{z}[n^-], \dots, \hat{z}[n^+].$
Consider RDRM-F. On the learning interval,
$y[n^-] = Y(n^-, s)\,B(s) + \zeta[n^-],$
$y[n^- + 1] = Y(n^- + 1, s)\,B(s) + \zeta[n^- + 1],$
$\vdots$
$y[n^+] = Y(n^+, s)\,B(s) + \zeta[n^+].$
The observable states of RDRM-F on the learning interval $\mathcal{L}$ represent the collection of vectors
$z[n] = y[n] + \xi[n], \quad n = \overline{n^-, n^+}.$
Hence, the learning data set consists of the data on the retrospective states of the factors,
$\hat{Y}(n^-, s),\; \hat{Y}(n^- + 1, s),\; \dots,\; \hat{Y}(n^+, s),$
and the data on the observable current states,
$\hat{z}[n^-], \dots, \hat{z}[n^+].$
Thus, the learning procedure of the RDRM involves three data sets, (18), (21) and (22).
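For a scalar series, assembling the retrospective collections above amounts to building lagged rows from the observed states. A minimal sketch (ours, not the paper's code):

```python
import numpy as np

# Build the rows X_hat(n, p) = (v[n-1], ..., v[n-p]) for each n in the
# learning interval, from a scalar series of observed states v_hat[n].
def lag_blocks(v, p):
    return np.array([[v[n - i] for i in range(1, p + 1)]
                     for n in range(p, len(v))])

v_hat = np.array([0.50, 0.10, 0.06, 0.02, 0.05, 0.03])  # toy observed states
print(lag_blocks(v_hat, 2))   # row for each n: (v[n-1], v[n-2])
```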

5. Algorithm of Randomized Machine Learning

The entropy version [10] of RML algorithms is used for estimating the PDFs of the model parameters and measurement noises of RDRM-O and RDRM-F. For RDRM-O, the corresponding algorithm has the form
$H_O[P, F, \{M_n\}] = -\int_{\mathcal{A}} P(A(p)) \ln P(A(p))\, dA(p) - \int_{\mathcal{A}^{(p+1)}} F(A^{(p+1)}) \ln F(A^{(p+1)})\, dA^{(p+1)} - \sum_{n=n^-}^{n^+} \int_{\mathcal{M}_n} M_n(\mu[n]) \ln M_n(\mu[n])\, d\mu[n] \;\to\; \max$
subject to the following constraints:
-
the normalization conditions of the PDFs given by
$\int_{\mathcal{A}} P(A(p))\, dA(p) = 1, \quad \int_{\mathcal{A}^{(p+1)}} F(A^{(p+1)})\, dA^{(p+1)} = 1, \quad \int_{\mathcal{M}_n} M_n(\mu[n])\, d\mu[n] = 1, \quad n = \overline{n^-, n^+};$
-
the empirical balances given by
$\int_{\mathcal{A}} P(A(p))\, \hat{X}(n, p)\,A(p)\, dA(p) + \int_{\mathcal{A}^{(p+1)}} F(A^{(p+1)})\, A^{(p+1)} \hat{z}[n]\, dA^{(p+1)} + \int_{\mathcal{M}_n} M_n(\mu[n])\, \mu[n]\, d\mu[n] = \hat{v}[n], \quad n = \overline{n^-, n^+}.$
Please note that the empirical balances represent a system of $(n^+ - n^-)$ blocks composed of $m$ equations each. With each block, an $m$-dimensional vector of Lagrange multipliers $\theta(n)$ is associated. This optimization problem belongs to the class of entropy-linear programming problems of the Lyapunov type [44]. It has an analytic solution parametrized by the Lagrange multipliers:
$P^*(A(p)) = \dfrac{\exp\left(-\sum_{n=n^-}^{n^+} \langle \theta(n), \hat{X}(n, p)\,A(p) \rangle\right)}{\mathcal{P}(\theta)},$
$F^*(A^{(p+1)}) = \dfrac{\exp\left(-\sum_{n=n^-}^{n^+} \langle \theta(n), A^{(p+1)} \hat{z}[n] \rangle\right)}{\mathcal{F}(\theta)},$
$M_n^*(\mu[n]) = \dfrac{\exp\left(-\langle \theta(n), \mu[n] \rangle\right)}{\mathcal{M}_n(\theta(n))}, \quad n = \overline{n^-, n^+}.$
In the above formulas,
$\mathcal{P}(\theta) = \int_{\mathcal{A}} \exp\left(-\sum_{n=n^-}^{n^+} \langle \theta(n), \hat{X}(n, p)\,A(p) \rangle\right) dA(p),$
$\mathcal{F}(\theta) = \int_{\mathcal{A}^{(p+1)}} \exp\left(-\sum_{n=n^-}^{n^+} \langle \theta(n), A^{(p+1)} \hat{z}[n] \rangle\right) dA^{(p+1)},$
$\mathcal{M}_n(\theta(n)) = \int_{\mathcal{M}_n} \exp\left(-\langle \theta(n), \mu[n] \rangle\right) d\mu[n], \quad n = \overline{n^-, n^+}.$
The matrix of Lagrange multipliers $\theta = [\theta(n^-), \dots, \theta(n^+)]$ is determined by solving the balance equations
$\dfrac{1}{\mathcal{P}(\theta)} \int_{\mathcal{A}} \exp\left(-\sum_{n=n^-}^{n^+} \langle \theta(n), \hat{X}(n, p)\,A(p) \rangle\right) \hat{X}(n, p)\,A(p)\, dA(p) + \dfrac{1}{\mathcal{F}(\theta)} \int_{\mathcal{A}^{(p+1)}} \exp\left(-\sum_{n=n^-}^{n^+} \langle \theta(n), A^{(p+1)} \hat{z}[n] \rangle\right) A^{(p+1)} \hat{z}[n]\, dA^{(p+1)} + \dfrac{1}{\mathcal{M}_n(\theta(n))} \int_{\mathcal{M}_n} \exp\left(-\langle \theta(n), \mu[n] \rangle\right) \mu[n]\, d\mu[n] = \hat{x}[n], \quad n = \overline{n^-, n^+}.$
From (25)–(27) it follows that the PDFs $P^*(A(p))$ and $F^*(A^{(p+1)})$ of the model parameters of RDRM-O and the PDFs $M_n^*(\mu[n])$, $n = \overline{n^-, n^+}$, of the measurement noises are found using the retrospective learning data sets $\hat{X}(n^-, p), \hat{X}(n^- + 1, p), \dots, \hat{X}(n^+, p)$, the current state data sets $\hat{x}[n^-], \dots, \hat{x}[n^+]$ and the data sets $\hat{z}[n^-], \dots, \hat{z}[n^+]$ generated by RDRM-F.
To obtain the latter collections, the RML algorithm is applied to estimate the PDFs of the model parameters and measurement noises of RDRM-F. In accordance with [10],
$H_F[W, \{Q_n\}, \{G_n\}] = -\int_{\mathcal{B}} W(B(s)) \ln W(B(s))\, dB(s) - \sum_{n=n^-}^{n^+} \int_{\mathcal{E}_n} Q_n(\zeta[n]) \ln Q_n(\zeta[n])\, d\zeta[n] - \sum_{n=n^-}^{n^+} \int_{\Xi_n} G_n(\xi[n]) \ln G_n(\xi[n])\, d\xi[n] \;\to\; \max$
subject to the following constraints:
-
the normalization conditions of the PDFs given by
$\int_{\mathcal{B}} W(B(s))\, dB(s) = 1, \quad \int_{\mathcal{E}_n} Q_n(\zeta[n])\, d\zeta[n] = 1, \quad \int_{\Xi_n} G_n(\xi[n])\, d\xi[n] = 1, \quad n = \overline{n^-, n^+};$
-
the empirical balances given by
$\int_{\mathcal{B}} W(B(s))\, \hat{Y}(n, s)\,B(s)\, dB(s) + \int_{\mathcal{E}_n} Q_n(\zeta[n])\, \zeta[n]\, d\zeta[n] + \int_{\Xi_n} G_n(\xi[n])\, \xi[n]\, d\xi[n] = \hat{z}[n], \quad n = \overline{n^-, n^+}.$
This problem is from the same class as (25)–(27). It has the following analytic solution in terms of the Lagrange multipliers $\eta = [\eta(n^-), \dots, \eta(n^+)]$:
$W^*(B(s)) = \dfrac{\exp\left(-\sum_{n=n^-}^{n^+} \langle \eta(n), \hat{Y}(n, s)\,B(s) \rangle\right)}{\mathcal{W}(\eta)},$
$Q_n^*(\zeta[n]) = \dfrac{\exp\left(-\langle \eta(n), \zeta[n] \rangle\right)}{\mathcal{Q}_n(\eta(n))},$
$G_n^*(\xi[n]) = \dfrac{\exp\left(-\langle \eta(n), \xi[n] \rangle\right)}{\mathcal{G}_n(\eta(n))}, \quad n = \overline{n^-, n^+}.$
In the above formulas,
$\mathcal{W}(\eta) = \int_{\mathcal{B}} \exp\left(-\sum_{n=n^-}^{n^+} \langle \eta(n), \hat{Y}(n, s)\,B(s) \rangle\right) dB(s),$
$\mathcal{Q}_n(\eta(n)) = \int_{\mathcal{E}_n} \exp\left(-\langle \eta(n), \zeta[n] \rangle\right) d\zeta[n],$
$\mathcal{G}_n(\eta(n)) = \int_{\Xi_n} \exp\left(-\langle \eta(n), \xi[n] \rangle\right) d\xi[n], \quad n = \overline{n^-, n^+}.$
The matrix of Lagrange multipliers $\eta$ is determined by solving the balance equations
$\dfrac{1}{\mathcal{W}(\eta)} \int_{\mathcal{B}} \exp\left(-\sum_{n=n^-}^{n^+} \langle \eta(n), \hat{Y}(n, s)\,B(s) \rangle\right) \hat{Y}(n, s)\,B(s)\, dB(s) + \dfrac{1}{\mathcal{Q}_n(\eta(n))} \int_{\mathcal{E}_n} \exp\left(-\langle \eta(n), \zeta[n] \rangle\right) \zeta[n]\, d\zeta[n] + \dfrac{1}{\mathcal{G}_n(\eta(n))} \int_{\Xi_n} \exp\left(-\langle \eta(n), \xi[n] \rangle\right) \xi[n]\, d\xi[n] = \hat{z}[n], \quad n = \overline{n^-, n^+}.$
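In scalar cases, these balance systems reduce to coupled equations for the multipliers that can be solved numerically. The following toy sketch (our illustration under simplified assumptions: a first-order scalar model x[n] = a x[n-1] + mu[n] and synthetic data) mirrors that structure with SciPy:

```python
import numpy as np
from scipy.optimize import fsolve

# Entropy-optimal PDFs on intervals are truncated exponentials; the balance
# equations below match the structure of the Lagrange-multiplier system.
a_lo, a_hi = 0.05, 0.15      # interval for parameter a
m_lo, m_hi = -0.1, 0.1       # interval for noise mu[n]

def trunc_exp_mean(l, lo, hi):
    """Mean of the PDF p(u) proportional to exp(-l*u) on [lo, hi]."""
    if abs(l) < 1e-8:
        return 0.5 * (lo + hi)   # uniform limit as l -> 0
    z = np.exp(-l * lo) - np.exp(-l * hi)
    num = np.exp(-l * lo) * (l * lo + 1) - np.exp(-l * hi) * (l * hi + 1)
    return num / (l * z)

x = np.array([0.50, 0.10, 0.06, 0.02, 0.05, 0.03])  # synthetic observations

def balances(theta):
    l = np.dot(theta, x[:-1])              # l(theta) = sum_n theta_n * x[n-1]
    Ea = trunc_exp_mean(l, a_lo, a_hi)     # entropy-optimal mean of a
    Emu = np.array([trunc_exp_mean(t, m_lo, m_hi) for t in theta])
    return Ea * x[:-1] + Emu - x[1:]       # one empirical balance per n

theta = fsolve(balances, np.zeros(len(x) - 1))
print("multipliers:", theta)
print("max residual:", np.abs(balances(theta)).max())
```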

6. Entropy-Randomized Forecasting of Daily Electrical Load of Power System

The daily electrical load $L$ of a power system depends on many factors. The analysis below is restricted to one of the most significant external factors, the ambient temperature $T$. The daily temperature variations fluctuate [45,46]. These fluctuations affect the electrical load, but with some time delay due to the inertia of the power network supplying electrical energy from the generators to consumers.
(1). Dynamic Regression Model.   
In accordance with the general structure of the RDRM (see Section 3), the electrical load model (the $L$–$T$ model) describes the dynamic relationship between electrical load and ambient temperature, while the ambient temperature model (the $T$–$\xi$ model) describes the daily dynamics of ambient temperature. There exist quite a few versions of the $L$–$T$ model, albeit all static, i.e., describing the relationship between electrical load and ambient temperature at the current time instant [47]. The daily temperature dynamics fluctuate, and such fluctuations are described, in particular, by the periodic autoregressive model [48].
Please note that the effect of ambient temperature on electrical load is dynamic, i.e., the change in load due to temperature at a given time instant depends on the temperature at previous time instants. A similar property applies to the ambient temperature fluctuations themselves.
Therefore, following the general randomized approach, the $L$–$T$ model is designed as a first-order dynamic regression model with random parameters, while the $T$–$\xi$ model is designed as a second-order dynamic regression with a random parameter and a random input $\xi$. The $L$–$T$–$\xi$ model is then the composition of the two models above.
In the class of linear models, the randomized dynamic regression load–temperature model (the $L$–$T$ model) of the first order can be written in the form
$L[n] = a\,L[n-1] + b\,T[n], \qquad v[n] = L[n] + \mu[n], \qquad n = \overline{n^-, n^+},$
where the random independent parameters $a$ and $b$ take values within the intervals
$a \in \mathcal{A} = [a^-, a^+], \qquad b \in \mathcal{B} = [b^-, b^+].$
Their probabilistic properties are characterized by PDFs $P(a)$ and $F(b)$ defined on the sets $\mathcal{A}$ and $\mathcal{B}$, respectively. The random noise $\mu[n]$, which simulates electrical load measurement errors, is of the interval type as well. In the general case, the intervals may have different limits at each time instant, i.e.,
$\mu[n] \in \mathcal{M}_n = [\mu^-[n], \mu^+[n]],$
with PDFs $M_n(\mu[n])$, $n = \overline{n^-, n^+}$.
Consider the $T$–$\xi$ model. The fluctuating character of the daily temperature variations is described by the randomized dynamic regression model of the second order
$\tau[n] = c\,(2.1\,\tau[n-1] - 1.1\,\tau[n-2]), \qquad T[n] = t + \tau[n] + \xi[n],$
where $t$ is the mean daily temperature. The parameter $c$ is random and takes values within a given interval $c \in [c^-, c^+]$; its probabilistic properties are characterized by a PDF $W(c)$ defined on this interval.
Equation (36) contains random noises described by independent random variables $\xi[n]$; at each measurement $n$, their values may lie in different intervals, i.e.,
$\xi[n] \in \Xi_n = [\xi_n^-, \xi_n^+].$
The probabilistic properties of the random variable $\xi[n]$ are characterized by a PDF $Q_n(\xi[n])$, $n = \overline{n^-, n^+}$.
Thus, Equations (33) and (36), describing the electrical load dynamics in a power system, are characterized by the following PDFs:
  • the $L$–$T$ model, by the PDFs $P(a)$ and $F(b)$ of the model parameters and the PDFs $M_n(\mu[n])$ of the measurement noises, $n = \overline{n^-, n^+}$;
  • the $T$–$\xi$ model, by the PDF $W(c)$ of the model parameter and the PDFs $Q_n(\xi[n])$ of the measurement noises, $n = \overline{n^-, n^+}$.
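A toy forward simulation of this two-layer model (our sketch, with a single random draw of the parameters; the ranges are borrowed from the learning setup described later in this section) looks as follows:

```python
import numpy as np

# Simulate the L-T-xi model of Eqs. (33) and (36): temperature fluctuations
# tau[n] follow the second-order regression, the load L[n] follows the
# first-order regression driven by the noisy temperature T[n].
rng = np.random.default_rng(2)
a = rng.uniform(0.05, 0.15)      # load autoregression parameter
b = rng.uniform(0.50, 1.00)      # temperature-to-load gain
c = rng.uniform(0.75, 0.85)      # fluctuation parameter
t_mean = 0.5                     # mean daily temperature (normalized units)

L, tau = [0.5], [0.05, 0.0]      # initial load and two fluctuation lags
for n in range(2, 26):
    tau_n = c * (2.1 * tau[-1] - 1.1 * tau[-2])
    T_n = t_mean + tau_n + rng.uniform(-0.1, 0.1)    # temperature noise xi[n]
    L.append(a * L[-1] + b * T_n)
    tau.append(tau_n)

v = np.array(L) + rng.uniform(-0.1, 0.1, len(L))     # load measurement noise mu[n]
print(np.round(v, 3))
```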
(2). Learning Data Set.   
For estimating the PDFs, the normalized real data from the GEFCom2014 dataset (see [34]) on daily electrical load variations $0 \le L_r^{(i)}[n] \le 1$, mean daily temperature variations $0 \le t_r^{(i)} \le 1$ and temperature deviations $0 \le \tau_r^{(i)}[n] \le 1$ from the mean daily value can be used. (Here normalization means reduction to the unit interval.)
The normalization procedure is performed in the following way:
$L_r^{(i)}[n] = \dfrac{\hat{L}_r^{(i)}[n] - \hat{L}_{min}^{(i)}}{\hat{L}_{max}^{(i)} - \hat{L}_{min}^{(i)}}, \qquad \tau_r^{(i)}[n] = \dfrac{\hat{\tau}_r^{(i)}[n] - \hat{\tau}_{min}^{(i)}}{\hat{\tau}_{max}^{(i)} - \hat{\tau}_{min}^{(i)}}, \qquad t_r^{(i)} = \dfrac{1}{n^+ - n^-} \sum_{n=n^-}^{n^+} \tau_r^{(i)}[n],$
where $\hat{L}_{min}^{(i)} = \min_n \hat{L}^{(i)}[n]$, $\hat{L}_{max}^{(i)} = \max_n \hat{L}^{(i)}[n]$, $\hat{\tau}_{min}^{(i)} = \min_n \hat{\tau}^{(i)}[n]$, $\hat{\tau}_{max}^{(i)} = \max_n \hat{\tau}^{(i)}[n]$.
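In code, this min-max normalization is straightforward (our reading of the formulas above, with toy data):

```python
import numpy as np

# Map loads and temperature deviations to [0, 1]; the mean daily temperature
# t_r is then the average of the normalized deviations.
def minmax(u):
    return (u - u.min()) / (u.max() - u.min())

L_hat = np.array([310.0, 295.0, 280.0, 305.0, 330.0, 350.0])   # toy hourly loads
tau_hat = np.array([18.0, 17.5, 17.0, 19.0, 21.0, 22.5])       # toy deviations

L_r, tau_r = minmax(L_hat), minmax(tau_hat)
t_r = tau_r.mean()
print(L_r.round(3), tau_r.round(3), round(t_r, 3))
```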
In accordance with (33) and (36), the model variables and the corresponding real data on the learning interval $n \in T_l$ are described by the vectors
$L^{(i)}(T_l) = \{L^{(i)}[1], \dots, L^{(i)}[24]\}, \qquad L_r^{(i)}(T_l) = \{L_r^{(i)}[1], \dots, L_r^{(i)}[24]\},$
$L^{(i)}(T_{l-1}) = \{L^{(i)}[0], \dots, L^{(i)}[23]\}, \qquad L_r^{(i)}(T_{l-1}) = \{L_r^{(i)}[0], \dots, L_r^{(i)}[23]\},$
$V^{(i)}(T_l) = \{v^{(i)}[1], \dots, v^{(i)}[24]\}, \qquad V_r^{(i)}(T_l) = \{v_r^{(i)}[1], \dots, v_r^{(i)}[24]\},$
$T^{(i)}(T_l) = \{\tau^{(i)}[1], \dots, \tau^{(i)}[24]\}, \qquad T_r^{(i)}(T_l) = \{\tau_r^{(i)}[1], \dots, \tau_r^{(i)}[24]\},$
$\widetilde{T}^{(i)}(T_{l-1}, T_{l-2}) = \{2.1\,\tau^{(i)}[0] - 1.1\,\tau^{(i)}[-1], \dots, 2.1\,\tau^{(i)}[23] - 1.1\,\tau^{(i)}[22]\},$
$\widetilde{T}_r^{(i)}(T_{l-1}, T_{l-2}) = \{2.1\,\tau_r^{(i)}[0] - 1.1\,\tau_r^{(i)}[-1], \dots, 2.1\,\tau_r^{(i)}[23] - 1.1\,\tau_r^{(i)}[22]\},$
$\mu^{(i)}(T_l) = \{\mu^{(i)}[1], \dots, \mu^{(i)}[24]\}, \qquad \xi^{(i)}(T_l) = \{\xi^{(i)}[1], \dots, \xi^{(i)}[24]\}.$
In terms of (39), the $L$–$T$ and $T$–$\xi$ models on the learning interval $T_l$ have the form
$L^{(i)}(T_l) = a\,L^{(i)}(T_{l-1}) + b\,T^{(i)}(T_l), \qquad V^{(i)}(T_l) = L^{(i)}(T_l) + \mu^{(i)}(T_l),$
$T^{(i)}(T_l) = c\,\widetilde{T}^{(i)}(T_{l-1}, T_{l-2}), \qquad T^{(i)}(T_l) = t + \widetilde{T}^{(i)}(T_l) + \xi^{(i)}(T_l).$
The random parameters take values within the intervals
$\mathcal{A} = [0.05,\, 0.15], \qquad \mathcal{B} = [0.5,\, 1.0], \qquad \mathcal{C} = [0.75,\, 0.85].$
The measurement noises take values within the intervals
$\mathcal{M}_n = [-0.1,\, 0.1], \qquad \Xi_n = [-0.1,\, 0.1].$
(3). Entropy-Optimal Probability Density Functions of Parameters and Noises.   
In accordance with the approach described in Section 5, for the $L$–$T$ model (33)–(35) the PDFs parametrized by the Lagrange multipliers $\theta^{(i)} = \{\theta_1^{(i)}, \dots, \theta_{24}^{(i)}\}$ have the form
$P_i^*(a, \theta^{(i)}) = \dfrac{l_r^{(i)}(\theta)\, \exp(-a\, l_r^{(i)}(\theta))}{\exp(-a^-\, l_r^{(i)}(\theta)) - \exp(-a^+\, l_r^{(i)}(\theta))},$
$F_i^*(b, \theta^{(i)}) = \dfrac{h_r^{(i)}(\theta)\, \exp(-b\, h_r^{(i)}(\theta))}{\exp(-b^-\, h_r^{(i)}(\theta)) - \exp(-b^+\, h_r^{(i)}(\theta))},$
$M_{i,n}^*(\mu[n]) = \dfrac{\theta_n^{(i)}\, \exp(-\theta_n^{(i)}\, \mu[n])}{\exp(-\mu^-[n]\, \theta_n^{(i)}) - \exp(-\mu^+[n]\, \theta_n^{(i)})}, \qquad n = \overline{1, 24},$
where
$l_r^{(i)}(\theta) = \sum_{n=1}^{24} \theta_n\, L_r^{(i)}[n-1], \qquad h_r^{(i)}(\theta) = \sum_{n=1}^{24} \theta_n\, T_r^{(i)}[n].$
The Lagrange multipliers $\theta^{(i)}$ are calculated by solving the system of balance equations
$\mathcal{L}^{(i)}(\theta^{(i)}) + \mathcal{T}^{(i)}(\theta^{(i)}) + \mathcal{M}_n^{(i)}(\theta_n^{(i)}) = L_r^{(i)}[n], \qquad n = \overline{1, 24},$
where
$\mathcal{L}^{(i)}(\theta^{(i)}) = \dfrac{\exp(-a^-\, l_r^{(i)})\,(a^-\, l_r^{(i)} + 1) - \exp(-a^+\, l_r^{(i)})\,(a^+\, l_r^{(i)} + 1)}{l_r^{(i)}\left[\exp(-a^-\, l_r^{(i)}) - \exp(-a^+\, l_r^{(i)})\right]},$
$\mathcal{T}^{(i)}(\theta^{(i)}) = \dfrac{\exp(-b^-\, h_r^{(i)})\,(b^-\, h_r^{(i)} + 1) - \exp(-b^+\, h_r^{(i)})\,(b^+\, h_r^{(i)} + 1)}{h_r^{(i)}\left[\exp(-b^-\, h_r^{(i)}) - \exp(-b^+\, h_r^{(i)})\right]},$
$\mathcal{M}_n^{(i)}(\theta_n^{(i)}) = \dfrac{\exp(-\mu^-[n]\,\theta_n^{(i)})\,(\mu^-[n]\,\theta_n^{(i)} + 1) - \exp(-\mu^+[n]\,\theta_n^{(i)})\,(\mu^+[n]\,\theta_n^{(i)} + 1)}{\theta_n^{(i)}\left[\exp(-\mu^-[n]\,\theta_n^{(i)}) - \exp(-\mu^+[n]\,\theta_n^{(i)})\right]},$
with $l_r^{(i)} = l_r^{(i)}(\theta^{(i)})$ and $h_r^{(i)} = h_r^{(i)}(\theta^{(i)})$ for brevity.
Consider the $T$–$\xi$ model. The corresponding entropy-optimal PDFs parametrized by the Lagrange multipliers have the form
$W_i^*(c, \eta^{(i)}) = \dfrac{\tilde{h}_r^{(i)}(\eta)\, \exp(-c\, \tilde{h}_r^{(i)}(\eta))}{\exp(-c^-\, \tilde{h}_r^{(i)}(\eta)) - \exp(-c^+\, \tilde{h}_r^{(i)}(\eta))},$
$Q_{i,n}^*(\xi[n]) = \dfrac{\eta_n^{(i)}\, \exp(-\eta_n^{(i)}\, \xi[n])}{\exp(-\xi^-[n]\, \eta_n^{(i)}) - \exp(-\xi^+[n]\, \eta_n^{(i)})}, \qquad n = \overline{1, 24},$
where
$\tilde{h}_r^{(i)}(\eta) = \sum_{n=1}^{24} \eta_n \left(2.1\, T_r^{(i)}[n-1] - 1.1\, T_r^{(i)}[n-2]\right), \qquad q^{(i)}(\eta^{(i)}) = \sum_{n=1}^{24} \eta_n^{(i)}.$
The Lagrange multipliers $\eta^{(i)}$ are calculated by solving the system of balance equations
$D^{(i)}(\eta^{(i)}) + N^{(i)}(\eta^{(i)}) + K_n^{(i)}(\eta_n^{(i)}) = T_r^{(i)}[n], \qquad n = \overline{1, 24},$
where
$D^{(i)}(\eta^{(i)}) = \dfrac{\exp(-t^-\, q^{(i)})\,(t^-\, q^{(i)} + 1) - \exp(-t^+\, q^{(i)})\,(t^+\, q^{(i)} + 1)}{q^{(i)}\left[\exp(-t^-\, q^{(i)}) - \exp(-t^+\, q^{(i)})\right]},$
$N^{(i)}(\eta^{(i)}) = \dfrac{\exp(-c^-\, \tilde{h}_r^{(i)})\,(c^-\, \tilde{h}_r^{(i)} + 1) - \exp(-c^+\, \tilde{h}_r^{(i)})\,(c^+\, \tilde{h}_r^{(i)} + 1)}{\tilde{h}_r^{(i)}\left[\exp(-c^-\, \tilde{h}_r^{(i)}) - \exp(-c^+\, \tilde{h}_r^{(i)})\right]},$
$K_n^{(i)}(\eta_n^{(i)}) = \dfrac{\exp(-\xi^-[n]\,\eta_n^{(i)})\,(\xi^-[n]\,\eta_n^{(i)} + 1) - \exp(-\xi^+[n]\,\eta_n^{(i)})\,(\xi^+[n]\,\eta_n^{(i)} + 1)}{\eta_n^{(i)}\left[\exp(-\xi^-[n]\,\eta_n^{(i)}) - \exp(-\xi^+[n]\,\eta_n^{(i)})\right]},$
with $q^{(i)} = q^{(i)}(\eta^{(i)})$ and $\tilde{h}_r^{(i)} = \tilde{h}_r^{(i)}(\eta^{(i)})$ for brevity.
(4). Results of Model Learning.   
Using the available data on daily variations of electrical load and ambient temperature for the three days considered ($i = 1, 2, 3$; see below), the balance Equations (44), (45), (47) and (48) were formed. Their solution was determined by minimizing the quadratic residual between the left- and right-hand sides of the equations. Since the equations are significantly nonlinear, the resulting values of the Lagrange multipliers (see Table 1) correspond to a local minimum of the residual. All calculations were implemented in MATLAB; optimization was performed using the fsolve function.
Because the parameters of the $L$–$T$ model are independent, the joint PDFs $U_i^*(a, b) = P_i^*(a)\, F_i^*(b)$ of the parameters have the form
$U_1^*(a, b) = 53.09\, \exp(-9.72\,a)\, \exp(-0.06\,b),$
$U_2^*(a, b) = 55.49\, \exp(-6.04\,a)\, \exp(-0.58\,b),$
$U_3^*(a, b) = 65.81\, \exp(-6.09\,a)\, \exp(-0.81\,b),$
$(a, b) \in [0.05,\, 0.15] \times [0.5,\, 1.0], \qquad \mu[n] \in [-0.1,\, 0.1], \qquad n = \overline{1, 24}.$
Clearly, the PDFs are of exponential type. For i = 1 , the graphs are shown in Figure 2.
For the $T$–$\xi$ model, the PDFs of the parameter $c$ have the form
$W_1^*(c) = 13.90\, \exp(-0.41\,c), \qquad W_2^*(c) = 10.43\, \exp(-0.05\,c), \qquad W_3^*(c) = 11.65\, \exp(-0.19\,c),$
$c \in [0.75,\, 0.85], \qquad \xi[n] \in [-0.1,\, 0.1], \qquad n = \overline{1, 24}.$
For i = 1 , the graphs can be seen in Figure 3.
Thus, the randomized $L$–$T$–$\xi$ model generates random trajectories with the entropy-optimal PDFs of the model parameters and measurement noises:
$L[n] = a\,L[n-1] + b\,T[n]$ with $P^*(a)$, $F^*(b)$;
$v[n] = L[n] + \mu[n]$ with $M_n^*(\mu[n])$;
$\tau[n] = c\,(2.1\,\tau[n-1] - 1.1\,\tau[n-2])$ with $W^*(c)$;
$T[n] = t + \tau[n] + \xi[n]$ with $Q_n^*(\xi[n])$; for the days $i = \overline{1, 3}$.
The corresponding ensembles are generated by sampling the resulting PDFs of the parameters and noises using the acceptance-rejection (AR) method, also known as rejection sampling (RS); see [49]. During the calculations, 100 samples for each parameter and 100 samples for each noise were used; in other words, the ensemble consisted of $10^4$ trajectories.
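For the truncated-exponential PDFs above, the AR method needs only a uniform proposal and the density's maximum on the interval. A sketch (ours) for the day $i = 1$ parameters:

```python
import numpy as np

# Acceptance-rejection sampling from p(u) proportional to exp(-l*u) on [lo, hi],
# e.g. P*(a) with l = 9.72 on [0.05, 0.15], using a uniform proposal.
rng = np.random.default_rng(1)

def ar_sample(l, lo, hi, size):
    out = np.empty(size)
    bound = np.exp(-l * (lo if l > 0 else hi))   # max of the unnormalized density
    k = 0
    while k < size:
        u = rng.uniform(lo, hi)
        if rng.uniform(0.0, bound) <= np.exp(-l * u):
            out[k] = u
            k += 1
    return out

a = ar_sample(9.72, 0.05, 0.15, 100)   # 100 samples of parameter a
b = ar_sample(0.06, 0.50, 1.00, 100)   # 100 samples of parameter b
print(a.mean(), b.mean())
```

Pairing each parameter sample with each noise sample then yields the $10^4$ trajectories mentioned above.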
(5). Model Testing.
The adequacy of the model was analyzed by self- and cross-testing of the $L$–$T$ and $T$–$\xi$ models on the real load–temperature data for 3–5 July 2006 ($i = 1, 2, 3$). Self-testing means generating an ensemble of trajectories with the entropy-optimal parameters and noises for day $i$, calculating the mean (mean) and median (med) trajectories as well as the variance curve (std±) of the ensemble, and comparing the mean trajectory with its real electrical load and ambient temperature counterparts for the same day $i$. The quality of approximation is characterized by the relative errors
$\delta_L^{(i)} = \dfrac{\sum_{n=1}^{24} \left(L_{mean}^{(i)}[n] - L_r^{(i)}[n]\right)^2}{\sum_{n=1}^{24} \left(L_{mean}^{(i)}[n]\right)^2 + \sum_{n=1}^{24} \left(L_r^{(i)}[n]\right)^2}, \qquad i = \overline{1, 3},$
in electrical load and
$\delta_T^{(i)} = \dfrac{\sum_{n=1}^{24} \left(T_{mean}^{(i)}[n] - T_r^{(i)}[n]\right)^2}{\sum_{n=1}^{24} \left(T_{mean}^{(i)}[n]\right)^2 + \sum_{n=1}^{24} \left(T_r^{(i)}[n]\right)^2}, \qquad i = \overline{1, 3},$
in ambient temperature.
Cross-testing is a similar procedure in which the mean trajectories are compared with the real electrical load and ambient temperature counterparts for days $j \neq i$. The quality of approximation is characterized by the relative errors
$\delta_L^{(i,j)} = \dfrac{\sum_{n=1}^{24} \left(L_{mean}^{(i)}[n] - L_r^{(j)}[n]\right)^2}{\sum_{n=1}^{24} \left(L_{mean}^{(i)}[n]\right)^2 + \sum_{n=1}^{24} \left(L_r^{(j)}[n]\right)^2}, \qquad i = \overline{1, 3}, \; i \neq j,$
in electrical load and
$\delta_T^{(i,j)} = \dfrac{\sum_{n=1}^{24} \left(T_{mean}^{(i)}[n] - T_r^{(j)}[n]\right)^2}{\sum_{n=1}^{24} \left(T_{mean}^{(i)}[n]\right)^2 + \sum_{n=1}^{24} \left(T_r^{(j)}[n]\right)^2}, \qquad i = \overline{1, 3}, \; i \neq j,$
in ambient temperature.
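In code, this metric is a one-liner (our transcription of the formulas above):

```python
import numpy as np

# Relative error between a model (mean) trajectory and the real data.
def rel_error(model, real):
    model, real = np.asarray(model), np.asarray(real)
    return np.sum((model - real) ** 2) / (np.sum(model ** 2) + np.sum(real ** 2))

print(rel_error([0.5, 0.6, 0.7], [0.52, 0.58, 0.75]))  # small value = good fit
```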
Self-testing. For the $L$–$T$ model, the real ambient temperature data $T_r^{(i)}[n]$, the entropy-optimal PDFs $P_i^*(a)$ and $F_i^*(b)$ of the parameters $(a, b)$ and the PDFs $M_1^*(\mu[1]), \dots, M_{24}^*(\mu[24])$ of the measurement noises $\mu[n]$ were used. The ensembles $\mathcal{L}^{(i)}$ were generated by sampling the above PDFs. The mean trajectory $L_{mean}^{(i)}[n]$, the median trajectory $L_{med}^{(i)}[n]$ and the trajectories $L_{std\pm}^{(i)}[n]$ corresponding to the limits of the variance curve were found, and the errors $\delta_L^{(i)}$ were calculated. The resulting ensembles and relative errors $\delta_L^{(i)}$ for the three indicated days are shown in Figure 4.
The $T$–$\xi$ model was tested by generating the ensemble $\mathcal{T}^{(i)}$ of random trajectories $T^{(i)}[n]$, $n = \overline{1, 24}$, with the entropy-optimal PDFs $W^{(i)}(c)$ and $Q_1^*(\xi[1]), \dots, Q_{24}^*(\xi[24])$ through sampling. The mean trajectory $T_{mean}^{(i)}[n]$, the median trajectory $T_{med}^{(i)}[n]$ and the trajectories $T_{std\pm}^{(i)}[n]$ corresponding to the limits of the variance curve were calculated. The resulting ensembles and relative errors $\delta_T^{(i)}$ for the three days are shown in Figure 5.
Cross-testing. For cross-testing, the $L$–$T$ and $L$–$T$–$\xi$ models learned on the data for day $i$ were used, and their mean trajectories were compared with the data for days $j \neq i$. The resulting errors are combined in Table 2, Table 3 and Table 4.
(6). Randomized Prediction of N-Daily Load.
In the randomized prediction of the $N$-daily load, the $L$–$T$–$\xi$ model learned on the interval $T_l$ was used, namely the model with the entropy-optimal PDFs obtained from the real data for the first ($i = 1$) day.
The 1-day ($n \in [1, 24]$), 2-day ($n \in [1, 48]$) and 3-day ($n \in [1, 72]$) ensembles were constructed by sampling the above PDFs. For these ensembles, the mean trajectories $L_{mean}[n]$, the median trajectories $L_{med}[n]$ and the limiting trajectories $L_{std\pm}[n]$ of the variance curve were found. The forecast results were compared with the real data for 3–7 July 2006 ($i = \overline{1, 4}$). The forecasting quality was characterized by relative errors calculated similarly to (53) and (54).
The resulting 24-h, 48-h and 72-h randomized forecasts of electrical load and their probabilistic characteristics (the mean and median trajectories, the limiting trajectories of the variance curves) are presented in Figure 6. The errors, i.e., the deviations between the model forecasts and the real data, can be seen in Table 5.
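The reported curves are simple per-hour statistics of the forecast ensemble; a sketch (ours, with a toy ensemble):

```python
import numpy as np

# Per-hour summary statistics of a forecast ensemble: the mean and median
# trajectories and the mean +/- std limits of the "variance curve".
ens = np.random.default_rng(3).normal(0.6, 0.05, size=(10_000, 72))  # toy data
mean, med, std = ens.mean(axis=0), np.median(ens, axis=0), ens.std(axis=0)
L_std_minus, L_std_plus = mean - std, mean + std
print(mean[:3], med[:3], L_std_plus[:3])
```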

7. Conclusions

This article proposes a new forecasting approach based on generating not a single forecast, nor a set of forecasts under scenario values of the model parameters, nor forecasts with assigned probabilities, but an ensemble of random forecasts produced with the entropy-optimal model parameters and measurement noises.
For randomized forecasting, we propose a structure of predictive dynamic models that uses both real data and optimized noises. The latter are the source of the ensemble of predicted trajectories, which allows computing deterministic trajectories of its various numerical characteristics as well as probabilistic estimates.

Author Contributions

Conceptualization, Y.S.P.; Data curation, A.Y.P.; Methodology, Y.S.P., A.Y.P., Y.A.D. and D.S.; Software, A.Y.P. and Y.A.D.; Supervision, D.S.; Writing—original draft, Y.S.P., A.Y.P., Y.A.D. and D.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Russian Foundation for Basic Research (projects nos. 20-07-00223, 20-07-00683 and 20-07-00470).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Frawley, W.J.; Piatetsky-Shapiro, G.; Matheus, C.J. Knowledge discovery in databases: An overview. AI Mag. 1992, 13, 57. [Google Scholar]
  2. Witten, I.H.; Frank, E. Data Mining: Practical Machine Learning Tools and Techniques; Morgan Kaufmann: Burlington, MA, USA, 2005. [Google Scholar]
  3. Campbell, P. Editorial on special issue on big data: Community cleverness required. Nature 2008, 455, 1. [Google Scholar]
  4. Dhar, V. Data Science and Prediction. Commun. ACM 2013, 56, 64–73. [Google Scholar] [CrossRef]
  5. Rosenblatt, F. The Perceptron, a Perceiving and Recognizing Automaton Project Para; Cornell Aeronautical Laboratory: Buffalo, NY, USA, 1957. [Google Scholar]
  6. Tsypkin, Y.Z. Osnovi Teorii Obuchaiuschichsia Sistem (Foundations of the Theory of Learning Systems); Nauka: Moscow, Russia, 1970. [Google Scholar]
  7. Ayzerman, M.A.; Braverman, E.M.; Rozonoer, L.I. Metod Potencialnikh Funkcii v Teorii Obuchenia Mashin (Method of Potential Functions in the Theory of Machine Learning); Nauka: Moscow, Russia, 1970. [Google Scholar]
  8. Vapnik, V.N. Statistical Learning Theory; Wiley: New York, NY, USA, 1998. [Google Scholar]
  9. Bishop, C. Pattern Recognition and Machine Learning (Information Science and Statistics), 1st ed.; Springer: New York, NY, USA, 2006; Reprint in 2007. [Google Scholar]
  10. Popkov, Y.S.; Popkov, A.Y.; Dubnov, Y.A. Randomizirovannoe Mashinnoe Obichenie: Ot Empiricheskoi Veroiatnosti k Entropiinoi Randomizacii (Randomized Machine Learning: From Empirical Probability to Entropy Randomization); LENAND: Moscow, Russia, 2019. [Google Scholar]
  11. Friedman, J.; Hastie, T.; Tibshirani, R. The Elements of Statistical Learning; Springer Series in Statistics; Springer: Berlin, Germany, 2001; Volume 1. [Google Scholar]
  12. Aivazyan, S.A.; Mhitaryan, V.S. Prikladnaia Statistika i Osnovi Econometriki (Applied Statistics and Basics of Econometrics); Unity: Moscow, Russia, 1998.
  13. Tarassow, A. Forecasting U.S. money growth using economic uncertainty measures and regularisation techniques. Int. J. Forecast. 2019, 35, 443–457. [Google Scholar] [CrossRef]
  14. Marcellino, M.; Stock, J.H.; Watson, M.W. A comparison of direct and iterated multistep AR methods for forecasting macroeconomic time series. J. Econ. 2006, 135, 499–526. [Google Scholar] [CrossRef] [Green Version]
  15. Eitrheim, Ø.; Teräsvirta, T. Testing the adequacy of smooth transition autoregressive models. J. Econ. 1996, 74, 59–75. [Google Scholar] [CrossRef]
  16. Molodtsova, T.; Papell, D. Out-of-sample exchange rate predictability with Taylor rule fundamentals. J. Int. Econ. 2009, 77, 167–180. [Google Scholar] [CrossRef]
  17. Granger, C.; Teräsvirta, T. Modelling Non-Linear Economic Relationships; Oxford University Press: Oxford, UK, 1993. [Google Scholar]
  18. Wang, R.; Morley, B.; Stamatogiannis, M.P. Forecasting the exchange rate using nonlinear Taylor rule based models. Int. J. Forecast. 2019, 35, 429–442. [Google Scholar] [CrossRef]
  19. Bessec, M.; Fouquau, J. Short-run electricity load forecasting with combinations of stationary wavelet transforms. Eur. J. Oper. Res. 2018, 264, 149–164. [Google Scholar] [CrossRef]
  20. Clements, A.E.; Hurn, A.; Li, Z. Forecasting day-ahead electricity load using a multiple equation time series approach. Eur. J. Oper. Res. 2016, 251, 522–530. [Google Scholar] [CrossRef] [Green Version]
  21. Hong, T.; Fan, S. Probabilistic electric load forecasting: A tutorial review. Int. J. Forecast. 2016, 32, 914–938. [Google Scholar] [CrossRef]
  22. Wheatcroft, E. Interpreting the skill score form of forecast performance metrics. Int. J. Forecast. 2019, 35, 573–579. [Google Scholar] [CrossRef]
  23. Canale, A.; Ruggiero, M. Bayesian nonparametric forecasting of monotonic functional time series. Electron. J. Stat. 2016, 10, 3265–3286. [Google Scholar] [CrossRef]
  24. Dubnov, Y.; Boulytchev, A.V. Bayesian Identification of a Gaussian Mixture Model. Inform. Tekhn. Vychisl. Sist. 2017, 1, 101–114. [Google Scholar]
  25. Frazier, D.T.; Maneesoonthorn, W.; Martin, G.M.; McCabe, B.P. Approximate bayesian forecasting. Int. J. Forecast. 2019, 35, 521–539. [Google Scholar] [CrossRef] [Green Version]
  26. Beaumont, M.A.; Zhang, W.; Balding, D.J. Approximate Bayesian computation in population genetics. Genetics 2002, 162, 2025–2035. [Google Scholar]
  27. McAdam, P.; Warne, A. Euro area real-time density forecasting with financial or labor market frictions. Int. J. Forecast. 2019, 35, 580–600. [Google Scholar] [CrossRef] [Green Version]
  28. Alkema, L.; Gerland, P.; Raftery, A.; Wilmoth, J. The United Nations probabilistic population projections: An introduction to demographic forecasting with uncertainty. Foresight (Colch. VT) 2015, 2015, 19. [Google Scholar]
  29. Brier, G.W. Verification of forecasts expressed in terms of probability. Mon. Weather Rev. 1950, 78, 1–3. [Google Scholar] [CrossRef]
  30. Bröcker, J.; Smith, L.A. From ensemble forecasts to predictive distribution functions. Tellus A Dyn. Meteorol. Oceanogr. 2008, 60, 663–678. [Google Scholar] [CrossRef] [Green Version]
  31. Christensen, H.; Moroz, I.; Palmer, T. Evaluation of ensemble forecast uncertainty using a new proper score: Application to medium-range and seasonal forecasts. Q. J. R. Meteorol. Soc. 2015, 141, 538–549. [Google Scholar] [CrossRef]
  32. Gneiting, T.; Katzfuss, M. Probabilistic forecasting. Annu. Rev. Stat. Appl. 2014, 1, 125–151. [Google Scholar] [CrossRef]
  33. Lahiri, K.; Wang, J.G. Evaluating probability forecasts for GDP declines using alternative methodologies. Int. J. Forecast. 2013, 29, 175–190. [Google Scholar] [CrossRef]
  34. Hong, T.; Pinson, P.; Fan, S.; Zareipour, H.; Troccoli, A.; Hyndman, R.J. Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond. Int. J. Forecast. 2016, 32, 896–913. [Google Scholar] [CrossRef] [Green Version]
  35. Vidyasagar, M. Randomized Algorithms for Robust Controller Synthesis Using Statistical Learning Theory: A Tutorial Overview. Eur. J. Control 2001, 7, 287–310. [Google Scholar] [CrossRef]
  36. Granichin, O.N.; Polyak, B.T. Randomizirovannie Algoritmi Ocenivania i Optimizacii pri Pochti Proizvolnikh Pomekhakh (Randomized Algorithms of Estimation and Optimization under Almost Arbitrary Disturbances); Nauka: Moscow, Russia, 2003. [Google Scholar]
  37. Biondo, A.E.; Pluchino, A.; Rapisarda, A.; Helbing, D. Are random trading strategies more successful than technical ones? PLoS ONE 2013, 8, e68344. [Google Scholar] [CrossRef]
  38. Lutz, W.; Sanderson, W.; Scherbov, S. The end of world population growth. Nature 2001, 412, 543. [Google Scholar] [CrossRef] [Green Version]
  39. Tsirlin, A.M. Metody Usrednennoi Optimizatsii i Ikh Primenenie (Average Optimization Methods and Their Application); Fizmatlit: Moscow, Russia, 1997. [Google Scholar]
  40. Shannon, C.E. Communication theory of secrecy systems. Bell Labs Tech. J. 1949, 28, 656–715. [Google Scholar] [CrossRef]
  41. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  42. Rosenkrantz, R.D.; Jaynes, E.T. Papers on Probability, Statistics, and Statistical Physics; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1989. [Google Scholar]
  43. Jaynes, E.T. Probability Theory: The Logic of Science; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  44. Joffe, A.D.; Tihomirov, A.M. Teoriya Ekstremalnykh Zadach (Theory of Extreme Problems); Nauka: Moscow, Russia, 1974. [Google Scholar]
  45. Wang, P.; Liu, B.; Hong, T. Electric load forecasting with recency effect: A big data approach. Int. J. Forecast. 2016, 32, 585–597. [Google Scholar] [CrossRef] [Green Version]
  46. Gaillard, P.; Goude, Y.; Nedellec, R. Additive models and robust aggregation for GEFCom2014 probabilistic electric load and electricity price forecasting. Int. J. Forecast. 2016, 32, 1038–1050. [Google Scholar] [CrossRef]
  47. Fiedner, G. Hierarchical Forecasting: Issues and Use Guidelines. Ind. Manag. Data Syst. 2001, 101, 5–12. [Google Scholar] [CrossRef]
  48. Amaral, L.F.; Souza, R.C.; Stevenson, M. A smooth transition periodic autoregressive (STPAR) model for short-term load forecasting. Int. J. Forecast. 2008, 24, 603–615. [Google Scholar] [CrossRef]
  49. Von Neumann, J. 13. Various Techniques Used in Connection With Random Digits. Appl. Math. Ser. 1951, 12, 36–38. [Google Scholar]
Figure 1. Structure of the RDRM.
Figure 2. PDFs of the parameters and noises for i = 1 ($L$–$T$ model).
Figure 3. PDFs of the parameters and noises for i = 1 ($T$–$\xi$ model).
Figure 4. Ensembles of the $L$–$T$ model.
Figure 5. Ensembles of the $T$–$\xi$ model.
Figure 6. 24-h, 48-h and 72-h forecasts using the $L$–$T$–$\xi$ model.
Table 1. Lagrange multipliers $\theta$, $\eta$.

| Time Instant | $\theta^{(1)}$ | $\theta^{(2)}$ | $\theta^{(3)}$ | $\eta^{(1)}$ | $\eta^{(2)}$ | $\eta^{(3)}$ |
|---|---|---|---|---|---|---|
| 1 | −29.72 | 7009.28 | 1038.07 | 14.63 | 21.22 | 17.34 |
| 2 | 1.58 | 230.89 | 35.35 | 19.52 | 26.71 | 28.32 |
| 3 | −4.09 | 369.96 | 26.23 | 35.91 | 31.60 | 26.33 |
| 4 | −4.68 | 29.93 | 11.96 | 55.83 | 127.82 | 52.08 |
| 5 | −7.21 | 24.25 | 1.03 | 96.85 | 642.35 | 110.94 |
| 6 | −9.26 | 13.72 | −15.76 | 592.99 | 7009.28 | 4729.52 |
| 7 | −59.09 | −5.96 | −7009.28 | 7009.28 | 183.92 | 7009.28 |
| 8 | −7009.28 | −33.99 | −767.99 | 48.21 | 39.94 | 23.16 |
| 9 | −766.00 | −1409.28 | −22.91 | 66.58 | 12.28 | −1.26 |
| 10 | −50.90 | −4229.90 | −4.27 | 37.38 | 2.35 | −19.78 |
| 11 | −18.97 | −45.22 | 3.72 | 22.51 | −8.82 | −22.73 |
| 12 | −11.42 | −15.07 | 9.17 | 7.16 | −27.06 | −23.06 |
| 13 | −13.94 | 2.59 | 17.38 | 5.72 | −172.25 | −27.06 |
| 14 | −17.62 | 5.82 | 14.94 | 2.83 | −65.29 | −23.02 |
| 15 | −18.18 | 9.33 | 17.74 | −0.30 | −57.45 | −23.15 |
| 16 | −27.28 | 11.35 | 21.85 | −1.24 | −482.69 | −47.78 |
| 17 | −49.55 | 4.50 | 22.68 | −5.49 | −889.02 | −130.49 |
| 18 | −25.41 | −7.09 | 29.39 | −0.89 | −28.12 | −60.71 |
| 19 | −8.20 | −4.66 | 98.03 | −4.23 | −14.20 | −270.17 |
| 20 | 0.95 | −4.89 | 52.27 | 3.70 | −6.41 | −31.47 |
| 21 | 1.01 | −16.37 | 8.15 | 21.16 | 2.48 | −1.23 |
| 22 | 22.00 | −8.24 | 7.45 | 17.85 | 12.15 | 14.86 |
| 23 | 2881.43 | 17.00 | 902.73 | 26.78 | 9.98 | 26.39 |
| 24 | 512.14 | 36.30 | 355.47 | 27.32 | 24.65 | 121.44 |
| $l_r^{(i)}(\theta^*)$ | 9.71 | 6.04 | 6.09 | | | |
| $h_r^{(i)}(\theta^*)$ | 0.06 | 0.58 | 0.81 | | | |
| $\tilde{h}_r^{(i)}(\eta^*)$ | | | | 0.41 | 0.05 | 0.19 |
Table 2. Values $\delta_L$ obtained by cross-testing of the $L$–$T$ model. Mean value $\delta_L = 0.0530$.

| $i \backslash j$ | 1 | 2 | 3 |
|---|---|---|---|
| 1 | – | 0.0495 | 0.1052 |
| 2 | 0.0858 | – | 0.1428 |
| 3 | 0.0569 | 0.0364 | – |
Table 3. Values $\delta_T$ obtained by cross-testing of the $T$–$\xi$ model. Mean value $\delta_T = 0.0757$.

| $i \backslash j$ | 1 | 2 | 3 |
|---|---|---|---|
| 1 | – | 0.1051 | 0.1079 |
| 2 | 0.1506 | – | 0.1185 |
| 3 | 0.1315 | 0.0676 | – |
Table 4. Values $\delta_L$ obtained by cross-testing of the $L$–$T$–$\xi$ model. Mean value $\delta_L = 0.1478$.

| $i \backslash j$ | 1 | 2 | 3 |
|---|---|---|---|
| 1 | – | 0.1437 | 0.2659 |
| 2 | 0.1756 | – | 0.2322 |
| 3 | 0.3475 | 0.1655 | – |
Table 5. Accuracy of 24-h, 48-h and 72-h forecasts using the $L$–$T$–$\xi$ model.

| $\delta_L^{(2)}$ | $\delta_L^{(3)}$ | $\delta_L^{(4)}$ |
|---|---|---|
| 0.1509 | 0.2515 | 0.2133 |
