Article

Variables Reduction in Sequential Resource Allocation Problems

1
School of Mathematical and Physical Sciences, University of Technology Sydney, P.O. Box 123, Broadway, NSW 2007, Australia
2
Department of Mathematics “Tullio Levi-Civita”, University of Padova, via Trieste 63, 35131 Padova, Italy
*
Author to whom correspondence should be addressed.
Appl. Syst. Innov. 2020, 3(2), 21; https://doi.org/10.3390/asi3020021
Submission received: 24 January 2020 / Revised: 2 March 2020 / Accepted: 17 March 2020 / Published: 14 April 2020

Abstract

This paper presents a general framework to address a class of notoriously difficult problems arising in the areas of optimal resource management, exploitation of natural reserves, pension fund valuation, environmental protection, and storage operation. Exploiting common abstract features of this problem class, we present a technique which provides a significant reduction of the number of decision variables. As an application, we discuss battery storage control to show how a decision problem, which is practically unsolvable in its original formulation, can be treated by our method.

1. Introduction

Optimization of sequential decision-making under uncertainty arises in different fields, such as finance, economics, robotics, manufacturing, and telecommunication. These problems are frequently discussed within the framework of real options, as applications of optimal stochastic control. Thereby, the real options viewpoint (see [1,2]) highlights a freedom in the choice of decisions and consistently uses the operational flexibility for strategy optimization. The classical contributions [3,4] were among the first to place sequential decision optimization into a real options context, with applications in manufacturing [5], investment planning [6], mining [7], and commodities [8]. Traditionally, a formulation of decision problems in continuous time was preferred: using a jump-diffusion setting and its well-established stochastic control toolbox, the so-called Hamilton–Jacobi–Bellman (HJB) equations (see [9]) and Backward Stochastic Differential Equations (BSDEs) (see [10]) become applicable [11,12,13]. However, sequential decision problems have also been routinely treated using discrete-time modeling, in terms of Markov decision theory [14,15], whose approximate techniques are known as Approximate Dynamic Programming (see [16], which comprises diverse heuristic numerical approaches).
In this paper, we discuss a specific class of discrete-time stochastic control problems from the viewpoint of real options and exploit a certain managerial flexibility which can be expressed in terms of a virtual resource price. We show that in the optimal regime, at any time, the actions are connected to each other (in a certain sense) via this virtual resource price. This observation can be used to significantly reduce the range of actions available for the decision choice by excluding, at each step, those which cannot be optimal. More specifically, we consider an abstract but generic situation which frequently arises in sequential decision-making, when a certain activity (production plan) must be optimally chosen to meet the right balance between the current costs of the activity and the consumption of some resource. While all activity costs have an immediate effect, the impact of the resource consumption is uncertain and becomes relevant in the future. In practice, one is frequently confronted with a huge range of possible activities, whose combinations form a complex space with a structure usually determined by numerous inter-relations. Although a solution to such a sequential decision problem can theoretically be obtained in terms of the standard backward induction, its practical implementation is virtually infeasible due to the high complexity of the decisions.
It turns out that a certain transformation can be of great help in such situations. In particular, under specific but natural assumptions, we show how the space of decision variables can be reduced to a simple one-parameter family. Such a reduction is achieved by solving a separate deterministic optimization problem, whose solution is usually easy to obtain. Using this reformulation, diverse numerical techniques for backward induction can be applied. In this context, in our particular situation, Bellman's optimality principle turns out to be equivalent to certain fixed-point equations, which might lead to alternative ways to further reduce the computational effort of the backward induction.
The paper is organized as follows. Section 2 reviews the dynamic programming principle underlying classical and stochastic dynamic programming and explains how our method is placed within this context and how it is used in practice. Section 3 presents the problem and our approach to variables reduction within a general framework. Section 4 introduces an application to battery control, which is revisited in Section 5 to obtain a variables reduction for battery storages. Section 6 is devoted to the formal justification of our technique, while Section 7 concludes.

2. Dynamic Principle for Optimal Switching and Reduction of Decision Variables

In applications, sequential decision-making is usually addressed within the framework of discrete-time stochastic control. The theory of Markov Decision Processes/Dynamic Programming provides a variety of methods to deal with such questions. In generic situations, obtaining analytical solutions for even the simplest decision processes may be a cumbersome process ([10,14,16]). Furthermore, since closed-form solutions to practically important control problems are usually unavailable, numerical approximations have become popular among practitioners for obtaining approximately optimal control policies. Although a huge variety of computational methods have been developed for this purpose, typical real-world problems are usually too complex for the existing solution techniques, in particular if the state dimension of the underlying controlled evolution is high. Let us review the finite-horizon Markov decision theory following [17].
Controlled Markov processes: Consider a random dynamics on a finite time horizon $0, \dots, T$ whose state $x$ evolves in a state space $E$ and is controlled by actions $a$ from an action set $A$. For each $a \in A$, we assume that $K_t^a(x, dx')$ is a stochastic transition kernel on $E$. A mapping $\pi_t \colon E \to A$ which describes the action that the controller takes at time $t$ is called a decision rule. A sequence of decision rules $\pi = (\pi_t)_{t=0}^{T-1}$ is called a policy. For each initial point $x_0 \in E$ and each policy $\pi = (\pi_t)_{t=0}^{T-1}$, there exists a probability measure $P^{x_0,\pi}$ and a stochastic process $(X_t)_{t=0}^{T}$ satisfying the initial condition $P^{x_0,\pi}(X_0 = x_0) = 1$ such that

$$P^{x_0,\pi}(X_{t+1} \in B \mid X_0, \dots, X_t) = K_t^{\pi_t(X_t)}(X_t, B) \qquad (1)$$

holds for each measurable $B \subseteq E$ at all times $t = 0, \dots, T-1$; i.e., given that the system is in state $X_t$ at time $t$, the action $a = \pi_t(X_t)$ is used to pick the transition probability $K_t^{\pi_t(X_t)}(X_t, \cdot)$, which randomly drives the system from $X_t$ to $X_{t+1}$. Let us also use $K_t^a$ to denote the one-step transition operator associated with the transition kernel $K_t^a$ when the action $a \in A$ is chosen. In other words, for each action $a \in A$ the operator $K_t^a$ acts on functions $v$ by

$$(K_t^a v)(x) = \int_E v(x') \, K_t^a(x, dx'), \qquad x \in E, \qquad (2)$$

whenever the above integrals are well-defined.
Costs of control: For each time $t$, we are given a reward function $r_t \colon E \times A \to \mathbb{R}$, where $r_t(x, a)$ represents the reward for applying an action $a \in A$ when the state of the system is $x \in E$ at time $t$. At the end of the time horizon, at time $T$, it is assumed that no action can be taken. Here, if the system is in a state $x$, a scrap value $r_T(x)$, described by a pre-specified scrap function $r_T \colon E \to \mathbb{R}$, is collected. Given an initial point $x_0$, the goal is to maximize the expected finite-horizon total reward; in other words, to find a policy $\pi^* = (\pi^*_t)_{t=0}^{T-1}$ such that

$$\pi^* = \operatorname*{argmax}_{\pi \in \Pi} E^{x_0,\pi}\Big[\sum_{t=0}^{T-1} r_t(X_t, \pi_t(X_t)) + r_T(X_T)\Big], \qquad (3)$$

where $\Pi$ is the set of all policies and $E^{x_0,\pi}$ denotes the expectation over the controlled Markov chain defined by (1).
Decision optimization: The maximization (3) is well-defined under diverse additional assumptions (see [14], p. 199). The calculation of the optimal policy is addressed in the following setting. For $t = 0, \dots, T-1$, introduce the Bellman operator

$$\mathcal{T}_t v(x) = \sup_{a \in A}\big[r_t(x, a) + (K_t^a v)(x)\big], \qquad x \in E, \qquad (4)$$

which acts on each measurable function $v \colon E \to \mathbb{R}$ for which the integrals $K_t^a v$ exist for all $a \in A$. Furthermore, consider the Bellman recursion

$$v_T = r_T, \qquad v_t = \mathcal{T}_t v_{t+1} \quad \text{for } t = T-1, \dots, 0. \qquad (5)$$

Under appropriate assumptions, there exists a recursive solution $(v_t)_{t=0}^{T}$ to the Bellman recursion, which gives the so-called value functions and determines an optimal policy $\pi^* = (\pi^*_t)_{t=0}^{T-1}$ via

$$\pi^*_t(x) = \operatorname*{argmax}_{a \in A}\big[r_t(x, a) + (K_t^a v_{t+1})(x)\big], \qquad x \in E, \ t = 0, \dots, T-1. \qquad (6)$$
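To make the recursion (5) and the policy extraction (6) concrete, the following minimal sketch performs backward induction for a toy problem with finite state and action spaces, where each kernel $K_t^a$ is a transition matrix. All names and data here are illustrative assumptions, not part of the paper's model.

```python
import numpy as np

def backward_induction(T, kernel, reward, scrap):
    """Bellman recursion v_T = r_T, v_t = T_t v_{t+1} for a finite
    Markov decision problem.

    kernel[t][a] : (nE x nE) transition matrix of action a at time t
    reward[t]    : (nE x nA) array of rewards r_t(x, a)
    scrap        : (nE,) array of scrap values r_T(x)
    """
    nE, nA = reward[0].shape
    v = scrap.copy()                      # v_T = r_T
    policy = np.zeros((T, nE), dtype=int)
    for t in range(T - 1, -1, -1):        # t = T-1, ..., 0
        # q[x, a] = r_t(x, a) + (K_t^a v_{t+1})(x)
        q = np.stack([reward[t][:, a] + kernel[t][a] @ v
                      for a in range(nA)], axis=1)
        policy[t] = q.argmax(axis=1)      # optimal decision rule pi_t
        v = q.max(axis=1)                 # value function v_t
    return v, policy

# Toy data: 3 states, 2 actions, horizon 4 (purely illustrative).
rng = np.random.default_rng(0)
T, nE, nA = 4, 3, 2
kernel = [[rng.dirichlet(np.ones(nE), size=nE) for _ in range(nA)]
          for _ in range(T)]
reward = [rng.uniform(0.0, 1.0, size=(nE, nA)) for _ in range(T)]
v0, policy = backward_induction(T, kernel, reward, np.zeros(nE))
print(v0)        # expected total reward from each initial state
```

In realistic problems, the inner maximization over actions is the bottleneck; the reduction technique discussed below shrinks exactly that loop.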
Stochastic switching: Consider now a Markov decision model whose state evolution consists of one controllable and one uncontrollable component. To be more specific, we assume that the state space $E = P \times Z$ is the product of a space $P$ (operation modes) and a set $Z$ (states of the environment), being a subset $Z \subseteq \mathbb{R}^d$ of the Euclidean space. We suppose that at each time $t = 0, \dots, T-1$ the mode component $p \in P$ is driven by actions $a \in A$ in terms of a deterministic function

$$\alpha_t \colon P \times Z \times A \to P, \qquad (p, z, a) \mapsto \alpha_t^{p,z}(a),$$

where $\alpha_t^{p,z}(a) \in P$ stands for the new mode if the action $a \in A$ is taken at time $t = 0, \dots, T-1$ in the state $(p, z) \in E$. In this setting, the transition operators are given by

$$K_t^a v(p, z) = E\big(v(\alpha_t^{p,z}(a), Z_{t+1}) \mid Z_t = z\big), \qquad z \in Z,$$

for $t = 0, \dots, T-1$ and $a \in A$.
Variables reduction: In the above context of stochastic switching, the application of the Bellman operator

$$\mathcal{T}_t v(p, z) = \sup_{a \in A}\big[r_t(p, z, a) + E\big(v(\alpha_t^{p,z}(a), Z_{t+1}) \mid Z_t = z\big)\big], \qquad (p, z) \in E, \qquad (7)$$

may cause numerical difficulties, particularly if the action space $A$ is high-dimensional and fragmented. Unfortunately, this situation appears frequently in practice, where high-dimensional vectors of decision variables are usually subject to numerous feasibility inter-dependencies. In such a framework, the problem may become unsolvable due to the difficulty of maximization over a complex action set $A$. The main contribution of our paper is a significant reduction of the set of actions which are relevant for this maximization. Under the specific assumptions described in this paper, we show that there exists a one-parameter family (curve) $\mathcal{A}_t^{p,z} \subseteq A$, depending on the current time $t = 0, \dots, T-1$ and state $(p, z) \in E$, such that the domain of maximization reduces from $A$ to $\mathcal{A}_t^{p,z}$, which yields, instead of the infeasible maximization (7), the new problem

$$\mathcal{T}_t v(p, z) = \sup_{a \in \mathcal{A}_t^{p,z}}\big[r_t(p, z, a) + E\big(v(\alpha_t^{p,z}(a), Z_{t+1}) \mid Z_t = z\big)\big], \qquad (p, z) \in E, \qquad (8)$$

which is significantly simpler and usually admits a (numerical) solution. To determine the curve $\mathcal{A}_t^{p,z} \subseteq A$ of relevant actions, a separate deterministic optimization problem must be solved. Its solution is usually obtained explicitly and provides interesting economic insights.
Contribution of this work: Using the standard Bellman principle, we explore an abstract but natural framework to reduce a potentially very large space of decision variables (actions) to a single one-parameter family. Although our technique is entirely placed within the traditional Bellman principle of stochastic/classical dynamic programming and addresses a relatively narrow problem class, a wide range of applications is covered, including mining operations, pension fund management, and emission control. This approach can serve as a basis for efficient numerical algorithms for notoriously difficult and important problems from practice.

3. Optimal Control via Virtual Resource Price

Let us introduce the required framework more precisely. Consider an agent who is confronted with the following problem. At the beginning of each decision epoch $t = 0, 1, \dots, T-1$, an activity plan (work schedule) $\xi_t$ is to be determined. Thereby, all costs of this work plan must be optimally balanced against its resource consumption/generation. Suppose that the limited resources are described in terms of the state variable $e \in I$, which stands for the current resource shortage and can vary within a certain interval $I \subseteq \mathbb{R}$. For instance, if the resource under consideration is a commodity in a storage, then $e \in I$ stands for the amount of commodity required to fill the storage to its maximal capacity. The other state variable $z \in \mathbb{R}^d$ is supposed to represent the situation in the surrounding environment. Let us agree that this environmental state variable $z$ is relevant for decisions but cannot be influenced by the agent's actions. For instance, for commodity storage, $z$ may comprise the driving market factors of the commodity price evolution. We further assume that the environment state occurs at any time $t = 0, \dots, T$ as a realization $z = Z_t$ of an $\mathbb{R}^d$-valued Markov process $(Z_t)_{t=0}^{T}$ whose dynamics carries all information relevant for decisions.
Having observed at time $t = 0, \dots, T-1$ the resource level $e \in I$ and the realization of the environmental state variable $z = Z_t$, the agent selects a plan $\xi \in \Xi$ from the set $\Xi$ of all feasible activity plans. This choice yields an immediate cost $C_t^{e,z}(\xi)$ and causes an immediate resource consumption $E_t^{e,z}(\xi)$ via pre-specified functions $C_t^{e,z}$, $E_t^{e,z}$ on $\Xi$, which may depend on the current state $(e, z) \in I \times \mathbb{R}^d$. While all costs are accumulated, the resource level is carried over to the next decision time $t+1$ as $e + E_t^{e,z}(\xi)$ and will influence the decision at that time. The availability of resources becomes crucial at the end $t = T$ of the planning horizon, when certain terminal costs $C_T(e, z)$ must be paid, which depend on the total resource level $e \in I$ and on the state $z \in \mathbb{R}^d$ of the environment. Under additional assumptions, such control problems are solved in terms of the so-called value functions $(V_t)_{t=0}^{T}$, which are obtained via the Bellman recursion as

$$V_T(e, z) = C_T(e, z), \qquad (9)$$

$$\tilde V_{t+1}(e, z) = \begin{cases} E\big(V_{t+1}(e, Z_{t+1}) \mid Z_t = z\big) & e \in I \\ +\infty & e \notin I \end{cases}, \qquad z \in \mathbb{R}^d, \qquad (10)$$

$$V_t(e, z) = \inf_{\xi \in \Xi}\big[C_t^{e,z}(\xi) + \tilde V_{t+1}(e + E_t^{e,z}(\xi), z)\big], \qquad (11)$$
for all arguments $(e, z) \in I \times \mathbb{R}^d$ representing the current situation at the decision time. Please note that we implicitly agreed to penalize the violation of the admissible resource level in (10) by infinity, gaining the freedom to interpret the value function for all arguments $e \in \mathbb{R}$ with possible value $+\infty$, in order to avoid tracking the restriction $e + E_t^{e,z}(\xi) \in I$ in (11).
The idea underlying our variables reduction scheme is based on the realization that if the resources had a market price, then an agent would minimize all activity costs taking into account the monetary value of the consumed resources. To realize this concept, we suppose that some entity (a regulatory body) could charge a resource price $A \in \mathbb{R}$ at any time $t$, depending on the situation $(e, z)$. In the presence of such a virtual price $A \in \mathbb{R}$, the decision-maker would examine the virtual charges for the resource consumption and choose an activity accordingly, by obtaining

$$X_t^{e,z}(A), \ \text{a minimizer of} \ \xi \mapsto C_t^{e,z}(\xi) + A\,E_t^{e,z}(\xi) \ \text{over} \ \xi \in \Xi. \qquad (12)$$
Obviously, such a mapping $A \mapsto X_t^{e,z}(A)$ represents an optimal activity depending on the current state $(e, z) \in I \times \mathbb{R}^d$ and time $t = 0, \dots, T-1$, given the resource price $A \in \mathbb{R}$. To some degree, this mapping can be interpreted as the willingness to save resources by following a less profitable strategy in response to an increased value of the resource. For ease of understanding, let us postpone the discussion concerning the existence of the minimizer in (12) and the properties of the relation $A \mapsto X_t^{e,z}(A)$ which are crucial for the targeted results. The important assumption for now is that for each state $(e, z)$ there exists a bounded interval $\mathcal{A}^{e,z} \subset \mathbb{R}$ such that the one-parameter family of activities

$$\{X_t^{e,z}(A) \in \Xi \mid A \in \mathcal{A}^{e,z}\} \subseteq \Xi \qquad (13)$$
contains only the “best candidates” for the purpose of the minimization in (11). They can be used in the Bellman recursion, replacing the minimization in (11) by (16) as follows:

$$v_T(e, z) = C_T(e, z), \qquad (14)$$

$$\tilde v_{t+1}(e, z) = \begin{cases} E\big(v_{t+1}(e, Z_{t+1}) \mid Z_t = z\big) & e \in I \\ +\infty & e \notin I \end{cases}, \qquad z \in \mathbb{R}^d, \qquad (15)$$

$$v_t(e, z) = \min_{A \in \mathcal{A}^{e,z}}\big[C_t^{e,z}(X_t^{e,z}(A)) + \tilde v_{t+1}(e + E_t^{e,z}(X_t^{e,z}(A)), z)\big]. \qquad (16)$$
Please note that the minimization in (16) must now be performed merely over the curve (13) instead of over the whole space $\Xi$ as in (11). In practical applications, such a reduction can provide a reasonable (numerical) approach to a virtually unsolvable problem. The main question here is whether

the value functions $(V_t)_{t=0}^{T}$ from (9)–(11) coincide with those $(v_t)_{t=0}^{T}$ from (14)–(16). (17)

In what follows, we work out all conditions required for this. Before we turn to the illustration of our technique by battery storage management in Section 4 and Section 5, followed by proofs in Section 6, let us present all required assumptions which ensure the validity of the assertion (17).
Suppose that all information relevant for decision-making is carried by an $\mathbb{R}^d$-valued Markov process $(Z_t)_{t=0}^{T}$ which is realized on a filtered probability space $(\Omega, \mathcal{F}, P, (\mathcal{F}_t)_{t=0}^{T})$. As introduced above, we assume that the resource level can vary within a certain bounded interval $I \subseteq \mathbb{R}$. Having selected an activity $\xi$ from the set $\Xi$ of all feasible activity plans at time $t = 0, \dots, T-1$ in the state $(e, z) \in I \times \mathbb{R}^d$, the costs $C_t^{e,z}(\xi)$ and the resource consumption $E_t^{e,z}(\xi)$ are determined via pre-specified functions

$$C_t^{e,z}, E_t^{e,z} \colon \Xi \to \mathbb{R}, \qquad t = 0, \dots, T-1, \ (e, z) \in I \times \mathbb{R}^d, \qquad (18)$$

while the terminal costs are determined by

$$C_T \colon I \times \mathbb{R}^d \to \mathbb{R}, \qquad (e, z) \mapsto C_T(e, z). \qquad (19)$$
All our considerations rely on additional assumptions on the functions (18), which we formulate next. To ensure that the minimization in (11) is well-defined, let us agree that there exists an idle activity which does not consume any resources:

for each $(e, z) \in I \times \mathbb{R}^d$ there exists $\xi \in \Xi$ such that $E_t^{e,z}(\xi) = 0$. (20)
To ease our argumentation, we suppose that

for each $(e, z) \in I \times \mathbb{R}^d$ there is an interval $\mathcal{A}^{e,z} \subseteq \mathbb{R}$ such that the minimizer $X_t^{e,z}(A) \in \Xi$ of (12) exists for each $A \in \mathcal{A}^{e,z}$. (21)
Furthermore, let us propose a mild technical assumption:

for each $z \in \mathbb{R}^d$, $\xi \in \Xi$ and $e, e' \in I$ with $e < e'$ there exists $\xi' \in \Xi$ reaching the same level $e + E_t^{e,z}(\xi') = e' + E_t^{e',z}(\xi)$ of resource consumption at no greater costs $C_t^{e,z}(\xi') \le C_t^{e',z}(\xi)$. (22)
To determine the desired minimizer $\xi^* = X_t^{e,z}(A)$ of (12), we rely on the following natural technical assumption:

for each $(e, z) \in I \times \mathbb{R}^d$ and $t = 0, \dots, T-1$, the function $A \mapsto E_t^{e,z}(X_t^{e,z}(A))$ is continuous on $\mathcal{A}^{e,z}$, strictly decreasing, and possesses a root. (23)
Finally, we require some convexity properties in the sense that

the set $\Xi$ is convex and, for each $z \in \mathbb{R}^d$ and $t = 0, \dots, T-1$, the functions $(e, \xi) \mapsto C_t^{e,z}(\xi)$ and $(e, \xi) \mapsto E_t^{e,z}(\xi)$ are convex on $I \times \Xi$; furthermore, $e \mapsto C_T(e, z)$ is convex and non-decreasing on $I$. (24)
As mentioned above, the advantage of our approach is that a simpler form (14)–(16) of the Bellman recursion arises, whose efficient (numerical) treatment may easily be possible, unlike that of the original problem (9)–(11). Indeed, it turns out that this simplification solves the original problem in the following sense:
Theorem 1.
Under the assumptions (20)–(24), consider $(V_t)_{t=0}^{T}$ and $(v_t)_{t=0}^{T}$ defined by (9)–(11) and (14)–(16), respectively. If a solution $(v_t)_{t=0}^{T}$ to (14)–(16) exists, then

the value functions coincide: $v_t = V_t$ for all $t = 0, \dots, T$. (25)
In view of the above result, the practical solution now requires obtaining, for each $t = 0, \dots, T-1$, the optimal decision rule

$$\pi_t \colon I \times \mathbb{R}^d \to \mathbb{R} \qquad (26)$$

via minimization in the simplified Bellman recursion, where the virtual price for the optimal regime is obtained via $\pi_t(e, z) = A^*$ as a minimizer

$$A^* = \operatorname*{argmin}_{A \in \mathcal{A}^{e,z}}\big[C_t^{e,z}(X_t^{e,z}(A)) + \tilde V_{t+1}(e + E_t^{e,z}(X_t^{e,z}(A)), z)\big]. \qquad (27)$$
Remark 1.
We also prove that the optimal decision $\pi_t(e, z) = A^*$ can be obtained as a solution to the fixed-point equation

$$A^* \in \partial\tilde V_{t+1}(e + E_t^{e,z}(X_t^{e,z}(A^*)), z), \qquad (28)$$
meaning that $A^*$ must be a sub-gradient of the expected value function; i.e., in the optimal regime, the virtual resource price must always be equal to the marginal change rate of the value function with respect to the resource level. The economic interpretation of this insight is natural: when choosing an activity plan, the agent increases the resource consumption up to the level where the value loss caused by this consumption starts taking over the instant revenue of the activity.
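The fixed-point characterization (28) also suggests a direct numerical route: since $A \mapsto E_t^{e,z}(X_t^{e,z}(A))$ is decreasing and $\tilde V_{t+1}$ is convex in the resource level, the residual of the fixed-point equation is monotone in $A$ and can be bracketed. The sketch below solves such an equation by bisection for purely hypothetical stand-ins (a quadratic expected value function and a linear consumption response); none of these ingredients come from the paper.

```python
# Hypothetical stand-ins (not the paper's model):
#   marginal value    V~'(e)  = c * e      (convex V~, so V~' is increasing)
#   consumption       E(X(A)) = m - k * A  (strictly decreasing, root at m/k)
c, k, m, e = 0.8, 1.5, 2.0, 1.0

def residual(A):
    """Residual of the fixed-point equation A = V~'(e + E(X(A)))."""
    return A - c * (e + m - k * A)

# residual is continuous and increasing in A, so bisection applies.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0.0:
        hi = mid
    else:
        lo = mid

A_star = 0.5 * (lo + hi)
print(A_star, c * (e + m) / (1.0 + c * k))   # bisection vs. closed form
```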

4. Battery Storage Control

To illustrate our variables reduction technique, we introduce a model for a battery storage installation operated within a deregulated electricity market. First, let us introduce some technical characteristics. The battery capacity stands for the maximal energy stored (measured in MWh), whereas the battery power is the amount of electrical power (measured in MW) the installation can provide at any moment (see [18]). The conditions under which batteries are operated affect their performance in terms of the so-called cycle life, which can be defined as the number of cycles completed before the effective battery capacity falls below 60% of its nominal capacity. In this respect, the Depth of Discharge (DoD) is essential. To give the reader a quantitative understanding of this phenomenon, let us consider the example of a lithium-ion battery. Assuming that each charge/discharge cycle causes the same battery deterioration, one would expect that completely emptying a battery (which corresponds to 100% DoD) 200 times is roughly equivalent (in terms of performance decline) to 400 cycles at 50% DoD and 600 cycles at 33% DoD. However, the actual behavior is different. Usually, a lithium-type battery serves longer than 400 cycles at 50% DoD and significantly longer than 600 cycles at 33% DoD (see [19]). Figure 1 illustrates the typical dependence of battery life, measured in charge cycles, on the depth of discharge. In our model, we suppose that visiting deep discharge states is costly since it affects battery life; for this reason, we include such effects by a user-defined cost function which penalizes deep discharge states accordingly.
Beyond preventing deep discharge, further operational improvements encompass avoiding a fully charged battery (by reducing the voltage when charging) and diminishing the so-called charge/discharge rate, which stands for the maximal electric flow during charging/discharging.
We assume that the agent attempts to optimally manage an electricity storage by sequential decisions on the amount of energy procured/purchased through the intraday market and on the optimal charge/discharge of batteries. Obviously, these decisions must take into account the current electricity price, market state, storage level, and all costs and technical restrictions.
Consider an energy retailer facing the obligation to satisfy an unknown energy demand of its customers at times $t = 0, \dots, T-1$, while renewable energy sources produce a random amount of electricity and a certain battery storage is available. To manage a potential energy imbalance, appropriate forward positions are taken in advance and decisions are made to operate the battery storage. This is a typical dynamic control problem, since at each decision time an action must be chosen (encompassing a simultaneous energy trade and battery control) which immediately causes some costs but also determines the transition to the next system state (future battery level and market conditions). Clearly, one needs to optimally balance the current costs against those incurred in the future, based on the current situation. Problems of this type are naturally formulated and solved in terms of dynamic programming.
Remark 2.
Please note that we investigate the problem of electricity storage management within an abstract context which is applicable to most deregulated electricity markets, possibly with minor adaptations. Namely, we consider energy trading on two time scales. The longer-term trading is realized by forwards or futures or by energy delivery contracts, for instance from day-ahead trading. In practice, these long-term positions are constantly adjusted on a short-term scale using intraday trading or balancing procedures. This two-scale structure is universal and inherent to any deregulated electricity market, and we usually observe significant differences between the two scales in their prices, liquidity, and spreads.
Consider an agent who serves an energy consumer whose random demand is (partially) covered by a renewable energy generation facility whose energy output is also random. We denote by $D_t$ the cumulative net electricity demand of such a facility (standing for demand if $D_t > 0$ and surplus if $D_t < 0$) for the delivery period $t$. Assume that the time point $t = 0, \dots, T-1$ corresponds to the beginning of the period $t$, and agree that the demand $D_t$ is observed after $t$, at the end of the delivery period $t$. Let us agree to model $D_t = d_t + \varepsilon_t$, with a zero-mean random variable $\varepsilon_t$ standing for the deviation of the realized demand $D_t$ from its prediction $d_t$, which is observable at time $t$. Moreover, assume that at each time $t = 0, \dots, T$, the producer can take a forward position which attempts to cover $D_t$. Let us describe such a position as $d_t + f_t$, where $f_t$ stands for the deviation of the total amount traded forward from the prediction $d_t$. In generic situations, the quantity $f_t$ can be considered a “safety margin” which must be purchased on top of the predicted demand $d_t$ to avoid a potential energy shortage during the delivery interval. However, we do not assume that $f_t$ or $d_t$ must always be positive. Moreover, introduce the control variable $b_t$, standing for the decision to transfer the energy amount $|b_t|$ from/to the battery, where $b_t > 0$ and $b_t < 0$ represent discharging and charging actions, respectively. Here, we agree that these control actions must be decided at time $t$ (immediately before the delivery period $t$ starts). With these assumptions, the energy to be balanced during the delivery period $t = 0, \dots, T$ using the electricity grid is

$$f_t + d_t - D_t + b_t = f_t + b_t - \varepsilon_t. \qquad (29)$$

Please note that on the right-hand side of this equation, the quantities $f_t$ and $b_t$ must be chosen at time $t$, whereas $\varepsilon_t$ becomes observable only after time $t$.
Now we turn to storage control costs and introduce electricity prices
$$\Psi_t = (\Psi_t^+, \Psi_t^0, \Psi_t^-), \qquad t = 0, \dots, T,$$

with the interpretation that, for the delivery period $t$, $\Psi_t^0$ stands for the forward price, whereas $\Psi_t^+$ and $\Psi_t^-$ are the so-called upper and lower balancing prices.
Remark 3.
Please note that Ψ t 0 stands for the price of energy from long-term market. In the sense of the above remark, this price can represent a forward, futures, or day-ahead price of electrical energy in front of delivery, depending on modeling.
While the forward price $\Psi_t^0$ is quoted prior to the delivery period $t$ and applies to energy traded in advance, the balancing prices $\Psi_t^+$, $\Psi_t^-$ are determined during the delivery period $t$ and apply to the sale and the purchase of grid energy, respectively. Usually, it holds that

$$0 \le \Psi_t^+ < \Psi_t^0 < \Psi_t^-, \qquad t = 0, \dots, T. \qquad (30)$$
In practice, the price range $[\Psi_t^+, \Psi_t^-]$ can be wide, meaning that $\Psi_t^+$ is significantly lower than $\Psi_t^0$, which in turn is lower than $\Psi_t^-$. This makes any balancing using grid energy potentially unfavorable. For this reason, the agent attempts to meet the demand as precisely as possible using a combination of the energy from the forward position and from the battery. More specifically, we suppose that the costs associated with taking the forward position are given by

$$\Psi_t^0(d_t + f_t) + q(d_t + f_t)^2, \qquad (31)$$

where $q > 0$ is a coefficient representing the elasticity of the forward price with respect to the contract volume. The total costs associated with energy procurement and balancing during the period $t = 0, \dots, T$ are given by

$$\Psi_t^0(d_t + f_t) + q(d_t + f_t)^2 - \Psi_t^+(f_t + b_t - \varepsilon_t)^+ + \Psi_t^-(f_t + b_t - \varepsilon_t)^-. \qquad (32)$$
Please note that this quantity is observable only after time $t$ and is controlled by the variables $f_t$ and $b_t$, which must be chosen at time $t$.
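As a quick sanity check of the cost structure (32), the following snippet evaluates the realized cost of one delivery period for illustrative numbers of our own choosing: a surplus $f_t + b_t - \varepsilon_t > 0$ is sold at the low price $\Psi_t^+$, while a shortfall is bought at the high price $\Psi_t^-$.

```python
def realized_cost(f, b, eps, d, psi0, psi_plus, psi_minus, q):
    """Realized total cost (32) of one delivery period: forward
    procurement cost plus the balancing settlement of f + b - eps."""
    imbalance = f + b - eps
    forward = psi0 * (d + f) + q * (d + f) ** 2
    return (forward
            - psi_plus * max(imbalance, 0.0)      # surplus sold cheaply
            + psi_minus * max(-imbalance, 0.0))   # shortfall bought dearly

# Illustrative numbers: prediction d, safety margin f, discharge b.
print(realized_cost(f=2.0, b=1.0, eps=4.0, d=10.0,
                    psi0=40.0, psi_plus=20.0, psi_minus=80.0, q=0.05))
```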
Let us precisely formulate the assumptions on the random variables, concerning the time of their observation. Suppose that the processes $(d_t)_{t=0}^{T}$, $(f_t)_{t=0}^{T}$, $(b_t)_{t=0}^{T}$, $(\varepsilon_t)_{t=0}^{T}$ are given on a filtered probability space $(\Omega, \mathcal{F}, P, (\mathcal{F}_t)_{t=0}^{T+1})$, where $\mathcal{F}_t$ represents the information available at the time point $t$, just before the start of the delivery period $t$. According to the above modeling, we suppose that for $t = 0, \dots, T$

$$d_t, f_t, b_t, \Psi_t^0 \ \text{are } \mathcal{F}_t\text{-measurable}, \quad \text{and} \quad \varepsilon_t, \Psi_t^+, \Psi_t^- \ \text{are } \mathcal{F}_{t+1}\text{-measurable}. \qquad (33)$$
Let us suppose that

$$\Psi_t = (\Psi_t^-, \Psi_t^0, \Psi_t^+) \ \text{and} \ \varepsilon_t \ \text{are conditionally independent given } \mathcal{F}_t, \qquad t = 0, \dots, T,$$

and denote by $E_t(\cdot)$ the expectation conditioned on $\mathcal{F}_t$ for $t = 0, \dots, T$. Applying this conditional expectation $E_t(\cdot)$ to (32), we use the above conditional independence to obtain
$$\bar\Psi_t^0(d_t + f_t) + q(d_t + f_t)^2 - \bar\Psi_t^+ E_t[(f_t + b_t - \varepsilon_t)^+] + \bar\Psi_t^- E_t[(f_t + b_t - \varepsilon_t)^-]. \qquad (34)$$

In this expression, the prices expected in front of delivery are denoted by

$$\bar\Psi_t^- = E_t(\Psi_t^-), \qquad \bar\Psi_t^0 = \Psi_t^0 = E_t(\Psi_t^0), \qquad \bar\Psi_t^+ = E_t(\Psi_t^+), \qquad t = 0, \dots, T.$$
To simplify our energy storage management, we further agree that the distribution of the prediction errors does not depend on the recent information, in the sense that

$$\varepsilon_t \ \text{and} \ \mathcal{F}_t \ \text{are independent}, \qquad t = 0, \dots, T. \qquad (35)$$
This natural assumption yields a compact form for the expected costs (34):

$$\bar\Psi_t^0(d_t + f_t) + q(d_t + f_t)^2 - \bar\Psi_t^+ h_t^+(f_t + b_t) + \bar\Psi_t^- h_t^-(f_t + b_t), \qquad (36)$$

with functions $h_t^+$ and $h_t^-$ explicitly computable from

$$h_t^+(u) = E[(u - \varepsilon_t)^+], \qquad h_t^-(u) = E[(u - \varepsilon_t)^-], \qquad u \in \mathbb{R}, \ t = 0, \dots, T. \qquad (37)$$
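For instance, if the prediction errors are Gaussian, $\varepsilon_t \sim N(0, \sigma^2)$, then (37) admits the closed forms $h_t^+(u) = u\,\Phi(u/\sigma) + \sigma\,\varphi(u/\sigma)$ and $h_t^-(u) = h_t^+(u) - u$ (using $E[\varepsilon_t] = 0$), where $\varphi$ and $\Phi$ denote the standard normal density and distribution function. The following sketch, with illustrative parameters of our own, checks these formulas against a Monte Carlo estimate.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def phi(x):                       # standard normal density
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def Phi(x):                       # standard normal distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def h_plus(u, sigma):             # h+(u) = E[(u - eps)^+], eps ~ N(0, s^2)
    return u * Phi(u / sigma) + sigma * phi(u / sigma)

def h_minus(u, sigma):            # h-(u) = h+(u) - u, since E[eps] = 0
    return h_plus(u, sigma) - u

# Monte Carlo check of the closed forms (illustrative parameters).
rng = np.random.default_rng(1)
u, sigma = 0.7, 2.0
eps = rng.normal(0.0, sigma, size=1_000_000)
print(h_plus(u, sigma), np.maximum(u - eps, 0.0).mean())
print(h_minus(u, sigma), np.maximum(eps - u, 0.0).mean())
```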
In view of (34)–(37), the expected costs of (32) depend on the energy control variables $f$ and $b$ as

$$(f, b) \mapsto \bar\Psi_t^0(d_t + f) + q(d_t + f)^2 - \bar\Psi_t^+ h_t^+(f + b) + \bar\Psi_t^- h_t^-(f + b).$$

Please note that these are the costs expected at time $t$, and they can be changed by an appropriate adjustment of the decision variables $f, b \in \mathbb{R}$.
For an agent concerned with the minimization of all costs accumulated within the decision period ranging from $t = 0$ to $t = T$, the dynamical aspects are important. Specifically, the decision at time $t$ to use the energy amount $b$ from the storage changes the storage level, which has a distinct impact on the availability of energy in the future, influencing all subsequent decisions.
To formulate our storage control as a dynamic programming problem, we assume that a Markov dynamics $(Z_t)_{t=0}^{T}$ on $(\Omega, \mathcal{F}, P, (\mathcal{F}_t)_{t=0}^{T})$ carries all relevant information. That is, we suppose that $(Z_t)_{t=0}^{T}$ is a Markov process which takes values in $\mathbb{R}^d$. This process describes the evolution of all relevant state variables of the environment. In particular, we assume that the expected prices are represented by functions $\psi_t = (\psi_t^-, \psi_t^0, \psi_t^+) \colon \mathbb{R}^d \to \mathbb{R}^3$ of the state variables, whose components determine the prices as follows:

$$(\bar\Psi_t^-, \bar\Psi_t^0, \bar\Psi_t^+) = \psi_t(Z_t) = (\psi_t^-(Z_t), \psi_t^0(Z_t), \psi_t^+(Z_t)), \qquad t = 0, \dots, T. \qquad (38)$$
In accordance with (30), we require for $t = 0, \dots, T$ that

$$0 \le \psi_t^+(z) < \psi_t^0(z) < \psi_t^-(z), \qquad z \in \mathbb{R}^d. \qquad (39)$$
Furthermore, we suppose that at any time $t = 0, \dots, T$, the conditional expectation of the next period's demand is described in terms of a deterministic function

$$d \colon \{0, \dots, T\} \times \mathbb{R}^d \to \mathbb{R}, \qquad (t, z) \mapsto d_t(z). \qquad (40)$$
Besides the state of the environment $z \in \mathbb{R}^d$, the other important state variable is the current storage level. Having denoted the minimal and the maximal energy content of the battery by $\underline{e}$ and $\bar{e}$ respectively, we describe the storage state by the amount $e$ of energy which is needed to fully charge the battery. With this interpretation, our state variable is the

resource level $e$, which takes values in the interval $I = \,]0, \bar{e} - \underline{e}[\,$. (41)
In view of (34)–(39), the expected costs of (32) are now written in terms of a function

$$C_t^{e,z}(f, b) = \psi_t^0(z)(d_t(z) + f) + q(d_t(z) + f)^2 - \psi_t^+(z)h_t^+(f + b) + \psi_t^-(z)h_t^-(f + b) + \chi(e) \qquad (42)$$

of the resource level $e \in I$, the state variable $z \in \mathbb{R}^d$, and the control variables $f, b \in \mathbb{R}$ for $t = 0, \dots, T$. Please note that in (42) we include the costs $\chi(e)$ of deep discharge corresponding to the resource level $e \in I$, modeled by an

increasing and convex function $\chi \colon I \to \mathbb{R}$. (43)
Remark 4.
Ideally, an understanding of the chemistry of a particular battery would suggest a generic penalization function. However, in practice, there are diverse approaches to assessing (in economic terms) how battery life is affected by visiting a deep discharge state. Here, the usual cycle life graph (as in Figure 1) does not detail all aspects required by our approach. The point is that the cycle life is determined by charge/discharge periods oscillating linearly between the minimal and the maximal capacity, rather than by a certain strategy run within a random environment. Such cycle life diagrams do not provide any information on whether it could be worth visiting a deep discharge state for a short period of time in order to catch an electricity price spike. Given the diversity of battery storage technologies, there is no simple way of determining an appropriate penalization function. For this reason, the authors suggest a pragmatic approach: for a pre-specified penalization as depicted in Figure 2, the user calculates the corresponding optimal strategy, which must then be examined and assessed in simulations. If required by the simulation results, the penalty function can be altered, followed by another round of optimization and simulation. Such attempts can be repeated until satisfactory results (considering battery life expectation, peak load response, and total revenue) are reached. One hypothetical parametric choice of such a penalization is sketched below.
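The following snippet shows one hypothetical parametric family for the penalization: a squared hinge with a small linear term, which is increasing and convex on $I$ as required by (43) and penalizes mainly the shortage levels above a user-chosen threshold. Both the functional form and the parameters are our illustration, not prescribed by the model.

```python
def chi(e, e0=30.0, scale=0.02, slope=1e-3):
    """Hypothetical deep-discharge penalty chi(e): the small linear term
    keeps it strictly increasing, and the squared hinge adds convex
    growth above the threshold e0, as required by (43).
    Here e is the shortage level in MWh."""
    return slope * e + scale * max(e - e0, 0.0) ** 2

# For a 60 MWh battery (I = ]0, 60[), mainly shortage above 30 MWh is penalized.
print([round(chi(e), 3) for e in (10.0, 30.0, 45.0, 59.0)])
```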
Now, let us formulate the Bellman recursion for our battery control problem. Recall that in our modeling, the state variables at time $t = 0, \dots, T$ comprise the market situation, described by the realization $z = Z_t$ of the environment state process, and the current resource level $e \in I$. Having supposed that at the final date $t = T$, in the environment state $z = Z_T$, the entire battery storage content $\bar{e} - \underline{e} - e$ can be sold at the market price $\psi_T^0(z)$, we agree that the terminal cost function is

$$C_T(e, z) = -\psi_T^0(z)(\bar{e} - \underline{e} - e), \qquad (e, z) \in I \times \mathbb{R}^d. \qquad (44)$$
This quantity determines the value function $V_T$ at the final time $T$ by

$$V_T(e, z) = C_T(e, z), \qquad (e, z) \in I \times \mathbb{R}^d. \qquad (45)$$

Prior to the terminal time, for $t = 0, \dots, T-1$, the backward induction yields the value functions $(V_t)_{t=0}^{T-1}$ recursively by

$$\tilde V_{t+1}(e, z) = E\big(V_{t+1}(e, Z_{t+1}) \mid Z_t = z\big), \qquad (e, z) \in I \times \mathbb{R}^d, \qquad (46)$$

$$V_t(e, z) = \min_{(f, b) \in \Xi(e, z)}\big[C_t^{e,z}(f, b) + \tilde V_{t+1}(e + b, z)\big], \qquad (e, z) \in I \times \mathbb{R}^d, \qquad (47)$$

where the minimum is taken over the set $\Xi(e, z)$ of all admissible controls

$$\Xi(e, z) = \{(f, b) : f, b \in \mathbb{R}, \ e + b \in I\}, \qquad e \in I, \qquad (48)$$
due to the restriction that the energy transfer from/to the storage is limited by the storage capacity. However, in our approach, many arguments rely on the assumption that the set of admissible decisions does not depend on the current situation. To replace the minimization over the admissible controls $\Xi(e, z)$ in (47) by a minimization over the unrestricted set

$$\Xi = \{(f, b) : f, b \in \mathbb{R}\},$$

we introduce a penalization of the violation $(f, b) \notin \Xi(e, z)$, having in mind that for a sufficiently strong penalization it will never be optimal to violate the restriction $e + b \in I$. This concept is realized by the following recursion:
$$V_T(e, z) = C_T(e, z), \qquad (e, z) \in I \times \mathbb{R}^d, \qquad (49)$$

$$\tilde V_{t+1}(e, z) = \begin{cases} E\big(V_{t+1}(e, Z_{t+1}) \mid Z_t = z\big) & e \in I \\ +\infty & e \notin I \end{cases}, \qquad z \in \mathbb{R}^d, \qquad (50)$$

$$V_t(e, z) = \inf_{(f, b) \in \Xi}\big[C_t^{e,z}(f, b) + \tilde V_{t+1}(e + b, z)\big], \qquad (e, z) \in I \times \mathbb{R}^d. \qquad (51)$$

Please note that with this definition, the functions $(V_t)_{t=0}^{T}$ satisfy (45)–(48) if and only if they fulfill (49)–(51).

5. Variables Reduction for Battery Control

In this section, we adapt our variables reduction technique to the specific situation described in Section 4. To this end, we show that all assumptions of Theorem 1 are fulfilled. Recall from our modeling in Section 4 that the set of activities is given by

$$\Xi = \{(f, b) : f \in \mathbb{R}, \ b \in \mathbb{R}\}, \qquad (52)$$
with the interpretation that $f$ and $b$ represent the energy from trading and that from the battery, respectively. On this account, we have assumed that the costs and the resource consumption functions are given by

$$C_t^{e,z}(f, b) = \psi_t^0(z)(d_t(z) + f) + q(d_t(z) + f)^2 - \psi_t^+(z)h_t^+(f + b) + \psi_t^-(z)h_t^-(f + b) + \chi(e), \qquad (53)$$

$$E_t^{e,z}(f, b) = b, \qquad z \in \mathbb{R}^d, \ (f, b) \in \Xi, \ t = 0, \dots, T-1. \qquad (54)$$
Here, the coefficient $q > 0$ represents the price elasticity, whereas the functions $h_t^+$ and $h_t^-$ are defined by (37) as

$$h_t^+(u) = E[(u - \varepsilon_t)^+] = \int_{\mathbb{R}} (u - \epsilon)^+ \, P_{\varepsilon_t}(d\epsilon), \qquad h_t^-(u) = E[(u - \varepsilon_t)^-] = \int_{\mathbb{R}} (u - \epsilon)^- \, P_{\varepsilon_t}(d\epsilon),$$

in terms of the distribution $P_{\varepsilon_t}$ of the demand prediction error $\varepsilon_t$; this distribution is non-random by assumption (35). Define the function

$$h_t(z, u) = \int_{\mathbb{R}} \big[-\psi_t^+(z)(u - \epsilon)^+ + \psi_t^-(z)(u - \epsilon)^-\big] \, P_{\varepsilon_t}(d\epsilon), \qquad z \in \mathbb{R}^d, \ u \in \mathbb{R}, \qquad (55)$$
and observe that

$$u \mapsto h_t(z, u) \ \text{is convex and non-increasing on } \mathbb{R} \ \text{for each } z \in \mathbb{R}^d, \qquad (56)$$

due to $0 \le \psi_t^+(z) < \psi_t^-(z)$ as stated in (30). Using (55), we rewrite (53) as
$$C_t^{e,z}(f, b) = \psi_t^0(z)(d_t(z) + f) + q(d_t(z) + f)^2 + h_t(z, f + b) + \chi(e), \qquad (f, b) \in \Xi, \ (e, z) \in I \times \mathbb{R}^d. \qquad (57)$$

With this representation, the convexity required by (24) is obvious, since the functions $(f, b) \mapsto q(d_t(z) + f)^2$ and $(f, b) \mapsto h_t(z, f + b)$ are convex on $\Xi$, while $e \mapsto \chi(e)$ is convex on $I$.
To verify (22), consider $\xi = (f, b) \in \Xi$ and $e, e' \in I$ with $e < e'$, and define

$$\xi' = (f', b') = (f, b + e' - e),$$

which yields the same resource level

$$e + E_t^{e,z}(\xi') = e' + b = e' + E_t^{e',z}(\xi)$$

at no greater costs:

$$C_t^{e,z}(\xi') = \psi_t^0(z)(d_t(z) + f) + q(d_t(z) + f)^2 + h_t(z, f + b + e' - e) + \chi(e) \le \psi_t^0(z)(d_t(z) + f) + q(d_t(z) + f)^2 + h_t(z, f + b) + \chi(e') = C_t^{e',z}(\xi),$$

since $h_t$ is non-increasing in the second variable by (56), and the deep discharge penalization function $\chi$ is increasing by assumption (43).
Now let us turn to the Legendre-type transform and verify (21) and (23). To obtain explicit expressions, let us suppose for the sake of concreteness that

for $t = 0, \dots, T-1$, the distribution $P_{\varepsilon_t}$ possesses a strictly increasing and continuous distribution function $F_{\varepsilon_t}$. (58)

Under the assumption (58), the partial derivative of the function $h_t$ from (55) with respect to the second variable is evaluated explicitly as

$$\partial^{(0,1)} h_t(z, u) = (\psi_t^-(z) - \psi_t^+(z))F_{\varepsilon_t}(u) - \psi_t^-(z), \qquad u \in \mathbb{R}. \qquad (59)$$
Given $A \in \mathbb{R}$, consider the minimization (12) of $C_t^{e,z}(\xi) + A E_t^{e,z}(\xi)$ over $\xi \in \Xi$, which amounts to the minimization of the convex and smooth function

$$\mathbb{R}^2 \to \mathbb{R}, \qquad (f, b) \mapsto \psi_t^0(z)(d_t(z) + f) + q(d_t(z) + f)^2 + h_t(z, f + b) + A b, \qquad (60)$$

whose first-order conditions are

$$\psi_t^0(z) + 2q(d_t(z) + f) + \partial^{(0,1)} h_t(z, f + b) = 0, \qquad \partial^{(0,1)} h_t(z, f + b) + A = 0. \qquad (61)$$

These conditions are fulfilled by the $(f, b) \in \mathbb{R}^2$ obtained from

$$\psi_t^0(z) + 2q(d_t(z) + f) = A, \qquad \partial^{(0,1)} h_t(z, f + b) = -A.$$
That is, the unique minimizer of (60) exists for $A$ with $0 \le \psi_t^+(z) < A < \psi_t^-(z)$ and is given by the solution of (61) as

$$f_t^{e,z}(A) = \frac{A - \psi_t^0(z)}{2q} - d_t(z), \qquad (62)$$

$$b_t^{e,z}(A) = F_{\varepsilon_t}^{-1}\left(\frac{\psi_t^-(z) - A}{\psi_t^-(z) - \psi_t^+(z)}\right) - \frac{A - \psi_t^0(z)}{2q} + d_t(z), \qquad (63)$$

giving the minimizer

$$X_t^{e,z}(A) = (f_t^{e,z}(A), b_t^{e,z}(A))$$

for (21) over the relevant interval

$$A \in \mathcal{A}^{e,z} = \,]\psi_t^+(z), \psi_t^-(z)[\,. \qquad (64)$$
Furthermore, assumption (23) is satisfied, since

$$A \mapsto E_t^{e,z}(X_t^{e,z}(A)) = b_t^{e,z}(A)$$

is continuous and strictly decreasing over the interval (64), ranging from the positive value

$$\lim_{A \downarrow \psi_t^+(z)} b_t^{e,z}(A) = F_{\varepsilon_t}^{-1}(1^-) - \frac{\psi_t^+(z) - \psi_t^0(z)}{2q} + d_t(z) > 0$$

to the negative value

$$\lim_{A \uparrow \psi_t^-(z)} b_t^{e,z}(A) = F_{\varepsilon_t}^{-1}(0^+) - \frac{\psi_t^-(z) - \psi_t^0(z)}{2q} + d_t(z) < 0.$$
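To illustrate (62)–(64) numerically, the sketch below evaluates the one-parameter family of minimizers for Gaussian prediction errors, for which the quantile function is $F_{\varepsilon_t}^{-1}(p) = \sigma\,\Phi^{-1}(p)$. All price values and parameters are illustrative assumptions; the scan confirms that $b_t^{e,z}(A)$ decreases and changes sign on the interval (64), as required by (23).

```python
from statistics import NormalDist

def X(A, psi_minus, psi_plus, psi0, q, d, sigma):
    """One-parameter family X(A) = (f(A), b(A)) of (62)-(63) for
    Gaussian errors eps ~ N(0, sigma^2), A in ]psi_plus, psi_minus[."""
    f = (A - psi0) / (2.0 * q) - d
    p = (psi_minus - A) / (psi_minus - psi_plus)   # quantile level in ]0,1[
    b = sigma * NormalDist().inv_cdf(p) - (A - psi0) / (2.0 * q) + d
    return f, b

# Illustrative prices satisfying 0 <= psi+ < psi0 < psi-, cf. (30).
psi_plus, psi0, psi_minus, q, d, sigma = 20.0, 40.0, 80.0, 0.5, 10.0, 3.0

for A in (25.0, 40.0, 55.0, 70.0):                 # scan the interval (64)
    f, b = X(A, psi_minus, psi_plus, psi0, q, d, sigma)
    print(f"A = {A:5.1f}   f = {f:8.2f}   b = {b:8.2f}")
```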
All ingredients required for the reduced recursion (14)–(16) are now specified, for $t = 0, \dots, T-1$, $z \in \mathbb{R}^d$, and $A \in \mathcal{A}^{e,z}$, by the quantities

$$E_t^{e,z}(X_t^{e,z}(A)) = b_t^{e,z}(A), \qquad C_t^{e,z}(X_t^{e,z}(A)) = \psi_t^0(z)(d_t(z) + f_t^{e,z}(A)) + q(d_t(z) + f_t^{e,z}(A))^2 + h_t(z, f_t^{e,z}(A) + b_t^{e,z}(A)) + \chi(e).$$

A toy numerical sketch of the resulting reduced recursion is given below.
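The following toy implementation runs the reduced Bellman recursion (14)–(16) for the battery problem under strong simplifying assumptions of our own: a single constant environment state (so the conditional expectation in (15) is trivial), Gaussian prediction errors, a grid over the shortage level $e$, and a scan over a grid of virtual prices $A$ on the curve (64). It is meant to show the mechanics of the reduction, not to reproduce the authors' computations.

```python
import numpy as np
from statistics import NormalDist

# Illustrative parameters: one environment state, eps ~ N(0, sigma^2),
# a 60 MWh battery, and T = 24 decision periods (all values assumed).
psi_plus, psi0, psi_minus = 20.0, 40.0, 80.0       # prices obeying (30)
q, d, sigma, cap, T = 0.5, 10.0, 3.0, 60.0, 24
nd = NormalDist()

def h_plus(u):                 # h+(u) = E[(u - eps)^+]
    x = u / sigma
    return u * nd.cdf(x) + sigma * nd.pdf(x)

def h(u):                      # h(u) = -psi+ h+(u) + psi- h-(u), cf. (55)
    return -psi_plus * h_plus(u) + psi_minus * (h_plus(u) - u)

def chi(e):                    # hypothetical deep-discharge penalty (43)
    return 1e-3 * e + 0.02 * max(e - cap / 2.0, 0.0) ** 2

# Pre-compute the candidate curve (62)-(64): actions X(A) = (f, b) and
# their state-independent cost parts, on a grid of virtual prices A.
A_grid = np.linspace(psi_plus + 0.1, psi_minus - 0.1, 400)
curve = []
for A in A_grid:
    f = (A - psi0) / (2.0 * q) - d
    b = sigma * nd.inv_cdf((psi_minus - A) / (psi_minus - psi_plus)) - f
    curve.append((b, psi0 * (d + f) + q * (d + f) ** 2 + h(f + b)))

e_grid = np.linspace(0.5, cap - 0.5, 120)          # shortage levels in I
V = -psi0 * (cap - e_grid)                         # terminal values (44)

for t in range(T - 1, -1, -1):                     # recursion (14)-(16)
    V_next, V = V, np.empty_like(V)
    for i, e in enumerate(e_grid):
        best = np.inf
        for b, cost in curve:                      # scan the curve only
            e_new = e + b                          # resource transition, E = b
            if 0.0 < e_new < cap:                  # infinite penalty otherwise
                best = min(best, cost + np.interp(e_new, e_grid, V_next))
        V[i] = best + chi(e)
    # With several environment states, V_next would be replaced here by
    # the conditional expectation (15) before the minimization.

print(np.round(V[:5], 1))                          # values at t = 0
```

The point of the reduction is visible in the inner loop: instead of searching over all pairs $(f, b) \in \mathbb{R}^2$, only the one-dimensional curve of candidates indexed by the virtual price $A$ is scanned.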

6. The Variables Reduction Technique

This section is devoted to the derivation of Theorem 1. Let us prepare the proof by gradually establishing auxiliary results. Our arguments rely on the convexity of the functions $(V_t)_{t=0}^{T}$ from (9)–(11). It turns out that, based on the assumption

$\tilde V_{t+1}(\cdot, z)$ is convex and non-decreasing on $I$, and $\tilde V_{t+1}(\cdot, z) = +\infty$ on $\mathbb{R} \setminus I$, for all $z \in \mathbb{R}^d$, (65)

the same properties can be deduced for $V_t(\cdot, z)$, as formulated in the following lemma.
Lemma 1.
Given the costs and the resource consumption (18), let $t \in \{0, \dots, T-1\}$, suppose that (65) holds, and consider $V_t(\cdot, z)$ defined by (11). If (22) holds, then $V_t(\cdot, z)$ is non-decreasing on $I$ for all $z \in \mathbb{R}^d$. If in addition (24) is satisfied, then $V_t(\cdot, z)$ is non-decreasing and convex on $I$ for all $z \in \mathbb{R}^d$.
Proof. 
Given $z \in \mathbb{R}^d$, $e, e' \in I$ with $e < e'$, and an arbitrary $\xi \in \Xi$, there exists by (22) an action $\xi' \in \Xi$ with

$$e + E_t^{e,z}(\xi') = e' + E_t^{e',z}(\xi) \quad \text{and} \quad C_t^{e,z}(\xi') \le C_t^{e',z}(\xi),$$

which gives

$$C_t^{e,z}(\xi') + \tilde V_{t+1}(e + E_t^{e,z}(\xi'), z) \le C_t^{e',z}(\xi) + \tilde V_{t+1}(e' + E_t^{e',z}(\xi), z)$$

and shows $V_t(e, z) \le V_t(e', z)$ by minimization in $\xi'$ and $\xi$ over $\Xi$.
Now we turn to the convexity. Using the infimum in (11), for each $\delta > 0$ there exist $\xi, \xi' \in \Xi$ such that

$$\frac{\delta}{2} + V_t(e, z) \ge C_t^{e,z}(\xi) + \tilde V_{t+1}(e + E_t^{e,z}(\xi), z), \qquad \frac{\delta}{2} + V_t(e', z) \ge C_t^{e',z}(\xi') + \tilde V_{t+1}(e' + E_t^{e',z}(\xi'), z).$$

Please note that these values are finite due to (20), which means that

$$e + E_t^{e,z}(\xi) \in I, \qquad e' + E_t^{e',z}(\xi') \in I.$$
Thus, by convex combination of the activities ($\Xi$ is convex by (24)) and of the resource levels, we obtain for $\lambda \in [0, 1]$

$$\xi(\lambda) = \lambda\xi + (1 - \lambda)\xi' \in \Xi, \qquad e(\lambda) = \lambda e + (1 - \lambda)e' \in I.$$

Now, by the convexity (24) of the cost functions it holds that

$$\lambda C_t^{e,z}(\xi) + (1 - \lambda)C_t^{e',z}(\xi') \ge C_t^{e(\lambda),z}(\xi(\lambda)),$$

and using assumption (65) we conclude

$$\lambda\tilde V_{t+1}(e + E_t^{e,z}(\xi), z) + (1 - \lambda)\tilde V_{t+1}(e' + E_t^{e',z}(\xi'), z) \ge \tilde V_{t+1}\big(\lambda e + (1 - \lambda)e' + \lambda E_t^{e,z}(\xi) + (1 - \lambda)E_t^{e',z}(\xi'), z\big) \ge \tilde V_{t+1}\big(e(\lambda) + E_t^{e(\lambda),z}(\xi(\lambda)), z\big).$$

From this, we proceed with

$$\delta + \lambda V_t(e, z) + (1 - \lambda)V_t(e', z) \ge C_t^{e(\lambda),z}(\xi(\lambda)) + \tilde V_{t+1}\big(e(\lambda) + E_t^{e(\lambda),z}(\xi(\lambda)), z\big) \ge \inf_{\xi'' \in \Xi}\big[C_t^{e(\lambda),z}(\xi'') + \tilde V_{t+1}(e(\lambda) + E_t^{e(\lambda),z}(\xi''), z)\big] = V_t(e(\lambda), z),$$

which, since $\delta > 0$ was arbitrary, shows the desired convexity. □
In what follows, we recall the notion of the sub-gradient $\partial g$ of a convex function $g \colon \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ ($n \in \mathbb{N}$), which is defined at each point $u \in \mathbb{R}^n$ as the family of linear functionals

$$\partial g(u) = \{l \colon \mathbb{R}^n \to \mathbb{R} \ : \ l \text{ is linear with } g(u) + l(u' - u) \le g(u') \text{ for all } u' \in \mathbb{R}^n\}.$$

For a function $V$ on $I \times \mathbb{R}^d$ as in (65), which is convex in the first component on $I$, we agree to consider its sub-gradient in the first component only for arguments in $I$. More precisely, for $(e, z) \in I \times \mathbb{R}^d$ we write $\partial V(e, z)$ to denote the set of linear functionals $l \colon \mathbb{R} \to \mathbb{R}$ satisfying

$$V(e, z) + l(e' - e) \le V(e', z) \qquad \text{for all } e' \in \mathbb{R}.$$
Now, let us elaborate on the fixed-point property of the virtual resource price in the optimal regime, in the spirit of Remark 1.
Lemma 2.
Assume that the costs and the resource consumption functions are given as in (18). Let $t \in \{0, \dots, T-1\}$ and let $\tilde V_{t+1}$ be a function which satisfies (65); furthermore, suppose that (21) holds. If for $(e, z) \in I \times \mathbb{R}^d$ there exists $A^* \in \mathcal{A}^{e,z}$ satisfying the fixed-point equation

$$A^* \in \partial\tilde V_{t+1}(e + E_t^{e,z}(X_t^{e,z}(A^*)), z), \qquad (66)$$

then $\xi^* = X_t^{e,z}(A^*)$ is a minimizer in (11).
Proof. 
By assumption (21), the minimizer $\xi^* = X_t^{e,z}(A^*)$ of (12) satisfies

$$C_t^{e,z}(\xi^*) + A^* E_t^{e,z}(\xi^*) \le C_t^{e,z}(\xi) + A^* E_t^{e,z}(\xi) \qquad (67)$$

for each $\xi \in \Xi$, which is equivalent to

$$C_t^{e,z}(\xi^*) - C_t^{e,z}(\xi) \le -A^*\big(E_t^{e,z}(\xi^*) - E_t^{e,z}(\xi)\big). \qquad (68)$$

On the other hand, since $A^* \in \partial\tilde V_{t+1}(e + E_t^{e,z}(\xi^*), z)$ by assumption (66), we conclude that

$$A^*\big(E_t^{e,z}(\xi) - E_t^{e,z}(\xi^*)\big) \le \tilde V_{t+1}(e + E_t^{e,z}(\xi), z) - \tilde V_{t+1}(e + E_t^{e,z}(\xi^*), z). \qquad (69)$$

By combining (68) and (69),

$$C_t^{e,z}(\xi^*) - C_t^{e,z}(\xi) \le \tilde V_{t+1}(e + E_t^{e,z}(\xi), z) - \tilde V_{t+1}(e + E_t^{e,z}(\xi^*), z),$$

we obtain the desired assertion

$$C_t^{e,z}(\xi^*) + \tilde V_{t+1}(e + E_t^{e,z}(\xi^*), z) \le C_t^{e,z}(\xi) + \tilde V_{t+1}(e + E_t^{e,z}(\xi), z).$$
 □
The above lemma suggests determining a minimizer $\xi^*$ in (11) as $\xi^* = X_t^{e,z}(A^*)$, where $A^*$ is obtained as a solution to the fixed-point relation (66). However, according to the dynamic programming principle, the virtual resource price $A^* \in \mathcal{A}^{e,z}$ shall be determined by the minimization (27) rather than by solving a fixed-point relation. The result below shows that, under natural assumptions, (27) in fact implies (66).
Lemma 3.
Given the costs and the resource consumption (18), let $t \in \{0, \dots, T-1\}$ and $(e, z) \in I \times \mathbb{R}^d$, suppose that (65) holds, and assume that (21) and (23) are satisfied. If $A^* = A^{*,e,z}$ fulfills (27), then $A^*$ also fulfills the fixed-point equation (66).
Proof. 
Given $(e, z) \in I \times \mathbb{R}^d$, suppose that $A^* = A^{*,e,z} \in \mathcal{A}^{e,z}$ satisfies (27). Recall that by assumption (23) there exists a root $A_0 \in \mathcal{A}^{e,z}$ such that $E_t^{e,z}(X_t^{e,z}(A_0)) = 0$. Since this root is included in the minimization, with the finite (due to $e \in I$) value

$$\tilde V_{t+1}(e + E_t^{e,z}(X_t^{e,z}(A_0)), z) = \tilde V_{t+1}(e, z) < \infty,$$

the infimum is also finite:

$$C_t^{e,z}(X_t^{e,z}(A^*)) + \tilde V_{t+1}(e + E_t^{e,z}(X_t^{e,z}(A^*)), z) < \infty.$$
By the minimum property (27), for each $A \in \mathcal{A}^{e,z}$ it holds that

$$C_t^{e,z}(X_t^{e,z}(A^*)) + \tilde V_{t+1}(e + E_t^{e,z}(X_t^{e,z}(A^*)), z) \le C_t^{e,z}(X_t^{e,z}(A)) + \tilde V_{t+1}(e + E_t^{e,z}(X_t^{e,z}(A)), z). \qquad (70)$$

Furthermore, by the definition of $X_t^{e,z}(A)$, it holds on the other hand that

$$C_t^{e,z}(X_t^{e,z}(A)) + A E_t^{e,z}(X_t^{e,z}(A)) \le C_t^{e,z}(\xi) + A E_t^{e,z}(\xi)$$

for each $\xi \in \Xi$. Replacing $\xi$ by $X_t^{e,z}(A^*)$, we deduce

$$C_t^{e,z}(X_t^{e,z}(A)) \le C_t^{e,z}(X_t^{e,z}(A^*)) + A\big(E_t^{e,z}(X_t^{e,z}(A^*)) - E_t^{e,z}(X_t^{e,z}(A))\big),$$

which we combine with (70) to obtain

$$\tilde V_{t+1}(e + E_t^{e,z}(X_t^{e,z}(A^*)), z) \le A\big(E_t^{e,z}(X_t^{e,z}(A^*)) - E_t^{e,z}(X_t^{e,z}(A))\big) + \tilde V_{t+1}(e + E_t^{e,z}(X_t^{e,z}(A)), z). \qquad (71)$$
Introducing, for fixed $(e, z) \in I \times \mathbb{R}^d$, the notations

$$\mathcal{E}(A) = e + E_t^{e,z}(X_t^{e,z}(A)), \quad A \in \mathcal{A}^{e,z}, \qquad \tilde V(\epsilon) = \tilde V_{t+1}(\epsilon, z),$$

the inequality (71) reads

$$\tilde V(\mathcal{E}(A^*)) + A\big(\mathcal{E}(A) - \mathcal{E}(A^*)\big) \le \tilde V(\mathcal{E}(A)), \qquad A \in \mathcal{A}^{e,z}. \qquad (72)$$
For a strictly increasing sequence $(A_n^+)_{n \ge 1} \subseteq \mathcal{A}^{e,z}$ converging to $A^*$, define the sequence

$$\Delta_n^+ = \mathcal{E}(A_n^+) - \mathcal{E}(A^*), \qquad n \ge 1,$$

which is strictly decreasing, converges to zero due to (23), and satisfies

$$\tilde V(\mathcal{E}(A^*)) + A_n^+\Delta_n^+ \le \tilde V(\mathcal{E}(A^*) + \Delta_n^+) \qquad (73)$$
because of (72). Similarly, for a strictly decreasing sequence $(A_n^-)_{n \ge 1} \subseteq \mathcal{A}^{e,z}$ converging to $A^*$, define the sequence

$$\Delta_n^- = \mathcal{E}(A_n^-) - \mathcal{E}(A^*), \qquad n \ge 1,$$

which is strictly increasing, converges to zero due to (23), and satisfies

$$\tilde V(\mathcal{E}(A^*)) + A_n^-\Delta_n^- \le \tilde V(\mathcal{E}(A^*) + \Delta_n^-). \qquad (74)$$
Now apply Lemma 4 with the above sequences $(\Delta_n^+)_{n \ge 1}$, $(A_n^+)_{n \ge 1}$ and $(\Delta_n^-)_{n \ge 1}$, $(A_n^-)_{n \ge 1}$, using their properties (73) and (74), to deduce the assertion

$$A^* \in \partial\tilde V(\mathcal{E}(A^*)) = \partial\tilde V_{t+1}(e + E_t^{e,z}(X_t^{e,z}(A^*)), z).$$
 □
Lemma 4.
Let $\tilde V \colon \mathbb{R} \to \mathbb{R} \cup \{\infty\}$ be a convex function and $\epsilon \in \mathbb{R}$ such that $\tilde V(\epsilon) < \infty$. Suppose that there exist sequences $(\Delta_n^+)_{n \ge 1} \subseteq \,]0, \infty[$ and $(\Delta_n^-)_{n \ge 1} \subseteq \,]-\infty, 0[$ converging to zero, and sequences $(A_n^-)_{n \ge 1}$, $(A_n^+)_{n \ge 1}$ converging to $A^* \in \mathbb{R}$, such that

$$\tilde V(\epsilon) + A_n^+\Delta_n^+ \le \tilde V(\epsilon + \Delta_n^+), \qquad \tilde V(\epsilon) + A_n^-\Delta_n^- \le \tilde V(\epsilon + \Delta_n^-), \qquad \text{for all } n \ge 1. \qquad (75)$$

Then it holds that $A^* \in \partial\tilde V(\epsilon)$.
Proof. 
Suppose, on the contrary, that $A^* \notin \partial\tilde V(\epsilon)$: then there exists $\delta \in \mathbb{R}$ with

$$\tilde V(\epsilon) + A^*\delta > \tilde V(\epsilon + \delta). \qquad (76)$$

First, we assume that $\delta > 0$; then there exists $A_0 < A^*$ such that

$$\tilde V(\epsilon) + A_0\delta > \tilde V(\epsilon + \delta),$$

and since $\tilde V$ is convex, the above inequality holds for all intermediate points:

$$\tilde V(\epsilon) + A_0\Delta > \tilde V(\epsilon + \Delta), \qquad \Delta \in \,]0, \delta[\,. \qquad (77)$$

However, for sufficiently large $n \in \mathbb{N}$ we have $A_n^+ > A_0$, such that

$$\tilde V(\epsilon) + A_n^+\Delta_n^+ > \tilde V(\epsilon) + A_0\Delta_n^+ > \tilde V(\epsilon + \Delta_n^+),$$

which gives a contradiction to (75) for $n \in \mathbb{N}$ sufficiently large to satisfy $\Delta_n^+ \in \,]0, \delta[$. A similar argument in the alternative situation $\delta < 0$ also shows a contradiction and completes the proof. □
Finally, let us combine the outcomes of Lemmas 1–4 into our technique for control variables reduction. Consider the problem defined in Section 3 under the assumptions formulated there. Let us prove Theorem 1.
Proof. 
We proceed by induction to show that

the functions $(V_t(\cdot, z))_{t=0}^{T}$ and $(\tilde V_t(\cdot, z))_{t=0}^{T}$ are convex and non-decreasing on $I$ for each $z \in \mathbb{R}^d$. (78)
Since for each $z \in \mathbb{R}^d$ the function

$$I \to \mathbb{R}, \qquad e \mapsto C_T(e, z)$$

is convex and non-decreasing by (24), the initialization (10) ensures that $\tilde V_T(\cdot, z)$ is convex and non-decreasing on $I$. Please note that all conditions of Lemma 1 are fulfilled by our assumptions; thus, for $t = T-1$ we conclude that $V_{T-1}(\cdot, z)$ is also convex and non-decreasing on $I$ for each $z \in \mathbb{R}^d$, and, calculating the conditional expectation, we observe that $\tilde V_{T-1}(\cdot, z)$ is also convex and non-decreasing on $I$ for each $z \in \mathbb{R}^d$. Proceeding inductively for $t = T-1, \dots, 0$ with the same argumentation, the assertion (78) follows.
Now we turn to the main claim (25). Since (9) and (10) coincide with (14) and (15), we obtain $\tilde V_T = \tilde v_T$. With this, we apply Lemma 3, whose assumption (65) is satisfied because of (78), ensuring that $\tilde V_T = \tilde v_T$ is convex and non-decreasing in the first argument on $I$. The further conditions (21) and (23) of this lemma also hold. Moreover, since we have supposed in (16) that the minimum is attained, say at $A^* = A^{*,e,z}$, the fixed-point property (66)

$$A^{*,e,z} \in \partial\tilde V_T\big(e + E_{T-1}^{e,z}(X_{T-1}^{e,z}(A^{*,e,z})), z\big)$$

holds for all $(e, z) \in I \times \mathbb{R}^d$. Using Lemma 2, we conclude that

$$\xi_{T-1}^* = X_{T-1}^{e,z}(A^{*,e,z})$$

is a minimizer in (11) for $t = T-1$, showing that $V_{T-1} = v_{T-1}$. Repeating the argumentation for $t = T-1, \dots, 1$, the desired assertion $v_t = V_t$ for $t = T, \dots, 0$ follows inductively. □

7. Conclusions

New technologies have triggered growing attention to optimization algorithms for the operational management of energy storage facilities. In a generic setting, a typical electricity retailer determines an optimal strategy for the purchase, procurement, generation, and storage of electrical power while taking into account fluctuating energy prices, storage costs, limited storage capacity, and uncertain production rates. Such problems are challenging, since numerous decision and control variables must be considered within a potentially high-dimensional setting. This paper addresses a variables reduction technique for such problems. Within an abstract but natural framework, we show that a certain Legendre-type transform can be applied to equivalently reformulate the original strategy optimization into a stochastic control problem driven by a single one-parameter family of decision variables. It turns out that the presented technique is sufficiently general to be applied to diverse dynamic resource allocation problems involving scarce reserves. These problems encompass a wide range of important areas, including mining operations, pension fund management, and emission abatement. The authors believe that their approach can be used as a starting point for efficient numerical algorithms and will address this topic in future research.

Author Contributions

Conceptualization, J.H.; methodology, J.H.; validation, J.H. and T.V.; formal analysis, J.H. and T.V.; investigation, J.H.; resources, J.H. and T.V.; data curation, J.H.; writing–original draft preparation, J.H. and T.V.; writing–review and editing, J.H. and T.V.; visualization, J.H. and T.V.; supervision, J.H. and T.V.; project administration, T.V.; funding acquisition, T.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was completed while the first author was visiting the Department of Mathematics of the University of Padova, thanks to the funding provided by the program “Visiting Scientist 2017” of the University of Padova. This research was also funded by the University of Padova grants BIRD172407-2017 “New perspectives in stochastic methods for finance and energy markets” and BIRD190200/19 “Term Structure Dynamics in Interest Rate and Energy Markets: Modelling and Numerics”, which we gratefully acknowledge.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Trigeorgis, L. Real Options: Managerial Flexibility and Strategy in Resource Allocation; MIT Press: Cambridge, MA, USA, 1996. [Google Scholar]
  2. Trigeorgis, L. The nature of option interactions and the valuation of investments with multiple real options. J. Financ. Quant. Anal. 1993, 28, 1–20. [Google Scholar] [CrossRef] [Green Version]
  3. Ross, S.A. A Simple Approach to the Valuation of Risky Streams. J. Bus. 1978, 51, 453–475. [Google Scholar] [CrossRef]
  4. McDonald, R.L.; Siegel, D.R. Investment and the Valuation of Firms When There is an Option to Shut Down. Int. Econ. Rev. 1985, 26, 331–349. [Google Scholar] [CrossRef]
  5. Dias, M. Valuation of exploration and production assets: An overview of real options models. J. Pet. Sci. Eng. 2004, 44, 93–114. [Google Scholar] [CrossRef]
  6. Jaimungal, S.; de Souza, M.O.; Zubelli, J.P. Real option pricing with mean-reverting investment and project value. Eur. J. Financ. 2013, 19, 625–644. [Google Scholar] [CrossRef] [Green Version]
  7. Cortazar, G.; Schwartz, E.S.; Casassus, J. Optimal exploration investments under price and geological-technical uncertainty: A real options model. R&D Manag. 2001, 31, 181–189. [Google Scholar] [CrossRef]
  8. Devalkar, S.K.; Anupindi, R.; Sinha, A. Integrated Optimization of Procurement, Processing, and Trade of Commodities. Oper. Res. 2011, 59, 1369–1381. [Google Scholar] [CrossRef] [Green Version]
  9. Oeksendal, B.; Sulem, A. Applied Stochastic Control of Jump Diffusions; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2005. [Google Scholar]
  10. Pham, H. Continuous-Time Stochastic Control and Optimization with Financial Applications, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  11. Gobet, E.; Lemor, J.P.; Warin, X. A regression-based Monte Carlo method to solve backward stochastic differential equations. Ann. Appl. Probab. 2005, 15, 2172–2202. [Google Scholar] [CrossRef] [Green Version]
  12. Gobet, E.; Lemor, J.P. Numerical simulation of BSDEs using empirical regression methods: Theory and practice. arXiv 2008, arXiv:0806.4447. [Google Scholar]
  13. Bender, C.; Dokuchaev, N. A First Order BSPDE for Swing Option Pricing. Math. Finance 2016, 26, 461–491. [Google Scholar] [CrossRef] [Green Version]
  14. Bäuerle, N.; Rieder, U. Markov Decision Processes with Applications to Finance; Springer: Heidelberg, Germany, 2011. [Google Scholar] [CrossRef]
  15. Puterman, M. Markov Decision Processes: Discrete Stochastic Dynamic Programming; Wiley: New York, NY, USA, 1994. [Google Scholar]
  16. Powell, W.B. Approximate Dynamic Programming: Solving the Curses of Dimensionality; Wiley: Hoboken, NJ, USA, 2007. [Google Scholar]
  17. Hinz, J.; Yee, J. Solving Control Problems with Linear State Dynamics—A Practical User Guide. In Proceedings of the 2016 Second International Symposium on Stochastic Models in Reliability Engineering, Life Science and Operations Management (SMRLO), Beer-Sheva, Israel, 15–18 February 2016; pp. 591–596. [Google Scholar] [CrossRef]
  18. Kempener, R.; Borden, E. Battery Storage for Renewables: Market Status and Technology Outlook; Report; IRENA: Abu Dhabi, UAE, 2015. [Google Scholar]
  19. Lawson, B. Electropaedia; Report; Woodbank Communications Ltd.: Chester, UK, 2019; Available online: https://www.mpoweruk.com (accessed on 14 April 2020).
Figure 1. Typical relationship between battery life and depth of discharge.
Figure 2. A function as in (43) for a 60 MWh battery, penalizing the middle and upper range of the shortage level.

