Time-consistent investment and consumption strategies under a general discount function

The paper [12] examines a concept of equilibrium policies, instead of optimal controls, in stochastic optimization to analyze a mean-variance portfolio selection problem. We follow the same approach in order to investigate the Merton portfolio management problem in the context of non-exponential discounting, a context that gives rise to time inconsistency of the decision-maker. Equilibrium policies are characterized in this context by means of a variational method, which leads to a stochastic system consisting of a flow of forward-backward stochastic differential equations and an equilibrium condition. An explicit representation of the equilibrium policies is provided for the special cases of power, logarithmic, and exponential utility functions.


Introduction
In recent years, there has been renewed attention to time inconsistency in optimality problems, as well as in financial models. Generally, time inconsistency arises in an optimality problem when the optimal strategy selected at some time s is no longer optimal at time t > s. In other words, a strategy is time-inconsistent when the decision-maker at the future time t > s is tempted to deviate from the strategy determined at time s. It is well known that an optimization problem gives rise to time-inconsistent strategies when the dynamic programming principle cannot be applied and Bellman's principle does not hold.
In practice, an important illustration of a time-inconsistent problem is the mean-variance selection problem, where the time inconsistency is due to the presence of a nonlinear function of the expectation of the final wealth in the objective criterion. Another important problem producing time-inconsistent behavior is the investment-consumption problem with non-exponential discounting. This is the case studied by Strotz (1955-1956), where the time inconsistency arises from the fact that the initial point in time enters the objective criterion in a crucial manner. The common assumption in classical investment-consumption problems under discounted utility is that the discount rate is constant over time, which makes the discount function exponential. This assumption makes it possible to compare outcomes occurring at different times by discounting future utility by a constant factor. On the other hand, results from experimental studies contradict this assumption, indicating that discount rates for the near future are much higher than discount rates for times further away in the future. Ainslie (1995) carried out experimental studies on human and animal behavior and found that discount functions are almost hyperbolic; that is, they decrease like a negative power of time rather than exponentially. Loewenstein and Prelec (1992) showed that economic decision-makers are impatient about choices in the short term but more patient when choosing between long-term alternatives; therefore, a hyperbolic-type discount function would be more realistic.
Unfortunately, as soon as the discount function is non-exponential, discounted utility models become time-inconsistent in the sense that they do not admit Bellman's optimality principle. Consequently, the classical dynamic programming approach may not be applicable to these problems. According to Strotz (1955-1956), there are two basic ways of handling time inconsistency in non-exponentially discounted utility models. In the first one, under the notion of naive agents, every decision is taken without taking into account that the agent's preferences will change in the near future. The agent at time t ∈ [0, T] solves the problem as a standard optimal control problem with initial condition X(t) = x_t. If the naive agent at time 0 solves the problem, his or her solution corresponds to the so-called pre-commitment solution, in the sense that it is optimal as long as the agent can pre-commit his or her future behavior at time t = 0. Kydland and Prescott (1977) indeed argue that a pre-committed strategy may be economically meaningful in certain circumstances. The second approach consists in formulating a time-inconsistent decision problem as a non-cooperative game between the incarnations of the decision-maker at different instants of time. Nash equilibria of these games are then used to define the new concept of solution to the original problem. Strotz (1955-1956) was the first to propose a game-theoretic formulation to handle a dynamic time-inconsistent optimal decision problem, on the deterministic Ramsey problem; see Ramsey (1928). Then, by capturing the idea of non-commitment, and letting the commitment period be infinitesimally small, he provided a primitive notion of a Nash equilibrium strategy.

Related Works
Further work along this line in continuous and discrete time has been done by Pollak (1968); Phelps and Pollak (1968); Goldman (1980); Barro (1990); and Krusell and Smith (2003). Keeping the same game-theoretic approach, Ekeland and Lazrak (2008) and Marín-Solano and Navas (2010) treated the optimal consumption problem, where the utility involves a non-exponential discount function, in the deterministic framework. They characterized the equilibrium strategies by a value function that has to satisfy a certain "extended Hamilton-Jacobi-Bellman (HJB) equation", a non-linear differential equation containing a non-local term that depends on the global behavior of the solution. In this setting, every decision at time t is taken by a t-agent, who represents the incarnation of the controller at time t and is referred to in Marín-Solano and Navas (2010) as a "sophisticated t-agent". Björk and Murgoci (2010) extended the idea to the stochastic setting, where the controlled dynamics are driven by a quite general class of Markov processes and the objective function is fairly general. Yong (2011), by a discretization of time, studied a class of time-inconsistent deterministic linear-quadratic models and derived equilibrium controls via a class of Riccati-Volterra equations. Yong (2012), also by a discretization of time, investigated a general discounting time-inconsistent stochastic optimal control problem and characterized a feedback time-consistent Nash equilibrium control via the so-called "equilibrium HJB equation".
The Nash equilibrium solution to the mean-variance problem was first established by Basak and Chabakauri (2010) and then extended to a more general class of time-inconsistent problems by Björk and Murgoci (2010). Other papers on the consistent planning approach for the mean-variance problem are Czichowsky (2013) and Björk et al. (2014).
Concerning equilibrium strategies for an optimal consumption-investment problem with a general discount function, Ekeland and Pirvu (2008) were the first to investigate Nash equilibrium strategies where the price process of the risky asset is driven by a geometric Brownian motion. They characterized the equilibrium strategies through the solutions of a flow of BSDEs and showed, for a special form of the discount function, that the BSDEs reduce to a system of two ODEs, which has a solution. Ekeland et al. (1995) added life insurance to the investor's portfolio and characterized the equilibrium strategy by an integral equation. In Yong (2012), the case of a time-inconsistent consumption-investment problem under a power utility function is discussed. Following Yong's approach, Zhao et al. (2014) studied the consumption-investment problem with a general discount function and a logarithmic utility function. Recently, Zou et al. (2014) investigated equilibrium consumption-investment decisions for Merton's portfolio problem with stochastic hyperbolic discounting.
As for the comparison between different approaches to time inconsistency, Wang and Forsyth (2012) evaluate time-consistent against pre-commitment strategies and compare their respective efficient frontiers for a mean-variance optimization problem. A comparison between the naive and the sophisticated approaches is given by Chen et al. (2014), who study the optimal dividend model of an insurance company in the presence of time inconsistency created by a non-exponential discount factor. Cong and Oosterlee (2016) found a relation linking the time-consistent and the pre-commitment investment strategies in a defined contribution pension scheme. Cui et al. (2017) emphasize the shortcomings of pre-commitment and game-theoretic strategies, and examine a self-coordination strategy that aims at reconciling the global concern and the local interests of the decision-maker. Van Staden et al. (2018) consider the pre-commitment and the time-consistent policies in the presence of realistic investment constraints. Bensoussan et al. (2019) evaluate the effect of constraints on the value function of both the pre-commitment and the game-theoretic approaches, and discover the unexpected result that, for the game-theoretic approach, the presence of constraints can improve the payoff, whereas, for the pre-commitment approach, this paradox does not arise. Menoncin and Vigna (2020) evaluate a defined contribution pension scheme and prove that the dynamically optimal policy reacts better to extreme scenarios of market returns.
In this paper, we study a time-inconsistent consumption-investment problem with a non-exponential discount function and a general utility function. We use the game-theoretic approach to handle the time inconsistency, in the same perspective as Björk and Murgoci (2010). The game perspective we consider is as follows. We first consider a game with one player at each point t in time. This player represents the incarnation of the decision-maker at time t and is referred to as "player t". This t-th player can control the system only at time t by choosing his/her strategies. A control process is then viewed as a complete description of the chosen strategies of all players in the game. The reward to player t is given by the utility function of an investment-consumption optimization problem. From this description, we introduce the concept of a "perfect Nash equilibrium strategy" of the game: an admissible control process satisfying certain admissibility conditions.
We focus on a variational technique leading to a version of a necessary and sufficient condition for equilibrium, which involves a flow of forward-backward stochastic differential equations (FBSDEs) together with a certain equilibrium condition. We also present a verification theorem that covers several possible examples of utility functions. Then, by decoupling the flow of FBSDEs, we derive a closed-loop representation of the equilibrium strategies via a parabolic non-linear partial differential equation (PDE). We show that for special forms of the utility function (logarithmic, power, and exponential) the PDE reduces to a system of ODEs which has an explicit solution.

Novelty and Contribution
Different from Marín-Solano and Navas (2010) and Ekeland and Pirvu (2008), where the authors derived explicit solutions for special forms of the discount factor, in our model the non-exponential discount function takes a fairly general form. Moreover, we consider equilibrium strategies in the open-loop sense, as defined in Hu et al. (2015), which differs from most of the existing literature on this topic. Note also that the time inconsistency in our paper arises from non-exponential discounting in the objective functional, while Hu et al. (2015) are concerned with a quite different kind of time inconsistency, caused by the presence of nonlinear terms of expectations in the terminal cost. On the other hand, the objective functional in our paper does not reduce to the quadratic form considered in Hu et al. (2015).
We emphasize that, different from most of the existing literature on this topic, where feedback equilibrium strategies are derived via very complicated, highly non-linear integro-differential equations, an explicit representation of the equilibrium strategies is obtained in our work via simple ODEs. In addition, our method provides necessary and sufficient conditions characterizing the equilibrium strategies, while the extended HJB techniques yield, in general, only a sufficient condition, in the form of a verification theorem.

Structure of the Paper
The rest of the paper is organized as follows. In Section 2, we formulate the problem and give the necessary notations and preliminaries. In Section 3, we present the main results of the paper, Theorem 1 and Theorem 2, that characterize the equilibrium decisions by some necessary and sufficient conditions. In Section 4, we derive an explicit representation of the equilibrium consumption-investment strategy. Section 5 is devoted to some comparisons with existing results in the literature. The paper ends with an Appendix containing some proofs.

Problem Formulation
In what follows, we assume that W(·) = (W_1(·), . . . , W_d(·))⊤ is a d-dimensional standard Brownian motion defined on a filtered probability space (Ω, F, F, P), where F := (F_t)_{t∈[0,T]} is the natural filtration satisfying the usual conditions; in particular, F_0 contains all P-null sets and F_T = F for an arbitrarily fixed finite time horizon T > 0. Recall that F_t stands for the information available up to time t, and any decision made at time t is based on this information.

Notations
Throughout this paper, we use the following notation: M⊤ denotes the transpose of a vector (or matrix) M; ⟨χ, ζ⟩ denotes the inner product of χ and ζ, that is, ⟨χ, ζ⟩ := tr(χ⊤ζ). For a function f, we denote by f_x (resp. f_xx) the first (resp. second) derivative of f with respect to the variable x.
For any Euclidean space E with Frobenius norm |·| and any t ∈ [0, T], we let:
• L^p(Ω, F_t, P; E), for any p ≥ 1, denote the set of E-valued, F_t-measurable random variables X such that E|X|^p < ∞.
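The inner-product convention fixed in the notation above can be checked numerically. The following snippet is a small illustration (not part of the model): it verifies that ⟨χ, ζ⟩ := tr(χ⊤ζ) reduces to the usual Euclidean inner product when χ and ζ are column vectors.

```python
import numpy as np

def inner(chi: np.ndarray, zeta: np.ndarray) -> float:
    """Frobenius inner product <chi, zeta> := tr(chi^T zeta)."""
    return float(np.trace(chi.T @ zeta))

# For column vectors this is the Euclidean inner product:
x = np.array([[1.0], [2.0]])
y = np.array([[3.0], [4.0]])
assert abs(inner(x, y) - 11.0) < 1e-12

# For matrices it sums the entrywise products:
A = np.eye(2)
B = np.array([[1.0, 5.0], [7.0, 2.0]])
assert abs(inner(A, B) - 3.0) < 1e-12
```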

Investment-Consumption Policies and Wealth Process
Starting from an initial capital x_0 > 0 at time 0, during the time horizon [0, T] the decision-maker is allowed to dynamically invest in the stocks and in the bond, as well as to consume. A consumption-investment strategy is described by a (d + 1)-dimensional stochastic process u(·) = (c(·), u_1(·), . . . , u_d(·))⊤, where c(s) represents the consumption rate at time s ∈ [0, T] and u_i(s), for i = 1, 2, . . . , d, represents the amount invested in the i-th risky stock at time s ∈ [0, T]. The process u_I(·) = (u_1(·), . . . , u_d(·))⊤ is called an investment strategy. The amount invested in the bond at time s is X^{x_0,u}(s) − Σ_{i=1}^d u_i(s), where X^{x_0,u}(·) is the wealth process associated with the strategy u(·) and the initial capital x_0. Accordingly, the wealth process solves the SDE

dX(s) = [ r_0(s)X(s) + ⟨r(s), u_I(s)⟩ − c(s) ] ds + u_I(s)⊤σ(s) dW(s), X(0) = x_0,

where r_0(·) is the interest rate of the bond, r(·) is the vector of excess returns of the stocks, and σ(·) is the volatility matrix. As time evolves, it is natural to consider the controlled stochastic differential equation parametrized by (t, ξ) ∈ [0, T] × L^2(Ω, F_t, P; R) and satisfied by X(·) = X^{t,ξ}(·; u(·)), namely equation (4).

Definition 1 (Admissible Strategy). A strategy u(·) = (c(·), u_I(·)) is said to be admissible if c(·) ∈ M_F^1(0, T; R), u_I(·) ∈ M_F^2(0, T; R^d), and, for any (t, ξ) ∈ [0, T] × L^2(Ω, F_t, P; R), equation (4) has a unique solution X(·) = X^{t,ξ}(·; u(·)).
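The wealth dynamics can be illustrated with a simple Euler-Maruyama simulation. The sketch below assumes linear wealth dynamics of the form dX(s) = [r_0(s)X(s) + ⟨r(s), u_I(s)⟩ − c(s)] ds + u_I(s)⊤σ(s) dW(s) with constant coefficients and a constant strategy, for display only; all numerical parameter values are hypothetical.

```python
import numpy as np

def simulate_wealth(x0, r0, r, sigma, u_I, c, T=1.0, n=1000, seed=0):
    """Euler-Maruyama scheme for the wealth SDE under a constant
    consumption-investment strategy (c, u_I) and constant coefficients."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = x0
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=len(r))
        drift = r0 * x + r @ u_I - c            # bond + excess return - consumption
        x = x + drift * dt + u_I @ (sigma @ dW)  # diffusion term u_I^T sigma dW
    return x

# Hypothetical one-stock market: 2% interest, 5% excess return, 20% volatility.
x_T = simulate_wealth(x0=1.0, r0=0.02, r=np.array([0.05]),
                      sigma=np.array([[0.2]]), u_I=np.array([0.5]), c=0.03)
```

With the volatility set to zero the scheme integrates the deterministic budget equation exactly, which gives a quick sanity check of the drift term.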
We impose the following assumption about the coefficients.
(H1) The processes r_0(·), r(·) and σ(·) are uniformly bounded, and we assume the following uniform ellipticity condition: for some ε > 0, σ(s)σ(s)⊤ ≥ ε I_d, for a.e. s ∈ [0, T], where I_d denotes the identity matrix on R^{d×d}.
for some positive constant K. In particular, under (H1), the state equation admits a unique solution and the standard moment estimate holds.

General Discounted Utility Function
Most financial-economics works have assumed that the rate of time preference is constant (exponential discounting). However, there is growing evidence suggesting that this may not be the case. In this section, we discuss general discounting preferences. We also introduce the basic modeling framework of Merton's consumption and portfolio problem. We refer the reader to Ainslie et al. (1991); Karatzas et al. (1987); Merton (1969); Merton (1971); and Pliska (1986) for more details about the classical Merton model.

Discount Function
As soon as discounting is non-exponential, most papers work with a special form of the non-exponential discount factor. Differently from these works, we consider a general form of the discount factor.

Definition 2. A discount function λ(·) : [0, T] → R is a continuous and deterministic function satisfying λ(0) = 1, λ(s) > 0, ds-a.e., and ∫_0^T λ(s) ds < ∞.
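Definition 2 covers both the classical exponential factor and the hyperbolic factors suggested by the experimental evidence cited in the introduction. The snippet below checks the three conditions of Definition 2 numerically for the two standard examples; the parameter values (delta, k, T) are illustrative only.

```python
import numpy as np

def exponential(s, delta=0.1):
    """Classical exponential discount factor e^{-delta s}."""
    return np.exp(-delta * s)

def hyperbolic(s, k=1.0):
    """A hyperbolic discount factor 1 / (1 + k s)."""
    return 1.0 / (1.0 + k * s)

T = 10.0
s = np.linspace(0.0, T, 10001)
ds = s[1] - s[0]
for lam in (exponential, hyperbolic):
    vals = lam(s)
    assert abs(lam(0.0) - 1.0) < 1e-12      # lambda(0) = 1
    assert np.all(vals > 0.0)               # lambda(s) > 0, ds-a.e.
    assert np.isfinite(np.sum(vals) * ds)   # integral over [0, T] is finite
```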

Utility Functions and Objective
In order to evaluate the performance of a consumption-investment strategy, the decision-maker derives utility from inter-temporal consumption and final wealth. Let ϕ(·) be the utility of inter-temporal consumption and h(·) the utility of the terminal wealth at some non-random horizon T (which is a primitive of the model). Then, for any (t, ξ) ∈ [0, T] × L^2(Ω, F_t, P; R), the investment-consumption optimization problem reduces to maximizing the objective functional

J(t, ξ; u(·)) = E[ ∫_t^T λ(s − t) ϕ(c(s)) ds + λ(T − t) h(X(T)) | F_t ].

We restrict ourselves to utility functions which satisfy the following conditions.
(H2) The maps ϕ(·), h(·) : R → R are strictly increasing, strictly concave and satisfy a suitable integrability condition. Note that most of the literature on necessary and/or sufficient optimality conditions for stochastic control problems imposes stronger conditions on the coefficients, under which the derivatives ϕ_xx(·) and h_xx(·) are bounded or have linear or quadratic growth; see, e.g., Yong and Zhou (1999). It is also worth mentioning that several papers impose L^p bounds on the control process for some p > 2. Those restrictions make it impossible to apply the stochastic maximum principle directly to the consumption-investment problem: in this problem, the optimal control is not necessarily L^p-integrable, and the derivatives of the running cost and the terminal cost do not necessarily satisfy global polynomial growth conditions.
In this paper, we overcome the technical difficulties mentioned above by means of some limiting procedures. To do so, let us introduce further technical integrability conditions on the utilities, which will be used in the proof of the main result.
(H3) The maps ϕ(·), h(·) are twice continuously differentiable, so the derivatives ϕ_x(·), h_x(·), ϕ_xx(·) and h_xx(·) are continuous.
(H4) For every admissible strategy, there exists a constant p > 1 such that the corresponding integrability condition holds.
Under these assumptions, the optimal control problem associated with (4) and (7) is equivalent to maximizing (8) subject to (9).

Time Inconsistency
Let us first note that the optimal policies, although they exist, will not be time-consistent in general. As an illustration, consider the model (8)-(9) with logarithmic utility functions, and suppose that the financial market consists of one riskless asset and d risky assets. Arguing as in Ekeland and Pirvu (2008), one can prove that, if the agent is naive and starts with a given positive wealth x at some instant t, then, by the standard dynamic programming approach, the value function associated with this stochastic control problem solves the Hamilton-Jacobi-Bellman equation (10).
The HJB equation contains the discount term λ(s − t), which depends not only on the current time s but also on the initial time t, so the optimal policy will depend on t as well. Indeed, the first-order necessary conditions yield the t-optimal policy. Let us consider the following example. The naive agent for the initial pair (0, x_0) solves the problem, assuming that the discount rate of time preference will be λ(s), for s ∈ [0, T], and obtains the optimal consumption strategy c^{0,x_0}(·). This solution corresponds to the so-called pre-commitment solution, in the sense that it is optimal as long as the agent can pre-commit (by signing a contract, for example) his or her future behavior at time t = 0. If there is no commitment, the 0-agent will take the action c^{0,x_0}(s) but, in the near future, the ε-agent will change his decision rule (time inconsistency) to the solution of the HJB equation (10) with t = ε. In this case, the optimal control trajectory for s > ε will change to c^{ε,x_ε}(s). When the discount function is exponential, the two trajectories coincide; hence, the optimal consumption plan is time-consistent. As soon as the discount function is non-exponential, the optimal consumption plan is no longer time-consistent. In general, the solution for the naive agent is constructed by solving the family of HJB equations (10) for t ∈ [0, T] and patching together the "optimal" solutions c^{t,x_t}(t). If the agent is sophisticated, things become more complicated: the standard HJB equation cannot be used to construct the solution, and a new method is required, as developed in what follows.
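The source of the inconsistency described above can be seen directly on the discount function itself: exponential discounting is the only case in which the relative weight λ(s + d)/λ(s) of two dates a fixed distance d apart does not depend on the evaluation date s. The snippet below checks this stationarity for the exponential factor and its failure for a hyperbolic one; parameter values are illustrative.

```python
import numpy as np

delta, k, d = 0.1, 1.0, 1.0  # illustrative discount parameters and lag

# Relative weight of date s + d against date s under each discount factor.
exp_ratio = lambda s: np.exp(-delta * (s + d)) / np.exp(-delta * s)
hyp_ratio = lambda s: (1.0 + k * s) / (1.0 + k * (s + d))

# Exponential: the ratio is e^{-delta d}, independent of s (time consistency).
assert abs(exp_ratio(0.0) - exp_ratio(5.0)) < 1e-12

# Hyperbolic: the ratio increases with s, i.e., the agent is relatively more
# patient about trade-offs located further in the future (time inconsistency).
assert hyp_ratio(0.0) < hyp_ratio(5.0)
```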

Equilibrium Strategies
It is well known that the problem described above by (8)-(9) turns out to be time-inconsistent in the sense that it does not satisfy the Bellman optimality principle, since the restriction of an optimal control for a specific initial pair to a later time interval might not be optimal for the corresponding initial pair at that later time. For a more detailed discussion, see Ekeland and Pirvu (2008) and Yong (2012). Due to the lack of time consistency, we consider open-loop Nash equilibrium controls instead of optimal controls. We first consider an equilibrium defined by local spike variation: given t ∈ [0, T] and an admissible consumption-investment strategy û(·) ∈ M_F^1(t, T; R) × M_F^2(t, T; R^d), for any R^{d+1}-valued, F_t-measurable and bounded random variable v and any ε > 0, define the perturbed strategy (11):

u^ε(s) := û(s) + v 1_{[t, t+ε)}(s), s ∈ [t, T].

We have the following definition.
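The spike variation above perturbs the candidate strategy only on the small interval [t, t + ε). A grid-based sketch of this construction is given below, assuming a scalar strategy for simplicity; the discretization and all numerical values are for display only.

```python
import numpy as np

def spike_variation(u_hat, grid, t, eps, v):
    """Return u_hat + v * 1_{[t, t+eps)} evaluated on a time grid."""
    u = u_hat.copy()
    mask = (grid >= t) & (grid < t + eps)  # indicator of the spike interval
    u[mask] += v
    return u

grid = np.linspace(0.0, 1.0, 101)   # time grid on [0, T] with T = 1
u_hat = np.full_like(grid, 0.3)     # a candidate strategy (scalar case)
u_eps = spike_variation(u_hat, grid, t=0.5, eps=0.05, v=0.2)
```

Outside [t, t + ε) the perturbed strategy agrees with û, which is precisely what makes the resulting equilibrium notion "open-loop": player t only deviates on his own infinitesimal time interval.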

A Necessary and Sufficient Condition for Equilibrium Controls
In this paper, we follow an alternative approach, which yields a necessary and sufficient condition for equilibrium. In the same spirit as the proof of the stochastic Pontryagin maximum principle for equilibrium in linear-quadratic models, we derive this condition by a second-order expansion in the spike variation. We now introduce the adjoint equations involved in the characterization of open-loop Nash equilibrium controls.

A Characterization of Equilibrium Strategies
The following theorem is the first main result of this work; it provides a necessary and sufficient condition for equilibrium. As mentioned before, the proof is inspired by Hu et al. (2015).
First, we define the process q̄(s; t) := (0, q(s; t)⊤)⊤, and we introduce the notations below, in which θ ∈ [0, 1] is a fixed parameter.

Theorem 1. Let (H1)-(H4) hold. Given an admissible strategy û(·) ∈ M_F^1(0, T; R) × M_F^2(0, T; R^d), let, for any t ∈ [0, T], the process (p(·; t), q(·; t)) ∈ L_F^2(t, T; R) × M_F^2(t, T; R^d) be the unique solution to the BSDE (14). Then, û(·) is an equilibrium consumption-investment strategy if and only if condition (21) holds.

In order to prove this theorem, let us first derive some technical results. Denote by X̂^ε(·) the solution of the state equation corresponding to u^ε(·). Since the coefficients of the controlled state equation are linear, using the standard perturbation approach (see, e.g., Yong and Zhou (1999)), we have

X̂^ε(·) = X̂(·) + y^{ε,v}(·) + z^{ε,v}(·),

where, for any R^{d+1}-valued, F_t-measurable and bounded random variable v and any ε ∈ [0, T − t), y^{ε,v}(·) and z^{ε,v}(·) solve, respectively, the first-order and second-order linear stochastic differential equations of the expansion.

Proposition 1. Let (H1)-(H4) hold. For any t ∈ [0, T], the standard estimates on y^{ε,v}(·) and z^{ε,v}(·) hold for any k ≥ 1, together with the corresponding equality. Proof. See Appendix A.

We now present a technical lemma needed later; the proof follows an argument adapted from Hamaguchi (2019).

Lemma 1. For dP-a.s. and dt-a.e. t ∈ [0, T], there exists a sequence (ε_n^t)_{n∈N} ⊂ (0, T − t) with ε_n^t → 0 as n → ∞, such that, in particular,

(2) lim_{n→∞} (1/ε_n^t) ∫_t^{t+ε_n^t} E_t[ A^{ε_n^t}(s; t) ] ds = A^0(t; t), dP-a.s., dt-a.e.

Proof. See Appendix A.
Proof of Theorem 1. Given an admissible strategy û(·) for which (21) holds, according to Lemma 1, we have from (28) that, for any t ∈ [0, T] and any R^{d+1}-valued, F_t-measurable and bounded random variable v, there exists a sequence (ε_n^t)_{n∈N} ⊂ (0, T − t) with ε_n^t → 0 as n → ∞ such that the equilibrium inequality holds, where in the last step we have used the fact that, by the concavity of ϕ(·) and h(·), ⟨A^0(t; t)v, v⟩ ≤ 0. Hence, û(·) is an equilibrium strategy.
Conversely, assume that û(·) is an equilibrium strategy. Then, by (12), together with (28) and Lemma 1, for any (t, u) ∈ [0, T] × R^{d+1}, the inequality (29) holds. Clearly, Φ(·, ·) is well defined; in fact, it is a second-order polynomial in the components of the vector u. Easy manipulations show that the inequality (29) is equivalent to the maximum condition (30). It is then easy to see that (30) leads to a first-order condition from which, according to Lemma 1, the expression (21) follows immediately. This completes the proof.

A Characterization of Equilibrium Strategies by Verification Argument
In classical (time-consistent) stochastic control theory, the sufficient condition of optimality is of significant importance for computing optimal controls. It says that, if an admissible control satisfies the maximum condition of the Hamiltonian function, then the control is indeed optimal for the stochastic control problem. This allows one to solve examples of optimal control problems, where one can find a smooth solution to the associated adjoint equation.
Remark 2. The purpose of the sufficient condition of optimality is to find an optimal control by computing the difference J(û(·)) − J(u(·)) in terms of the Hamiltonian function, where u(·) is an arbitrary admissible control. Here, the spike variation perturbation (11) plays a key role in deriving the sufficient condition for equilibrium strategies, which reduces to the computation of the difference J(t, X̂(t); û(·)) − J(t, X̂(t); u^ε(·)), without the need to carry out the second-order expansion in the spike variation.

Equilibrium When the Coefficients Are Deterministic
Theorems 1 and 2 show that one can obtain equilibrium consumption-investment strategies by solving a system of FBSDEs, which is non-standard since the "flow" of unknown processes (p(·; t), q(·; t))_{t∈[0,T]} is involved. Moreover, there is an additional constraint that acts on the "diagonal" (i.e., when s = t) of the flow. As far as we know, the explicit solvability of this type of equation remains an open problem, except for some particular forms of the utility function. However, we are able to solve this problem quite thoroughly when the parameters r_0(·), µ(·) and σ(·) are deterministic functions. In this section, we define what we mean by an equilibrium rule, and then we derive a parabolic backward PDE. Our PDE is comparable with the ones obtained in Marín-Solano and Navas (2010) and Ekeland and Pirvu (2008) for particular discount functions, in a finite horizon, with different utility functions.
In this section, we look at Merton's portfolio problem with general discounting and deterministic parameters. First, we consider the parabolic backward partial differential equation (36), where I(·) denotes the inverse function of the strictly decreasing marginal utility ϕ_x(·) and Σ(s) ≡ (σ(s)σ(s)⊤)^{-1}. We have the following verification theorem.

Remark 4.
Theorem 3 enables us to derive a suitable equilibrium strategy û_I(t), as well as ĉ(t), at each t ∈ [0, T], and this permits us to derive directly an explicit expression for the equilibrium control in the cases of power, logarithmic, and exponential utility functions. While the duality approach used in Hamaguchi (2019) characterizes a stochastic equilibrium solution in terms of a complicated closed-form FBSDE system, it does not provide an explicit representation.

Special Utility Functions
Equilibrium investment-consumption strategies for Merton's portfolio problem with general discounting and deterministic parameters have been studied in Marín-Solano and Navas (2010); Ekeland and Pirvu (2008); and Yong (2012), among others, in different frameworks. In this section, we discuss some special cases in which the function θ(·, ·) may be separated into functions of time and the state variable. One then needs only to solve a system of ODEs in order to completely determine the equilibrium strategies. We compare our results with some existing ones in the literature.

Power Utility Function
To make the problem (8)-(9) explicitly solvable, we consider power utility functions for the running and terminal costs; that is, ϕ(c) = c^γ/γ and h(x) = a x^γ/γ, with a > 0 and γ ∈ (0, 1). In this case, the PDE (36) reduces to a simpler equation. From the terminal condition, we consider a trial solution involving a deterministic function Π(·) ∈ C^1([0, T], R) with the terminal condition Π(T) = 1; substituting into (36) yields an equation for Π(·). It remains to determine this function. First, by a change of variable, we find that y(·) should solve a linear ODE; a variation-of-constants formula then yields y(·) and, subsequently, Π(·). In view of Theorem 3, the representation (38)-(39) of the Nash equilibrium strategies gives ĉ(·) and û_I(·), and this consumption-investment strategy determines the corresponding wealth process. The above solution is comparable with the ones obtained by Marín-Solano and Navas (2010); Ekeland and Pirvu (2008); and Yong (2012).
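The variation-of-constants step used above can be sketched generically. The snippet below solves a scalar linear terminal-value problem y'(t) = A(t) y(t) + B(t), y(T) = yT, which is the kind of ODE to which the equation for Π reduces after the change of variable; the specific coefficients of the paper's equation are not reproduced here, and the constants used in the check are hypothetical.

```python
import numpy as np

def solve_backward_linear(A, B, yT, T, n=2000):
    """Variation of constants for y'(t) = A(t) y(t) + B(t), y(T) = yT:
        y(t) = e^{-int_t^T A} yT - int_t^T e^{-int_t^s A} B(s) ds,
    with all integrals computed by the trapezoid rule."""
    t = np.linspace(0.0, T, n + 1)
    dt = t[1] - t[0]
    a, b = A(t), B(t)
    # I[i] = int_0^{t_i} A(r) dr (cumulative trapezoid rule)
    I = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * dt)))
    y = np.empty_like(t)
    for i in range(n + 1):
        term1 = np.exp(-(I[-1] - I[i])) * yT            # e^{-int_t^T A} yT
        integ = np.exp(-(I[i:] - I[i])) * b[i:]          # e^{-int_t^s A} B(s)
        term2 = 0.5 * (integ[1:] + integ[:-1]).sum() * dt if i < n else 0.0
        y[i] = term1 - term2
    return t, y

# Check against the closed form for constant coefficients a0, b0:
#   y(t) = e^{-a0 (T - t)} (yT + b0/a0) - b0/a0
a0, b0, yT, T = 0.5, 1.0, 2.0, 1.0
t, y = solve_backward_linear(lambda s: a0 * np.ones_like(s),
                             lambda s: b0 * np.ones_like(s), yT, T)
exact = np.exp(-a0 * (T - t)) * (yT + b0 / a0) - b0 / a0
assert np.max(np.abs(y - exact)) < 1e-4
```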

Logarithmic Utility Function
Now, let us analyze the case where ϕ(c) = ln(c) and h(x) = a ln(x), with a > 0. In this case, the PDE (36) reduces to equation (57). Once again, the solution of (57) will be of the form of a trial solution involving a deterministic function Π(·) ∈ C^1([0, T], R); substituting into (57) yields an ODE for Π(·), which is explicitly solvable. In view of Theorem 3, the representation (38)-(39) of the Nash equilibrium strategies gives ĉ(·) and û_I(·), and this consumption-investment strategy determines the corresponding wealth process.
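For the logarithmic case with exponential discounting λ(t) = e^{-δt}, the function Π in the literature on Merton's problem takes the form Π(t) = ∫_t^T λ(s − t) ds + a λ(T − t); this closed form is an assumption made here purely for illustration, since the ODE for Π is only stated implicitly above. The snippet checks numerically that this candidate satisfies the linear ODE Π'(t) = δΠ(t) − 1 with Π(T) = a; the parameter values are illustrative.

```python
import numpy as np

delta, a, T = 0.1, 0.5, 5.0  # illustrative discount rate, bequest weight, horizon

def Pi(t):
    """Assumed candidate: int_t^T e^{-delta(s-t)} ds + a e^{-delta(T-t)}."""
    return (1.0 - np.exp(-delta * (T - t))) / delta + a * np.exp(-delta * (T - t))

# Terminal condition Pi(T) = a:
assert abs(Pi(T) - a) < 1e-12

# ODE check Pi' = delta * Pi - 1 by central finite differences:
h = 1e-6
for t in (0.5, 2.0, 4.0):
    dPi = (Pi(t + h) - Pi(t - h)) / (2.0 * h)
    assert abs(dPi - (delta * Pi(t) - 1.0)) < 1e-6
```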
Exponential Utility Function

In the exponential utility case, the PDE (36) becomes equation (62). We try a solution involving φ(·), ψ(·) ∈ C^1([0, T], R) such that φ(T) = 1 and ψ(T) = 0. Substituting into (62) shows that the functions φ(·) and ψ(·) should solve a system of equations, which is explicitly solvable for t ∈ [0, T]. The representation (38)-(39) of the Nash equilibrium strategies then gives ĉ(·) and û_I(·), and this consumption-investment strategy determines the corresponding wealth process. The above solution is comparable with the ones obtained in Marín-Solano and Navas (2010) by solving extended Hamilton-Jacobi-Bellman (HJB) equations.

Special Discount Function
As well documented in Marín-Solano and Navas (2010), an agent making a decision at time t is usually called the t-agent and can act in two different ways: naive or sophisticated. Naive agents make decisions without taking into account that their preferences will change in the near future; any t-agent will then solve the problem as a standard optimal control problem with initial condition X(t) = x_t, and his decision will, in general, be time-inconsistent. In order to obtain a time-consistent strategy, the t-agent should be sophisticated, in the sense of taking into account the preferences of all the s-agents, for s ∈ [t, T]. Therefore, the approach to handling the time inconsistency in dynamic decision-making problems is to consider time-inconsistent problems as non-cooperative games with a continuum of players, in which decisions are selected at every instant of time. The solution to the problem of the agent with non-constant discounting should be constructed by looking for the sub-game perfect equilibrium of the associated game with an infinite number of t-agents. In Marín-Solano and Navas (2010), the authors looked for the solution of a sophisticated agent to the modified HJB equation (which is not a partial differential equation, due to the presence of a non-local term), and they therefore need to define Markov equilibrium strategies, while in our work, differently from Marín-Solano and Navas (2010), we use open-loop equilibrium strategies. This is a significant difference, which leads to an important change in the results.

Exponential Discounting with Constant Discount Rate (Classical Model)
First, we consider the standard exponential discount function λ(t) = e^{−δ_0 t}, t ∈ [0, T], where δ_0 > 0 is a constant representing the discount rate. In this case, our equilibrium solutions for the three cases become: (1) Logarithmic utility: û_I(t) = Σ(t)r(t)X(t), dt − a.e.
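The defining property of the exponential discount function used here is stationarity: the ratio λ(t + s)/λ(t) does not depend on t, which is precisely why preferences remain time-consistent, while a hyperbolic-type function violates it. A minimal numerical sketch (the rate δ_0 = 0.05 and the hyperbolic form (1 + t)^{−k} are illustrative choices, not taken from the paper):

```python
import math

def exp_discount(t, delta0=0.05):
    """Exponential discount function lambda(t) = e^{-delta0*t}.
    The ratio exp_discount(t+s)/exp_discount(t) = e^{-delta0*s}
    is independent of t, so preferences are time-consistent."""
    return math.exp(-delta0 * t)

def hyp_discount(t, k=0.05):
    """A hyperbolic-type discount function (illustrative): the
    analogous ratio depends on t, producing time inconsistency."""
    return (1.0 + t) ** (-k)

# Stationarity check with horizon s = 2 at two base dates t = 1, 5:
r1 = exp_discount(1 + 2) / exp_discount(1)
r2 = exp_discount(5 + 2) / exp_discount(5)
print(abs(r1 - r2) < 1e-12)   # ratios agree for exponential

h1 = hyp_discount(1 + 2) / hyp_discount(1)
h2 = hyp_discount(5 + 2) / hyp_discount(5)
print(abs(h1 - h2) > 1e-3)    # ratios differ for hyperbolic
```

This makes concrete the observation from the introduction that hyperbolic-type discounting induces time inconsistency while exponential discounting does not.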
Notice that our solutions given above coincide with the optimal solutions of the classical Merton portfolio problem (see, e.g., Marín-Solano and Navas (2010) in the case with a constant discount rate). This confirms the well-known fact that the time-consistent equilibrium strategy for an exponential discount function is nothing but the optimal strategy. A relevant remark is that the portfolio rule is independent of the discount factor and remains the same even for a non-exponential discount function.
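The remark that the portfolio rule does not involve the discount factor can be illustrated with the classical Merton proportion for power (CRRA) utility, π* = (μ − r)/(γσ²), whose formula contains no discounting term at all. The parameter values below are illustrative and not taken from the paper:

```python
def merton_proportion(mu, r, sigma, gamma):
    """Classical Merton proportion of wealth held in the risky asset
    for CRRA (power) utility with relative risk aversion gamma:
    pi* = (mu - r) / (gamma * sigma^2).
    Note that the discount rate delta_0 does not appear: the
    portfolio rule is independent of the discount function."""
    return (mu - r) / (gamma * sigma ** 2)

# Illustrative market parameters (drift 8%, risk-free rate 3%,
# volatility 20%, risk aversion 2):
pi = merton_proportion(mu=0.08, r=0.03, sigma=0.2, gamma=2.0)
print(pi)
```

With these numbers the agent invests 62.5% of wealth in the risky asset, regardless of how future utility is discounted; only the consumption rule is affected by the choice of discount function.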
Exponential Discounting with Non-Constant Discount Rate (Karp's Model)
Now, following Karp (2007), let us assume that the instantaneous discount rate is not constant but a continuous and positive function of time δ(l), for l ∈ [0, T]. Impatient agents are characterized by a non-increasing discount rate δ(·). The discount factor used to evaluate a payoff at time τ ≥ 0 is then exp(−∫_0^τ δ(l) dl). In this case, the objective is exactly the same as in Marín-Solano and Navas (2010), in which the equilibrium is, however, defined within the class of feedback controls. In Marín-Solano and Navas (2010), the (feedback) equilibrium consumption-investment solutions (also called the sophisticated consumption-investment strategies) are summarized for (1) Logarithmic utility; (2) Power utility, where α(·) is the solution of an integro-differential equation with K(t) given by (52); and (3) Exponential utility, where φ(·) is given by (65) and C(·) satisfies a very complicated integro-differential equation. Our (open-loop) equilibrium solutions reduce to the corresponding expressions for (1) Logarithmic utility, (2) Power utility, and (3) Exponential utility, where K(·) and φ(·) are given by (52) and (65), respectively.
Remark 5. Comparing the results of this special case with our solutions, we find the following facts: the equilibrium proportion investment strategies coincide in all three cases, while the equilibrium consumption strategies differ in all three cases. Moreover, our equilibrium consumption strategies are well defined and explicitly given, whereas, in Marín-Solano and Navas (2010), the equilibrium consumption strategies in the Power utility case, as well as in the Exponential utility case, are obtained via very complicated integro-differential equations whose unique solvability is not established.
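In Karp's model the discount factor exp(−∫_0^τ δ(l) dl) generally has no closed form and must be evaluated numerically. A minimal sketch using the trapezoidal rule; the particular non-increasing rate δ(l) = 0.05/(1 + l) is an illustrative assumption, chosen because its integral is known in closed form, so the result can be checked:

```python
import math

def discount_factor(delta, tau, n=10_000):
    """Discount factor exp(-int_0^tau delta(l) dl) for a time-varying
    instantaneous discount rate delta(.), computed with the
    composite trapezoidal rule on n subintervals."""
    h = tau / n
    integral = 0.5 * (delta(0.0) + delta(tau)) * h
    integral += sum(delta(i * h) for i in range(1, n)) * h
    return math.exp(-integral)

# A non-increasing rate (impatient agent), illustrative only:
delta = lambda l: 0.05 / (1.0 + l)
# For this rate, int_0^tau delta = 0.05*log(1+tau), so the factor
# equals (1+tau)**(-0.05) -- a hyperbolic-type discount function.
print(round(discount_factor(delta, 4.0), 6))
```

A constant rate δ(l) ≡ δ_0 recovers the exponential factor e^{−δ_0 τ} of the previous subsection, so this routine nests the classical model as a special case.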
Let us prove the first limit. Applying the same arguments used in the first limit, we obtain, according to Lemma A1, lim_{n→∞} (1/ε_{t_n}) E_t[ ∫_t^{t+ε_{t_n}} ϕ_xx Γ(û(s)) ds ] = ϕ_xx Γ(û(t)), at least along a subsequence.

Conclusions
It is well known that time-inconsistent optimal control problems are difficult to solve in general. In this paper, we have studied an optimal investment and consumption problem in which the objective functional involves a non-exponential discount function. Inspired by the previous literature, we imposed some restrictive conditions on the coefficients and the utility function, which allowed us to carry out certain limiting procedures in the original problem.
We have used the game-theoretic approach to handle the time inconsistency. Specifically, open-loop Nash equilibrium controls are explicitly constructed as an alternative to optimal controls. This has been accomplished through a necessary and sufficient condition for equilibrium, expressed via a stochastic system that includes a flow of forward-backward stochastic differential equations. We derive closed-form expressions for the equilibrium investment-consumption strategies. Moreover, some particular cases of our model are discussed and compared with results in the previous literature obtained from extended HJB equations together with a verification theorem.
The work can be extended in several ways. For example, this approach can be extended to a general continuous-time stochastic control problem with delay under a fairly general time-inconsistent objective functional. Another challenging problem is the study of statistical testing methods for validating the smoothness assumptions on the coefficients; see, e.g., Pešta and Wendler (2020). Research on these topics is in progress and will appear in our forthcoming papers.