Article

Averaging of Linear Quadratic Parabolic Optimal Control Problem

by Olena Kapustian 1, Oleksandr Laptiev 2 and Adalbert Makarovych 1,*
1 Faculty of Computer Science and Cybernetics, Taras Shevchenko National University of Kyiv, Volodymyrska Street 64, 01601 Kyiv, Ukraine
2 Faculty of Information Technology, Taras Shevchenko National University of Kyiv, Volodymyrska Street 64, 01601 Kyiv, Ukraine
* Author to whom correspondence should be addressed.
Axioms 2025, 14(7), 512; https://doi.org/10.3390/axioms14070512
Submission received: 4 June 2025 / Revised: 25 June 2025 / Accepted: 30 June 2025 / Published: 2 July 2025

Abstract

This paper studies an averaged Linear Quadratic Regulator (LQR) problem for a parabolic partial differential equation (PDE), where the system dynamics are affected by uncertain parameters. Instead of assuming a deterministic operator, we model the uncertainty using a probability distribution over a set of possible system dynamics. This approach extends classical optimal control theory by incorporating an averaging framework to account for parameter uncertainty. We establish the existence and uniqueness of the optimal control solution and analyze its convergence as the probability distribution governing the system parameters changes. These results provide a rigorous foundation for solving optimal control problems in the presence of parameter uncertainty. Our findings lay the groundwork for further studies on optimal control in dynamic systems with uncertainty.

1. Introduction

Optimal control problems for partial differential equations (PDEs) are widely used in engineering, physics, and economics to model systems that evolve over time [1]. Among these, the Linear Quadratic Regulator (LQR) problem has been extensively studied due to its well-established theoretical properties and practical applicability. The LQR framework provides an optimal strategy for controlling systems governed by linear PDEs while minimizing a quadratic cost functional [1,2]. Recent advances also explore approximate bounded feedback synthesis in parabolic PDEs with nonlinear perturbations and semidefinite performance criteria, extending classical LQR concepts to weakly nonlinear systems [3].
In this paper, we consider an averaged LQR problem for a parabolic PDE, where the system dynamics contain uncertain parameters. Instead of assuming a single deterministic system, we model the uncertainty using a probability distribution over a set of possible dynamics. This approach is inspired by previous studies in optimal control and stochastic averaging methods [4]. Related work addresses optimal regulation under rapidly oscillating parameters using homogenized models and superposition-type cost structures [5].
Reinforcement learning (RL) has become a cornerstone of modern machine learning, operating alongside supervised and unsupervised learning to solve decision-making problems under uncertainty. In this paradigm, agents learn optimal policies by interacting with an environment, optimizing a long-term performance criterion [6]. The connection between RL and classical optimal control theory has long been recognized [7], and recent developments in RL are now significantly influencing the field of control theory [8].
A key distinction within RL lies between model-free and model-based methods. Model-free approaches directly approximate value functions or policies without constructing a model of the environment, while model-based methods aim to learn a model from data and use it for planning [6]. The latter often suffer from model bias, a challenge identified early on [9]. To overcome this, the PILCO algorithm was proposed [10,11], which models system dynamics probabilistically using Gaussian processes and performs policy improvement based on expected trajectories.
PILCO and its extensions [12,13,14] operate within a Bayesian model-based RL framework that integrates data-driven learning with optimal control. These methods have demonstrated remarkable data efficiency and robustness and inspired numerous theoretical developments [15,16,17]. The formalization of such Bayesian approaches using distributions over system dynamics is central to understanding and reducing model uncertainty [18,19].
In this context, averaged optimal control emerges as a powerful framework that captures the expected behavior of a system over a distribution of possible dynamics. This idea has strong theoretical roots in the Riemann–Stieltjes optimal control setting [20,21,22] and in averaged controllability [23,24]. Additionally, the challenge of maintaining stability in distributed control systems under destabilizing factors has motivated algorithmic approaches to resilient network synthesis [25], further highlighting the importance of robust control strategies. Our work is motivated by these formulations and aims to explore how solutions to averaged optimal control problems relate to those of their deterministic counterparts.
Additionally, reinforcement learning in continuous-time systems is gaining momentum in control engineering [26,27,28,29]. These studies provide a basis for the development of algorithms that can operate in real-world, high-frequency environments.
This paper contributes to this growing body of work by investigating the convergence of optimal policies derived from averaged linear quadratic regulator (LQR) problems to those of classical LQR problems as the distribution over dynamics concentrates. Our approach complements existing algorithms such as PILCO [11] by offering a theoretical justification for their observed empirical success [18].
The main contributions of this paper are as follows:
  • We establish the existence and uniqueness of optimal solutions for the averaged control problem.
  • We analyze the convergence of optimal controls as the probability measure representing system uncertainty becomes more concentrated.
The structure of this paper is as follows: First, we define the mathematical formulation of the problem. Then, we present theoretical preliminaries and key functional analysis tools. The main results, including proofs of existence, uniqueness, and convergence, follow. Finally, we summarize our findings and discuss potential future directions.

2. Setting of the Problem

For unknown functions $y = y(t,x)$, $u = u(t,x) \in L^2(Q_T)$, $t \in [0,T]$, $x \in \Omega \subset \mathbb{R}^d$, and $Q_T = (0,T) \times \Omega$, we consider the linear quadratic optimal control problem
$$\frac{\partial y(t,x)}{\partial t} - L_A y(t,x) = u(t,x), \quad y|_{x \in \partial\Omega} = 0, \quad y|_{t=0} = y_0(x),$$ (1)
where the cost functional is given by
$$J_{\pi}(u) = E_{\pi}\left[\int_0^T \left(\int_{\Omega} y^2(t,x)\,dx + \gamma_1 \int_{\Omega} u^2(t,x)\,dx\right) dt + \gamma_2 \int_{\Omega} y^2(T,x)\,dx\right] = \int_{\mathcal{A}} \left[\int_0^T \left(\int_{\Omega} y^2(t,x)\,dx + \gamma_1 \int_{\Omega} u^2(t,x)\,dx\right) dt + \gamma_2 \int_{\Omega} y^2(T,x)\,dx\right] d\pi(A) \to \inf,$$ (2)
where
$$L_A = \sum_{i,j=1}^{d} a_{ij} \frac{\partial^2}{\partial x_i \partial x_j}, \quad a_{ij} = a_{ji},$$
the matrix $A = \{a_{ij}\}$, and the operator $L_A$ satisfies the uniform ellipticity condition given by
$$\sum_{i,j=1}^{d} a_{ij} \xi_i \xi_j \ge \gamma_3 \sum_{i=1}^{d} \xi_i^2 \quad \forall \xi \in \mathbb{R}^d.$$ (3)
The uniform ellipticity condition ensures the well-posedness of the PDE and is a standard assumption in the study of elliptic and parabolic operators.
Here, $\gamma_1$ and $\gamma_2$ are positive numbers that represent the weight coefficients in the objective functional, $\gamma_3$ is a positive number that defines the uniform ellipticity of the differential operator, and $y_0 \in L^2(\Omega)$.
$\mathcal{A}$ is a set of symmetric matrices satisfying condition (3) (with one and the same constant $\gamma_3$).
$\pi$ is a probability measure on $\mathcal{A}$.
In the following, we denote
$$\|y\| = \left(\int_{\Omega} y^2(x)\,dx\right)^{1/2}, \quad (y,z) = \int_{\Omega} y(x)\,z(x)\,dx, \quad \|u\|_{L^2(Q_T)} = \left(\int_{Q_T} u^2(t,x)\,dt\,dx\right)^{1/2}.$$
For $A \in \mathcal{A}$, we denote by $\|A\|$ the standard Euclidean norm.
We will prove that for every $\pi$, problem (1), (2) has a unique solution $\{\bar{y}_{\pi}, \bar{u}_{\pi}\}$ (here, $\bar{y}_{\pi}$ depends on $A$), and if
$$\pi_n \to \pi \ \text{weakly},$$ (4)
then
$$\forall A \in \mathcal{A}, \quad \bar{y}_{\pi_n} \to \bar{y}_{\pi} \ \text{in } C([0,T]; L^2(\Omega)), \qquad \bar{u}_{\pi_n} \to \bar{u}_{\pi} \ \text{in } L^2(Q_T).$$
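Before turning to the preliminary results, the following is a minimal numerical sketch of the setting above for a finite-support measure $\pi$ (it is not part of the paper's analysis): the one-dimensional domain $\Omega = (0,\pi)$, the explicit finite-difference scheme, and the particular diffusion coefficients, weights, and control are all illustrative assumptions made only for this example.

```python
# Minimal sketch: evaluating the averaged cost (2) for a finite-support measure pi
# on a 1-D instance of problem (1). All numerical choices below are illustrative.
import numpy as np

T, Nx, Nt = 1.0, 64, 4000                      # horizon and grid sizes (assumed)
x = np.linspace(0.0, np.pi, Nx + 2)            # spatial nodes, including the boundary
dx, dt = x[1] - x[0], T / Nt
gamma1, gamma2 = 1.0, 1.0                      # weight coefficients in (2)

y0 = np.sin(x)                                 # initial condition y_0
u = lambda t, xx: -np.exp(-t) * np.sin(xx)     # some admissible control u in L^2(Q_T)

def cost_for(a):
    """Solve y_t - a*y_xx = u with homogeneous Dirichlet data (explicit Euler)
    and return the deterministic cost for this diffusion coefficient."""
    y, running = y0.copy(), 0.0
    for k in range(Nt):
        uk = u(k * dt, x)
        running += dt * dx * np.sum(y**2 + gamma1 * uk**2)   # running part of (2)
        lap = np.zeros_like(y)
        lap[1:-1] = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dx**2
        y = y + dt * (a * lap + uk)
        y[0] = y[-1] = 0.0                                   # boundary condition y = 0
    return running + gamma2 * dx * np.sum(y**2)              # terminal part of (2)

# finite-support measure pi = sum_i alpha_i * delta_{A_i}
A_vals, weights = [1.0, 1.5, 2.0], [0.5, 0.3, 0.2]
J_pi = sum(w * cost_for(a) for a, w in zip(A_vals, weights))
print(f"J_pi(u) ~ {J_pi:.4f}")
```

The averaged functional is simply the $\pi$-weighted sum of the deterministic costs, with one state equation solved per element of the support; this weighted-sum structure is the one used in the example of Section 5.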

3. Preliminary Results

It is known that for every $A \in \mathcal{A}$ and $u \in L^2(Q_T)$, problem (1) has a unique solution in the weak sense ([30] [Theorem 3.1, p. 70]):
$$\exists!\, y \in W(0,T) := \left\{ y \in L^2(0,T; H_0^1(\Omega)) \ \Big|\ \frac{\partial y}{\partial t} \in L^2(0,T; H^{-1}(\Omega)) \right\}$$
such that $y|_{t=0} = y_0$ and $\forall v \in H_0^1(\Omega)$,
$$\frac{d}{dt}(y(t), v) + (A \nabla y, \nabla v) = (u, v) \quad \text{a.e. on } (0,T).$$
Due to the embedding
$$W(0,T) \subset C([0,T]; L^2(\Omega)),$$
the equality $y|_{t=0} = y_0$ makes sense.
Moreover, for every weak solution, the functions $t \mapsto (y(t), v)$ and $t \mapsto \|y(t)\|^2$ are absolutely continuous and
$$\frac{d}{dt}(y(t), v) + (A \nabla y, \nabla v) = (u, v) \quad \text{a.e. on } (0,T),$$
$$\frac{1}{2}\frac{d}{dt}\|y(t)\|^2 + (A \nabla y, \nabla y) = (u, y) \quad \text{a.e. on } (0,T).$$ (5)
From (5), we derive the following estimates:
$$\forall t \in [0,T]: \quad \|y(t)\|^2 \le \left(\|y_0\|^2 + \|u\|_{L^2(Q_T)}^2\right) e^T,$$ (6)
$$2\gamma_3 \int_0^T \|\nabla y\|^2 \le \|y_0\|^2 + \|u\|_{L^2(Q_T)}^2 + \left(\|y_0\|^2 + \|u\|_{L^2(Q_T)}^2\right) e^T\, T.$$ (7)
If, for a given $u \in L^2(Q_T)$, we denote by $y_1$ (respectively, $y_2$) the solution of (1) with matrix $A_1$ (respectively, $A_2$), then for $z = y_1 - y_2$, we get
$$\frac{1}{2}\frac{d}{dt}\|z(t)\|^2 + ((A_1 - A_2)\nabla y_1, \nabla z) + (A_2 \nabla z, \nabla z) = 0.$$
Therefore,
$$\frac{1}{2}\frac{d}{dt}\|z(t)\|^2 \le \|A_1 - A_2\| \cdot \|\nabla y_1\| \cdot \|\nabla z\|.$$
So, $\forall t \in [0,T]$, from (7) we deduce
$$\|y_1(t) - y_2(t)\|^2 \le \|A_1 - A_2\| \cdot \frac{1}{\gamma_3}\left(\|y_0\|^2 + \|u\|_{L^2(Q_T)}^2\right)\left(T e^T + 1\right).$$ (8)
Finally, we note that the weak convergence (4) can be described by the Wasserstein metric
$$\rho(\pi_1, \pi_2) = \inf_{\mu \in \Gamma(\pi_1, \pi_2)} \int_{\mathcal{A} \times \mathcal{A}} d(A_1, A_2)\, d\mu,$$
where $(\mathcal{A}, d)$ is a Polish metric space, and $\Gamma(\pi_1, \pi_2)$ is the collection of all probability measures on $\mathcal{A} \times \mathcal{A}$ with projections $\pi_1$ and $\pi_2$, respectively.
Then,
$$\pi_n \xrightarrow{w} \pi \iff \rho(\pi_n, \pi) \to 0 \ \text{as } n \to \infty.$$
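For finitely supported measures, this metric reduces to a small linear program. The following is a minimal computational sketch (not part of the paper): the support matrices, the weights, the choice of the Euclidean (Frobenius) distance as $d$, and the use of scipy.optimize.linprog are all illustrative assumptions.

```python
# Minimal sketch: Wasserstein distance between two finitely supported measures on
# a set of symmetric matrices, computed as the optimal-transport linear program.
import numpy as np
from scipy.optimize import linprog

A_support = [np.array([[1.0, 0.0], [0.0, 1.0]]),
             np.array([[2.0, 0.5], [0.5, 2.0]]),
             np.array([[1.5, 0.0], [0.0, 3.0]])]
p1 = np.array([0.2, 0.5, 0.3])                 # weights of pi_1
p2 = np.array([0.6, 0.1, 0.3])                 # weights of pi_2
m = len(A_support)

# cost matrix d(A_i, A_j); here d is taken to be the Euclidean (Frobenius) distance
D = np.array([[np.linalg.norm(Ai - Aj) for Aj in A_support] for Ai in A_support])

# transport plan mu_{ij} >= 0 with row sums p1 and column sums p2
A_eq = np.zeros((2 * m, m * m))
for i in range(m):
    A_eq[i, i * m:(i + 1) * m] = 1.0           # sum_j mu_{ij} = p1_i
    A_eq[m + i, i::m] = 1.0                    # sum_i mu_{ij} = p2_i
b_eq = np.concatenate([p1, p2])

res = linprog(D.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(f"rho(pi_1, pi_2) ~ {res.fun:.4f}")
```

In the convergence proof of Theorem 2 below, the distance $d$ on $\mathcal{A}$ is chosen differently; in the sketch above only the cost matrix would change.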

4. Main Results

Theorem 1. 
For every measure $\pi$, the LQR optimal control problem (1), (2) has a unique solution $\{\bar{y}_{\pi}, \bar{u}_{\pi}\}$ (here, $\bar{y}_{\pi}$ depends on $A$), and
$$\max_{t \in [0,T]} \|\bar{y}_{\pi}(t)\| + \|\bar{u}_{\pi}\|_{L^2(Q_T)} \le C,$$ (9)
where the constant $C > 0$ does not depend on $\pi$ and $A$.
Remark 1. 
$\bar{u}_{\pi}$ depends on $\pi$ but does not depend on $A$.
Proof of Theorem 1. 
Let $\{u_n\}$ be a minimizing sequence and $\{y_n\}$ be the corresponding solutions of (1). Then,
$$J_{\pi}(u_n) \le \inf J_{\pi}(u) + \frac{1}{n}.$$
Taking the control function $u \equiv 0$, for the corresponding solution $y$ of (1), we have from (6)
$$J_{\pi}(0) = \int_{\mathcal{A}} \left(\|y\|_{L^2(Q_T)}^2 + \gamma_2 \|y(T)\|^2\right) d\pi(A) \le \|y_0\|^2 e^T (T + \gamma_2).$$
So,
$$\gamma_1 \|u_n\|_{L^2(Q_T)}^2 \le J_{\pi}(u_n) \le \inf J_{\pi}(u) + \frac{1}{n} \le J_{\pi}(0) + \frac{1}{n} \le J_{\pi}(0) + 1 \le \|y_0\|^2 e^T (T + \gamma_2) + 1.$$ (10)
Equations (10) and (6) imply that
$$\|u_n\|_{L^2(Q_T)}^2 + \max_{t \in [0,T]} \|y_n(t)\|^2 \le C,$$
where
$$C = \frac{\|y_0\|^2 e^T (T + \gamma_2) + 1}{\gamma_1}\left(1 + e^T\right) + \|y_0\|^2 e^T$$
does not depend on $n$, $\pi$, and $A$.
Moreover, from (7), we deduce that for every $A$, $\{y_n\}$ is bounded in $W(0,T)$. As $W(0,T)$ is compactly embedded in $L^2(Q_T)$, up to a subsequence, for some $\{\bar{y}_A, \bar{u}\}$,
$$u_n \to \bar{u} \ \text{weakly in } L^2(Q_T), \quad y_n \to \bar{y}_A \ \text{weakly in } W(0,T), \quad y_n \to \bar{y}_A \ \text{in } L^2(Q_T) \ \text{and a.e. in } Q_T, \quad y_n(t) \to \bar{y}_A(t) \ \text{weakly in } L^2(\Omega) \ \ \forall t \in [0,T].$$ (11)
Taking an arbitrary $\xi \in C_0^{\infty}(0,T)$ and passing to the limit in the equality
$$-\int_0^T (y_n(t), v)\,\xi'(t)\,dt + \int_0^T (A \nabla y_n, \nabla v)\,\xi(t)\,dt = \int_0^T (u_n(t), v)\,\xi(t)\,dt,$$
we obtain that $\bar{y}_A$ is a solution of (1) with control $\bar{u}$ and matrix $A$. Due to the uniqueness of such a solution, the whole sequence $y_n$ tends to $\bar{y}_A$.
Moreover, due to (11), $\forall t \in [0,T]$ we get
$$\|\bar{u}\|_{L^2(Q_T)} + \|\bar{y}_A(t)\|^2 \le \liminf_{n \to \infty}\left\{\|u_n\|_{L^2(Q_T)} + \|y_n(t)\|^2\right\} \le C.$$
Also, using Fatou's Lemma, we get
$$\liminf_{n \to \infty} J_{\pi}(u_n) = \liminf_{n \to \infty} \int_{\mathcal{A}} \left(\|y_n\|_{L^2(Q_T)}^2 + \gamma_1 \|u_n\|_{L^2(Q_T)}^2 + \gamma_2 \|y_n(T)\|^2\right) d\pi(A) \ge \int_{\mathcal{A}} \left(\liminf_{n \to \infty}\|y_n\|_{L^2(Q_T)}^2 + \gamma_1 \liminf_{n \to \infty}\|u_n\|_{L^2(Q_T)}^2 + \gamma_2 \liminf_{n \to \infty}\|y_n(T)\|^2\right) d\pi(A) \ge J_{\pi}(\bar{u}).$$
Therefore, $\{\bar{y}_A, \bar{u}\} =: \{\bar{y}_{\pi}, \bar{u}_{\pi}\}$ is a solution of the LQR optimal control problem (1), (2), and (9) holds.
Because of the strict convexity of $u \mapsto J_{\pi}(u)$, we obtain uniqueness. Thus, the theorem is proved. □
Now, let us assume that
$$\pi_n \xrightarrow{w} \pi.$$ (12)
The convergence of optimal controls under the weak convergence of probability measures is analyzed using the Wasserstein metric, a powerful tool in probability and optimal transport theory [31].
Let $\{\bar{y}_{\pi_n}, \bar{u}_{\pi_n}\}$ and $\{\bar{y}_{\pi}, \bar{u}_{\pi}\}$ be the optimal solutions of (1), (2) for the measures $\pi_n$ and $\pi$, respectively.
Theorem 2. 
Under assumption (12),
$$J_{\pi_n}(\bar{u}_{\pi_n}) \to J_{\pi}(\bar{u}_{\pi}),$$ (13)
$$\bar{u}_{\pi_n} \to \bar{u}_{\pi} \ \text{in } L^2(Q_T),$$ (14)
$$\forall A \in \mathcal{A}, \quad \bar{y}_{\pi_n} \to \bar{y}_{\pi} \ \text{in } C([0,T]; L^2(\Omega)).$$ (15)
Proof of Theorem 2. 
For every $u \in L^2(Q_T)$, due to (6) and (8),
$$\left|\int_{\mathcal{A}} \left(\|y_{A_1}\|_{L^2(Q_T)}^2 + \gamma_1 \|u\|_{L^2(Q_T)}^2 + \gamma_2 \|y_{A_1}(T)\|^2\right) d\pi_n(A_1) - \int_{\mathcal{A}} \left(\|y_{A_2}\|_{L^2(Q_T)}^2 + \gamma_1 \|u\|_{L^2(Q_T)}^2 + \gamma_2 \|y_{A_2}(T)\|^2\right) d\pi(A_2)\right| = \left|\int_{\mathcal{A} \times \mathcal{A}} \left(\|y_{A_1}\|_{L^2(Q_T)}^2 - \|y_{A_2}\|_{L^2(Q_T)}^2\right) + \gamma_2\left(\|y_{A_1}(T)\|^2 - \|y_{A_2}(T)\|^2\right) d\mu\right| \le C\left(\|u\|_{L^2(Q_T)}\right) \cdot \int_{\mathcal{A} \times \mathcal{A}} \|A_1 - A_2\|^{1/2}\, d\mu,$$ (16)
where $C(\|u\|_{L^2(Q_T)})$ is bounded if $\|u\|_{L^2(Q_T)}$ is bounded, and $\mu$ is a probability measure on $\mathcal{A} \times \mathcal{A}$ with projections $\pi_n$ and $\pi$. If we take $d(A_1, A_2) = \|A_1 - A_2\|^{1/2}$, then the space $(\mathcal{A}, d)$ is a Polish metric space. Therefore, from (16), we get, for $u \in L^2(Q_T)$,
$$\left|J_{\pi_n}(u) - J_{\pi}(u)\right| \le C\left(\|u\|_{L^2(Q_T)}\right) \rho(\pi_n, \pi).$$ (17)
Let us denote
$$C = \max\left\{C\left(\|\bar{u}_{\pi_n}\|_{L^2(Q_T)}\right),\ C\left(\|\bar{u}_{\pi}\|_{L^2(Q_T)}\right)\right\}.$$
Due to (9), the number $C > 0$ does not depend on $n$.
Then, the optimality of $\bar{u}_{\pi_n}$ and $\bar{u}_{\pi}$ implies
$$\inf J_{\pi_n}(u) - \inf J_{\pi}(u) = \inf J_{\pi_n}(u) - J_{\pi}(\bar{u}_{\pi}) \le J_{\pi_n}(\bar{u}_{\pi}) - J_{\pi}(\bar{u}_{\pi}) \le C \rho(\pi_n, \pi),$$ (18)
$$\inf J_{\pi}(u) - \inf J_{\pi_n}(u) = \inf J_{\pi}(u) - J_{\pi_n}(\bar{u}_{\pi_n}) \le J_{\pi}(\bar{u}_{\pi_n}) - J_{\pi_n}(\bar{u}_{\pi_n}) \le C \rho(\pi, \pi_n).$$ (19)
Inequalities (18) and (19) guarantee (13).
In the following, we denote $\bar{u}_n := \bar{u}_{\pi_n}$, $\bar{u} := \bar{u}_{\pi}$.
Due to (7) and (9), we obtain that
$$\{\bar{u}_n\} \ \text{is bounded in } L^2(Q_T), \qquad \forall A \in \mathcal{A}\ \ \{\bar{y}_n\} \ \text{is bounded in } W(0,T).$$
So, up to a subsequence, for some $\{\tilde{u}, \tilde{y}\}$, the sequence $\{\bar{u}_n, \bar{y}_n\}$ tends to $\{\tilde{u}, \tilde{y}\}$ in the sense of (11).
Passing to the limit yields that $\tilde{y}$ is a solution of (1) with control $\tilde{u}$.
Due to (17),
$$J_{\pi}(\bar{u}_n) \le J_{\pi_n}(\bar{u}_n) + C \rho(\pi_n, \pi).$$
So, using (11), (13), and Fatou's Lemma, we get
$$J_{\pi}(\tilde{u}) \le \liminf_{n \to \infty} J_{\pi}(\bar{u}_n) \le \liminf_{n \to \infty}\left(J_{\pi_n}(\bar{u}_n) + C \rho(\pi_n, \pi)\right) = J_{\pi}(\bar{u}).$$ (20)
The uniqueness of the solution of problem (1), (2) and inequality (20) imply that $\tilde{u} = \bar{u}$ and, consequently, $\tilde{y} = \bar{y}$, where $\bar{y} := \bar{y}_{\pi}$.
This inequality also implies the strong convergence $\bar{u}_n \to \bar{u}$ in $L^2(Q_T)$.
Indeed, on the one hand,
$$\limsup_{n \to \infty} J_{\pi}(\bar{u}_n) \le J_{\pi}(\bar{u}) = \int_{\mathcal{A}} \left(\|\bar{y}\|_{L^2(Q_T)}^2 + \gamma_2 \|\bar{y}(T)\|^2\right) d\pi(A) + \gamma_1 \|\bar{u}\|_{L^2(Q_T)}^2.$$
And on the other hand,
$$\limsup_{n \to \infty} J_{\pi}(\bar{u}_n) \ge \liminf_{n \to \infty} \int_{\mathcal{A}} \left(\|\bar{y}_n\|_{L^2(Q_T)}^2 + \gamma_2 \|\bar{y}_n(T)\|^2\right) d\pi(A) + \gamma_1 \limsup_{n \to \infty} \|\bar{u}_n\|_{L^2(Q_T)}^2 \ge \int_{\mathcal{A}} \left(\liminf_{n \to \infty}\|\bar{y}_n\|_{L^2(Q_T)}^2 + \gamma_2 \liminf_{n \to \infty}\|\bar{y}_n(T)\|^2\right) d\pi(A) + \gamma_1 \limsup_{n \to \infty} \|\bar{u}_n\|_{L^2(Q_T)}^2 \ge \int_{\mathcal{A}} \left(\|\bar{y}\|_{L^2(Q_T)}^2 + \gamma_2 \|\bar{y}(T)\|^2\right) d\pi(A) + \gamma_1 \limsup_{n \to \infty} \|\bar{u}_n\|_{L^2(Q_T)}^2.$$
So,
$$\limsup_{n \to \infty} \|\bar{u}_n\|_{L^2(Q_T)} \le \|\bar{u}\|_{L^2(Q_T)}.$$
This inequality and the weak convergence of $\bar{u}_n$ to $\bar{u}$ in $L^2(Q_T)$ imply the strong convergence (14).
Let us prove (15). We already know from (11) that
$$\forall A \in \mathcal{A}, \quad \bar{y}_n(t) \to \bar{y}(t) \ \text{for a.e. } t \in (0,T),$$
and that the functions $t \mapsto \|\bar{y}_n(t)\|$, $t \mapsto \|\bar{y}(t)\|$ are continuous.
From (1), we deduce
$$\frac{1}{2}\frac{d}{dt}\|\bar{y}_n(t)\|^2 \le (\bar{u}_n(t), \bar{y}_n(t)).$$
Therefore, for all $t > s > 0$,
$$\|\bar{y}_n(t)\|^2 \le \|\bar{y}_n(s)\|^2 + 2\int_s^t (\bar{u}_n(\tau), \bar{y}_n(\tau))\, d\tau.$$
So, the function
$$t \mapsto J_n(t) = \|\bar{y}_n(t)\|^2 - 2\int_0^t (\bar{u}_n(\tau), \bar{y}_n(\tau))\, d\tau$$
is non-increasing, continuous, and
$$J_n(t) \to J(t) \ \text{a.e.},$$
where $J(t) = \|\bar{y}(t)\|^2 - 2\int_0^t (\bar{u}(\tau), \bar{y}(\tau))\, d\tau$. Then, from Dini's Theorem,
$$J_n(t) \to J(t) \ \text{in } C([0,T]),$$
which implies (15). □

5. An Example

The aim of this section is to illustrate the obtained results using the scheme utilized in [4]. We assume that $\mathcal{A}$ is a finite set, i.e., $\mathcal{A} = \{A_1, \ldots, A_M\}$ for some $M \in \mathbb{N}$. We consider a sequence of probability distributions $\{\pi_n\}_{n=1}^{\infty}$:
$$\pi_n = \sum_{i=1}^{M} \alpha_i^n \cdot \delta_{A_i}, \quad \text{where } \alpha_i^n \ge 0, \ \sum_{i=1}^{M} \alpha_i^n = 1,$$
where $\delta_{A_i}$ is the Dirac delta measure concentrated at $A_i$.
Assume that
$$\alpha_1^n \to 1, \quad \alpha_i^n \to 0, \ i = 2, \ldots, M, \quad \text{as } n \to \infty.$$
Then, clearly,
$$\pi_n \to \delta_{A_1} \ \text{weakly}.$$
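This can also be verified directly through the Wasserstein characterization of Section 3 (a short supplementary check, not spelled out in the text): taking the product coupling $\mu_n = \sum_{i=1}^{M} \alpha_i^n\, \delta_{(A_i, A_1)} \in \Gamma(\pi_n, \delta_{A_1})$ gives
$$\rho(\pi_n, \delta_{A_1}) \le \sum_{i=2}^{M} \alpha_i^n\, d(A_i, A_1) \to 0 \quad \text{as } n \to \infty,$$
so assumption (12) of Theorem 2 holds.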
For the measure $\pi = \delta_{A_1}$, problem (1), (2) becomes the following LQR optimal control problem:
$$\frac{\partial y(t,x)}{\partial t} - L_{A_1} y(t,x) = u(t,x), \quad y|_{x \in \partial\Omega} = 0, \quad y|_{t=0} = y_0(x),$$ (22)
$$J_{\delta_{A_1}}(u) = \int_0^T \left(\int_{\Omega} y^2(t,x)\,dx + \gamma_1 \int_{\Omega} u^2(t,x)\,dx\right) dt + \gamma_2 \int_{\Omega} y^2(T,x)\,dx \to \inf.$$ (23)
According to Theorem 1, problem (22), (23) has a unique solution $\{\bar{y}_{\delta_{A_1}}, \bar{u}_{\delta_{A_1}}\}$, which satisfies (9). For the measure $\pi_n = \sum_{i=1}^{M} \alpha_i^n \delta_{A_i}$, we have the problem of minimizing the functional
$$J_{\pi_n}(u) = \sum_{i=1}^{M} \alpha_i^n \cdot J_{\delta_{A_i}}(u) \to \inf,$$ (24)
where
$$J_{\delta_{A_i}}(u) = \int_0^T \left(\int_{\Omega} y^2(t,x)\,dx + \gamma_1 \int_{\Omega} u^2(t,x)\,dx\right) dt + \gamma_2 \int_{\Omega} y^2(T,x)\,dx,$$
and $y$ is a solution of (1) with $A = A_i$. With obvious changes, we can apply the arguments (10), (11) to problem (24) and obtain that this problem has a unique solution $\bar{u}_{\pi_n}$, which, together with its $y$-components, satisfies estimate (9) with a constant $C$ not depending on $n$. This means that
$$J_{\delta_{A_i}}(\bar{u}_{\pi_n}) \le C^2 (T + \gamma_1 + \gamma_2), \quad i = 1, \ldots, M, \ n \ge 1.$$
Therefore, using a well-known fact [3] (if, in (1), $u_n \to u$ weakly in $L^2(Q_T)$, then $y_n \to y$ in $C([0,T]; L^2(\Omega))$), we can pass to the limit and obtain
$$J_{\pi_n}(\bar{u}_{\pi_n}) = \alpha_1^n \cdot J_{\delta_{A_1}}(\bar{u}_{\pi_n}) + \sum_{i=2}^{M} \alpha_i^n \cdot J_{\delta_{A_i}}(\bar{u}_{\pi_n}) \to J_{\delta_{A_1}}(\bar{u}_{\delta_{A_1}}), \quad n \to \infty.$$
We can give a numerical illustration in the simplest case: $d = 1$, $\Omega = (0, \pi)$, $\gamma_1 = \gamma_2 = \gamma_3 = 1$, $M = 2$, $A_1 = 1$, $A_2 = 2$, $\alpha_1^n = 1 - \frac{1}{2^n}$, $\alpha_2^n = \frac{1}{2^n}$. Then, for $T = 1$ and $y_0(x) = \sin x$, problem (22), (23) takes the form
$$\frac{\partial y(t,x)}{\partial t} - \frac{\partial^2 y(t,x)}{\partial x^2} = u(t,x), \quad y|_{x=0} = y|_{x=\pi} = 0, \quad y|_{t=0} = \sin x,$$ (26)
$$J_{\delta_{A_1}}(u) = \int_0^1 \int_0^{\pi} \left(y^2(t,x) + u^2(t,x)\right) dx\, dt + \int_0^{\pi} y^2(1,x)\, dx \to \inf.$$ (27)
Using Fourier analysis, we can reduce problem (26), (27) to the classical calculus of variations problem
$$\inf_u J_{\delta_{A_1}}(u) = \inf_{\substack{y \in C^1([0,1]) \\ y(0) = 1}} \left\{\int_0^1 \left(y^2(t) + \left(y'(t) + y(t)\right)^2\right) dt + y^2(1)\right\}.$$ (28)
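For the reader's convenience, a standard Euler–Lagrange computation (a supplementary step not carried out in the text) shows that a minimizer of (28) satisfies
$$y''(t) = 2y(t), \quad t \in (0,1), \qquad y(0) = 1, \qquad y'(1) + 2y(1) = 0,$$
where the condition at $t = 1$ is the natural (transversality) condition generated by the terminal term $y^2(1)$; hence $y(t) = C_1 e^{\sqrt{2}t} + C_2 e^{-\sqrt{2}t}$ with the constants fixed by the two boundary conditions, which provides a closed-form benchmark for the numerical sketch at the end of this section.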
For problem (24), we have
$$\inf_u J_{\pi_n}(u) = \inf_{\substack{y_1, y_2 \in C^1([0,1]) \\ y_1(0) = y_2(0) = 1}} \left\{\left(1 - \frac{1}{2^n}\right)\left[\int_0^1 \left(y_1^2(t) + \left(y_1'(t) + y_1(t)\right)^2\right) dt + y_1^2(1)\right] + \frac{1}{2^n}\left[\int_0^1 \left(y_2^2(t) + \left(y_2'(t) + 2y_2(t)\right)^2\right) dt + y_2^2(1)\right]\right\},$$
which clearly tends to the value of (28) as $n \to \infty$.
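The following is a minimal numerical sketch of this convergence (it is not part of the paper): the reduction to the first Fourier mode $c'(t) + a\,c(t) = v(t)$, $c(0) = 1$, the forward-Euler time discretization, the grid size, and the use of scipy.optimize.minimize are all illustrative assumptions; in the averaged case a single control is shared by both dynamics, as in (24).

```python
# Minimal sketch: discretized reduced problems for A1 = 1 and A2 = 2 and the
# averaged problem with pi_n = (1 - 2^-n) delta_A1 + 2^-n delta_A2.
import numpy as np
from scipy.optimize import minimize

N = 100                      # number of time steps on [0, 1] (assumed)
dt = 1.0 / N

def state(v, a):
    """Forward-Euler solution of c' + a*c = v on [0, 1] with c(0) = 1."""
    c = np.empty(N + 1)
    c[0] = 1.0
    for k in range(N):
        c[k + 1] = c[k] + dt * (v[k] - a * c[k])
    return c

def cost(v, a):
    """Discretization of  int_0^1 (c^2 + v^2) dt + c(1)^2  for coefficient a."""
    c = state(v, a)
    return np.sum(c[:-1]**2 + v**2) * dt + c[-1]**2

def averaged_cost(v, alpha):
    """alpha * J_{A1}(v) + (1 - alpha) * J_{A2}(v) with a shared control v, cf. (24)."""
    return alpha * cost(v, 1.0) + (1.0 - alpha) * cost(v, 2.0)

v0 = np.zeros(N)
J1 = minimize(cost, v0, args=(1.0,), method="L-BFGS-B").fun   # approximates (28)
print(f"inf J_delta_A1 ~ {J1:.4f}")

for n in [1, 3, 5, 10]:
    alpha = 1.0 - 0.5**n
    Jn = minimize(averaged_cost, v0, args=(alpha,), method="L-BFGS-B").fun
    print(f"n = {n:2d}:  inf J_pi_n ~ {Jn:.4f}")
```

As $n$ grows, the printed values of $\inf J_{\pi_n}$ approach the value obtained for $\delta_{A_1}$, in line with Theorem 2.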

6. Conclusions

In this paper, we studied an averaged Linear Quadratic Regulator (LQR) problem for a parabolic partial differential equation (PDE), where the system dynamics are described by a probability distribution over possible operators. This formulation generalizes the classical LQR problem by incorporating uncertainty in the system parameters through an averaging approach.
The main contributions of this work are the following:
  • We established the existence and uniqueness of the optimal control solution under appropriate assumptions.
  • We proved the convergence of the optimal control as the probability distribution governing the system dynamics become more concentrated.
These results provide a rigorous theoretical foundation for analyzing control problems with uncertainty in system parameters.
In future research, we plan to generalize our results to multidimensional parabolic systems and evolution problems on infinite-time intervals. Moreover, it would be interesting to extend the results to hyperbolic partial differential equations (PDEs), which model wave-like and transport phenomena [32]. Exploring the control of such systems within a reinforcement learning framework may yield new methods for optimal control in dynamic environments [33], with potential applications in robotics, physics-based simulations, and real-time decision-making systems.
Beyond the realm of physical systems, intelligent control and learning frameworks are gaining traction in socio-technical domains. One notable example is the use of machine learning models to support automated recruitment and decision-making for hiring young professionals [34]. Additionally, the development of advanced database connectors underscores the relevance of scalable control and dataflow mechanisms in distributed computing systems [35].

Author Contributions

Conceptualization, O.K., A.M., and O.L.; methodology, O.K.; formal analysis, O.K.; investigation, O.K., A.M., and O.L.; writing—original draft preparation, A.M.; writing—review and editing, O.K. and A.M.; visualization, A.M.; supervision, O.K.; project administration, O.K.; funding acquisition, O.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Evans, L.C. Partial Differential Equations, 2nd ed.; American Mathematical Society: Providence, RI, USA, 2010. [Google Scholar]
  2. Anderson, B.D.O.; Moore, J.B. Optimal Control: Linear Quadratic Methods; Prentice Hall: Englewood Cliffs, NJ, USA, 1989. [Google Scholar]
  3. Kapustyan, O.V.; Kapustyan, O.A.; Sukretna, A.V. Approximate bounded synthesis for one weakly nonlinear boundary-value problem. Nonlinear Oscil. 2009, 12, 297–304. [Google Scholar] [CrossRef]
  4. Pesare, A.; Palladino, M.; Falcone, M. Convergence results for an averaged LQR problem with applications to reinforcement learning. Math. Control Signals Syst. 2021, 33, 379–411. [Google Scholar] [CrossRef]
  5. Kapustian, O.A. Approximate optimal regulator for distributed control problem with superposition functional and rapidly oscillating coefficients. In Modern Mathematics and Mechanics; Sadovnichiy, V., Zgurovsky, M., Eds.; Springer: Cham, Switzerland, 2019; pp. 199–208. [Google Scholar]
  6. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction, 2nd ed.; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  7. Sutton, R.S.; Barto, A.G.; Williams, R.J. Reinforcement learning is direct adaptive optimal control. IEEE Control Syst. 1992, 12, 19–22. [Google Scholar]
  8. Recht, B. A tour of reinforcement learning: The view from continuous control. Annu. Rev. Control Robot Auton. Syst. 2019, 2, 253–279. [Google Scholar] [CrossRef]
  9. Atkeson, C.G.; Santamaria, J.C. A comparison of direct and model-based reinforcement learning. In Proceedings of the International Conference on Robotics and Automation, Albuquerque, NM, USA, 20–25 April 1997; Volume 4, pp. 3557–3564. [Google Scholar]
  10. Deisenroth, M.P. Efficient Reinforcement Learning Using Gaussian Processes; KIT Scientific Publishing: Karlsruhe, Germany, 2010. [Google Scholar]
  11. Deisenroth, M.P.; Fox, D.; Rasmussen, C.E. Gaussian processes for data-efficient learning in robotics and control. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 37, 408–423. [Google Scholar] [CrossRef]
  12. Gal, Y.; McAllister, R.; Rasmussen, C.E. Improving PILCO with Bayesian neural network dynamics models. In Proceedings of the ICML Workshop on Data-Efficient Machine Learning, New York, NY, USA, 24 June 2016; Volume 4, p. 25. [Google Scholar]
  13. Janner, M.; Fu, J.; Zhang, M.; Levine, S. When to trust your model: Model-based policy optimization. Adv. Neural Inf. Process. Syst. 2019, 32, 12519–12530. [Google Scholar]
  14. Kamthe, S.; Deisenroth, M. Data-efficient reinforcement learning with probabilistic model predictive control. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), Playa Blanca, Lanzarote, Spain, 9–11 April 2018; pp. 1701–1710. [Google Scholar]
  15. Chowdhary, G.; Kingravi, H.A.; How, J.P.; Vela, P.A. A Bayesian nonparametric approach to adaptive control using Gaussian processes. In Proceedings of the IEEE Conference Decision Control (CDC), Florence, Italy, 10–13 December 2013; pp. 874–879. [Google Scholar]
  16. Chua, K.; Calandra, R.; McAllister, R.; Levine, S. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. Adv. Neural Inf. Process. Syst. 2018, 31, 4754–4765. [Google Scholar]
  17. Wang, T.; Bao, X.; Clavera, I.; Hoang, J.; Wen, Y.; Langlois, E.; Zhang, S.; Zhang, G.; Abbeel, P.; Ba, J. Benchmarking Model-Based Reinforcement Learning. arXiv 2019, arXiv:1907.02057. [Google Scholar]
  18. Murray, R.; Palladino, M. A model for system uncertainty in reinforcement learning. Syst. Control Lett. 2018, 122, 24–31. [Google Scholar] [CrossRef]
  19. Murray, R.; Palladino, M. Modelling uncertainty in reinforcement learning. In Proceedings of the IEEE Conference on Decision and Control (CDC), Nice, France, 11–13 December 2019; pp. 2436–2441. [Google Scholar]
  20. Bettiol, P.; Khalil, N. Necessary optimality conditions for average cost minimization problems. Discrete Contin. Dyn. Syst. B 2019, 24, 2093. [Google Scholar] [CrossRef]
  21. Palladino, M. Necessary conditions for adverse control problems expressed by relaxed derivatives. Set-Valued Var. Anal. 2016, 24, 659. [Google Scholar] [CrossRef]
  22. Ross, I.M.; Proulx, R.J.; Karpenko, M.; Gong, Q. Riemann–Stieltjes optimal control problems for uncertain dynamic systems. J. Guid. Control Dyn. 2015, 38, 1251–1263. [Google Scholar] [CrossRef]
  23. Lohéac, J.; Zuazua, E. From averaged to simultaneous controllability. Ann. Fac. Sci. Toulouse Math. 2016, 25, 785–828. [Google Scholar] [CrossRef]
  24. Zuazua, E. Averaged control. Automatica 2014, 50, 3077–3087. [Google Scholar] [CrossRef]
  25. Barabash, O.; Sobchuk, V.; Sobchuk, A.; Musienko, A.; Laptiev, O. Algorithms for synthesis of functionally stable wireless sensor network. Adv. Inf. Syst. 2025, 9, 70–79. [Google Scholar]
  26. Doya, K. Reinforcement learning in continuous time and space. Neural Comput. 2000, 12, 219–245. [Google Scholar] [CrossRef]
  27. Lee, J.Y.; Park, J.B.; Choi, Y.H. Integral reinforcement learning for continuous-time input-affine nonlinear systems with simultaneous invariant explorations. IEEE Trans. Neural Netw. Learn. Syst. 2014, 26, 916–932. [Google Scholar]
  28. Lewis, F.L.; Vrabie, D. Reinforcement learning and adaptive dynamic programming for feedback control. IEEE Circuits Syst. Mag. 2009, 9, 32. [Google Scholar] [CrossRef]
  29. Munos, R. A study of reinforcement learning in the continuous case by the means of viscosity solutions. Mach. Learn. 2000, 40, 265–297. [Google Scholar] [CrossRef]
  30. Temam, R. Infinite-Dimensional Dynamical Systems in Mechanics and Physics; Springer Science & Business Media: Cham, Switzerland, 2013. [Google Scholar]
  31. Villani, C. Optimal Transport: Old and New; Springer: Berlin, Germany, 2009. [Google Scholar]
  32. Lions, J.L. Contrôlabilité Exacte, Perturbations et Stabilisation de Systèmes Distribués; Masson: Paris, France, 1988. [Google Scholar]
  33. Fleming, W.H.; Soner, H.M. Controlled Markov Processes and Viscosity Solutions, 2nd ed.; Springer: New York, NY, USA, 2006. [Google Scholar]
  34. Makarovych, V.; Makarovych, A. Analysis of socio-economic determinants of youth employment using machine learning methods. Acta Acad. Beregsasiensis Econ. 2024, 6, 81–101. [Google Scholar]
  35. Glebena, M.I.; Makarovych, A.V. SingleStoreDB connector for Apache Beam. Sci. Bull. Uzhhorod Univ. Ser. Math. Inf. 2024, 44, 66–82. [Google Scholar] [CrossRef] [PubMed]
