Article

Energy Management and Control for Linear–Quadratic–Gaussian Systems with Imperfect Acknowledgments and Energy Constraints

1 School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
2 College of Science, University of Shanghai for Science and Technology, Shanghai 200093, China
3 Institute of Applied Physics and Computational Mathematics, Beijing 100088, China
* Author to whom correspondence should be addressed.
Axioms 2025, 14(11), 791; https://doi.org/10.3390/axioms14110791
Submission received: 8 September 2025 / Revised: 19 October 2025 / Accepted: 24 October 2025 / Published: 27 October 2025

Abstract

This paper explores the optimal control issue for a linear–quadratic–Gaussian (LQG) system under the conditions of imperfect feedback and constraints related to energy harvesting. The system is equipped with various energy options, which allow it to gather energy for information transmission while also receiving imperfect feedback from an auxiliary filter that estimates packet loss. The primary goal of this study is to jointly design the energy selector and the controller to achieve an optimal balance between transmission costs and control performance. Initially, we separate the controller’s synthesis task from the energy selection task. The subproblem of optimal controller synthesis is characterized by a Riccati equation that takes continuous packet loss into account. Simultaneously, the energy selection task, influenced by imperfect feedback and constraints on energy costs, is reformulated as a Markov decision process (MDP) that operates with perfect acknowledgments through iterative updates of state information. Ultimately, the optimal energy selection policy that guarantees filtering performance is derived by solving a Bellman equation. The effectiveness of the proposed approach is confirmed through simulation results.

1. Introduction

Owing to rapid advances in networking, communication technology, and computing, modern control systems tightly intertwine the cyber and physical realms. Ensuring the stability of such systems requires dependable signals transmitted among the components over a shared communication network. As demonstrated in [1,2], improving performance and reliability becomes challenging in the presence of resource constraints. Because of these constraints, measurement errors are inevitable when estimating actual systems and making decisions; in such cases, the errors are often handled in probabilistic terms to achieve better results [3,4].
Linear–quadratic–Gaussian (LQG) control, a classical and widely used optimal control method in modern control theory, primarily addresses the optimal control of linear stochastic systems. It achieves optimal feedback control laws by solving the Riccati equation, integrating state feedback with Kalman filtering. In practice, LQG control has been successfully applied in fields such as aerospace, robotic control, and power systems. However, as modern industries move toward intelligence and networking, the dependence of control systems on networks has increased significantly, and the impact of communication constraints on system performance can no longer be overlooked [5,6,7]. Taking smart grids as an example, limited communication bandwidth restricts the amount of real-time data transmitted between power nodes. When grid loads change suddenly, nodes may fail to exchange power adjustment information in time, exacerbating voltage fluctuations and potentially triggering local grid failures. Aghaee et al. [8] focused on the impact of communication delays and data packet loss on distributed secondary control for microgrids. The study in [9] investigates the safety of autonomously controlled surgical robots, focusing on the modeling and control of nonlinear tissue compression and heating using LQG methods. The authors of [10] showcase the trade-off between control costs and data transmission rates in infinite-horizon LQG systems. Furthermore, the work in [11] examines the optimal tracking performance of single-input single-output networked control systems over restricted communication channels. In a similar vein, this paper studies the LQG control problem under communication limitations.
Prior research [12,13,14] has examined the energy transmission problem in systems incorporating energy harvesters, which play a crucial role in transforming various environmental energy sources into electricity. In [15], the authors demonstrated that a distributed resource allocation game algorithm significantly enhances energy efficiency and reduces network interference, thereby ensuring successful data transmission. Additionally, in [16], the authors investigated the energy transmission challenges of a sensor that harvests energy for remote state estimation, using a continuous-time methodology together with perturbation analysis. Building on this foundation, they demonstrated the existence of an optimal deterministic and stationary power-allocation policy [17]. It is evident that integrating energy harvesters substantially improves system performance; consequently, we posit that the energy selector can efficiently harvest energy.
Generally speaking, control strategies and energy selection strategies have long needed to be co-designed. For example, it was shown in [2] that the optimal controller does not exhibit a separation principle, whereas [5] provided the essential criteria under which a separation principle holds. Based on the separation principle of [5], a novel framework was introduced in [18] that integrates a controller with a quantization selector, enabling dynamic selection of the optimal quantization level from a specified set. Nevertheless, the optimal joint structure of the controller and the energy selector has not been sufficiently investigated. This paper adopts a similar approach and investigates the separation structure between the controller and the energy selector. In contrast to [18], this paper considers an unreliable communication channel subject to packet loss. In [19], the authors addressed energy management for a controller that transmits information to a sensor over channels with unknown packet loss; unlike [19], this paper examines a scenario where the plant transmits information to the filter through an unknown packet-dropping link. As in the channel model of [17], the transmission energy affects the packet loss probability. Beyond the perfect-acknowledgment settings of [17,20], this study also considers a feedback channel that itself acts as an imperfect packet-dropping link between the filter and the plant.
This study examines the joint control and energy-scheduling problem under an imperfect feedback channel. At each time step, the energy selector determines the power allocated for data transmission based on imperfect acknowledgments and the currently available energy, while the controller produces outputs that maintain a balance between control performance and cost. The optimal control problem under the optimal estimated state is characterized by a Riccati equation that incorporates packet loss, and the energy selection problem is addressed by reformulating it as a Markov decision process (MDP) with accurate acknowledgments. In summary, this study makes three main contributions:
(1) The LQG optimal control problem under the optimal estimated state with given energy levels is investigated by considering imperfect acknowledgments and energy constraints.
(2) Accounting for packet loss information and transmission cost, a novel optimal controller structure under the optimal estimated state is derived via backward induction.
(3) The optimal energy selection strategies to ensure filtering performance and the suboptimal controller gain are co-designed, which can be computed offline and independently.
Notations: $X^\top$ and $\mathrm{tr}(X)$ denote the transpose and the trace of a real-valued matrix $X$, respectively, and $X^{-1}$ denotes its inverse. The sets of non-negative integers and positive integers are represented by $\mathbb{N}_0$ and $\mathbb{N}_+$, respectively, and $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space. Additionally, $\mathbb{P}[X]$ signifies the probability of $X$, while $\mathbb{E}[X \mid Y]$ denotes the conditional expectation of $X$ given $Y$.
As shown in Figure 1, the plant, on one hand, acquires energy from the environment via its integrated energy-harvesting subsystem (comprising an energy collector, an energy storage battery, and an energy selector) to generate $\sigma_t$. This energy is stored in the battery as $b_t$, and the energy selector then chooses the optimal transmission energy $p_t$ from $M$ energy levels to power the transmission of the state value. On the other hand, the plant transmits its own state $X_t$ to the filter through a wireless channel with packet loss. The filter performs state estimation based on the received state value $Y_t$ and sends the estimate to the controller. The controller, relying on its own information set, calculates the optimal control signal $U_t$ in accordance with the separation principle and feeds it back to the plant to stabilize its state. Meanwhile, the filter sends an imperfect acknowledgment signal $\hat{\gamma}_t$ to the plant, which the plant uses to adjust the subsequent energy-selection strategy, ultimately forming a complete closed loop of "state → energy optimization → estimation → control → feedback regulation" and enabling stable operation under energy constraints and imperfect feedback. In the following sections, we detail the different submodules of the plant diagram.

2. Plant

2.1. LQG System

Consider the following time-invariant LQG system:
$$X_{t+1} = A X_t + B U_t + W_t \qquad (1)$$
For every $t \in \mathbb{N}_0$, let $X_t \in \mathbb{R}^n$ denote the state of the system and $U_t \in \mathbb{R}^m$ the control input. The matrices $A$ and $B$ are constant with compatible dimensions. The sequence $\{W_t\}_{t \in \mathbb{N}_0}$ is a series of independent and identically distributed (i.i.d.) zero-mean Gaussian noises, $W_t \sim \mathcal{N}(0, W)$. The initial state $X_0$ follows a Gaussian distribution $\mathcal{N}(\mu_0, \Sigma)$.
In a wireless communication scenario plagued by packet loss, the plant conveys the measurement data to the filter. It is assumed that the wireless channel is an additive white Gaussian noise (AWGN) channel, in which the relationship between the bit error rate (BER) and the transmission energy $p_t$ is given in [17]:
$$\mathrm{BER} = 2\,\Phi\!\left(\sqrt{\frac{\alpha p_t}{S_0 K}}\right) \qquad (2)$$
where
$$\Phi(x) \triangleq \frac{1}{\sqrt{2\pi}} \int_x^{\infty} \exp(-t^2/2)\, dt \qquad (3)$$
and $\alpha$ is a constant parameter, $S_0$ denotes the noise power spectral density, and $K$ is the bandwidth of the wireless channel.
This study examines a situation in which energy availability is limited and the required volume of transmitted information is typically small. For simplicity, we assume that each data packet consists of a single bit (as employed, for example, in parity checks). If a packet is corrupted during transmission, the receiving end will not verify the parity correctly; hence, the packet dropout probability coincides with the BER. Various data-check methods outlined in [16] can identify packet errors.
We represent the transmission process by a binary random variable $\gamma_t$, where $\gamma_t = 1$ indicates that the transmitted signal is received without errors by the filter, while $\gamma_t = 0$ signifies a dropout. The dropout probability of the transmitted signals is
$$\mathbb{P}[\gamma_t = 0] = 2\,\Phi\!\left(\sqrt{\frac{\alpha p_t}{S_0 K}}\right). \qquad (4)$$
It can be deduced from Equation (4) that different transmission energies give rise to different dropout probabilities, which in turn affect the estimation performance at the filter. In terms of the indicator $\gamma_t$, the state available to the filter is
$$Y_t = \gamma_t X_t + (1 - \gamma_t)(A X_{t-1} + B U_{t-1}). \qquad (5)$$
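To make the channel model concrete, the following minimal Python sketch evaluates the dropout probability (4) as a function of the transmission energy. The values $\alpha = 0.5$, $S_0 = 0.8$, and $K = 2$ are taken from the Section 5 simulations; the function names are illustrative only.

```python
# A minimal sketch of the dropout model (4); Phi is the Gaussian tail function
# of (3), computed via the identity 2*Phi(x) = erfc(x / sqrt(2)).
from math import erfc, sqrt

def gaussian_tail(x):
    """Phi(x) = (1 / sqrt(2*pi)) * integral_x^inf exp(-t^2 / 2) dt."""
    return 0.5 * erfc(x / sqrt(2.0))

def dropout_prob(p_t, alpha=0.5, s0=0.8, k=2.0):
    """P[gamma_t = 0] = 2 * Phi(sqrt(alpha * p_t / (S_0 * K))), Eq. (4)."""
    return 2.0 * gaussian_tail(sqrt(alpha * p_t / (s0 * k)))

# Spending no energy makes dropout certain; the higher levels of the Example 1
# action set {0, 1, 3, 5, 7, 9} progressively reduce the dropout probability.
for p in [0, 1, 3, 5, 7, 9]:
    print(p, round(dropout_prob(p), 4))
```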

2.2. Energy Harvesting and Storage

The energy selector is equipped with an energy harvester that can gather energy from the environment. Denote by $\sigma_t$ the amount of energy harvested during step $t$. The harvesting process $\{\sigma_t\}$ is modeled as a stationary first-order homogeneous discrete-time Markov process. Let $C$ represent the maximum energy storage capacity of the battery, with $C \ge p_t$. Following [4], the evolution of the battery's available energy is
$$b_{t+1} = \min\{b_t - p_t + \sigma_{t+1},\ C\}, \quad t \in \mathbb{N}_0. \qquad (6)$$
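As an illustration, the battery recursion (6) translates directly into code; this is a minimal sketch, with the feasibility requirement $p_t \le b_t$ taken from the algorithm in Section 4.3.

```python
def battery_update(b_t, p_t, sigma_next, capacity):
    """Battery recursion (6): b_{t+1} = min(b_t - p_t + sigma_{t+1}, C)."""
    assert 0 <= p_t <= b_t, "transmission energy cannot exceed the stored energy"
    return min(b_t - p_t + sigma_next, capacity)

# A battery at level 6 with capacity C = 10 spends 3 units and harvests 4,
# ending at min(6 - 3 + 4, 10) = 7.
print(battery_update(6, 3, 4, capacity=10))
```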

2.3. Energy Selection

Divide the transmission energy into $M$ levels, and select the transmission energy $p_t$ among the $M$ energy levels. The $i$-th transmission energy level, denoted $Q_i$ ($i \in \{1, 2, \ldots, M\}$), is associated with a cost $\lambda(Q_i) = \lambda_i \in \mathbb{N}_+$. Define the decision variable $\pi_t^i$ for the selection of transmission energy as follows: $\pi_t^i = 1$ if the $i$-th transmission energy level is employed at time $t$, and $\pi_t^i = 0$ otherwise. Consequently, the vector $\pi_t \triangleq [\pi_t^1, \pi_t^2, \ldots, \pi_t^M]^\top \in \{0,1\}^M$ represents the transmission energy switching choice at time $t$. Because the selector employs exactly one transmission energy level at each time step, we have $\sum_{i=1}^{M} \pi_t^i = 1$ for all $t \in \mathbb{N}_0$.

3. Filter and Imperfect Feedback

The filter receives the state value $Y_t$ transmitted by the plant, performs state estimation based on historical measurements and control information, and transmits the estimate to the controller; at the same time, it sends an imperfect feedback signal to the plant.

3.1. Imperfect Feedback

A transmission acknowledgment signal is dispatched from the filter to the plant at every time step. This study considers a feedback channel that may introduce errors. The packet loss sequence $\{\gamma_t, t \in \mathbb{N}_0\}$ is unknown to the plant; instead, the plant receives an imperfect acknowledgment sequence $\{\hat{\gamma}_t, t \in \mathbb{N}_0\}$ from the filter. Following [4], the erroneous feedback channel is modeled as
$$\hat{\gamma}_t = \begin{cases} 0 \text{ or } 1, & \text{if } \beta_t = 1 \\ 2, & \text{if } \beta_t = 0. \end{cases} \qquad (7)$$
When $\beta_t = 0$, the feedback signal is completely lost, which occurs with a specified dropout probability $\eta \in [0, 1]$. Conversely, when $\beta_t = 1$, a transmission error arises with probability $\epsilon \in [0, 1]$; such an error yields $\hat{\gamma}_t = 0$ if $\gamma_t = 1$ and $\hat{\gamma}_t = 1$ if $\gamma_t = 0$. The conditional probability matrix of the feedback channel is
$$\mathcal{A} = (a_{mn}) = \begin{bmatrix} (1-\epsilon)(1-\eta) & \epsilon(1-\eta) & \eta \\ \epsilon(1-\eta) & (1-\epsilon)(1-\eta) & \eta \end{bmatrix} \qquad (8)$$
where $a_{mn} := \mathbb{P}(\hat{\gamma} = n-1 \mid \gamma = m-1)$ for $m \in \{1,2\}$ and $n \in \{1,2,3\}$. The plant receives perfect packet reception acknowledgments when both $\epsilon$ and $\eta$ are set to 0.
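For illustration, the following sketch samples an imperfect acknowledgment consistently with the conditional probability matrix (8): with probability $\eta$ the feedback packet is lost ($\hat{\gamma}_t = 2$); otherwise the acknowledgment bit is flipped with probability $\epsilon$. The function name is illustrative.

```python
import random

def imperfect_ack(gamma, eps, eta):
    """Sample gamma_hat given gamma, consistent with the channel matrix (8).

    Returns 2 with probability eta (feedback dropout, beta_t = 0); otherwise
    the acknowledgment bit is flipped with probability eps."""
    if random.random() < eta:     # beta_t = 0: acknowledgment packet lost
        return 2
    if random.random() < eps:     # transmission error on the feedback link
        return 1 - gamma
    return gamma                  # acknowledgment delivered correctly

# With eps = eta = 0 the acknowledgment is always perfect.
assert imperfect_ack(1, eps=0.0, eta=0.0) == 1
```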

3.2. State Estimation

Let us define the following sets: $\mathcal{X}_t \triangleq \{X_0, X_1, \ldots, X_t\}$ is the state history, $\mathcal{Y}_t \triangleq \{Y_0, Y_1, \ldots, Y_t\}$ is the history of state values received by the filter, $\mathcal{U}_t \triangleq \{U_0, U_1, \ldots, U_t\}$ is the control history, $\hat{\Gamma}_t \triangleq \{\hat{\gamma}_0, \hat{\gamma}_1, \ldots, \hat{\gamma}_t\}$ is the history of feedback signals, and $\Pi_t \triangleq \{\pi_0, \pi_1, \ldots, \pi_t\}$ is the history of transmission selections.
The data accessible to the controller at time t is represented as
$$\mathcal{F}_t^c = \sigma(Y_0, \pi_0, Y_1, U_0, \pi_1, \ldots, Y_t, U_{t-1}, \pi_t) \qquad (9)$$
with $\mathcal{F}_0^c = \sigma(Y_0, \pi_0)$, where $\sigma(\cdot)$ denotes a $\sigma$-algebra. Based on the definition in Equation (9), an admissible control strategy can be interpreted as a mapping from $\mathcal{F}_t^c$ to $\mathbb{R}^m$; we denote these strategies by $\xi_t^u$. In contrast to the feedback-based control methods described in [2], our approach transmits $W_{t-1}$ rather than $X_t$ at time $t$; note that $W_{t-1}$ is easily computed from the values of $X_t$, $X_{t-1}$, and $U_{t-1}$.
We now introduce the notation $\hat{X}_t = \mathbb{E}[X_t \mid \mathcal{F}_{t-1}^c]$, called the prediction of $X_t$, and $\tilde{X}_t = \mathbb{E}[X_t \mid \mathcal{F}_t^c]$, the filtered version of $X_t$. We also write $\hat{\omega}_t(\pi_{t+1}) = \mathbb{E}[W_t \mid \mathcal{F}_{t+1}^c]$.
Using (1) and because $U_t$ is $\mathcal{F}_t^c$-measurable, we have
$$\hat{X}_{t+1} = A \tilde{X}_t + B U_t \qquad (10)$$
$$\tilde{X}_t = \mathbb{E}[X_t \mid \mathcal{F}_t^c] = \hat{X}_t + \hat{\omega}_{t-1}(\pi_t). \qquad (11)$$
Let us introduce the error $\Delta_t = X_t - \tilde{X}_t$. It follows that
$$\Delta_{t+1} = A \Delta_t + W_t - \hat{\omega}_t = \cdots = A^{t+1} \Delta_0 + \sum_{k=0}^{t} A^{t-k}(W_k - \hat{\omega}_k) \qquad (12)$$
where $\Delta_0 = W_{-1} - \hat{\omega}_{-1}(\pi_0)$. The estimation error $\Delta_t$ is influenced by the sequence $\{\pi_0, \ldots, \pi_t\}$ via the variables $\{\hat{\omega}_{-1}, \ldots, \hat{\omega}_{t-1}\}$, but it is independent of the control strategy $\xi^U$.

4. Suboptimal Control and Energy Selection Strategy

The controller receives the state estimate transmitted by the filter and, based on its own information set $\mathcal{F}_t^c$, outputs the optimal control signal $U_t$ to ensure the stability of the plant state and minimize the total cost. The energy transmission selector at time $t$ has access to the information
$$\mathcal{F}_t^e = \sigma(X_0, U_0, \hat{\gamma}_0, \pi_0, \ldots, X_t, U_{t-1}, \hat{\gamma}_t, \pi_{t-1}) = \sigma(\mathcal{F}_{t-1}^e, X_t, U_{t-1}, \hat{\gamma}_t, \pi_{t-1}) \qquad (13)$$
with $\mathcal{F}_0^e = \sigma(X_0)$. The transmitter selection strategies can likewise be viewed as mappings from $\mathcal{F}_t^e$ to $\{0,1\}^M$; these strategies are denoted $\xi_t^\pi$. The decision-making process within a single time step is illustrated in Figure 2.
The cooperative minimization of the cost function undertaken by the controller and the energy transmission selector involves a finite horizon quadratic criterion, and it is expressed as follows:
$$J_T(\xi^U, \xi^\Pi) = \mathbb{E}\Big[ \sum_{t=0}^{T-1} \big( X_t^\top Q_1 X_t + U_t^\top R U_t + \Lambda^\top \pi_t \big) + X_T^\top Q_2 X_T \,\Big|\, U_t = \xi_t^u(\mathcal{F}_t^c),\ \pi_t = \xi_t^\pi(\mathcal{F}_t^e) \Big] \qquad (14)$$
where $\Lambda = [\lambda_1, \lambda_2, \ldots, \lambda_M]^\top$, $R \succ 0$, $Q_1, Q_2 \succeq 0$, and $\xi^\Pi$ denotes the entire sequence $\{\xi_0^\pi, \xi_1^\pi, \ldots, \xi_{T-1}^\pi\}$; $\xi^U$ is defined analogously.
Remark 1. 
The overall cost function is a weighted combination of control performance and transmission cost, directly reflecting their balance. The terms $X_t^\top Q_1 X_t$ and $U_t^\top R U_t$ penalize the state's deviation from equilibrium and the control input consumption, respectively (smaller values mean better control stability), while the term $\Lambda^\top \pi_t$ penalizes high-cost energy selections (smaller values mean lower energy expenditure). To avoid the complexity of joint optimization, we use the separation principle to decompose the problem into two decoupled but mutually constrained subproblems, ensuring that neither control performance nor energy cost is sacrificed excessively.
It is essential to identify the optimal mappings $\xi^{U*}$ and $\xi^{\Pi*}$ that minimize the cost function (14) over all admissible strategies:
$$(\xi^{U*}, \xi^{\Pi*}) = \arg\min_{\xi^U, \xi^\Pi} J_T(\xi^U, \xi^\Pi). \qquad (15)$$

4.1. Optimal Control Under the Optimal Estimated State with the Separation Principle

When packet loss occurs during transmission, the controller cannot receive $\hat{\omega}_t$, and the filtered state obeys
$$\tilde{X}_t = \begin{cases} g(\tilde{X}_{t-1}), & \gamma_t = 0 \\ \tilde{X}_t, & \gamma_t = 1 \end{cases} \qquad (16)$$
where $g(X) \triangleq AX + BU$. Clearly, the values of $\tilde{X}_t$ belong to a countably infinite collection: $\{\tilde{X}_t, g(\tilde{X}_{t-1}), g^2(\tilde{X}_{t-2}), \ldots\}$.
We introduce a random variable $\tau_t$ to denote the time elapsed since the most recent successful transmission up to time $t$:
$$\tau_t = t - \max\{t^* : \gamma_{t^*} = 1,\ 0 \le t^* \le t\}. \qquad (17)$$
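For illustration, $\tau_t$ can be computed from a realized packet-loss history as in the sketch below; the helper is hypothetical, and the value returned when no transmission has yet succeeded is an assumed convention.

```python
def consecutive_drop_time(gamma_hist):
    """tau_t = t - max{t* : gamma(t*) = 1, 0 <= t* <= t}, Eq. (17),
    for t = len(gamma_hist) - 1."""
    t = len(gamma_hist) - 1
    for t_star in range(t, -1, -1):
        if gamma_hist[t_star] == 1:
            return t - t_star
    return t + 1  # assumed convention when no transmission has succeeded yet

# A success at t = 2 followed by two consecutive drops gives tau_4 = 2.
print(consecutive_drop_time([1, 0, 1, 0, 0]))  # -> 2
```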
The preceding analysis suggests a structural separation between the controller and the transmitter selection. In the subsequent sections, we formally demonstrate that a separation principle emerges for this problem. Associated with the cost function (14), define the value function
$$V_k(\mathcal{F}_k^c, \mathcal{F}_k^e) = \min_{\xi_t^u, \xi_t^\pi} \mathbb{E}\Big[ \sum_{t=k}^{T-1} \big( X_t^\top Q_1 X_t + U_t^\top R U_t + \Lambda^\top \pi_t \big) + X_T^\top Q_2 X_T \,\Big|\, U_t = \xi_t^u,\ \pi_t = \xi_t^\pi,\ t = k, \ldots, T-1 \Big]. \qquad (18)$$
Expression (18) is rewritten as follows by using the dynamic programming principle:
$$V_k(\mathcal{F}_k^c, \mathcal{F}_k^e) = \min_{\xi_k^u, \xi_k^\pi} \mathbb{E}\big[ X_k^\top Q_1 X_k + U_k^\top R U_k + \Lambda^\top \pi_k + V_{k+1}(\mathcal{F}_{k+1}^c, \mathcal{F}_{k+1}^e) \,\big|\, U_k = \xi_k^u,\ \pi_k = \xi_k^\pi \big]. \qquad (19)$$
If $\xi_k^{u*}$ and $\xi_k^{\pi*}$ minimize the right-hand side of (19), then $U_k^* = \xi_k^{u*}(\mathcal{F}_k^c)$ and $\pi_k^* = \xi_k^{\pi*}(\mathcal{F}_k^e)$. Additionally, from (18) we can derive
$$\min_{\xi^U, \xi^\Pi} J_T(\xi^U, \xi^\Pi) = \mathbb{E}[V_0(\mathcal{F}_0^c, \mathcal{F}_0^e)]. \qquad (20)$$
Here, the expectation in Equation (20) is over the random variables $\mathcal{F}_0^c$ and $\mathcal{F}_0^e$. For conciseness in the upcoming analysis, we write $V_k(\mathcal{F}_k^c, \mathcal{F}_k^e)$ as
$$V_k(\mathcal{F}_k^c, \mathcal{F}_k^e) = \min_{\xi_k^u, \xi_k^\pi} \mathbb{E}\big[ X_k^\top Q_1 X_k + U_k^\top R U_k + \Lambda^\top \pi_k + V_{k+1}(\mathcal{F}_{k+1}^c, \mathcal{F}_{k+1}^e) \,\big|\, \mathcal{F}_k \big] \qquad (21)$$
where $\mathcal{F}_k = \{\mathcal{F}_k^c, \mathcal{F}_k^e\}$ is the combined information set. Note that the information set $\mathcal{F}_k^e$ (and, consequently, $\mathcal{F}_k$) contains the realization $x_k$ of the state $X_k$. The subsequent theorem gives the optimal policy $\xi_k^{u*}$ for every $k = 0, 1, \ldots, T-1$.
Theorem 1. 
Given the information set $\mathcal{F}_k^c$ available to the controller at time $k$, the optimal control policy $\xi_k^{u*}: \mathcal{F}_k^c \to \mathbb{R}^m$ that minimizes the right-hand side of Equation (19) under the optimal estimated state is
$$U_k^* = \xi_k^{u*}(\mathcal{F}_k^c) = -G_k\, g^{\tau_k}\big(\mathbb{E}[X_{k-\tau_k} \mid \mathcal{F}_k^c]\big). \qquad (22)$$
This holds for every $k = 0, 1, \ldots, T-1$, where
$$G_k = (R + B^\top P_{k+1} B)^{-1} B^\top P_{k+1} A, \qquad (23a)$$
$$P_k = Q_1 + A^\top P_{k+1} A - G_k^\top (R + B^\top P_{k+1} B)\, G_k, \qquad (23b)$$
$$P_T = Q_2. \qquad (23c)$$
Proof. 
To keep the notation concise, we write $g^{\tau_k}$ for $g^{\tau_k}(x_{k-\tau_k})$ and $g_\Delta^{\tau_k}$ for $g^{\tau_k}(\Delta_{k-\tau_k})$. The proof relies on the dynamic programming principle. The central idea is to establish that the value function associated with the optimal control problem under the best state estimate takes the form
$$V_k(\mathcal{F}_k^c, \mathcal{F}_k^e) = (g^{\tau_k})^\top P_k\, g^{\tau_k} + C_k + o_k \qquad (24)$$
where $x_k$ denotes the realization of the state $X_k$ and $P_k$ is defined as in (23b), for all $k = 0, 1, \ldots, T-1$, with
$$C_k = \min_{\{\xi_t^\pi\}_{t=k}^{T-1}} \mathbb{E}\Big[ \sum_{t=k}^{T-1} (g_\Delta^{\tau_t})^\top H_t\, g_\Delta^{\tau_t} + \Lambda^\top \pi_t \,\Big|\, \mathcal{F}_k^e \Big]. \qquad (25)$$
The matrix $H_k \in \mathbb{R}^{n \times n}$ and the scalar $o_k$ are defined by
$$H_k = G_k^\top (R + B^\top P_{k+1} B)\, G_k, \qquad (26a)$$
$$o_k = o_{k+1} + \mathrm{tr}(P_{k+1} W), \qquad (26b)$$
$$o_T = 0. \qquad (26c)$$
Neither $H_k$ nor $o_k$ depends on past or future control or energy selection decisions, so these values can be computed offline. Moreover, from (12) it follows that $\Delta_t$ is independent of the earlier control history $\mathcal{U}_t$ and is determined solely by $\Pi_t$. Consequently, $C_k$ is not influenced by the control action $U_k$, and Equation (25) can be reformulated as
$$C_k = \min_{\xi_k^\pi} \mathbb{E}\big[ (g_\Delta^{\tau_k})^\top H_k\, g_\Delta^{\tau_k} + \Lambda^\top \pi_k + C_{k+1} \,\big|\, \mathcal{F}_k^e \big]. \qquad (27)$$
Based on the definition of $V_k(\cdot)$ in (18), we can express $V_T(\mathcal{F}_T^c, \mathcal{F}_T^e)$ as
$$V_T(\mathcal{F}_T^c, \mathcal{F}_T^e) = (g^{\tau_T})^\top Q_2\, g^{\tau_T} = (g^{\tau_T})^\top P_T\, g^{\tau_T}. \qquad (28)$$
Next, we verify that $V_{T-1}(\mathcal{F}_{T-1}^c, \mathcal{F}_{T-1}^e)$ conforms to the structure (24):
$$V_{T-1}(\mathcal{F}_{T-1}^c, \mathcal{F}_{T-1}^e) = \min_{\xi_{T-1}^u, \xi_{T-1}^\pi} \mathbb{E}\big[ (g^{\tau_{T-1}})^\top Q_1\, g^{\tau_{T-1}} + U_{T-1}^\top R\, U_{T-1} + \Lambda^\top \pi_{T-1} + (g^{\tau_T})^\top P_T\, g^{\tau_T} \big]. \qquad (29)$$
Substituting Equation (1) and performing a series of simplifications results in
$$V_{T-1}(\mathcal{F}_{T-1}^c, \mathcal{F}_{T-1}^e) = \min_{\xi_{T-1}^u, \xi_{T-1}^\pi} \mathbb{E}\big[ \| U_{T-1} + G_{T-1} g^{\tau_{T-1}} \|^2_{(R + B^\top P_T B)} + (g^{\tau_{T-1}})^\top P_{T-1}\, g^{\tau_{T-1}} + \Lambda^\top \pi_{T-1} + \mathrm{tr}(P_T W) \,\big|\, \mathcal{F}_{T-1} \big]. \qquad (30)$$
In Formula (30), the term $\| U_{T-1} + G_{T-1} g^{\tau_{T-1}} \|^2_{(R + B^\top P_T B)}$ is the only component that depends on $U_{T-1}$. Therefore, the optimal $U_{T-1}$ is the minimum mean squared estimate of $-G_{T-1} g^{\tau_{T-1}}$. Then,
$$U_{T-1}^* = \xi_{T-1}^{u*}(\mathcal{F}_{T-1}^c) = -G_{T-1}\, g^{\tau_{T-1}}\big(\mathbb{E}[X_{T-1-\tau_{T-1}} \mid \mathcal{F}_{T-1}^c]\big). \qquad (31)$$
After replacing the optimal $U_{T-1}^*$ in (29), we derive
$$V_{T-1}(\mathcal{F}_{T-1}^c, \mathcal{F}_{T-1}^e) = \min_{\xi_{T-1}^\pi} \mathbb{E}\big[ (g_\Delta^{\tau_{T-1}})^\top H_{T-1}\, g_\Delta^{\tau_{T-1}} + \Lambda^\top \pi_{T-1} \,\big|\, \mathcal{F}_{T-1}^e \big] + (g^{\tau_{T-1}})^\top P_{T-1}\, g^{\tau_{T-1}} + \mathrm{tr}(P_T W). \qquad (32)$$
Consequently, applying the definitions of $C_{T-1}$ and $o_{T-1}$ leads to
$$V_{T-1}(\mathcal{F}_{T-1}^c, \mathcal{F}_{T-1}^e) = C_{T-1} + (g^{\tau_{T-1}})^\top P_{T-1}\, g^{\tau_{T-1}} + o_{T-1}. \qquad (33)$$
Therefore, $V_{T-1}(\mathcal{F}_{T-1}^c, \mathcal{F}_{T-1}^e)$ is indeed of the form (24). To confirm that Equation (24) also holds at time $k$, we apply backward induction, assuming that it holds at time $k+1$. Toward that end, we obtain
$$V_k(\mathcal{F}_k^c, \mathcal{F}_k^e) = \min_{\xi_k^u, \xi_k^\pi} \mathbb{E}\big[ (g^{\tau_k})^\top Q_1\, g^{\tau_k} + U_k^\top R\, U_k + \Lambda^\top \pi_k + (g^{\tau_{k+1}})^\top P_{k+1}\, g^{\tau_{k+1}} + C_{k+1} + o_{k+1} \,\big|\, \mathcal{F}_k \big]. \qquad (34)$$
Using Equations (1), (23a) and (23b), one can obtain
$$V_k(\mathcal{F}_k^c, \mathcal{F}_k^e) = \min_{\xi_k^u, \xi_k^\pi} \mathbb{E}\big[ \| U_k + G_k g^{\tau_k} \|^2_{(R + B^\top P_{k+1} B)} + (g^{\tau_k})^\top P_k\, g^{\tau_k} + \Lambda^\top \pi_k + \mathrm{tr}(P_{k+1} W) + C_{k+1} + o_{k+1} \,\big|\, \mathcal{F}_k \big]. \qquad (35)$$
The optimal control $U_k^*$ under the best estimated state, which minimizes (35), is
$$U_k^* = \xi_k^{u*}(\mathcal{F}_k^c) = -G_k\, g^{\tau_k}\big(\mathbb{E}[X_{k-\tau_k} \mid \mathcal{F}_k^c]\big). \qquad (36)$$
After substituting the optimal control under the optimal estimated state from (36) in (35), we can obtain
$$\begin{aligned} V_k(\mathcal{F}_k^c, \mathcal{F}_k^e) &= \min_{\xi_k^\pi} \mathbb{E}\big[ (g_\Delta^{\tau_k})^\top \big( G_k^\top (R + B^\top P_{k+1} B) G_k \big)\, g_\Delta^{\tau_k} + \Lambda^\top \pi_k + C_{k+1} \,\big|\, \mathcal{F}_k^e \big] + (g^{\tau_k})^\top P_k\, g^{\tau_k} + \mathrm{tr}(P_{k+1} W) + o_{k+1} \\ &= \min_{\xi_k^\pi} \mathbb{E}_{\mathcal{F}_k^e}\big[ (g_\Delta^{\tau_k})^\top H_k\, g_\Delta^{\tau_k} + \Lambda^\top \pi_k + C_{k+1} \big] + (g^{\tau_k})^\top P_k\, g^{\tau_k} + o_k \\ &= (g^{\tau_k})^\top P_k\, g^{\tau_k} + C_k + o_k. \end{aligned} \qquad (37)$$
Consequently, the value function takes the form (24), and the optimal control based on the best estimated state at times $k = 0, 1, \ldots, T-1$ is given by (36). □
Remark 2. 
The optimal feedback controller for the separated control problem is given by Theorem 1 in [5]; it indicates that there exists a separation principle if the policy satisfies the structure of Equation (4) in [5]. In this paper, Equation (23a) is exactly the same as the structure of Equation (4) in [5], which shows that the optimal control strategy proposed by us satisfies the separation principle.
Remark 3.
The problem of designing a controller based on the filter requires considering filtering performance and control performance simultaneously. One typical method is to guarantee filtering performance first and control performance second. For example, the separation structure between the controller and the quantizer is studied in [18] to ensure the filtering and control performance individually. Through a similar analysis, we find that the state estimation error $\Delta_t$ at the filter depends only on the energy selection policy $\xi^\Pi$ through the variables $\{\hat{\omega}_{-1}, \ldots, \hat{\omega}_{t-1}\}$ and not on the control $\xi^U$. Likewise, we obtain a separation principle between the energy selector and the controller.
Note that, in contrast to [18], this paper takes delay into account: $\tau_t = t - \max\{t^* : \gamma_{t^*} = 1,\ 0 \le t^* \le t\}$. In the presence of delay, the information available at the controller is affected because some measurements arrive late, and hence the state estimates are affected. Theorem 1 incorporates this delay and provides the optimal controller structure for this scenario: $U_k^* = -G_k\, g^{\tau_k}(\mathbb{E}[X_{k-\tau_k} \mid \mathcal{F}_k^c])$.
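Since the gains (23a)-(23c) depend only on the system matrices and the cost weights, they can be computed offline. The following numpy sketch performs the backward recursion for the Example 1 system of Section 5; up to the sign convention of (22) and rounding, the resulting gains match the magnitudes reported in Table 1.

```python
# Offline computation of the gains (23a)-(23c) by backward induction for the
# Example 1 system of Section 5; e.g. G_39 evaluates to about
# [[0.102, 0.049], [0.0, 0.209]], matching the magnitudes in Table 1.
import numpy as np

def riccati_gains(A, B, Q1, Q2, R, T):
    """Return [G_0, ..., G_{T-1}] from the backward recursion with P_T = Q2."""
    P = Q2.copy()
    gains = [None] * T
    for k in range(T - 1, -1, -1):
        S = R + B.T @ P @ B
        G = np.linalg.solve(S, B.T @ P @ A)   # G_k, Eq. (23a)
        P = Q1 + A.T @ P @ A - G.T @ S @ G    # P_k, Eq. (23b)
        gains[k] = G
    return gains

A = np.array([[1.03, 0.5], [0.0, 1.2]])
B = np.array([[0.1, 0.0], [0.0, 0.18]])
Q = 0.5 * np.eye(2)                           # Q1 = Q2 = R = I/2
gains = riccati_gains(A, B, Q, Q, 0.5 * np.eye(2), T=40)
print(np.round(gains[39], 3))                 # terminal-step gain
print(np.round(gains[0], 3))                  # gain applied at time 0
```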

4.2. Optimal Energy Selection to Ensure Filtering Performance and MDP

Affected by the energy constraint, packet dropouts may occur during the information transmission of the plant. Consequently, the system determines the most efficient transmission energy to reduce the error covariance at the filter, as well as the energy selection cost. The optimization challenge can be reformulated as follows:
$$\min_{\xi_t^\pi} \mathbb{E}\Big[ \sum_{t=0}^{T-1} \mathrm{tr}(P_t) + \Lambda^\top \pi_t \Big]. \qquad (38)$$
This subsection aims to identify the optimal energy selection strategy that guarantees filtering performance while minimizing the above cost function. Owing to the imperfect feedback channel, the system cannot verify the receipt of packets by the filter. Thus, at time $t$, the system acquires only "imperfect state information" about $\{P_t\}$ through the acknowledgment sequence $\{\hat{\gamma}_t\}$. This optimization problem can be interpreted as a Markov decision process (MDP) with imperfect state information. Furthermore, the MDP with imperfect acknowledgments may be transformed into an MDP with accurate acknowledgments by utilizing the information-state framework.
Let all observations obtained from the filter up to time t be represented as
$$z_t := \{P_0, \hat{\gamma}_0, \ldots, \hat{\gamma}_t, p_0, \ldots, p_{t-1}\} \qquad (39)$$
for $t \ge 0$, with $z_{-1} := \{P_0\}$. Next, we define the information state as
$$f_{t+1}(P_{t+1} \mid z_t, p_t) = \mathbb{P}(P_{t+1} \mid z_t, p_t), \qquad (40)$$
representing the conditional probability of the estimation error covariance $P_{t+1}$ given $z_t$ and $p_t$. The following lemma, taken from [4], shows how $f_{t+1}(\cdot \mid z_t, p_t)$ can be determined from $f_t(\cdot \mid z_{t-1}, p_{t-1})$ together with $\hat{\gamma}_t$ and $p_t$.
Lemma 1
([4]). The dynamics of the information-state function $f(\cdot)$ are described by
$$f_{t+1}(P_{t+1} \mid z_t, p_t) = \frac{ \sum_{\gamma_t \in \{0,1\}} \Big[ \int_{P_t} \mathbb{P}(P_{t+1} \mid P_t, \gamma_t)\, f_t(P_t \mid z_{t-1}, p_{t-1})\, dP_t \Big] \times \mathbb{P}(\hat{\gamma}_t \mid \gamma_t) \times \mathbb{P}(\gamma_t \mid p_t) }{ \sum_{\gamma_t \in \{0,1\}} \mathbb{P}(\hat{\gamma}_t \mid \gamma_t) \times \mathbb{P}(\gamma_t \mid p_t) }, \qquad (41)$$
with the initial condition $f_0(P_0 \mid z_{-1}) = \delta(P_0)$, where $\delta$ denotes the Dirac delta function.
Given the indicator $\gamma_t$, the estimation error covariance at the filter satisfies the iteration
$$P_{t+1} = \gamma_t W + (1 - \gamma_t)(A P_t A^\top + W). \qquad (42)$$
The case of perfect state information is characterized by the information state $f(\cdot)$. Define a binary random variable $\gamma$, akin to $\gamma_t$. For a given $P$, we write
$$L(P, \gamma) := \gamma W + (1 - \gamma)(A P A^\top + W) \qquad (43)$$
for the operator of the random Riccati equation. Let $\mathbb{S}_+^n$ denote the set of all non-negative definite matrices, and let $\Psi$ denote the space of all probability density functions on $\mathbb{S}_+^n$, characterized by $\int_{\mathbb{S}_+^n} \varphi(P)\, dP = 1$ for any $\varphi \in \Psi$. Building on the information-state recursion, we define
$$\tilde{\varphi} := \frac{ \sum_{\gamma \in \{0,1\}} \Big[ \int_P \mathbb{P}\big(L(P, \gamma) \mid P, \gamma\big)\, \varphi(P)\, dP \Big] \times \mathbb{P}(\hat{\gamma} \mid \gamma) \times \mathbb{P}(\gamma \mid p) }{ \sum_{\gamma \in \{0,1\}} \mathbb{P}(\hat{\gamma} \mid \gamma) \times \mathbb{P}(\gamma \mid p) } \qquad (44)$$
for a given $\varphi \in \Psi$ and given $\hat{\gamma}$ and $p$. It is worth highlighting that when both $\epsilon$ and $\eta$ are zero, the optimization problem reduces to a stochastic control problem with perfect state information.
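For later reference, the random Riccati operator (43) admits a one-line implementation; this is a minimal sketch.

```python
import numpy as np

def riccati_operator(P, gamma, A, W):
    """L(P, gamma) = gamma * W + (1 - gamma) * (A P A^T + W), Eq. (43):
    a success (gamma = 1) resets the error covariance to W, while a dropout
    (gamma = 0) propagates it through the open-loop dynamics."""
    return W if gamma == 1 else A @ P @ A.T + W
```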
We formulate a Markov decision process (MDP) to address the stochastic control problem. At time $t$, the MDP is characterized by the state $s_t = (\varphi_t, \sigma_t, b_t)$, which resides in the state space $\mathcal{S}$. Let $\mathcal{A} = \{0, p_t^{(1)}, \ldots, p_t^{(M)}\}$ denote the set of transmission-energy actions, and let $\mathcal{A}(s)$ denote the set of permissible actions in state $s \in \mathcal{S}$. The one-stage reward is
$$r(s, \gamma) = \mathbb{E}\big[ \mathrm{tr}(L(P, \gamma)) \mid \varphi, p \big] + \Lambda^\top \pi = \mathbb{P}(\gamma = 1) \times \mathrm{tr}(W) \int_P \varphi(P)\, dP + \mathbb{P}(\gamma = 0) \times \int_P \mathrm{tr}(A P A^\top + W)\, \varphi(P)\, dP + \Lambda^\top \pi. \qquad (45)$$
The best approach for selecting energy to guarantee effective filtering performance is calculated offline using the Bellman dynamic programming equations presented below.
Theorem 2. 
Given the initial condition $I_0 = \{\sigma_0, b_0, P_0\}$, the finite-time-horizon minimization problem with imperfect acknowledgments satisfies the following Bellman optimality equation:
$$V_k(\varphi, \sigma, b) = \min_{p \in \mathcal{A}} \Big\{ r(s, \gamma) + \mathbb{E}\big[ V_{k+1}(\tilde{\varphi}, \tilde{\sigma}, \tilde{b}) \mid \varphi, \sigma, p \big] \Big\} \qquad (46)$$
with the termination condition
$$V_T(\varphi, \sigma, b) := \min_{p \in \mathcal{A}} \mathbb{E}\big[ \mathrm{tr}(L(P, \gamma)) \mid \varphi, p \big] = \mathbb{E}\big[ \mathrm{tr}(L(P, \gamma)) \mid \varphi, b \big] \qquad (47)$$
where all available energy is used for transmission at the final time $T$.
Accordingly, the optimal selection policy $\xi^{\pi*}$ is obtained by
$$\xi^{\pi*} = \arg\min_{p \in \mathcal{A}} \Big\{ r(s, \gamma) + \mathbb{E}\big[ V_{k+1}(\tilde{\varphi}, \tilde{\sigma}, \tilde{b}) \mid \varphi, \sigma, p \big] \Big\}. \qquad (48)$$
Proof. 
For this proof, please consult Theorem 7.1 in reference [21].    □
To facilitate the calculations, we express
$$\mathbb{E}\big[ V_{k+1}(\tilde{\varphi}, \tilde{\sigma}, \tilde{b}) \mid \varphi, \sigma, p \big] = \sum_{i=1}^{N} V_{k+1}\big(\tilde{\varphi}, i, \tilde{b}(i)\big) \times \big( \mathbb{P}(\sigma) H \big)_i, \qquad (49)$$
where the energy-harvesting sequence $\{\sigma_k\}$ is a finite-state Markov chain, $H$ is the state transition probability matrix of the harvesting process, and $\mathbb{P}(\sigma) := [\mathbb{P}(\sigma = 1), \ldots, \mathbb{P}(\sigma = N)]$.
Remark 4. 
The energy selection problem with imperfect state information is reduced to one with perfect state information using the notion of the information state [21]. The information state is the entire probability density function $\tilde{\varphi}$, not just its value at a particular $P_t$. For numerical computation, a discretized version of the Bellman equation (46) is used, which in particular requires discretizing the space of probability density functions $\Psi$, to find a suboptimal solution to the energy selection problem; as the degree of discretization increases, the suboptimal solution converges to the optimal one [22]. The Bellman optimality equation (46) can be solved by the value iteration algorithm [23].
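To illustrate the offline computation, the sketch below performs backward value iteration for a simplified instance with perfect acknowledgments ($\epsilon = \eta = 0$) and a zero terminal cost, both assumptions made for brevity. In this case, the information state collapses to the number of consecutive dropouts, which indexes the reachable covariances $W, AWA^\top + W, \ldots$ of (42); the general imperfect-feedback case replaces this index with a discretized density $\varphi$ as described above. The parameters follow Example 1 of Section 5.

```python
# Simplified value iteration for the energy-selection MDP (46), assuming
# perfect acknowledgments (eps = eta = 0) and V_T = 0; the MDP state is
# (consecutive-drop count i, harvest level s, battery b). Example 1 parameters.
import numpy as np
from math import erfc, sqrt
from itertools import product

A_sys = np.array([[1.03, 0.5], [0.0, 1.2]])
W = 2.0 * np.eye(2)
levels = [0, 1, 3, 5, 7, 9]                 # transmission energy levels
lam = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]        # energy costs Lambda
H = np.array([[0.4, 0.6], [0.3, 0.7]])      # harvesting transition matrix
sigma_set = [2, 4]                          # harvested energy levels
C, T = 10, 40                               # battery capacity, horizon

def drop(p, alpha=0.5, s0=0.8, k=2.0):
    """P[gamma = 0] = 2*Phi(sqrt(alpha*p/(S0*K))), via 2*Phi(x) = erfc(x/sqrt(2))."""
    return erfc(sqrt(alpha * p / (s0 * k)) / sqrt(2.0))

# trace of the error covariance after i consecutive dropouts, via operator (43)
covs = [W]
for _ in range(T):
    covs.append(A_sys @ covs[-1] @ A_sys.T + W)
tr = [float(np.trace(P)) for P in covs]

nP, nS, nB = T + 1, len(sigma_set), C + 1
V = np.zeros((nP, nS, nB))                  # simplified terminal condition V_T = 0
policy = np.zeros((T, nP, nS, nB), dtype=int)
for k in range(T - 1, -1, -1):
    Vn = np.zeros_like(V)
    for i, s, b in product(range(k + 1), range(nS), range(nB)):
        best, best_a = np.inf, 0
        for a, p in enumerate(levels):
            if p > b:                       # feasibility: p_t <= b_t
                continue
            q = drop(p)
            stage = (1 - q) * tr[0] + q * tr[i + 1] + lam[a]
            nxt = 0.0
            for s2 in range(nS):            # expectation over the next harvest, cf. (49)
                b2 = min(b - p + sigma_set[s2], C)
                nxt += H[s, s2] * ((1 - q) * V[0, s2, b2] + q * V[i + 1, s2, b2])
            if stage + nxt < best:
                best, best_a = stage + nxt, a
        Vn[i, s, b] = best
        policy[k, i, s, b] = best_a
    V = Vn

print("first-step energy with a full battery:", levels[policy[0, 0, 0, C]])
```

The resulting policy table plays the role of $\xi^{\pi*}$ in the offline phase of Algorithm 1 below.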

4.3. Algorithm

Algorithm 1 summarizes the overall procedure.
Algorithm 1 Joint Optimization of Control and Energy Selection for LQG Systems.
Require: time horizon $T$; system matrices $A, B$; noise covariance $W$; state penalties $Q_1, Q_2$; control cost $R$; energy cost vector $\Lambda$; energy level set $\mathcal{A}$; battery capacity $C$; energy-harvesting transition matrix $H$ and harvesting set $\{\sigma_t\}$; channel parameters $\alpha$, $S_0$, $K$; feedback error probabilities $\epsilon, \eta$.
Initialize: system initial state $X_0 \sim \mathcal{N}(0, I)$ and initial battery level $b_0$; initial estimation error covariance $P_0$ with the Dirac delta distribution $f_0(P_0 \mid z_{-1}) = \delta(P_0)$ and information state $\varphi_0 = f_0$; energy selection decision variable $\pi_0 \in \{0, 1\}^M$.
Offline calculation: controller and energy policy (backward induction)
 1: for $k = T-1$ down to $0$ do
 2:    solve the Riccati equation for the optimal control gain:
 3:    if $k = T-1$ then
 4:        initialize the terminal matrix $P_T = Q_2$ (Equation (23c))
 5:    end if
 6:    calculate $P_k$ (Equation (23b))
 7:    compute the control gain $G_k$ (Equation (23a))
 8:    store $\{G_0, G_1, \ldots, G_{T-1}\}$
 9:    MDP value iteration for the optimal energy policy:
10:    while the MDP value function $V_k$ has not converged do
11:        prediction of the information state: update $\varphi$ via Lemma 1
12:        update of the value function: calculate the one-stage reward $r(s_t, \gamma)$ (Equation (45))
13:        update $V_k$ (Equation (46))
14:    end while
15:    determine the optimal energy policy $\xi_k^{\pi*}$ (Equation (48))
16:    store $\{\xi_0^{\pi*}, \ldots, \xi_{T-1}^{\pi*}\}$
17: end for
Online simulation: system operation (forward propagation)
 1: for $t = 0$ to $T-1$ do
 2:    select the transmission energy $p_t = \xi_t^{\pi*}(\varphi_t, \sigma_t, b_t)$ (feasibility constraint: $p_t \le b_t$)
 3:    compute the packet loss probability $\mathbb{P}[\gamma_t = 0] = 2\,\Phi(\sqrt{\alpha p_t / (S_0 K)})$
 4:    generate the binary loss indicator $\gamma_t \sim \mathrm{Bern}(1 - \mathbb{P}[\gamma_t = 0])$
 5:    generate the measurement $Y_t$ (Equation (5))
 6:    calculate the consecutive loss duration $\tau_t$ (Equation (17))
 7:    filtered state $\tilde{X}_t = g^{\tau_t}(\tilde{X}_{t-\tau_t})$
 8:    compute the optimal control $U_t^* = -G_t\, g^{\tau_t}(\mathbb{E}[X_{t-\tau_t} \mid \mathcal{F}_t^c])$ (Equation (22))
 9:    update the state $X_{t+1}$ (Equation (1)), where $W_t \sim \mathcal{N}(0, W)$
10:    update the battery $b_{t+1}$ (Equation (6))
11:    generate the next harvested energy $\sigma_{t+1} \sim \mathrm{Markov}(H)$
12:    record performance indicators: estimation error covariance $P_t = \mathbb{E}[\Delta_t \Delta_t^\top]$, transmission cost $\Lambda^\top \pi_t$, and control cost $U_t^{*\top} R\, U_t^*$
13: end for

5. Simulation Results

Example 1. 
Let us consider the two-dimensional (unstable) system
$$X_{t+1} = \begin{bmatrix} 1.03 & 0.5 \\ 0 & 1.2 \end{bmatrix} X_t + \begin{bmatrix} 0.1 & 0 \\ 0 & 0.18 \end{bmatrix} U_t + W_t,$$
where the initial condition is $X_0 \sim \mathcal{N}(0, I)$ and $W_t \sim \mathcal{N}(0, 2I)$. The control cost parameters are $Q_1 = Q_2 = R = \frac{1}{2} I$, with the time horizon set to $T = 40$.
The harvested energies are modeled as a two-level discrete Markov chain with transition matrix $H = [0.4, 0.6; 0.3, 0.7]$ and energy-harvesting set $\{\sigma_t\} = \{2, 4\}$. The channel parameters are $\alpha = 0.5$, $S_0 = 0.8$, and $K = 2$. The simulation was performed with six energy levels corresponding to the action set $\{0, 1, 3, 5, 7, 9\}$ and the one-to-one mapping $\{p_t\} = \{0, 1, 3, 5, 7, 9\} \mapsto \{\pi_t\} = \{\pi_t^1, \pi_t^2, \pi_t^3, \pi_t^4, \pi_t^5, \pi_t^6\}$ (where $\mapsto$ denotes the mapping from $p_t$ to the decision variable $\pi_t$). The costs associated with the energy levels are $\Lambda = [0, 0.1, 0.2, 0.3, 0.4, 0.5]^\top$.
To simplify the discussion, we first consider a scenario where the plant operates with ideal acknowledgments (i.e., $\eta = 0$ and $\epsilon = 0$). Figure 3 depicts a single simulation run of the packet loss process $\{\gamma_t\}$, with the red line illustrating the filter's error covariance and the blue dots representing the values of $\gamma$ (where $\gamma = 0$ signifies packet loss and $\gamma = 1$ denotes successful transmission). Figure 4 presents the battery storage $\{b_t\}$ along with the corresponding optimal energy allocation $\{p_t\}$ for $C = 10$, and the control effect comparison is illustrated in Figure 5.
We set $\tau = \{0, 1, 2\}$ (consecutive packet-dropping durations). Without the proposed strategy, the system state is unstable; implementing the strategy stabilizes the system overall. An increase in $\tau$ results in a corresponding increase in the time required for the system to regain stability. The control gains are calculated by Theorem 1 and given in Table 1.
Example 2. 
We consider an unstable batch reactor (a large-scale system) with parameters from [24] as follows:
$$A = \begin{bmatrix} 1.1782 & 0.0015 & 0.5116 & -0.4033 \\ -0.0515 & 0.6619 & -0.0110 & 0.0613 \\ 0.0762 & 0.3351 & 0.5607 & 0.3824 \\ -0.0006 & 0.3353 & 0.0893 & 0.8494 \end{bmatrix}, \qquad B = \begin{bmatrix} 0.0045 & -0.0876 \\ 9.4672 & 0.0012 \\ 0.2132 & -0.2353 \\ 0.2131 & -0.0161 \end{bmatrix},$$
where the initial condition is $X_0 \sim \mathcal{N}(0, I)$ and $W_t \sim \mathcal{N}(0, 2I_{4 \times 4})$. The control cost parameters are $Q_1 = Q_2 = \frac{1}{2} I_{4 \times 4}$ and $R = \frac{1}{2} I_{2 \times 2}$, while the time horizon is set to $T = 40$. All other parameters remain as in Example 1.
Assuming that packet loss may occur within this system, we derive an energy selection policy (again with $\eta = 0$ and $\epsilon = 0$) and assess the control performance of the proposed approach. As illustrated in Figure 6, the estimated error covariance rises as the consecutive packet-dropping duration $\tau$ increases. The optimal energy allocation under this policy for $C = 10$ is presented in Figure 7, where the blue curve denotes the optimal transmission energy and the orange curve the battery storage, providing quantitative insight into the energy management. In Figure 8, the controlled state decays to 0 (in contrast to the uncontrolled state), which validates the effectiveness of the designed controller. The control gains for the system are computed using Theorem 1 and presented in Table 2.

6. Conclusions

In this study, we examine a traditional LQG problem that involves flawed acknowledgments and constraints related to energy harvesting. The transmitter is required to determine the optimal energy level based on incomplete feedback data, aiming to minimize the overall costs associated with transmission and control efficacy. By addressing the classical Riccati equation linked to the LQG problem, we have computed the optimal control gains. The challenge of energy selection under imperfect feedback is reformulated into a Markov decision process (MDP) concerning ideal acknowledgments through the iterative processing of state information. Ultimately, we derive the optimal energy selection approach needed to maintain filtering performance by solving the finite horizon Bellman optimal dynamic equation. In our upcoming research, we intend to investigate and evaluate this matter within a more realistic context.

Author Contributions

Z.J.: Formal analysis, investigation, methodology, writing—original draft, software development, and writing—review and editing; L.G.: writing—review and editing; J.L.: formal analysis, conceptualization, methodology, supervision, and writing—review and editing; Q.J.: supervision and writing—review. All authors have read and agreed to the published version of this manuscript.

Funding

This research was partially supported by the National Natural Science Foundation of China (NSFC) under grants 62103282, 12071292, 12131007, and 12071044.

Data Availability Statement

Data is contained within this article.

Conflicts of Interest

There are no conflicts of interest with the publication of this work.

References

  1. Cao, X.; Zhou, X.; Liu, L.; Cheng, Y. Energy-efficient spectrum sensing for cognitive radio enabled remote state estimation over wireless channels. IEEE Trans. Wireless Commun. 2014, 14, 2058–2071. [Google Scholar] [CrossRef]
  2. Liu, K.; Skelton, R.E.; Grigoriadis, K. Optimal controllers for finite wordlength implementation. IEEE Trans. Autom. Control 1992, 37, 1294–1304. [Google Scholar] [CrossRef]
  3. Chen, L.; Fung, T.C.; Li, Y.; Peng, L. Fitting heavy-tailed distributions to mortality indexes for longevity risk forecasts. J. Math. Study 2024, 57, 486–498. [Google Scholar] [CrossRef]
4. Nourian, M.; Leong, A.S.; Dey, S. Optimal energy allocation for Kalman filtering over packet dropping links with imperfect acknowledgments and energy harvesting constraints. IEEE Trans. Autom. Control 2014, 59, 2128–2143. [Google Scholar] [CrossRef]
  5. Borkar, V.S.; Mitter, S.K. LQG control with communication constraints. In Communications, Computation, Control, and Signal Processing: A Tribute to Thomas Kailath; Springer: Boston, MA, USA, 1997; pp. 365–373. [Google Scholar]
  6. Luo, W.; Lu, P.; Du, C.; Liu, H. Cooperative output tracking control of heterogeneous multi-agent systems with random communication constraints: An observer-based predictive control approach. IEEE Trans. Circuits Syst. II Express Briefs 2021, 69, 1139–1143. [Google Scholar] [CrossRef]
  7. Yang, X.; Marjanovic, O. LQG control with extended Kalman filter for power systems with unknown time-delays. IFAC Proc. Vol. 2011, 44, 3708–3713. [Google Scholar] [CrossRef]
  8. Aghaee, F.; Dehkordi, N.M.; Bayati, N.; Karimi, H. A distributed secondary voltage and frequency controller considering packet dropouts and communication delay. Int. J. Electr. Power Energy Syst. 2022, 143, 108466. [Google Scholar] [CrossRef]
  9. Li, B.; Sinha, U.; Sankaranarayanan, G. Modelling and control of non-linear tissue compression and heating using an LQG controller for automation in robotic surgery. Trans. Inst. Meas. Control 2016, 38, 1491–1499. [Google Scholar] [CrossRef]
  10. Kostina, V.; Hassibi, B. Rate-cost tradeoffs in control. IEEE Trans. Autom. Control 2019, 64, 4525–4540. [Google Scholar] [CrossRef]
  11. Zhan, X.S.; Guan, Z.H.; Zhang, X.H.; Yuan, F.S. Optimal performance of networked control systems over limited communication channels. Trans. Inst. Meas. Control 2014, 36, 637–643. [Google Scholar] [CrossRef]
  12. Kang, T.; Kim, S.; Hyoung, C.; Kang, S.; Park, K. An energy combiner for a multi-input energy-harvesting system. IEEE Trans. Circuits Syst. II Express Briefs 2015, 62, 911–915. [Google Scholar] [CrossRef]
  13. Wang, Z.; Wang, X.; Aggarwal, V. Transmission with energy harvesting nodes in frequency-selective fading channels. IEEE Trans. Wireless Commun. 2016, 15, 1642–1656. [Google Scholar] [CrossRef]
  14. Weddell, A.S.; Merrett, G.V.; Kazmierski, T.J.; Al-Hashimi, B.M. Accurate supercapacitor modeling for energy harvesting wireless sensor nodes. IEEE Trans. Circuits Syst. II Express Briefs 2011, 58, 911–915. [Google Scholar] [CrossRef]
  15. Yao, N.; Chen, B.; Wang, J.; Wang, L.; Hao, X. Distributed optimization game algorithm to improve energy efficiency for multi-radio multi-channel wireless sensor network. Trans. Inst. Meas. Control 2024, 46, 2198–2210. [Google Scholar] [CrossRef]
  16. Li, Y.Z.; Zhang, F.; Quevedo, D.E. Power control of an energy harvesting sensor for remote state estimation. IEEE Trans. Autom. Control 2017, 62, 277–290. [Google Scholar] [CrossRef]
  17. Peng, L.; Cao, X.; Sun, C. Optimal transmit power allocation for an energy-harvesting sensor in wireless cyber-physical systems. IEEE Trans. Cybern. 2019, 51, 779–788. [Google Scholar] [CrossRef]
  18. Maity, D.; Tsiotras, P. Optimal controller synthesis and dynamic quantizer switching for linear-quadratic-gaussian systems. IEEE Trans. Autom. Control 2022, 67, 382–389. [Google Scholar] [CrossRef]
  19. Lei, H.; Yang, W.; Yu, Y. Energy efficient management for distributed state estimation under DoS attacks. Int. J. Robust Nonlinear Control 2022, 32, 1941–1959. [Google Scholar] [CrossRef]
  20. Leong, A.S.; Dey, S. Power allocation for error covariance minimization in Kalman filtering over packet dropping links. In Proceedings of the 2012 IEEE 51st Annual Conference on Decision and Control, Orlando, FL, USA, 10–13 December 2012; pp. 3335–3340. [Google Scholar]
  21. Kumar, P.R.; Varaiya, P. Stochastic Systems: Estimation, Identification, and Adaptive Control; Prentice-Hall: Englewood Cliffs, NJ, USA, 1986. [Google Scholar]
  22. Yu, H.; Bertsekas, D. Discretized approximations for POMDP with average cost. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, Banff, AB, Canada, 7–10 July 2004; pp. 619–627. [Google Scholar]
  23. Bertsekas, D.P. Dynamic Programming and Optimal Control; Athena Scientific: Belmont, MA, USA, 1995. [Google Scholar]
  24. Garone, E.; Sinopoli, B.; Goldsmith, A.; Casavola, A. LQG control for MIMO systems over multiple erasure channels with perfect acknowledgment. IEEE Trans. Autom. Control 2011, 57, 450–456. [Google Scholar] [CrossRef]
Figure 1. The controlled system with imperfect feedback acknowledgments.
Figure 2. The decision-making process during a single time step.
Figure 3. The process of variation in error covariance within the filter.
Figure 4. The optimal energy allocation when C = 10.
Figure 5. Control effect comparison chart.
Figure 6. The variation process of error covariance at the filter for Example 2.
Figure 7. The optimal energy allocation when C = 10 for Example 2.
Figure 8. Control effect comparison chart for Example 2.
Table 1. Control gain for Example 1.
Time 0: $G_0 = \begin{bmatrix} 0.593 & 0.743 \\ 0.703 & 3.470 \end{bmatrix}$; Time 38: $G_{38} = \begin{bmatrix} 0.206 & 0.155 \\ 0.085 & 0.566 \end{bmatrix}$; Time 39: $G_{39} = \begin{bmatrix} 0.101 & 0.049 \\ 0 & 0.209 \end{bmatrix}$.
Table 2. Control gain for Example 2.
Time 0: $G_0 = \begin{bmatrix} 0.0137 & 0.0999 & 0.0122 & 0.08168 \\ 2.1389 & 0.1310 & 1.4990 & 0.9285 \end{bmatrix}$
Time 1: $G_1 = \begin{bmatrix} 0.0137 & 0.0999 & 0.0122 & 0.08168 \\ 2.1389 & 0.1310 & 1.4990 & 0.9285 \end{bmatrix}$
$\vdots$
Time 39: $G_{39} = \begin{bmatrix} 0.0051 & 0.0706 & 0.0003 & 0.0092 \\ 0.1141 & 0.0757 & 0.1675 & 0.0638 \end{bmatrix}$