Article

Hurwicz-Type Optimal Control Problem for Uncertain Singular Non-Causal Systems

1 Department of Public Basic Courses, Nanjing Vocational University of Industry Technology, Nanjing 210023, China
2 College of Science, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(7), 1130; https://doi.org/10.3390/sym17071130
Submission received: 16 April 2025 / Revised: 21 May 2025 / Accepted: 22 May 2025 / Published: 15 July 2025
(This article belongs to the Special Issue Symmetry in Optimal Control and Applications)

Abstract

Uncertain singular non-causal systems represent a class of singular systems distinguished by the presence of regularity constraints and the involvement of uncertain variables. This paper considers optimal control problems for such systems under the Hurwicz criterion. By integrating dynamic programming with uncertainty theory, a recurrence equation is developed to address these problems. This equation has been shown to be effective in handling the optimal control problems of both linear and nonlinear uncertain singular non-causal systems, thereby enabling the derivation of analytical expressions for their corresponding optimal solutions. Moreover, the transformation of the original system into forward and backward subsystems reveals a fundamental temporal and structural symmetry, which significantly contributes to problem simplification. A detailed example is presented to illustrate the proposed results.

1. Introduction

Optimal control theory serves as a cornerstone of modern control theory, focusing on the systematic design of control strategies that optimize a specified performance criterion while satisfying predefined system constraints. Foundational contributions from Pontryagin’s maximum principle, Bellman’s dynamic programming, and Kalman’s filtering theory have collectively laid a solid foundation for the development of optimal control theory.
The development of optimal control has underscored the necessity of accurate system modeling for solving complex problems. Rosenbrock [1] revealed the limitations of traditional state-space models and introduced the concept of singular systems by incorporating singular matrices into dynamic coefficient matrices. In the field of economics, the dynamic input–output model proposed by Luenberger and Arbel [2] is classified as a singular system. Subsequently, Luenberger and Arbel [2] proposed a rigorous analytical framework to address the existence and uniqueness of solutions in both linear and nonlinear singular systems. Cobb [3] advanced the field further by establishing controllability and observability criteria, along with their duality principles, for singular systems within the context of structural system analysis. Building on these developments, Dai [4] systematically integrated optimal control theory with singular system theory, laying a comprehensive foundation for the optimal control of deterministic singular systems.
The study of systems influenced by various types of uncertainty is of vital importance to both systems science and optimal control theory. When uncertainty is modeled through random variables or Wiener processes, one arrives at stochastic systems [5,6,7], prompting the study of stochastic singular systems. For instance, Shu and Li [6] introduced dynamic programming into the analysis of optimal control problems involving stochastic singular systems, developing a recurrence equation framework based on probabilistic expectation. Vlasenko et al. [7] established conditions for the existence and uniqueness of solutions for a class of stochastic singular systems. Li and Ma [8] tackled the indefinite linear quadratic problem within singular Markov jump systems through the application of an equivalent transformation method.
The application of probability theory depends on the availability of sufficient historical data to construct probability distributions that approximate true frequencies. However, in many complex or emerging systems where sample data are sparse or nonexistent, expert judgment often serves as a key source of information. Due to the inherent conservatism and subjectivity of human cognition, expert assessments often deviate significantly from objective frequencies. Liu [9] argued that forcibly interpreting expert confidence using probabilistic frameworks can lead to paradoxical or counterintuitive conclusions—an insight that led directly to the development of [9,10]. Within the realm of uncertainty theory, uncertainties are modeled as uncertain variables, and expert assessments are expressed through uncertainty distributions. When system uncertainty is described using such uncertain variables, one obtains uncertain systems [11,12,13]. Zhu [14] demonstrated that solving optimality equations and recurrence equations can yield solutions to uncertain optimal control problems. This foundational work inspired a range of studies [15,16,17,18,19,20], which built the theoretical basis for analyzing uncertain singular systems. Shu and Zhu [21] proposed a rigorous analytical framework for analyzing the stability of uncertain singular systems. Under the uncertain expectation criterion, Shu et al. [22] developed a recurrence equation solution method for uncertain singular systems by introducing the assumptions of regularity and impulse-free conditions. Although the expectation criterion is widely adopted in uncertain optimization, its limitations are well recognized. For instance, in urban income distribution analysis, severe polarization may render average values misleading. To address such concerns, Shu and Zhu [23] applied the optimistic value criterion to uncertain singular systems, yielding several insightful results. Later, Chen et al. [24] investigated the linear–quadratic optimal control problem under the pessimistic value criterion. These developments in criterion design not only enrich the theoretical landscape of uncertain optimization but also offer new tools for addressing problems involving risk preferences and trade-offs. Recent research on the optimal control of uncertain singular systems has primarily focused on systems that are both regular and impulse-free [15,22,25]. While these studies have achieved significant progress, they encounter major challenges when extended to non-causal systems that satisfy only the regularity condition. Such non-causal systems frequently arise in practical applications, including certain problems in signal processing and electrical circuit analysis, where methods designed for causal and impulse-free systems may fail.
In this study, we extend the optimal control framework to regular but non-impulse-free uncertain singular non-causal systems by incorporating the Hurwicz criterion from decision theory. This criterion effectively balances risk and reward in uncertain environments, mirroring decision-making processes in investment scenarios where high-reward opportunities (optimistic values) must be weighed against potential losses (pessimistic values) using a weighting coefficient known as the Hurwicz index. For example, in smart grid dispatch, operators must balance the integration of renewable energy (optimistic objective) with the assurance of grid stability (pessimistic constraint). While previous work on causal uncertain singular systems is extensive [15,23], the distinctive nature of non-causal systems poses substantial obstacles to these established methods. To address this challenge, we adopt the algebraic transformation approach proposed in [26], which decomposes uncertain singular non-causal systems into forward and backward subsystems, fully exploiting their non-causal structure. This transformation simplifies the original problem into an equivalent form that is more amenable to analysis. In the context of the Hurwicz criterion, we analyze the properties of optimistic and pessimistic values and introduce a weighting coefficient to construct a unified objective function. Following an approach similar to [15], we employ a recurrence equation to solve this new class of uncertain optimal control problems. Our study not only generalizes existing results [15,23,26], but also broadens the scope of uncertain optimal control under mixed-risk evaluation criteria.
The remainder of this paper is organized as follows: Section 2 reviews the fundamental concepts of uncertainty theory and optimal control. Section 3 introduces a new modeling framework for uncertain singular non-causal systems, where the system is decomposed into forward and backward uncertain subsystems via an algebraic transformation method. Section 4 presents the formulation of a recurrence equation for solving the Hurwicz-type optimal control problem. Section 5 applies the recurrence equation to solve optimal control problems for both linear and nonlinear uncertain singular non-causal systems. Section 6 provides a numerical example to verify the effectiveness of the proposed approach.

2. Preliminaries

To facilitate the subsequent discussion, this section reviews fundamental concepts from uncertainty theory [10], followed by essential preliminaries on optimal control.

2.1. Uncertainty Theory

An uncertain variable $\xi$ defined on an uncertainty space $(\Gamma, \mathcal{L}, \mathcal{M})$ is associated with an uncertainty distribution $\Phi(x) = \mathcal{M}\{\xi \le x\}$, $x \in \mathbb{R}$. The uncertain expectation of $\xi$ is given by
$$E_{\mathcal{M}}[\xi] = \int_{0}^{+\infty} \mathcal{M}\{\xi \ge x\}\,\mathrm{d}x - \int_{-\infty}^{0} \mathcal{M}\{\xi \le x\}\,\mathrm{d}x = \int_{0}^{+\infty} \big(1-\Phi(x)\big)\,\mathrm{d}x - \int_{-\infty}^{0} \Phi(x)\,\mathrm{d}x.$$
This expectation is well defined as long as at least one of the two integrals is finite.
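As a quick numerical illustration (ours, not part of the original text), the expectation can be evaluated directly from this definition; for a linear uncertain variable $\mathcal{L}(a,b)$ with distribution $\Phi(x)=(x-a)/(b-a)$ on $[a,b]$, the two integrals give $(a+b)/2$. A minimal Python sketch, with the truncation bounds chosen arbitrarily:

    import numpy as np

    def uncertain_expectation(Phi, lo=-5.0, hi=5.0, n=200001):
        # E[xi] = integral_0^inf (1 - Phi(x)) dx - integral_{-inf}^0 Phi(x) dx, truncated to [lo, hi].
        xs = np.linspace(lo, hi, n)
        dx = xs[1] - xs[0]
        pos = np.sum(1.0 - Phi(xs[xs >= 0.0])) * dx
        neg = np.sum(Phi(xs[xs < 0.0])) * dx
        return pos - neg

    # Linear uncertain variable L(a, b): Phi(x) = 0 below a, (x - a)/(b - a) on [a, b], 1 above b.
    a, b = -0.5, 0.5
    Phi = lambda x: np.clip((x - a) / (b - a), 0.0, 1.0)
    print(uncertain_expectation(Phi))   # approximately 0.0 = (a + b) / 2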
In addition to expectation, Liu [9] introduced two further metrics to characterize uncertain variables.
Definition 1
(Liu [9]). Given an uncertain variable $\xi$ and a confidence level $\alpha \in (0,1]$, the $\alpha$-optimistic value and $\alpha$-pessimistic value of $\xi$ are defined as
$$\xi_{\sup}(\alpha) = \sup\{l \mid \mathcal{M}\{\xi \ge l\} \ge \alpha\}, \qquad \xi_{\inf}(\alpha) = \inf\{l \mid \mathcal{M}\{\xi \le l\} \ge \alpha\},$$
respectively.
Lemma 1
(Liu [9]). Let $\xi$ be an uncertain variable and $\rho \in \mathbb{R}$. Then, the following properties hold:
$$(\rho\xi)_{\sup}(\alpha) = \rho\,\xi_{\sup}(\alpha), \qquad (\rho\xi)_{\inf}(\alpha) = \rho\,\xi_{\inf}(\alpha),$$
if $\rho \ge 0$, and
$$(\rho\xi)_{\sup}(\alpha) = \rho\,\xi_{\inf}(\alpha), \qquad (\rho\xi)_{\inf}(\alpha) = \rho\,\xi_{\sup}(\alpha),$$
if $\rho < 0$. For two independent uncertain variables $\xi$ and $\eta$,
$$(\xi+\eta)_{\sup}(\alpha) = \xi_{\sup}(\alpha) + \eta_{\sup}(\alpha), \qquad (\xi+\eta)_{\inf}(\alpha) = \xi_{\inf}(\alpha) + \eta_{\inf}(\alpha).$$
Under the uncertain measure $\mathcal{M}$, the expectation $E_{\mathcal{M}}[\xi]$ reflects the average tendency of $\xi$. Moreover, $\xi$ exceeds its $\alpha$-optimistic value with confidence level at least $\alpha$, and is less than or equal to its $\alpha$-pessimistic value with the same confidence level. As indicated by Lemma 1, the optimistic and pessimistic values are linearly additive for mutually independent uncertain variables. Specifically, uncertain variables $\xi_1, \xi_2, \ldots, \xi_k$ are said to be independent if, for any Borel sets $\Lambda_1, \Lambda_2, \ldots, \Lambda_k \subset \mathbb{R}$, the following condition holds:
$$\mathcal{M}\left\{\bigcap_{j=1}^{k}\{\xi_j \in \Lambda_j\}\right\} = \min_{1 \le j \le k} \mathcal{M}\{\xi_j \in \Lambda_j\}.$$
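For concreteness, the following sketch (ours) evaluates the $\alpha$-optimistic and $\alpha$-pessimistic values of a linear uncertain variable $\mathcal{L}(a,b)$ by inverting its uncertainty distribution $\Phi(x)=(x-a)/(b-a)$; this is the distribution used later in the numerical example of Section 6.

    def optimistic_value(a, b, alpha):
        # xi_sup(alpha) = sup{ l : M{xi >= l} >= alpha } for xi ~ L(a, b)
        return alpha * a + (1 - alpha) * b

    def pessimistic_value(a, b, alpha):
        # xi_inf(alpha) = inf{ l : M{xi <= l} >= alpha } for xi ~ L(a, b)
        return (1 - alpha) * a + alpha * b

    # Example: xi ~ L(-1/2, 1/2) and alpha = 0.7, as in Section 6.
    print(optimistic_value(-0.5, 0.5, 0.7))   # -0.2
    print(pessimistic_value(-0.5, 0.5, 0.7))  #  0.2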

2.2. Optimal Control Problem

The central objective of uncertain optimal control is to determine control strategies that optimize the performance index of an uncertain system. Due to the presence of uncertain variables, such performance indices are typically evaluated using their numerical characteristics.
Zhu [14] proposed an optimal control framework based on the uncertain expectation
$$\begin{aligned}
V(y(0),0) = \max_{\substack{w(j)\in W(j)\\ 0\le j\le J-1}}\; & E_{\mathcal{M}}\left[\sum_{j=0}^{J-1}\phi(y(j),w(j),j)+\psi(y(J),J)\right] \\
\text{s.t.}\quad & y(j+1) = f(y(j),w(j),\xi_j), \quad j = 0,1,2,\ldots,J-1,
\end{aligned}$$
where $y(j)\in\mathbb{R}^m$ is the state, $w(j)\in\mathbb{R}^{n_w}$ is the control, and $\xi_j\in\mathbb{R}^{m_\xi}$ denotes an uncertain vector. $f:\mathbb{R}^m\times\mathbb{R}^{n_w}\times\mathbb{R}^{m_\xi}\to\mathbb{R}^m$ is a vector-valued function, and $\phi$ and $\psi$ are real-valued functions. For each $k = J-1, J-2, \ldots, 1, 0$, let $V(y(k),k)$ be the optimal value function from the $k$-th stage to the final stage. Then, we have the following equation:
$$V(y(k),k) = \max_{w(k)\in W(k)} E_{\mathcal{M}}\left[\phi(y(k),w(k),k)+\sum_{j=k+1}^{J-1}\phi(y(j),w^*(j),j)+\psi(y(J),J)\right],$$
where w * ( k + 1 ) , , w * ( J 1 ) are optimal controls, and V ( y ( J ) , J ) = ψ ( y ( J ) , J ) .
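To make the recurrence concrete, the following sketch (ours, with an assumed scalar system $y(j+1)=a\,y(j)+b\,w(j)+\xi_j$, stage cost $\phi(y,w,j)=c\,y+d\,w$, terminal cost $\psi(y,J)=y$, $\xi_j\sim\mathcal{L}(-\tfrac{1}{2},\tfrac{1}{2})$, and $W(j)=[-1,1]$) evaluates the last-stage recurrence by enumerating candidate controls on a grid; for this linear setup the maximizer is simply $\operatorname{sign}(d+b)$.

    import numpy as np

    # Last-stage evaluation of the expected-value recurrence for an assumed scalar example.
    a, b, c, d = 1.2, 0.8, 1.0, -0.5
    E_xi = 0.0                      # expectation of L(-1/2, 1/2)
    y = 2.0                         # current state y(J-1)

    w_grid = np.linspace(-1.0, 1.0, 2001)
    values = c * y + d * w_grid + (a * y + b * w_grid + E_xi)   # E[phi + psi]
    print(w_grid[np.argmax(values)], values.max())              # w* = sign(d + b) = 1 here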
The expectation is the average in the sense of the uncertain measure, which generally reflects the properties of uncertain variables well. However, in scenarios with highly skewed distributions or extreme polarization, reliance on the expectation may lead to misleading or suboptimal decisions. To address this limitation, Chen et al. [24] introduced an alternative formulation based on the pessimistic value:
$$\begin{aligned}
V(y(0),0) = \max_{\substack{w(j)\in W(j)\\ 0\le j\le J-1}}\; & \left[\sum_{j=0}^{J-1}\phi(y(j),w(j),j)+\psi(y(J),J)\right]_{\inf}(\alpha) \\
\text{s.t.}\quad & y(j+1) = f(y(j),w(j),\xi_j), \quad j = 0,1,2,\ldots,J-1,
\end{aligned}$$
where $\alpha\in(0,1]$ is the confidence level. The corresponding recursive form is
$$V(y(k),k) = \max_{w(k)\in W(k)}\left[\phi(y(k),w(k),k)+\sum_{j=k+1}^{J-1}\phi(y(j),w^*(j),j)+\psi(y(J),J)\right]_{\inf}(\alpha),$$
to find the optimal controls of this problem.
Both optimal control problems (1) and (3) are established within the framework of standard systems. In contrast, the well-known Leontief dynamic input–output model belongs to the class of singular systems. The emergence of such non-causal singular models can be attributed to various physical considerations. For instance, in advanced signal processing, reconstructing complete signal information often requires accounting for future values, thereby necessitating non-causal models. Similarly, in circuit design, the output of components with memory effects may depend not only on past and present inputs but also on potential future inputs, motivating the development of non-causal system representations.

3. Uncertain Singular Non-Causal System

A class of linear singular systems was introduced by Dai [4] and is formulated as follows:
$$E\,y(j+1) = H\,y(j) + \Pi\,w(j), \quad j = 0,1,2,\ldots,J-1.$$
In this system, the state is represented by $y(j)\in\mathbb{R}^m$, and the control is denoted as $w(j)\in\mathbb{R}^{n_w}$. The matrices $E, H\in\mathbb{R}^{m\times m}$ and $\Pi\in\mathbb{R}^{m\times n_w}$ are deterministic. It is assumed that $\operatorname{rank} E = q < m$.
Definition 2
(Dai [4]). The system (5) is called regular if $\det(xE-H)\not\equiv 0$, where $x$ is a complex variable and $\det(\cdot)$ denotes the determinant of a matrix. The system (5) is impulse-free if $\deg(\det(xE-H)) = \operatorname{rank} E$, where $\deg(\cdot)$ denotes the degree of the resulting polynomial.
From a physical perspective, the regularity condition ensures that the system model is mathematically well posed and physically meaningful. For singular systems such as (5), regularity ensures that the matrix pair $(E, H)$ does not yield purely algebraic equations, i.e., $\det(xE-H)\not\equiv 0$. This avoids degenerate situations where the system’s dynamics are ill defined. For example, consider a circuit with a capacitor and an inductor in parallel, modeled with singular matrices because of algebraic constraints (e.g., Kirchhoff’s laws). The regularity condition ensures that the circuit equations have a unique solution for voltages and currents over time, preventing contradictory constraints (e.g., infinite currents). If $\det(xE-H)\equiv 0$, the circuit would have no well-defined dynamic behavior, rendering the model physically irrelevant. The impulse-free condition guarantees that the system does not generate impulsive responses (e.g., infinite spikes in states or controls) at the initial time $j=0$. This is critical for practical systems, as impulses are unrealizable in hardware and can damage components. For instance, if a robotic arm undergoes a constrained motion with singular dynamics, an impulse-free model avoids instantaneous velocity jumps that could violate physical limitations.
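A small symbolic checker (ours, not part of the paper) makes Definition 2 operational: it forms $\det(xE-H)$, tests whether it vanishes identically, and compares its degree with $\operatorname{rank} E$. The toy pair below is an assumed illustration that is regular but not impulse-free, i.e., non-causal.

    import sympy as sp

    def regularity_report(E, H):
        # Definition 2: regular if det(xE - H) is not identically zero;
        # impulse-free if, in addition, deg(det(xE - H)) = rank(E).
        x = sp.symbols('x')
        p = sp.expand(sp.det(x * E - H))
        regular = not p.equals(0)
        degree = sp.degree(p, x) if regular else None
        impulse_free = bool(regular and degree == E.rank())
        return p, regular, impulse_free

    # Assumed toy pair: det(xE - H) = 1 (degree 0) while rank(E) = 1,
    # so the pair is regular but not impulse-free (non-causal).
    E = sp.Matrix([[0, 1], [0, 0]])
    H = sp.eye(2)
    print(regularity_report(E, H))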
Taking into account that the system (5) is subject to perturbations by uncertain vectors ξ 0 , ξ 1 , , ξ J 1 R m ξ , Shu and Zhu [15] proposed the following linear uncertain singular causal system, formulated within the framework of uncertainty theory:
$$E\,y(j+1) = H\,y(j) + \Pi\,w(j) + D\,\xi_j, \quad j = 0,1,2,\ldots,J-1,$$
where ( E , H ) satisfies both the regularity and impulse-free conditions. In contrast, for an uncertain singular non-causal system, only the regularity condition is required. According to Definition 2, regularity is generally a less restrictive condition than the impulse-free condition. Notably, when an uncertain singular non-causal system also satisfies the impulse-free condition, it reduces to a causal system, as in (6).
Unlike previous research that mainly focused on linear uncertain singular causal systems, we present nonlinear uncertain singular non-causal systems, a class of generalized systems that is worthy of investigation:
$$E\,y(j+1) = H\,y(j) + f(w(j),\xi_j), \quad j = 0,1,2,\ldots,J-1,$$
where $(E,H)$ is regular, and $f:\mathbb{R}^{n_w}\times\mathbb{R}^{m_\xi}\to\mathbb{R}^{m}$ is a vector-valued function that satisfies the Lipschitz condition with respect to its arguments.
Lemma 2.
For the uncertain singular non-causal system (7), there exist non-singular matrices F and M such that the system is a restricted system equivalent to the following decoupled form:
$$\bar{y}_1(j+1) = H_1\bar{y}_1(j) + f_1(w(j),\xi_j), \qquad \bar{y}_2(j) = E_2\bar{y}_2(j+1) - f_2(w(j),\xi_j), \quad j = 0,1,2,\ldots,J-1,$$
with the coordinate transformation
$$M^{-1}y(j) = \begin{pmatrix}\bar{y}_1(j)\\ \bar{y}_2(j)\end{pmatrix}, \qquad \bar{y}_1(j)\in\mathbb{R}^{q},\ \bar{y}_2(j)\in\mathbb{R}^{m-q},$$
and
$$F H M = \begin{pmatrix}H_1 & 0\\ 0 & I_{m-q}\end{pmatrix}, \qquad F f(w(j),\xi_j) = \begin{pmatrix}f_1(w(j),\xi_j)\\ f_2(w(j),\xi_j)\end{pmatrix},$$
where $H_1\in\mathbb{R}^{q\times q}$, $f_1(w(j),\xi_j)\in\mathbb{R}^{q}$, $f_2(w(j),\xi_j)\in\mathbb{R}^{m-q}$, and $E_2\in\mathbb{R}^{(m-q)\times(m-q)}$ is a nilpotent matrix. The nilpotent index is denoted by $l_{\min} = \min\{l \mid E_2^{l} = 0,\ l\in\mathbb{Z}_{+}\}$.
Proof. 
As shown by Dai [4], there exist non-singular matrices F , M R m × m such that
$$F E M = \begin{pmatrix}I_q & 0\\ 0 & E_2\end{pmatrix}.$$
Applying this transformation to the system (7) yields the equivalent representation given in (8).
It is worth noting that both systems (7) and (8) share the same control. This transformation allows the associated optimal control problems to be analyzed through the simplified, decoupled system (8), making the analysis of system behavior under uncertainty considerably more tractable.    □
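To illustrate the forward/backward structure of (8), the following sketch (ours) simulates the two subsystems under given controls and realized uncertainties, assuming the linear forms $f_1(w,\xi)=\Pi_1 w+D_1\xi$ and $f_2(w,\xi)=\Pi_2 w$ that appear in the special cases of Section 5; $\bar{y}_1$ is propagated forward from $\bar{y}_1(0)$ and $\bar{y}_2$ backward from $\bar{y}_2(J)$.

    import numpy as np

    def simulate_decoupled(H1, E2, Pi1, Pi2, D1, y1_0, y2_J, w_seq, xi_seq):
        # Forward subsystem: y1(j+1) = H1 y1(j) + Pi1 w(j) + D1 xi(j).
        # Backward subsystem: y2(j) = E2 y2(j+1) - Pi2 w(j).
        J = len(w_seq)
        y1 = [np.asarray(y1_0, dtype=float)]
        for j in range(J):
            y1.append(H1 @ y1[j] + Pi1 @ w_seq[j] + D1 @ xi_seq[j])
        y2 = [None] * (J + 1)
        y2[J] = np.asarray(y2_J, dtype=float)
        for j in range(J - 1, -1, -1):
            y2[j] = E2 @ y2[j + 1] - Pi2 @ w_seq[j]
        return y1, y2

    # Tiny assumed instance with q = 1, m - q = 1, J = 3 (all blocks 1x1).
    H1 = np.array([[1.0]]); E2 = np.array([[0.0]])   # E2 nilpotent
    Pi1 = np.array([[1.0]]); Pi2 = np.array([[2.0]]); D1 = np.array([[1.0]])
    w = [np.array([1.0]), np.array([-1.0]), np.array([1.0])]
    xi = [np.array([0.1]), np.array([-0.2]), np.array([0.0])]
    print(simulate_decoupled(H1, E2, Pi1, Pi2, D1, [0.0], [0.0], w, xi))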

4. Optimal Control Problem Based on the Hurwicz Criterion

Let $\phi$ and $\psi$ be given measurable real-valued functions, and let $\alpha\in(0,1]$ denote the confidence level. The $\alpha$-optimistic and $\alpha$-pessimistic values of the uncertain variable
$$\sum_{j=0}^{J-1}\phi(y(j),w(j),j)+\psi(y(J),J)$$
are defined, respectively, as
$$\left[\sum_{j=0}^{J-1}\phi(y(j),w(j),j)+\psi(y(J),J)\right]_{\sup}(\alpha)$$
and
$$\left[\sum_{j=0}^{J-1}\phi(y(j),w(j),j)+\psi(y(J),J)\right]_{\inf}(\alpha).$$
To unify these two evaluation criteria, we introduce a Hurwicz-type optimal control framework for the uncertain singular non-causal system (7). This framework balances the $\alpha$-optimistic and $\alpha$-pessimistic values using a weight parameter $\rho\in[0,1]$:
$$\begin{aligned}
V(y(0),y(J),0)=\max_{\substack{w(j)\in W(j)\\ 0\le j\le J-1}}\Bigg\{ & \rho\left[\sum_{j=0}^{J-1}\phi(y(j),w(j),j)+\psi(y(J),J)\right]_{\sup}(\alpha) \\
& +(1-\rho)\left[\sum_{j=0}^{J-1}\phi(y(j),w(j),j)+\psi(y(J),J)\right]_{\inf}(\alpha)\Bigg\} \\
\text{s.t.}\quad & E\,y(j+1)=H\,y(j)+f(w(j),\xi_j),\quad j=0,1,2,\ldots,J-1.
\end{aligned}$$
Remark 1.
In problem (9), the integration of two criteria forms a hybrid decision-making framework that accommodates the subjective preferences of decision-makers. The underlying rationale is rooted in the classical Hurwicz criterion [27], which enables a quantitative adjustment between optimism and pessimism via the introduction of a risk preference coefficient ρ [ 0 , 1 ] . Specifically, when ρ = 1 , it corresponds to a fully optimistic strategy, with decision-making focus on best-case outcomes, suitable for scenarios with abundant resource redundancy and high risk tolerance. Conversely, when ρ = 0 , the strategy is fully pessimistic, favoring absolute risk aversion and system robustness under worst-case scenarios, often observed in high-stakes environments. Intermediate values ( 0 < ρ < 1 ) represent a balanced strategy, where linear weighting of the two extremes allows the decision-making process to flexibly reconcile performance and robustness, thereby addressing multidimensional requirements in practical engineering applications.
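As a numerical companion to this remark (ours, not from the paper), the Hurwicz value $\rho\,(\cdot)_{\sup}(\alpha)+(1-\rho)\,(\cdot)_{\inf}(\alpha)$ of a single linear uncertain variable $\mathcal{L}(a,b)$ can be tabulated over $\rho$ using the closed forms from Section 2.1:

    def hurwicz_value(a, b, alpha, rho):
        # rho * (alpha-optimistic value) + (1 - rho) * (alpha-pessimistic value) for xi ~ L(a, b)
        sup_val = alpha * a + (1 - alpha) * b
        inf_val = (1 - alpha) * a + alpha * b
        return rho * sup_val + (1 - rho) * inf_val

    for rho in (0.0, 0.6, 1.0):
        print(rho, hurwicz_value(-0.5, 0.5, 0.7, rho))
    # rho = 0.0 ->  0.20  (fully pessimistic)
    # rho = 0.6 -> -0.04  (balanced)
    # rho = 1.0 -> -0.20  (fully optimistic)

The printed values vary linearly in $\rho$, the same convex-combination behavior observed for the optimal values in Section 6.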
Lemma 3.
The problem (9) is equivalent to the following decomposed formulation:
$$\begin{aligned}
V(\bar{y}_1(0),\bar{y}_2(J),0)=\max_{\substack{w(j)\in W(j)\\ 0\le j\le J-1}}\Bigg\{ & \rho\left[\sum_{j=0}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w(j),w(j+1),\ldots,w(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\sup}(\alpha) \\
& +(1-\rho)\left[\sum_{j=0}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w(j),w(j+1),\ldots,w(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\inf}(\alpha)\Bigg\} \\
\text{s.t.}\quad & \bar{y}_1(j+1)=H_1\bar{y}_1(j)+f_1(w(j),\xi_j), \\
& \bar{y}_2(j)=E_2\bar{y}_2(j+1)-f_2(w(j),\xi_j),\quad j=0,1,2,\ldots,J-1,
\end{aligned}$$
with $M=[M_1\;\;M_2]$, $M_1\in\mathbb{R}^{m\times q}$, $M_2\in\mathbb{R}^{m\times(m-q)}$, and
$$\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w(j),w(j+1),\ldots,w(J-1),j)=\phi(M_1\bar{y}_1(j)+M_2\bar{y}_2(j),w(j),j),\qquad \bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)=\psi(M_1\bar{y}_1(J)+M_2\bar{y}_2(J),J).$$
The proof follows directly from Lemma 2.
Since the optimal value in (10) depends on the initial and terminal states, we denote it by $V(\bar{y}_1(0),\bar{y}_2(J),0)$. For any $k\in\{1,\ldots,J-1\}$, we define the value function given $\bar{y}_1(k)$ and $\bar{y}_2(J)$ as follows:
$$\begin{aligned}
V(\bar{y}_1(k),\bar{y}_2(J),k)=\max_{\substack{w(j)\in W(j)\\ k\le j\le J-1}}\Bigg\{ & \rho\left[\sum_{j=k}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w(j),w(j+1),\ldots,w(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\sup}(\alpha) \\
& +(1-\rho)\left[\sum_{j=k}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w(j),w(j+1),\ldots,w(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\inf}(\alpha)\Bigg\} \\
\text{s.t.}\quad & \bar{y}_1(j+1)=H_1\bar{y}_1(j)+f_1(w(j),\xi_j), \\
& \bar{y}_2(j)=E_2\bar{y}_2(j+1)-f_2(w(j),\xi_j),\quad j=k,k+1,k+2,\ldots,J-1.
\end{aligned}$$
The optimal solution can be obtained via the following recurrence relations.
Theorem 1.
For each $k=J-1,J-2,\ldots,1,0$, the value function satisfies the following recurrence equation:
$$\begin{aligned}
V(\bar{y}_1(k),\bar{y}_2(J),k)=\max_{w(k)\in W(k)}\Bigg\{ & \rho\Big[\bar{\phi}(\bar{y}_1(k),\bar{y}_2(J),w(k),w^*(k+1),\ldots,w^*(J-1),k) \\
& \quad +\sum_{j=k+1}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w^*(j),w^*(j+1),\ldots,w^*(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\Big]_{\sup}(\alpha) \\
& +(1-\rho)\Big[\bar{\phi}(\bar{y}_1(k),\bar{y}_2(J),w(k),w^*(k+1),\ldots,w^*(J-1),k) \\
& \quad +\sum_{j=k+1}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w^*(j),w^*(j+1),\ldots,w^*(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\Big]_{\inf}(\alpha)\Bigg\},
\end{aligned}$$
where $w^*(k+1),\ldots,w^*(J-1)$ are optimal controls, and $V(\bar{y}_1(J),\bar{y}_2(J),J)=\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)$.
Proof. 
Let $\tilde{V}(\bar{y}_1(k),\bar{y}_2(J),k)$ denote the right-hand side of the recurrence Equation (11). By the definition of the value function $V(\bar{y}_1(k),\bar{y}_2(J),k)$, for any admissible controls $w(j)\in W(j)$, $j=k,k+1,\ldots,J-1$, we have
$$\begin{aligned}
V(\bar{y}_1(k),\bar{y}_2(J),k) \ge{} & \rho\left[\sum_{j=k}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w(j),w(j+1),\ldots,w(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\sup}(\alpha) \\
& +(1-\rho)\left[\sum_{j=k}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w(j),w(j+1),\ldots,w(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\inf}(\alpha) \\
={} & \rho\left[\bar{\phi}(\bar{y}_1(k),\bar{y}_2(J),w(k),w(k+1),\ldots,w(J-1),k)+\sum_{j=k+1}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w(j),w(j+1),\ldots,w(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\sup}(\alpha) \\
& +(1-\rho)\left[\bar{\phi}(\bar{y}_1(k),\bar{y}_2(J),w(k),w(k+1),\ldots,w(J-1),k)+\sum_{j=k+1}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w(j),w(j+1),\ldots,w(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\inf}(\alpha).
\end{aligned}$$
Taking the maximum with respect to w ( j ) , j = k + 1 , k + 2 , , J 1 in (12), and then w ( k ) in (12), we obtain
$$V(\bar{y}_1(k),\bar{y}_2(J),k) \ge \tilde{V}(\bar{y}_1(k),\bar{y}_2(J),k).$$
On the other hand, for any admissible controls $w(j)\in W(j)$, $j=k,k+1,\ldots,J-1$, we have
$$\begin{aligned}
& \rho\left[\sum_{j=k}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w(j),w(j+1),\ldots,w(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\sup}(\alpha) \\
&\quad +(1-\rho)\left[\sum_{j=k}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w(j),w(j+1),\ldots,w(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\inf}(\alpha) \\
&\le \rho\left[\bar{\phi}(\bar{y}_1(k),\bar{y}_2(J),w(k),w^*(k+1),\ldots,w^*(J-1),k)+\sum_{j=k+1}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w^*(j),w^*(j+1),\ldots,w^*(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\sup}(\alpha) \\
&\quad +(1-\rho)\left[\bar{\phi}(\bar{y}_1(k),\bar{y}_2(J),w(k),w^*(k+1),\ldots,w^*(J-1),k)+\sum_{j=k+1}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w^*(j),w^*(j+1),\ldots,w^*(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\inf}(\alpha) \\
&\le \max_{w(k)\in W(k)}\Bigg\{\rho\left[\bar{\phi}(\bar{y}_1(k),\bar{y}_2(J),w(k),w^*(k+1),\ldots,w^*(J-1),k)+\sum_{j=k+1}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w^*(j),w^*(j+1),\ldots,w^*(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\sup}(\alpha) \\
&\qquad +(1-\rho)\left[\bar{\phi}(\bar{y}_1(k),\bar{y}_2(J),w(k),w^*(k+1),\ldots,w^*(J-1),k)+\sum_{j=k+1}^{J-1}\bar{\phi}(\bar{y}_1(j),\bar{y}_2(J),w^*(j),w^*(j+1),\ldots,w^*(J-1),j)+\bar{\psi}(\bar{y}_1(J),\bar{y}_2(J),J)\right]_{\inf}(\alpha)\Bigg\} \\
&= \tilde{V}(\bar{y}_1(k),\bar{y}_2(J),k).
\end{aligned}$$
Combining (13) and (14), we conclude that Equation (11) holds. Furthermore, for the terminal case j = J , it follows from the definition of V ( y ¯ 1 ( k ) , y ¯ 2 ( J ) , k ) that V ( y ¯ 1 ( J ) , y ¯ 2 ( J ) , J ) = ψ ¯ ( y ¯ 1 ( J ) , y ¯ 2 ( J ) , J ) . This completes the proof.    □
Remark 2.
It is worth noting that when the matrix pair ( E , H ) satisfies the regularity and impulse-free conditions, the problem (9) reduces to the optimal control problem for uncertain singular causal systems studied by Shu and Sheng [25]. Moreover, if the matrix E is nonsingular, the problem (9) degenerates into an optimal control problem under the standard system framework:
$$\begin{aligned}
V(y(0),y(J),0)=\max_{\substack{w(j)\in W(j)\\ 0\le j\le J-1}}\Bigg\{ & \rho\left[\sum_{j=0}^{J-1}\phi(y(j),w(j),j)+\psi(y(J),J)\right]_{\sup}(\alpha) \\
& +(1-\rho)\left[\sum_{j=0}^{J-1}\phi(y(j),w(j),j)+\psi(y(J),J)\right]_{\inf}(\alpha)\Bigg\} \\
\text{s.t.}\quad & y(j+1)=E^{-1}H\,y(j)+E^{-1}f(w(j),\xi_j),\quad j=0,1,2,\ldots,J-1.
\end{aligned}$$
In particular, when ρ = 0 , the problem becomes fully aligned with the optimal control problem under the pessimistic value criterion, as studied by Chen et al. [24]:
$$\begin{aligned}
V(y(0),y(J),0)=\max_{\substack{w(j)\in W(j)\\ 0\le j\le J-1}}\; & \left[\sum_{j=0}^{J-1}\phi(y(j),w(j),j)+\psi(y(J),J)\right]_{\inf}(\alpha) \\
\text{s.t.}\quad & y(j+1)=E^{-1}H\,y(j)+E^{-1}f(w(j),\xi_j),\quad j=0,1,2,\ldots,J-1.
\end{aligned}$$
Consequently, the recurrence Equation (11) given in Theorem 1 simplifies to the form of Equation (4). These results collectively demonstrate that problem (9) developed in this work possesses a more general structural form, encompassing several well-established models as special cases.
This transformation simplifies the original optimal control problem (9), which involves an uncertain singular non-causal system, by reformulating it into the equivalent form (10). In this reformulation, the system is decomposed into a combination of forward and backward uncertain subsystems. Utilizing the recurrence equation, we are able to analyze problem (10) in a backward recursive fashion, starting from the terminal stage and proceeding step by step toward the initial stage. In this process, we assume that ξ 0 , ξ 1 , , ξ J 1 are mutually independent. This assumption is introduced to facilitate the recursive computation by exploiting the linear additivity property of the optimistic and pessimistic values, as established in Lemma 1. Additionally, this recursive approach enables the derivation of analytical solutions for specific instances of optimal control problems involving uncertain singular non-causal systems.

5. Special Instances

First, we consider the following Hurwicz-type optimal control problem for a linear uncertain singular non-causal system:
$$\begin{aligned}
V(y(0),y(J),0)=\max_{\substack{w(j)\in[-1,1]^{n_w}\\ 0\le j\le J-1}}\Bigg\{ & \rho\left[\sum_{j=0}^{J-1}\phi_j^T y(j)+\phi_J^T y(J)\right]_{\sup}(\alpha)+(1-\rho)\left[\sum_{j=0}^{J-1}\phi_j^T y(j)+\phi_J^T y(J)\right]_{\inf}(\alpha)\Bigg\} \\
\text{s.t.}\quad & E\,y(j+1)=H\,y(j)+\Pi\,w(j)+D\,\xi_j,\quad j=0,1,2,\ldots,J-1,
\end{aligned}$$
where $\phi_j,\phi_J\in\mathbb{R}^m$, $\Pi\in\mathbb{R}^{m\times n_w}$, $D\in\mathbb{R}^{m\times m_\xi}$. Let
$$F\Pi=\begin{pmatrix}\Pi_1\\ \Pi_2\end{pmatrix},\quad \Pi_1\in\mathbb{R}^{q\times n_w},\ \Pi_2\in\mathbb{R}^{(m-q)\times n_w},\qquad FD=\begin{pmatrix}D_1\\ D_2\end{pmatrix},\quad D_1\in\mathbb{R}^{q\times m_\xi},\ D_2\in\mathbb{R}^{(m-q)\times m_\xi}.$$
If we assume that D 2 = 0 , then problem (17) is equivalent to the following transformed problem:
$$\begin{aligned}
V(\bar{y}_1(0),\bar{y}_2(J),0)=\max_{\substack{w(j)\in[-1,1]^{n_w}\\ 0\le j\le J-1}}\Bigg\{ & \rho\left[\sum_{j=0}^{J-1}\phi_j^T\big(M_1\bar{y}_1(j)+M_2\bar{y}_2(j)\big)+\phi_J^T\big(M_1\bar{y}_1(J)+M_2\bar{y}_2(J)\big)\right]_{\sup}(\alpha) \\
& +(1-\rho)\left[\sum_{j=0}^{J-1}\phi_j^T\big(M_1\bar{y}_1(j)+M_2\bar{y}_2(j)\big)+\phi_J^T\big(M_1\bar{y}_1(J)+M_2\bar{y}_2(J)\big)\right]_{\inf}(\alpha)\Bigg\} \\
\text{s.t.}\quad & \bar{y}_1(j+1)=H_1\bar{y}_1(j)+\Pi_1 w(j)+D_1\xi_j, \\
& \bar{y}_2(j)=E_2\bar{y}_2(j+1)-\Pi_2 w(j),\quad j=0,1,2,\ldots,J-1.
\end{aligned}$$
Theorem 2.
The optimal controls w * ( j ) for the problem (18) are
$$w_{i_w}^*(j)=\begin{cases}
\operatorname{sign}\{l_{i_w j}\}, & \text{if } l_{i_w j}\ne 0,\\
\operatorname{sign}\{u_{i_w j,1}\}, & \text{if } l_{i_w j}=0 \text{ and } u_{i_w j,1}\ne 0,\\
\operatorname{sign}\{u_{i_w j,2}\}, & \text{if } l_{i_w j}=0,\ u_{i_w j,1}=0, \text{ and } u_{i_w j,2}\ne 0,\\
\quad\vdots & \\
\operatorname{sign}\{u_{i_w j,j}\}, & \text{if } l_{i_w j}=0,\ u_{i_w j,1}=0,\ \ldots,\ u_{i_w j,j-1}=0, \text{ and } u_{i_w j,j}\ne 0,\\
\text{undetermined}, & \text{if } l_{i_w j}=0,\ u_{i_w j,1}=0,\ \ldots,\ u_{i_w j,j-1}=0, \text{ and } u_{i_w j,j}=0,
\end{cases}$$
where
$$l_j=r_{j+1}^T\Pi_1-\phi_j^T M_2\Pi_2,\qquad u_{j,\tilde{l}_w}=-\sum_{k=1}^{\tilde{l}_w}\phi_{j-k}^T M_2 E_2^{k}\Pi_2,$$
and
$$r_j^T=\phi_j^T M_1+r_{j+1}^T H_1,$$
for $i_w=1,2,\ldots,n_w$, $\tilde{l}_w=1,2,\ldots,j$, $j=1,2,\ldots,J-1$. The optimal values are
$$V(\bar{y}_1(J),\bar{y}_2(J),J)=r_J^T\bar{y}_1(J)+s_J^T\bar{y}_2(J),$$
$$V(\bar{y}_1(j),\bar{y}_2(J),j)=r_j^T\bar{y}_1(j)+s_j^T\bar{y}_2(J)+\sum_{k=j+1}^{J-1}u_{k,k-j}\,w^*(k)+\sum_{k=j}^{J-1}\|l_k\|_1+\sum_{k=j}^{J-1}t_k,$$
where
$$s_j^T=\phi_j^T M_2 E_2^{J-j}+s_{j+1}^T,\qquad t_k=\rho\big[r_{k+1}^T D_1\xi_k\big]_{\sup}(\alpha)+(1-\rho)\big[r_{k+1}^T D_1\xi_k\big]_{\inf}(\alpha),$$
for $k=j,j+1,\ldots,J-1$, $j=0,1,2,\ldots,J-1$, with $r_J^T=\phi_J^T M_1$, $s_J^T=\phi_J^T M_2$.
Proof. 
According to Theorem 1, it is shown that $V(\bar{y}_1(J),\bar{y}_2(J),J)=r_J^T\bar{y}_1(J)+s_J^T\bar{y}_2(J)$, where $r_J^T=\phi_J^T M_1$ and $s_J^T=\phi_J^T M_2$. When $j=J-1$, by applying Theorem 1, we have
$$\begin{aligned}
& V(\bar{y}_1(J-1),\bar{y}_2(J),J-1) \\
&= \max_{w(J-1)\in[-1,1]^{n_w}}\Big\{\rho\big[\phi_{J-1}^T\big(M_1\bar{y}_1(J-1)+M_2\bar{y}_2(J-1)\big)+r_J^T\bar{y}_1(J)+s_J^T\bar{y}_2(J)\big]_{\sup}(\alpha) \\
&\qquad\qquad +(1-\rho)\big[\phi_{J-1}^T\big(M_1\bar{y}_1(J-1)+M_2\bar{y}_2(J-1)\big)+r_J^T\bar{y}_1(J)+s_J^T\bar{y}_2(J)\big]_{\inf}(\alpha)\Big\} \\
&= \max_{w(J-1)\in[-1,1]^{n_w}}\Big\{\rho\big[\phi_{J-1}^T\big(M_1\bar{y}_1(J-1)+M_2(E_2\bar{y}_2(J)-\Pi_2 w(J-1))\big)+r_J^T\big(H_1\bar{y}_1(J-1)+\Pi_1 w(J-1)+D_1\xi_{J-1}\big)+s_J^T\bar{y}_2(J)\big]_{\sup}(\alpha) \\
&\qquad\qquad +(1-\rho)\big[\phi_{J-1}^T\big(M_1\bar{y}_1(J-1)+M_2(E_2\bar{y}_2(J)-\Pi_2 w(J-1))\big)+r_J^T\big(H_1\bar{y}_1(J-1)+\Pi_1 w(J-1)+D_1\xi_{J-1}\big)+s_J^T\bar{y}_2(J)\big]_{\inf}(\alpha)\Big\} \\
&= \big(\phi_{J-1}^T M_1+r_J^T H_1\big)\bar{y}_1(J-1)+\big(\phi_{J-1}^T M_2 E_2+s_J^T\big)\bar{y}_2(J)+\max_{w(J-1)\in[-1,1]^{n_w}}\big[\big(r_J^T\Pi_1-\phi_{J-1}^T M_2\Pi_2\big)w(J-1)\big] \\
&\qquad +\rho\big[r_J^T D_1\xi_{J-1}\big]_{\sup}(\alpha)+(1-\rho)\big[r_J^T D_1\xi_{J-1}\big]_{\inf}(\alpha).
\end{aligned}$$
Let $l_{J-1}=r_J^T\Pi_1-\phi_{J-1}^T M_2\Pi_2$. For each $i_w=1,2,\ldots,n_w$, we obtain
$$w_{i_w}^*(J-1)=\begin{cases}\operatorname{sign}\{l_{i_w,J-1}\}, & \text{if } l_{i_w,J-1}\ne 0,\\ \text{undetermined}, & \text{if } l_{i_w,J-1}=0,\end{cases}$$
where $l_{i_w,J-1}$ represents the $i_w$-th element of the vector $l_{J-1}$. Denote the 1-norm of the vector $l_{J-1}$ by $\|l_{J-1}\|_1$; then,
$$V(\bar{y}_1(J-1),\bar{y}_2(J),J-1)=r_{J-1}^T\bar{y}_1(J-1)+s_{J-1}^T\bar{y}_2(J)+\|l_{J-1}\|_1+t_{J-1},$$
where
$$r_{J-1}^T=\phi_{J-1}^T M_1+r_J^T H_1,\qquad s_{J-1}^T=\phi_{J-1}^T M_2 E_2+s_J^T,\qquad t_{J-1}=\rho\big[r_J^T D_1\xi_{J-1}\big]_{\sup}(\alpha)+(1-\rho)\big[r_J^T D_1\xi_{J-1}\big]_{\inf}(\alpha).$$
At this point, considering the case $j=J-2$ and leveraging Theorem 1, we obtain
$$\begin{aligned}
& V(\bar{y}_1(J-2),\bar{y}_2(J),J-2) \\
&= \max_{w(J-2)\in[-1,1]^{n_w}}\Big\{\rho\big[\phi_{J-2}^T\big(M_1\bar{y}_1(J-2)+M_2\bar{y}_2(J-2)\big)+r_{J-1}^T\bar{y}_1(J-1)+s_{J-1}^T\bar{y}_2(J)+\|l_{J-1}\|_1+t_{J-1}\big]_{\sup}(\alpha) \\
&\qquad\qquad +(1-\rho)\big[\phi_{J-2}^T\big(M_1\bar{y}_1(J-2)+M_2\bar{y}_2(J-2)\big)+r_{J-1}^T\bar{y}_1(J-1)+s_{J-1}^T\bar{y}_2(J)+\|l_{J-1}\|_1+t_{J-1}\big]_{\inf}(\alpha)\Big\} \\
&= \max_{w(J-2)\in[-1,1]^{n_w}}\Big\{\rho\big[\phi_{J-2}^T\big(M_1\bar{y}_1(J-2)+M_2(E_2^2\bar{y}_2(J)-\Pi_2 w(J-2)-E_2\Pi_2 w^*(J-1))\big) \\
&\qquad\qquad\qquad +r_{J-1}^T\big(H_1\bar{y}_1(J-2)+\Pi_1 w(J-2)+D_1\xi_{J-2}\big)+s_{J-1}^T\bar{y}_2(J)+\|l_{J-1}\|_1+t_{J-1}\big]_{\sup}(\alpha) \\
&\qquad\qquad +(1-\rho)\big[\phi_{J-2}^T\big(M_1\bar{y}_1(J-2)+M_2(E_2^2\bar{y}_2(J)-\Pi_2 w(J-2)-E_2\Pi_2 w^*(J-1))\big) \\
&\qquad\qquad\qquad +r_{J-1}^T\big(H_1\bar{y}_1(J-2)+\Pi_1 w(J-2)+D_1\xi_{J-2}\big)+s_{J-1}^T\bar{y}_2(J)+\|l_{J-1}\|_1+t_{J-1}\big]_{\inf}(\alpha)\Big\} \\
&= \big(\phi_{J-2}^T M_1+r_{J-1}^T H_1\big)\bar{y}_1(J-2)+\big(\phi_{J-2}^T M_2 E_2^2+s_{J-1}^T\big)\bar{y}_2(J)-\phi_{J-2}^T M_2 E_2\Pi_2 w^*(J-1) \\
&\qquad +\max_{w(J-2)\in[-1,1]^{n_w}}\big[\big(r_{J-1}^T\Pi_1-\phi_{J-2}^T M_2\Pi_2\big)w(J-2)\big]+\rho\big[r_{J-1}^T D_1\xi_{J-2}\big]_{\sup}(\alpha)+(1-\rho)\big[r_{J-1}^T D_1\xi_{J-2}\big]_{\inf}(\alpha)+\|l_{J-1}\|_1+t_{J-1}.
\end{aligned}$$
Denote $l_{J-2}=r_{J-1}^T\Pi_1-\phi_{J-2}^T M_2\Pi_2$. Then, we have
$$w_{i_w}^*(J-2)=\begin{cases}\operatorname{sign}\{l_{i_w,J-2}\}, & \text{if } l_{i_w,J-2}\ne 0,\\ \text{undetermined}, & \text{if } l_{i_w,J-2}=0,\end{cases}\qquad i_w=1,2,\ldots,n_w.$$
The goal is to determine the maximum value of the expression $-\phi_{J-2}^T M_2 E_2\Pi_2 w^*(J-1)$. Since the elements of $w^*(J-1)$ that were not determined in the preceding step need to be ascertained, an update procedure is necessary:
$$w_{i_w}^*(J-1)=\begin{cases}
\operatorname{sign}\{l_{i_w,J-1}\}, & \text{if } l_{i_w,J-1}\ne 0,\\
\operatorname{sign}\{u_{i_w,J-1,1}\}, & \text{if } l_{i_w,J-1}=0 \text{ and } u_{i_w,J-1,1}\ne 0,\\
\text{undetermined}, & \text{if } l_{i_w,J-1}=0 \text{ and } u_{i_w,J-1,1}=0,
\end{cases}\qquad i_w=1,2,\ldots,n_w,$$
where $u_{J-1,1}=-\phi_{J-2}^T M_2 E_2\Pi_2$. The corresponding optimal value is
$$V(\bar{y}_1(J-2),\bar{y}_2(J),J-2)=r_{J-2}^T\bar{y}_1(J-2)+s_{J-2}^T\bar{y}_2(J)+u_{J-1,1}\,w^*(J-1)+\sum_{k=J-2}^{J-1}\|l_k\|_1+\sum_{k=J-2}^{J-1}t_k,$$
where
$$r_{J-2}^T=\phi_{J-2}^T M_1+r_{J-1}^T H_1,\qquad s_{J-2}^T=\phi_{J-2}^T M_2 E_2^2+s_{J-1}^T,\qquad t_{J-2}=\rho\big[r_{J-1}^T D_1\xi_{J-2}\big]_{\sup}(\alpha)+(1-\rho)\big[r_{J-1}^T D_1\xi_{J-2}\big]_{\inf}(\alpha).$$
Proceeding in the same way, the theorem follows by induction.    □
Next, we consider the following Hurwicz-type optimal control problem for a nonlinear uncertain singular non-causal system:
$$\begin{aligned}
V(y(0),y(J),0)=\max_{\substack{w(j)\in[-1,1]\\ 0\le j\le J-1}}\Bigg\{ & \rho\left[\sum_{j=0}^{J-1}\phi_j^T y(j)+\phi_J^T y(J)\right]_{\sup}(\alpha)+(1-\rho)\left[\sum_{j=0}^{J-1}\phi_j^T y(j)+\phi_J^T y(J)\right]_{\inf}(\alpha)\Bigg\} \\
\text{s.t.}\quad & E\,y(j+1)=H\,y(j)+\lambda\,w(j)+c\,w^2(j)+D\,\xi_j,\quad j=0,1,2,\ldots,J-1,
\end{aligned}$$
where $w(j)\in[-1,1]$, $\lambda, c\in\mathbb{R}^m$. We denote
$$F\lambda=\begin{pmatrix}\lambda_1\\ \lambda_2\end{pmatrix},\quad \lambda_1\in\mathbb{R}^{q},\ \lambda_2\in\mathbb{R}^{m-q},\qquad Fc=\begin{pmatrix}c_1\\ c_2\end{pmatrix},\quad c_1\in\mathbb{R}^{q},\ c_2\in\mathbb{R}^{m-q}.$$
Then, under the condition D 2 = 0 , the problem (25) is equivalent to the following formulation:
$$\begin{aligned}
V(\bar{y}_1(0),\bar{y}_2(J),0)=\max_{\substack{w(j)\in[-1,1]\\ 0\le j\le J-1}}\Bigg\{ & \rho\left[\sum_{j=0}^{J-1}\phi_j^T\big(M_1\bar{y}_1(j)+M_2\bar{y}_2(j)\big)+\phi_J^T\big(M_1\bar{y}_1(J)+M_2\bar{y}_2(J)\big)\right]_{\sup}(\alpha) \\
& +(1-\rho)\left[\sum_{j=0}^{J-1}\phi_j^T\big(M_1\bar{y}_1(j)+M_2\bar{y}_2(j)\big)+\phi_J^T\big(M_1\bar{y}_1(J)+M_2\bar{y}_2(J)\big)\right]_{\inf}(\alpha)\Bigg\} \\
\text{s.t.}\quad & \bar{y}_1(j+1)=H_1\bar{y}_1(j)+\lambda_1 w(j)+c_1 w^2(j)+D_1\xi_j, \\
& \bar{y}_2(j)=E_2\bar{y}_2(j+1)-\lambda_2 w(j)-c_2 w^2(j),\quad j=0,1,2,\ldots,J-1.
\end{aligned}$$
Theorem 3.
The optimal controls w * ( j ) for the problem (26) are
$$w^*(j)=\begin{cases}
\operatorname{sign}\{\alpha_j\}, & \text{if } \alpha_j\ne 0 \text{ and } \beta_j=0,\\
1, & \text{if } \alpha_j\ge 0 \text{ and } \beta_j>0, \text{ or } \alpha_j\ge-2\beta_j \text{ and } \beta_j<0,\\
-\dfrac{\alpha_j}{2\beta_j}, & \text{if } 2\beta_j\le\alpha_j\le-2\beta_j \text{ and } \beta_j<0,\\
-1, & \text{if } \alpha_j<0 \text{ and } \beta_j>0, \text{ or } \alpha_j\le 2\beta_j \text{ and } \beta_j<0,
\end{cases}$$
if $(\alpha_j,\beta_j)\ne(0,0)$;
$$w^*(j)=\begin{cases}
\operatorname{sign}\{\alpha_j^1\}, & \text{if } \alpha_j^1\ne 0 \text{ and } \beta_j^1=0,\\
1, & \text{if } \alpha_j^1\ge 0 \text{ and } \beta_j^1>0, \text{ or } \alpha_j^1\ge-2\beta_j^1 \text{ and } \beta_j^1<0,\\
-\dfrac{\alpha_j^1}{2\beta_j^1}, & \text{if } 2\beta_j^1\le\alpha_j^1\le-2\beta_j^1 \text{ and } \beta_j^1<0,\\
-1, & \text{if } \alpha_j^1<0 \text{ and } \beta_j^1>0, \text{ or } \alpha_j^1\le 2\beta_j^1 \text{ and } \beta_j^1<0,
\end{cases}$$
if $(\alpha_j,\beta_j)=(0,0)$ and $(\alpha_j^1,\beta_j^1)\ne(0,0)$;
$$w^*(j)=\begin{cases}
\operatorname{sign}\{\alpha_j^{j-1}\}, & \text{if } \alpha_j^{j-1}\ne 0 \text{ and } \beta_j^{j-1}=0,\\
1, & \text{if } \alpha_j^{j-1}\ge 0 \text{ and } \beta_j^{j-1}>0, \text{ or } \alpha_j^{j-1}\ge-2\beta_j^{j-1} \text{ and } \beta_j^{j-1}<0,\\
-\dfrac{\alpha_j^{j-1}}{2\beta_j^{j-1}}, & \text{if } 2\beta_j^{j-1}\le\alpha_j^{j-1}\le-2\beta_j^{j-1} \text{ and } \beta_j^{j-1}<0,\\
-1, & \text{if } \alpha_j^{j-1}<0 \text{ and } \beta_j^{j-1}>0, \text{ or } \alpha_j^{j-1}\le 2\beta_j^{j-1} \text{ and } \beta_j^{j-1}<0,
\end{cases}$$
if $(\alpha_j,\beta_j)=(0,0)$, $(\alpha_j^1,\beta_j^1)=(0,0)$, $\ldots$, $(\alpha_j^{j-1},\beta_j^{j-1})\ne(0,0)$; and
$$w^*(j)=\begin{cases}
\operatorname{sign}\{\alpha_j^{j}\}, & \text{if } \alpha_j^{j}\ne 0 \text{ and } \beta_j^{j}=0,\\
1, & \text{if } \alpha_j^{j}\ge 0 \text{ and } \beta_j^{j}>0, \text{ or } \alpha_j^{j}\ge-2\beta_j^{j} \text{ and } \beta_j^{j}<0,\\
-\dfrac{\alpha_j^{j}}{2\beta_j^{j}}, & \text{if } 2\beta_j^{j}\le\alpha_j^{j}\le-2\beta_j^{j} \text{ and } \beta_j^{j}<0,\\
-1, & \text{if } \alpha_j^{j}<0 \text{ and } \beta_j^{j}>0, \text{ or } \alpha_j^{j}\le 2\beta_j^{j} \text{ and } \beta_j^{j}<0,\\
\text{undetermined}, & \text{if } \alpha_j^{j}=0 \text{ and } \beta_j^{j}=0,
\end{cases}$$
if $(\alpha_j,\beta_j)=(0,0)$, $(\alpha_j^1,\beta_j^1)=(0,0)$, $\ldots$, $(\alpha_j^{j-1},\beta_j^{j-1})=(0,0)$, where
$$\alpha_j=r_{j+1}^T\lambda_1-\phi_j^T M_2\lambda_2,\qquad \beta_j=r_{j+1}^T c_1-\phi_j^T M_2 c_2,$$
and
$$\alpha_j^{l}=-\phi_{j-l}^T M_2 E_2^{l}\lambda_2,\qquad \beta_j^{l}=-\phi_{j-l}^T M_2 E_2^{l} c_2,\qquad l=1,2,\ldots,j,$$
with $r_j^T=\phi_j^T M_1+r_{j+1}^T H_1$, for $j=0,1,2,\ldots,J-1$, and $r_J^T=\phi_J^T M_1$. The optimal values are
$$V(\bar{y}_1(J),\bar{y}_2(J),J)=r_J^T\bar{y}_1(J)+s_J^T\bar{y}_2(J),$$
$$V(\bar{y}_1(j),\bar{y}_2(J),j)=r_j^T\bar{y}_1(j)+s_j^T\bar{y}_2(J)+\sum_{k=j+1}^{J-1}\sum_{l=1}^{k-j}\big[\alpha_k^{l}\,w^*(k)+\beta_k^{l}\,(w^*(k))^2\big]+\sum_{k=j}^{J-1}\sigma_k+\sum_{k=j}^{J-1}t_k,$$
where
$$s_j^T=\phi_j^T M_2 E_2^{J-j}+s_{j+1}^T,$$
and
$$\sigma_k=\alpha_k w^*(k)+\beta_k (w^*(k))^2,\qquad t_k=\rho\big[r_{k+1}^T D_1\xi_k\big]_{\sup}(\alpha)+(1-\rho)\big[r_{k+1}^T D_1\xi_k\big]_{\inf}(\alpha),$$
for $k=j,j+1,\ldots,J-1$, $j=0,1,2,\ldots,J-1$, with $s_J^T=\phi_J^T M_2$.
Proof. 
According to Theorem 1, it is shown that $V(\bar{y}_1(J),\bar{y}_2(J),J)=r_J^T\bar{y}_1(J)+s_J^T\bar{y}_2(J)$, where $r_J^T=\phi_J^T M_1$, $s_J^T=\phi_J^T M_2$. When $j=J-1$, by virtue of Theorem 1, we have
$$\begin{aligned}
& V(\bar{y}_1(J-1),\bar{y}_2(J),J-1) \\
&= \max_{w(J-1)\in[-1,1]}\Big\{\rho\big[\phi_{J-1}^T\big(M_1\bar{y}_1(J-1)+M_2\bar{y}_2(J-1)\big)+r_J^T\bar{y}_1(J)+s_J^T\bar{y}_2(J)\big]_{\sup}(\alpha) \\
&\qquad\qquad +(1-\rho)\big[\phi_{J-1}^T\big(M_1\bar{y}_1(J-1)+M_2\bar{y}_2(J-1)\big)+r_J^T\bar{y}_1(J)+s_J^T\bar{y}_2(J)\big]_{\inf}(\alpha)\Big\} \\
&= \max_{w(J-1)\in[-1,1]}\Big\{\rho\big[\phi_{J-1}^T\big(M_1\bar{y}_1(J-1)+M_2(E_2\bar{y}_2(J)-\lambda_2 w(J-1)-c_2 w^2(J-1))\big) \\
&\qquad\qquad\qquad +r_J^T\big(H_1\bar{y}_1(J-1)+\lambda_1 w(J-1)+c_1 w^2(J-1)+D_1\xi_{J-1}\big)+s_J^T\bar{y}_2(J)\big]_{\sup}(\alpha) \\
&\qquad\qquad +(1-\rho)\big[\phi_{J-1}^T\big(M_1\bar{y}_1(J-1)+M_2(E_2\bar{y}_2(J)-\lambda_2 w(J-1)-c_2 w^2(J-1))\big) \\
&\qquad\qquad\qquad +r_J^T\big(H_1\bar{y}_1(J-1)+\lambda_1 w(J-1)+c_1 w^2(J-1)+D_1\xi_{J-1}\big)+s_J^T\bar{y}_2(J)\big]_{\inf}(\alpha)\Big\} \\
&= \big(\phi_{J-1}^T M_1+r_J^T H_1\big)\bar{y}_1(J-1)+\big(\phi_{J-1}^T M_2 E_2+s_J^T\big)\bar{y}_2(J) \\
&\qquad +\max_{w(J-1)\in[-1,1]}\big[\big(r_J^T\lambda_1-\phi_{J-1}^T M_2\lambda_2\big)w(J-1)+\big(r_J^T c_1-\phi_{J-1}^T M_2 c_2\big)w^2(J-1)\big] \\
&\qquad +\rho\big[r_J^T D_1\xi_{J-1}\big]_{\sup}(\alpha)+(1-\rho)\big[r_J^T D_1\xi_{J-1}\big]_{\inf}(\alpha) \\
&= \big(\phi_{J-1}^T M_1+r_J^T H_1\big)\bar{y}_1(J-1)+\big(\phi_{J-1}^T M_2 E_2+s_J^T\big)\bar{y}_2(J)+\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big] \\
&\qquad +\rho\big[r_J^T D_1\xi_{J-1}\big]_{\sup}(\alpha)+(1-\rho)\big[r_J^T D_1\xi_{J-1}\big]_{\inf}(\alpha),
\end{aligned}$$
where
$$\alpha_{J-1}=r_J^T\lambda_1-\phi_{J-1}^T M_2\lambda_2,\qquad \beta_{J-1}=r_J^T c_1-\phi_{J-1}^T M_2 c_2.$$
Taking $w^*(J-1)$ as the maximum point, we have the following cases. (i) In the situation where $\alpha_{J-1}=\beta_{J-1}=0$, the relevant expression simplifies to
$$\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]=\alpha_{J-1}w^*(J-1)+\beta_{J-1}(w^*(J-1))^2=0.$$
(ii) If $\alpha_{J-1}\ne 0$ and $\beta_{J-1}=0$, then
$$w^*(J-1)=\arg\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]=\operatorname{sign}\{\alpha_{J-1}\},$$
and
$$\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]=|\alpha_{J-1}|.$$
(iii) If $\alpha_{J-1}\ge 0$ and $\beta_{J-1}>0$, then we have
$$w^*(J-1)=\arg\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]=1,$$
and
$$\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]=\alpha_{J-1}+\beta_{J-1}.$$
(iv) If $\alpha_{J-1}<0$ and $\beta_{J-1}>0$, then we have
$$w^*(J-1)=\arg\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]=-1,$$
and
$$\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]=-\alpha_{J-1}+\beta_{J-1}.$$
(v) If $2\beta_{J-1}\le\alpha_{J-1}\le-2\beta_{J-1}$ and $\beta_{J-1}<0$, then we have
$$w^*(J-1)=\arg\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]=-\frac{\alpha_{J-1}}{2\beta_{J-1}},$$
and
$$\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]=-\frac{\alpha_{J-1}^2}{4\beta_{J-1}}.$$
(vi) If $\alpha_{J-1}\ge-2\beta_{J-1}$ and $\beta_{J-1}<0$, then we have
$$w^*(J-1)=\arg\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]=1,$$
and
$$\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]=\alpha_{J-1}+\beta_{J-1}.$$
(vii) If $\alpha_{J-1}\le 2\beta_{J-1}$ and $\beta_{J-1}<0$, then we have
$$w^*(J-1)=\arg\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]=-1,$$
and
$$\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]=-\alpha_{J-1}+\beta_{J-1}.$$
The optimal control $w^*(J-1)$ can be summarized as follows:
$$w^*(J-1)=\begin{cases}
\operatorname{sign}\{\alpha_{J-1}\}, & \text{if }\alpha_{J-1}\ne 0\text{ and }\beta_{J-1}=0,\\
1, & \text{if }\alpha_{J-1}\ge 0\text{ and }\beta_{J-1}>0,\text{ or }\alpha_{J-1}\ge-2\beta_{J-1}\text{ and }\beta_{J-1}<0,\\
-\dfrac{\alpha_{J-1}}{2\beta_{J-1}}, & \text{if }2\beta_{J-1}\le\alpha_{J-1}\le-2\beta_{J-1}\text{ and }\beta_{J-1}<0,\\
-1, & \text{if }\alpha_{J-1}<0\text{ and }\beta_{J-1}>0,\text{ or }\alpha_{J-1}\le 2\beta_{J-1}\text{ and }\beta_{J-1}<0,\\
\text{undetermined}, & \text{if }\alpha_{J-1}=0\text{ and }\beta_{J-1}=0.
\end{cases}$$
Denote $\sigma_{J-1}=\max_{w(J-1)\in[-1,1]}\big[\alpha_{J-1}w(J-1)+\beta_{J-1}w^2(J-1)\big]$. Then,
$$\sigma_{J-1}=\begin{cases}
|\alpha_{J-1}|, & \text{if }\alpha_{J-1}\ne 0\text{ and }\beta_{J-1}=0,\\
\alpha_{J-1}+\beta_{J-1}, & \text{if }\alpha_{J-1}\ge 0\text{ and }\beta_{J-1}>0,\text{ or }\alpha_{J-1}\ge-2\beta_{J-1}\text{ and }\beta_{J-1}<0,\\
-\dfrac{\alpha_{J-1}^2}{4\beta_{J-1}}, & \text{if }2\beta_{J-1}\le\alpha_{J-1}\le-2\beta_{J-1}\text{ and }\beta_{J-1}<0,\\
-\alpha_{J-1}+\beta_{J-1}, & \text{if }\alpha_{J-1}<0\text{ and }\beta_{J-1}>0,\text{ or }\alpha_{J-1}\le 2\beta_{J-1}\text{ and }\beta_{J-1}<0,\\
\text{undetermined}, & \text{if }\alpha_{J-1}=0\text{ and }\beta_{J-1}=0.
\end{cases}$$
Moreover,
$$V(\bar{y}_1(J-1),\bar{y}_2(J),J-1)=r_{J-1}^T\bar{y}_1(J-1)+s_{J-1}^T\bar{y}_2(J)+\sigma_{J-1}+t_{J-1},$$
where
$$r_{J-1}^T=\phi_{J-1}^T M_1+r_J^T H_1,\qquad s_{J-1}^T=\phi_{J-1}^T M_2 E_2+s_J^T,\qquad t_{J-1}=\rho\big[r_J^T D_1\xi_{J-1}\big]_{\sup}(\alpha)+(1-\rho)\big[r_J^T D_1\xi_{J-1}\big]_{\inf}(\alpha).$$
For $j=J-2$, according to Theorem 1, we obtain that
$$\begin{aligned}
& V(\bar{y}_1(J-2),\bar{y}_2(J),J-2) \\
&= \max_{w(J-2)\in[-1,1]}\Big\{\rho\big[\phi_{J-2}^T\big(M_1\bar{y}_1(J-2)+M_2\bar{y}_2(J-2)\big)+r_{J-1}^T\bar{y}_1(J-1)+s_{J-1}^T\bar{y}_2(J)+\sigma_{J-1}+t_{J-1}\big]_{\sup}(\alpha) \\
&\qquad\qquad +(1-\rho)\big[\phi_{J-2}^T\big(M_1\bar{y}_1(J-2)+M_2\bar{y}_2(J-2)\big)+r_{J-1}^T\bar{y}_1(J-1)+s_{J-1}^T\bar{y}_2(J)+\sigma_{J-1}+t_{J-1}\big]_{\inf}(\alpha)\Big\} \\
&= \max_{w(J-2)\in[-1,1]}\Big\{\rho\big[\phi_{J-2}^T\big(M_1\bar{y}_1(J-2)+M_2\big(E_2^2\bar{y}_2(J)-\lambda_2 w(J-2)-c_2 w^2(J-2)-E_2(\lambda_2 w^*(J-1)+c_2(w^*(J-1))^2)\big)\big) \\
&\qquad\qquad\qquad +r_{J-1}^T\big(H_1\bar{y}_1(J-2)+\lambda_1 w(J-2)+c_1 w^2(J-2)+D_1\xi_{J-2}\big)+s_{J-1}^T\bar{y}_2(J)+\sigma_{J-1}+t_{J-1}\big]_{\sup}(\alpha) \\
&\qquad\qquad +(1-\rho)\big[\phi_{J-2}^T\big(M_1\bar{y}_1(J-2)+M_2\big(E_2^2\bar{y}_2(J)-\lambda_2 w(J-2)-c_2 w^2(J-2)-E_2(\lambda_2 w^*(J-1)+c_2(w^*(J-1))^2)\big)\big) \\
&\qquad\qquad\qquad +r_{J-1}^T\big(H_1\bar{y}_1(J-2)+\lambda_1 w(J-2)+c_1 w^2(J-2)+D_1\xi_{J-2}\big)+s_{J-1}^T\bar{y}_2(J)+\sigma_{J-1}+t_{J-1}\big]_{\inf}(\alpha)\Big\} \\
&= \big(\phi_{J-2}^T M_1+r_{J-1}^T H_1\big)\bar{y}_1(J-2)+\big(\phi_{J-2}^T M_2 E_2^2+s_{J-1}^T\big)\bar{y}_2(J)-\phi_{J-2}^T M_2 E_2\big(\lambda_2 w^*(J-1)+c_2(w^*(J-1))^2\big) \\
&\qquad +\max_{w(J-2)\in[-1,1]}\big[\alpha_{J-2}w(J-2)+\beta_{J-2}w^2(J-2)\big]+\rho\big[r_{J-1}^T D_1\xi_{J-2}\big]_{\sup}(\alpha)+(1-\rho)\big[r_{J-1}^T D_1\xi_{J-2}\big]_{\inf}(\alpha)+\sigma_{J-1}+t_{J-1},
\end{aligned}$$
where
$$\alpha_{J-2}=r_{J-1}^T\lambda_1-\phi_{J-2}^T M_2\lambda_2,\qquad \beta_{J-2}=r_{J-1}^T c_1-\phi_{J-2}^T M_2 c_2.$$
The proof procedure is similar to the preceding one, and we thereby derive
$$w^*(J-2)=\begin{cases}
\operatorname{sign}\{\alpha_{J-2}\}, & \text{if }\alpha_{J-2}\ne 0\text{ and }\beta_{J-2}=0,\\
1, & \text{if }\alpha_{J-2}\ge 0\text{ and }\beta_{J-2}>0,\text{ or }\alpha_{J-2}\ge-2\beta_{J-2}\text{ and }\beta_{J-2}<0,\\
-\dfrac{\alpha_{J-2}}{2\beta_{J-2}}, & \text{if }2\beta_{J-2}\le\alpha_{J-2}\le-2\beta_{J-2}\text{ and }\beta_{J-2}<0,\\
-1, & \text{if }\alpha_{J-2}<0\text{ and }\beta_{J-2}>0,\text{ or }\alpha_{J-2}\le 2\beta_{J-2}\text{ and }\beta_{J-2}<0,\\
\text{undetermined}, & \text{if }\alpha_{J-2}=0\text{ and }\beta_{J-2}=0,
\end{cases}$$
and $\sigma_{J-2}=\alpha_{J-2}w^*(J-2)+\beta_{J-2}(w^*(J-2))^2$. In order to determine the maximum of the expression $\alpha_{J-1}^1 w^*(J-1)+\beta_{J-1}^1(w^*(J-1))^2$, where $\alpha_{J-1}^1=-\phi_{J-2}^T M_2 E_2\lambda_2$ and $\beta_{J-1}^1=-\phi_{J-2}^T M_2 E_2 c_2$, it is necessary to update the value of $w^*(J-1)$ that was left undetermined in the step corresponding to $J-1$:
$$w^*(J-1)=\begin{cases}
\operatorname{sign}\{\alpha_{J-1}\}, & \text{if }\alpha_{J-1}\ne 0\text{ and }\beta_{J-1}=0,\\
1, & \text{if }\alpha_{J-1}\ge 0\text{ and }\beta_{J-1}>0,\text{ or }\alpha_{J-1}\ge-2\beta_{J-1}\text{ and }\beta_{J-1}<0,\\
-\dfrac{\alpha_{J-1}}{2\beta_{J-1}}, & \text{if }2\beta_{J-1}\le\alpha_{J-1}\le-2\beta_{J-1}\text{ and }\beta_{J-1}<0,\\
-1, & \text{if }\alpha_{J-1}<0\text{ and }\beta_{J-1}>0,\text{ or }\alpha_{J-1}\le 2\beta_{J-1}\text{ and }\beta_{J-1}<0,
\end{cases}$$
if $(\alpha_{J-1},\beta_{J-1})\ne(0,0)$, and
$$w^*(J-1)=\begin{cases}
\operatorname{sign}\{\alpha_{J-1}^1\}, & \text{if }\alpha_{J-1}^1\ne 0\text{ and }\beta_{J-1}^1=0,\\
1, & \text{if }\alpha_{J-1}^1\ge 0\text{ and }\beta_{J-1}^1>0,\text{ or }\alpha_{J-1}^1\ge-2\beta_{J-1}^1\text{ and }\beta_{J-1}^1<0,\\
-\dfrac{\alpha_{J-1}^1}{2\beta_{J-1}^1}, & \text{if }2\beta_{J-1}^1\le\alpha_{J-1}^1\le-2\beta_{J-1}^1\text{ and }\beta_{J-1}^1<0,\\
-1, & \text{if }\alpha_{J-1}^1<0\text{ and }\beta_{J-1}^1>0,\text{ or }\alpha_{J-1}^1\le 2\beta_{J-1}^1\text{ and }\beta_{J-1}^1<0,\\
\text{undetermined}, & \text{if }\alpha_{J-1}^1=0\text{ and }\beta_{J-1}^1=0,
\end{cases}$$
if $(\alpha_{J-1},\beta_{J-1})=(0,0)$. Moreover,
$$V(\bar{y}_1(J-2),\bar{y}_2(J),J-2)=r_{J-2}^T\bar{y}_1(J-2)+s_{J-2}^T\bar{y}_2(J)+\alpha_{J-1}^1 w^*(J-1)+\beta_{J-1}^1(w^*(J-1))^2+\sum_{k=J-2}^{J-1}\sigma_k+\sum_{k=J-2}^{J-1}t_k,$$
where
$$r_{J-2}^T=\phi_{J-2}^T M_1+r_{J-1}^T H_1,\qquad s_{J-2}^T=\phi_{J-2}^T M_2 E_2^2+s_{J-1}^T,\qquad t_{J-2}=\rho\big[r_{J-1}^T D_1\xi_{J-2}\big]_{\sup}(\alpha)+(1-\rho)\big[r_{J-1}^T D_1\xi_{J-2}\big]_{\inf}(\alpha).$$
Proceeding in the same way, the theorem follows by induction.    □
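The stagewise subproblem in this proof, maximizing $\alpha w+\beta w^2$ over $w\in[-1,1]$, admits the exact case analysis above; the following sketch (ours, with a hypothetical helper name) implements it and cross-checks the result against a dense grid search.

    import numpy as np

    def max_quadratic_on_interval(alpha, beta):
        # Maximize alpha*w + beta*w**2 over w in [-1, 1]; mirrors cases (i)-(vii)
        # in the proof of Theorem 3 (the degenerate case alpha = beta = 0 gives value 0).
        candidates = [-1.0, 1.0]
        if beta < 0 and 2 * beta <= alpha <= -2 * beta:
            candidates.append(-alpha / (2 * beta))          # interior stationary point
        vals = [alpha * w + beta * w ** 2 for w in candidates]
        i = int(np.argmax(vals))
        return candidates[i], vals[i]

    for alpha, beta in [(1.5, 0.0), (1.0, 2.0), (-1.0, 2.0), (0.5, -1.0), (3.0, -1.0)]:
        w_star, v = max_quadratic_on_interval(alpha, beta)
        grid = np.linspace(-1.0, 1.0, 100001)
        assert abs(v - np.max(alpha * grid + beta * grid ** 2)) < 1e-6
        print(alpha, beta, w_star, v)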
The major steps of the unified framework for addressing the Hurwicz-type optimal control problem have been systematically discussed. To provide a clearer visual representation of the proposed methodology, as presented in the preceding sections, a schematic diagram is included in Figure 1.
In Theorems 2 and 3, we investigated two special cases of optimal control problems. For problems (17) and (25), analytical expressions for both the optimal controls and the corresponding optimal values at each stage were derived. By exploiting the recursive relationship between the j-th and ( j + 1 ) -th stages, we propose the following Algorithms 1 and 2 to compute the optimal solutions for problems (17) and (25), respectively.
Algorithm 1 Find the optimal results of problem (17)
Step  1.
For a given uncertain singular system corresponding to problem (17), check the conditions (regularity of ( E , H ) , and D 2 = 0 ). If the conditions do not hold, then EXIT.
Step  2.
Transform the optimal control problem (17) into an equivalent problem (18).
Step  3.
Calculate $l_j$, $u_{j,\tilde{l}_w}$, $r_j$, $s_j$, $t_j$, $j=0,1,2,\ldots,J-1$, by (20), (21), and (24), with $r_J^T=\phi_J^T M_1$, $s_J^T=\phi_J^T M_2$, in reverse order.
Step  4.
Calculate optimal controls w ( j ) , j = 0 , 1 , 2 , , J 1 , using (19).
Step  5.
Assign values to $\bar{y}_1(0)$ and $\bar{y}_2(J)$, and obtain all states through the following state equations:
$$\bar{y}_1(j+1)=H_1\bar{y}_1(j)+\Pi_1 w(j)+D_1\bar{\xi}_j,\qquad \bar{y}_2(j)=E_2\bar{y}_2(j+1)-\Pi_2 w(j),\quad j=0,1,2,\ldots,J-1,$$
where $\bar{\xi}_j$ denotes the realization of the uncertain vector $\xi_j$, for $j=0,1,2,\ldots,J-1$.
Step  6.
Calculate the optimal values by (22), (23).
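Under the assumptions of Theorem 2, and ignoring the degenerate tie-breaking branches in (19) where some entry $l_{i_w j}=0$, Steps 3 and 4 of Algorithm 1 reduce to a short backward pass. The sketch below (ours; the function names and the random test instance are assumptions) also uses Lemma 1 to evaluate the optimistic/pessimistic values of $r_{k+1}^T D_1\xi_k$ when the components of $\xi_k$ are independent and $\mathcal{L}(a,b)$-distributed.

    import numpy as np

    def opt_pess_linear_combo(c, a, b, alpha):
        # (c^T xi)_sup(alpha) and (c^T xi)_inf(alpha) for independent xi_i ~ L(a, b):
        # by Lemma 1, nonnegative weights keep sup/inf and negative weights swap them.
        xi_sup = alpha * a + (1 - alpha) * b
        xi_inf = (1 - alpha) * a + alpha * b
        sup_val = float(np.sum(np.where(c >= 0, c * xi_sup, c * xi_inf)))
        inf_val = float(np.sum(np.where(c >= 0, c * xi_inf, c * xi_sup)))
        return sup_val, inf_val

    def backward_pass(phi, H1, Pi1, Pi2, D1, M1, M2, rho, alpha, a, b):
        # phi: list of weight vectors phi_0, ..., phi_J (length J + 1).
        # Returns r_j, l_j, bang-bang controls w*(j) = sign(l_j), and t_j.
        J = len(phi) - 1
        r = [None] * (J + 1)
        r[J] = M1.T @ phi[J]
        l_list, w_star, t_list = [None] * J, [None] * J, [None] * J
        for j in range(J - 1, -1, -1):
            l = Pi1.T @ r[j + 1] - Pi2.T @ (M2.T @ phi[j])   # l_j (assumed nonzero entries)
            w_star[j] = np.sign(l)
            sup_val, inf_val = opt_pess_linear_combo(D1.T @ r[j + 1], a, b, alpha)
            t_list[j] = rho * sup_val + (1 - rho) * inf_val
            l_list[j] = l
            r[j] = M1.T @ phi[j] + H1.T @ r[j + 1]
        return r, l_list, w_star, t_list

    # Assumed small instance: m = 3, q = 2, n_w = 2, J = 3.
    rng = np.random.default_rng(0)
    phi = [rng.standard_normal(3) for _ in range(4)]
    H1 = np.eye(2); Pi1 = rng.standard_normal((2, 2)); Pi2 = rng.standard_normal((1, 2))
    D1 = rng.standard_normal((2, 2)); M1 = rng.standard_normal((3, 2)); M2 = rng.standard_normal((3, 1))
    print(backward_pass(phi, H1, Pi1, Pi2, D1, M1, M2, rho=0.6, alpha=0.7, a=-0.5, b=0.5)[2])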
In the context of our system, the condition D 2 = 0 reflects a physically meaningful simplification with specific implications. For example, in an electrical circuit, if D 2 = 0 corresponds to the impedance of a particular branch, then this condition effectively models a short circuit in that branch. Such a structural change alters the current and voltage distribution throughout the circuit, consequently affecting the system’s overall dynamic behavior. Similarly, in a mechanical system, if D 2 = 0 represents the damping coefficient associated with a motion-restraining component, then this condition implies the absence of damping from that component. As a result, the system may exhibit increased oscillation amplitudes, prolonged vibrations, or delayed settling times, thereby altering its response to external disturbances.
Algorithm 2 Find the optimal results of problem (25)
Step  1.
For a given uncertain singular system corresponding to problem (25), check the conditions (regularity of ( E , H ) , and D 2 = 0 ). If the conditions do not hold, then EXIT.
Step  2.
Transform the optimal control problem (25) into an equivalent problem (26).
Step  3.
Calculate $\alpha_j$, $\beta_j$, $\alpha_j^{l}$, $\beta_j^{l}$, $r_j$, $s_j$, $\sigma_j$, $t_j$, $j=0,1,2,\ldots,J-1$, by (31), (32), and (35), with $r_J^T=\phi_J^T M_1$, $s_J^T=\phi_J^T M_2$, in reverse order.
Step  4.
Calculate optimal controls w ( j ) , j = 0 , 1 , 2 , , J 1 , using (27), (28), (29), (30).
Step  5.
Assign values to $\bar{y}_1(0)$ and $\bar{y}_2(J)$, and obtain all states through the following state equations:
$$\bar{y}_1(j+1)=H_1\bar{y}_1(j)+\lambda_1 w(j)+c_1 w^2(j)+D_1\bar{\xi}_j,\qquad \bar{y}_2(j)=E_2\bar{y}_2(j+1)-\lambda_2 w(j)-c_2 w^2(j),\quad j=0,1,2,\ldots,J-1,$$
where $\bar{\xi}_j$ denotes the realization of the uncertain vector $\xi_j$, for $j=0,1,2,\ldots,J-1$.
Step  6.
Calculate the optimal values by (33), (34).

6. Numerical Example

Extensive research has been conducted in the field of power system control by numerous scholars [4,28,29]. To demonstrate the practical relevance and effectiveness of our theoretical framework, we present a physically motivated numerical example inspired by power system control:
$$\begin{aligned}
V(y(0),y(7),0)=\max_{\substack{w(j)\in[-1,1]^{3}\\ 0\le j\le 6}}\Bigg\{ & \rho\left[\sum_{j=0}^{6}\phi_j^T y(j)+\phi_7^T y(7)\right]_{\sup}(\alpha)+(1-\rho)\left[\sum_{j=0}^{6}\phi_j^T y(j)+\phi_7^T y(7)\right]_{\inf}(\alpha)\Bigg\} \\
\text{s.t.}\quad & E\,y(j+1)=H\,y(j)+\Pi\,w(j)+D\,\xi_j,\quad j=0,1,2,\ldots,6,
\end{aligned}$$
where
E = 1 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 , H = 1 1 1 3 0 1 0 1 0 0 1 0 0 0 0 1 , Π = 1 1 2 2 3 1 1 2 3 1 2 0 , D = 1 2 1 0 0 2 2 0 0 2 2 0 0 0 0 0 .
The matrix E captures the singular dynamics of the system, which can represent, for example, instantaneous voltage drops caused by faults in power networks. The uncertainty vectors $\xi_j=(\xi_{j1},\xi_{j2},\xi_{j3},\xi_{j4})^T$ model uncertain external inputs such as fluctuating loads or component tolerances. Each $\xi_{ji}$ ($i=1,2,3,4$, $j=0,1,\ldots,6$) is an independent linear uncertain variable following the uncertainty distribution $\mathcal{L}(-\tfrac{1}{2},\tfrac{1}{2})$. The state weighting vectors are
ϕ 0 T ϕ 1 T ϕ 2 T ϕ 3 T ϕ 4 T ϕ 5 T ϕ 6 T = 4 6 3 2 3 5 4 9 2 6 3 9 3 5 5 6 4 4 7 5 5 2 5 7 7 4 6 3 ,
and ϕ 7 T = ( 1 , 1 , 1 2 , 3 2 ) .
First, we have
$$\det(xE-H)=\det\begin{pmatrix}x-1 & -1 & -1 & -3\\ 0 & -1 & x & -1\\ 0 & 0 & x-1 & x\\ 0 & 0 & 0 & -1\end{pmatrix}=x^2-2x+1\not\equiv 0,$$
and $\deg(\det(xE-H))=2<\operatorname{rank}(E)=3$; i.e., the system under consideration belongs to the category of uncertain singular non-causal systems. There are nonsingular matrices
F = 1 1 0 2 0 0 1 2 1 2 0 1 1 2 0 0 0 1 , M = 1 2 0 1 0 2 1 2 0 2 0 1 0 0 0 1 ,
such that
F E M = 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 , F H M = 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 , F Π = 1 8 1 0 0 1.5 5 3 4 1 2 0 , F D = 1 0 1 0 0 1 1 0 0 0 0 0 0 0 0 0 .
We obtain
E 2 = 0 1 0 0 , H 1 = 1 0 0 1 , Π 1 = 1 8 1 0 0 1.5 , Π 2 = 5 3 4 1 2 0 , D 1 = 1 0 1 0 0 1 1 0 , D 2 = 0 0 0 0 0 0 0 0 ,
and
M 1 = 1 2 0 2 0 2 0 0 , M 2 = 0 1 1 2 0 1 0 1 .
Under the condition D 2 = 0 , the problem (37) corresponds to
$$\begin{aligned}
\bar{V}(\bar{y}_1(0),\bar{y}_2(7),0)=\max_{\substack{w(j)\in[-1,1]^{3}\\ 0\le j\le 6}}\Bigg\{ & \rho\left[\sum_{j=0}^{6}\phi_j^T\big(M_1\bar{y}_1(j)+M_2\bar{y}_2(j)\big)+\phi_7^T\big(M_1\bar{y}_1(7)+M_2\bar{y}_2(7)\big)\right]_{\sup}(\alpha) \\
& +(1-\rho)\left[\sum_{j=0}^{6}\phi_j^T\big(M_1\bar{y}_1(j)+M_2\bar{y}_2(j)\big)+\phi_7^T\big(M_1\bar{y}_1(7)+M_2\bar{y}_2(7)\big)\right]_{\inf}(\alpha)\Bigg\} \\
\text{s.t.}\quad & \bar{y}_1(j+1)=H_1\bar{y}_1(j)+\Pi_1 w(j)+D_1\xi_j, \\
& \bar{y}_2(j)=E_2\bar{y}_2(j+1)-\Pi_2 w(j),\quad j=0,1,2,\ldots,6.
\end{aligned}$$
The objective function integrates energy loss and terminal cost, reflecting voltage stability at the final stage ($J=7$). The Hurwicz criterion plays a central role by allowing operators to balance risk and reward, which is crucial for effective power grid management. By setting the weight parameter $\rho=0.6$ and the confidence level $\alpha=0.7$, the framework emphasizes maximizing power supply under moderate risk, thereby enhancing system resilience without incurring excessive uncertainty exposure. Given the initial state $\bar{y}_1(0)=(2,1)^T$ and terminal state $\bar{y}_2(7)=(1,0)^T$, the optimal solutions to problem (38) are obtained by applying Algorithm 1. The resulting values are summarized in Table 1.
Given that the nilpotent index of the matrix $E_2$ is 2, we can deduce that $u_{j,j}=-\sum_{k=1}^{j}\phi_{j-k}^T M_2 E_2^{k}\Pi_2=u_{j,1}$, as is evident from column 3 of Table 1. Additionally, the trajectories corresponding to the optimal controls $w^*(j)=(w_1^*(j),w_2^*(j),w_3^*(j))^T$ are illustrated in Figure 2, where $w_1^*(j)$, $w_2^*(j)$, and $w_3^*(j)$ represent reactive power regulation, active power regulation, and energy storage output, respectively. The numerical values are listed in column 4 of Table 1. During stages $j=0$ to 2, the control strategy operates as follows: when $w_1=1$, reactive power is rapidly injected to mitigate voltage sags occurring during the closing process. Meanwhile, active power is adjusted sequentially with $w_2=1,1,1$ to respond effectively to sudden load variations. Simultaneously, when $w_3=1$, the energy storage system discharges to compensate for the initial energy deficit in the system. During stages $j=3$ to 6, the control logic transitions to a new phase: based on voltage feedback, $w_1$ switches through the sequence $1,1,1,1$, reversing the direction of reactive power injection to prevent overvoltage. Concurrently, active power is consistently supplied with $w_2=1,1,1,1$ to support generator acceleration. At the same time, with $w_3=1,1,1,1$, the energy storage system charges to absorb surplus energy and suppress over-frequency events, thus ensuring stable system operation.
The trajectories of the states y ¯ 1 ( j ) and y ¯ 2 ( j ) are illustrated in Figure 3, with the corresponding numerical data provided in columns 5 and 6 of Table 1, respectively. It is worth noting that the state y ¯ 2 , 1 ( j ) experiences substantial variations. This is because it is governed by the optimal control, as described by the relation y ¯ 2 , 1 ( j ) = 8 w 1 * ( j ) + 2 w 2 * ( j ) + 5 w 3 * ( j ) .
According to the simulation results, the optimal value V ( y ¯ 1 ( 0 ) , y ¯ 2 ( 7 ) , 0 ) exhibits a clear linear increasing trend with respect to ρ , as illustrated in Table 2 and Figure 4.
As shown in Table 2, when ρ = 0 the strategy considers only the pessimistic scenario (i.e., the worst case), resulting in the minimum value of the objective function, reflecting a conservative control approach. In contrast, when ρ = 1 , the strategy focuses solely on the optimistic scenario, leading to the maximum objective value, which corresponds to an aggressive control policy. For intermediate values of ρ (e.g., ρ = 0.6 ) the strategy achieves a trade-off between risk and performance. The objective value increases linearly with the optimism weight, which is consistent with the convex combination property of the Hurwicz criterion. Analogous to the methodology applied in the preceding sections, Algorithm 2 can be systematically employed to solve problem (25). This algorithmic approach leverages the problem’s structural properties to compute the optimal controls across varying optimism-pessimism spectra, ensuring both computational tractability and theoretical rigor.

7. Conclusions

This study aims to advance the theoretical frontier of uncertain optimal control by focusing on problems arising in uncertain singular non-causal systems within the framework of the Hurwicz criterion. As a distinctive class of systems, uncertain singular non-causal systems can be transformed into equivalent subsystems through carefully designed mathematical transformations. Leveraging the rigor of equivalent transformation principles, the original complex problem is reformulated into a more tractable structure comprising uncertain forward and backward subsystems. Subsequently, this study integrates dynamic programming and uncertainty theory to meticulously construct a recurrence equation specifically designed to tackle such problems. Through the recurrence equation, the problems can be systematically and step-by-step decomposed and solved. As special cases in this study, we pay particular attention to the optimal control problems of linear and nonlinear uncertain singular non-causal systems. For these complex scenarios, based on the established recurrence equation, we have successfully derived the precise analytical expressions for the optimal control of such problems, and also determined the related optimal values. To validate and illustrate the practical applicability of the proposed framework, a carefully designed numerical example is provided. This example offers a clear and comprehensive demonstration of how the proposed approach can be employed to solve uncertain singular non-causal control problems under the Hurwicz criterion.
Several promising directions remain for future research: extending the framework to singular systems with strong nonlinear dynamics and complex uncertainty interactions; developing data-driven strategies for adjusting $\rho$ online to reflect risk preferences in real time; conducting field validations in areas such as real-time grid frequency control and high-speed signal reconstruction to deepen the integration of theory and practice; and exploring new preference-aggregation mechanisms for complex decision-making scenarios. Beyond providing theoretical tools for the control of uncertain singular non-causal systems, this work offers a methodological reference for solving similar uncertain decision problems in interdisciplinary fields.

Author Contributions

Methodology, X.C.; Formal analysis, Y.C.; Writing—original draft, Y.C.; Writing—review & editing, Y.C. and X.C.; Visualization, X.C.; Supervision, X.C.; Funding acquisition, Y.C. and X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 23KJB110013) and the Start-up Fund for New Talented Researchers of Nanjing Vocational University of Industry Technology (Grant No. YK23-12-01).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Schematic of the developed methodology for the Hurwicz-type optimal control problem of uncertain singular non-causal systems.
Figure 2. Trajectory of the optimal control $w^*(j) = (w_1^*(j), w_2^*(j), w_3^*(j))^{\mathrm{T}}$.
Figure 3. Trajectories of the states $\bar{y}_1(j) = (\bar{y}_{1,1}(j), \bar{y}_{1,2}(j))^{\mathrm{T}}$ and $\bar{y}_2(j) = (\bar{y}_{2,1}(j), \bar{y}_{2,2}(j))^{\mathrm{T}}$.
Figure 4. Trajectory of the optimal value $V(\bar{y}_1(0), \bar{y}_2(7), 0)$.
Table 1. The optimal results of problem (38).

| Stage | $(l_j)^{\mathrm{T}}$ | $(u_{j,1})^{\mathrm{T}}$ | $(w^*(j))^{\mathrm{T}}$ | $(\bar{y}_1(j))^{\mathrm{T}}$ | $(\bar{y}_2(j))^{\mathrm{T}}$ | $V(\bar{y}_1(j), \bar{y}_2(7), j)$ |
|---|---|---|---|---|---|---|
| 0 | (10, 40, 33) | | (1, 1, 1) | (2, 1) | (6, 1) | 456.8000 |
| 1 | (27, 31, 25) | (6, 12, 0) | (1, 1, 1) | (4.2183, 2.2909) | (12, 3) | 409.5418 |
| 2 | (29, 2, 30) | (5, 10, 0) | (1, 1, 1) | (6.6022, 4.3935) | (6, 1) | 330.7113 |
| 3 | (21, 31, 21) | (6, 12, 0) | (1, 1, 1) | (0.7821, 5.4972) | (12, 3) | 255.1172 |
| 4 | (15, 28, 17) | (5, 10, 0) | (1, 1, 1) | (8.7544, 4.2826) | (6, 1) | 211.7428 |
| 5 | (3, 32, 6) | (4, 8, 0) | (1, 1, 1) | (14.4185, 6.2345) | (4, 3) | 190.7907 |
| 6 | (5, 48, 2) | (2, 4, 0) | (1, 1, 1) | (21.9123, 8.4115) | (6, 1) | 106.4245 |
| 7 | | | | (16.2940, 8.1806) | (1, 0) | 30.3860 |
Table 2. Optimal value $V(\bar{y}_1(0), \bar{y}_2(7), 0)$ corresponding to various $\rho$ levels.

| $\rho$ | 0 | 0.2 | 0.4 | 0.6 | 0.8 | 1 |
|---|---|---|---|---|---|---|
| $V(\bar{y}_1(0), \bar{y}_2(7), 0)$ | 452 | 453.6000 | 455.2000 | 456.8000 | 458.4000 | 460.0000 |
