Article

Partial Stability of Linear Hybrid Discrete–Continuous Itô Systems with Aftereffect

by
Ramazan I. Kadiev
1,† and
Arcady Ponosov
2,*,†
1
Dagestan Research Center of the Russian Academy of Sciences & Department of Mathematics, Dagestan State University, 367005 Makhachkala, Russia
2
Department of Mathematics, Norwegian University of Life Sciences, 1432 Aas, Norway
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2025, 13(3), 397; https://doi.org/10.3390/math13030397
Submission received: 14 December 2024 / Revised: 17 January 2025 / Accepted: 23 January 2025 / Published: 25 January 2025
(This article belongs to the Special Issue Advances in Control Systems and Automatic Control)

Abstract:
This paper offers several new sufficient conditions for the partial moment stability of linear hybrid stochastic systems with delay. Despite its potential applications in economics, biology and physics, this problem seems not to have been addressed before. A number of general theorems on the partial moment stability of stochastic hybrid systems are proven herein by applying a specially designed regularization method, based on the connections between Lyapunov stability and input-to-state stability, which are well known in control theory. Based on the results obtained for stochastic hybrid systems, some new conditions for the partial stability of deterministic hybrid systems are derived as well. All stability conditions are conveniently formulated in terms of the coefficients of the systems. A numerical example illustrates the feasibility of the suggested framework.
MSC:
34K25; 34K34; 34K50; 39A50; 60H99

1. Introduction

The growing role of hybrid systems is widely accepted in many modern applications of mathematical modeling. In robotic systems, where the continuous-time component describes a robot’s movement while the robot’s controller is implemented as a discrete-time component, the usage of hybrid systems provides one of the most general analytic tools. The walking motion of a biped robot is an example of such a dual-phase dynamic [1]. In the models of aircraft collision avoidance, it is crucial to combine continuous-state evolution and discrete mode switching [2]. Interactions within swarm robotic systems, where robots have to perform cooperatively, are controlled by hybrid automata [3]. Hybrid robots are also used to automate agricultural operations [4]. In these models, it is important to take into account properties of uncertain environments with no predefined structure. Such situations can appear in many other applications, which require the analysis of stochastic effects within hybrid dynamics [5].
Modern industrial processes often combine continuous-time (chemical reactions, voltage, temperature) and discrete-time (programmable controllers) components. The review paper [6] lists recent trends within the chemical processing industry related to the handling of large volumes of data. In complex energy systems, it is standard to steer current flows using smart grid controllers sending commands at discrete-time intervals [7]. Hybrid electronics integrate multiple components within a single package [8]. Healthcare, aerospace, consumer electronics and smart packaging are among the straightforward applications of this technology. Next-generation health monitoring requires the integration of hybrid electronics as well. In pacemakers, the continuous-time (heart rate, blood pressure) and discrete-time (monitoring and control) components are needed to maintain desired physiological states [9]. A transition from conventional endoscopic surgery to robotic surgery requires the implementation of new robotic platforms for clinical use [10].
Finally, the application of hybrid models, giving access to two fundamentally different communication modes, considerably speeds up the performance of communication networks [11].
One of the most important structural features of hybrid dynamical systems is their stability. The theoretical foundations of the stability analysis of deterministic hybrid systems are presented in the monograph [12], where the method of multiple Lyapunov functions is developed. A large number of previously published results can be found in the review paper [13], while information about more recent trends, including the theory of almost Lyapunov functions, is contained in [14]. The stability analysis of stochastic hybrid systems, with relevant publications before 2014, can be found in the survey [15]. A further development of this topic in the case of systems with random delays was suggested in the recent paper [16].
The stability and stabilization of stochastic hybrid networks, a particular yet important subclass of stochastic hybrid systems, has been a popular topic over many years. Based on the Itô-like estimates, the authors of the seminal paper [17] showed that in the case of networks perturbed by white noise, only observable variables are necessary to stabilize the whole network. The stability analysis of networks with Lévy noise was continued in the papers [18] (networks can be stabilized under full observation) and [19] (networks can be stabilized under partial observation). In both papers, delay effects were incorporated into the dynamics.
The partial stability of continuous deterministic dynamical systems was introduced and extensively studied in the monograph [20] in connection with its numerous applications in control theory. Applications to models in physics can be found in [21]. Partial stability of continuous stochastic systems with delays was considered in the recent publications [22] (equations with a general decay rate) and [23] (stochastic neutral pantograph equation). In [24], partial stability in the probability of discrete-time systems with delay was considered. For other results, see the references in the last three papers mentioned.
On the other hand, many practical examples indicate that the property of partial stability may be important for systems including both continuous and discrete dynamics. In robotics, the stability of the end-effector position has to be guaranteed, while internal actuator dynamics may oscillate. In power systems, critical voltages must be stable, while non-critical states can fluctuate. In pacemakers, the stability of heart rates is crucial, while other physiological states may vary. In the available mathematical literature, the papers [25] (stochastic case) and [26] (deterministic case, no delays) were, most probably, the only attempts to address the partial stability of hybrid systems.
The main findings of the present article concern the analysis of the partial stability of solutions of hybrid discrete–continuous Itô-type differential systems with aftereffect, a topic motivated by the above examples. To the best of our knowledge, this problem has not been addressed before. Moreover, as no Lyapunov-like analysis is known for this class of dynamical systems, we apply another, more straightforward approach, which is known in the literature as the “regularization method” or “the method of auxiliary equations”. The method proved to be efficient in the stability analysis of deterministic hereditary equations (see the monograph [27] and the references therein). The validation of this method in the case of stochastic differential equations with aftereffect can be found in the authors’ publications [28] (moment stability of discrete-time stochastic delay equations) and [29] (moment stability of hybrid stochastic delay equations). These papers also contain other references related to the regularization technique in the stability analysis of linear and nonlinear stochastic equations.
The remainder of this paper is organized as follows. The notation used throughout, the formulation of the hybrid system to be studied and the necessary definitions are all presented in Section 2. Section 3 starts with a brief description of the regularization method for a simpler equation and contains Theorems 1 and 2, which give a justification of the method in the case of linear stochastic hybrid systems. The main stability results of the paper are Theorems 3–5 of Section 4, where explicit conditions for partial moment stability and partial exponential stability are formulated in terms of the coefficients of the system. These theorems are obtained within the framework of the regularization method described in Section 3. In the proofs, we restrict ourselves to the most technical case, where the number of stable continuous- and discrete-time variables is nonzero and strictly less than the total number of these variables. Partial Lyapunov stability and partial exponential Lyapunov stability of linear deterministic hybrid systems are studied in Section 5, the central results being Propositions 1–3. These stability conditions are new as well. In Section 6, a numerical example validating some theoretical findings of Section 5 is offered. A discussion, a short summary of the paper and some of our future plans can be found in Section 7. Finally, Appendix A contains several tables explaining the adjustments to be made in Theorems 3–5 if the number of stable continuous- and discrete-time variables is either zero or equal to the total number of these variables.

2. Preliminaries and Formulation of the Problem

Let N be the set of natural numbers and N + = { 0 } ∪ N . The following constants remain fixed throughout the paper:
n ∈ N is the dimension of the phase space of the equation, i.e., the size of the solution vector of the equation;
l ∈ N , 0 < l < n ;
l 1 , l 2 ∈ N + , 0 ≤ l 1 ≤ l , 0 ≤ l 2 ≤ n − l , 0 < l 1 + l 2 < n ;
m , m i ∈ N ;
i is the index satisfying the conditions 1 ≤ i ≤ m ;
j is the index satisfying the conditions 1 ≤ j ≤ m i ;
h is a positive real number;
1 ≤ p < ∞ , 1 ≤ q < ∞ .
We will also use the following notations:
( Ω , F , ( F t ) t ≥ 0 , P ) is a filtered probability space, where Ω is the set of elementary events, F is a σ -algebra of events on Ω , ( F t ) t ≥ 0 is a right-continuous flow (a filtration) of its σ -subalgebras on Ω , and P is the complete probability measure on F ;
E is the expectation related to P;
{ B i , i = 2 , … , m } is a set of mutually independent standard scalar Wiener processes (Brownian motions) on the above filtered probability space;
k n is the linear space of n-dimensional F 0 -measurable random variables;
L n is the linear space of n-dimensional progressively measurable stochastic processes on ( − ∞ , 0 ) that have almost surely (a.s.) essentially bounded paths;
D n is the linear space of n-dimensional progressively measurable (with respect to the above filtered probability space) stochastic processes on [ 0 , ∞ ) whose paths are a.s. right-continuous and have left limits;
D ¯ n is the linear space of processes defined on ( − ∞ , ∞ ) , which are equal to 0 for t < 0 and the restrictions of which to [ 0 , ∞ ) belong to D n ;
| ⋅ | is some norm in R n (which is kept fixed);
‖ ⋅ ‖ is the norm of m × n -matrices consistent with the norm in R n ;
E ¯ is the identity m × m -matrix;
I is the identity operator acting in a suitable space of stochastic processes;
‖ ⋅ ‖ X is the norm in some normed space X;
μ is the Lebesgue measure on [ 0 , ∞ ) ;
[ t ] is the integer part of t;
γ : [ 0 , ∞ ) → R 1 is some positive continuous function.
The following normed spaces are frequently used below:
k q n = { α : α ∈ k n , ‖ α ‖ k q n = def ( E | α | q ) 1 / q < ∞ } ;
L q n = { φ : φ ∈ L n , ‖ φ ‖ L q n = def ess sup ς < 0 ( E | φ ( ς ) | q ) 1 / q < ∞ } ;
M q γ = { x : x ∈ D n , ‖ x ‖ M q γ = def sup t ≥ 0 ( E | γ ( t ) x ( t ) | q ) 1 / q < ∞ } ;
M ¯ q γ = { x : x ∈ D ¯ l 1 + l 2 , ‖ x ‖ M ¯ q γ = def sup t ∈ ( − ∞ , ∞ ) ( E | γ ( t ) x ( t ) | q ) 1 / q < ∞ } ;
M q = def M q 1 , M ¯ q = def M ¯ q 1 .
When describing the solutions of discrete–continuous systems, we will first number the continuous-time components x 1 ( t ) , … , x l ( t ) ( t ≥ 0 ) and then the discrete-time components x l + 1 ( s ) , … , x n ( s ) ( s ∈ N + ) . In the vector notation, this will look as follows:
x l ( t ) = ( x 1 ( t ) , … , x l ( t ) ) T ( t ≥ 0 ) , x n − l ( s ) = ( x l + 1 ( s ) , … , x n ( s ) ) T ( s ∈ N + )
and
x ( t ) = ( x l ( t ) , x n − l ( [ t ] ) ) T = ( x 1 ( t ) , … , x l ( t ) , x l + 1 ( [ t ] ) , … , x n ( [ t ] ) ) T ( t ≥ 0 ) .
In the paper, we intend to study the partial moment stability of solutions of the following system of linear discrete–continuous Itô equations with aftereffect:
d x l ( t ) = j = 1 m 1 A 1 j ( t ) x ( h 1 j ( t ) ) d t + i = 2 m j = 1 m i A i j ( t ) x ( h i j ( t ) ) d B i ( t ) ( t 0 ) , x n l ( s + 1 ) = x n l ( s ) j = s A 1 ( s , j ) x ( j ) h + i = 2 m j = s A i ( s , j ) x ( j ) ( B i ( ( s + 1 ) h ) B i ( s h ) ) ( s N + )
with respect to the initial data ( φ , b ) , where
x ( ς ) = φ ( ς ) ( ς < 0 ) ,
x ( 0 ) = b .
Here, the following apply:
x ( t ) = ( x 1 ( t ) , , x l ( t ) , x l + 1 ( [ t ] ) , , x n ( [ t ] ) ) T ( t 0 ) is an unknown n-dimensional stochastic process;
A i j ( t ) are l × n -matrices, where the entries of the matrices A 1 j ( t ) , j = 1 , , m 1 are progressively measurable scalar stochastic processes on the interval [ 0 , ) with a.s. locally integrable paths, and the entries of the matrices A i j ( t ) , i = 2 , , m , j = 1 , , m i are progressively measurable scalar stochastic processes on [ 0 , ) , whose paths are a.s. locally square integrable;
h i j ( t ) are Borel measurable functions defined on [ 0 , ∞ ) and such that h i j ( t ) ≤ t ( t ≥ 0 )   μ -almost everywhere;
A i ( s , j ) are ( n l ) × n -matrices, whose entries are F s -measurable scalar random variables for all s N + , j = , , s ;
φ ( ς ) = ( φ 1 ( ς ) , , φ l ( ς ) , φ l + 1 ( [ ς ] ) , , φ n ( [ ς ] ) ) T ( ς < 0 ) are F 0 -measurable n-dimensional stochastic processes with a.s. essentially bounded paths;
b = ( b 1 , . . , b n ) T is an F 0 -measurable n-dimensional random variable, i.e., b k n .
The equalities (1a), (1b) will be referred to as the initial conditions for (1), while (1), (1a), (1b) together will be referred to as the initial value problem (1), (1a), (1b).
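To make the structure of System (1) more tangible, the following minimal sketch simulates a toy two-dimensional instance (one continuous-time and one discrete-time component, a single Wiener process, a constant delay) with the Euler–Maruyama scheme for the continuous part; all coefficients, the delay and the step sizes are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Illustrative simulation of a toy instance of the hybrid system (1):
# one continuous-time component x1 with a constant delay tau in the drift and diffusion,
# and one discrete-time component x2 updated at the grid points s*h, s = 0, 1, 2, ...
# All coefficients below are assumptions made for this sketch only.
rng = np.random.default_rng(0)
h = 1.0                          # discrete step of system (1)
dt = 0.01                        # Euler-Maruyama step for the continuous part
steps_per_h = int(round(h / dt))
n_discrete = 20                  # simulate on [0, n_discrete * h]
tau = 0.5                        # constant delay: h_11(t) = h_21(t) = t - tau
lag = int(round(tau / dt))

N = n_discrete * steps_per_h
x1 = np.zeros(N + 1)
x2 = np.zeros(n_discrete + 1)
x1[0], x2[0] = 1.0, 1.0          # initial value b; the prehistory (t < 0) is frozen at x1(0)

def x1_delayed(k):
    return x1[max(k - lag, 0)]

dW = 0.0                          # accumulates the increment B((s+1)h) - B(sh)
for k in range(N):
    s = k // steps_per_h          # current discrete index [t]
    dB = rng.normal(0.0, np.sqrt(dt))
    dW += dB
    # continuous part: dx1 = (-a*x1(t - tau) + b12*x2([t])) dt + sigma*x1(t - tau) dB
    x1[k + 1] = x1[k] + (-1.5 * x1_delayed(k) + 0.2 * x2[s]) * dt + 0.1 * x1_delayed(k) * dB
    # discrete part fires when t reaches (s + 1)*h:
    if (k + 1) % steps_per_h == 0:
        # x2(s+1) = x2(s) + (-c*x2(s) + b21*x1(s*h)) * h + d*x2(s) * (B((s+1)h) - B(sh))
        x2[s + 1] = x2[s] + (-0.4 * x2[s] + 0.1 * x1[s * steps_per_h]) * h + 0.05 * x2[s] * dW
        dW = 0.0
```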
Remark 1. 
In most applications of delay equations, it is assumed that φ in (1a) is continuous. In this case, the initial conditions (1a), (1b) can be merged into a single initial condition x ( ς ) = φ ( ς ) ( ς 0 ). However, for many delay systems, including stochastic systems or systems with impulses, it is more natural to assume, as we do in this paper, that the paths of the stochastic process φ belong to the space L . Then, the paths are only defined up to sets of the zero Lebesgue measure, while their values at individual time-points are undefined. Yet, the value at t = 0 must be specified in order that System (1) has a unique solution. This is why it is necessary to split the initial condition into two parts, as it is performed in (1a), (1b). Note that the case of a continuous φ is included in our analysis.
The following definition is a standard description of what is meant by a solution of the initial value problem (1), (1a), (1b).
Definition 1. 
By the solution of the initial value problem (1), (1a), (1b), we mean a stochastic process
x ( t ) = ( x 1 ( t ) , , x l ( t ) , x l + 1 ( [ t ] ) , , x n ( [ t ] ) ) T ( t ( , ) ) ,
which is progressively measurable for t 0 and which satisfies the equalities x ( ς ) = φ ( ς ) ( ς < 0 ) , x ( 0 ) = ( x l ( 0 ) , x n l ( 0 ) ) T = b , and which a.s. satisfies the system
x l ( t ) = x l ( 0 ) j = 1 m 1 0 t A 1 j ( ς ) x ( h 1 j ( ς ) ) d ς + i = 2 m j = 1 m i 0 t A i j ( ς ) x ( h i j ( ς ) ) d B i ( ς ) ( t 0 ) , x n l ( s + 1 ) = x n l ( s ) j = s A 1 ( s , j ) x ( j ) h + i = 2 m j = s A i ( s , j ) x ( j ) ( B i ( ( s + 1 ) h ) B i ( s h ) ) ( s N + ) ,
where the first integral is understood in the Lebesgue sense, while the second integral is understood in the Itô sense.
Using the standard contraction mapping technique adjusted for the case of stochastic delay equations, one can easily check that under the assumptions made, the initial value problem (1), (1a), (1b) has a unique solution. In particular, this problem only has the zero solution under the zero initial conditions (1a), (1b). Let us denote this solution by x ( t , b , φ ) ( t ( , ) ) . Obviously, x ( . , b , φ ) | [ 0 , + ) D n . For any
x ( t , b , φ ) = ( x 1 ( t , b , φ ) , , x l ( t , b , φ ) , x l + 1 ( [ t ] , b , φ ) , , x n ( [ t ] , b , φ ) ) T
we introduce the notation
y ( t , b , φ ) = ( x 1 ( t , b , φ ) , , x l 1 ( t , b , φ ) , x l + 1 ( [ t ] , b , φ ) , , x l + l 2 ( [ t ] , b , φ ) ) T if l 1 0 , l 2 0 ,
y ( t , b , φ ) = ( x l + 1 ( [ t ] , b , φ ) , , x l + l 2 ( [ t ] , b , φ ) ) T if l 1 = 0
and
y ( t , b , φ ) = ( x 1 ( t , b , φ ) , , x l 1 ( t , b , φ ) ) T if l 2 = 0 .
Remark 2. 
It is quite important to remember that this notation, which interprets y as a corresponding stable part of the full solution x in different situations, will be used in the remaining part of the paper without additional comments. It is a rather convenient notational agreement, which helps to unify different kinds of partial stability.
For continuous-time systems, partial stability can always be reduced to stability with respect to some of the solution’s first components. The situation with continuous–discrete systems is more complicated, and even if we use renumbering and assume that 0 < l 1 + l 2 < n , we can still obtain seven different cases as follows:
Stability with respect to the first l 1 continuous-time components and the first l 2 discrete-time components, i.e.,
  • Case 1: 0 < l 1 < l , 0 < l 2 < n − l . Case 2: l 1 = l , 0 < l 2 < n − l . Case 3: 0 < l 1 < l , l 2 = n − l .
Stability with respect to the first l 1 continuous-time components, i.e.,
  • Case   4 : l 1 = l , l 2 = 0 . Case   5 : 0 < l 1 < l , l 2 = 0 .
Stability with respect to the first l 2 discrete-time components, i.e.,
  • Case 6: l 1 = 0 , 0 < l 2 < n − l . Case 7: l 1 = 0 , l 2 = n − l .
In the main body of this paper, we deal with Case 1. However, all the results in Section 3, Section 4 and Section 5 are valid for other cases as well, provided that the operators in (4) below are defined according to the agreements given in Appendix A.
Remark 2 is used, in particular, in the following definition, which covers all the above types of partial stability of discrete–continuous stochastic systems with aftereffect if one chooses appropriate values of l 1 and l 2 .
Definition 2. 
We call the zero solution x ( t , 0 , 0 ) 0 of System (1) (and, for simplicity, System (1) itself) the following:
q-Stable with respect to the first l 1 continuous-time components and the first l 2 discrete-time components if, for any ϵ > 0 , there exists δ ( ϵ ) > 0 such that for any b ∈ k q n , φ ∈ L q n with ‖ b ‖ k q n + ‖ φ ‖ L q n < δ ( ϵ ) , the inequality ( E | y ( t , b , φ ) | q ) 1 / q ≤ ϵ holds for any t ≥ 0 ;
Asymptotically q-stable with respect to the first l 1 continuous-time components and the first l 2 discrete-time components if it is q-stable with respect to the first l 1 continuous-time components and the first l 2 discrete-time components, and, in addition, for any b ∈ k q n , φ ∈ L q n with ‖ b ‖ k q n + ‖ φ ‖ L q n < δ ( ϵ ) , one has lim t → + ∞ ( E | y ( t , b , φ ) | q ) 1 / q = 0 ;
Exponentially q-stable with respect to the first l 1 continuous-time components and the first l 2 discrete-time components if there exist some positive numbers c ¯ , β such that for any b ∈ k q n , φ ∈ L q n , the inequality ( E | y ( t , b , φ ) | q ) 1 / q ≤ c ¯ ( ‖ b ‖ k q n + ‖ φ ‖ L q n ) exp { − β t } holds.
Let us stress that if l 1 = l and l 2 = n − l , then y = x , and Definition 2 reduces to the standard definition of global moment stability of the zero solution of a stochastic system (see, e.g., [28,29]). We also remark that the linearity of System (1) implies that the local and global stabilities of the zero solution are equivalent.
The next definition refers to the notion of input-to-state stability, which is well known in control theory (see, e.g., [30]) and which was adapted to the case of stochastic hybrid equations in [29]. This definition makes it possible to put the three kinds of stability from Definition 2 into a common framework (see Remark 3 below), which considerably simplifies the analysis of partial stability.
Definition 3. 
We call System (1) M q γ y -stable if for any b ∈ k q n , φ ∈ L q n , the solution x ( t , b , φ ) ( t ∈ ( − ∞ , + ∞ ) ) of the initial value problem (1), (1a), (1b) satisfies y ( . , b , φ ) | [ 0 , ∞ ) ∈ M q γ and the inequality
‖ y ( . , b , φ ) | [ 0 , ∞ ) ‖ M q γ ≤ c ¯ ( ‖ b ‖ k q n + ‖ φ ‖ L q n )
for some positive number c ¯ .
Remark 3. 
Definitions 2 and 3 are closely related via the following statements that are proved in [29]:
M q y -stability of System (1) implies its q-stability with respect to the first l 1 continuous-time components and the first l 2 discrete-time components;
If γ ( t ) satisfies the conditions γ ( t ) ≥ δ > 0 and lim t → + ∞ γ ( t ) = ∞ , then M q γ y -stability of System (1) ( t ≥ 0 ) implies its asymptotic q-stability with respect to the first l 1 continuous-time components and the first l 2 discrete-time components;
If γ ( t ) = exp { β t } , where β is some positive number, then M q γ y -stability of System (1) implies its exponential q-stability with respect to the first l 1 continuous-time components and l 2 discrete-time components.
Using these relationships, we can replace partial moment Lyapunov stability by M q γ y -stability, choosing different γ, l 1 and l 2 . Technically, it is much easier to prove the M q γ y -stability of System (1), where it is sufficient to find out whether the vector y ( . , b , φ ) | [ 0 , ∞ ) , which is composed of the first l 1 continuous-time components and the first l 2 discrete-time components of the solution, belongs to M q γ for any b ∈ k q n , φ ∈ L q n and whether inequality (2) is satisfied.
Below, we will formulate all stability results for System (1) in terms of M q γ y -stability, remembering that they, in fact, give conditions for partial-moment Lyapunov stability of this system via the statements listed in this remark.
As already mentioned, the notion of M q γ y -stability goes back to the notion of input-to-state stability from control theory, here adapted to stochastic partial Lyapunov stability. Notice that φ is treated in this case as a part of the right-hand side of System (1) in its representation (3) below, so that the inputs are b and φ . This is crucial for the regularization method, also known as the method of auxiliary equations, which is outlined in the next section. The method was developed in [27] as an alternative to a Lyapunov-type approach. A stochastic modification of this method was used by the authors in a number of publications (see, e.g., [28,29] and the references therein).

3. The Regularization Method

This method has a long history in the theory of deterministic and stochastic delay differential equations (see, e.g., the monograph [27], the articles [28,29] and the references therein). In a nutshell, Lyapunov stability in this method (e.g., partial moment Lyapunov stabilities from Definition 2) is replaced by a suitable input-to-state stability (e.g., M q γ y -stability from Definition 3), and then the delay system in question is transformed into an equivalent system with the help of an auxiliary equation, which is simpler and which already has a required stability property. Using this transformation, one can effectively produce coefficient-based conditions of Lyapunov stability applying matrix inequalities or other estimates.
To better explain this method, we consider its particular case of a linear deterministic delay equation x ˙ ( t ) = ( Γ x ) ( t ) coupled with the deterministic initial conditions (1a)–(1b). First of all, we convert the given delay equation into a hereditary differential equation on [ 0 , ) by putting
x ¯ ( t ) = x ( t ) ( t ≥ 0 ) , x ¯ ( t ) = 0 ( t < 0 ) and φ ¯ ( t ) = 0 ( t ≥ 0 ) , φ ¯ ( t ) = φ ( t ) ( t < 0 )
and defining ( V x ) ( t ) ≡ ( Γ x ¯ ) ( t ) and f ( t ) ≡ ( Γ φ ¯ ) ( t ) for t > 0 . By the property of linearity, Γ ( x ¯ + φ ¯ ) = Γ ( x ¯ ) + Γ ( φ ¯ ) = V x ¯ + f , which gives the hereditary differential equation x ¯ ˙ = V x ¯ + f on [ 0 , ∞ ) . The next step is based on the choice of an auxiliary equation x ¯ ˙ = Q x ¯ + g ( t > 0 ) , which is simpler and which has a required stability property (in the analysis below, it will be System (7)). According to the general theory of functional-differential equations [27], we have the solution representation x ¯ ( t ) = U ( t ) x ¯ ( 0 ) + ( W g ) ( t ) ( t ≥ 0 ) , where U ( t ) is the fundamental matrix of the associated homogeneous equation and W is the Cauchy operator, i.e., ( W g ) ( 0 ) = 0 , where W g is a solution of the auxiliary equation for any admissible g. A similar formula is true for stochastic hybrid systems; see [29]. This representation is used to regularize the equation in question by rewriting it as x ¯ ˙ = Q x ¯ + ( V − Q ) x ¯ + f , or, equivalently, as
x ¯ ( t ) = U ( t ) x ¯ ( 0 ) + ( W ( V − Q ) x ¯ ) ( t ) + ( W f ) ( t ) ( t ≥ 0 ) .
By this, we obtain the operator equation x ¯ = Θ x ¯ + U x 0 + W f , where Θ = W ( V − Q ) and x ¯ ( 0 ) = x ( 0 ) = x 0 . This equation corresponds to System (8) below. Estimating suitable norms, we obtain
‖ x ¯ ‖ ≤ ‖ Θ ‖ ‖ x ¯ ‖ + c 1 ‖ x 0 ‖ + c 2 ‖ f ‖ ,
and if now ‖ Θ ‖ < 1 , then the equation x ¯ ˙ = V x ¯ + f becomes input-to-state stable, and this result corresponds to Theorem 1 below in the case of partial moment stability. Alternatively, one can use component-wise estimates in the analysis of the operator equation x ¯ = Θ x ¯ + U x 0 + W f , and this idea is implemented in Theorem 2 below. In either case, the outcome will be Lyapunov stability of the zero solution of the original delay equation x ˙ = Γ x in the desired sense. This algorithm is called “the regularization method” or “the method of auxiliary equations” in the literature, while a slightly different form of it is called “the W-method” in the monograph [27]. A systematic validation of this method and its applications to various classes of delay equations as well as its comparison with Lyapunov-like stability analysis can be found in the monograph [27], in the authors’ publications [28,29] and in the references therein.
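For a scalar instance, the contraction condition ‖ Θ ‖ < 1 can also be examined numerically. The sketch below (an illustration under assumed parameter values, not the paper's construction) discretizes the equation x ˙ ( t ) = − a x ( t − τ ) , takes the delay-free auxiliary equation x ˙ = − λ x + g , builds the corresponding operators on a time grid and estimates the sup-norm of Θ = W ( V − Q ) .

```python
import numpy as np

# Grid-based surrogate of the regularization (W-method) contraction check for
# x'(t) = -a*x(t - tau) with the auxiliary equation x'(t) = -lam*x(t) + g(t).
# The parameter values a, tau, lam and the grid are assumptions of this sketch.
a, tau, lam = 1.0, 0.3, 1.0
T, dt = 20.0, 0.02
N = int(T / dt)
t = np.arange(N) * dt

# Delay operator V: (V x)(t_i) = -a * x(t_i - tau); the prehistory (t < 0) goes into the
# forcing term f and therefore does not enter the matrix acting on x.
V = np.zeros((N, N))
shift = int(round(tau / dt))
for i in range(shift, N):
    V[i, i - shift] = -a

# Auxiliary operator Q: (Q x)(t_i) = -lam * x(t_i).
Q = -lam * np.eye(N)

# Discretized Cauchy operator W of the auxiliary equation (rectangle rule):
# (W g)(t_i) ~ sum_j exp(-lam*(t_i - t_j)) * g(t_j) * dt for t_j <= t_i.
W = np.zeros((N, N))
for i in range(N):
    W[i, : i + 1] = np.exp(-lam * (t[i] - t[: i + 1])) * dt

Theta = W @ (V - Q)
theta_norm = np.linalg.norm(Theta, ord=np.inf)   # sup-norm over the grid
print(f"||Theta|| ~ {theta_norm:.3f}  ->  contraction: {theta_norm < 1}")
```

With a = 1, τ = 0.3 and λ = a, the computed norm stays noticeably below one, so the contraction condition holds on this grid; increasing the delay eventually violates it.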
To utilize the regularization method for the case of System (1), we rewrite the initial value problem (1), (1a), (1b) in the form of an operator equation by putting x ( t ) = x ¯ ( t ) + φ ¯ ( t ) , where x ¯ ( t ) = ( x ¯ l ( t ) , x ¯ n − l ( [ t ] ) ) T is an unknown n-dimensional stochastic process on ( − ∞ , ∞ ) such that x ¯ ( t ) = 0 for t < 0 and x ¯ ( t ) = x ( t ) for t ≥ 0 , and φ ¯ ( t ) is a known n-dimensional random process on ( − ∞ , ∞ ) such that φ ¯ ( t ) = φ ( t ) for t < 0 and φ ¯ ( t ) = 0 for t ≥ 0 . Then, the initial value problem (1), (1a), (1b) is equivalent (see [27]) to the following problem:
d x ¯ l ( t ) = ( ( V l x ¯ ) ( t ) + f l ( t ) ) d Z ( t ) ( t 0 ) , x ¯ n l ( s + 1 ) = x ¯ n l ( s ) + ( V n l x ¯ ) ( s ) + f n l ( s ) ( s N + ) ,
x ¯ ( 0 ) = b ,
where
( V l x ¯ ) ( t ) = j = 1 m 1 A 1 j ( t ) x ¯ ( h 1 j ( t ) ) , j = 1 m 2 A 2 j ( t ) x ¯ ( h 2 j ( t ) ) , , j = 1 m m A m j ( t ) x ¯ ( h m j ( t ) ) , f l ( t ) = j = 1 m 1 A 1 j ( t ) φ ¯ ( h 1 j ( t ) ) , j = 1 m 2 A 2 j ( t ) φ ¯ ( h 2 j ( t ) ) , , j = 1 m m A m j ( t ) φ ¯ ( h m j ( t ) ) , ( V n l x ¯ ) ( s ) = j = 1 s A 1 ( s , j ) x ¯ ( j ) h + i = 2 m j = 1 s A i ( s , j ) x ¯ ( j ) ( B i ( ( s + 1 ) h ) B i ( s h ) ) , f n l ( s ) = j = 1 A 1 ( s , j ) φ ¯ ( j ) h + i = 2 m j = 1 A i ( s , j ) φ ¯ ( j ) ( B i ( ( s + 1 ) h ) B i ( s h ) ) , Z ( t ) = ( t , B 2 ( t ) , , B m ( t ) ) T .
The solution of the problem (3a), (3b) will be denoted below by x ¯ ( t , b , φ ) . Obviously, for t 0 , we have x ( t , b , φ ) = x ¯ ( t , b , φ ) .
As previously mentioned, we will focus on the case 0 < l 1 < l , 0 < l 2 < n l in this paper, keeping in mind that the remaining cases can be considered similarly, provided that the matrices in Conditions M1–M2 below, which are crucial for constructing the operators in (4), are redefined according to the tables in Appendix A.
Condition M1. 
Let M be an l × n matrix and 0 < l 1 < l , 0 < l 2 < n − l . Then, the following apply:
M 1 is an l 1 × l 1 -matrix obtained from M by removing the last l − l 1 rows and the last n − l 1 columns;
M 2 is an l 1 × ( l − l 1 ) -matrix obtained from M by removing the last l − l 1 rows, as well as the first l 1 and the last n − l columns;
M 3 is an l 1 × l 2 -matrix obtained from M by removing the last l − l 1 rows, as well as the first l and last n − l − l 2 columns;
M 4 is an l 1 × ( n − l − l 2 ) -matrix obtained from M by removing the last l − l 1 rows and the first l + l 2 columns;
M 5 is an ( l − l 1 ) × l 1 -matrix obtained from M by removing the first l 1 rows and the last n − l 1 columns;
M 6 is an ( l − l 1 ) × ( l − l 1 ) -matrix obtained from M by removing the first l 1 rows, as well as the first l 1 and last n − l columns;
M 7 is an ( l − l 1 ) × l 2 -matrix obtained from M by removing the first l 1 rows, as well as the first l and last n − l − l 2 columns;
M 8 is an ( l − l 1 ) × ( n − l − l 2 ) -matrix obtained from M by removing the first l 1 rows and the first l + l 2 columns;
M 9 is an l 1 × n -matrix obtained from M by removing the last l − l 1 rows;
M 10 is an ( l − l 1 ) × n -matrix obtained from M by removing the first l 1 rows.
Condition M1 is primarily used for matrices depending on t, e.g., A i j ( t ) in this section and A ( t ) in Section 5.
Condition M2. 
Let M be an ( n − l ) × n matrix and 0 < l 1 < l , 0 < l 2 < n − l . Then, the following apply:
M 1 is an l 2 × l 1 -matrix obtained from M by removing the last n − l − l 2 rows and the last n − l 1 columns;
M 2 is an l 2 × ( l − l 1 ) -matrix obtained from M by removing the last n − l − l 2 rows, as well as the first l 1 and the last n − l columns;
M 3 is an l 2 × l 2 -matrix obtained from M by removing the last n − l − l 2 rows, as well as the first l and last n − l − l 2 columns;
M 4 is an l 2 × ( n − l − l 2 ) -matrix obtained from M by removing the last n − l − l 2 rows and the first l + l 2 columns;
M 5 is an ( n − l − l 2 ) × l 1 -matrix obtained from M by removing the first l 2 rows and the last n − l 1 columns;
M 6 is an ( n − l − l 2 ) × ( l − l 1 ) -matrix obtained from M by removing the first l 2 rows, as well as the first l 1 and last n − l columns;
M 7 is an ( n − l − l 2 ) × l 2 -matrix obtained from M by removing the first l 2 rows, as well as the first l and last n − l − l 2 columns;
M 8 is an ( n − l − l 2 ) × ( n − l − l 2 ) -matrix obtained from M by removing the first l 2 rows and the first l + l 2 columns;
M 9 is an l 2 × n -matrix obtained from M by removing the last n − l − l 2 rows;
M 10 is an ( n − l − l 2 ) × n -matrix obtained from M by removing the first l 2 rows.
Condition M2 is used for matrices depending on s, e.g., A i ( s , j ) in this section and B ( s ) in Section 5.
In order to perform stability analysis, we separate the variables that should be stable from the variables that may be unstable. These variables will be denoted by the letters y and h, respectively. Recall that it is the first l 1 continuous-time variables and the first l 2 discrete-time variables ( 0 < l 1 < l and 0 < l 2 < n l ) that should be stable. With these agreements, we put
y ¯ l 1 ( t ) = ( x ¯ 1 ( t ) , , x ¯ l 1 ( t ) ) T , h ¯ l 1 ( t ) = ( x ¯ l 1 + 1 ( t ) , , x ¯ l ( t ) ) T , y ¯ l 2 ( t ) = ( x ¯ l + 1 ( [ t ] ) , , x ¯ l + l 2 ( [ t ] ) ) T , h ¯ l 2 ( t ) = ( x ¯ l + l 2 + 1 ( [ t ] ) , , x n ( [ t ] ) ) T .
With this agreement, we have x ¯ l 1 = ( y ¯ l 1 , h ¯ l 1 ) T , x ¯ l 2 = ( y ¯ l 2 , h ¯ l 2 ) T , x ¯ = ( y ¯ l 1 , h ¯ l 1 , y ¯ l 2 , h ¯ l 2 ) T . System (3) splits, then, into the following form if we use the notation from Conditions M1–M2, applied to the matrices A i j ( t ) and A j ( s , j ) , respectively:
d y ¯ l 1 ( t ) = ( V 1 l y ¯ l 1 ) ( t ) + ( V 2 l h ¯ l 1 ) ( t ) + ( V 3 l y ¯ l 2 ) ( t ) + ( V 4 l h ¯ l 2 ) ( t ) + f l 1 ( t ) d Z ( t ) , d h ¯ l 1 ( t ) = ( V 5 l y ¯ l 1 ) ( t ) + ( V 6 l h ¯ l 1 ) ( t ) + ( V 7 l y ¯ l 2 ) ( t ) + ( V 8 l h ¯ l 2 ) ( t ) + f l l 1 ( t ) d Z ( t ) , y ¯ l 2 ( s + 1 ) = y ¯ l 2 ( s ) + ( V 1 n l y ¯ l 1 ) ( s ) + ( V 2 n l h ¯ l 1 ) ( s ) + ( V 3 n l y ¯ l 2 ) ( s ) + ( V 4 n l h ¯ l 2 ) ( s ) + f l 2 ( s ) , h ¯ l 2 ( s + 1 ) = h ¯ l 2 ( s ) + ( V 5 n l y ¯ l 1 ) ( s ) + ( V 6 n l h ¯ l 1 ) ( s ) + ( V 7 n l y ¯ l 2 ) ( s ) + ( V 8 n l h ¯ l 2 ) ( s ) + f n l l 2 ( s ) ,
where t 0 , s N + and
( V k l y ¯ l 1 ) ( t ) = j = 1 m 1 ( A 1 j ( t ) ) k y ¯ l 1 ( h 1 j ( t ) ) , j = 1 m 2 ( A 2 j ( t ) ) k y ¯ l 1 ( h 2 j ( t ) ) , , j = 1 m m ( A m j ( t ) ) k y ¯ l 1 ( h m j ( t ) ) , ( V k n l y ¯ l 1 ) ( s ) = j = 1 s ( A 1 ( s , j ) ) k y ¯ l 1 ( j ) h + i = 2 m j = 1 s ( A i ( s , j ) ) k y ¯ l 1 ( j ) ( B i ( ( s + 1 ) h ) B i ( s h ) ) for k = 1 , 5 ;
( V k l h ¯ l 1 ) ( t ) = j = 1 m 1 ( A 1 j ( t ) ) k h ¯ l 1 ( h 1 j ( t ) ) , j = 1 m 2 ( A 2 j ( t ) k h ¯ l 1 ( h 2 j ( t ) ) , , j = 1 m m ( A m j ( t ) ) k h ¯ l 1 ( h m j ( t ) ) , ( V k n l h ¯ l 1 ) ( s ) = j = 1 s ( A 1 ( s , j ) ) k h ¯ l 1 ( j ) h + i = 2 m j = 1 s ( A i ( s , j ) ) k h ¯ l 1 ( j ) ( B i ( ( s + 1 ) h ) B i ( s h ) ) for k = 2 , 6 ;
( V k l y ¯ l 2 ) ( t ) = j = 1 m 1 ( A 1 j ( t ) ) k y ¯ l 2 ( h 1 j ( t ) ) , j = 1 m 2 ( A 2 j ( t ) ) k y ¯ l 2 ( h 2 j ( t ) ) , , j = 1 m m ( A m j ( t ) ) k y ¯ l 2 ( h m j ( t ) ) , V k n l y ¯ l 2 ) ( s ) = j = 1 s ( A 1 ( s , j ) ) k y ¯ l 2 ( j ) h + i = 2 m j = 1 s ( A i ( s , j ) ) k y ¯ l 2 ( j ) ( B i ( ( s + 1 ) h ) B i ( s h ) ) for k = 3 , 7 ;
( V k l h ¯ l 2 ) ( t ) = j = 1 m 1 ( A 1 j ( t ) ) k h ¯ l 2 ( h 1 j ( t ) ) , j = 1 m 2 ( A 2 j ( t ) k h ¯ l 2 ( h 2 j ( t ) ) , , j = 1 m m ( A m j ( t ) ) k h ¯ l 2 ( h m j ( t ) ) , ( V k n l h ¯ l 2 ) ( s ) = j = 1 s ( A 1 ( s , j ) ) k h ¯ l 2 ( j ) h + i = 2 m j = 1 s ( A i ( s , j ) ) k h ¯ l 2 ( j ) ( B i ( ( s + 1 ) h ) B i ( s h ) ) for k = 4 , 8 ;
and, finally,
f l 1 ( t ) = j = 1 m 1 ( A 1 j ( t ) ) 9 φ ¯ ( h 1 j ( t ) ) , j = 1 m 2 ( A 2 j ( t ) ) 9 φ ¯ ( h 2 j ( t ) ) , , j = 1 m m ( A m j ( t ) ) 9 φ ¯ ( h m j ( t ) ) , f l l 1 ( t ) = j = 1 m 1 ( A 1 j ( t ) ) 10 φ ¯ ( h 1 j ( t ) ) , j = 1 m 2 ( A 2 j ( t ) ) 10 φ ¯ ( h 2 j ( t ) ) , , j = 1 m m ( A m j ( t ) ) 10 φ ¯ ( h m j ( t ) ) , f l 2 ( s ) = j = 1 ( A 1 ( s , j ) ) 9 φ ( j ) h + i = 2 m j = 1 ( A i ( s , j ) ) 9 φ ( j ) ( B i ( ( s + 1 ) h ) B i ( s h ) ) , f n l l 2 ( s ) = j = 1 ( A 1 ( s , j ) ) 10 φ ( j ) h + i = 2 m j = 1 ( A i ( s , j ) ) 10 φ ( j ) ( B i ( ( s + 1 ) h ) B i ( s h ) ) .
Let us stress that the state space of Systems (3) and (4) is the same, but in the latter case, it is represented as a direct product of the state spaces for the stable and unstable variables, respectively.
Since any x ¯ ( 0 ) ∈ k n gives rise to a unique solution of System (3), each of the equations of System (4) will have a unique solution for any fixed vector values { y ¯ l 1 ( 0 ) , y ¯ l 2 , h ¯ l 1 , h ¯ l 2 } , { h ¯ l 1 ( 0 ) , y ¯ l 1 , y ¯ l 2 , h ¯ l 2 } , { y ¯ l 2 ( 0 ) , y ¯ l 1 , h ¯ l 1 , h ¯ l 2 } , { h ¯ l 2 ( 0 ) , y ¯ l 1 , y ¯ l 2 , h ¯ l 1 } , respectively. Evidently, the second and fourth equations of System (4) are, respectively, equivalent to the equations
h ¯ l 1 ( t ) = H l 1 ( t ) h ¯ l 1 ( 0 ) + ( C l 1 ( V 5 l y ¯ l 1 + V 7 l y ¯ l 2 + V 8 l h ¯ l 2 + f l l 1 ) ) ( t ) ( t 0 ) , h ¯ l 2 ( s ) ) = H l 2 ( s , 0 ) h ¯ l 2 ( 0 ) + τ = 0 s 1 H l 2 ( s , τ + 1 ) ( ( V 5 n l y ¯ l 1 ) ( τ ) + ( V 6 n l h ¯ l 1 ) ( τ ) + ( V 7 n l y ¯ l 2 ) ( τ ) + f n l l 2 ( τ ) ) ( s N + ) ,
where H l 1 is the fundamental matrix, and C l 1 is the Cauchy operator for the second equation of System (4); H l 2 ( s , τ ) ( s , τ N + , 0 τ s ) is an ( n l l 2 ) × ( n l l 2 ) -matrix whose columns are solutions of the system h ¯ l 2 ( s + 1 ) = h ¯ l 2 ( s ) ( B ( s ) ) 4 h ¯ l 2 ( s ) h ( s N + ) , where H l 2 ( s , s ) ( s N + ) is the identity matrix of dimension ( n l l 2 ) × ( n l l 2 ) . Therefore, from the first and third equations of System (4), we obtain
d y ¯ l 1 ( t ) = ( V 1 l y ¯ l 1 ) ( t ) + ( G 1 y ¯ ) ( t ) + S 1 ( t ) + ( F 1 φ ¯ ) ( t ) d Z ( t ) ( t 0 ) , y ¯ l 2 ( s + 1 ) = y ¯ l 2 ( s ) + ( V 3 n l y ¯ l 2 ) ( s ) + ( G 2 y ¯ ) ( s ) + S 2 ( s ) + ( F 2 φ ¯ ) ( s ) ( s N + ) ,
where
( G 1 y ¯ ) ( t ) = ( V 2 l C l 1 V 5 l y ¯ l 1 ) ( t ) + V 4 l τ = 0 [ · ] 1 H l 2 ( [ · ] , τ + 1 ) ( V 5 n l y ¯ l 1 ) ( τ ) ( t )
+ ( V 3 l y ¯ l 2 ) ( t ) + ( V 2 l C l 1 V 7 l y ¯ l 2 ) ( t ) + V 4 l τ = 0 [ · ] 1 H l 2 ( [ · ] , τ + 1 ) ( V 7 n l y ¯ l 2 ) ( τ ) ( t ) ;
( G 2 y ¯ ) ( s ) = ( V 1 n l y ¯ l 1 ) ( s ) + ( V 2 n l C l 1 V 5 l y ¯ l 1 ) ( s ) + V 4 n l τ = 0 · 1 H l 2 ( · , τ + 1 ) ( V 5 n l y ¯ l 1 ) ( τ ) ( s )
+ ( V 2 n l C l 1 V 7 l y ¯ l 2 ) ( s ) + V 4 n l τ = 0 · 1 H l 2 ( · , τ + 1 ) ( V 7 n l y ¯ l 2 ) ( τ ) ( s ) ;
S 1 ( t ) = ( V 2 l ( H l 1 h ¯ l 1 ( 0 ) ) ) ( t ) + ( V 4 l ( H l 2 ( [ · ] , 0 ) h ¯ l 2 ( 0 ) ) ) ( t ) ;
( F 1 φ ¯ ) ( t ) = ( V 2 l ( C l 1 f l l 1 ) ) ( t ) + V 4 l τ = 0 · 1 H l 2 ( [ · , τ + 1 ] ) f n l l 2 ( τ ) ( t ) + f l 1 ( t ) ;
S 2 ( s ) = ( V 2 n l ( H l 1 h ¯ l 1 ( 0 ) ) ) ( s ) + ( V 4 n l ( H l 2 ( · , 0 ) h ¯ l 2 ( 0 ) ) ) ( s ) ;
( F 2 φ ¯ ) ( s ) = ( V 2 n l ( C l 1 f l l 1 ) ) ( s ) + ( V 2 n l τ = 0 · 1 H l 2 ( · , τ + 1 ) f n l l 2 ( s ) + f l 2 ( s ) .
The state space of System (5), unlike those of Systems (3) and (4), only contains stable variables, whereas the equations for the unstable variables in Equation (4) are resolved and their solutions are inserted in the right-hand sides of Equation (5).
Hence, System (1) is M q γ y -stable if and only if the solution
y ¯ ( t , b , φ ) = ( y ¯ l 1 ( t , b , φ ) , y ¯ l 2 ( t , b , φ ) ) T
of System (5) satisfies y ¯ ( . , b , φ ) M ¯ q γ and
y ¯ ( . , b , φ ) M ¯ q γ c ¯ b k q n + φ L q n
for any b k q n , φ L q n and some positive number c ¯ .
To verify these conditions, we apply the regularization method, which starts with a choice of an auxiliary system whose asymptotic properties are known. Let this system be of the form
d y ¯ l 1 ( t ) = ( ( Q y ¯ l 1 ) ( t ) + G ( t ) ) d Z ( t ) ( t 0 ) , y ¯ l 2 ( s + 1 ) = y ¯ l 2 ( s ) + ( B ¯ ( s ) y ¯ l 2 ( s ) + g ( s ) ) h ( s N + ) ,
where ( Q y ¯ l 1 ) ( t ) = ( B ( t ) y ¯ l 1 ( t ) , 0 ¯ , … , 0 ¯ ) ; B ( t ) is an l 1 × l 1 -matrix whose entries are progressively measurable stochastic processes on the interval [ 0 , ∞ ) with a.s. locally integrable paths; 0 ¯ is the column vector of dimension l 1 with zero entries (their number is m − 1 ); G ( t ) = ( g 1 ( t ) , … , g m ( t ) ) , where g 1 ( t ) is an l 1 -dimensional progressively measurable stochastic process on [ 0 , ∞ ) with a.s. locally integrable paths, and g i ( t ) , i = 2 , … , m are l 1 -dimensional progressively measurable stochastic processes on [ 0 , ∞ ) with a.s. locally square integrable paths; B ¯ ( s ) is an l 2 × l 2 -matrix whose entries are F s -measurable scalar random variables ( s ∈ N + ); g ( s ) are l 2 -dimensional F s -measurable random variables ( s ∈ N + ); and h > 0 is the constant from System (1).
Applying the standard representation of solutions of linear ordinary inhomogeneous differential and difference equations, we obtain the following formulas for the solution y ¯ ( t ) of System (7), satisfying y ¯ ( 0 ) = ( y ¯ l 1 ( 0 ) , y ¯ l 2 ( 0 ) ) T :
y ¯ l 1 ( t ) = X l 1 ( t , 0 ) y ¯ l 1 ( 0 ) + ( W G ) ( t ) ( t 0 ) , y ¯ l 2 ( s ) = X l 2 ( s , 0 ) y ¯ l 2 ( 0 ) + τ = 0 s 1 X l 2 ( s , τ + 1 ) g ( τ ) h ( s N + ) ,
where X l 1 ( t , ς ) ( 0 ≤ ς ≤ t ) is an l 1 × l 1 -matrix whose columns are solutions of the system d y ¯ l 1 ( t ) = B ( t ) y ¯ l 1 ( t ) d t ( t ≥ 0 ) for a fixed ς . Here, X l 1 ( t , t ) ( t ≥ 0 ) is the identity matrix of dimension l 1 × l 1 , ( W G ) ( t ) = ∫ 0 t X l 1 ( t , ς ) g 1 ( ς ) d ς + ∑ i = 2 m ∫ 0 t X l 1 ( t , ς ) g i ( ς ) d B i ( ς ) , and X l 2 ( s , τ ) ( s , τ ∈ N + , 0 ≤ τ ≤ s ) is an l 2 × l 2 -matrix whose columns are solutions of the system y ¯ l 2 ( s + 1 ) = y ¯ l 2 ( s ) + B ¯ ( s ) y ¯ l 2 ( s ) h ( s ∈ N + ) for a fixed τ . Here, X l 2 ( s , s ) ( s ∈ N + ) is the identity matrix of dimension l 2 × l 2 .
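As a quick sanity check of the discrete variation-of-constants formula in (8), the scalar case can be verified directly; the values of b, h, the horizon and the inhomogeneity g below are arbitrary illustrative choices.

```python
# Scalar check of the discrete part of representation (8):
# the recursion y(s+1) = y(s) + (b*y(s) + g(s))*h has the closed form
# y(S) = (1 + b*h)^S * y0 + sum_{tau=0}^{S-1} (1 + b*h)^{S-1-tau} * g(tau) * h.
b, h, y0, S = -0.5, 1.0, 1.0, 10
g = [0.1 * (tau + 1) for tau in range(S)]      # assumed inhomogeneity g(tau)

y = y0
for s in range(S):
    y = y + (b * y + g[s]) * h                 # direct recursion

closed = (1 + b * h) ** S * y0 + sum((1 + b * h) ** (S - 1 - tau) * g[tau] * h for tau in range(S))
print(abs(y - closed) < 1e-12)                 # True: both give the same value
```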
By virtue of the auxiliary system (7), System (5) can be rewritten in the following equivalent form:
y ¯ ( t ) = X ( t ) y ¯ ( 0 ) + ( Θ y ¯ ) ( t ) + S ( t ) + ( K φ ¯ ) ( t ) ( t 0 ) ,
where X ( t ) is a block-diagonal matrix, with the matrices X l 1 ( t , 0 ) and X l 2 ( [ t ] , 0 ) on the main diagonal and with the zero matrices 0 ^ l 1 and 0 ^ l 2 of dimensions l 1 × l 2 and l 2 × l 1 , respectively, outside the main diagonal:
( Θ y ¯ ) ( t ) = ( W ( V 1 l Q ) y ¯ l 1 + G 1 y ¯ ) ( t ) , τ = 0 [ t ] 1 X l 2 ( [ t ] , τ + 1 ) ( ( V 3 n l y ¯ l 2 ) ( τ ) + B ¯ ( τ ) y ¯ l 2 ( τ ) + ( G 2 y ¯ ) ( τ ) ) ) T ,
S ( t ) = ( ( W S 1 ) ( t ) , τ = 0 [ t ] 1 X l 2 ( [ t ] , τ + 1 ) S 2 ( τ ) ) T , ( K φ ¯ ) ( t ) = ( W F 1 φ ¯ ) ( t ) , τ = 0 [ t ] 1 X l 2 ( [ t ] , τ + 1 ) ( F 2 φ ¯ ) ( τ ) ) T .
Theorem 1. 
Let X y ¯ ( 0 ) + S + K φ ¯ ∈ M ¯ q γ for any x ¯ ( 0 ) ∈ k q n , φ ∈ L q n and ‖ X y ¯ ( 0 ) + S + K φ ¯ ‖ M ¯ q γ ≤ c ¯ ( ‖ x ¯ ( 0 ) ‖ k q n + ‖ φ ‖ L q n ) for some positive number c ¯ , and let the operator Θ act in the space M ¯ q γ . If, in addition, the operator ( I − Θ ) : M ¯ q γ → M ¯ q γ is continuously invertible, then System (1) is M q γ y -stable.
Proof. 
Due to the continuous invertibility of the operator ( I Θ ) : M ¯ q γ M ¯ q γ , the equation ( I Θ ) y ¯ = g , where g M ¯ q γ has a unique solution from M ¯ q γ , i.e., y ¯ = ( I Θ ) 1 g M ¯ q γ and y ¯ M ¯ q γ ( I Θ ) 1 M ¯ q γ g M ¯ q γ . From here and from the conditions of the theorem, we obtain that ( I Θ ) 1 ( X y ¯ ( 0 ) + S + K φ ¯ ) M ¯ q γ for any x ¯ ( 0 ) k q n , φ L q n and the inequality ( I Θ ) 1 ( X y ¯ ( 0 ) + S + K φ ¯ ) M ¯ q γ c ¯ ( x ¯ ( 0 ) k q n + φ L q n ) holds for some positive number c ¯ . But on the other hand, y ¯ ( . , x ¯ ( 0 ) , φ ) = ( I Θ ) 1 ( X y ¯ ( 0 ) + S + K φ ¯ ) . Consequently, y ¯ ( . , x ¯ ( 0 ) , φ ) M ¯ q γ for any x ¯ ( 0 ) k q n , φ L q n and inequality (6) holds for it, and this means that System (1) is M q γ y -stable. □
Theorem 1 can be used to obtain sufficient conditions for partial stability of System (1) in terms of the parameters of this system, as is performed in the classical version of the regularization method [27] based on the invertibility of certain linear operators. On the other hand, the authors’ recent papers [28,29] demonstrate that the stability conditions are more accurate if component-wise estimates of the solutions are used. Below, we refine the latter approach to study the partial stability of System (1).
For a stochastic process y ¯ ( t ) = ( y ¯ l 1 ( t ) , y ¯ l 2 ( t ) ) T , we introduce the notation y ¯ γ ( q ) = ( x ¯ 1 γ ( q ) , … , x ¯ l 1 γ ( q ) , x ¯ l + 1 γ ( q ) , … , x ¯ l + l 2 γ ( q ) ) T , where x ¯ i γ ( q ) = sup t ≥ 0 ( E | γ ( t ) x ¯ i ( t ) | q ) 1 / q for
i = 1 , … , l 1 , and x ¯ i γ ( q ) = sup t ≥ 0 ( E | γ ( t ) x ¯ i ( [ t ] ) | q ) 1 / q for i = l + 1 , … , l + l 2 .
Assume that for some 1 q < and a positive continuous function γ : [ 0 , ) R 1 , we have managed to obtain a matrix inequality of the following form using component-wise estimates of the solution y ¯ ( t , b , φ ) = y ¯ ( t ) of System (8):
E ¯ y ¯ γ ( q ) ≤ C y ¯ γ ( q ) + c ¯ ‖ b ‖ k q n + c ^ ‖ φ ‖ L q n ,
where C is some non-negative matrix of dimension ( l 1 + l 2 ) × ( l 1 + l 2 ) , and c ¯ , c ^ are some l 1 + l 2 -dimensional column vectors, whose entries are non-negative numbers. Then, the following result holds true:
Theorem 2. 
Assume that there exists an auxiliary system (8) such that the matrix E ¯ − C in inequality (9) has a non-negative inverse (i.e., all entries of the inverse matrix are non-negative numbers). Then, System (1) is M q γ y -stable.
Proof. 
Using the above property of the matrix E ¯ − C , we rewrite inequality (9) as follows:
E ¯ y ¯ γ ( q ) ≤ ( E ¯ − C ) − 1 ( c ¯ ‖ b ‖ k q n + c ^ ‖ φ ‖ L q n ) ,
so that
| y ¯ γ ( q ) | c b k q n + φ L q n ,
where c = ( E ¯ C ) 1 max { | c ¯ | , | c ^ | } . Since y ¯ ( t , b , φ ) = y ¯ ( t ) for t 0 and y ¯ ( . , b , φ ) M ¯ q γ | y ¯ γ ( q ) | , it follows from inequality (10) that for any b k q n , φ L q n , the solutions y ¯ ( t , b , φ ) ( t ( , ) ) of problem (3a), (3b) satisfy the relation y ¯ ( . , b , φ ) M ¯ q γ and the inequality
y ¯ ( . , b , φ ) M ¯ q γ c ( b k q n + φ L q n ) ,
where c is some positive number. Therefore, System (1) is M q γ y -stable. □
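The key hypothesis of Theorem 2, namely that E ¯ − C is invertible with an entrywise non-negative inverse, is easy to test numerically for a concrete matrix C. For a non-negative C this holds exactly when the spectral radius of C is less than one, in which case ( E ¯ − C ) − 1 = ∑ k ≥ 0 C k is entrywise non-negative. A minimal sketch with assumed entries:

```python
import numpy as np

# Check the hypothesis of Theorem 2 for an assumed 2x2 matrix C (l1 = l2 = 1),
# e.g. with entries c_kr obtained from component-wise estimates as in inequality (9).
C = np.array([[0.3, 0.2],
              [0.1, 0.4]])

rho = max(abs(np.linalg.eigvals(C)))              # spectral radius of C
inv = np.linalg.inv(np.eye(2) - C)                # (E_bar - C)^{-1}
print("spectral radius:", rho)                    # 0.5 < 1
print("non-negative inverse:", bool((inv >= -1e-12).all()))
```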
Based on Theorem 2, verifiable conditions for partial moment stability of System (1) can be obtained in a rather efficient way. This is carried out in the next sections.

4. Sufficient Conditions for Partial Stability in the Stochastic Case

In this section, we study M q γ y -stability of System (1), i.e., stability in the sense of Definition 3. Recall that this definition includes all kinds of partial stability listed in Definition 2 if one considers the spaces M q γ with a special weight γ or without any weight ( γ = 1 ).
The three inequalities below are crucial for what follows.
( E | ∫ 0 t f ( ς ) d B ( ς ) | 2 p ) 1 / ( 2 p ) ≤ c p ( E ( ∫ 0 t | f ( ς ) | 2 d ς ) p ) 1 / ( 2 p ) ,
where f ( ς ) is a scalar progressively measurable stochastic process, integrable with respect to the Wiener process B ( ς ) on the interval [ 0 , t ] and c p is some number, which only depends on p. This result can, e.g., be found in the monograph [31] (p. 65), where specific estimates for c p are also given. Note that c 1 = 1 .
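For p = 1 (so that c 1 = 1 ), inequality (11) becomes the Itô isometry, which can be checked by a small Monte Carlo experiment; the integrand f ( ς ) = ς and the horizon t = 1 below are assumptions of this illustration.

```python
import numpy as np

# Monte Carlo check of (11) for p = 1: E|int_0^t f dB|^2 = E int_0^t f^2 ds (Ito isometry).
rng = np.random.default_rng(1)
t, n, n_paths = 1.0, 500, 5000
dt = t / n
s = np.arange(n) * dt
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
ito_integral = (s * dB).sum(axis=1)                  # int_0^t s dB(s), left-point rule

lhs = np.sqrt((ito_integral ** 2).mean())            # (E|int f dB|^2)^{1/2}
rhs = np.sqrt((s ** 2 * dt).sum())                   # (int_0^t s^2 ds)^{1/2} = sqrt(t^3/3)
print(f"LHS ~ {lhs:.3f}, RHS ~ {rhs:.3f}")           # both close to 0.577
```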
Two other inequalities are proven in [28]. It is assumed that g ( ς ) is a scalar and locally square integrable function on [ 0 , ∞ ) and f ( ς ) is a scalar stochastic process such that sup ς ≥ 0 ( E | f ( ς ) | 2 p ) 1 / ( 2 p ) < ∞ :
sup t ≥ 0 ( E | ∫ 0 t g ( ς ) f ( ς ) d ς | 2 p ) 1 / ( 2 p ) ≤ sup t ≥ 0 ∫ 0 t | g ( ς ) | d ς sup ς ≥ 0 ( E | f ( ς ) | 2 p ) 1 / ( 2 p ) ,
sup t ≥ 0 ( E ( ∫ 0 t g ( ς ) 2 f ( ς ) 2 d ς ) p ) 1 / ( 2 p ) ≤ sup t ≥ 0 ( ∫ 0 t g ( ς ) 2 d ς ) 1 / 2 sup ς ≥ 0 ( E | f ( ς ) | 2 p ) 1 / ( 2 p ) .
In the sequel, the notations and assumptions introduced in the previous sections are used. In addition, we have the following notational agreements: the entries of the matrix A i j ( t ) from System (1) are denoted by a k r i j ( t ) , k = 1 , , l , r = 1 , , n , t 0 , and the entries of the matrix A i ( s , j ) from this system are denoted by a k r i ( s , j ) , k = l + 1 , , n , r = 1 , , n s N + , j = 0 , , s ; the l 1 -dimensional vector ξ and the l 2 -dimensional vector ζ are combined as ( ξ , ζ ) T = ( ξ 1 , , ξ l 1 , ξ l + 1 , , ξ l + l 2 ) T .
  • Assumption set 1:
τ = 0 j = 0 τ a ¯ k r i ( τ , j ) < for i = 1 , , m , k , r = l + 1 , , l + l 2 ,
and there exist integrable functions a ¯ k r 1 j ( t ) ( t 0 ) , j = 1 , , m 1 , k , r = 1 , , l 1 , and square integrable functions a ¯ k r i j ( t ) ( t 0 ) , i = 2 , , m , j = 1 , , m i , k , r = 1 , , l 1 such that | a k r i j ( t ) | a ¯ k r i j ( t ) ( t 0 )   P × μ -almost everywhere for i = 1 , , m , j = 1 , , m i , k , r = 1 , , l 1 ;
Non-negative numbers a ¯ k r i ( s , j ) , i = 1 , , m , k , r = l + 1 , , l + l 2 , s N + , j = 0 , , s such that | a k r i ( s , j ) | a ¯ k r i ( s , j ) P-almost everywhere for i = 1 , , m , k , r = l + 1 , , l + l 2 , s N + , j = 0 , , s ;
Non-negative numbers d k , k = 1 , , l 1 , l + 1 , , l + l 2 such that
sup t 0 E 0 t ( S 1 ( s ) d Z ( s ) ) k 2 p 1 / 2 p d k b k 2 p n for k = 1 , , l 1 ,
sup t 0 E τ = 0 [ t ] 1 ( S 2 ( τ ) ) k 2 p 1 / 2 p d k b k 2 p n for k = l + 1 , , l + l 2 ;
Non-negative numbers d k r , k , r = 1 , , l 1 , l + 1 , , l + l 2 such that
sup t 0 E 0 t ( ( G 1 y ¯ ) ( s ) d Z ( s ) ) k 2 p 1 / 2 p r = 1 l 1 d k r x ¯ r 1 ( 2 p ) + r = l + 1 l + l 2 d k r x ¯ r 1 ( 2 p ) for k = 1 , , l 1 ,
sup t 0 E τ = 0 [ t ] 1 ( ( G 2 y ¯ ) ( τ ) ) k 2 p 1 / 2 p r = 1 l 1 d k r x ¯ r 1 ( 2 p ) + r = l + 1 l + l 2 d k r x ¯ r 1 ( 2 p ) for k = l + 1 , , l + l 2 ;
Non-negative numbers d k , k = 1 , , l 1 , l + 1 , , l + l 2 such that
sup t 0 E 0 t ( ( F 1 φ ¯ ) ( s ) d Z ( s ) ) k 2 p 1 / 2 p d k φ L q n for k = 1 , , l 1 ,
sup t 0 E τ = 0 [ t ] 1 ( ( F 2 φ ¯ ) ( τ ) ) k 2 p 1 / 2 p d k φ L q n for k = l + 1 , , l + l 2 .
Let us define the entries ( l 1 + l 2 ) × ( l 1 + l 2 ) of the matrix C as follows:
c k r = j = 1 m 1 0 a ¯ k r 1 j ( ς ) d ς + c p i = 2 m j = 1 m i 0 a ¯ k r i j ( ς ) 2 d ς 1 / 2 + d k r , k , r = 1 , , l 1 , c k r = d k ( r l 1 + l ) , k = 1 , , l 1 , r = l 1 + 1 , , l 1 + l 2 , c k r = d k r , k = l 1 + 1 , , l 1 + l 2 , r = 1 , , l 1 , c k r = h τ = 0 j = 0 τ a ¯ k r 1 ( τ , j ) + c p h i = 2 m τ = 0 j = 0 τ a ¯ k r i ( τ , j ) + d k ( r l 1 + l ) , k , r = l 1 + 1 , , l 1 + l 2 .
Then, we have
Theorem 3. 
If, under Assumption set 1, the matrix E ¯ − C has a non-negative inverse, then System (1) is M 2 p y -stable.
Proof. 
We apply Theorem 2 for q = 2 p and γ ( t ) 1 ( t 0 ) . As an auxiliary system (7), we take a system in which the entries of the matrices B ( t ) , B ¯ ( s ) are identically equal to zero. In this case, the matrices X l 1 ( t , ς ) ( t 0 , 0 ς t ) , X l 2 ( s , τ ) ( s , τ N + , 0 τ s ) are the identity matrices of dimension l 1 × l 1 and l 2 × l 2 , respectively. We rewrite representation (8) as
x ¯ k ( t ) = b k j = 1 m 1 r = 1 l 1 0 t a k r 1 j ( ς ) x ¯ r ( h 1 j ( ς ) ) d ς + i = 2 m j = 1 m i r = 1 l 1 0 t a k r i j ( ς ) x ¯ r ( h i j ( ς ) ) d B i ( ς ) + 0 t ( S 1 ( ς ) d Z ( ς ) ) k + 0 t ( ( G 1 y ¯ ) ( ς ) d Z ( ς ) ) k + 0 t ( ( F 1 φ ¯ ) ( ς ) d Z ( ς ) ) k ( t 0 ) , k = 1 , , l 1 , x ¯ k ( [ t ] ) = b k τ = 0 [ t ] 1 j = 0 τ r = l + 1 l + l 2 a k r 1 ( τ , j ) x ¯ r ( j ) h + i = 2 m τ = 0 [ t ] 1 j = 0 τ r = l + 1 l + l 2 a k r i ( τ , j ) x ¯ r ( j ) τ h ( τ + 1 ) h d B i ( ς ) + τ = 0 [ t ] 1 ( S 2 ( τ ) ) k + τ = 0 [ t ] 1 ( ( G 2 y ¯ ) ( τ ) ) k + τ = 0 [ t ] 1 ( F 2 φ ¯ ) ( τ ) ) k ( t 0 ) , k = l + 1 , , l + l 2 .
From this representation, the conditions of the theorem and inequalities (11)–(13), and also taking into account the estimates ( E | b k | 2 p ) 1 / ( 2 p ) b k 2 p n ( k = 1 , , n ) , we obtain
x ¯ k 1 ( 2 p ) ( 1 + d k ) b k 2 p n + r = 1 l 1 ( c k r + d k r ) x k 1 ( 2 p ) + r = l + 1 l + l 2 d k r x k 1 ( 2 p ) + d k b k 2 p n + d k φ L 2 p n , k = 1 , , l 1 , x ¯ k 1 ( 2 p ) ( 1 + d k ) b k 2 p n + r = 1 l 1 d k r x k 1 ( 2 p ) + r = l + 1 l + l 2 ( c k r + d k r ) x k 1 ( 2 p ) + d k b k 2 p n + d k φ L 2 p n , k = l + 1 , , l 2 .
We rewrite the last system of inequalities in the matrix form
E ¯ y ¯ 1 ( 2 p ) C y ¯ 1 ( 2 p ) + b k q n e + φ L q n e ^ ,
where e = ( 1 + d 1 , … , 1 + d l 1 , 1 + d l + 1 , … , 1 + d l + l 2 ) T and e ^ = ( d 1 , … , d l 1 , d l + 1 , … , d l + l 2 ) T are ( l 1 + l 2 ) -dimensional column vectors. Since the matrix E ¯ − C has a non-negative inverse, System (1) is M 2 p y -stable by virtue of Theorem 2. □
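To illustrate how the hypothesis of Theorem 3 can be verified in practice, the following sketch assembles the matrix C for a toy configuration with l 1 = l 2 = 1 , m = 2 , m 1 = m 2 = 1 and p = 1 (so c p = 1 ). The exponentially and geometrically decaying coefficient bounds, the cross-coupling constants d k r and the √ h factor in the discrete diffusion term are all assumptions of this illustration, not data taken from the paper.

```python
import numpy as np

# Assemble a toy 2x2 matrix C in the spirit of the definition preceding Theorem 3
# (l1 = l2 = 1, m = 2, m1 = m2 = 1, p = 1, so c_p = 1). All bounds are assumptions.
c_p, h = 1.0, 1.0

# Continuous block: abar^{11}(t) = 0.3*exp(-t) (drift), abar^{21}(t) = 0.2*exp(-t) (diffusion).
I_drift = 0.3                              # int_0^inf 0.3*exp(-t) dt
I_diff = np.sqrt(0.2 ** 2 / 2.0)           # (int_0^inf (0.2*exp(-t))^2 dt)^{1/2}

# Discrete block: abar(tau, j) = 0.1 * 2^{-tau} * 2^{-(tau - j)}, summable over tau and j.
S = sum(0.1 * 2.0 ** (-tau) * 2.0 ** (-(tau - j)) for tau in range(60) for j in range(tau + 1))

d12, d21 = 0.05, 0.05                      # assumed cross-coupling constants d_kr

C = np.array([[I_drift + c_p * I_diff, d12],
              [d21, h * S + c_p * np.sqrt(h) * S]])

inv = np.linalg.inv(np.eye(2) - C)
print("C =\n", C)
print("(E_bar - C)^{-1} non-negative entrywise:", bool((inv >= 0).all()))
```

Whenever the printed check succeeds (and the remaining items of Assumption set 1 hold with these constants), Theorem 3 yields M 2 -stability of the toy system with respect to the chosen components.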
To be able to formulate the next theorem, we need the following.
  • Assumption set 2:
h 11 ( t ) = t ( t 0 ) ;
The diagonal entries of the matrices ( A 11 ( t ) ) 1 ( t 0 ) , ( A 1 ( s , s ) ) 3 ( s N + ) are of the form a k k 11 ( t ) + λ k ( t 0 ) , k = 1 , , l 1 and a k k 1 ( s , s ) + λ k ( s N + ) , k = l + 1 , , l + l 2 , respectively, where λ k , k = 1 , , l 1 , l + 1 , , l + l 2 are some positive numbers, and 0 < λ k h < 1 , k = l + 1 , , l + l 2 .
  • In addition, there exist the following:
Non-negative numbers a ¯ k r i j , i = 1 , , m , j = 1 , , m i , k , r = 1 , , l 1 such that | a k r i j ( t ) | a ¯ k r i j ( t 0 ) P × μ -almost everywhere for i = 1 , , m , j = 1 , , m i , k , r = 1 , , l 1 ;
Non-negative numbers a ¯ k r i ( s , j ) , i = 1 , , m , k , r = l + 1 , , l + l 2 , s N + , j = 0 , , s such that | a k r i ( s , j ) | a ¯ k r i ( s , j ) P-almost everywhere for i = 1 , , m , k , r = l + 1 , , l + l 2 , s N + , j = 0 , , s , and sup τ N + j = 0 τ a ¯ k r i ( τ , j ) < for i = 1 , , m , k , r = l + 1 , , l + l 2 ;
Non-negative numbers d k , k = 1 , , l 1 , l + 1 , , l + l 2 such that
sup t 0 E 0 t exp λ k ( t ς ) ( S 1 ( s ) d Z ( s ) ) k 2 p 1 / 2 p d k b k 2 p n for k = 1 , , l 1 ,
sup t 0 E τ = 0 [ t ] 1 ( 1 λ k h ) [ t ] τ ( S 2 ( τ ) ) k 2 p 1 / 2 p d k b k 2 p n for k = l + 1 , , l + l 2 ;
Non-negative numbers d k r , k , r = 1 , , l 1 , l + 1 , , l + l 2 such that
sup t 0 E 0 t exp λ k ( t ς ) ( ( G 1 y ¯ ) ( s ) d Z ( s ) ) k 2 p 1 / 2 p r = 1 l 1 d k r x ¯ r 1 ( 2 p ) + r = l + 1 l + l 2 d k r x ¯ r 1 ( 2 p )
for k = 1 , , l 1 ,
sup t 0 E τ = 0 [ t ] 1 ( 1 λ k h ) [ t ] τ ( ( G 2 y ¯ ) ( τ ) ) k 2 p 1 / 2 p r = 1 l 1 d k r x ¯ r 1 ( 2 p ) + r = l + 1 l + l 2 d k r x ¯ r 1 ( 2 p )
for k = l + 1 , , l + l 2 ;
Non-negative numbers d k , k = 1 , , l 1 , l + 1 , , l + l 2 such that
sup t 0 E 0 t exp λ k ( t ς ) ( ( F 1 φ ¯ ) ( s ) d Z ( s ) ) k 2 p 1 / 2 p d k φ L q n for k = 1 , , l 1 ,
sup t 0 E τ = 0 [ t ] 1 ( 1 λ k h ) [ t ] τ ( F 2 φ ¯ ) ( τ ) ) k 2 p 1 / 2 p d k φ L q n for k = l + 1 , , l + l 2 .
Defining the entries of the ( l 1 + l 2 ) × ( l 1 + l 2 ) -matrix C as
c k r = 1 λ k j = 1 m 1 a ¯ k r 1 j + c p 2 λ k i = 2 m j = 1 m i a ¯ k r i j + d k r , k , r = 1 , , l 1 , c k r = d k ( r l 1 + l ) , k = 1 , , l 1 , r = l 1 + 1 , , l 1 + l 2 , c k r = d k r , k = l 1 + 1 , , l 1 + l 2 , r = 1 , , l 1 , c k r = 1 λ k sup τ N + j = 0 τ a ¯ k r 1 ( τ , j ) + c p λ k h i = 2 m sup τ N + j = 0 τ a ¯ k r i ( τ , j ) + d k ( r l 1 + l ) , k , r = l 1 + 1 , , l 1 + l 2 .
we obtain
Theorem 4. 
If, under Assumption set 2, the matrix E ¯ − C has a non-negative inverse, then System (1) is M 2 p y -stable.
Proof. 
Let us again use Theorem 2 for q = 2 p and γ ( t ) 1 ( t 0 ) . As an auxiliary system (7), we take a system in which B ( t ) , B ¯ ( s ) are constant diagonal matrices with the diagonal entries λ k , k = 1 , , l 1 and λ k , k = l + 1 , , l + l 2 , respectively. In this case, X l 1 ( t , ς ) ( t 0 , 0 ς t ) , X l 2 ( s , τ ) ( s , τ N + , 0 τ s ) are also diagonal matrices with the diagonal entries x ^ k ( t , ς ) = exp { λ k ( t ς ) } ( t 0 , 0 ς t ) , k = 1 , , l 1 and x ˜ k ( s , τ ) = ( 1 λ k h ) s τ ( s , τ N + , 0 τ s ) , k = l + 1 , , l + l 2 , respectively. Then, System (8) can be written in the following form:
x ¯ k ( t ) = x ^ k ( t , 0 ) b k j = 1 m 1 r = 1 l 1 0 t x ^ k ( t , ς ) a k r 1 j ( ς ) x ¯ r ( h 1 j ( ς ) ) d ς + i = 2 m j = 1 m i r = 1 l 1 0 t x ^ k ( t , ς ) a k r i j ( ς ) x ¯ r ( h i j ( ς ) ) d B i ( ς ) + 0 t x ^ k ( t , ς ) ( S 1 ( ς ) d Z ( ς ) ) k + 0 t x ^ k ( t , ς ) ( ( G 1 y ¯ ) ( ς ) d Z ( ς ) ) k + 0 t x ^ k ( t , ς ) ( ( F 1 φ ¯ ) ( ς ) d Z ( ς ) ) k ( t 0 ) , k = 1 , , l 1 , x ¯ k ( [ t ] ) = x ˜ k ( [ t ] , 0 ) b k τ = 0 [ t ] 1 x ˜ k ( [ t ] , τ + 1 ) j = 0 τ r = l + 1 l + l 2 a k r 1 ( τ , j ) x ¯ r ( j ) h + i = 2 m τ = 0 [ t ] 1 x ˜ k ( [ t ] , τ + 1 ) j = 0 τ r = l + 1 l + l 2 a k r i ( τ , j ) x ¯ r ( j ) τ h ( τ + 1 ) h d B i ( ς ) + τ = 0 [ t ] 1 x ˜ k ( [ t ] , τ + 1 ) ( S 2 ( τ ) ) k + τ = 0 [ t ] 1 x ˜ k ( [ t ] , τ + 1 ) ( ( G 2 y ¯ ) ( τ ) ) k + τ = 0 [ t ] 1 x ˜ k ( [ t ] , τ + 1 ) ( ( F 2 φ ¯ ) ( τ ) ) k ( t 0 ) , k = l + 1 , , l + l 2 .
From this system, taking into account the conditions of the theorem, inequalities (11)–(13), the estimate ( E | b k | 2 p ) 1 / ( 2 p ) b k 2 p n ( k = 1 , , n ) and the equalities
sup t 0 0 t x ^ k ( t , ς ) d ς = 1 λ k , sup t 0 0 t x ^ k ( t , ς ) 2 d ς 1 / 2 = 1 2 λ k ( k = 1 , , l ) ,
sup t 0 τ = 0 [ t ] 1 x ˜ k ( [ t ] , τ + 1 ) = sup t 0 ( 1 + ( 1 λ k h ) + + ( 1 λ k h ) [ t ] 1 = τ = 0 ( 1 λ k h ) τ = 1 1 ( 1 λ k h ) = 1 λ k h ( k = l + 1 , , n ) ,
we deduce
x ¯ k 1 ( 2 p ) ( 1 + d k ) b k 2 p n + r = 1 l 1 ( c k r + d k r ) x k 1 ( 2 p ) + r = l + 1 l + l 2 d k r x k 1 ( 2 p ) + d k b k 2 p n + d k φ L 2 p n , k = 1 , , l 1 , x ¯ k 1 ( 2 p ) ( 1 + d k ) b k 2 p n + r = 1 l 1 d k r x k 1 ( 2 p ) + r = l + 1 l + l 2 ( c k r + d k r ) x k 1 ( 2 p ) + d k b k 2 p n + d k φ L 2 p n , k = l + 1 , , l 2 .
Let us rewrite the last system of inequalities in the matrix form
E ¯ y ¯ 1 ( 2 p ) C y ¯ 1 ( 2 p ) + b k q n e + φ L q n e ^ ,
where e = ( 1 + d 1 , … , 1 + d l 1 , 1 + d l + 1 , … , 1 + d l + l 2 ) T and e ^ = ( d 1 , … , d l 1 , d l + 1 , … , d l + l 2 ) T are ( l 1 + l 2 ) -dimensional column vectors. As the matrix E ¯ − C has a non-negative inverse, System (1) is M 2 p y -stable by virtue of Theorem 2. □
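The geometric-sum identity used in the proof above can be confirmed numerically for assumed values of λ k and h with 0 < λ k h < 1 :

```python
# Check of sup_t sum_{tau=0}^{[t]-1} (1 - lam*h)^{[t]-1-tau} = 1/(lam*h) for 0 < lam*h < 1.
lam, h = 0.3, 1.0
partial = sum((1 - lam * h) ** tau for tau in range(200))
print(partial, 1 / (lam * h))   # both approximately 3.3333
```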
In the remaining part of the section, we study the exponential partial stability of System (1). To this end, we put γ ( t ) = exp { λ t } ( t 0 ) , where λ is a positive number, h ¯ i j is a Borel measurable function defined on ( , ) and such that h ¯ i j ( t ) = h i j ( t ) for t 0 and h ¯ i j ( t ) = 0 for t < 0 , while I k are some subsets of the set { 1 , , m 1 } , i.e., I k { 1 , , m 1 } for k = 1 , , l 1 .
The next theorem addresses the $M^y_{2p\gamma}$-stability of System (1) with an exponential weight $\gamma(t)$. This theorem is a source of more specific results on the partial exponential $2p$-stability of System (1) with respect to the initial data. Note that numerous examples (see, e.g., [27]) show that the exponential stability of deterministic delay differential equations is, as a rule, observed only in the case of bounded delays. This explains, in particular, the first of the conditions imposed on System (1) in Theorem 5 below.
  • Assumption set 3:
  • Suppose there exist the following:
Non-negative numbers $\tau_{ij}$ such that $0\le t-h_{ij}(t)\le\tau_{ij}$ $(t\ge0)$ $\mu$-almost everywhere;
Non-negative numbers $\bar a_{krij}$, $i=1,\ldots,m$, $j=1,\ldots,m_i$, $k,r=1,\ldots,l_1$ such that $|a_{krij}(t)|\le\bar a_{krij}$ $(t\ge0)$ $P\times\mu$-almost everywhere for $i=1,\ldots,m$, $j=1,\ldots,m_i$, $k,r=1,\ldots,l_1$;
Positive numbers $\lambda_k$, $k=1,\ldots,l_1,l+1,\ldots,l+l_2$, for which $\sum_{j\in I_k}a_{kk1j}(t)\ge\lambda_k$ $(t\ge0)$ $P\times\mu$-almost everywhere for $k=1,\ldots,l_1$, the diagonal entries of the matrix $(A_1(s,s))_3$ $(s\in\mathbb{N}_+)$ have the form $a_{kk1}(s,s)+\lambda_k$ $(s\in\mathbb{N}_+)$, $k=l+1,\ldots,l+l_2$, and $0<\lambda_kh<1$ for $k=l+1,\ldots,l+l_2$;
$d_i\in\mathbb{N}_+$, $i=1,\ldots,m$, for which the entries of the matrices $(A_i(s,j))_3$ are equal to zero $P$-almost everywhere for $s\in\mathbb{N}_+$, $j=0,\ldots,s-d_i-1$, $i=1,\ldots,m$;
Non-negative numbers $\bar a_{kri}(s,j)$ such that $|a_{kri}(s,j)|\le\bar a_{kri}(s,j)$ $P$-almost everywhere for $i=1,\ldots,m$, $k,r=l+1,\ldots,l+l_2$, $s\in\mathbb{N}_+$, $j=s-d_i,\ldots,s$, and for all $i=1,\ldots,m$, $k,r=l+1,\ldots,l+l_2$
$$\sup_{\tau\in\mathbb{N}_+}\sum_{j=\nu_i(\tau)}^{\tau}\bar a_{kri}(\tau,j)<\infty,$$
where $\nu_i(\tau)=0$ for $0\le\tau\le d_i$ and $\nu_i(\tau)=\tau-d_i$ for $\tau>d_i$;
Continuous on some interval [ 0 , λ 0 ] ( λ 0 > 0 ) functions d ¯ k ( λ ) , d ^ ν ( λ ) , k = 1 , , l 1 ,
l + 1 , , l + l 2 , ν = 1 , , l 1 such that
sup t 0 E γ ( t ) h ¯ 1 j ( t ) t ( ( S 1 ( s ) ) Z ( s ) ) k 2 p 1 / 2 p d ^ k ( λ ) b k 2 p n for j I k , k = 1 , , l 1 ,
sup t 0 E γ ( t ) 0 t exp ς t j I k a k k 1 j ( ς ) d ς ( S 1 ( s ) d Z ( s ) ) k 2 p 1 / 2 p d ¯ k ( λ ) b k 2 p n for k = 1 , , l 1 ,
sup t 0 E γ ( t ) τ = 0 [ t ] 1 ( 1 λ k h ) [ τ ] τ ( S 2 ( τ ) ) k 2 p 1 / 2 p d ¯ k ( λ ) b k 2 p n for k = l + 1 , , l + l 2 ;
Functions d ¯ k r ( λ ) , d ^ ν r ( λ ) , k , r = 1 , , l 1 , l + 1 , , l + l 2 , ν = 1 , , l 1 , continuous on the interval [ 0 , λ 0 ] and such that
sup t 0 E γ ( t ) h ¯ 1 j ( t ) t ( ( G 1 y ¯ ) ( s ) d Z ( s ) ) k 2 p 1 / 2 p r = 1 l 1 d ^ k r ( λ ) x ¯ r γ ( 2 p ) + r = l + 1 l + l 2 d ^ k r ( λ ) x ¯ r γ ( 2 p )
for j I k , k = 1 , , l 1 ,
sup t 0 E γ ( t ) 0 t exp ς t j I k a k k 1 j ( ς ) d ς ( ( G 1 y ¯ ) ( s ) d Z ( s ) ) k 2 p 1 / 2 p r = 1 l 1 d ¯ k r ( λ ) x ¯ r γ ( 2 p ) + r = l + 1 l + l 2 d ¯ k r ( λ ) x ¯ r γ ( 2 p ) for k = 1 , , l 1 ,
sup t 0 E γ ( t ) τ = 0 [ t ] 1 ( 1 λ k h ) [ τ ] τ ( ( G 2 y ¯ ) ( τ ) ) k 2 p 1 / 2 p r = 1 l 1 d ¯ k r ( λ ) x ¯ r γ ( 2 p ) + r = l + 1 l + l 2 d ¯ k r ( λ ) x ¯ r γ ( 2 p )
for k = l + 1 , , l + l 2 ;
Functions d ¯ k ( λ ) , d ^ ν ( λ ) , k = 1 , , l 1 , l + 1 , , l + l 2 , ν = 1 , , l 1 , continuous on [ 0 , λ 0 ] and such that
sup t 0 E γ ( t ) h ¯ 1 j ( t ) t ( ( F 1 φ ¯ ) ( s ) d Z ( s ) ) k 2 p 1 / 2 p d ^ k ( λ ) φ L q n for j I k , k = 1 , , l 1 ,
sup t 0 E γ ( t ) 0 t exp ς t j I k a k k 1 j ( ς ) d ς ( ( F 1 φ ¯ ) ( s ) d Z ( s ) ) k 2 p 1 / 2 p d ¯ k ( λ ) φ L q n
for k = 1 , , l 1 ,
sup t 0 E γ ( t ) τ = 0 [ t ] 1 ( 1 λ k h ) [ τ ] τ ( F 2 φ ¯ ) ( τ ) ) k 2 p 1 / 2 p d ¯ k ( λ ) φ L q n for k = l + 1 , , l + l 2 .
The entries of the n × n -matrix C are defined as
c k k = 1 λ k j I k a ¯ k k 1 j γ ( τ 1 j ) ν = 1 m 1 a ¯ k k 1 ν γ ( τ 1 ν ) τ 1 j + c p i = 2 m ν = 1 m i a ¯ k r i ν γ ( τ i ν ) τ 1 j + d ^ k k ( 0 ) + j = 1 , j { 1 , , m 1 } / I k m 1 a ¯ k k 1 j γ ( τ 1 j ) + c p 2 λ k i = 2 m j = 1 m i a ¯ k k i j γ ( τ i j ) + d ¯ k k ( 0 ) , k = 1 , , l 1 ,
c k r = 1 λ k j I k a ¯ k k 1 j γ ( τ 1 j ) ν = 1 m 1 a ¯ k k 1 ν γ ( τ 1 ν ) τ 1 j + c p i = 2 m ν = 1 m i a ¯ k r i ν γ ( τ i ν ) τ 1 j + d ^ k r ( 0 ) + j = 1 m 1 a ¯ k r 1 j γ ( τ 1 j ) + c p 2 λ k i = 2 m j = 1 m i a ¯ k k i j γ ( τ i j ) + d ¯ k r ( 0 ) , k , r = 1 , , l 1 , k r , c k r = d ¯ k ( r l 1 + l ) ( 0 ) , k = 1 , , l 1 , r = l 1 + 1 , , l 1 + l 2 , c k r = d ¯ k r ( 0 ) , k = l 1 + 1 , , l 1 + l 2 , r = 1 , , l 1 , c k r = 1 1 exp { ln ( 1 λ k h ) } h γ ( d 1 + 1 ) sup τ N + j = ν 1 ( τ ) τ a ¯ k r 1 ( τ , j ) + c p h i = 2 m γ ( d i + 1 ) sup τ N + j = ν i ( τ ) τ a ¯ k r i ( τ , j ) + d ¯ k ( r l 1 + l ) ( 0 ) , k , r = 1 + 1 , , l + l 2 ,
Theorem 5. 
If, under Assumption set 3, the matrix $\bar E-C$ is positive invertible (i.e., all entries of the inverse matrix are positive), then System (1) is $M^y_{2p\gamma}$-stable with the exponential weight $\gamma(t)=\exp\{\lambda t\}$ $(t\ge0)$, where
$$0<\lambda<\min\{\lambda_i,\ i=1,\ldots,l;\ -\ln(1-\lambda_ih),\ i=l+1,\ldots,n\}.$$
Proof. 
Setting q = 2 p , we use the scheme of the proof of the two previous theorems. In the auxiliary system (7), we define B ( t ) , B ¯ ( s ) to be the diagonal matrices with the diagonal entries j I k a k k 1 j ( t ) , k = 1 , , l and λ k , k = l + 1 , , n , respectively. In this case, X l 1 ( t , ς ) ( t 0 , 0 ς t ) , X l 2 ( s , τ ) ( s , τ N + , 0 τ s ) are also diagonal matrices with diagonal entries x ^ k ( t , ς ) = exp { ς t j I k a k k 1 j ( ς ) d ς } ( t 0 , 0 ς t ) , k = 1 , , l and x ˜ k ( s , τ ) = ( 1 λ k h ) s τ ( s , τ N + , 0 τ s ) , k = l + 1 , , n , respectively. Then, System (8) can be rewritten in the following form:
x ¯ k ( t ) = x ^ k ( t , 0 ) b k + j I k 0 t x ^ k ( t , ς ) a k k 1 j ( ς ) h 1 j ( ς ) ς d x ¯ k ( ζ ) d ς j { 1 , , m 1 } / I k 0 t x ^ k ( t , ς ) a k k 1 j ( ς ) x ¯ k ( h 1 j ( ς ) ) d ς j = 1 m 1 r = 1 , r k l 1 0 t x ^ k ( t , ς ) a k r 1 j ( ς ) x ¯ r ( h 1 j ( ς ) ) d ς d ς + i = 2 m j = 1 m i r = 1 l 1 0 t x ^ k ( t , ς ) a k r i j ( ς ) x ¯ r ( h i j ( ς ) ) d B i ( ς ) d B i ( ς ) + 0 t x ^ k ( t , ς ) ( S 1 ( ς ) d Z ( ς ) ) k + 0 t x ^ k ( t , ς ) ( ( G 1 y ¯ ) ( ς ) d Z ( ς ) ) + 0 t x ^ k ( t , ς ) ( ( F 1 φ ¯ ) ( ς ) d Z ( ς ) ) k , t 0 , k = 1 , , l 1 , x ¯ k ( [ t ] ) = x ˜ k ( [ t ] , 0 ) b k τ = 0 [ t ] 1 j = ν 1 ( τ ) τ x ˜ k ( [ t ] , τ + 1 ) r = l + 1 l + l 2 a k r 1 ( τ , j ) x ¯ r ( j ) h + i = 2 m τ = 0 [ t ] 1 x ˜ k ( [ t ] , τ + 1 ) j = ν i ( τ ) τ r = l + 1 l + l 2 a k r i ( τ , j ) x ¯ r ( j ) τ h ( τ + 1 ) h d B i ( ς ) + τ = 0 [ t ] 1 x ˜ k ( [ t ] , τ + 1 ) ( S 2 ( τ ) ) k + τ = 0 [ t ] 1 x ˜ k ( [ t ] , τ + 1 ) ( ( G 2 y ¯ ) ( τ ) ) k + τ = 0 [ t ] 1 x ˜ k ( [ t ] , τ + 1 ) ( ( F 2 φ ¯ ) ( τ ) ) k t 0 , k = l + 1 , , l + l 2 .
From the previous system, the conditions of the theorem and inequalities (11)–(13), and also taking into account that
d x ¯ k ( ζ ) = ν = 1 m 1 r = 1 l 1 a k r 1 ν ( ζ ) x ¯ r ( h 1 ν ( ζ ) ) d ζ + i = 2 m ν = 1 m i r = 1 l 1 a k r i ν ( ζ ) x ¯ r ( h i ν ( ζ ) ) d B i ( ζ ) + ( S 1 ( t ) ) k d Z ( t ) + ( ( G 1 y ¯ ) ( t ) ) k d Z ( t ) + ( ( F 1 φ ¯ ) ( t ) ) k d Z ( t ) ,
E | b k | 2 p ) 1 / ( 2 p ) b k 2 p n ( k = 1 , , n ) , sup t 0 0 t x ^ k ( t , ς ) γ ( t ) γ ( ς ) 1 d ς 1 λ k λ , sup t 0 0 t ( x ^ k ( t , ς ) γ ( t ) γ ( ς ) 1 ) 2 d ς 1 / 2 1 2 ( λ k λ ) ( k = 1 , , l 1 ) , λ < max { λ i , i = 1 , , l 1 , ln ( 1 λ i h ) , i = l + 1 , , l + l 2 } ,
sup t 0 γ ( t ) τ = 0 [ t ] 1 x ˜ k ( [ t ] , τ + 1 ) γ ( τ + 1 ) 1 = sup t 0 ( ( 1 + exp { ln ( 1 λ k h ) + λ } + + exp { [ t ] ( ln ( 1 λ k h ) + λ ) } = τ = 0 exp { τ ( ln ( 1 λ k h ) + λ ) } = 1 1 exp { ln ( 1 λ k h ) + λ } , k = l + 1 , , l + l 2 ,
γ ( t ) γ ( h ¯ i j ( t ) ) 1 γ ( τ i j ) ( t 0 ) , i = 1 , , m , j = 1 , , m i , sup τ N + j = ν i ( τ ) τ γ ( τ + 1 ) γ ( j ) 1 a ^ k r i ( τ , j ) γ ( d i + 1 ) sup τ N + j = ν i ( τ ) τ a ^ k r i ( τ , j ) ,
where i = 1 , , m , k , r = l + 1 , , l + l 2 , and finally,
sup ς 0 E γ ( ς ) h 1 j ( ς ) ς d x ¯ k ( ζ ) 2 p 1 / ( 2 p ) = sup ς 0 E γ ( ς ) h ¯ 1 j ( ς ) ς ν = 1 m 1 r = 1 l 1 a k r 1 ν ( ζ ) x ¯ r ( h 1 ν ( ζ ) ) d ζ + i = 2 m ν = 1 m i r = 1 l 1 a k r i ν ( ζ ) x ¯ r ( h i ν ( ζ ) ) d B i ( ζ ) + ( S 1 ( ζ ) ) k d Z ( ζ ) + ( ( G 1 y ¯ ) ( ζ ) ) k d Z ( ζ ) + ( ( F 1 φ ¯ ) ( ζ ) ) k d Z ( ζ ) 2 p 1 / ( 2 p ) ν = 1 m 1 r = 1 l 1 sup ς 0 E γ ( ς ) h ¯ 1 j ( ς ) ς a k r 1 ν ( ζ ) x ¯ r ( h 1 ν ( ζ ) ) d ζ 2 p 1 / ( 2 p ) + i = 2 m ν = 1 m i r = 1 l 1 sup ς 0 E γ ( ς ) h ¯ 1 j ( ς ) ς a k r i ν ( ζ ) ( x ¯ r ( h i ν ( ζ ) ) d B i ( ζ ) 2 p 1 / ( 2 p ) + sup ς 0 E γ ( ς ) h ¯ 1 j ( ς ) ς ( S 1 ( ζ ) ) k d Z ( ζ ) 2 p 1 / ( 2 p ) + sup ς 0 E γ ( ς ) h ¯ 1 j ( ς ) ς ( ( G 1 y ¯ ) ( ζ ) ) k d Z ( ζ ) 2 p 1 / ( 2 p ) + sup ς 0 E γ ( ς ) h ¯ 1 j ( ς ) ς ( ( F 1 φ ¯ ) ( ζ ) ) k d Z ( ζ ) 2 p 1 / ( 2 p ) ν = 1 m 1 r = 1 l 1 a ¯ k r 1 ν sup ς 0 γ ( ς ) h ¯ 1 j ( ς ) ς γ ( ζ ) 1 γ ( ζ ) γ ( h ¯ 1 ν ( ζ ) ) 1 d ζ x ¯ r γ ( 2 p ) + c p i = 2 m ν = 1 m i r = 1 l 1 a ¯ k r i ν sup ς 0 γ ( ς ) 2 h ¯ 1 j ( ς ) ς γ ( ζ ) 2 γ ( ζ ) 2 γ ( h ¯ i ν ( ζ ) ) 2 d ζ 1 / 2 x ¯ r γ ( 2 p ) + d ^ ( λ ) b k 2 p n + r = 1 l 1 d ^ k r ( λ ) x ¯ r γ ( 2 p ) + r = l + 1 l + l 2 d ^ i r ( λ ) x ¯ r γ ( 2 p ) + d ^ k ( λ ) φ L q n r = 1 l 1 γ ( τ 1 j ) ν = 1 m 1 a ¯ k r 1 ν γ ( τ 1 ν ) τ 1 j + c p i = 2 m ν = 1 m i a ¯ k r i ν γ ( τ i ν ) τ 1 j + d ^ k r ( λ ) x ¯ r γ ( 2 p ) + r = l + 1 l + l 2 d ^ i r ( λ ) x ¯ r γ ( 2 p ) + d ^ k ( λ ) φ L q n ,
where k = 1 , , l 1 , j I k , we obtain
x ¯ k γ ( 2 p ) ( 1 + d ¯ k ( λ ) ) | b k 2 p n + 1 λ k λ × j I k a ¯ k k 1 j r = 1 l 1 γ ( τ 1 j ) ν = 1 m 1 a ¯ k r 1 ν γ ( τ 1 ν ) τ 1 j + c p i = 2 m ν = 1 m i a ¯ k r i ν γ ( τ i ν ) τ 1 j + d ^ k r ( λ ) x ¯ r γ ( 2 p ) + 1 λ k λ j I k a ¯ k k 1 j r = l + 1 l + l 2 d ^ i r ( λ ) x ¯ r γ ( 2 p ) + 1 λ k λ j { 1 , , m 1 } / I k a ¯ k k 1 j γ ( τ 1 j ) x ¯ k γ ( 2 p ) + 1 λ k λ j = 1 m 1 r = 1 , r k n a ¯ k r 1 j γ ( τ 1 j ) x ¯ r γ ( 2 p ) + 1 λ k λ j I k a ¯ k k 1 j r = l + 1 l + l 2 d ^ i r ( λ ) x ¯ r γ ( 2 p ) + c p 2 ( λ k λ ) i = 2 m j = 1 m i r = 1 l 1 a ¯ k r i j γ ( τ i j ) x ¯ r γ ( 2 p ) + r = 1 l 1 d ¯ k r ( λ ) x ¯ r γ ( 2 p ) + r = l + 1 l + l 2 d ¯ i r ( λ ) x ¯ r γ ( 2 p ) + 1 λ k λ j I k a ¯ k k 1 j d ^ k ( λ ) + d ¯ k ( λ ) φ L q n , k = 1 , , l 1 ,
x ¯ k γ ( 2 p ) ( 1 + d ¯ k ( λ ) ) b k 2 p n + sup t 0 γ ( t ) τ = 0 [ t ] 1 x ˜ k ( [ t ] , τ + 1 ) γ ( τ + 1 ) 1 × r = l + 1 l + l 2 h sup τ N + j = ν 1 ( τ ) τ γ ( τ + 1 ) γ ( j ) 1 a ¯ k r 1 ( τ , j ) + c p h i = 2 m sup τ N + j = ν i ( τ ) τ γ ( τ + 1 ) γ ( j ) 1 a ¯ k r i ( τ , j ) x ¯ r γ ( 2 p ) + r = 1 l 1 d ¯ k r ( λ ) x ¯ r γ ( 2 p ) + r = l + 1 l + l 2 d ¯ i r ( λ ) x ¯ r γ ( 2 p ) + d ¯ k ( λ ) φ L q n
( 1 + d ¯ k ( λ ) ) b k 2 p n + 1 1 exp { ln ( 1 λ k h ) + λ } r = l + 1 l + l 2 h γ ( d 1 + 1 ) sup τ N + j = ν 1 ( τ ) τ a ¯ k r 1 ( τ , j ) + c p h i = 2 m γ ( d i + 1 ) sup τ N + j = ν i ( τ ) τ a ¯ k r i ( τ , j ) x ¯ r γ ( 2 p ) + r = 1 l 1 d ¯ k r ( λ ) x ¯ r γ ( 2 p ) + r = l + 1 l + l 2 d ¯ k r ( λ ) x ¯ r γ ( 2 p ) + d ¯ k ( λ ) φ L q n , k = l + 1 , , l + l 2 .
In the matrix form, the last system of inequalities becomes
E ¯ x ¯ γ ( 2 p ) C ( λ ) x ¯ γ ( 2 p ) + b k 2 p n e + ( λ ) φ L 2 p n e ^ ,
where C ( λ ) = ( c i j ( λ ) ) i , j = 1 n ( l 1 + l 2 ) × l 1 + l 2 ) n × n is a matrix whose entries are defined as follows:
c k k ( λ ) = 1 λ k λ j I k a ¯ k k 1 j γ ( τ 1 j ) ν = 1 m 1 a ¯ k k 1 ν γ ( τ 1 ν ) τ 1 j + c p i = 2 m ν = 1 m i a ¯ k r i ν γ ( τ i ν ) τ 1 j + d ^ k k ( λ ) + j = 1 , j { 1 , , m 1 } / I k m 1 a ¯ k k 1 j γ ( τ 1 j ) + c p 2 ( λ k λ ) i = 2 m j = 1 m i a ¯ k k i j γ ( τ i j ) + d ^ k k ( λ ) , k = 1 , , l 1 , c k r ( λ ) = 1 λ k λ j I k a ¯ k k 1 j γ ( τ 1 j ) ν = 1 m 1 a ¯ k k 1 ν γ ( τ 1 ν ) τ 1 j + c p i = 2 m ν = 1 m i a ¯ k r i ν γ ( τ i ν ) τ 1 j + d ^ k r ( λ ) + j = 1 m 1 a ¯ k k 1 j γ ( τ 1 j ) + c p 2 ( λ k λ ) i = 2 m j = 1 m i a ¯ k k i j γ ( τ i j ) + d ^ k r ( λ ) , k , r = 1 , , l 1 ,
c k r ( λ ) = d ^ k ( r l 1 + l ) ( λ ) , k = 1 , , l 1 , r = l 1 + 1 , , l 1 + l 2 , c k r ( λ ) = d ^ k r ( λ ) , k = l 1 + 1 , , l 1 + l 2 , r = 1 , , l 1 , c k r ( λ ) = 1 1 exp { ln ( 1 λ k h ) + λ } h γ ( d 1 + 1 ) sup τ N + j = ν 1 ( τ ) τ a ¯ k r 1 ( τ , j ) + c p h i = 2 m γ ( d i + 1 ) sup τ N + j = ν i ( τ ) τ a ¯ k r i ( τ , j ) + d ¯ k r ( λ ) , k , r = 1 + 1 , , l + l 2 ,
and the l 1 + l 2 -dimensional column vectors e ( λ ) = ( e 1 ( λ ) , , e 1 1 ( λ ) , e l + 1 ( λ ) , , e l + l 2 ( λ ) ) T , e ^ ( λ ) = ( e ^ 1 ( λ ) , , e ^ 1 1 ( λ ) , e ^ l + 1 ( λ ) , , e ^ l + l 2 ( λ ) ) T , depending on the parameter λ , are given as follows:
e k ( λ ) = 1 + d ¯ k ( λ ) , k = 1 , , l 1 , 1 + 1 , , l + l 2 ;
e ^ k ( λ ) = 1 λ k λ j I k a ¯ k k 1 j d ^ k ( λ ) + d ¯ k ( λ ) , k = 1 , , l 1 , e ^ k ( λ ) = d ¯ k ( λ ) , k = l + 1 , , l + l 2 .
By virtue of the conditions of the theorem, the matrix $\bar E-C$ is positive invertible, and $C(0)=C$. Consequently, for sufficiently small positive $\lambda$, the matrix $\bar E-C(\lambda)$ is also positive invertible, and therefore, by virtue of Theorem 2, System (1) is $M^y_{2p\gamma}$-stable with the coefficient $\lambda$ satisfying the estimate $0<\lambda<\min\{\lambda_i,\ i=1,\ldots,l;\ -\ln(1-\lambda_ih),\ i=l+1,\ldots,n\}$. □
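In practice, an admissible exponential weight in Theorem 5 can be located by scanning $\lambda$ over a grid and retaining those values for which $\bar E-C(\lambda)$ remains positive invertible. The sketch below is only an illustration: the two-dimensional family C_of_lambda is a hypothetical stand-in that mimics the $1/(\lambda_k-\lambda)$ growth of the actual entries, and $\bar E$ is again identified with the identity matrix.

```python
import numpy as np

# Sketch of a grid search for an admissible exponential weight (Theorem 5).
# C_of_lambda is a hypothetical stand-in; the real entries c_kr(lambda) are
# given by the formulas preceding the theorem.
def C_of_lambda(lam, lam_k=(1.0, 0.8), a_bar=0.25):
    return np.array([[a_bar / (lam_k[0] - lam), a_bar / (lam_k[0] - lam)],
                     [a_bar / (lam_k[1] - lam), a_bar / (lam_k[1] - lam)]])

def positive_invertible(M):
    try:
        return bool(np.all(np.linalg.inv(M) > 0))   # all entries of M^(-1) > 0
    except np.linalg.LinAlgError:
        return False

admissible = [lam for lam in np.linspace(0.0, 0.7, 141)
              if positive_invertible(np.eye(2) - C_of_lambda(lam))]
print("largest admissible lambda found on the grid:",
      max(admissible) if admissible else None)
```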

5. Partial Stability of Deterministic Discrete–Continuous Systems

To the best of our knowledge, partial Lyapunov stability for hybrid discrete–continuous systems has not been studied before, not only in the stochastic, but even in the deterministic non-delay case. Note that the previously cited paper [26] deals with finite-time stability. In this section, we concentrate, therefore, on the deterministic systems and show that the general stochastic results from the previous section give new stability conditions for such systems as well. The definition of partial Lyapunov stability can be found in [20]. Alternatively, one can naturally adjust Definition 2 to the deterministic case.
As before, we restrict ourselves to stability with respect to the first $l_1$ continuous-time components and the first $l_2$ discrete-time components, where $0<l_1<l$, $0<l_2<n-l$.
Consider
$$dx^l(t)=-A(t)x(t)\,dt\ (t\ge0),\qquad x^{n-l}(s+1)=x^{n-l}(s)-B(s)x(s)h\ (s\in\mathbb{N}_+),\qquad(15)$$
where $A(t)$ $(t\ge0)$ are $l\times n$-matrices with locally integrable entries $a_{kr}(t)$, $k=1,\ldots,l$, $r=1,\ldots,n$; $B(s)$ $(s\in\mathbb{N}_+)$ are $(n-l)\times n$-matrices, whose entries are arbitrary real numbers $b_{kr}(s)$, $k=l+1,\ldots,n$, $r=1,\ldots,n$; and $h$ is a sufficiently small positive real number. Below, we also use the notational agreements listed in Conditions M1–M2 from Section 3, applied to the matrices $A(t)$ and $B(s)$, respectively.
Using the representation $x(t)=(x^l(t),x^{n-l}([t]))^T=(y_{l_1}(t),h_{l_1}(t),y_{l_2}(t),h_{l_2}(t))^T$, System (15) can be rewritten in the following form, where $t\ge0$ and $s\in\mathbb{N}_+$:
$$dy_{l_1}(t)=-\bigl((A(t))_1y_{l_1}(t)+(A(t))_2h_{l_1}(t)+(A(t))_3y_{l_2}(t)+(A(t))_4h_{l_2}(t)\bigr)\,dt,$$
$$dh_{l_1}(t)=-\bigl((A(t))_5y_{l_1}(t)+(A(t))_6h_{l_1}(t)+(A(t))_7y_{l_2}(t)+(A(t))_8h_{l_2}(t)\bigr)\,dt,$$
$$y_{l_2}(s+1)=y_{l_2}(s)-\bigl((B(s))_1y_{l_1}(s)+(B(s))_2h_{l_1}(s)+(B(s))_3y_{l_2}(s)+(B(s))_4h_{l_2}(s)\bigr)h,$$
$$h_{l_2}(s+1)=h_{l_2}(s)-\bigl((B(s))_5y_{l_1}(s)+(B(s))_6h_{l_1}(s)+(B(s))_7y_{l_2}(s)+(B(s))_8h_{l_2}(s)\bigr)h.$$
Therefore,
$$dy^l(t)=-(A(t))_1y^l(t)\,dt+(G_1y)(t)\,dt+S_1(t)\,dt\ (t\ge0),\qquad y^{n-l}(s+1)=y^{n-l}(s)-(B(s))_3y^{n-l}(s)h+(G_2y)(s)+S_2(s)\ (s\in\mathbb{N}_+),\qquad(17)$$
where
( G 1 y ) ( t ) = ( A ( t ) ) 2 0 t C ( t , s ) ( A ( s ) ) 5 y l 1 ( s ) d s ( A ( t ) ) 4 τ = 0 [ t ] 1 H l 2 ( [ t ] , τ + 1 ) ( B ( τ ) ) 5 y l 1 ( τ ) h ( A ( t ) ) 3 y l 2 ( t ) ( A ( t ) ) 2 0 t C ( t , s ) ( A ( s ) ) 7 y l 2 ( s ) d s + ( A ( t ) ) 4 τ = 0 [ t ] 1 H l 2 ( [ t ] , τ + 1 ) ( B ( τ ) ) 7 y l 2 ( τ ) h ( G 2 y ) ( s ) = ( B ( s ) ) 1 y l 1 ( s ) h ( B ( s ) ) 2 h 0 s C ( s , ς ) ( A ( ς ) ) 5 y l 1 ( ς ) d ς ( B ( s ) ) 4 τ = 0 s 1 H l 2 ( s , τ + 1 ) ( B ( τ ) ) 5 y l 1 ( τ ) h ( B ( s ) ) 2 h 0 s C ( s , ς ) ( A ( ς ) ) 7 y l 2 ( ς ) d ς ( B ( s ) ) 4 τ = 0 s 1 H l 2 ( s , τ + 1 ) ( B ( τ ) ) 7 y l 2 ( τ ) , S 1 ( t ) = ( A ( t ) ) 2 C ( t , 0 ) h l 1 ( 0 ) ( A ( t ) ) 4 H l 2 ( [ t ] , 0 ) h l 2 ( 0 ) , S 2 ( s ) = ( B ( s ) ) 2 C ( s , 0 ) h l 1 ( 0 ) ( B ( s ) ) 4 h ( H l 2 ( s , 0 ) h l 2 ( 0 ) ) , C ( t , s ) = exp { s t ( A ( ς ) ) 1 d ς } ,
$H_{l_2}(s,\tau)$ $(s,\tau\in\mathbb{N}_+,\ 0\le\tau\le s)$ is an $(n-l-l_2)\times(n-l-l_2)$-matrix, the columns of which are solutions of the system $h_{l_2}(s+1)=h_{l_2}(s)-(B(s))_4h_{l_2}(s)h$ $(s\in\mathbb{N}_+)$, where $H_{l_2}(s,s)$ $(s\in\mathbb{N}_+)$ is the identity matrix of dimension $(n-l-l_2)\times(n-l-l_2)$.
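For completeness, the discrete fundamental matrix $H_{l_2}(s,\tau)$ can be generated directly from the step-by-step recursion it solves. The sketch below does this for a hypothetical constant matrix standing in for $(B(s))_4$; in general the matrix may depend on $s$.

```python
import numpy as np

# Sketch of the discrete fundamental matrix H_l2(s, tau): its columns solve
# h(s + 1) = h(s) - (B(s))_4 h(s) * h with H_l2(tau, tau) = I.  Here B4 is a
# hypothetical constant matrix standing in for (B(s))_4.
def H_l2(s, tau, B4, h):
    n = B4.shape[0]
    H = np.eye(n)
    for _ in range(tau, s):
        H = (np.eye(n) - h * B4) @ H     # one step of the recursion
    return H

B4 = np.array([[0.5, 0.0],
               [0.1, 0.4]])
print(H_l2(5, 2, B4, h=0.1))
```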
  • Assumption set 4:
  • Suppose that the following conditions are met for System (17):
The entries of the matrix ( A ( t ) ) 1 are integrable;
The entries of the matrix $(B(s))_3$ satisfy the inequalities $\sum_{\tau=0}^{\infty}|b_{kr}(\tau)|<\infty$, valid for $k,r=l+1,\ldots,l+l_2$;
There exist non-negative numbers d k , k = 1 , , l 1 , l + 1 , , l + l 2 such that
$$\sup_{t\ge0}\Bigl|\int_0^t(S_1(s)\,ds)_k\Bigr|\le d_k|x(0)|\ \text{for }k=1,\ldots,l_1,\qquad\sup_{t\ge0}\Bigl|\sum_{\tau=0}^{[t]-1}(S_2(\tau))_k\Bigr|\le d_k|x(0)|$$
for $k=l+1,\ldots,l+l_2$.
There exist non-negative numbers d k r , k , r = 1 , , l 1 , l + 1 , , l + l 2 such that
$$\sup_{t\ge0}\Bigl|\int_0^t((G_1y)(s)\,ds)_k\Bigr|\le\sum_{r=1}^{l_1}d_{kr}\sup_{t\ge0}|x_r(t)|+\sum_{r=l+1}^{l+l_2}d_{kr}\sup_{t\ge0}|x_r([t])|\ \text{for }k=1,\ldots,l_1,$$
$$\sup_{t\ge0}\Bigl|\sum_{\tau=0}^{[t]-1}((G_2y)(\tau))_k\Bigr|\le\sum_{r=1}^{l_1}d_{kr}\sup_{t\ge0}|x_r(t)|+\sum_{r=l+1}^{l+l_2}d_{kr}\sup_{t\ge0}|x_r([t])|\ \text{for }k=l+1,\ldots,l+l_2.$$
Let us define the entries of the $(l_1+l_2)\times(l_1+l_2)$-matrix $C$ as follows:
$$c_{kr}=\int_0^{\infty}|a_{kr}(\varsigma)|\,d\varsigma+d_{kr},\ k,r=1,\ldots,l_1,\qquad c_{kr}=d_{k(r-l_1+l)},\ k=1,\ldots,l_1,\ r=l_1+1,\ldots,l_1+l_2,$$
$$c_{kr}=d_{kr},\ k=l_1+1,\ldots,l_1+l_2,\ r=1,\ldots,l_1,\qquad c_{kr}=h\sum_{\tau=0}^{\infty}|b_{kr}(\tau)|+d_{k(r-l_1+l)},\ k,r=l_1+1,\ldots,l_1+l_2.$$
Under these assumptions, the following result follows directly from Theorem 3:
Proposition 1. 
If, under Assumption set 4, the matrix $\bar E-C$ has a non-negative inverse, then System (15) is partially Lyapunov stable with respect to the first $l_1$ continuous-time variables and the first $l_2$ discrete-time variables.
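A sketch of how the matrix $C$ of Proposition 1 can be assembled and tested for a concrete system is given below. All numerical values are hypothetical: we take two continuous and two discrete components, coefficients $a_{kr}(t)=\alpha_{kr}e^{-t}$ (whose integral over $[0,\infty)$ is simply $\alpha_{kr}$), and, for simplicity, all constants $d_k$, $d_{kr}$ from Assumption set 4 equal to zero.

```python
import numpy as np

# Hypothetical illustration of Proposition 1 with two continuous and two
# discrete components (l1 = l2 = 2) and all constants d_k, d_kr set to zero.
alpha = np.array([[0.0, 0.2],
                  [0.3, 0.0]])          # integrals of |a_kr(t)| over [0, inf)
h = 0.1                                  # parameter of the discrete component
b_sums = np.array([[0.0, 0.5],
                   [0.4, 0.0]])          # sums over tau of |b_kr(tau)|

C = np.block([[alpha,            np.zeros((2, 2))],
              [np.zeros((2, 2)), h * b_sums      ]])
M = np.eye(4) - C                        # the matrix E-bar - C
print("non-negative inverse:", bool(np.all(np.linalg.inv(M) >= 0)))
```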
To be able to formulate the next result, we need the following assumptions for System (15):
  • Assumption set 5:
The diagonal entries of the matrices $(A(t))_1$, $(B(s))_3$ are of the form $a_{kk}(t)+\lambda_k$, $k=1,\ldots,l_1$ and $b_{kk}(s)+\lambda_k$, $k=l+1,\ldots,l+l_2$, respectively, where $\lambda_k$, $k=1,\ldots,l_1,l+1,\ldots,l+l_2$ are some positive numbers, and $0<\lambda_kh<1$, $k=l+1,\ldots,l+l_2$;
There exist non-negative numbers $\bar a_{kr}$, $k,r=1,\ldots,l_1$ such that $|a_{kr}(t)|\le\bar a_{kr}$ $(t\ge0)$ $\mu$-almost everywhere for $k,r=1,\ldots,l_1$;
$\sup_{\tau\in\mathbb{N}_+}|b_{kr}(\tau)|<\infty$ for $k,r=l+1,\ldots,l+l_2$;
There exist non-negative numbers d k , k = 1 , , l 1 , l + 1 , , l + l 2 such that
$$\sup_{t\ge0}\Bigl|\int_0^t\exp\{-\lambda_k(t-s)\}(S_1(s)\,ds)_k\Bigr|\le d_k|x(0)|\ \text{for }k=1,\ldots,l_1,$$
$$\sup_{t\ge0}\Bigl|\sum_{\tau=0}^{[t]-1}(1-\lambda_kh)^{[t]-\tau}(S_2(\tau))_k\Bigr|\le d_k|x(0)|\ \text{for }k=l+1,\ldots,l+l_2;$$
There exist non-negative numbers d k r , k , r = 1 , , l 1 , l + 1 , , l + l 2 such that
$$\sup_{t\ge0}\Bigl|\int_0^t\exp\{-\lambda_k(t-s)\}((G_1y)(s)\,ds)_k\Bigr|\le\sum_{r=1}^{l_1}d_{kr}\sup_{t\ge0}|x_r(t)|+\sum_{r=l+1}^{l+l_2}d_{kr}\sup_{t\ge0}|x_r([t])|$$
for $k=1,\ldots,l_1$,
$$\sup_{t\ge0}\Bigl|\sum_{\tau=0}^{[t]-1}(1-\lambda_kh)^{[t]-\tau}((G_2y)(\tau))_k\Bigr|\le\sum_{r=1}^{l_1}d_{kr}\sup_{t\ge0}|x_r(t)|+\sum_{r=l+1}^{l+l_2}d_{kr}\sup_{t\ge0}|x_r([t])|$$
for $k=l+1,\ldots,l+l_2$.
Let us define the entries of the $(l_1+l_2)\times(l_1+l_2)$-matrix $C$ as follows:
$$c_{kr}=\frac{1}{\lambda_k}\bar a_{kr}+d_{kr},\ k,r=1,\ldots,l_1,\qquad c_{kr}=d_{k(r-l_1+l)},\ k=1,\ldots,l_1,\ r=l_1+1,\ldots,l_1+l_2,$$
$$c_{kr}=d_{kr},\ k=l_1+1,\ldots,l_1+l_2,\ r=1,\ldots,l_1,\qquad c_{kr}=\frac{1}{\lambda_k}\sup_{\tau\in\mathbb{N}_+}|b_{kr}(\tau)|+d_{k(r-l_1+l)},\ k,r=l_1+1,\ldots,l_1+l_2.$$
Then, Theorem 4 implies the following.
Proposition 2. 
If, under Assumption set 5, the matrix $\bar E-C$ has a non-negative inverse, then System (15) is partially Lyapunov stable with respect to the first $l_1$ continuous-time variables and the first $l_2$ discrete-time variables.
To study partial exponential stability, we put γ ( t ) = exp { λ t } ( t 0 ) , where λ is some positive number.
  • Assumption set 6:
  • Let there exist the following:
Non-negative numbers $\bar a_{kr}$, $k,r=1,\ldots,l_1$ such that $|a_{kr}(t)|\le\bar a_{kr}$ $(t\ge0)$ $\mu$-almost everywhere for $k,r=1,\ldots,l_1$;
Positive numbers $\lambda_k$, $k=1,\ldots,l_1,l+1,\ldots,l+l_2$, for which $a_{kk}(t)\ge\lambda_k$ $(t\ge0)$ $\mu$-almost everywhere for $k=1,\ldots,l_1$, the diagonal entries of the matrix $(B(s))_3$ have the form $b_{kk}(s)+\lambda_k$, $k=l+1,\ldots,l+l_2$, and $0<\lambda_kh<1$ for $k=l+1,\ldots,l+l_2$;
$\sup_{\tau\in\mathbb{N}_+}|b_{kr}(\tau)|<\infty$ for $k,r=l+1,\ldots,l+l_2$;
Continuous functions d ¯ k ( λ ) on some interval [ 0 , λ 0 ] ( λ 0 > 0 ), k = 1 , , l 1 , l + 1 , , l + l 2 such that
$$\sup_{t\ge0}\gamma(t)\Bigl|\int_0^t\exp\Bigl\{-\int_s^ta_{kk}(\varsigma)\,d\varsigma\Bigr\}(S_1(s)\,ds)_k\Bigr|\le\bar d_k(\lambda)|x(0)|\ \text{for }k=1,\ldots,l_1,$$
$$\sup_{t\ge0}\gamma(t)\Bigl|\sum_{\tau=0}^{[t]-1}(1-\lambda_kh)^{[t]-\tau}(S_2(\tau))_k\Bigr|\le\bar d_k(\lambda)|x(0)|\ \text{for }k=l+1,\ldots,l+l_2;$$
Continuous functions d ¯ k r ( λ ) , k , r = 1 , , l 1 , l + 1 , , l + l 2 on the interval [ 0 , λ 0 ] such that
$$\sup_{t\ge0}\gamma(t)\Bigl|\int_0^t\exp\Bigl\{-\int_s^ta_{kk}(\varsigma)\,d\varsigma\Bigr\}((G_1y)(s)\,ds)_k\Bigr|\le\sum_{r=1}^{l_1}\bar d_{kr}(\lambda)\sup_{t\ge0}|\gamma(t)x_r(t)|+\sum_{r=l+1}^{l+l_2}\bar d_{kr}(\lambda)\sup_{t\ge0}|\gamma(t)x_r([t])|\ \text{for }k=1,\ldots,l_1,$$
$$\sup_{t\ge0}\gamma(t)\Bigl|\sum_{\tau=0}^{[t]-1}(1-\lambda_kh)^{[t]-\tau}((G_2y)(\tau))_k\Bigr|\le\sum_{r=1}^{l_1}\bar d_{kr}(\lambda)\sup_{t\ge0}|\gamma(t)x_r(t)|+\sum_{r=l+1}^{l+l_2}\bar d_{kr}(\lambda)\sup_{t\ge0}|\gamma(t)x_r([t])|\ \text{for }k=l+1,\ldots,l+l_2.$$
Let us define the entries of the n × n -matrix C as follows:
$$c_{kk}=\bar d_{kk}(0),\ k=1,\ldots,l_1,\qquad c_{kr}=\frac{1}{\lambda_k}\bar a_{kr}+\bar d_{kr}(0),\ k,r=1,\ldots,l_1,\ k\ne r,$$
$$c_{kr}=\bar d_{k(r-l_1+l)}(0),\ k=1,\ldots,l_1,\ r=l_1+1,\ldots,l_1+l_2,\qquad c_{kr}=\bar d_{kr}(0),\ k=l_1+1,\ldots,l_1+l_2,\ r=1,\ldots,l_1,$$
$$c_{kr}=\frac{h}{1-\exp\{\ln(1-\lambda_kh)\}}\sup_{\tau\in\mathbb{N}_+}|b_{kr}(\tau)|+\bar d_{k(r-l_1+l)}(0),\ k,r=l_1+1,\ldots,l_1+l_2.$$
Then, Theorem 5 yields the following.
Proposition 3. 
If, under Assumption set 6, the matrix $\bar E-C$ is positive invertible, then System (15) is partially exponentially stable in the Lyapunov sense with respect to the first $l_1$ continuous-time variables and the first $l_2$ discrete-time variables.

6. A Numerical Example

Consider System (15) with n = 4 , l = 2 , l 1 = 1 , l 2 = 1 , a 21 ( t ) = 0 , a 23 ( t ) = 0 , a 24 ( t ) = 0 ( t 0 )   μ -almost everywhere, b 41 ( s ) = 0 , b 42 ( s ) = 0 , b 43 ( s ) = 0 ( s N + ) .
Suppose that there exist positive numbers λ 1 , λ 2 , λ 3 , λ 4 , λ 5 and numbers a ¯ 12 , a ¯ 13 , a ¯ 14 , b ¯ 31 , b ¯ 32 , b ¯ 34 such that the following apply:
$\Bigl|a_{12}(t)\exp\Bigl\{-\int_0^ta_{22}(\varsigma)\,d\varsigma\Bigr\}\Bigr|\le|\bar a_{12}|\exp\{-\lambda_2t\}$; $|a_{13}(t)|\le|\bar a_{13}|\exp\{-\lambda_3t\}$;
$\bigl|a_{14}(t)\,(1-\lambda_4h)^{[t]}\bigr|\le|\bar a_{14}|\exp\{-\lambda_4t\}$ $(t\ge0)$ $\mu$-almost everywhere;
$\sup_{s\in\mathbb{N}_+}|b_{31}(s)|\le|\bar b_{31}|$, $\Bigl|b_{32}(s)\exp\Bigl\{-\int_0^sa_{22}(\varsigma)\,d\varsigma\Bigr\}\Bigr|\le|\bar b_{32}|$;
$b_{33}(s)=\lambda_5$, $\bigl|b_{34}(s)\,(1-\lambda_4h)^{s}\bigr|\le|\bar b_{34}|$ $(s\in\mathbb{N}_+)$;
$a_{11}(t)=\lambda_1$ and $\lambda_1>\max\{\lambda_2,\lambda_3,\lambda_4\}$;
The entries of the $2\times2$-matrix $C$ are given as $c_{11}=0$; $c_{12}=\dfrac{|\bar a_{13}|}{\lambda_1-\lambda_3}$; $c_{21}=\dfrac{|\bar b_{31}|}{\lambda_5}$, $c_{22}=0$.
Then, it is easy to verify that the assumptions of Proposition 3 are satisfied. Consequently, if the matrix $\bar E-C$ is positive invertible, then System (15) is exponentially stable in the Lyapunov sense with respect to the first continuous-time variable and the first discrete-time variable.
Positive invertibility of the matrix $\bar E-C$ is ensured by the inequality
$$1-\frac{|\bar a_{13}|\,|\bar b_{31}|}{(\lambda_1-\lambda_3)\lambda_5}>0.$$
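This condition is straightforward to evaluate numerically. The following check uses hypothetical parameter values, chosen so that $\lambda_1>\lambda_3$.

```python
import numpy as np

# Numerical check of the example's positive-invertibility condition with
# hypothetical parameter values.
lam1, lam3, lam5 = 2.0, 0.5, 1.0
a13_bar, b31_bar = 0.6, 0.9

criterion = 1.0 - abs(a13_bar) * abs(b31_bar) / ((lam1 - lam3) * lam5)
C = np.array([[0.0, abs(a13_bar) / (lam1 - lam3)],
              [abs(b31_bar) / lam5, 0.0]])
M = np.eye(2) - C

print("1 - |a13||b31| / ((lam1 - lam3) * lam5) =", criterion)   # 0.64 > 0
print("all entries of (E - C)^(-1) positive:", bool(np.all(np.linalg.inv(M) > 0)))
```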
The Lyapunov stability region is depicted in Figure 1, Figure 2 and Figure 3.
This numerical example clearly shows what Lyapunov stability regions, obtained using our version of the regularization method, typically look like when several of the system's parameters are fixed. In particular, these regions are open subsets of the respective partial phase spaces, which means that the property of Lyapunov stability is preserved under small perturbations of the varying parameters. Note that changing the set of varying parameters yields stability regions with similar geometric properties.
The example also indicates how such an analysis can be performed in the other cases considered in the article, including the stochastic ones. Indeed, the stability conditions obtained by the regularization method are expressed in terms of the system's parameters. Keeping some of the parameters fixed and aggregating the others into more convenient ones (as we did by introducing the variable $z=(\lambda_1-\lambda_3)\lambda_5$ in the example) produces inequalities defining stability regions in the corresponding partial phase spaces. We do not include more numerical examples in this article, as this would considerably increase its size.
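The stability regions of Figures 1–3 can be reproduced directly from the inequality above. A minimal plotting sketch for the cross-section of Figure 2 is given below; the value $z=0.1$ is taken from the figure caption, and everything else is plotting boilerplate.

```python
import numpy as np
import matplotlib.pyplot as plt

# In the variables x = a13_bar, y = b31_bar, z = (lambda1 - lambda3) * lambda5
# the condition 1 - x * y / z > 0 defines the region x * y < z.
z = 0.1
x = np.linspace(0.01, 1.0, 400)
plt.fill_between(x, 0.0, np.minimum(z / x, 1.0), alpha=0.4, label="x*y < z")
plt.xlabel("x"); plt.ylabel("y")
plt.title("Partial stability region for z = 0.1")
plt.legend()
plt.show()
```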

7. Discussion

This article deals with partial moment stability of hybrid discrete–continuous systems of linear stochastic equations, a complicated problem that apparently has not been studied before. To tackle its theoretical and technical challenges, a modified version of the classical regularization method, based on the choice of an auxiliary equation and the theory of non-negative and positive invertible matrices, has been suggested and justified.
To be able to apply the regularization techniques, we have offered an alternative description of stochastic partial stability in terms of input-to-state stability, known in control theory, in such a way that different spaces of stochastic processes correspond to different kinds of partial stability.
We have also proposed a formalized algorithm showing how a regularization of a given hybrid system can be performed in practice and how this leads to verifiable, coefficient-based stability conditions. We have concentrated on the case of stability with respect to proper subsets of both the continuous-time and discrete-time components, while also outlining a method of analysis for the other cases.
We have also demonstrated that our algorithm, being primarily designed for the stochastic case, provides new conditions for the partial stability of linear deterministic hybrid systems.
In the future, we plan to study other classes of hybrid stochastic systems, first of all nonlinear ones, as our previous publications show that the regularization method has such a potential. In addition, we want to extend our analysis to hybrid equations with unbounded delays, where exponential stability should be replaced by more general kinds of asymptotic stability and more involved auxiliary systems may need to be exploited.

Author Contributions

Conceptualization, R.I.K.; methodology, R.I.K. and A.P.; formal analysis, R.I.K.; investigation, R.I.K. and A.P.; writing—original draft preparation, R.I.K.; writing—review and editing, A.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to express their appreciation for the anonymous reviewers’ in-depth comments, suggestions and corrections, which have greatly improved the presentation of the results.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
a.s.: almost surely
e.g.: exempli gratia (for example)
i.e.: id est (that is)

Appendix A. Conditions for Other Cases of Partial Stability

Conditions M1–M2 and the definition of System (4) are presented in Section 3 for Case 1 ($0<l_1<l$, $0<l_2<n-l$). Cases 2–7 require some adjustments, which directly influence the vector solutions $\bar y_{l_1}(t)$, $\bar y_{l_2}(t)$, $\bar h_{l_1}(t)$ and $\bar h_{l_2}(t)$ in System (4) and the matrices $(A_{ij}(t))_k$ and $(A_i(s,j))_k$ ($k=1,\ldots,10$), determining the operators in (4), as well as the matrices $(A(t))_k$, $(B(s))_k$ in Propositions 1–3. In Appendix A, we formulate these adjustments explicitly, including Case 1 for the sake of completeness. We found it convenient to summarize Conditions M1–M2 in tabular form. Note that Table A1 and Table A8 describe Conditions M1–M2, respectively, for Case 1 ($0<l_1<l$, $0<l_2<n-l$), which was analyzed in Section 3, Section 4 and Section 5 in detail.
It is important to remark that all results presented in Section 3, Section 4 and Section 5 remain true, and with the same formulations, for any of the seven cases of partial stability, provided that the following adjustments are made:
If a matrix $M_k$ is not defined in a table, then the corresponding matrices $(A_{ij}(t))_k$, $(A_i(s,j))_k$ and the matrices $(A(t))_k$, $(B(s))_k$ should be removed from System (4) and Propositions 1–3, respectively. The same rule applies to the vector solutions $\bar y_{l_1}(t)$, $\bar y_{l_2}(t)$, $\bar h_{l_1}(t)$ and $\bar h_{l_2}(t)$ in System (4).
As before, we consider below partial stability with respect to the first $l_1$ of the continuous-time variables $x_1,\ldots,x_l$ and the first $l_2$ of the discrete-time variables $x_{l+1},\ldots,x_n$, so that $0\le l_1\le l$, $0\le l_2\le n-l$ and $0<l_1+l_2<n$.
Table A1. Case 1: $0<l_1<l$, $0<l_2<n-l$.
M k Size of M k Removed RowsRemoved Columns
M 1 l 1 × l 1 Last l l 1 Last n l 1
M 2 l 1 × ( l l 1 ) Last l l 1 First l 1 and last n l
M 3 l 1 × l 2 Last l l 1 First l and last n l l 2
M 4 l 1 × ( n l l 2 ) First l l 1 First l + l 2
M 5 ( l l 1 ) × l 1 First l 1 Last n l 1
M 6 ( l l 1 ) × ( l l 1 ) First l 1 First l 1 and last n l
M 7 ( l l 1 ) × l 2 Last l l 1 First l and last n l l 2
M 8 ( l l 1 ) × ( n l l 2 ) First l 1 First l + l 2
M 9 l 1 × n Last l l 1
M 10 ( l l 1 ) × n First l 1
Table A2. Case 2: $l_1=l$, $0<l_2<n-l$.
M k Size of M k Removed RowsRemoved Columns
M 1 l × l Last n l
M 3 l × l 2 First lLast n l l 2
M 4 l × ( n l ) First l
M 9 l × n
Table A3. Case 3: $0<l_1<l$, $l_2=n-l$.
M k Size of M k Removed RowsRemoved Columns
M 1 l 1 × l 1 Last l l 1 Last n l 1
M 2 l 1 × ( l l 1 ) Last l l 1 First l 1 and last n l
M 3 l 1 × ( n l ) Last l l 1 First l
M 5 ( l l 1 ) × l 1 First l 1 Last n l 1
M 6 ( l l 1 ) × ( l l 1 ) First l 1 First l 1 and last n l
M 9 l 1 × n Last l l 1
M 10 ( l l 1 ) × n First l 1
Table A4. Case 4: $l_1=l$, $l_2=0$.
M k Size of M k Removed RowsRemoved Columns
M 1 l × l Last n l
M 4 l × ( n l ) First l
M 9 l × n
Table A5. Case 5: $0<l_1<l$, $l_2=0$.
M k Size of M k Removed RowsRemoved Columns
M 1 l 1 × l 1 Last l l 1 Last n l 1
M 2 l 1 × ( l l 1 ) Last l l 1 First l 1 and last n l
M 4 l 1 × ( n l ) First l l 1 First l
M 9 l 1 × n Last l l 1
M 10 ( l l 1 ) × n First l 1
Table A6. Case 6: $l_1=0$, $0<l_2<n-l$.
M k Size of M k Removed RowsRemoved Columns
M 6 l × l First n l
M 7 l × l 2 First lLast n l l 2
M 8 l × ( n l l 2 ) First l + l 2
M 10 l × n
Table A7. Case 7: $l_1=0$, $l_2=n-l$.
M k Size of M k Removed RowsRemoved Columns
M 7 l × ( n l ) First l
M 10 l × n
Condition M2. In Table A8, Table A9, Table A10, Table A11, Table A12, Table A13 and Table A14, $M$ is an arbitrary $(n-l)\times n$ matrix.
Table A8. Case 1: $0<l_1<l$, $0<l_2<n-l$.
M k Size of M k Removed RowsRemoved Columns
M 1 l 2 × l 1 Last n l l 2 Last n l 1
M 2 l 2 × ( l l 1 ) Last n l l 2 First l 1 and last n l
M 3 l 2 × l 2 Last n l l 2 First l and last n l l 2
M 4 l 2 × ( n l l 2 ) Last n l l 2 First l + l 2
M 5 ( n l l 2 ) × l 1 First l 2 Last n l 1
M 6 ( n l l 2 ) × ( l l 1 ) First l 2 First l 1 and last n l
M 7 ( n l l 2 ) × l 2 First l 2 First l and last n l l 2
M 8 ( n l l 2 ) × ( n l l 2 ) ) First l 2 First l + l 2
M 9 l 2 × n Last n l l 2
M 10 ( n l l 2 ) × n First l 2
Table A9. Case 2: $l_1=l$, $0<l_2<n-l$.
M k Size of M k Removed RowsRemoved Columns
M 1 l 2 × l Last n l l 2 Last n l
M 3 l 2 × l 2 Last n l l 2 First l and last n l l 2
M 4 l 2 × ( n l l 2 ) Last n l l 2 First l + l 2
M 5 ( n l l 2 ) × l First l 2 Last n l
M 7 ( n l l 2 ) × l 2 First l 2 First l and last n l l 2
M 8 ( n l l 2 ) × ( n l l 2 ) ) First l 2 First l + l 2
M 9 l 2 × n Last n l l 2
M 10 ( n l l 2 ) × n First l 2
Table A10. Case 3: $0<l_1<l$, $l_2=n-l$.
M k Size of M k Removed RowsRemoved Columns
M 1 ( n l ) × l 1 Last n l 1
M 2 ( n l ) × ( l l 1 ) First l 1 and last n l
M 3 ( n l ) × ( n l ) First l
M 9 ( n l ) × n
Table A11. Case 4: $l_1=l$, $l_2=0$.
M k Size of M k Removed RowsRemoved Columns
M 5 ( n l ) × l Last n l
M 8 ( n l ) × ( n l ) First l
M 10 ( n l ) × n
Table A12. Case 5: $0<l_1<l$, $l_2=0$.
M k Size of M k Removed RowsRemoved Columns
M 5 ( n l ) × l 1 Last n l 1
M 6 ( n l ) × ( l l 1 ) ) First l 1 and last n l
M 8 ( n l ) × ( n l ) First l
M 10 ( n l ) × n
Table A13. Case 6: $l_1=0$, $0<l_2<n-l$.
M k Size of M k Removed RowsRemoved Columns
M 2 l 2 × l Last n l l 2 Last n l
M 3 l 2 × l 2 Last n l l 2 First l and last n l l 2
M 4 l 2 × ( n l l 2 ) Last n l l 2 First l + l 2
M 6 ( n l l 2 ) × l First l 2 Last n l
M 7 ( n l l 2 ) × l 2 First l 2 First l and last n l l 2
M 8 ( n l l 2 ) × ( n l l 2 ) First l 2 First l + l 2
M 9 l 2 × n Last n l l 2
M 10 ( n l l 2 ) × n First l 2
Table A14. Case 7: $l_1=0$, $l_2=n-l$.
M k Size of M k Removed RowsRemoved Columns
M 2 ( n l ) × l Last n l
M 3 ( n l ) × ( n l ) First l
M 9 ( n l ) × n
The vector solutions y l 1 ( t ) , y l 2 ( t ) , h l 1 ( t ) and h ¯ l 2 ( t ) included in System (4) and defined in Section 3 for Case 1 ( 0 < l 1 < l , 0 < l 2 < n l ) as
y ¯ l 1 ( t ) = ( x ¯ 1 ( t ) , , x ¯ l 1 ( t ) ) T , h ¯ l 1 ( t ) = ( x ¯ l 1 + 1 ( t ) , , x ¯ l ( t ) ) T , y ¯ l 2 ( t ) = ( x ¯ l + 1 ( [ t ] ) , , x ¯ l + l 2 ( [ t ] ) ) T , h ¯ l 2 ( t ) = ( x ¯ l + l 2 + 1 ( [ t ] ) , , x n ( [ t ] ) ) T
should be adjusted to the other cases as follows:
  • Case 2: l 1 = l , 0 < l 2 < n l .
    y ¯ l 1 ( t ) = ( x ¯ 1 ( t ) , , x ¯ l ( t ) ) T , h ¯ l 1 ( t ) should be removed , y ¯ l 2 ( t ) = ( x ¯ l + 1 ( [ t ] ) , , x ¯ l + l 2 ( [ t ] ) ) T , h ¯ l 2 ( t ) = ( x ¯ l + l 2 + 1 ( [ t ] ) , , x n ( [ t ] ) ) T .
  • Case 3: 0 < l 1 < l , l 2 = n l .
    y ¯ l 1 ( t ) = ( x ¯ 1 ( t ) , , x ¯ l 1 ( t ) ) T , h ¯ l 1 ( t ) = ( x ¯ l 1 + 1 ( t ) , , x ¯ l ( t ) ) T , y ¯ l 2 ( t ) = ( x ¯ l + 1 ( [ t ] ) , , x n ( [ t ] ) ) T , h ¯ l 2 ( t ) should be removed .
  • Case 4: l 1 = l , l 2 = 0 .
    y ¯ l 1 ( t ) = ( x ¯ 1 ( t ) , , x ¯ l ( t ) ) T , h ¯ l 1 ( t ) should be removed , y ¯ l 2 ( t ) should be removed , h ¯ l 2 ( t ) = ( x ¯ l + 1 ( [ t ] ) , , x n ( [ t ] ) ) T .
  • Case 5: 0 < l 1 < l , l 2 = 0 .
    y ¯ l 1 ( t ) = ( x ¯ 1 ( t ) , , x ¯ l 1 ( t ) ) T , h ¯ l 1 ( t ) = ( x ¯ l 1 + 1 ( t ) , , x ¯ l ( t ) ) T , y ¯ l 2 ( t ) should be removed , h ¯ l 2 ( t ) = ( x ¯ l + 1 ( [ t ] ) , , x n ( [ t ] ) ) T .
  • Case 6: l 1 = 0 , 0 < l 2 < n l .
    y ¯ l 1 ( t ) should be removed , h ¯ l 1 ( t ) = ( x ¯ 1 ( t ) , , x ¯ l ( t ) ) T , y ¯ l 2 ( t ) = ( x ¯ l + 1 ( [ t ] ) , , x ¯ l + l 2 ( [ t ] ) ) T , h ¯ l 2 ( t ) = ( x ¯ l + l 2 + 1 ( [ t ] ) , , x n ( [ t ] ) ) T .
  • Case 7: l 1 = 0 , l 2 = n l .
    y ¯ l 1 ( t ) should be removed , h ¯ l 1 ( t ) = ( x ¯ 1 ( t ) , , x ¯ l ( t ) ) T , y ¯ l 2 ( t ) = ( x ¯ l + 1 ( [ t ] ) , , x ¯ n ( [ t ] ) ) T , h ¯ l 2 ( t ) should be removed .

References

  1. Fazel, R.; Shafei, A.M.; Nekoo, S.R. A new method for finding the proper initial conditions in passive locomotion of bipedal robotic systems. Comm. Nonlin. Sci. Num. Sim. 2024, 130, 107693. [Google Scholar] [CrossRef]
  2. Borquez, J.; Peng, S.; Chen, Y.; Nguyen, Q.; Bansal, S. Hamilton-Jacobi reachability analysis for hybrid systems with controlled and forced transitions. Robotics 1923, 19, 13. [Google Scholar]
  3. Schupp, S.; Leofante, F.; Behr, L.; Ábrahám, E.; Taccella, A. Robot swarms as hybrid systems: Modelling and verification. In Symbolic-Numeric Methods for Reasoning About CPS and IoT; Remke, A., Tran, D.H., Eds.; EPTCS 361: Munich, Germany, 2022; pp. 61–77. [Google Scholar]
  4. Zahedi, A.; Shafei, A.M.; Shamsi, M. Application of hybrid robotic systems in crop harvesting: Kinematic and dynamic analysis. Comp. Electr. Agric. 2023, 209, 107724. [Google Scholar] [CrossRef]
  5. Kong, N.J.; Payne, J.; Council, G.; Johnson, A.M. The salted kalman filter: Kalman filtering on hybrid dynamical systems. Automatica 2021, 131, 109752. [Google Scholar] [CrossRef]
  6. Sansana, J.; Joswiak, M.N.; Castillo, I.; Wang, Z.; Rendall, R.; Chiang, L.H.; Reis, M.S. Recent trends on hybrid modeling for Industry 4.0. Comp. Chem. Engin. 2021, 151, 107365. [Google Scholar] [CrossRef]
  7. Hybrid Energy System Models; Berrada, A., El Mrabet, R., Eds.; Elsevier Academic Press: London, UK, 2021; 371p. [Google Scholar]
  8. Wu, H.; Huang, Y.; Yin, Z.P. Flexible hybrid electronics: Enabling integration techniques and applications. Sci. China Techn. Sci. 2022, 65, 1–12. [Google Scholar] [CrossRef]
  9. Heuer, J.; Krenz-Baath, R.; Obermaisser, R. Human-heart-model for hardware-in-the-loop testing of pacemakers. Comp. Biol. Med. 2024, 180, 108966. [Google Scholar] [CrossRef]
  10. Baldari, L.; Boni, L.; Cassinotti, E. Hybrid robotic systems. Surgery 2024, 176, 1538–1541. [Google Scholar] [CrossRef]
  11. Coy, S.; Czumaj, A.; Scheideler, C.; Schneider, P.; Werthmann, J. Routing schemes for hybrid communication networks. Theor. Comp. Sci. 2024, 985, 114352. [Google Scholar] [CrossRef]
  12. Liberzon, D. Switching in Systems and Control; Springer Science & Business Media: Berlin, Germany, 2003; 233p. [Google Scholar]
  13. Lin, H.; Antsaklis, P.J. Stability and stabilizability of switched linear systems: A survey of recent results. IEEE Trans. Automat. Contr. 2009, 54, 308–322. [Google Scholar] [CrossRef]
  14. Liberzon, D.; Zharnitsky, V. Almost Lyapunov functions for nonlinear systems. Automatica 2020, 113, 108758. [Google Scholar]
  15. Teel, A.R.; Subbaramana, A.; Sferlazza, A. Stability analysis for stochastic hybrid systems: A survey. Automatica 2014, 50, 2435–2456. [Google Scholar] [CrossRef]
  16. Schlotterbeck, C.; Gallegos, J.A.; Teel, A.R.; Núñez, F. Stability guarantees for a class of networked control systems subject to stochastic delays. IEEE Transact. Autom. Contr. 2024, 69, 8884–8891. [Google Scholar] [CrossRef]
  17. Mao, X.; Yin, G.G.; Yuan, C. Stabilization and destabilization of hybrid systems of stochastic differential equations. Automatica 2007, 43, 264–273. [Google Scholar] [CrossRef]
  18. Imzegouan, C. Stability for Markovian switching stochastic neural networks with infinite delay driven by Lévy noise. Int. J. Dyn. Contr. 2019, 7, 547–556. [Google Scholar] [CrossRef]
  19. Liu, D.; Wang, Z.; Zhang, Z.; Liu, J. Partial stabilization of stochastic hybrid neural networks driven by Lévy noise. Sys. Sc. Contr. Eng. 2020, 8, 413–421. [Google Scholar] [CrossRef]
  20. Vorotnikov, V.I. Partial Stability and Control; Birkhäuser: Basel, Switzerland, 1998; 433p. [Google Scholar]
  21. Amorim, P.; Casteras, J.-B.; Dias, J.P. On the existence and partial stability of standing waves for a nematic liquid crystal director field equation. arXiv 2023, arXiv:2312.09035v1. [Google Scholar] [CrossRef]
  22. Caraballo, T.; Ezzine, V.; Hammami, M.A. Partial stability analysis of stochastic differential equations with a general decay rate. J. Engr. Math. 2021, 130, 17. [Google Scholar] [CrossRef]
  23. Mchiri, L.; Caraballo, T.; Rhaima, M. Partial asymptotic stability of neutral pantograph stochastic differential equations with Markovian switching. Adv. Cont. Discr. Mod. 2022, 18, 15. [Google Scholar] [CrossRef]
  24. Vorotnikov, V.I.; Martyshenko, Y.G. Partial stability in probability of nonlinear stochastic discrete-time systems with delay. Autom. Remote Contr. 2024, 8, 20–35. [Google Scholar]
  25. Kadiev, R.I. Sufficient conditions of partial stability of linear stochastic equations with aftereffect. Russ. Math. J. Izv. Vuz. 2000, 6, 75–90. [Google Scholar]
  26. Garg, K.; Panagou, D. Finite-time stability of hybrid systems with unstable modes. Front. Control Eng. Sec. Nonlin. Control. 2021, 2, 707729. [Google Scholar] [CrossRef]
  27. Azvelev, N.V.; Simonov, P.M. Stability of Differential Equations with Aftereffect; Taylor and Francis: London, UK, 2002. [Google Scholar]
  28. Ponosov, A.; Kadiev, R.I. Inverse-positive matrices and stability properties of linear stochastic difference equations with aftereffect. Mathematics 2024, 12, 2710. [Google Scholar] [CrossRef]
  29. Kadiev, R.I.; Ponosov, A. Stability analysis of solutions of continuous–discrete stochastic systems with aftereffect by a regularization method. Diff. Eqs. 2022, 58, 433–454. [Google Scholar] [CrossRef]
  30. Sontag, E.D. Input-to-State Stability. In Encyclopedia of Systems and Control; Baillieul, J., Samad, T., Eds.; Springer: Cham, Switzerland, 2001; 9p. [Google Scholar]
  31. Liptser, R.S.; Shirjaev, A.N. Theory of Martingales; Kluwer: Dordrecht, The Netherlands, 1989. [Google Scholar]
Figure 1. The region of partial stability for System (15), expressed in the variables $x=\bar a_{13}$, $y=\bar b_{31}$, $z=(\lambda_1-\lambda_3)\lambda_5$, consists of the points above the depicted surface.
Figure 2. The region of partial stability for System (15), expressed in the variables $x=\bar a_{13}$, $y=\bar b_{31}$, with $z=0.1$; as $z$ increases, this region expands.
Figure 3. The region of partial stability for System (15), expressed in the variables $y=\bar b_{31}$, $z=(\lambda_1-\lambda_3)\lambda_5$, with $x=0.1$ and $x=10$, respectively. If $x\to\infty$, then this region becomes a ray perpendicular to the $x$-axis; if $x\to0$, then this region will cover the upper semiplane of the $yz$-plane. Due to symmetry, the same kind of regions appear in the variables $x=\bar a_{13}$, $z=(\lambda_1-\lambda_3)\lambda_5$ for different $y$.