Stability of the Planar Quadratic Systems from the Ring-Theoretic Viewpoint

Abstract: We show that the classical result on the stability of the origin in a quadratic planar system of ODEs can be formulated either via matrix theory or via the associated real and complex Markus algebra. A generalization to the three-dimensional case is considered and some counterexamples are provided.


Introduction
Let Q : R^n → R^n be a homogeneous form of degree two. An autonomous polynomial system of ODEs v̇(t) = Q(v(t)), where the vector function v is defined on some real interval, will be referred to as a quadratic system. In the special case when n = 2, we can write such a system in the form

ẋ = α₁x² + 2β₁xy + γ₁y²,
ẏ = α₂x² + 2β₂xy + γ₂y², (1)

where α₁, α₂, β₁, β₂, γ₁, γ₂ are real constants. The origin of R^n is always a critical point of a quadratic system. A (real) Markus algebra associated to a quadratic form Q, which will be denoted by A_Q, is the space R^n equipped with the (in general nonassociative) product (R^n, ·) defined by

u · v = (Q(u + v) − Q(u) − Q(v))/2.

This product is obviously commutative. The idea to study quadratic ODEs via their real algebras has been considered by many authors. In [1][2][3], Boujemaa et al. considered unboundedness of the solutions in quadratic systems and stated a reduction theorem based on the existence of an ideal generated by an idempotent element. Burdujan [4][5][6][7] considered quadratic systems with derivations, automorphisms, nilpotents of order three, and applications in Lie triple system theory. Krasnov et al. [8,9] considered the connections between algebras and integral (quadratic) systems and partial differential equations. Kinyon and Sagle [10][11][12] considered many general relations between commutative algebras, quadratic systems of ODEs, and quadratic maps (for this paper the most important result is the one on blow-up solutions [10]). Kutnjak [13,14] considered the relation between commutative algebras and quadratic maps in correspondence to chaotic dynamics in quadratic homogeneous difference systems. Some partial results in R³ are known for the case when the system contains a plane of singular points (for details, see [15]).
It is easy to verify that the Markus algebra of a planar quadratic system of the form (1) has the following multiplication rules:

e₁ · e₁ = α₁e₁ + α₂e₂,
e₁ · e₂ = e₂ · e₁ = β₁e₁ + β₂e₂, (2)
e₂ · e₂ = γ₁e₁ + γ₂e₂,

where the vectors e₁ and e₂ denote the standard basis of R². The first applications of this ring-theoretic approach to the study of quadratic ODEs were provided by Markus in [16]. The standard monograph on this topic is [17].
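The product defining A_Q can be computed mechanically by polarization. A minimal sketch, for a sample system of our own choosing (ẋ = x² + 2xy, ẏ = y², not one from the paper), recovering the rules (2):

```python
# Sketch: recover the Markus multiplication table of a planar quadratic
# system via the polarization identity u·v = (Q(u+v) - Q(u) - Q(v))/2.
# Sample system (our choice): ẋ = x² + 2xy, ẏ = y², i.e. in the
# notation of (1): α1 = 1, β1 = 1, γ1 = 0 and α2 = 0, β2 = 0, γ2 = 1.

def Q(v):
    x, y = v
    return (x*x + 2*x*y, y*y)

def markus_product(u, v):
    s = (u[0] + v[0], u[1] + v[1])
    # integer division is exact here: the difference is twice a bilinear form
    return tuple((qs - qu - qv) // 2
                 for qs, qu, qv in zip(Q(s), Q(u), Q(v)))

e1, e2 = (1, 0), (0, 1)
print(markus_product(e1, e1))  # (1, 0): e1·e1 = α1 e1 + α2 e2
print(markus_product(e1, e2))  # (1, 0): e1·e2 = β1 e1 + β2 e2
print(markus_product(e2, e2))  # (0, 1): e2·e2 = γ1 e1 + γ2 e2
```

The printed coefficients match the columns of table (2) for this choice of coefficients.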
The methods using Markus algebra techniques are useful in the study of quadratic systems because there exist many connections between the properties of quadratic systems and their algebras. Some of those connections are (see [10,11,17] for proofs):
• The quadratic system v̇(t) = Q(v(t)) has ray solutions if and only if there exists a nonzero idempotent in (A_Q, ·), i.e., an element e ∈ A_Q such that e ≠ 0 and e · e = e. Any ray solution implies unstable dynamics near the origin. The solutions to v̇(t) = Q(v(t)) lying on a line through the idempotent are called blow-up solutions. Note that this implication holds in any dimension.
• The quadratic system v̇(t) = Q(v(t)) has a line of critical points if and only if there exists a nonzero nilpotent of index two in (A_Q, ·), i.e., an element n ∈ A_Q such that n ≠ 0 and n · n = 0.
• The quadratic system v̇(t) = Q(v(t)) has an invariant r-dimensional linear subspace E_r if and only if (A_Q, ·) has an r-dimensional subalgebra [16]. Note that the invariance of E_r means that for any initial condition v₀ ∈ E_r the flow v(t; t₀, v₀) remains within E_r for any time t > t₀ and any initial time t₀ > 0.
• The quadratic system v̇(t) = Q(v(t)) can be solved by reduction if and only if (A_Q, ·) contains a nontrivial ideal.
The last statement is especially important, since it means that we can attempt to fully classify the possible behaviour of quadratic systems of a certain type if we develop the classification theory for some class of nonassociative algebras and treat only those explicit quadratic systems that emerge from such a classification.
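The first connection can be illustrated directly: if e is an idempotent, then v(t) = e/(1 − t) is a blow-up solution, since v̇ = e/(1 − t)² = Q(v). A sketch for a sample system of our own choosing (not one from the paper):

```python
# Sketch: an idempotent e (e·e = Q(e) = e) yields the blow-up solution
# v(t) = e/(1 - t), which starts at e and escapes to infinity as t → 1.
# Sample system (our choice): ẋ = x², ẏ = y², with idempotent e = (1, 0).

def Q(v):
    return (v[0]**2, v[1]**2)

e = (1.0, 0.0)

def v(t):
    return (e[0]/(1 - t), e[1]/(1 - t))

# central-difference check that v̇(t) = Q(v(t)) at t = 0.5
t, h = 0.5, 1e-6
numeric_dv = tuple((a - b)/(2*h) for a, b in zip(v(t + h), v(t - h)))
exact = Q(v(t))
print(numeric_dv, exact)  # both approximately (4.0, 0.0)
```

Rescaling the initial point along the ray only shifts the blow-up time (v(t) = e/(c − t) for any c > 0), which is why a nonzero idempotent forces trajectories starting arbitrarily close to the origin to escape, i.e., instability.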
In the sequel, we will use the terms idempotent and nilpotent in the restricted sense, i.e., they will only refer to nonzero elements.
The starting point for our first result is the above remarks and the following lemma, which proves that locally the trajectories of the scaled linear system and the corresponding linear system coincide (up to the time scaling) in the half-planes determined by the common factor of the quadratic system.

Lemma 1. The quadratic system with a common factor

ẋ = (δx + γy)(ax + by),
ẏ = (δx + γy)(cx + dy), (3)

can be treated in terms of the linear system

ẋ = ax + by,
ẏ = cx + dy. (4)

The common factor δx + γy of (3) represents a line of singular points and splits the (x, y)-plane into two half-planes: on the half-plane δx + γy > 0, solutions of system (3) have the same orientation as the solutions of (4), while on the half-plane δx + γy < 0, the solutions of the quadratic system have reversed time compared to the linear one (i.e., t → −τ).
The relation between τ and t follows from dτ/dt = δx + γy.

It is of obvious interest whether the origin is a (Lyapunov) stable critical point or not. In the planar case, the analysis is rather simple. In Theorem 1 we observe that the result can be nicely expressed using a suitable 2 × 2 matrix.
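Lemma 1 can be sanity-checked numerically. The sketch below (parameters of our own choosing: δ = 0, γ = 1, so the common factor is y, with a rotational linear part) integrates the quadratic system and verifies that its orbits stay on the circles x² + y² = r², exactly as for the linear centre ẋ = −y, ẏ = x:

```python
# Quadratic system ẋ = -y², ẏ = xy: the common factor y times the
# linear centre ẋ = -y, ẏ = x.  Lemma 1 predicts that in the half-plane
# y > 0 the orbits are the same circles x² + y² = r², traversed with the
# rescaled time dτ/dt = y.  We integrate with RK4 and check the radius.

def f(v):
    x, y = v
    return (-y*y, x*y)

def rk4_step(v, h):
    k1 = f(v)
    k2 = f((v[0] + h/2*k1[0], v[1] + h/2*k1[1]))
    k3 = f((v[0] + h/2*k2[0], v[1] + h/2*k2[1]))
    k4 = f((v[0] + h*k3[0], v[1] + h*k3[1]))
    return (v[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            v[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

v = (0.3, 0.4)                  # starts on the circle of radius 0.5, y > 0
r0 = v[0]**2 + v[1]**2
for _ in range(10000):
    v = rk4_step(v, 0.01)
drift = abs(v[0]**2 + v[1]**2 - r0)
print(drift)                    # ≈ 0: the orbit stays on its circle
```

The invariance of x² + y² is exact analytically (d(x² + y²)/dt = 2x(−y²) + 2y(xy) = 0); the tiny printed drift is just the integrator's error.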

Theorem 1. A planar quadratic system has a stable origin if and only if it can be factorized in the form

ẋ = (γx + δy)(αx + βy),
ẏ = (γx + δy)(−βx + αy), (7)

where β is nonzero.
Proof. The result follows from Lemma 1, the one-to-one relation between systems and algebras [16], the result [18] of Kaplan and Yorke on nilpotents and idempotents, and the result due to Kinyon and Sagle on blow-up solutions [11]. According to the Kaplan-Yorke result, any real finite-dimensional algebra contains at least one nonzero idempotent or nonzero nilpotent of rank two. By the result of Kinyon and Sagle, the existence of an idempotent implies unbounded trajectories starting arbitrarily close to the origin, which implies instability of the origin. In dimension two, this implies directly that (1) must be of the form (3). Note that the line γx + δy = 0 corresponds to the nilpotent in the associated algebra (2). The rest of the proof follows by Lemma 1 and the well-known theory of planar linear systems; see, for example, ([19], Section 4) for details. According to Lemma 1, only the phase portraits with bounded trajectories (i.e., foci and centres) assure the stability of the origin in (3), which yields directly that (1) must be of the form (7) and concludes the proof.
The main purpose of this paper is to show that this matrix characterisation of stability also has an alternative formulation which is ring-theoretic in nature.
To explain our new result, we must also consider the obvious complexification of A_Q, which will be denoted by C_Q. This complexification is an involutive complex algebra modeled on the space C_Q = A_Q ⊕ iA_Q ≈ C^n. Its multiplication, multiplication by a complex number, and involution are defined by

(a + ib) · (c + id) = (a·c − b·d) + i(a·d + b·c),
(ζ + iη)(a + ib) = (ζa − ηb) + i(ηa + ζb),
(a + ib)* = a − ib,

for all a, b, c, d ∈ A_Q and all ζ, η ∈ R. We can identify A_Q with the real subalgebra A_Q ⊕ {0} ⊂ C_Q. The concept of an idempotent, i.e., an element satisfying e² := e · e = e, makes sense in an arbitrary ring. The purpose of our paper is to formulate an analogue of Theorem 1 in a purely ring-theoretic framework and to offer a possible path toward the generalization to the three-dimensional real stability problem.

Main Result
In this section, we prove our main result.
Theorem 2. A planar quadratic system, different from ẋ = 0, ẏ = 0, has a stable origin if and only if its associated complex Markus algebra is spanned by (two) idempotents, while the only idempotent in its associated real Markus algebra is the zero element.
We refer to the system ẋ = 0, ẏ = 0 as the trivial system. We will divide our arguments into two separate statements.

Proposition 1. Let v̇ = Q(v) be one of the nontrivial planar systems from Theorem 1. Then the only idempotent of A_Q is its zero element. The algebra C_Q contains precisely two nonzero idempotents which are linearly independent over C, and therefore C_Q = span{p₁, p₂}, where (p₁)² = p₁ and (p₂)² = p₂.
Proof. Systems from Theorem 1 can be rewritten as

ẋ = αγx² + (αδ + βγ)xy + βδy²,
ẏ = −βγx² + (αγ − βδ)xy + αδy²,

while the corresponding (real) Markus algebra is given by the following multiplication rules:

e₁ * e₁ = αγe₁ − βγe₂,
e₁ * e₂ = e₂ * e₁ = ((αδ + βγ)/2)e₁ + ((αγ − βδ)/2)e₂,
e₂ * e₂ = βδe₁ + αδe₂.

The complex Markus algebra can be given by the same multiplication rules if we assume in addition that (e₁)* = e₁ and (e₂)* = e₂. We can solve the equation p² = p for both algebras simultaneously if we use complex arithmetic.
The above condition clearly coincides with system (8) which leads to a contradiction.
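Proposition 1 can be checked directly on a concrete member of the family: taking α = 0, β = −1, γ = 0, δ = 1 (the instance appearing later in Step 5 of the proof of Proposition 2) gives the system ẋ = −y², ẏ = xy. A short sketch using Python's built-in complex arithmetic:

```python
# A_Q for ẋ = -y², ẏ = xy.  Since p·p = Q(p), idempotents solve Q(p) = p.
def Q(v):
    x, y = v
    return (-y*y, x*y)

# Over R, Q(a, b) = (a, b) forces -b² = a and ab = b, so b(a - 1) = 0;
# b = 0 gives a = 0, while a = 1 needs b² = -1: impossible over R.
# Over C the same equations give exactly two nonzero idempotents:
p  = (1, 1j)
ps = (1, -1j)        # the involution maps p to p* (componentwise conjugate)
print(Q(p) == p, Q(ps) == ps)  # True True
# p and p* are linearly independent over C, so C_Q = span{p, p*},
# while the zero element is the only idempotent of A_Q.
```

This is exactly the dichotomy of Theorem 2 for this sample system: no nonzero real idempotent, and two complex idempotents spanning C_Q.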

Proposition 2.
Let Q : R² → R² be a quadratic form such that the only idempotent of A_Q is zero, while C_Q is spanned by idempotents. Then there exists a linear transformation on R² such that the quadratic system v̇ = Q(v) is equivalent to one of the systems from Theorem 1.
Proof. Step 1. Let p₁ = p ∈ C_Q be a nonzero idempotent. Since p² = p implies (p*)² = (p²)* = p*, it follows that p₂ = p* is also an idempotent. Since A_Q is isomorphic to {x ∈ C_Q : x* = x}, it follows that p₁ ≠ p₂. Let us assume that p and p* are linearly dependent over C. Since both are nonzero, there would exist λ ∈ C such that p* = λp. In the proof of Proposition 1, we saw that λ must be 1, i.e., p* = p, which contradicts our assumption about A_Q.

Step 2. Since C_Q is two-dimensional as a complex space, {p, p*} is (one of) its bases. This means that p · p* must be a linear combination of those two elements, i.e., there exist complex numbers ξ₀ and ζ₀ such that p · p* = ξ₀p + ζ₀p*.
Step 3. We can decompose the idempotent p into p = a + ib, where a, b ∈ A_Q. Since p ≠ p*, the element b must be nonzero. If a = 0, then p = ib together with p² = p imply ib = −b². The left-hand side is an element of iA_Q, while the right-hand side is an element of A_Q. This would imply b = 0 and consequently p = 0, which contradicts the assumption.
If a and b were linearly dependent (we know they must be nonzero elements), then there would exist a nonzero real number λ such that a = λb. From (λb + ib) · (λb + ib) = λb + ib we could derive, in the second component,

2λb² = b.

If we define q = 2λb, we would have a nonzero element of A_Q satisfying

q² = 4λ²b² = 2λ(2λb²) = 2λb = q,

which would contradict the assumption we made in this Proposition. Hence, a and b cannot be linearly dependent.
Step 4. Since A_Q is two-dimensional, {a, b} is one of its bases. From the multiplication rules we can easily compute that the multiplication rules for A_Q are given by (11), and the corresponding quadratic system takes the form (12) for some value of the parameter ψ ∈ (0, π].

Step 5. Assume first that ψ = π. Then system (12) takes the following form:

ẋ = −y², ẏ = xy,

which can be written in the form (7) with α = 0, β = −1, γ = 0 and δ = 1.
One has to use a suitable 2 × 2 matrix with entries 2, ½ cot ψ and 2 cot ψ, the relation k = −(1/8) cos²(ψ/2), and standard trigonometric identities.

Proof of Theorem 2. If a planar quadratic system Q has a stable origin, it is linearly equivalent to one of the systems (7). According to ([16], Theorem 1), its real Markus algebra A_Q is isomorphic to one of the real Markus algebras A_{α,β,γ,δ} corresponding to (7). It is easy to see that the derived complex Markus algebras C_Q and C_{α,β,γ,δ} are also isomorphic. By Proposition 1, A_{α,β,γ,δ} has only zero as an idempotent, while C_{α,β,γ,δ} is spanned by idempotents. This clearly implies that the zero element is the only idempotent of A_Q, while C_Q is spanned by idempotents. Conversely, assume that the quadratic system Q is such that A_Q contains only the trivial idempotent, while C_Q is spanned by idempotents. According to Proposition 2 and Theorem 1, Q is linearly equivalent to some quadratic system with a stable origin. Since this linear equivalence is clearly a bounded mapping, the system Q also has a stable origin.

Three-Dimensional Case
In this section, we prove that an immediate generalisation of Theorem 2 is not true in R³. Such a conjecture would take the following form.

Statement 1. Let Q : R³ → R³ be a nonzero quadratic map. The system of ODEs v̇(t) = Q(v(t)), different from v̇ = 0, (A) has a stable origin if and only if (B) its associated complex Markus algebra is spanned by three idempotents, while the only idempotent in its associated real Markus algebra is the zero element.
To this end, we consider two (counter)examples which prove that neither of the two implications in A ⇔ B is true.
The first example contradicts the sufficiency of the conditions. In this example, the origin will be shown to be unstable, while the corresponding algebra will contain enough complex idempotents and no nontrivial real idempotent.
The second example contradicts the necessity of the conditions in the above attempted generalization of Theorem 2. In this example, the system has a stable origin but not enough complex idempotents.

Example 1 ((B) ⇏ (A)). Let us consider the system
The idempotents are determined by the solutions of (14). Obviously, any nontrivial solution must be nonzero in all three components. Therefore, inserting x = −yz into y = xz yields y = −yz², i.e., 1 = −z² (after cancelling y), proving that all four solutions to (14) are complex.
To prove ¬(A), let us search for a particular solution which starts arbitrarily close to the origin and tends to infinity when t is large enough. Dividing dy/dt = xz by dx/dt = −yz yields dy/dx = −x/y, proving that solutions lie on the cylinders x² + y² = r².
A straightforward computation shows that these are the only two nontrivial linearly independent solutions to (15). This means there exist just two nontrivial complex idempotents in this case. Let us prove that the origin of (15) is stable.
In the sequel, we will use the abbreviation SSO for a system with a stable origin and CMA for a complex Markus algebra. The idea of the CMA as presented here is an attempt towards the final solution of the abovementioned problem.
This problem is not trivial, but we hope the full apparatus of complex analysis and the complex spectral theory of matrices can be fruitful. Direct calculations in R³ involve 18 coefficients and do not seem to be the best possible approach. This is the reason why we propose the introduction of CMA methods. Note also that the multiplication rules defined in (10) involve only one real parameter.
The first obvious observation is that every invariant plane Π ⊂ R³ for a SSO generates a two-dimensional SSO in a natural way. If we translate this obvious remark into the language of CMAs, it is obvious that any two-dimensional subalgebra of a three-dimensional CMA corresponding to a SSO must also correspond to the SSO of a two-dimensional CMA. Precisely those algebras were classified in Theorem 2.
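This correspondence between invariant planes and two-dimensional subalgebras is easy to test mechanically. A sketch on a three-dimensional system of our own choosing (not one from the paper), whose plane z = 0 is clearly invariant:

```python
# Sketch: invariant plane <-> two-dimensional subalgebra, on the sample
# system (our choice) ẋ = x² - y², ẏ = xy, ż = z² + xz.  The plane z = 0
# is invariant because ż vanishes on it; correspondingly span{e1, e2}
# should be closed under the Markus product.
from fractions import Fraction

def Q(v):
    x, y, z = v
    return (x*x - y*y, x*y, z*z + x*z)

def product(u, v):
    s = tuple(a + b for a, b in zip(u, v))
    return tuple(Fraction(qs - qu - qv, 2)
                 for qs, qu, qv in zip(Q(s), Q(u), Q(v)))

e1, e2 = (1, 0, 0), (0, 1, 0)
for u in (e1, e2):
    for w in (e1, e2):
        p = product(u, w)
        print(p[2] == 0)   # True: every product stays in the plane z = 0
```

Since all four products have vanishing third component, span{e1, e2} is a subalgebra, matching the invariance of z = 0 under the flow.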
More precisely, if a three-dimensional CMA contains a two-dimensional subalgebra which does not contain two complex idempotents p, p* with the properties defined in (10), the original quadratic system is not a SSO. This implies that, to classify all three-dimensional SSOs, we propose to first solve the following.

Problem 2. Classify all three-dimensional complex involutive algebras with at least one two-dimensional subalgebra, whose two-dimensional subalgebras all satisfy the properties in the formulation of Theorem 2.
As for fully solving Problem 1, our numerical experiments suggest that the following result may be true.

Conjecture 1.
If a three-dimensional CMA has no subalgebras of dimension 2, the original quadratic system is not a SSO.
The simplest open problem which we intend to solve with the CMA method is the following.

Problem 3. Let us consider the family of three-dimensional systems

ẋ = x² − y² + 2αxz + 2βyz,
ẏ = −x² − y² − 2xy,
ż = 2γxz + 2δyz, (16)

where α, β, γ, δ are some real numbers. After the change of time t → 2τ, the corresponding CMA has the following form:

p² = p, (p*)² = p*, n² = 0,
p · p* = (i/2)(p − p*),
p · n = E, p* · n = E*,

where E = (1/4)(α + iβ)(p + p*) + (1/2)(γ + iδ)n. The elements p and p* generate a two-dimensional subalgebra which is isomorphic to one of the algebras from (11) with ψ = π/2. Since the third dimension in this new basis is represented by a nilpotent of rank two, we can deduce that the corresponding system (depending on α, β, γ, δ) has a potentially stable origin. The problem is to describe precisely for which parameter values the origin is stable.
We are currently working on its solution. The main idea is to find just one suitable two-dimensional subalgebra which is not isomorphic to one of the algebras described in Theorem 2 for most α, β, γ, δ, and to study the remaining cases.