On Some Symmetries of Quadratic Systems

Abstract: We provide a general method for identifying real quadratic polynomial dynamical systems that can be transformed to symmetric ones by a bijective polynomial map of degree one, the so-called affine map. We mainly focus on symmetry groups generated by rotations; in other words, we treat equivariant and reversible equivariant systems. The description is given in terms of affine varieties in the space of parameters of the system. A general algebraic approach to finding subfamilies of systems having certain symmetries in polynomial differential families depending on many parameters is proposed, and computer algebra computations for the planar case are presented.


Introduction
Studying various symmetries of dynamical systems is important for several reasons. Systems whose phase portraits possess a rotational symmetry are interesting because they are related to the second part of Hilbert's 16th problem, and the existence of such systems can lead to the construction of families with many limit cycles. Another reason for investigating symmetries is their connection with integrability. It is well known that an elementary singular point of a time-reversible polynomial system of degree two is always integrable [1]. It can be either a center or an integrable saddle [2] and thus cannot be a focus. A similar conclusion is valid for much larger families of systems, as was shown in [2], where we found affine varieties in the space of parameters of a quadratic planar system which can be brought to a time-reversible one by a bijective linear transformation. This transformation preserves the integrability property of a singular point present at the origin. So, if a quadratic planar system with an isolated non-degenerate trace-zero singular point at the origin admits a transformation to a time-reversible system, the origin is automatically integrable.
In this paper we first treat affine transformations to symmetric systems (n-dimensional, n ≥ 2) in a unified and rather general way. Second, for planar quadratic systems, we calculate the varieties in the space of parameters defining the systems which can be transformed to rotationally (reversible) equivariant systems by an affine or merely linear (i.e., no translation involved) transformation. Let us fix some notation. All vectors from R^n will be typeset in boldface. When we mention a linear map R^n → R^n, we have in mind an additive homogeneous (of degree one) map (which sends 0 to 0). By an affine map we mean a composition of a linear map and a translation. Linear maps will be denoted by capital letters and general (not necessarily linear) maps in calligraphic font. The symbols Id or Id_n both stand for the identity matrix, and A^tr denotes the transpose of a matrix A. Our paper is organized as follows: in Section 2 we describe the general properties of (reversible) symmetric systems in R^n, and in Section 3 we compute algebraic varieties in parameter space corresponding to quadratic planar systems that can be transformed to a (reversible) symmetric one by an affine or linear transformation. We end the paper with some examples in Section 3.

Equivariant and Reversible Equivariant Systems in R n
Let us start with a rather general definition of a symmetry of a dynamical system. Throughout the paper, we will be interested in smooth, mostly polynomial dynamical systems of the form

ẋ = F(x), x ∈ R^n. (1)

Following Lamb et al. [3,4], we say that a bijective map B : R^n → R^n is a symmetry of the system (1) when

t → B(x(t)) is again a trajectory of (1) (2)

for each trajectory ϕ : t → x(t) of system (1). A map C : R^n → R^n is called a reversible symmetry of system (1) when

t → C(x(−t)) is again a trajectory of (1) (3)

for each trajectory ϕ of (1). The condition (2) implies that the system is invariant under the transformation (x, t) → (B(x), t), and (3) implies the invariance under the map (x, t) → (C(x), −t).
We will only be interested in linear (reversible) symmetries. Let B : R^n → R^n be a linear map. Then, by linearity, d/dt B(x(t)) = B ẋ(t) = BF(x(t)). Now, if for some regular n × n matrix B and a fixed ε ∈ {1, −1}

F(Bx) = εBF(x) (4)

holds for every x ∈ R^n, then B is a symmetry if ε = 1, and it is a reversible symmetry if ε = −1.
Let Γ be a finite cyclic group of invertible linear operators (matrices) on R^n. It is said that system (1) is Γ-equivariant if every B ∈ Γ is a symmetry of the system. Further, it is said that system (1) is reversible Γ-equivariant if there exists a non-trivial homomorphism σ : Γ → {−1, 1} such that for every B ∈ Γ,

F(Bx) = σ(B)BF(x) (5)

holds true for every x ∈ R^n. Obviously, the elements B ∈ Γ for which σ(B) = −1 are reversing symmetries, and those B with σ(B) = 1 are symmetries. By successive application of (5), we get

F(B^k x) = σ(B)^k B^k F(x) (6)

for all x ∈ R^n and for all integers k. It easily follows that all even powers of a reversible symmetry are symmetries and all odd powers of a reversible symmetry are reversible symmetries. Therefore, in order that there exists a non-trivial reversible Γ-equivariant system, the order of Γ must be even, as can easily be seen by inserting the order of Γ for k in (6).
In the next proposition we will see that having only information for a generator of the group suffices. Proposition 1. A system (1) is Γ-equivariant if and only if any fixed generator of Γ is a symmetry. A system (1) is reversible Γ-equivariant if and only if some fixed generator of Γ is a reversible symmetry.
Proof. By application of (6) and taking σ(R) = −1 for the chosen generator R of Γ in the reversible case, the claims easily follow.
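As a concrete check of the symmetry condition (4), one can verify numerically that the classical planar system ẋ = x² − y², ẏ = −2xy (which is ż = z̄² in complex form; an illustrative standard example, not taken from this paper) is Z_3-equivariant. A minimal sketch in Python:

```python
import numpy as np

# The field of the planar system x' = x^2 - y^2, y' = -2xy.
def F(v):
    x, y = v
    return np.array([x**2 - y**2, -2.0 * x * y])

# Rotation by 2*pi/3, a generator of the cyclic group Z_3.
phi = 2.0 * np.pi / 3.0
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

# Condition (4) with eps = 1: F(R x) = R F(x) for many random points.
rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.standard_normal(2)
    assert np.allclose(F(R @ v), R @ F(v))
```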
We recall the definition of the Kronecker product of matrices, related to the tensor product of operators. The Kronecker product of A = [a_ij] ∈ R^{m,n} and B = [b_ij] ∈ R^{p,q} is defined as the block matrix A ⊗ B = [a_ij B] ∈ R^{mp,nq}. It is well known that the mixed-product rule (A ⊗ B)(C ⊗ D) = AC ⊗ BD holds for any matrices A, B, C and D for which the products AC and BD are well defined.
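The mixed-product rule is easy to sanity-check numerically; the matrix sizes below are arbitrary choices:

```python
import numpy as np

# Numerical check of (A kron B)(C kron D) = (AC) kron (BD)
# for randomly chosen matrices of compatible (arbitrary) sizes.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
C = rng.standard_normal((3, 2))   # AC is 2 x 2
B = rng.standard_normal((4, 2))
D = rng.standard_normal((2, 4))   # BD is 4 x 4

lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
assert np.allclose(lhs, rhs)
```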
We shall now restrict ourselves to quadratic dynamical systems of n equations in n unknown functions x = (x_1(t), x_2(t), . . . , x_n(t))^tr. The way we express our system is a bit unusual, but it will provide some advantages for our considerations. Let us write

ẋ = f_0 + F_1 x + (Id ⊗ x^tr)Gx, (7)

where f_0 ∈ R^n, F_1 is an n × n real matrix and G = (G_1, . . . , G_n)^tr is an n² × n matrix with symmetric blocks G_1, . . . , G_n, that is, G_k = G_k^tr, k = 1, 2, . . . , n. Each of the matrices G_k is the symmetric matrix arising from the quadratic form in the k-th equation; for example, ax² + 2bxy + cy² = x^tr [ a b ; b c ] x with x = (x, y)^tr. For the reader's convenience, note that Id_n ⊗ x^tr is a block-diagonal n × n² matrix with x^tr sitting in the diagonal blocks.
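The representation (7) can be sketched numerically for n = 2; the coefficients below are made up for illustration, and the block matrix Id ⊗ x^tr is built with a Kronecker product:

```python
import numpy as np

n = 2
f0 = np.array([1.0, -2.0])             # constant terms (illustrative)
F1 = np.array([[0.0, -1.0],
               [1.0,  0.0]])           # linear part (illustrative)
G1 = np.array([[1.0, 0.5],             # symmetric block of equation 1
               [0.5, 2.0]])
G2 = np.array([[0.0, 1.0],             # symmetric block of equation 2
               [1.0, 3.0]])
G = np.vstack([G1, G2])                # G = (G1, G2)^tr is an n^2 x n matrix

def F(x):
    # Id_n kron x^tr is the n x n^2 block-diagonal matrix with x^tr on the diagonal.
    blocks = np.kron(np.eye(n), x.reshape(1, -1))
    return f0 + F1 @ x + blocks @ G @ x

# Check against a direct evaluation of the quadratic forms x^tr Gk x.
x = np.array([2.0, -1.0])
quad = np.array([x @ G1 @ x, x @ G2 @ x])
assert np.allclose(F(x), f0 + F1 @ x + quad)
```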
The algebraic conditions for (reversible) Γ-equivariant quadratic systems are given in the following statement.

Theorem 1. Let Γ be a finite cyclic group of invertible linear operators on R^n generated by R, and let ε ∈ {1, −1}, where in the reversible case σ(R) = ε = −1. System (7) is Γ-equivariant (ε = 1), respectively reversible Γ-equivariant (ε = −1), if and only if

Rf_0 = εf_0, (8)
RF_1 = εF_1R, (9)
GR = ε(R ⊗ R)G. (10)
Before presenting the proof we have a direct consequence of the fact that Equations (8)-(10) are linear with respect to f 0 , F 1 and G. Corollary 1. Let Γ be as in Theorem 1. The family of all (reversible) Γ-equivariant systems with parameters collected in f 0 ∈ R n,1 , F 1 ∈ R n,n and G = (G 1 , . . . , G n ) tr ∈ R n 2 ,n , forms a linear (vector) subspace in R n,1 × R n,n × R n 2 ,n .
We continue with the proof of Theorem 1.
Proof. By Proposition 1, we set the value of the homomorphism σ on the generator R as σ(R) = ε and apply identity (5) with R in place of B and with F of the form F(x) = f_0 + F_1x + (Id ⊗ x^tr)Gx. Then the comparison of the terms of degree zero and one easily provides Equations (8) and (9). For Equation (10) some more effort is necessary. Equating the terms of order two gives us, for all x ∈ R^n,

(Id ⊗ x^tr R^tr)GRx = εR(Id ⊗ x^tr)Gx.

By the mixed-product rule, R(Id ⊗ x^tr) = (Id ⊗ x^tr)(R ⊗ Id) and Id ⊗ x^tr R^tr = (Id ⊗ x^tr)(Id ⊗ R^tr), so the k-th components of both sides are the quadratic forms x^tr (R^tr G_k R) x and ε Σ_j R_kj x^tr G_j x. Since the matrices involved are symmetric, equality of the forms for all x forces equality of the matrices, that is, (Id ⊗ R^tr)GR = ε(R ⊗ Id)G. Multiplying from the left by Id ⊗ R and using RR^tr = Id yields (10).

Several remarks are in order.

Remark 1.
From equality (8) we see that for a (reversible) equivariant quadratic system (with respect to Γ generated by R), the constant term column vector f_0 is either an eigenvector of R corresponding to the eigenvalue ε or, when ε is not an eigenvalue of R, f_0 = 0. For example, when n = 2 and R is the rotation by the angle 2π/3, neither 1 nor −1 is an eigenvalue of R; therefore, f_0 = 0, and the origin must be a singular point of the system.

Remark 2. Equality (9) says that the matrix F_1 of the linear part is a zero of the Lie product RF_1 − F_1R (in other words, F_1 commutes with R) when ε = 1, and F_1 is a zero of the Jordan product RF_1 + F_1R when ε = −1.

Remark 3. Equation (10) is also a well-known linear equation in linear algebra, the so-called (homogeneous) Sylvester equation. The next theorem recalls that non-trivial solutions X of the matrix equation AX − XB = 0 are subject to the existence of common eigenvalues of the given matrices A and B of sizes n × n and k × k, respectively.

Theorem 2 (Sylvester). Let A ∈ C^{n,n}, B ∈ C^{k,k} and C ∈ C^{n,k}. The equation AX − XB = C has a unique solution X ∈ C^{n,k} if and only if A and B do not have any joint eigenvalues, that is, their spectra are disjoint.
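Theorem 2 can be illustrated numerically: vectorizing row-major turns AX − XB = C into the linear system (A ⊗ Id − Id ⊗ B^tr)vec(X) = vec(C), which is uniquely solvable exactly when the spectra are disjoint. A sketch with illustrative matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])          # eigenvalues 1, 3
B = np.array([[-1.0, 0.0],
              [ 1.0, -2.0]])        # eigenvalues -1, -2: disjoint from spec(A)
C = np.array([[1.0, 0.0],
              [2.0, 1.0]])

# Row-major vectorization: vec(AX) = (A kron I) vec(X),
#                          vec(XB) = (I kron B^tr) vec(X).
n, k = A.shape[0], B.shape[0]
M = np.kron(A, np.eye(k)) - np.kron(np.eye(n), B.T)
X = np.linalg.solve(M, C.flatten()).reshape(n, k)   # unique, since M is invertible
assert np.allclose(A @ X - X @ B, C)
```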
In the sequel we apply Theorem 1 for the computation of the (in fact linear) varieties (in the space of parameters) of systems which are equivariant or reversible equivariant with respect to a chosen group of rotations of the plane. The theorem is also valid for the so-called time-reversible systems and mirror-symmetric systems, where the group of symmetries, being of order two, is generated by a non-trivial involution. For the varieties in the space of parameters for such (merely planar) systems we refer the reader to our previous work [2]. Theorem 1 will also show why we have many planar equivariant quadratic systems with respect to the group generated by the 2π/3 rotation, whereas planar quadratic equivariant systems with respect to other rotations are found only among linear systems, for the only solution G of system (10) is then trivial. Similarly, we will see that the only non-linear quadratic reversible equivariant systems appear with the π/3 or π rotation.
From now on we consider planar (n = 2) quadratic systems (7), parametrized as

ẋ = −(a_0 + a_{00}x + a_{−1,1}y + a_{10}x² + a_{01}xy + a_{12}y²),
ẏ = b_0 + b_{1,−1}x + b_{00}y + b_{21}x² + b_{10}xy + b_{01}y². (12)

We further restrict ourselves to rotational groups of symmetries. Recall that the cyclic multiplicative group Γ_q of order q generated by the 2π/q-angle rotation around the origin is naturally isomorphic to the additive group Z_q. From now on we refer to (reversible) Γ_q-equivariant systems as (reversible) Z_q-equivariant systems. It is not difficult to find the corresponding linear subspace of the space of parameters completely describing (reversible) Z_q-equivariant systems, and we present their description below. In a different form from our setting (complex parametrization), and for planar polynomial systems of degree not bounded by 2, the Z_q-equivariant systems were completely determined in [6] and the reversible Z_q-equivariant systems in [7]. Let

R_q = [ cos φ_q  −sin φ_q ; sin φ_q  cos φ_q ]

be the rotation in the counter-clockwise direction by the angle φ_q = 2π/q, where the integer q ∈ {2, 3, . . . } is fixed. In the following propositions we present the forms of (reversible) Z_q-equivariant systems in terms of coefficient matrices. As we shall see, only Z_3-equivariant and reversible Z_6- and reversible Z_2-equivariant planar quadratic systems are in some sense non-trivial.

Proposition 2. For the planar system (12) the following holds:
1. System (12) is Z_2-equivariant if and only if f_0 = 0 and G = 0, that is, the system is linear.
2. System (12) is Z_3-equivariant if and only if f_0 = 0, F_1 is of the form (15) and G_1, G_2 are of the form (16).
3. System (12) is Z_q-equivariant for any q ≥ 4 if and only if f_0 = 0, G = 0 and F_1 is of the form (15).
The proof will be given after the following proposition in which we describe reversible Z q -equivariant systems. Note that only even q makes sense.
Before starting to prove both propositions, we need a preliminary argument involving eigenvalues of the Kronecker product [5]. Let us denote by eig(A) = (λ_1, λ_2, . . . , λ_n) the list of eigenvalues (possibly complex) of an n × n real matrix A, listed according to their algebraic multiplicities; the order of the listing is irrelevant. Then eig(R_q) = (e^{iφ_q}, e^{−iφ_q}). It is known that the eigenvalues of the Kronecker product of matrices A and B are all products λ_iμ_j running through all eigenvalues λ_i of A and μ_j of B. Then, since R_q ⊗ (R_q^tr)^{−1} = R_q ⊗ R_q and eig(R_q ⊗ R_q) is the list of all four products of eigenvalues, we have eig(R_q ⊗ R_q) = (e^{2iφ_q}, e^{−2iφ_q}, 1, 1). When solving equality (10) for G, we have only the trivial solution G = 0 unless there exists a joint member of the lists eig(R_q) and eig(εR_q ⊗ R_q), due to the Sylvester type of the equation. If ε = 1, the only instance when G is not necessarily zero is q = 3; in this case we have e^{2iφ_3} = e^{−iφ_3}. If ε = −1, this happens when q ∈ {2, 6}, since then e^{2iφ_q} = −e^{−iφ_q}.
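The eigenvalue-matching argument can be replayed numerically; the scan below over small q recovers exactly the exceptional cases named above:

```python
import numpy as np

def R(q):
    phi = 2.0 * np.pi / q
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

def shares_eigenvalue(q, eps):
    # Equation (10) has a non-trivial solution G only if eig(R_q) and
    # eig(eps * R_q kron R_q) share a member (Sylvester-type equation).
    left = np.linalg.eigvals(R(q))
    right = np.linalg.eigvals(eps * np.kron(R(q), R(q)))
    return any(np.isclose(l, r) for l in left for r in right)

equivariant = [q for q in range(2, 9) if shares_eigenvalue(q, 1)]
reversible = [q for q in range(2, 9) if shares_eigenvalue(q, -1)]
assert equivariant == [3]        # only q = 3 admits non-trivial G when eps = 1
assert reversible == [2, 6]      # only q = 2 and q = 6 when eps = -1
```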
The following lemma will be used at several places in the proofs.

Lemma 1.
(i) The set of all real matrices commuting with the 2 × 2 rotation matrix R_q, q ≥ 3, is the family S = { [ a  −b ; b  a ] : a, b ∈ R }. A matrix F is similar to a member of the family S if and only if F = λId for some λ ∈ R or it has a conjugated pair of non-real eigenvalues.
(ii) The set of all real matrices Jordan commuting with the 2 × 2 rotation matrix R_4 is the family T = { [ a  b ; b  −a ] : a, b ∈ R }. A matrix F is similar to a member of the family T if and only if it is either zero or it has nonzero eigenvalues λ, −λ.
Proof. The forms of the matrices in the families S and T can be easily validated by direct elementary computations. All matrices in S are either of the form aId or have eigenvalues a ± ib. In T, all non-zero members have the pair of non-zero eigenvalues ±√(a² + b²). The argument for the second claims in (i) and (ii) is that similarity preserves the mentioned eigenvalue types and there are no other possibilities.
Proof of Proposition 2. Set ε = 1. Validating the if statements, being an elementary exercise, is left to the reader. By using the above-mentioned argument on the eigenvalues of R_q and R_q ⊗ R_q, we observe that G = 0 unless q = 3. Moreover, as 1 is not an eigenvalue of R_q for any q, we must have f_0 = 0 in all cases due to (8). Clearly, every linear system is Z_2-equivariant, since F(−x) = −F(x) for all x. For q ≥ 4, by (9) the matrix F_1 must commute with R_q and it is thus of the form (15) by (i) of Lemma 1. That G_1, G_2 must be of the form (16) when q = 3 can be obtained by a straightforward solution of the linear system (10) with R_3 in place of R.
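One can also confirm the dimension counts behind the proof by computing the nullity of the Sylvester-type operator in (10), read here as GR_q = ε(R_q ⊗ R_q)G (our reconstruction of the equation); the nullity 2 for q = 3, ε = 1 corresponds to a two-parameter family of admissible quadratic parts, and the nullity 8 for q = 2, ε = −1 to arbitrary quadratic terms:

```python
import numpy as np

def R(q):
    phi = 2.0 * np.pi / q
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

def nullity(q, eps):
    # Row-major vectorization of G (a 4 x 2 matrix, so vec(G) lives in R^8):
    # vec(G R)         = (I_4 kron R^tr) vec(G)
    # vec((R kron R)G) = ((R kron R) kron I_2) vec(G)
    Rq = R(q)
    M = np.kron(np.eye(4), Rq.T) - eps * np.kron(np.kron(Rq, Rq), np.eye(2))
    return 8 - np.linalg.matrix_rank(M, tol=1e-9)

assert [nullity(q, 1) for q in (2, 3, 4, 5, 6)] == [0, 2, 0, 0, 0]
assert nullity(2, -1) == 8 and nullity(6, -1) > 0
```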
Proof of Proposition 3. Now let ε = −1. We again write the proof only for the only if statements. By (8), f_0 = 0 for all q ≠ 2, since only in the case q = 2 is the number −1 an eigenvalue of R_2 = −Id. If q = 2, it is easy to see that any constant and any quadratic terms can be present in a reversible Z_2-equivariant system, which proves (1).
If q = 4, F_1 must Jordan commute with R_4, and (ii) of Lemma 1 gives the desired form. Moreover, a reversible Z_6-equivariant system is simultaneously Z_3-equivariant, so F_1 must also be of the form (15). Then F_1 = 0. The form of G follows from the direct solution of the homogeneous system given by the Sylvester Equation (10). Finally, when q = 2k ≥ 8, recall (6) and apply R_q² = R_k in order to obtain

F(R_k x) = R_k F(x),

holding true for all x. This implies that a reversible Z_q-equivariant system must as well be Z_k-equivariant. Since k ≥ 4, by (3) of Proposition 2, the system can only be linear. The matrix F_1 must furthermore satisfy R_qF_1 + F_1R_q = 0, and since the trace of R_q equals 2 cos(π/k), k ≥ 4, and is non-zero, there must be F_1 = 0.

Transformation to (Reversible) Z q -Equivariant Systems
Our next interest is to classify the quadratic systems which can be transformed to a Γ-equivariant or reversible Γ-equivariant system by a bijective affine transformation of R^n. Based on this result we will then compute the affine varieties in the space of parameters of planar systems. The next theorem is the cornerstone of our further considerations. Without loss of generality, to simplify computations we will assume that the origin is a singular point of the investigated systems, F(0) = 0. Note that the map x → Sx + x_0 is a bijective affine transformation of R^n whenever S is a real invertible matrix. Clearly, the inverse map is affine as well.
Theorem 3.
(1) If there exists an affine transformation of the form x = Sy + x_0, with S invertible, which transforms a given quadratic system ẋ = F_1x + (Id ⊗ x^tr)Gx, with F_1 and G given by (7), to a (reversible) Γ-equivariant system, then, introducing the notation (19)-(21), the identities (22)-(24), where R is a generator of the group Γ and B = SRS^{−1}, must be valid.
(2) If there exists a linear transformation of the form x = Sy which transforms a system given in (1) of this theorem to a (reversible) Γ-equivariant one, then the corresponding identities hold with x_0 = 0, where B = SRS^{−1} and R generates the group Γ.

Proof.
(1) By the substitution x = Sy + x_0 we rewrite the system as a system in the variable y. Suppose that the obtained system is (reversible) Γ_R-equivariant. Then the constant term vector g_0 = S^{−1}y_0 must satisfy (8) with g_0 in place of f_0. Then Rg_0 = εg_0, which after multiplying by S gives By_0 = εy_0, and (22) follows. Writing explicitly the linear and quadratic terms and using the proposed notations (19)-(21) gives (23) and (24).
(2) Follow the steps in the proof of (1) in the reversed direction.
Problem. We aim to find algebraic conditions on the parameters of planar quadratic systems for which there exists either a bijective linear transformation or a non-linear bijective affine transformation bringing the system to a (reversible) Z_q-equivariant one.

Theorem 4 (Elimination Theorem).
Fix an elimination term order on the ring k[x_1, . . . , x_m] with x_1 > x_2 > · · · > x_m, and let G be a Gröbner basis for an ideal I of k[x_1, . . . , x_m] with respect to this order. Then, for every 0 ≤ ℓ ≤ m, the set G ∩ k[x_{ℓ+1}, . . . , x_m] is a Gröbner basis for the ℓ-th elimination ideal I^(ℓ) = I ∩ k[x_{ℓ+1}, . . . , x_m].
The radical of an ideal I is the ideal √I = { f : f^p ∈ I for some p ∈ N }. An ideal I is a radical ideal if I = √I. A proper ideal I is primary if fg ∈ I implies that f ∈ I or g^p ∈ I for some p ∈ N. An ideal I is prime if fg ∈ I implies that f ∈ I or g ∈ I. If an ideal I is primary, then √I is prime; in this case √I is called the associated prime ideal of I. A primary decomposition of I is a finite intersection I = Q_1 ∩ Q_2 ∩ · · · ∩ Q_s where all the ideals Q_j are primary. It is called a minimal primary decomposition if the associated prime ideals √Q_j are all distinct and none of the Q_j contains the intersection of all the others. Note that V(I) = V(√I). The following approach will be extensively used in our computations. We will be interested in solutions of systems of polynomial equations f_1 = f_2 = . . . = f_p = 0. For a certain elimination ideal I^(ℓ) associated to I = ⟨f_1, . . . , f_p⟩, obtained by the computation of a Gröbner basis with the routine eliminate of SINGULAR [10], we will then compute the minimal associated primes of I^(ℓ) applying the routine minAssGTZ [11,12] of SINGULAR.
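A toy run of the Elimination Theorem can be done in sympy (the paper itself uses SINGULAR's eliminate; sympy computes lex Gröbner bases but offers no primary decomposition). Eliminating x from ⟨x² + y² − 1, x − y⟩ leaves a univariate generator in y:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Lex Groebner basis with x > y; by the Elimination Theorem, the basis
# elements free of x generate the first elimination ideal.
gb = sp.groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
elim = [p for p in gb.exprs if x not in p.free_symbols]

assert len(elim) == 1
# The eliminated equation vanishes at y = +-1/sqrt(2), the y-coordinates
# of the intersection of the circle with the line x = y.
assert sp.simplify(elim[0].subs(y, sp.sqrt(2) / 2)) == 0
```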

Calculations
For the family of systems (the notation goes back to [13])

ẋ = −(a_{00}x + a_{−1,1}y + a_{10}x² + a_{01}xy + a_{12}y²),
ẏ = b_{1,−1}x + b_{00}y + b_{21}x² + b_{10}xy + b_{01}y², (27)

we aim to find (at least necessary) conditions on the parameters a = (a_{00}, a_{−1,1}, a_{10}, a_{01}, a_{12}) and b = (b_{1,−1}, b_{00}, b_{21}, b_{10}, b_{01}) which would assure that it is possible to perform such a linear or non-linear affine transformation of the coordinate system that the system becomes (reversible) Z_q-equivariant. Let us for a moment assume that we allow any affine transformation, linear or non-linear. Then recall that, with chosen ε ∈ {−1, 1} and q, Equations (22)-(24) must be satisfied for some B = [ α β ; γ δ ] and a vector x_0 = (ρ, σ)^tr. At the moment we will not impose any condition on x_0.
A glance at the equality (22) and the fact that ε is not an eigenvalue of B unless q = 2 and ε = −1 (a case we treat separately) gives us that y_0 = (F_1 + X_0)x_0 = 0, that is, x_0 = (ρ, σ) must be a singular point of the system. This produces two polynomial equations h_1 = h_2 = 0, where

h_1 := a_{00}ρ + a_{10}ρ² + σ(a_{−1,1} + a_{01}ρ + a_{12}σ),

and h_2 is obtained analogously from the second equation of the system. The next four equations arise from (23) and the notation (21).
1. Fix the integer q and ε ∈ {−1, 1}.
2. Write down the polynomial system h_1 = h_2 = · · · = h_16 = 0 arising from Equations (22)-(24).
3. Form the ideal J_0 = ⟨h_1, . . . , h_16⟩ in the ring of polynomials in the variables α, β, γ, δ, ρ, σ and the parameters a, b.
4. Elimination of the six variables α, β, γ, δ, ρ and σ from the ideal J_0 by applying the routine eliminate in SINGULAR gives the elimination ideal J^(6) =: I_q.
5. By running the procedure minAssGTZ in SINGULAR, compute the minimal primary components of I_q and analyse the properties of the systems belonging to each component.
The exact results are presented in the theorems below. Some of the computations and all resulting ideals are given in the text file accessible at http://www.camtp.uni-mb.si/camtp/amade/Code-rotations.txt.
A remark is in order at this point. Assume that the elimination ideal and its decomposition into minimal primary ideals I q = I q1 ∩ · · · ∩ I qk , k ≥ 1, has been obtained by steps (1)-(5) above. Then the necessary condition that there exists a bijective affine transformation, which brings the system to a (reversible) Z q -equivariant one is that the vector of its parameters (a, b) belongs to one of the varieties V(I qi ), i ∈ {1, . . . , k}.
Conversely, the family of partial solutions (a*, b*) ∈ V(I_qi), i ∈ {1, . . . , k}, which can be extended to solutions, that is, for which there exists a real 6-tuple (α, β, γ, δ, ρ, σ) solving the polynomial system (32), provides systems which can be transformed to (reversible) Z_q-equivariant ones by affine transformations. The families in a certain component typically have interesting common properties. In our case, we will get two components for each q, one corresponding to systems transformable by linear transformations and the other including systems which can be transformed to symmetric ones by non-linear affine transformations.
In the following theorems, we present necessary and in some cases also sufficient conditions for transformation of the family of systems (27) to (reversible) Z q -equivariant systems. In all of these theorems, the sufficient conditions are easy to check. So we give only arguments for the necessary conditions.
We say that a system from the family (27) is trivial if all the parameters a and b are zero. To shorten the notation, we introduce the following families.
• L_q (resp. rL_q) will denote the family of all systems (27) such that for each of them there exists a bijective linear transformation sending it to a Z_q-equivariant (resp. reversible Z_q-equivariant) one. The transformation in general depends on the system.
• A_q (resp. rA_q) will denote the family of all systems (27) such that for each of them there exists a non-linear bijective affine transformation sending it to a Z_q-equivariant (resp. reversible Z_q-equivariant) one. The transformation in general depends on the system.

Theorem 6. Let q = 2 or q ≥ 4. For any system in the family (27) we claim:
1. The system belongs to L_q if and only if the system is linear and additionally, when q ≥ 4, its Jacobian matrix is either a scalar multiple of the identity or it has a pair of conjugated non-real eigenvalues.
2. The system belongs to A_2 if and only if the system is linear with a singular Jacobian matrix. The system belongs to A_q, q ≥ 4, if and only if it is trivial.
Proof. Proposition 2, assertion (1), tells us that the set of all Z 2 -equivariant systems is exactly the set of all linear systems (as the origin is a singular point by our assumption). The set of all linear systems is clearly invariant under any bijective linear transformation of R 2 .
For q ≥ 4, as every Z_q-equivariant quadratic system is linear by (3) of Proposition 2, the system must be linear, and its Jacobian matrix F_1 must commute with B, B = SR_qS^{−1}, by (9) with ε = 1. In turn, S^{−1}F_1S commutes with R_q, so F_1 must be similar to a member of the family S in (i) of Lemma 1. Now, the second assertion of (i) provides the form of F_1, which proves (1).
For a non-linear affine transformation, involving also a translation, which converts the original system to a Z_2-equivariant one to exist, the system must be linear and the matrix of the linear part must be singular. Indeed, if we substitute x = y − x_0, x_0 ≠ 0, into ẋ = F_1x, the obtained system ẏ = −F_1x_0 + F_1y must have the origin as a singular point. Therefore, F_1x_0 = 0 and F_1 must be singular. If q ≥ 4, F_1 must be a singular member of the family S in (i) of Lemma 1, thus F_1 = 0.
In the next theorem we handle the non-trivial case q = 3.

Theorem 7. For any system in the family (27) the following holds:
1. If the system belongs to L_3, then its parameters belong to the variety V(I_31) and, if the linear part is non-zero, its determinant must be positive: a_{−1,1}b_{1,−1} − a_{00}b_{00} > 0.
Proof. In this case we apply steps 1-5 above. We first generate the ideal J_0, see (33), and then J^(6) as the 6th elimination ideal, eliminating α, β, γ, δ, ρ, σ by applying eliminate of SINGULAR. A Gröbner basis of this ideal consists of 11 generators; this gives the ideal I_3 (http://www.camtp.uni-mb.si/camtp/amade/Code-rotations.txt). It turns out that the ideal I_3 has two minimal associated primes, provided by the routine minAssGTZ in SINGULAR, the ideals I_31 and I_32.
As we have not imposed any conditions on the translation vector (ρ, σ) so far, the variety V(I_3) contains all systems which can be transformed to a Z_3-equivariant one by a linear or non-linear affine transformation. We repeated the above procedure once more, adding the equations ρ = 0 and σ = 0 to the equations h_1 = . . . = h_16 = 0, and then the computation of minimal associated primes actually resulted in I_31.
To handle non-linear transformations, we have to implement the condition that (ρ, σ) is non-zero. We have found the following. It suffices to add only the condition ρ ≠ 0, by introducing a new variable w_1, constructing a new ideal as the intersection J_0 ∩ ⟨1 − w_1ρ⟩, and then computing its 7-th elimination ideal J^(7) by eliminating the variables α, β, γ, δ, ρ, σ, w_1. It turns out that this ideal has only one minimal associated prime, say J_71. Repeating the same procedure with σ replacing ρ, adding a variable w_2, computing the elimination ideal eliminating α, β, γ, δ, ρ, σ, w_2 and decomposing it into its minimal associated primes, gives again a sole ideal, let us name it J_72. They happen to be not only equal, J_71 = J_72, but also equal to the second component I_32 of I_3. Therefore, as the condition (ρ, σ) ≠ (0, 0) gives rise to the solution of the problem of describing A_3, it can be implemented by introducing the union of the ideals (J_0 ∩ ⟨1 − w_1ρ⟩) ∪ (J_0 ∩ ⟨1 − w_2σ⟩) and computing its minimal associated primes, which by the above consideration is only one, equal to I_32.
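The 1 − wρ device above is the classical Rabinowitsch/saturation trick; it can be replayed on a toy ideal with sympy (the paper itself uses SINGULAR). To impose ρ ≠ 0 on V(⟨ρx⟩), we adjoin 1 − wρ and eliminate w; the surviving component is ⟨x⟩:

```python
import sympy as sp

w, rho, x = sp.symbols('w rho x')

# Lex Groebner basis with w > rho > x of <rho*x, 1 - w*rho>:
# on the locus rho != 0, the condition rho*x = 0 forces x = 0.
gb = sp.groebner([rho * x, 1 - w * rho], w, rho, x, order='lex')
elim = [p for p in gb.exprs if w not in p.free_symbols]
assert elim == [x]
```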
We next describe the transformation to reversible Z_q-equivariant systems; here we set ε = −1 throughout. Only even q make sense, as we know, and the only non-trivial cases, as we shall see, occur when q = 2, q = 4 or q = 6.
Proof. Here we have to take into account only Equation (23). Setting ρ = σ = 0 instantly gives (1). For establishing (2), eliminating the parameters ρ, σ and computing the minimal associated primary components gives us one component, the ideal I_2.
Assertion (3) follows by extending partial solutions.
Finally, let us present the last, almost trivial cases, for q ≥ 4.
1. System (27) belongs to rL_4 if and only if it is linear and, if non-trivial, its Jacobian has a pair of antipodal real eigenvalues.
2. If a system (27) belongs to rL_6, then its parameters belong to V(I_6). Conversely, if for a vector of parameters (a*, b*) ∈ V(I_6) there exist real α, β, γ, δ solving the system (32) with ρ = σ = 0 and ε = −1, then such a system is a member of rL_6.
3. Only trivial systems belong to rL_q for q = 2l > 6.
4. Only trivial systems belong to rA_q when q ≥ 4.
Proof. (1) S^{−1}F_1S must Jordan commute with R_4, which implies that F_1 is similar to a member of the family T in (ii) of Lemma 1. Therefore, F_1 is of the required form.
(2) The ideal I_6 is obtained by executing steps 1-5 above. The second claim follows by extending partial solutions. (3) Notice that R_q² = R_l, which implies that rL_q ⊂ L_l, and by (2) of Theorem 6, L_l contains only trivial systems for all l ≥ 4. (4) Similarly as above, rA_q ⊂ A_l, l ≥ 2. If l = 2, the systems in A_l are all linear with a singular Jacobian matrix, see (2) of Theorem 6. On the other hand, the Jacobian matrix F_1 must also satisfy F_1B + BF_1 = 0, B = SR_4S^{−1}, for some invertible S. It follows that S^{−1}F_1S must Jordan commute with R_4. The only singular matrix Jordan commuting with R_4 is the zero matrix by (ii) of Lemma 1. Thus, F_1 = 0.
For q ≥ 6, we can apply a geometrical reasoning. If the system has a line of singular points, then as such it cannot be reversible Z_6-equivariant unless it is trivial. The computational procedure does not give any real solutions in this case. Alternatively, if the translation point were an isolated singular point, then, due to the reversible Z_q symmetry, the system would have 7 singular points, which is not possible for a planar quadratic system. For q ≥ 8 we can apply a similar argument, or we can use the relation rA_q ⊂ A_l and (2) of Theorem 6.

Examples
The following examples illustrate our work.

Example 1. A system whose vector of coefficients belongs to the variety V(I_31) is shown in Figure 1, so it is quite likely that there exists a linear transformation which brings this system to a Z_3-equivariant one. After setting ε = 1, q = 3 and ρ = σ = 0 in the equations h_1 = . . . = h_16 = 0 and solving this system for α, β, γ, δ, we get a solution which is not unique, since the inverse of B is a generating symmetry as well. The linear transformation S which changes the system into a Z_3-equivariant one is found from the eigenvalue decomposition of B. The resulting system is clearly Z_3-equivariant with a node at the origin, see Figure 2.

Example 2. The next system is one which we transform to a reversible Z_6-equivariant system by a bijective linear transformation. We claim that the system below is a member of rL_6:

ẋ = x² + xy − 2y²,
ẏ = 2x² − 2xy − y²/2,

see Figure 3. Proceeding similarly as above, but this time taking the ideal I_6 instead, we get the transformed system, see Figure 4.

Example 3. An example of a system in A_3 will be constructed so that we first fix the origin to be a saddle, then we choose a member (a, b) ∈ V(I_32) such that the corresponding system has four singular points. As above, we find the matrix S representing the linear part of the transformation and additionally the translation vector (ρ, σ). We start with the system ẋ = −x − x² + 4x…, whose parameter vector belongs to V(I_32), see Figure 5. The transformed system is shown in Figure 6, where the affine map (x, y) = S(u, v) + (ρ, σ) is defined by S = [ −√3 1 ; 0 1 ], (ρ, σ) = (−1/3, 1/6).

Example 4.
Our last example presents a system which can be sent to a reversible Z_2-equivariant one; here we actually need only a translation, because R_2 = −Id. We start with the system

ẋ = 2x² − y − xy + (25/8)y²,

depicted in Figure 7, and obtain a transformed system whose phase portrait is drawn in Figure 8. Note that this provides a simple method to construct a system with two singular points of the same kind.
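Reversible Z_2-equivariance is particularly easy to test symbolically, since R_2 = −Id turns condition (5) into F(−x) = F(x), i.e., the field must have no linear terms. The system below uses illustrative coefficients, not those of Example 4:

```python
import sympy as sp

x, y = sp.symbols('x y')

# A field with only constant and quadratic terms (illustrative coefficients):
# by Proposition 3 (1), such a system is reversible Z_2-equivariant.
F = sp.Matrix([1 + 2 * x**2 - x * y,
               -3 + x**2 + 5 * y**2])

# Condition (5) for R_2 = -Id and sigma(R_2) = -1 reduces to F(-x) = F(x).
assert (F.subs({x: -x, y: -y}) - F).expand() == sp.zeros(2, 1)
```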