About Cogredient and Contragredient Linear Differential Equations †

† The results of the work obtained in the framework

Abstract: The notions of cogredience and contragredience, which are of great importance for the question of the algebraic independence of solutions of linear differential equations, are discussed in the paper. Conditions for the equivalence of two definitions of cogredience and contragredience are found.


Introduction
Let M(q, K) be the set of all q × q matrices with elements from a ring K, GL(q, K) the set of all invertible matrices in M(q, K), C[z±1] the ring C[z, z−1], and F⟨v_1, . . . , v_n⟩ the smallest differential field containing the field F and the functions v_1, . . . , v_n (see Chapter 1 in [1]).
In the theory of transcendental numbers, one of the main methods remains the Siegel–Shidlovsky method (see [2,3]), by which one can prove the transcendence and algebraic independence of the values of entire functions of a certain class (the so-called E-functions).
Siegel calls an entire function f(z) = ∑_{n=0}^∞ c_n z^n/n! an E-function if: (1) all the numbers c_n belong to an algebraic field K of finite degree over Q; (2) for arbitrary ε > 0, |c_n| = O(n^{εn}) as n → ∞, where |α| denotes the maximum of the absolute values of the algebraic number α and all its conjugates in the field K; (3) for arbitrary ε > 0, the least common denominator of c_1, . . . , c_n is O(n^{εn}), n → ∞.
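For orientation, a classical example (standard in the literature on E-functions, not taken from this paper): the exponential function satisfies all three conditions trivially (here den denotes the least common denominator).

```latex
e^{z} \;=\; \sum_{n=0}^{\infty} \frac{z^{n}}{n!},
\qquad c_{n} = 1:\quad
c_{n}\in\mathbb{Q},\qquad
|c_{n}| = 1 = O\!\left(n^{\varepsilon n}\right),\qquad
\operatorname{den}(c_{1},\dots,c_{n}) = 1 .
```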
To apply the Siegel-Shidlovsky method, it is necessary that the functions under consideration constitute a solution of a system of differential equations and be algebraically independent over C(z).
The question of the algebraic independence of solutions of linear differential equations and of systems of such equations is also of great importance in differential algebra, the analytic theory of differential equations, the theory of special functions, and analysis in a broad sense. As shown in the works of E. Kolchin [4] and of F. Beukers, W. Brownawell, and G. Heckman [5], this question is largely reduced to checking the cogredience and contragredience conditions. We consider two systems of linear homogeneous differential equations of the first order, y′ = A_k y, A_k ∈ M(q, C(z)), q ≥ 2, k = 1, 2.

The function lϕq(ν̄; λ̄; z) satisfies the (generalized) hypergeometric differential equation L(ν̄; λ̄; z)y = 0. An explicit form of the equation L(ν̄; λ̄; αz^p)y = 0, obtained from L(ν̄; λ̄; z)y = 0 by the substitution z → αz^p, where α ∈ C, p ∈ N, is given in [8,10]. The Wronskian of the equation L(ν̄; λ̄; αz^p)y = 0 is given by Lemma 6 in [10], where c ∈ C \ {0}, ε = δ_q^l, ε_1 = δ_q^{l+1}, and δ_i^j is the Kronecker delta. If ν̄ ∈ Q^l, λ̄ ∈ Q^q, l < q, and α is an algebraic number, then the function lϕq(ν̄; λ̄; αz^{q−l}) is an E-function (see [2]; Chapter 5 in [3]).
The first example of contragredience (without mentioning this term) for systems of differential equations related to hypergeometric functions was apparently constructed by Yu. V. Nesterenko (see Lemma 8 in [11]). Cogredience and contragredience theorems for generalized hypergeometric differential equations, many of which give necessary and sufficient conditions, were proved by the author in [8] and [7] (Lemma 14). At the same time, in article [8], a narrower definition was in fact used. According to this definition, in equality (1), g = z^r e^{γz} or g = z^r exp(γz^p + γ_1 z^{p_1}), where r, γ, γ_1 ∈ C, p, p_1 ∈ N. The examples constructed in [10–12] were also related to case (3).
In this paper, we find conditions under which, in the definitions of cogredient and contragredient equations or systems given by equalities (1), one can restrict oneself to case (3).

Theorem 1. Let Φ_k = (v_{k,t,s})_{t,s=1,...,q} be the fundamental matrices of the systems (A_1), (A_2). Then, for the cogredience and contragredience of the systems (A_1), (A_2), it is necessary and sufficient that equalities (1) be satisfied with conditions (3).

Proof of Theorem 1
Lemma 1 (Lemma 6 in [8]). Suppose that F is a differential field with field of constants C. Suppose that, for every k, Φ_k = (v_{k,s}^{(i)})_{i=0,...,q_k−1; s=1,...,q_k} is the fundamental matrix of the differential equation y^{(q_k)} + P_{k,q_k−1} y^{(q_k−1)} + · · · + P_{k,0} y = 0, q_k ≥ 2, P_{k,s} ∈ F, and |Φ_k| ∈ F. Suppose that the field of constants of the differential field L = F⟨v_{1,1}, . . . , v_{n,q_n}⟩ is C and that deg tr_F L < q_1^2 + · · · + q_n^2 − n. Then, either deg tr_F F⟨v_{k,1}, . . . , v_{k,q_k}⟩ < q_k^2 − 1 for some k or, for some indices 1 ≤ j < k ≤ n, the equality q_j = q_k = q holds, as well as at least one of the equalities

Φ_k = aBΦ_jC,  Φ_k = aB(Φ_j^⊤)^{−1}C,  (4)

where a ∈ L, a^q ∈ F, B ∈ GL(q, F), C ∈ GL(q, C).
An analog of Lemma 1 for q_1 = · · · = q_n was proved by E. Kolchin [4]. For the groups Sp(2k, C) and SO(n, C), there is a generalization of the assertion of Lemma 1 (see Proposition 1.8.2 in [13]). An implicit analog of Lemma 1 for Galois groups containing SL(m) or Sp(2k) was used in [5] in the proof of Theorem 2.3.
Setting F = C(z) in Lemma 1 and changing, if necessary, the numbering of the equations, equalities (4) can be written in a form in which A is a matrix whose elements are analytic functions that are, generally speaking, multivalued. Let A_k be the matrix of coefficients of the system corresponding to the differential equation with number k, k = 1, . . . , n.
The next lemma follows from Lemma 1.

Lemma 3. Let V be an arbitrary differential field of analytic functions that contains C(z) but does not contain irrational functions whose logarithmic derivatives belong to C(z). Then any functions that are linearly independent over C(z) and whose logarithmic derivatives belong to C(z) will be linearly independent over V.

Proof of Lemma 3.
Let

f_1 u_1 + · · · + f_n u_n = 0,  (6)

where u_1, . . . , u_n are the given functions, f_k ∈ V, and n ≥ 2 is the smallest possible. Differentiating equality (6) and using u_k′ = a_k u_k with a_k = u_k′/u_k ∈ C(z), we obtain

(f_1′ + a_1 f_1) u_1 + · · · + (f_n′ + a_n f_n) u_n = 0.  (7)

Since the number n is minimal, the linear combinations on the left-hand sides of equalities (6) and (7) must be proportional. In this way, f_k′/f_k + a_k does not depend on k, so the logarithmic derivative of each ratio f_j/f_k belongs to C(z); by the choice of V, this gives f_j/f_k ∈ C(z). Dividing (6) by f_n, we obtain a nontrivial linear dependence of u_1, . . . , u_n over C(z), a contradiction.

Lemma 4. Let the functions (v_{k,i,s}), k = 1, . . . , n; i, s = 1, . . . , q_k; (i, s) ≠ (q_k, q_k), (9) be algebraically independent over C(z). Then the field V generated over C(z) by functions (9) does not contain irrational functions whose logarithmic derivatives belong to C(z).

Corollary 1.
Any functions that are linearly (algebraically) independent over C(z) and whose logarithmic derivatives belong to C(z) will, under the conditions of Lemma 4, be linearly (respectively, algebraically) independent over V.
Proof of Lemma 4. If functions (9) are algebraically independent over C(z), then it is convenient to carry out all operations with them formally, as with the corresponding variables (x_{k,i,s}), k = 1, . . . , n; i, s = 1, . . . , q_k; (i, s) ≠ (q_k, q_k). (10)
The fundamental matrix Φ_k then takes a form in which x̃_{k,q_k,q_k} is a rational function of variables (10), defined by the equation |Φ_k| = b_k ∈ C(z), equivalent to

A_{k,q_k,1} x_{k,q_k,1} + · · · + A_{k,q_k,q_k−1} x_{k,q_k,q_k−1} + A_{k,q_k,q_k} x̃_{k,q_k,q_k} = b_k,

whence

x̃_{k,q_k,q_k} = (b_k − A_{k,q_k,1} x_{k,q_k,1} − · · · − A_{k,q_k,q_k−1} x_{k,q_k,q_k−1}) / A_{k,q_k,q_k},  (11)

where A_{k,q_k,1}, . . . , A_{k,q_k,q_k} are the cofactors of the corresponding elements of the matrix Φ_k, which are polynomials in variables (10). Note that, for a different choice of functions (9) included in W_k, the function b_k ∈ C(z) is, generally speaking, multiplied by some factor from C. The derivatives with respect to z of variables (10) can be calculated formally, proceeding from the systems of Equation (8) and equalities (11).
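The defining relation for x̃_{k,q_k,q_k} is simply the cofactor (Laplace) expansion of |Φ_k| along the last row, solved for the last entry. A minimal numeric sketch of this step (the 3×3 matrix, the target value b, and all helper names here are hypothetical, not taken from the paper):

```python
# Solving the last-row cofactor expansion A_{q,1} x_{q,1} + ... + A_{q,q} x~ = b
# for the unknown (q,q) entry.  The matrix entries and b are arbitrary stand-ins.

def minor(m, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]

def det(m):
    # Laplace expansion along the first row (fine for tiny matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def cofactor(m, i, j):
    return (-1) ** (i + j) * det(minor(m, i, j))

b = 5.0                       # prescribed value of the determinant |Phi|
phi = [[1.0, 2.0, 0.5],
       [0.0, 1.0, 3.0],
       [4.0, -1.0, None]]     # None marks the unknown last entry x~
q = 3

# the cofactors along the last row never involve the last row itself,
# so the expansion is linear in the unknown entry
known = sum(cofactor(phi, q - 1, j) * phi[q - 1][j] for j in range(q - 1))
x_tilde = (b - known) / cofactor(phi, q - 1, q - 1)
phi[q - 1][q - 1] = x_tilde
print(abs(det(phi) - b) < 1e-9)  # True
```

Because the cofactors are polynomials in the remaining entries and the relation is linear in the unknown, the solved entry is a rational function of the other variables, exactly as in the text.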
Let the function v satisfy the equation

y′ = ay, a ∈ C(z),  (12)

and belong to the field V; that is, it can be represented in the form

v = T = P/Q,  (13)

where T is a rational function over C of functions (9) and z, and P and Q are polynomials in the same functions, (P, Q) = 1. Replacing functions (9) in equality (13) by variables (10) and differentiating it with respect to z, we obtain T′ = P_1/Q_1, where P_1, Q_1 are polynomials in variables (10) and z, (P_1, Q_1) = 1. In view of equality (12), T′ = aT identically in (10) and z. This implies that, if in equality (13), instead of v_{k,1}, . . . , v_{k,q_k}, k = 1, . . . , n, we substitute any other linearly independent solutions of the corresponding systems (A_k) such that W_k = b_k, then the function u = T will also be a solution of Equation (12) and, therefore, u = cv, c ∈ C. Suppose that T really depends on the variables included in the matrix Φ_1 and that q_1 ≥ 3. Substitute into T, instead of the variables x_{1,i,1}, the functions v_{1,i,1} + λv_{1,i,2}, i = 1, . . . , q_1, where λ is a new variable, and, instead of the remaining variables (10), the corresponding functions (9). Obviously, the Wronskian W_k will not change in this case. Then

c(λ)v = T(v_{1,1,1} + λv_{1,1,2}, . . . , v_{1,q_1,1} + λv_{1,q_1,2}, v_{1,1,2}, . . . , v_{n,q_n−1,q_n}).  (14)
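The claim that W_k is unchanged can be seen already for q = 2: replacing the first column of the Wronskian matrix by (column 1) + λ·(column 2) is an elementary column operation, which leaves the determinant intact. A tiny numeric illustration (all values are arbitrary stand-ins, not data from the paper):

```python
# Column operation on a 2x2 Wronskian matrix: v -> v + lam * w in the first
# column leaves the determinant unchanged.

def det2(a, b, c, d):
    return a * d - b * c

v, dv = 0.7, -0.2      # a solution and its derivative at a sample point
w, dw = 1.3, 0.5       # a second, independent solution
lam = 2.5              # the new parameter lambda

W_before = det2(v, w, dv, dw)
W_after = det2(v + lam * w, w, dv + lam * dw, dw)
print(abs(W_before - W_after) < 1e-12)  # True: W_k is unchanged
```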
In view of the algebraic independence of functions (9), this equality is preserved when functions (9) are replaced with the corresponding variables (10). Differentiating equality (14) with respect to λ after such a replacement and then setting λ = 0, we get equality (15). We define the degree of a rational function with respect to any set of variables as the difference between the degrees of the numerator and the denominator in these variables. It is easy to check that, with such a definition, the degree of a product of rational functions equals the sum of the degrees of the factors, the degree of a sum does not exceed the maximum of the degrees of the terms, and taking a partial derivative with respect to a variable from the selected set decreases the degree. Hence, the degree of the right-hand side of equality (15) with respect to the set of variables x_{1,1,1}, . . . , x_{1,q_1,1} is strictly less than the degree of the left-hand side, except in the case when T does not depend on these variables and c′(0) = 0. Exactly the same reasoning shows that, in the case q_1 ≥ 3, T does not depend on x_{1,1,s}, . . . , x_{1,q_1,s}, 2 ≤ s ≤ q_1, and, in the case q_1 = 2, T does not depend on x_{1,1,2}. It remains to prove that, for q_1 = 2, T does not depend on x_{1,1,1}, x_{1,2,1}.
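The properties of this degree function are easy to test in the one-variable case. The sketch below (helper names `pmul`, `pderiv`, `rdeg` are invented for this illustration) checks that degrees add under multiplication and strictly drop under differentiation of a rational function:

```python
# "Degree" of a rational function num/den as deg(num) - deg(den), and two of
# the properties used in the text, demonstrated in one variable.

def deg(p):
    # degree of a polynomial given as a coefficient list (index = power)
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return float('-inf')

def pmul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

def psub(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x - y for x, y in zip(a, b)]

def pderiv(a):
    return [i * c for i, c in enumerate(a)][1:] or [0]

def rdeg(num, den):
    return deg(num) - deg(den)

P, Q = [1, 0, 0, 1], [2, 1]        # (x^3 + 1)/(x + 2): degree 3 - 1 = 2
P2, Q2 = [0, 1], [1, 0, 1]         # x/(x^2 + 1): degree 1 - 2 = -1

# degree of a product = sum of the degrees of the factors
print(rdeg(pmul(P, P2), pmul(Q, Q2)))   # 1  (= 2 + (-1))

# differentiation, (P/Q)' = (P'Q - PQ')/Q^2, strictly lowers the degree
dP = psub(pmul(pderiv(P), Q), pmul(P, pderiv(Q)))
print(rdeg(dP, pmul(Q, Q)))             # 1  (< rdeg(P, Q) = 2)
```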
Corollary 1 of Lemma 4 is obtained using Lemma 3 and the fact that any product of powers of functions whose logarithmic derivatives belong to C(z) is again a function with the same property.
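The fact invoked here is the additivity of logarithmic derivatives under products of powers; in formula form:

```latex
\frac{\bigl(f^{a} g^{b}\bigr)'}{f^{a} g^{b}}
  \;=\; a\,\frac{f'}{f} \;+\; b\,\frac{g'}{g}
  \;\in\; \mathbb{C}(z)
\qquad\text{whenever } \frac{f'}{f},\ \frac{g'}{g} \in \mathbb{C}(z).
```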
Proof of Theorem 1. It is enough to show that, if equalities (1) are not satisfied with conditions (3), then they are not satisfied with the condition g′/g ∈ C(z) either.

Consider the functions K_λ(z), which differ from the Bessel functions J_λ(z) with index λ only by the multiplier (z/2)^λ (Γ(λ + 1))^{−1} and which satisfy Equations (17). Theorem 1 allows us, in particular, to describe all algebraic identities between the functions K_λ(z) and the Kummer functions A_{µ,ν}(z).

Example 2. Suppose that 2λ ∈ C \ Z, α ∈ C, p ∈ N, and Φ_1, Φ_2, Φ_3 are the fundamental matrices corresponding to the collections of functions under consideration; these are fundamental matrices of second-order linear differential equations, which are easy to obtain from Equations (2) and (17). Then the corresponding identities hold (see [12]).
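The stated relation between K_λ and J_λ can be checked numerically from the two power series. This is a hedged sketch: the series used below for K_λ is the normalization ₀F₁(λ + 1; −z²/4), which is what the multiplier (z/2)^λ (Γ(λ + 1))^{−1} implies; the function names are invented for the illustration.

```python
import math

def K(lam, z, terms=40):
    # series sum_{n>=0} (-z^2/4)^n / (n! * (lam+1)_n), i.e. 0F1(lam+1; -z^2/4);
    # the normalization of J_lam implied by the multiplier in the text
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (-z * z / 4.0) / ((n + 1) * (lam + 1 + n))
    return s

def bessel_J(lam, z, terms=40):
    # standard power series of the Bessel function J_lam(z)
    return sum((-1) ** n * (z / 2.0) ** (2 * n + lam)
               / (math.factorial(n) * math.gamma(n + lam + 1))
               for n in range(terms))

lam, z = 0.3, 1.5
lhs = K(lam, z)
rhs = (z / 2.0) ** (-lam) * math.gamma(lam + 1) * bessel_J(lam, z)
print(abs(lhs - rhs) < 1e-12)  # True: K_lam(z) = (z/2)^(-lam) Gamma(lam+1) J_lam(z)
```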
Are there other algebraic identities between K_λ(z) and A_{µ,ν}(z)? Theorem 1 is applicable to the functions lϕq(ν̄; λ̄; αz^p) for l < q. Therefore, the necessary and sufficient conditions of cogredience and contragredience of generalized hypergeometric equations from article [8], which were found for case (3), are also valid for the general definition (1). This remark also applies to article [10], where conditions of cogredience and contragredience were likewise discussed. According to [8,10], Example 2 is the only case of cogredience or contragredience between the equations that are obtained from (2) and (17) by the substitution z → αz^p. Therefore, according to [5,8], algebraic identities between K_λ(z) and A_{µ,ν}(z) other than those derived from Example 2 do not exist.
2. The absence of cogredience and contragredience allows one to conclude that the corresponding generalized hypergeometric functions are algebraically independent over C(z) (see [5,8]). From this, the algebraic independence of their values follows (see [2,3]). Using the theorems of Chapters 11–13 of the book [3] (or their sharper analogs from [14]), one can also obtain lower estimates for the moduli of polynomials in these values.