Riemann–Hilbert Problem for the Matrix Laguerre Biorthogonal Polynomials: The Matrix Discrete Painlevé IV

Abstract: In this paper, the Riemann–Hilbert problem, with a jump supported on an appropriate curve on the complex plane with a finite endpoint at the origin, is used to study the corresponding matrix biorthogonal polynomials associated with Laguerre type matrices of weights, which are constructed in terms of a given matrix Pearson equation. First and second order differential systems are derived for the fundamental matrix that solves the mentioned Riemann–Hilbert problem. An explicit and general example is presented to illustrate the theoretical results of the work. The non-Abelian extensions of a family of discrete Painlevé IV equations are discussed.


Introduction
Mark Grigorievich Krein [1,2] was the first to discuss matrix extensions of real orthogonal polynomials. Relevant papers that appeared afterwards on this subject are [3,4] and, more recently, [5]. The Russian mathematicians Aptekarev and Nikishin [5] made a remarkable finding: for a class of discrete Sturm–Liouville operators, they solved the scattering problem and proved that the matrix polynomials satisfying a three-term recurrence relation with matrix coefficients, $x P_k(x) = A_k P_{k+1}(x) + B_k P_k(x) + A^*_{k-1} P_{k-1}(x)$, $k = 0, 1, \dots$, are orthogonal with respect to a positive definite matrix measure; i.e., they derived a matrix Favard theorem. Later, it was found that matrix orthogonal polynomials (MOP) sometimes satisfy properties analogous to those of the classical orthogonal polynomials. For example, for matrix versions of the Laguerre, Hermite and Jacobi polynomials, a scalar-type Rodrigues formula [6,7] and a second order differential equation [8-10] have been discussed. It has also been proven in [11] that operators of the form $D = \partial^2 F_2(t) + \partial^1 F_1(t) + \partial^0 F_0$ have as eigenfunctions different infinite families of MOPs. In [12,13], matrix extensions of the generalized polynomials considered in [14,15] were studied. Recently, in [16], the Christoffel transformation was extended to matrix orthogonal polynomials in the real line (MOPRL) and a new matrix Christoffel formula was obtained. Finally, in [17,18], more general transformations, of Geronimus and Uvarov type, were also considered.
Fokas, Its and Kitaev [19] found, in the context of 2D quantum gravity, that certain Riemann-Hilbert problems are solved in terms of orthogonal polynomials in the real line (OPRL). In a previous work, we considered matrices of weights W(z) satisfying the following conditions:
• The support of W(z) is a non-intersecting smooth curve on the complex plane with end points at ∞.
• The weight matrix entries were, in principle, Hölder continuous, and eventually requested to have holomorphic extensions to the complex plane.
• The matrix of weights W(z) is regular, i.e., $\det\left[W_{j+k}\right]_{j,k=0,\dots,n} \neq 0$, $n \in \mathbb{N} := \{0, 1, \dots\}$, where the moment of order n, $W_n$, associated with W is, for each $n \in \mathbb{N}$, given by $W_n := \int_\gamma z^n W(z)\,\mathrm{d}z$.
We obtained Sylvester systems of differential equations for the orthogonal polynomials and their second kind functions, directly from a Riemann-Hilbert problem with jumps supported on appropriate curves on the complex plane. We considered a Sylvester type differential Pearson equation for the matrix of weights. We also studied when the orthogonal polynomials and their second kind functions are solutions of second order linear differential operators with matrix eigenvalues. This was achieved by stating an appropriate boundary value problem for the matrix of weights. In particular, special attention was paid to non-Abelian Hermite biorthogonal polynomials in the real line, understood as those whose matrix of weights is a solution of a Sylvester type Pearson equation with given matrix polynomial coefficients of degree one. We also found nonlinear equations for the matrix coefficients of the corresponding three-term relations, which extend to the non-commutative case the discrete Painlevé I and the alternate discrete Painlevé I equations.
In this paper, we conduct a similar study but with more relaxed conditions, namely of Laguerre type.
The layout of the paper is as follows. In Section 2, we present the main definitions and theorems fundamental to the theory developed in this paper. In Section 3, we state the Riemann-Hilbert problem for matrix biorthogonal polynomials, discussing the Pearson-Laguerre matrix weights with a finite end point, and introduce the constant jump fundamental matrix and the important structure matrix. We also apply these ideas to obtain first and second order matrix differential operators of Laguerre type. In Section 4, we take a Laguerre type weight as a case study and reinterpret the results just stated for this specific and general example. Then, in Section 5, we end the paper with a matrix extension of an instance of the discrete Painlevé IV equation.

Matrix Biorthogonal Polynomials
We begin this section with the key definition of this paper.
Definition 1 (Laguerre type matrix of weights). We say that a regular matrix of weights W(z) is of Laguerre type if the following conditions hold:
• The support of W(z) is a non-self-intersecting smooth curve γ on the complex plane with a beginning point at 0 and an ending point at ∞, and such that it intersects each circle |z| = R, R ∈ (0, +∞), once and only once (i.e., it can be taken as a determination curve for arg z).

• The entries $W^{(j,k)}$ of the matrix measure W can be written as
$W^{(j,k)}(z) = \sum_{m \in I_{j,k}} z^{\alpha_m} (\log z)^{p_m} A_m(z)$, (1)
where $I_{j,k}$ denotes a finite set of indices, $\operatorname{Re}(\alpha_m) > -1$, $p_m \in \mathbb{N} \cup \{0\}$ and $A_m(z)$ is Hölder continuous and bounded. Here, the determination of the logarithm and of the powers is taken along γ. We will request, in the development of the theory, that the functions $A_m$ have holomorphic extensions to the whole complex plane.
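The condition $\operatorname{Re}(\alpha_m) > -1$ guarantees that each such entry is integrable near the finite end point, so that all moments exist. A quick numerical sanity check of the scalar model integral $\int_0^1 z^a (\log z)^p\,\mathrm{d}z = (-1)^p\, p!/(a+1)^{p+1}$, valid for $a > -1$ (a sketch; the function names are ours, not from the paper):

```python
import math

def moment_near_origin(a: float, p: int, T: float = 60.0, n: int = 200000) -> float:
    """Numerically evaluate the model integral of z^a (log z)^p over (0, 1)
    via the substitution z = exp(-t), which turns it into the smooth,
    decaying integral of (-t)^p exp(-(a+1) t) over (0, oo)."""
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid weights
        total += w * ((-t) ** p) * math.exp(-(a + 1.0) * t)
    return total * h

def moment_closed_form(a: float, p: int) -> float:
    # closed form (-1)^p p! / (a+1)^(p+1), valid for a > -1
    return ((-1) ** p) * math.factorial(p) / (a + 1.0) ** (p + 1)

a, p = 0.5, 2
num = moment_near_origin(a, p)
exact = moment_closed_form(a, p)
assert abs(num - exact) < 1e-5
```

For $a \le -1$ the model integral diverges, which is why the definition excludes that range.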
In this work, for the sake of simplicity, γ = (0, +∞) and the finite end point of the curve γ is taken at the origin, c = 0, with no loss of generality, as similar arguments apply for c ≠ 0. In [10], different examples of Laguerre weights for the matrix orthogonal polynomials on the real line are studied.
Given a Laguerre type matrix of weights, as in Definition 1, we define the sequences of matrix monic polynomials: the left orthogonal one, $(P^L_n(z))_{n\in\mathbb{N}}$, with $\deg P^L_n = n$, and the right orthogonal one, $(P^R_n(z))_{n\in\mathbb{N}}$, with $\deg P^R_n = n$, with respect to the regular matrix of weights W, by the conditions
$\int_\gamma P^L_n(z)\, W(z)\, z^k \,\mathrm{d}z = \delta_{n,k}\, C_n^{-1}$, (2)
$\int_\gamma z^k\, W(z)\, P^R_n(z) \,\mathrm{d}z = \delta_{n,k}\, C_n^{-1}$, (3)
for k = 0, 1, . . . , n and n ∈ N, where $C_n$ is a nonsingular matrix. We can see that the sequence of monic polynomials $(P^L_n)_{n\in\mathbb{N}}$ is defined by (2) with respect to a regular matrix of weights W. In fact, taking into account the representation $P^L_n(z) = p^0_{L,n} z^n + p^1_{L,n} z^{n-1} + \cdots + p^{n-1}_{L,n} z + p^n_{L,n}$, we have, for each j = 0, 1, . . . , n − 1,
$\int_\gamma P^L_n(z) W(z) z^j \,\mathrm{d}z = p^0_{L,n} W_{n+j} + p^1_{L,n} W_{n+j-1} + \cdots + p^{n-1}_{L,n} W_{j+1} + p^n_{L,n} W_j = 0_N$,
and, with j = n,
$\int_\gamma P^L_n(z) W(z) z^n \,\mathrm{d}z = p^0_{L,n} W_{2n} + p^1_{L,n} W_{2n-1} + \cdots + p^{n-1}_{L,n} W_{n+1} + p^n_{L,n} W_n = C_n^{-1}$.
Let us notice that, introducing the moment matrix $U_n := [W_{j+k}]_{j,k=0,\dots,n}$, in matrix notation we have
$\begin{bmatrix} p^n_{L,n} & p^{n-1}_{L,n} & \cdots & p^1_{L,n} & p^0_{L,n}\end{bmatrix} U_n = \begin{bmatrix} 0_N & 0_N & \cdots & 0_N & C_n^{-1}\end{bmatrix}$.
Since $\det U_n \neq 0$, the above linear system has a unique solution, i.e., the matrices $p^n_{L,n}, p^{n-1}_{L,n}, \dots, p^1_{L,n}, p^0_{L,n}$ exist and are unique, and so the sequence $(P^L_n)_{n\in\mathbb{N}}$ defined by (2) is uniquely determined. As a direct consequence of the non-singularity of the last block of $U_n^{-1}$, i.e., the one in position (n + 1), (n + 1) of the matrix $U_n^{-1}$, we find that $p^0_{L,n}$ is a nonsingular matrix. In fact, as (see, for instance, [53]) $\det D = \det U_{n-1} \det U_n$, we get the nonsingularity of $p^0_{L,n}$. The same can be seen for $(P^R_n)_{n\in\mathbb{N}}$. The matrix of weights W(z) induces a non-degenerate bilinear form in the set of matrix polynomials $\mathbb{C}^{N\times N}[z]$, given by $\langle P, Q\rangle_W := \int_\gamma P(z)\, W(z)\, Q(z)\,\mathrm{d}z$, for which $(P^L_n)_{n\in\mathbb{N}}$ and $(P^R_n)_{n\in\mathbb{N}}$ are biorthogonal, $\langle P^L_n, P^R_m\rangle_W = \delta_{n,m}\, C_n^{-1}$, n, m ∈ N. As the polynomials are chosen to be monic, we can write $P^L_n(z) = I_N z^n + p^1_{L,n} z^{n-1} + p^2_{L,n} z^{n-2} + \cdots + p^n_{L,n}$ and $P^R_n(z) = I_N z^n + p^1_{R,n} z^{n-1} + p^2_{R,n} z^{n-2} + \cdots + p^n_{R,n}$, with matrix coefficients $p^k_{L,n}, p^k_{R,n} \in \mathbb{C}^{N\times N}$, k = 0, . . . , n and n ∈ N (imposing that $p^0_{L,n} = p^0_{R,n} = I_N$, n ∈ N). Here, $I_N \in \mathbb{C}^{N\times N}$ denotes the identity matrix. We define the sequences of second kind matrix functions as the Cauchy transforms of $P^L_n(z) W(z)$ and of $W(z) P^R_n(z)$, respectively, for n ∈ N. From the orthogonality conditions (2) and (3), we have, for all n ∈ N, the following asymptotic expansion near infinity
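In the scalar case N = 1 with the classical Laguerre weight $z^\alpha e^{-z}$ on (0, +∞), the moments are $W_n = \Gamma(n + \alpha + 1)$ and the moment linear system above can be solved exactly. A small sketch (function names are ours) recovering the monic Laguerre polynomial $P_2(z) = z^2 - 4z + 2$ for α = 0:

```python
from fractions import Fraction
from math import factorial

def monic_orthogonal_coeffs(n: int, moments):
    """Solve the moment linear system for the monic degree-n polynomial
    orthogonal to 1, z, ..., z^{n-1}: sum_k c_k mu_{k+j} = 0, j < n, c_n = 1."""
    # Build the n x n system A c = b with A[j][k] = mu_{k+j}, b[j] = -mu_{n+j}.
    A = [[Fraction(moments[k + j]) for k in range(n)] for j in range(n)]
    b = [Fraction(-moments[n + j]) for j in range(n)]
    # Plain Gaussian elimination with pivoting (exact over the rationals).
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [Fraction(0)] * n
    for r in reversed(range(n)):
        s = b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = s / A[r][r]
    return coeffs + [Fraction(1)]  # c_0, ..., c_{n-1}, c_n = 1 (monic)

# Scalar Laguerre weight z^alpha e^{-z} with alpha = 0: mu_k = k!.
moments = [factorial(k) for k in range(10)]
p2 = monic_orthogonal_coeffs(2, moments)
assert p2 == [Fraction(2), Fraction(-4), Fraction(1)]  # P_2(z) = z^2 - 4z + 2
```

The unique solvability seen here is precisely the regularity condition $\det U_n \neq 0$ at work.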

Three-Term Recurrence Relation
Following standard arguments of orthogonality, we conclude that the sequence of monic polynomials $(P^L_n)_{n\in\mathbb{N}}$ satisfies the three-term recurrence relation
$z P^L_n(z) = P^L_{n+1}(z) + \beta^L_n P^L_n(z) + \gamma^L_n P^L_{n-1}(z)$,
with recursion coefficients given by $\beta^L_n := p^1_{L,n} - p^1_{L,n+1}$ and $\gamma^L_n := C_n^{-1} C_{n-1}$, and with initial conditions, where $S_W(z)$ is the Stieltjes-Markov transformation. Analogously,
$z P^R_n(z) = P^R_{n+1}(z) + P^R_n(z)\,\beta^R_n + P^R_{n-1}(z)\,\gamma^R_n$,
where $\beta^R_n := C_n \beta^L_n C_n^{-1}$, $\gamma^R_n := C_n \gamma^L_n C_n^{-1} = C_{n-1} C_n^{-1}$, $Q^R_{-1}(z) = -C^{-1}_{-1}$ and $Q^R_0(z) = S_W(z)$. These relations can be written as
$\begin{bmatrix} P^L_{n+1}(z) \\ P^L_n(z)\end{bmatrix} = T^L_n(z) \begin{bmatrix} P^L_n(z) \\ P^L_{n-1}(z)\end{bmatrix}$, $\quad T^L_n(z) := \begin{bmatrix} z I_N - \beta^L_n & -\gamma^L_n \\ I_N & 0_N \end{bmatrix}$,
where $T^L_n$ denotes the left transfer matrix. For the right orthogonality, we similarly obtain an analogous relation, where $T^R_n$ denotes the right transfer matrix.
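For the scalar Laguerre weight $z^\alpha e^{-z}$ with α = 0, the recursion coefficients are known explicitly, $\beta_n = 2n + \alpha + 1$ and $\gamma_n = n(n + \alpha)$. A quick sketch (helper names are ours) checking the three-term recurrence by exact coefficient arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

def monic_laguerre(n: int):
    """Coefficients (ascending) of the monic Laguerre polynomial, alpha = 0:
    Lhat_n(x) = sum_k (-1)^(n-k) * (n!/k!) * C(n,k) * x^k."""
    return [Fraction((-1) ** (n - k) * factorial(n) * comb(n, k), factorial(k))
            for k in range(n + 1)]

def poly_add(p, q):
    m = max(len(p), len(q))
    p = p + [Fraction(0)] * (m - len(p))
    q = q + [Fraction(0)] * (m - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_scale(c, p):
    return [c * a for a in p]

def shift_up(p):  # multiply the polynomial by x
    return [Fraction(0)] + p

# Check x * P_n = P_{n+1} + beta_n * P_n + gamma_n * P_{n-1},
# with beta_n = 2n + 1 and gamma_n = n^2 (the alpha = 0 case).
for n in range(1, 8):
    lhs = shift_up(monic_laguerre(n))
    beta_n, gamma_n = Fraction(2 * n + 1), Fraction(n * n)
    rhs = poly_add(monic_laguerre(n + 1),
                   poly_add(poly_scale(beta_n, monic_laguerre(n)),
                            poly_scale(gamma_n, monic_laguerre(n - 1))))
    assert lhs == rhs
```

In the matrix case the same identity holds with $\beta^L_n$ and $\gamma^L_n$ matrices, but the order of the factors can no longer be ignored.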

Reductions: From Biorthogonality to Orthogonality
We consider two possible reductions for the matrix of weights: the symmetric reduction and the Hermitian reduction. (i) A matrix of weights W(z) with support on γ is said to be symmetric if $(W(z))^\top = W(z)$, $z \in \gamma$.
(ii) A matrix of weights W(x) with support on $\mathbb{R}$ is said to be Hermitian if $(W(x))^\dagger = W(x)$, $x \in \mathbb{R}$. These two reductions lead to orthogonal polynomials, as the two biorthogonal families are identified; i.e., $P^R_n = (P^L_n)^\top$ for the symmetric case and $P^R_n = (P^L_n)^\dagger$ (coefficientwise conjugate transpose) for the Hermitian case, with γ = $\mathbb{R}$. In both cases, biorthogonality collapses into orthogonality, which for the symmetric case reads $\int_\gamma P_n(z)\, W(z)\, (P_m(z))^\top \mathrm{d}z = \delta_{n,m}\, C_n^{-1}$, while for the Hermitian case it can be written as $\int_{\mathbb{R}} P_n(x)\, W(x)\, (P_m(x))^\dagger \mathrm{d}x = \delta_{n,m}\, C_n^{-1}$, where $P_n = P^L_n$.

The Riemann-Hilbert Problem
We begin this section by stating a general theorem on the Riemann-Hilbert problem for general matrix Laguerre weights. A preliminary version of this can be found in [54]. Theorem 1. Given a regular Laguerre type matrix of weights W(z) with support on γ, we have: (i) The matrix function $Y^L_n(z)$ is, for each n ∈ N, the unique solution of the Riemann-Hilbert problem which consists of the determination of a 2N × 2N complex matrix function that is holomorphic in C \ γ, satisfies the jump condition on γ, has the prescribed asymptotic behavior near infinity, and fulfills the growth condition as z → 0. (ii) The matrix function $Y^R_n(z)$ is, for each n ∈ N, the unique solution of the analogous right Riemann-Hilbert problem, which consists of the determination of a 2N × 2N complex matrix function satisfying the conditions (RHR1)-(RHR4); all conditions are understood entrywise.
(iii) The determinants of $Y^L_n(z)$ and $Y^R_n(z)$ are both equal to 1, for every z ∈ C.
Proof. Using the standard calculations from the scalar case [55], it follows that the matrices $Y^L_n$ and $Y^R_n$ satisfy (RHL1)-(RHL3) and (RHR1)-(RHR3), respectively. The entries $W^{(j,k)}$ of the matrix measure W are given in (1). It holds (cf. [56]) that, in a neighborhood of z = 0, the Cauchy transform $\phi_m$ of $z^{\alpha_m}(\log z)^{p_m} A_m(z)\, p(z)$, where p(ζ) denotes any polynomial in ζ, satisfies $\lim_{z\to 0} z\,\phi_m(z) = 0$. Then, (RHL4) and (RHR4) are fulfilled by the matrices $Y^L_n$, $Y^R_n$, respectively. To prove the uniqueness of the solutions of both Riemann-Hilbert problems, let us consider the matrix function G(z). It can easily be proved that G(z) has no jump or discontinuity on the curve γ and that, from its behavior at the end point 0, it holds that $\lim_{z\to 0} z\, G(z) = 0$, so we conclude that the end point 0 is a removable singularity of G. Now, from the behavior for z → ∞, the Liouville theorem implies that $G(z) = I_{2N}$. To prove the uniqueness of the solution of (RHL1)-(RHL3) and (RHR1)-(RHR3), let $Y^L_n$ be any solution of the left Riemann-Hilbert problem. Hence, any solution of this left Riemann-Hilbert problem is equal to the inverse of a fixed matrix, and the uniqueness follows. We obtain the uniqueness of the solution of the right Riemann-Hilbert problem in a similar way. Let us now calculate the determinant of the fundamental matrix. Since the jump matrix has determinant 1, the function $\det Y^L_n$ extends holomorphically across γ, has a removable singularity at 0 and tends to 1 at infinity, so Liouville's theorem gives $\det Y^L_n(z) = 1$. A similar reasoning leads to the same result for $Y^R_n$.

Pearson-Laguerre Matrix Weights with a Finite End Point
Instead of a given matrix of weights, we consider two matrices of entire functions, say $h^L(z)$ and $h^R(z)$, such that the following matrix Pearson equations are satisfied:
$z\, (W^L)'(z) = h^L(z)\, W^L(z)$, $\quad z\, (W^R)'(z) = W^R(z)\, h^R(z)$, (10)
and, given solutions to them, we construct the corresponding matrix of weights as $W = W^L W^R$. This matrix of weights is also characterized by a Pearson equation.
Proposition 1 (Pearson differential equation). Given two matrices of entire functions $h^L(z)$ and $h^R(z)$, a solution of the Sylvester type matrix differential equation
$z\, W'(z) = h^L(z)\, W(z) + W(z)\, h^R(z)$, (11)
which we call a Pearson equation for the weight W, is of the form $W = W^L W^R$, where the factor matrices $W^L$ and $W^R$ are solutions of (10).
Proof. Given solutions W L and W R of (10), it follows immediately, just using the Leibniz law for derivatives, that W = W L W R fulfills (11). Moreover, given a solution W of (11) we pick a solution W L of the first equation in (10), then it is easy to see that (W L ) −1 W satisfies the second equation in (10).
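Proposition 1 can be checked numerically in a simple explicit situation: if $h^L = N$ is nilpotent ($N^2 = 0$) and $h^R = b\, I_N$ is scalar, then $W^L(z) = I + N \log z$ and $W^R(z) = z^b I$ solve (10), so their product must solve (11). A sketch with hand-picked data (all names and matrices are ours):

```python
import math

# Pearson factors in closed form: with h^L = N nilpotent (N^2 = 0) and
# h^R = b*I, the solutions of z (W^L)' = h^L W^L and z (W^R)' = W^R h^R are
# W^L(z) = I + N log z  and  W^R(z) = z^b I, so W = W^L W^R solves (11).
b = 0.4
def WL(z):
    return [[1.0, math.log(z)], [0.0, 1.0]]   # I + N log z, N = [[0,1],[0,0]]
def WR(z):
    return [[z ** b, 0.0], [0.0, z ** b]]
def W(z):
    A, B = WL(z), WR(z)
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

hL = [[0.0, 1.0], [0.0, 0.0]]
z0, h = 1.3, 1e-6
# central finite difference for W'(z0)
dW = [[(W(z0 + h)[i][j] - W(z0 - h)[i][j]) / (2 * h) for j in range(2)]
      for i in range(2)]
lhs = [[z0 * dW[i][j] for j in range(2)] for i in range(2)]      # z W'(z)
Wz = W(z0)
hLW = [[sum(hL[i][k] * Wz[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
rhs = [[hLW[i][j] + b * Wz[i][j] for j in range(2)] for i in range(2)]  # h^L W + W h^R
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-6 for i in range(2) for j in range(2))
```

The same factorization argument works for arbitrary entire $h^L$, $h^R$; this commuting-free example merely makes the factors explicit.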
We can give the following result from the literature [57].
Theorem 2 (Solution at a regular singular point). Let $h^L(z)$ be an entire matrix function. Then, for the solutions of the Pearson Equation (10), we have: (i) If $A^L := h^L(0)$ has no eigenvalues that differ from each other by positive integers, then the solution of the left matrix differential equation in (10) can be written as $W^L(z) = H^L(z)\, z^{A^L}\, W^L_0$, where $H^L(z)$ is an entire and nonsingular matrix function such that $H^L(0) = I_N$, and $W^L_0$ is a constant nonsingular matrix.
(ii) If the matrix $A^L$ has eigenvalues that differ from each other by positive integers, then the solution of the left matrix differential equation in (10) can be written as $W^L(z) = \Pi^L(z)\, S^L(z)\, z^{\tilde A^L}\, W^L_0$, where, in this case, $S^L(z)$ is a finite product of factors of the form $T_i S^L_i(z)$, with $T_i$ a nonsingular matrix and $S^L_i(z)$ a shearing matrix, i.e., a matrix given in block form as
$S^L_i(z) = \begin{bmatrix} I_{n_i} & 0 \\ 0 & z\, I_{m_i} \end{bmatrix}$
for some positive integers $n_i, m_i$; $\Pi^L(z)$ is an entire and nonsingular matrix function such that $\Pi^L(0) = I$; $\tilde A^L$ is a constant matrix built from the matrix $A^L$, in which the eigenvalues are decreased in such a way that the eigenvalues of the resulting matrix do not differ by a positive integer; and $W^L_0$ is a constant nonsingular matrix.
We can obtain analogous results for the right matrix differential equation in (10), and we will denote its solution accordingly. Notice that, given a matrix A and the oriented curve γ, the matrix of functions $z^A = \mathrm{e}^{A \log z}$ is a matrix of holomorphic functions in C \ γ. We also adopt the convention $(W^L(z) W^R(z))_+ = W(z)$, i.e., the matrix of weights is obtained as the boundary value, from the right side of the curve γ, of the matrix function $W^L(z) W^R(z)$. In order to consider the Riemann-Hilbert problem related to the matrix of weights W satisfying (11), it is necessary to study the behavior of W(z) around the origin. For that aim, let us consider J, the Jordan matrix similar to the matrix A, i.e., there exists a nonsingular matrix P such that $A = P J P^{-1}$, with J a direct sum of Jordan blocks $\lambda_k I + N_k$, where $m_k$ is the order of the nilpotent matrix $N_k$. We then have
$z^{\lambda_k I + N_k} = z^{\lambda_k} \sum_{j=0}^{m_k - 1} \frac{(\log z)^j}{j!}\, N_k^j$,
where we have used the nilpotency $N_k^j = 0_N$ for $j \geq m_k$. So we can conclude that the entries of $z^A$ are linear combinations of $z^{\lambda_j}$ with polynomial coefficients in the variable log z.
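For a single 2 × 2 Jordan block $J = \lambda I + N$, the series above truncates after the log term, giving $z^J = z^\lambda \begin{bmatrix} 1 & \log z \\ 0 & 1\end{bmatrix}$. A quick check of this closed form against $\exp(J \log z)$ computed by a plain Taylor series (helper names are ours):

```python
import math

def expm2(X, terms=60):
    """Exponential of a 2x2 matrix via plain Taylor series (fine for small norms)."""
    R = [[1.0, 0.0], [0.0, 1.0]]
    T = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        # T <- T X / k, so that T = X^k / k!
        T = [[sum(T[i][m] * X[m][j] for m in range(2)) / k for j in range(2)]
             for i in range(2)]
        R = [[R[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return R

lam, z = 0.35, 2.5
J = [[lam, 1.0], [0.0, lam]]                        # Jordan block lam*I + N
lz = math.log(z)
zJ = expm2([[J[i][j] * lz for j in range(2)] for i in range(2)])   # z^J = exp(J log z)
closed = [[z ** lam, z ** lam * lz], [0.0, z ** lam]]              # z^lam (I + log(z) N)
assert all(abs(zJ[i][j] - closed[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

This makes visible the $z^\lambda \log z$ entries responsible for the logarithmic behavior of W(z) near the origin.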

Constant Jump Fundamental Matrix
According to the above notation, and given a regular matrix of weights as described in (11), we introduce, for each n ∈ N, the constant jump fundamental matrices $Z^L_n(z)$ and $Z^R_n(z)$.
The constant jump fundamental matrices $Z^L_n(z)$ and $Z^R_n(z)$ satisfy, for each n ∈ N, the following properties: (i) they are holomorphic in C \ γ; (ii) they present the following constant jump condition on γ. Proof. (i) The holomorphic properties of $Z^L_n$ are inherited from those of the fundamental matrix $Y^L_n$ and of $z^A$, taking into account that $H^L(z)$ is an entire matrix function.
(ii) Taking into account Theorem 1, we successively obtain the stated relations; hence, we obtain the desired constant jump condition for $Z^L_n(z)$.
To complete the proof we only have to check that which is a consequence of (13).

Structure Matrix and Zero Curvature Formula
In parallel with the matrices $Z^L_n(z)$ and $Z^R_n(z)$, for each factorization we introduce what we call structure matrices, given in terms of the left, respectively right, logarithmic derivatives by
$M^L_n(z) := (Z^L_n)'(z)\,\big(Z^L_n(z)\big)^{-1}$, $\quad M^R_n(z) := \big(Z^R_n(z)\big)^{-1}\,(Z^R_n)'(z)$. (15)
It is not difficult to prove that

Proposition 3 ([52]).
We have the following properties: (i) the transfer matrices satisfy the relations below; (ii) the zero curvature formulas hold. Now, we discuss the holomorphic properties of the structure matrices just introduced.
Theorem 3. The structure matrices $M^L_n(z)$ and $M^R_n(z)$, cf. (15), are, for each n ∈ N, meromorphic on C, with a singularity located at z = 0, which is either a removable singularity or a simple pole.
Proof. Let us prove the statement for $M^L_n(z)$; for $M^R_n(z)$ one proceeds similarly. From (15), it follows that $M^L_n(z)$ is holomorphic in C \ γ. Since $Z^L_n(z)$ has a constant jump on the curve γ, its derivative $(Z^L_n)'(z)$ has the same constant jump on γ, so the matrix $M^L_n(z)$ has no jump on the curve γ, and it follows that $M^L_n(z)$ has an isolated singularity at the origin. From (15) and (12), each entry of the matrix $Q^L_n(z)$ is the Cauchy transform $g(z)$ of a certain function f. From the boundary conditions, the first term is zero; moreover, from the definition of f, we obtain that t f(t) is a function in the class of f, which we denote by v, and, consequently, $\lim_{z\to 0} z^2 g'(z) = 0$. From these considerations, together with similar estimates for the remaining terms, it follows that $\lim_{z\to 0} z^2 M^L_n(z) = 0_{2N}$, and hence the matrix function $M^L_n(z)$ has at most a simple pole at the point z = 0.

Differential Relations from the Riemann-Hilbert Problem
We are interested in the differential equations fulfilled by the biorthogonal matrix polynomials determined by Laguerre type matrices of weights. Here, we use the Riemann-Hilbert problem approach in order to derive these differential relations. We use the notation $M^L_n(z) = \frac{1}{z}\widetilde M^L_n(z)$ and $M^R_n(z) = \frac{1}{z}\widetilde M^R_n(z)$ for the structure matrices, with $\widetilde M^L_n(z)$ and $\widetilde M^R_n(z)$ matrices of entire functions.
Proposition 4 (First order differential equation for the fundamental matrices Y L n (z) and Y R n (z)). It holds that Proof. Equations (17) and (18) follow immediately from the definition of the matrices M L n (z) and M R n (z) in (15).

Corollary 2.
Let $h^L(z) = A^L z + B^L$ and $h^R(z) = A^R z + B^R$ be two first degree matrix polynomials. The left and right structure matrices are given, respectively, by (19) and (20). Proof. By considering (17), Theorem 3, the asymptotic expansion at infinity of the fundamental matrix $Y^L_n$ and (5), the asymptotic behavior of the second kind matrix functions at infinity, and using the identities $p^1_{R,n} = -q^1_{L,n-1}$ and $p^1_{L,n} = -q^1_{R,n-1}$, (19) follows. The relation (16) leads to (20).

Proposition 5 (Second order differential equation for the fundamental matrices). It holds
Proof. Differentiating in (15) and then using (12) and (10), we obtain the stated result (21). Equation (22) follows analogously. It holds that the second order matrix differential Equations (21) and (22) split into the following differential relations:
$z\, (P^L_n)'' + (P^L_n)'\,(2h^L + I_N) + P^L_n\, \mathcal{N}(h^L) = H^L_{1,1,n} P^L_n - H^L_{1,2,n} C_{n-1} P^L_{n-1}$,
For this class of Laguerre weights, we obtain, using analytic arguments, an alternative formula for the residue matrix of the simple pole at z = 0 of the left structure matrix. In a similar manner, one could obtain the result for the right structure matrix. Notice that the structure matrix is completely determined in the previous section, cf. (19), where $A^L$, $A^R$ are substituted, respectively, by $A_1$, $A_2$, and $B^L$, $B^R$ by $\alpha/2$. This alternative formula enables us to make an important simplification in Equation (21) obtained previously.
Accordingly, we choose the weight below. A straightforward calculation shows that the $h^L$ and $h^R$ appearing in (11) are given accordingly. Proposition 6. The structure matrix $M^L_n$ defined in (15) has a simple pole, with residue given by the following expression: α has the canonical Jordan form $\alpha = P J P^{-1}$, with $N_+$ (respectively, $N_-$) being the sum of the algebraic multiplicities associated with the eigenvalues with positive (respectively, negative) real parts, and in $J_+$ (respectively, $J_-$) we gather together the Jordan blocks of all eigenvalues with positive (respectively, negative) real parts, and $\widehat Y^L_n(z)$ being given by

Remark 1.
In the first case, $F^L_n(0)$ has a simpler form if the real parts of the eigenvalues of α are all positive or all negative. Proof. It can be seen that the matrix function $Z^L_n$, defined below, satisfies the following conditions. So, over the interval (0, +∞), we have the stated relation. For z ∈ C \ [0, +∞), let us define the matrix which satisfies, over (0, +∞), the following jump condition. Consequently, the matrix has no jump in the interval (0, +∞). The matrix function $F^L_n$ has an isolated singularity at the origin which, as we will show now, is a removable singularity, i.e., $\lim_{z\to 0} z\, F^L_n(z) = 0_{2N}$. From its definition, as $z s^L_1, z s^L_2 \to 0_N$ as z → 0 and $O(z)\, z^\alpha \to 0_N$ as z → 0 (because the real parts of the eigenvalues of α are bounded from below by −1), we conclude that $z\, F^L_n(z) \to 0_{2N}$ as z → 0. Hence, $F^L_n(z)$ is a matrix of entire functions. Now, we want to compute $F^L_n(0) = \lim_{z\to 0} F^L_n(z)$. For this, we will argue according to the sign of the real parts of the spectrum of α, noticing that the limit of each factor need not exist separately. We compute $F^L_n(0)$ separately in the cases $\operatorname{Re}(\sigma(\alpha)) \subset (0, +\infty)$ and $\operatorname{Re}(\sigma(\alpha)) \subset (-1, 0)$, and then we give $F^L_n(0)$ in general.
Case $\operatorname{Re}(\sigma(\alpha)) \subset (0, +\infty)$. When the real parts of all the eigenvalues of α are strictly positive, each limit exists. Case $\operatorname{Re}(\sigma(\alpha)) \subset (-1, 0)$ and $\sigma(\alpha) \cap \mathbb{N} = \emptyset$. We cannot proceed as before. However, as the limit exists, if we are able to rewrite it in terms of two matrix factors, $\widehat Y^L_n(z)$ and f(z), with f having a well defined limit for z → 0 that is a nonsingular matrix, we can ensure the existence of $\lim_{z\to 0}\widehat Y^L_n(z)$, and $F^L_n(0) = \lim_{z\to 0}\widehat Y^L_n(z)\, \lim_{z\to 0} f(z)$. This can be achieved by the considerations below. General case: $\operatorname{Re}(\sigma(\alpha)) \subset (-1, +\infty)$ and $\sigma(\alpha) \cap \mathbb{N} = \emptyset$.
Recalling the canonical Jordan form, we can write $\alpha = P J P^{-1}$, with $N_+$ (respectively, $N_-$) being the sum of the algebraic multiplicities associated with the eigenvalues with positive (respectively, negative) real parts, and in $J_+$ (respectively, $J_-$) we gather together the Jordan blocks of all eigenvalues with positive (respectively, negative) real parts. Hence, as we did in the previous case, with negative real parts only, we left multiply by the following nonsingular matrix, which for z → 0 has a well defined limit, itself a nonsingular matrix. Thus, by definition, as $\det F^L_n(z) \neq 0$, we know that $(F^L_n)'\,(F^L_n)^{-1}$ has no singularities. Consequently, $M^L_n(z)$ has a simple pole at the origin with the stated residue. Let us move to the proof of the second case, i.e., $\alpha = m I_N$, m ∈ N. It can be seen that the matrix function $Z^L_n$ satisfies over (0, +∞) the following jump condition. For z ∈ C \ [0, +∞), instead of (27), let us define the matrix, where we take the branch of the logarithmic function defined in C \ [0, +∞), which satisfies, over (0, +∞), the same jump condition. Consequently, the matrix has no jump in the interval (0, +∞). The matrix function $F^L_n$ has an isolated singularity at the origin which is a removable singularity: as $z s^L_1, z s^L_2 \to 0_N$ as z → 0, we conclude that $z\, F^L_n(z) \to 0_{2N}$ as z → 0. Hence, $F^L_n(z)$ is a matrix of entire functions. To compute $F^L_n(0)$, we notice that, for m = 1, 2, . . ., it holds that $F^L_n(0) = Y^L_n(0)$. For m = 0, the limit of each factor inside the limit need not exist; as the overall limit exists, we rewrite it appropriately and, using the same kind of reasoning as above, we find that $M^L_n(z)$ has a simple pole at the origin with the stated residue, which ends the proof.

Proposition 7.
The structure matrix $M^L_n$ has the following expression. Proof. Substitute $A^L$, $A^R$, respectively, by $A_1$, $A_2$, and $B^L$, $B^R$ by $\alpha/2$ in (19) and (20).
If there exists λ ∈ (0, +∞) such that $\alpha^2 = \lambda I_N$, or $\alpha = m I_N$ for some m ∈ {0, 1, 2, . . .}, then the second order differential equation is simplified. Proof. If we take into account that $\widetilde M^L_n(z) = \widetilde M^L_n(0) + z\, (\widetilde M^L_n)'(0)$, we find that (21), the second order differential equation that the fundamental matrix satisfies, can be rewritten accordingly. Under the restriction that the real part of the spectrum of α is contained in (−1, +∞), $M^L_n$ has a pole of order 1 at z = 0, with residue given as above. If we now also assume that $\alpha^2 = \lambda I_N$, we obtain the first simplification; in the case $\alpha = m I_N$, for some m ∈ {0, 1, 2, . . .}, we find the second one. In both cases, substituting back, the result follows.

Corollary 3.
Let us consider N = 1 (i.e., the scalar case). If $A_1 = A_2 = -\tfrac{1}{2}$ and α > −1, then the second order equation for $(P^L_n)_{n\in\mathbb{N}}$ and $(Q^L_n)_{n\in\mathbb{N}}$ is given below. Proof. In the scalar case, the matrix equation reduces accordingly; considering the (1, 1) and the (1, 2) entries of this differential matrix equation, the result follows.
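In this scalar case, the Pearson data give the classical Laguerre weight $z^\alpha e^{-z}$, and the monic orthogonal polynomials are the classical monic Laguerre polynomials, which satisfy $z\, y'' + (\alpha + 1 - z)\, y' + n\, y = 0$. A sketch (our helper names) verifying this in exact arithmetic, generating the polynomials through the three-term recurrence:

```python
from fractions import Fraction

alpha = Fraction(1, 2)

def monic_laguerre_rec(n):
    """Monic generalized Laguerre polynomials via the three-term recurrence
    P_{k+1}(x) = (x - beta_k) P_k(x) - gamma_k P_{k-1}(x),
    beta_k = 2k + alpha + 1, gamma_k = k (k + alpha); ascending coefficients."""
    P_prev, P = [Fraction(1)], [-(alpha + 1), Fraction(1)]  # P_0 and P_1
    if n == 0:
        return P_prev
    for k in range(1, n):
        beta_k, gamma_k = 2 * k + alpha + 1, k * (k + alpha)
        xP = [Fraction(0)] + P  # multiply P_k by x
        nxt = [xP[j]
               - (beta_k * P[j] if j < len(P) else 0)
               - (gamma_k * P_prev[j] if j < len(P_prev) else 0)
               for j in range(len(xP))]
        P_prev, P = P, nxt
    return P

def laguerre_ode_residual(c, n):
    """Coefficients of z y'' + (alpha + 1 - z) y' + n y for y with coeffs c:
    the x^j coefficient is (j+1)(j+alpha+1) c_{j+1} + (n - j) c_j."""
    out = [Fraction(0)] * len(c)
    for j in range(len(c)):
        if j + 1 < len(c):
            out[j] += (j + 1) * (j + alpha + 1) * c[j + 1]
        out[j] += (n - j) * c[j]
    return out

for n in range(0, 6):
    c = monic_laguerre_rec(n)
    assert all(r == 0 for r in laguerre_ode_residual(c, n))  # ODE satisfied exactly
```

The residual formula is obtained by collecting the $x^j$ coefficient of each term of the differential operator, so the check is exact rather than numerical.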

Matrix Discrete Painlevé IV
We can consider, using the notation introduced before, the matrix weight measure below. From Proposition 5, we find that the matrix $M_n = z M^L_n$ is given explicitly. From the three-term recurrence relation for $(P^L_n)_{n\in\mathbb{N}}$, we find that $p^1_{L,n} - p^1_{L,n+1} = \beta^L_n$ and $p^2_{L,n} - p^2_{L,n+1} = \beta^L_n p^1_{L,n} + \gamma^L_n$, where $\gamma^L_n = C_n^{-1} C_{n-1}$. In the same manner, from the three-term recurrence relation for $(Q^L_n)_{n\in\mathbb{N}}$, we deduce that $q^1_{L,n} - q^1_{L,n-1} = \beta^R_n := C_n \beta^L_n C_n^{-1}$ and $q^2_{L,n} - q^2_{L,n-1} = \beta^R_n q^1_{L,n} + \gamma^R_n$, where $\gamma^R_n = C_n C_{n+1}^{-1}$. If we consider that $W = W^L$ and $W^R = I_N$, and use the representations of $(P^L_n)_{n\in\mathbb{N}}$ and $(Q^L_n)_{n\in\mathbb{N}}$ in powers of z, the (1, 2) and (2, 2) entries in (17) give two equations, which we rewrite as Equations (28) and (29). We will show now that this system contains a noncommutative version of an instance of the discrete Painlevé IV equation, as happens in the analogous scalar scenario. On the right hand side of the nonlinear discrete Equations (28) and (29), we see nonlocal terms (sums) in the recursion coefficients $\beta^L_n$ and $\gamma^L_n$, all of them inside commutators. These nonlocal terms vanish whenever the three matrices $\{h^L_0, h^L_1, h^L_2\}$ form an Abelian set; moreover, $\{h^L_0, h^L_1, h^L_2, \beta^L_n, \gamma^L_n\}$ is then also an Abelian set. In this commutative setting, in terms of $\xi_n := \frac{h^L_0}{2} + n I_N + h^L_2 \gamma_n$ and $\mu_n := h^L_2 \beta^L_n + h^L_1$, the above equations read $\beta^L_n \mu_n = -(\xi_n + \xi_{n+1})$ and $\beta^L_n (\xi_n - \xi_{n+1}) = -\gamma_n \mu_{n-1} + \gamma_{n+1} \mu_{n+1}$. Now, multiplying the second equation by $\mu_n$ and taking into account the first one, we arrive at $-(\xi_n + \xi_{n+1})(\xi_n - \xi_{n+1}) = -\gamma_n \mu_{n-1} \mu_n + \gamma_{n+1} \mu_n \mu_{n+1}$, and so $\xi^2_{n+1} - \xi^2_n = \gamma_{n+1} \mu_n \mu_{n+1} - \gamma_n \mu_{n-1} \mu_n$.
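The algebraic manipulation in the commutative setting can be sanity-checked numerically: choosing arbitrary scalar data and then enforcing the two relations, the derived identity must hold (a sketch; variable names are ours):

```python
import random

random.seed(7)
# Scalar (commutative) check of the step leading to
#   xi_{n+1}^2 - xi_n^2 = gamma_{n+1} mu_n mu_{n+1} - gamma_n mu_{n-1} mu_n.
# We pick arbitrary data and then enforce the two relations
#   beta_n mu_n = -(xi_n + xi_{n+1}),
#   beta_n (xi_n - xi_{n+1}) = -gamma_n mu_{n-1} + gamma_{n+1} mu_{n+1},
# by solving them for beta_n and gamma_{n+1}.
for _ in range(100):
    xi_n, xi_np1 = random.uniform(-2, 2), random.uniform(-2, 2)
    mu_nm1, mu_n, mu_np1 = (random.uniform(0.5, 2) for _ in range(3))
    gamma_n = random.uniform(0.5, 2)
    beta_n = -(xi_n + xi_np1) / mu_n
    gamma_np1 = (beta_n * (xi_n - xi_np1) + gamma_n * mu_nm1) / mu_np1
    lhs = xi_np1 ** 2 - xi_n ** 2
    rhs = gamma_np1 * mu_n * mu_np1 - gamma_n * mu_nm1 * mu_n
    assert abs(lhs - rhs) < 1e-9
```

In the non-Abelian case the multiplication by $\mu_n$ is order-sensitive, which is exactly where the commutator terms of (28) and (29) originate.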
We have just proven the following result. Theorem 4 (Non-Abelian extension of the dPIV). When $B^L = 0_N$, the following nonlocal, nonlinear, non-Abelian system for the recursion coefficients is fulfilled. Moreover, this system reduces, in the commutative context, to the standard dPIV equation.

Conclusions and Future Work
In this paper, using the Riemann-Hilbert problem for the Laguerre-type weight matrices, we obtain differential properties of the corresponding matrix biorthogonal polynomials, as well as of the second kind matrix functions. It is remarkable that the matrix measure need not be known explicitly; only its differential properties are used.
In future work, we will consider the case in which the support of the measure has two finite end points, that of the Jacobi-type weight matrices, aiming also to obtain differential properties and extensions of discrete Painlevé systems.

Conflicts of Interest:
The authors declare no conflict of interest.

List of Acronyms
The following abbreviations are used in this manuscript:
MOP: matrix orthogonal polynomials
MOPRL: matrix orthogonal polynomials in the real line
OPRL: orthogonal polynomials in the real line
dPIV: discrete Painlevé IV