Riemann–Hilbert Problems and Soliton Solutions of Type (λ*, −λ*) Reduced Nonlocal Integrable mKdV Hierarchies

Abstract: Reduced nonlocal matrix integrable modified Korteweg–de Vries (mKdV) hierarchies are presented by taking two transpose-type group reductions in the matrix Ablowitz–Kaup–Newell–Segur (AKNS) spectral problems. One reduction is local, replacing the spectral parameter λ with its complex conjugate λ*, and the other is nonlocal, replacing the spectral parameter λ with its negative complex conjugate −λ*. Riemann–Hilbert problems, and thus inverse scattering transforms, are formulated from the reduced matrix spectral problems. In view of the specific distribution of eigenvalues and adjoint eigenvalues, soliton solutions are constructed from the reflectionless Riemann–Hilbert problems.


Introduction
Starting from matrix spectral problems, one can generate integrable hierarchies of equations, based on the corresponding zero curvature equations. Among typical examples are the nonlinear Schrödinger (NLS) hierarchy and the modified Korteweg–de Vries (mKdV) hierarchy. Specific group reductions on spectral matrices can yield reduced integrable hierarchies. In soliton theory, there are a few effective methods to solve integrable equations, which include the inverse scattering transform [1,2], the Darboux transformation [3], and the Hirota bilinear method [4]. A kind of multiple wave solution, called soliton solutions, can be presented explicitly by the Hirota bilinear method [5–7]. Riemann–Hilbert problems, formulated from the associated matrix spectral problems, also provide a powerful technique that allows us to solve integrable equations, and particularly to present soliton solutions [8].
Let us consider the (1+1)-dimensional case. Let x and t be two independent variables, λ a spectral parameter, and u = u(x, t) a column vector of dependent variables. Take two square matrices, U = U(u, λ) and V = V(u, λ), from a loop algebra to form a Lax pair consisting of spatial and temporal matrix spectral problems:

−iφ_x = U(u, λ)φ, −iφ_t = V(u, λ)φ, (1)

where φ is a square matrix eigenfunction and i is the unit imaginary number. We assume that the compatibility condition of the above two matrix spectral problems, namely the zero curvature equation

U_t − V_x + i[U, V] = 0, (2)

where [•, •] denotes the matrix commutator, gives us an integrable equation:

u_t = K(u). (3)

For such integrable equations, Lie algebraic structures behind matrix spectral problems have been explored to generate their infinitely many symmetries [9]. The adjoint Lax pair of the matrix spectral problems in (1) is defined by:

iφ̃_x = φ̃U(u, λ), iφ̃_t = φ̃V(u, λ), (4)

where φ̃ is a square matrix eigenfunction, too. Their compatibility condition leads to the same zero curvature equation as (2), and so it does not bring any additional equations.
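For the reader's convenience, the compatibility computation behind the zero curvature equation can be written out; the sketch below assumes the common normalization −iφ_x = Uφ, −iφ_t = Vφ of the Lax pair:

```latex
% Cross-differentiation of the Lax pair (normalization assumed as stated above):
\begin{aligned}
\varphi_{xt} &= \mathrm{i}\,U_t\varphi + \mathrm{i}\,U\varphi_t
              = \mathrm{i}\,U_t\varphi - UV\varphi,\\
\varphi_{tx} &= \mathrm{i}\,V_x\varphi + \mathrm{i}\,V\varphi_x
              = \mathrm{i}\,V_x\varphi - VU\varphi,\\
0 &= \varphi_{xt} - \varphi_{tx}
   = \mathrm{i}\bigl(U_t - V_x + \mathrm{i}\,[U,V]\bigr)\varphi .
\end{aligned}
```

Since φ is invertible, the last line forces the zero curvature equation U_t − V_x + i[U, V] = 0.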
Both the Lax pair and the adjoint Lax pair lay the basis for the subsequent analyses in the formulation of Riemann–Hilbert problems and soliton solutions. We state the standard procedure for establishing Riemann–Hilbert problems as follows. It begins with a pair of matrix spectral problems in (1) with:

U(u, λ) = A(λ) + P(u, λ), V(u, λ) = B(λ) + Q(u, λ), (5)

where A, B are commuting constant square matrices, and P, Q are trace-less square matrices satisfying deg_λ(P) < deg_λ(A) and deg_λ(Q) < deg_λ(B). In order to formulate a Riemann–Hilbert problem for the corresponding integrable Equation (3), we adopt an equivalent Lax pair of matrix spectral problems:

ψ_x = i[A(λ), ψ] + P̌(u, λ)ψ, ψ_t = i[B(λ), ψ] + Q̌(u, λ)ψ, (6)

where P̌ = iP, Q̌ = iQ, and an equivalent adjoint Lax pair consisting of the following matrix spectral problems:

iψ̃_x = [ψ̃, A(λ)] + ψ̃P(u, λ), iψ̃_t = [ψ̃, B(λ)] + ψ̃Q(u, λ), (7)

where ψ and ψ̃ also denote square matrix eigenfunctions. The equivalence between the matrix spectral problems in (1) and the matrix spectral problems in (6) is a consequence of the commutativity of A and B. From tr P̌ = tr Q̌ = 0, we obtain the properties (det ψ)_x = (det ψ)_t = 0.
Obviously, there are the relations φ̃ = φ^(−1) and ψ̃ = ψ^(−1). There also exists a direct connection between the matrix spectral problems in (1) and the matrix spectral problems in (6):

φ = ψ e^(iA(λ)x + iB(λ)t). (8)

It is crucial to note that for the pair and the adjoint pair of matrix spectral problems in (6) and (7), we can require the asymptotic conditions:

ψ± → I, ψ̃± → I, as x → ±∞,

where I stands for the identity matrix. Then, based on those matrix eigenfunctions ψ± and ψ̃±, we can pick the entries to form two generalized matrix Jost solutions T±(x, t, λ), which are analytic in the upper and lower half-planes, C+ and C−, and continuous in the closed upper and lower half-planes, C̄+ and C̄−, respectively, and present a Riemann–Hilbert problem with a jump on the real line:

G+(x, t, λ) = G−(x, t, λ)G0(x, t, λ), λ ∈ R. (9)

The two unimodular generalized matrix Jost solutions, G+ and G−, and the jump matrix, G0, are all generated from the generalized Jost solutions T+ and T−, and G+ and G− have the same analyticity properties as T+ and T−, respectively. Moreover, the jump matrix, G0, carries all essential scattering data, generated from the scattering matrix S_g(λ) of the associated matrix spectral problems. Exact solutions to the resulting Riemann–Hilbert problem (9) present the required generalized matrix Jost solutions to recover the potential of the matrix spectral problems, and thus, solutions to the corresponding integrable Equation (3). Such solutions, G+ and G−, can be determined through an application of the Sokhotski–Plemelj formula to the difference of G+ and G−. Observing the asymptotic behaviors of the generalized matrix Jost solutions G± at infinity of λ leads to a recovery of the potential. The whole procedure also generates the corresponding inverse scattering transforms. Soliton solutions correspond to the reflectionless case, and they are constructed by solving the reflectionless Riemann–Hilbert problems or computing the corresponding reflectionless inverse scattering transforms.
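The Sokhotski–Plemelj step can be illustrated numerically in a scalar toy setting; this is only an illustration of the jump formula itself, not of the matrix Riemann–Hilbert problem, and the density g below is a hypothetical choice:

```python
import numpy as np

# Sokhotski-Plemelj: the Cauchy integral
#   f(lambda) = (1/(2*pi*i)) * Int g(s)/(s - lambda) ds
# has boundary values from C+ and C- whose difference recovers g on the real line.
g = lambda s: 1.0 / (1.0 + s**2)   # smooth, decaying toy density

def cauchy(lmbda, s, ds):
    # crude Riemann-sum evaluation of the Cauchy integral at a point off the real line
    return np.sum(g(s) / (s - lmbda)) * ds / (2j * np.pi)

ds = 1e-3
s = np.arange(-100.0, 100.0, ds)
eps = 1e-2                         # distance from the real axis
jump = cauchy(0.0 + 1j * eps, s, ds) - cauchy(0.0 - 1j * eps, s, ds)
print(abs(jump - g(0.0)))          # small: the jump reproduces g(0) up to O(eps)
```

The discrepancy behaves like the Poisson-kernel smoothing error, of order eps, so shrinking eps (with a correspondingly finer grid) tightens the agreement.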
It is also known that we can generate reduced integrable equations under group reductions of matrix spectral problems, both local (see, e.g., [10]) and nonlocal (see, e.g., [11–15]). One class of local group reductions is defined by:

U†(x, t, λ*) = ΣU(x, t, λ)Σ^(−1), (10)

and one class of nonlocal group reductions reads:

U†(−x, −t, −λ*) = −∆U(x, t, λ)∆^(−1), (11)

where † stands for the Hermitian transpose, Σ, ∆ are two constant invertible Hermitian matrices, and λ* is the complex conjugate of λ. The first class of reductions in (10) works for both the NLS equations and the mKdV equations, but the second class of reductions in (11) works only for the mKdV equations. In those reductions, the crucial point is to replace the spectral parameter λ with its complex conjugate, λ*, and with its negative complex conjugate, −λ*, respectively. Each of them can yield reduced integrable equations from zero curvature equations.
In this paper, we would like to consider the above two classes of group reductions, (10) and (11), for the matrix Ablowitz–Kaup–Newell–Segur (AKNS) spectral problems simultaneously, to generate reduced nonlocal matrix integrable mKdV hierarchies and to establish their Riemann–Hilbert problems and inverse scattering transforms. The starting point is a kind of arbitrary-order matrix AKNS spectral problems. The corresponding reflectionless Riemann–Hilbert problems are used to construct soliton solutions, by taking advantage of the specific distribution of eigenvalues and adjoint eigenvalues. The last section gives concluding remarks.

The Matrix AKNS Integrable Hierarchies Revisited
To present reduced nonlocal matrix integrable mKdV hierarchies, let us recall the construction of the integrable hierarchies of matrix AKNS equations (see, e.g., [16]).
Assume that m, n ≥ 1 are two given integers, and p, q are two matrix potentials:

p = p(x, t) = (p_jl)_(m×n), q = q(x, t) = (q_lj)_(n×m), (12)

λ is a spectral parameter, I_s denotes the identity matrix of size s, s ≥ 0, and α1, α2 and β1, β2 are two arbitrary pairs of distinct real constants. Each of the matrix AKNS integrable hierarchies is constructed from the matrix AKNS spectral problems with matrix potentials:

−iφ_x = U(u, λ)φ, −iφ_t = V^[r](u, λ)φ, r ≥ 0, (13)

where the Lax pair of spectral matrices is given by:

U(u, λ) = λΛ + P(u), V^[r](u, λ) = λ^r Ω + Q^[r](u, λ). (14)

In this pair of spectral matrices, Λ and Ω are two constant square matrices:

Λ = diag(α1 I_m, α2 I_n), Ω = diag(β1 I_m, β2 I_n), (15)

and the other two involved square matrices are defined by:

P = P(u) = [ 0, p; q, 0 ], (16)

which is called the potential matrix, and:

Q^[r](u, λ) = Σ_(s=1)^r λ^(r−s) [ a^[s], b^[s]; c^[s], d^[s] ], (17)

where a^[s], b^[s], c^[s], and d^[s] will be defined recursively later. Evidently, when m = 1, the matrix spectral problems in (13) are reduced to the multicomponent case, and if there is only a pair of nonzero potentials, for example, p_jk and q_kj, the matrix spectral problems in (13) are reduced to the standard AKNS case [17].
As usual, to generate an associated matrix AKNS integrable hierarchy, let us first solve the stationary zero curvature equation:

W_x = i[U, W], (18)

for a given spectral matrix U defined as in (14). We look for a solution W of the form:

W = [ a, b; c, d ], (19)

where a, b, c, d are m × m, m × n, n × m, and n × n matrices, respectively. Obviously, the stationary zero curvature Equation (18) precisely presents:

a_x = i(pc − bq), b_x = iαλb + i(pd − ap), c_x = −iαλc + i(qa − dq), d_x = i(qb − cp), (20)

where α = α1 − α2. Let us take W as a formal Laurent series:

W = Σ_(s≥0) λ^(−s) W_s, W_s = [ a^[s], b^[s]; c^[s], d^[s] ], s ≥ 0, (21)

and then, the system (20) leads equivalently to the recursion relations:

b^[0] = 0, c^[0] = 0, (22)
b^[s+1] = (1/α)( −i b^[s]_x − p d^[s] + a^[s] p ), s ≥ 0, (23)
c^[s+1] = (1/α)( i c^[s]_x + q a^[s] − d^[s] q ), s ≥ 0, (24)

and

a^[s]_x = i(p c^[s] − b^[s] q), d^[s]_x = i(q b^[s] − c^[s] p), s ≥ 0. (25)

Now, let us take the initial values for a^[0] and d^[0]:

a^[0] = β1 I_m, d^[0] = β2 I_n, (26)

and select zero constants of integration in (25), which means that we impose:

a^[s]|_(u=0) = 0, d^[s]|_(u=0) = 0, s ≥ 1. (27)

In this way, with a^[0] and d^[0] given by (26), we can uniquely determine all matrices W_s, s ≥ 1, recursively. For instance, one can work out that:

b^[1] = (β/α) p, c^[1] = (β/α) q, a^[1] = 0, d^[1] = 0,

and

b^[2] = −(iβ/α²) p_x, c^[2] = (iβ/α²) q_x, a^[2] = −(β/α²) pq, d^[2] = (β/α²) qp,

where β = β1 − β2. In particular, closed expressions for the higher-order matrices can also be obtained, in which I_(m,n) = diag(I_m, −I_n). Based on (25), we can easily derive, from (23) and (24), a recursion relation for determining b^[s] and c^[s], written compactly in terms of a matrix integro-differential operator Ψ. Finally, we see that the compatibility conditions of the two matrix spectral problems in (13), i.e., the zero curvature equations U_t − V^[r]_x + i[U, V^[r]] = 0, r ≥ 0, engender the so-called matrix AKNS integrable hierarchy (39). The first two nonlinear integrable equations in this hierarchy give us the AKNS matrix NLS Equations (40) and the AKNS matrix mKdV Equations (41), where the two matrix potentials, p and q, are defined by (12). When m = 1 and n = 2, the matrix NLS Equation (40) can be reduced to the Manakov system [18], under a group reduction of type (10).
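A useful consequence of the stationary zero curvature equation W_x = i[U, W] is that every trace tr(W^k) is independent of x, which is what makes constant choices for a^[0] and d^[0] consistent and underlies the generation of conserved densities. The identity behind the k = 2 case can be checked symbolically; the 2 × 2 sketch below uses generic matrices, not the specific U of (14):

```python
import sympy as sp

# With W_x = i*[U, W], one has d/dx tr(W**2) = tr(W_x*W + W*W_x)
# = i*tr([U,W]*W + W*[U,W]), which vanishes identically by cyclicity of the trace.
U = sp.Matrix(2, 2, lambda i, j: sp.Symbol(f'u{i}{j}'))
W = sp.Matrix(2, 2, lambda i, j: sp.Symbol(f'w{i}{j}'))
comm = U * W - W * U                       # the commutator [U, W]
ddx_trW2 = (comm * W + W * comm).trace()   # overall factor i dropped; only vanishing matters
print(sp.expand(ddx_trW2))  # 0
```

The same cancellation works for any matrix size and any power k, since tr([U, W]W^(k−1)) = 0 by cyclicity.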
By a theory of Lax operator algebras [9], we can directly show that (39) defines a hierarchy of commuting flows, which implies that each equation in the hierarchy (39) possesses infinitely many symmetries. Moreover, an application of the trace identity [19] can show that every nonlinear equation in (39) possesses a bi-Hamiltonian structure and thus infinitely many conservation laws, which commute under both Poisson brackets associated with the bi-Hamiltonian structure.

Reduced Nonlocal Matrix Integrable mKdV Hierarchies
Let us now construct a kind of reduced nonlocal integrable mKdV hierarchy by two group reductions of the matrix AKNS spectral problems in (13), of which one is local and the other is nonlocal.
We take two pairs of constant invertible Hermitian matrices Σ1, Σ2 and ∆1, ∆2, and consider two classes of group reductions for the spectral matrix U defined as in (14):

U†(x, t, λ*) = ΣU(x, t, λ)Σ^(−1), (42)

and:

U†(−x, −t, −λ*) = −∆U(x, t, λ)∆^(−1), (43)

where Σ, ∆ are two constant invertible Hermitian matrices given by:

Σ = diag(Σ1, Σ2), (44)

and:

∆ = diag(∆1, ∆2). (45)

These two classes of reductions precisely require the local potential reduction:

P†(x, t) = ΣP(x, t)Σ^(−1), (46)

and the nonlocal potential reduction:

P†(−x, −t) = −∆P(x, t)∆^(−1), (47)

which allow us to make the local and nonlocal reductions for the matrix potentials:

q(x, t) = Σ2^(−1) p†(x, t) Σ1, (48)

and:

q(x, t) = −∆2^(−1) p†(−x, −t) ∆1, (49)

respectively. It then follows that both classes of reductions for the spectral matrix U need an additional constraint for the matrix potential p:

p(x, t) = −Σ1^(−1) ∆1 p(−x, −t) ∆2^(−1) Σ2. (50)

Further, the group reductions in (42) and (43) ensure the corresponding reduction relations for the temporal spectral matrices, where s ≥ 0, V^[2s+1] is defined as in (14), and Q^[2s+1] is defined by (17). Therefore, under the group reductions (48) and (49), the integrable matrix AKNS equations in (39) with r = 2s + 1, s ≥ 0, are reduced to a hierarchy of reduced nonlocal matrix integrable mKdV type equations, (54), where p = (p_jl)_(m×n) satisfies (50), and Σ1, ∆1 and Σ2, ∆2 are two pairs of arbitrarily given invertible Hermitian matrices of sizes m and n, respectively. All equations in the hierarchy (54) possess Lax pairs of the reduced spatial and temporal matrix spectral problems in (13) with r = 2s + 1, s ≥ 0, and infinitely many symmetries and conservation laws reduced from those for the integrable matrix AKNS equations in (39) with r = 2s + 1, s ≥ 0. Let us fix s = 1, i.e., r = 3. Then, the reduced nonlocal matrix integrable mKdV type equations in (54) present a class of reduced nonlocal matrix integrable mKdV Equations (55), where p is an m × n matrix potential satisfying (50).
In what follows, we would like to present a few examples of these novel nonlocal matrix integrable mKdV equations, by taking different values for m, n and different choices for Σ, ∆. Let us first consider m = 1 and n = 2, and take Σ and ∆ built from real constants σ and δ satisfying σ² = δ² = 1. Then, the potential constraint (50) relates the two components of p = (p1, p2) to each other under the reverse-spacetime transformation, and so determines the corresponding potential matrix P. The corresponding novel nonlocal integrable mKdV equation then follows, where σ = ±1, |z| is the absolute value of z, and z* is the complex conjugate of z. This nonlocal integrable mKdV equation, which has two nonlinear terms and two reverse-spacetime factors, is very different from the ones studied in [20–22], which have only one nonlinear term and one reverse-spacetime factor. As will be seen later, the equation needs more restrictions while formulating its soliton solutions.
Second, let us consider m = 1 and n = 4, and take Σ and ∆ built from real constants σ_j and δ_j satisfying σ_j² = δ_j² = 1, j = 1, 2. Then, the potential constraint (50) links the components of p = (p1, p2, p3, p4) pairwise under the reverse-spacetime transformation, and so determines the corresponding potential matrix P. This enables us to obtain a class of two-component reduced nonlocal integrable mKdV equations, where σ_j are real constants satisfying σ_j² = 1, j = 1, 2. Similarly, we can also generate multi-component reduced nonlocal integrable mKdV equations.
Recall that the x-part of (13) and the spectral problem (66) possess corresponding adjoint equations. Obviously, this pair of adjoint matrix spectral problems, or the equivalent adjoint matrix spectral problems, does not create any additional condition. Let ψ(λ) be a matrix eigenfunction of the spatial spectral problem (66) associated with an eigenvalue λ. Then, Σψ^(−1)(λ) and ∆ψ^(−1)(λ) are two matrix adjoint eigenfunctions associated with the same eigenvalue λ. With the group reduction in (43), one can see that the matrix ψ†(−x, −t, −λ*)∆ gives rise to another matrix adjoint eigenfunction associated with the same original eigenvalue λ; equivalently, ψ†(−x, −t, −λ*)∆ solves the adjoint spectral problem (70). Thus, upon observing the asymptotic conditions of the matrix eigenfunction ψ, it follows from the uniqueness of solutions that ψ(λ) satisfies:

ψ†(−x, −t, −λ*)∆ = ∆ψ^(−1)(x, t, λ). (72)

Similarly, based on the group reduction in (42), we can find that ψ†(x, t, λ*)Σ presents a new matrix adjoint eigenfunction associated with λ and satisfies:

ψ†(x, t, λ*)Σ = Σψ^(−1)(x, t, λ). (74)

Riemann-Hilbert Problems
We begin to present a class of associated Riemann–Hilbert problems with the space variable x. To formulate the problems explicitly, we make the assumptions in (75) on the constants α, β and on the decay of the potentials. While considering the scattering problem, let us first take the two matrix eigenfunctions ψ±(x, λ) of (66) with the canonical asymptotic conditions:

ψ±(x, λ) → I_(m+n), as x → ±∞, (76)

respectively. Based on (68), we then see that det ψ± = 1 for all values of x ∈ R. Since ψ+E and ψ−E, where E = e^(iλΛx), are both matrix eigenfunctions of the x-part of the matrix spectral problems (13), they have to be linearly dependent, and therefore, we have:

ψ−(x, λ)E = ψ+(x, λ)E S(λ), λ ∈ R. (78)

Here, S(λ) is the so-called scattering matrix, and it is clear that det S(λ) = 1, due to det ψ± = 1.
As usual, by the method of variation of parameters, one can transform the x-part of the matrix spectral problems (13) into the equivalent Volterra integral equations (79) for ψ± [8], where the canonical asymptotic conditions (76) have been applied. Further, by the Neumann series [23] in the theory of Volterra integral equations, one can prove the existence of the eigenfunctions ψ±, which allow analytic continuations off the real axis λ ∈ R as long as the integrals on their right-hand sides converge (see, e.g., [24]). Based on the diagonal form of Λ and the first assumption in (75), one can show that the integral equation for the first m columns of ψ− contains only the exponential factor e^(−iαλ(x−y)), and the integral equation for the last n columns of ψ+ contains only the exponential factor e^(iαλ(x−y)). Note that the factor e^(−iαλ(x−y)) decays, because y < x in the integral, when λ takes values in the upper half-plane C+, and the factor e^(iαλ(x−y)) also decays, because y > x in the integral, when λ takes values in the upper half-plane C+. Accordingly, one knows that those m + n columns are analytic in the upper half-plane C+ and continuous in the closed upper half-plane C̄+. In a similar manner, one can show that the first m columns of ψ+ and the last n columns of ψ− are analytic in the lower half-plane C− and continuous in the closed lower half-plane C̄−.
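Before turning to the detailed proof, the Neumann-series iteration behind this existence argument can be sketched on a scalar toy Volterra equation with a hypothetical constant kernel K, whose iterates are exactly the partial sums of the exponential series:

```python
import numpy as np

# Toy Volterra equation psi(x) = 1 + Int_0^x K*psi(y) dy with a constant kernel K.
# The Neumann terms are (K*x)**k / k!, so the series converges uniformly on
# compact sets to exp(K*x) -- a scalar caricature of the iteration for psi^{+-}.
K = 0.5
x = np.linspace(0.0, 2.0, 2001)
dx = x[1] - x[0]

psi = np.ones_like(x)            # zeroth Neumann term
term = np.ones_like(x)
for k in range(1, 30):           # add successive iterated integrals
    # cumulative trapezoid integration of the previous term from 0 to x
    term = K * np.concatenate(([0.0], np.cumsum(0.5 * (term[1:] + term[:-1]) * dx)))
    psi = psi + term

print(np.max(np.abs(psi - np.exp(K * x))))  # small: the series sums to exp(K*x)
```

In the matrix setting, the same uniform bounds on the iterated integrals (with the exponential factors controlled in C̄+) deliver both convergence and continuity in λ.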
In what follows, we show how to prove the above statements. Let us split ψ± into columns:

ψ± = (ψ±_1, ψ±_2, ..., ψ±_(m+n)),

namely, ψ±_j denotes the jth column of ψ± (1 ≤ j ≤ m + n). We would like to show that the m + n column eigenfunctions ψ−_j, 1 ≤ j ≤ m, and ψ+_j, m + 1 ≤ j ≤ m + n, are analytic with respect to λ in C+ and continuous with respect to λ in C̄+, and that the m + n column eigenfunctions ψ+_j, 1 ≤ j ≤ m, and ψ−_j, m + 1 ≤ j ≤ m + n, are analytic with respect to λ in C− and continuous with respect to λ in C̄−. Below, we only prove the result for ψ+_j, m + 1 ≤ j ≤ m + n; the proofs for the other column eigenfunctions follow analogously.
From the Volterra integral Equation (79), we know that

ψ−_j(λ, x) = e_j + ∫_(−∞)^x R_1(λ, x, y) ψ−_j(λ, y) dy, 1 ≤ j ≤ m,

and

ψ+_j(λ, x) = e_j − ∫_x^(+∞) R_2(λ, x, y) ψ+_j(λ, y) dy, m + 1 ≤ j ≤ m + n,

where e_j, 1 ≤ j ≤ m + n, are the standard basis vectors of R^(m+n), and the square matrices R_1 and R_2 are given by

R_1(λ, x, y) = i [ 0, p(y); e^(−iαλ(x−y)) q(y), 0 ], and R_2(λ, x, y) = i [ 0, e^(iαλ(x−y)) p(y); q(y), 0 ].
Let us first prove that for each m + 1 ≤ j ≤ m + n, the Neumann series

ψ+_j = Σ_(k≥0) ψ+_(j,k),

whose terms are defined recursively by (84), will determine the solution to the corresponding Volterra integral equation. This statement will be true if we can show that the Neumann series converges uniformly for both x ∈ R and λ ∈ C̄+. Based on (84), an application of mathematical induction yields an iterated-integral estimate for |ψ+_(j,k)(λ, x)|, valid for both x ∈ R and λ ∈ C̄+, where | • | stands for the Euclidean norm for vectors and ∥ • ∥ denotes the Frobenius norm for square matrices. By the Weierstrass M-test, it follows from this estimate that the Neumann series converges uniformly for both λ ∈ C̄+ and x ∈ R, and all ψ+_j(λ, x), m + 1 ≤ j ≤ m + n, are continuous with respect to λ in C̄+, because so are all ψ+_(j,k)(λ, x), m + 1 ≤ j ≤ m + n, k ≥ 0. Next, we would like to consider the differentiability of ψ+_j(λ, x), m + 1 ≤ j ≤ m + n, with respect to λ in C+ (similarly, we can show the differentiability with respect to x in R). Let us fix an integer m + 1 ≤ j ≤ m + n. For a complex number µ in C+, take a disk B_ρ(µ) = {λ ∈ C | |λ − µ| ≤ ρ} with a radius ρ > 0 such that B_ρ(µ) ⊆ C+. Then, there is a constant C(ρ) > 0 such that |αx e^(−iαλx)| ≤ C(ρ) for λ ∈ B_ρ(µ) and x ≥ 0. We consider the following Neumann series:

Σ_(k≥0) ψ+_(j,λ,k),

where ψ+_(j,λ,0) = 0 and ψ+_(j,λ,k), k ≥ 1, are defined recursively by differentiating (84) term by term, with ψ+_(j,k), k ≥ 0, being defined by (84) and R_(2,λ) denoting the λ-derivative of R_2. It can be easily shown by applying mathematical induction that a similar estimate holds for both x ∈ R and λ ∈ B_ρ(µ). Now, based on the Weierstrass M-test, the Neumann series determined by (86) converges uniformly for both x ∈ R and λ ∈ B_ρ(µ), and through the term-by-term differentiability theorem, it converges to the derivative of ψ+_j with respect to λ, since ψ+_(j,λ,k) = ∂ψ+_(j,k)/∂λ, k ≥ 0.
It follows that ψ+_j is analytic at any point λ ∈ B_ρ(µ), and thus, particularly at the point µ. This tells us that all ψ+_j, m + 1 ≤ j ≤ m + n, are indeed analytic with respect to λ in C+, which finishes the required proof. Now, on the basis of these analyses, we can define the generalized matrix Jost solution T+ as

T+(x, λ) = ψ−(x, λ)H_1 + ψ+(x, λ)H_2,

where H_1 and H_2 are given by

H_1 = diag(I_m, 0), H_2 = diag(0, I_n),

and know that T+ is analytic with respect to λ in the upper half-plane C+ and continuous with respect to λ in the closed upper half-plane C̄+. Additionally, the generalized matrix Jost solution ψ+(x, λ)H_1 + ψ−(x, λ)H_2 is analytic with respect to λ in the lower half-plane C− and continuous with respect to λ in the closed lower half-plane C̄−. To determine the other generalized matrix Jost solution T−, we adopt the analytic counterpart of T+ in the lower half-plane C−, which can be generated from the adjoint counterparts of the matrix spectral problems. Recall that the inverse matrices (φ±)^(−1) and (ψ±)^(−1) provide solutions to the two corresponding adjoint matrix spectral problems. Thus, upon splitting ψ̃± = (ψ±)^(−1) into rows, where ψ̃±_j stands for the jth row of ψ̃± (1 ≤ j ≤ m + n), one can show by similar arguments that one can define the generalized matrix Jost solution T− as the adjoint matrix solution of (70):

T−(x, λ) = H_1 ψ̃+(x, λ) + H_2 ψ̃−(x, λ).

This is analytic in λ in the lower half-plane C− and continuous in λ in the closed lower half-plane C̄−. Additionally, the other generalized matrix Jost solution of (70), H_1 ψ̃−(x, λ) + H_2 ψ̃+(x, λ), is analytic in λ in the upper half-plane C+ and continuous in λ in the closed upper half-plane C̄+. Furthermore, based on det ψ± = 1 and the scattering relation (78) for ψ+ and ψ−, one immediately obtains

det T+(λ) = det S11(λ), and thus, det T−(λ) = det Ŝ11(λ),

where we split S(λ) and S^(−1)(λ) into block matrices as follows:

S = [ S11, S12; S21, S22 ], and S^(−1) = Ŝ = [ Ŝ11, Ŝ12; Ŝ21, Ŝ22 ].

These two generalized matrix Jost solutions allow us to establish the required matrix Riemann–Hilbert problems on the real line:

G+(x, λ) = G−(x, λ)G0(x, λ), λ ∈ R, (99)

for the reduced nonlocal matrix integrable mKdV type equations (54). Here, the jump matrix G0 is given by

G0(x, λ) = E [ I_m, Ŝ12(λ); S21(λ), I_n ] E^(−1),

which is a consequence of (78). The matrix
S(λ) admits a factorization compatible with the above jump matrix, which can be verified blockwise. Note that for the presented Riemann–Hilbert problems, the canonical normalization conditions

G±(x, λ) → I_(m+n), as λ → ∞, (103)

come from the Volterra integral equations in (79). Moreover, from the properties of the eigenfunctions in (72) and (74), we can derive the corresponding involution properties (104) and (105) for G±. It therefore follows that the jump matrix G0 satisfies involution properties of the same type.

Evolution of the Scattering Data
For the completeness of the required direct scattering transforms, we compute the derivative of the eigenfunction relation (78) with respect to the time t, and apply the temporal matrix spectral problems in (13) with r = 2s + 1, where s ≥ 0 is fixed. Then, one can see that the scattering matrix S possesses an evolution equation, which leads to the time evolution of the time-dependent scattering coefficients, and tells us that all remaining scattering coefficients do not depend on the time variable t.
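The displayed formulas were lost in extraction, but the shape of this evolution is standard; the following is a sketch under the assumptions that V^[2s+1] → λ^(2s+1)Ω as |x| → ∞ and that the temporal problem takes the form −iφ_t = V^[2s+1]φ:

```latex
% Sketch (sign conventions assumed as stated above):
S_t = \mathrm{i}\,\lambda^{2s+1}\,[\Omega, S],
\qquad \Omega = \mathrm{diag}(\beta_1 I_m,\; \beta_2 I_n),
```

so blockwise, with β = β1 − β2, the diagonal blocks S11, S22 are constant in t, while S12(t, λ) = e^(iβλ^(2s+1)t) S12(0, λ) and S21(t, λ) = e^(−iβλ^(2s+1)t) S21(0, λ); the exact signs should be checked against the paper's normalization.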

Gelfand-Levitan-Marchenko Type Equations
To determine the generalized matrix Jost solutions, we compute equivalent Gelfand–Levitan–Marchenko type integral equations. To this end, we transform the associated Riemann–Hilbert problems in (99) into the problems (113), where each jump matrix G0 is defined by (100) and (102). Define G(λ) = G±(λ) for λ ∈ C±. To avoid spectral singularities, we suppose that G has only simple poles off R, denoted {ξ_j}_(j=1)^R, where R ≥ 1 is an arbitrarily given integer. Further, define the pole-subtracted function of G, where G_j denotes the residue of G at λ = ξ_j, namely,

G_j = lim_(λ→ξ_j) (λ − ξ_j) G(λ).

Then, upon applying the Sokhotski–Plemelj formula [25], one obtains the solution to each problem in (113). Computing the limit as λ → ξ_l engenders a closed linear system for the residues, and consequently, we arrive at the required Gelfand–Levitan–Marchenko type integral equations. All these equivalent integral equations completely determine solutions to the resulting Riemann–Hilbert problems and thus the required generalized matrix Jost solutions. However, little is known regarding the existence and uniqueness of solutions. Nevertheless, in the reflectionless case, a formulation of solutions will be given for the reduced nonlocal reverse-spacetime matrix integrable mKdV type equations in the next section.

Recovering the Potential Matrix
To obtain the potential matrix P from the unimodular generalized matrix Jost solutions, let us consider an asymptotic expansion:

G±(x, λ) = I_(m+n) + λ^(−1) G±_1(x) + O(λ^(−2)), λ → ∞.

Upon plugging the above asymptotic expansion into the matrix spectral problem (66) and making a comparison of constant terms, one obtains a formula recovering the potential matrix P from the matrix G+_1, where G+_1 is partitioned into a block matrix in the same way as P, with diagonal blocks of sizes m × m and n × n. Therefore, the solutions to the matrix AKNS equations (39) are given through the off-diagonal blocks of G+_1 (see (120)–(122)). When the reduction conditions in (46) and (47) are satisfied, the reduced matrix potential p solves the reduced nonlocal matrix integrable mKdV type Equations (54).
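The comparison of constant terms can be sketched as follows, assuming the equivalent spectral problem takes the form ψ_x = iλ[Λ, ψ] + P̌ψ with P̌ = iP (the normalization here is reconstructed, so the signs should be checked against the paper's conventions):

```latex
% Substituting G^{\pm}(x,\lambda) = I + \lambda^{-1} G_1^{\pm}(x) + O(\lambda^{-2})
% and collecting the O(1) terms:
0 = \mathrm{i}\,[\Lambda, G_1^{+}] + \check P
\;\Longrightarrow\;
P = [G_1^{+}, \Lambda],
\qquad
p = -\alpha\,(G_1^{+})_{12}, \quad q = \alpha\,(G_1^{+})_{21},
```

where α = α1 − α2 and (G+_1)_12, (G+_1)_21 denote the m × n and n × m off-diagonal blocks of G+_1.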
To sum up, this provides a Riemann–Hilbert problem formulation of the inverse scattering transform for computing solutions to the reduced nonlocal matrix integrable mKdV type equations (54). It starts from the scattering data in S(λ), and then computes the jump matrix G0(λ). The potential matrix P finally follows from the solution {G+(λ), G−(λ)} of the associated Riemann–Hilbert problems.
In order to compute soliton solutions explicitly, we additionally assume that each of the zeros λ_k and λ̂_k, 1 ≤ k ≤ N, of det T+ and det T− is geometrically simple. Thus, we know that each of ker T+(λ_k) and ker T−(λ̂_k), 1 ≤ k ≤ N, contains only a single column and row basis vector, respectively. We take v_k ∈ ker T+(λ_k), v_k ≠ 0, and v̂_k ∈ ker T−(λ̂_k), v̂_k ≠ 0, for 1 ≤ k ≤ N. In this way, we have:

T+(λ_k) v_k = 0, v̂_k T−(λ̂_k) = 0, 1 ≤ k ≤ N. (123)

It is known that soliton solutions are associated with the situation where G0 = I_(m+n) is taken in each Riemann–Hilbert problem in (99). Such a situation can be met if we take S21 = Ŝ12 = 0, namely, if we take all zero reflection coefficients in the scattering problem. Such a kind of specific Riemann–Hilbert problem, which possesses the canonical normalization conditions in (103) and the zero structures given in (123), is solvable [8,26] in the local case of

λ̂_k = λ*_k, 1 ≤ k ≤ N, (124)

and therefore, we can present the potential matrix P exactly, which generates soliton solutions.
In the nonlocal case, we cannot keep the condition (124). Therefore, to present a general formulation of solutions to reflectionless Riemann–Hilbert problems in the nonlocal case, we assume that for N = 2N1 + N2, where N1, N2 ≥ 0 are two integers, we can make the rearrangements of the eigenvalues λ_k, 1 ≤ k ≤ N, and the adjoint eigenvalues λ̂_k, 1 ≤ k ≤ N, together with the corresponding rearrangements for their eigenfunctions and adjoint eigenfunctions, as in (125)–(127). Then, we introduce

G+(λ) = I_(m+n) − Σ_(k,l=1)^N v_k (M^(−1))_kl v̂_l / (λ − λ̂_l), (G−(λ))^(−1) = I_(m+n) + Σ_(k,l=1)^N v_k (M^(−1))_kl v̂_l / (λ − λ_k), (129)

where M is a square matrix M = (m_kl)_(N×N) with its entries determined by:

m_kl = v̂_k v_l / (λ_l − λ̂_k), 1 ≤ k, l ≤ N. (130)

Therefore, G+(λ) and G−(λ) are analytic in C+ and C−, respectively. By an analogous argument to the one in [12], we can prove that G+(λ) and G−(λ) solve the corresponding reflectionless Riemann–Hilbert problem. Since the zeros λ_k and λ̂_k do not depend on the space and time variables, one can readily determine the spatial and temporal evolutions of the kernel vectors v_k(x, t) and v̂_k(x, t), 1 ≤ k ≤ N. For example, one can compute v_k(x, t), 1 ≤ k ≤ N, as follows. Taking the x-derivative of both sides of the first set of equations in (123), and applying (66) and then again the first set of equations in (123), one obtains:

T+(x, λ_k) ( dv_k/dx − iλ_k Λ v_k ) = 0, 1 ≤ k ≤ N.

Consequently, for each 1 ≤ k ≤ N, dv_k/dx − iλ_k Λ v_k is a kernel vector of T+(x, λ_k), and hence a constant multiple of v_k, because ker T+(λ_k) is one-dimensional. Therefore, without loss of generality, one can just assume:

dv_k/dx = iλ_k Λ v_k, 1 ≤ k ≤ N.

The time dependence of v_k can be obtained in a similar manner via applying the associated temporal matrix spectral problem, i.e., (67). As a consequence of these differential equations, we get:

v_k(x, t) = e^(iλ_k Λ x + iλ_k^(2s+1) Ω t) w_k, 1 ≤ k ≤ N,

and completely similarly, we can obtain:

v̂_k(x, t) = ŵ_k e^(−iλ̂_k Λ x − iλ̂_k^(2s+1) Ω t), 1 ≤ k ≤ N,

where w_k and ŵ_k, 1 ≤ k ≤ N, are constant column and row vectors, respectively. Now, based on the solutions in (129), one obtains:

G+_1 = −Σ_(k,l=1)^N v_k (M^(−1))_kl v̂_l, (136)

and further, the presentations in (122) give rise to the N-soliton solutions to the matrix AKNS Equations (39). Here, for each 1 ≤ k ≤ N, we have made the splittings v_k = ((v^1_k)ᵀ, (v^2_k)ᵀ)ᵀ and v̂_k = (v̂^1_k, v̂^2_k). These mean that the resulting potential
matrix P determined by (120) will satisfy the group reduction conditions in (46) and (47). In this way, the above N-soliton solutions to the matrix AKNS Equations (39) reduce to N-soliton solutions (139) to the reduced nonlocal matrix integrable mKdV type Equations (54).
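In the simplest scalar local situation, the one-soliton profile produced by such reflectionless formulas reduces, in a common normalization (an assumption for illustration; the matrix normalization of (54) differs), to the familiar sech soliton of the focusing mKdV equation, which can be verified symbolically:

```python
import sympy as sp

# Symbolic check in a common scalar normalization (illustration only, not the
# matrix equation (54)): u = k*sech(k*x - k**3*t) solves the focusing mKdV
# equation u_t + 6*u**2*u_x + u_xxx = 0.
x, t, k = sp.symbols('x t k', real=True)
u = k * sp.sech(k*x - k**3*t)
residual = sp.diff(u, t) + 6*u**2*sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(residual.rewrite(sp.exp)))  # 0
```

The zero residual reflects the cancellation 1 − tanh² = sech², which is exactly the mechanism that makes the reflectionless determinant formulas close in the scalar case.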

Realization
Let us now check how to realize the involution properties in (138). First, we take N distinct zeros of det T+(λ) (i.e., eigenvalues of the matrix spectral problems with the zero potential) and N zeros of det T−(λ) (i.e., eigenvalues of the adjoint matrix spectral problems with the zero potential), arranged as in (140) and (141), where µ_k ∈ C+, µ_k ∉ iR, and ν_k ∈ R+. It is easy to see that all ker T+(λ_k), 1 ≤ k ≤ N, are linearly spanned by the column vectors in (142), where each w_k (1 ≤ k ≤ N) is a constant column vector. These column vectors in (142) are eigenfunctions of the matrix spectral problems with the zero potential, associated with the eigenvalues λ_k, 1 ≤ k ≤ N. Furthermore, following the preceding analyses in Section 3.1, all ker T−(λ̂_k), 1 ≤ k ≤ N, are linearly spanned by the row vectors v̂_k given in (143)–(145). These row vectors v̂_k, 1 ≤ k ≤ N, are eigenfunctions of the adjoint spectral problems with the zero potential, associated with the adjoint eigenvalues λ̂_k, 1 ≤ k ≤ N, respectively. It is direct to see that the choices in (143)–(145) yield the selections on the constant vectors w_k and ŵ_k in (146), where * denotes the complex conjugate of a matrix. We emphasize that all these selections aim to satisfy the reduction conditions in (46) and (47). Now, note that when the solutions to the special Riemann–Hilbert problems, defined by (129) and (130), possess the involution properties in (104) and (105), the corresponding relevant matrix G+_1 will satisfy the involution properties in (138), which are consequences of the group reductions in (42) and (43). Therefore, when the selections in (146) are made, Formula (139), together with (129), (130), and (142)–(145), gives rise to N-soliton solutions to the reduced nonlocal matrix integrable mKdV type equations (54).

Concluding Remarks
We have proposed type (λ*, −λ*) reduced nonlocal matrix integrable mKdV hierarchies of equations, by taking advantage of two group reductions of the matrix AKNS spectral problems of arbitrary order, and formulated Riemann–Hilbert problems for the resulting matrix integrable mKdV type equations, by use of the Lax pair and the adjoint Lax pair of matrix spectral problems. The reflectionless Riemann–Hilbert problems have been applied to soliton solutions of the proposed reduced matrix integrable mKdV type equations.
The key step in our construction is to use two group reductions simultaneously to generate reduced integrable equations, of which one is local and the other is nonlocal. In our analyses of Riemann–Hilbert problems, we have reformulated solutions to the corresponding reflectionless Riemann–Hilbert problems, based on the distribution of eigenvalues and adjoint eigenvalues. Such a treatment of Riemann–Hilbert problems is vital to the presentation of soliton solutions in the nonlocal case. It should also be interesting to apply the idea of adopting a pair of group reductions to other matrix spectral problems to generate reduced nonlocal integrable equations.
Indeed, the Riemann-Hilbert approach is very effective in presenting soliton solutions (see also, e.g., [27][28][29]), and the technique has been generalized to solve various initial boundary value problems of nonlinear integrable equations on the half-line or the finite interval [30,31].There exist many other powerful approaches to soliton solutions, which include the Hirota direct method [4], the Wronskian technique [32,33], the generalized bilinear technique [34,35], the Bell polynomial approach [36,37], and the Darboux transformation [3,38].It would be of significant importance to search for connections among different methods to exhibit dynamical behaviors of soliton solutions.It is another interesting topic for future study to establish Riemann-Hilbert problems to solve generalized integrable counterparts-for example, integrable couplings, super-symmetric integrable equations, and fractional spacetime analogous equations.We would also like to emphasize that it would be particularly interesting to construct diverse exact solutions other than solitons to nonlinear integrable equations-for instance, positon solutions [39], or more generally, complexiton solutions [40], rogue wave and lump solutions [41][42][43][44], solitonless solutions [45], and algebro-geometric solutions [46] from the perspective of Riemann-Hilbert problems.

From (94), we know that S11, Ŝ11 are m × m matrices; and so S12, Ŝ12 are m × n matrices, S21, Ŝ21 are n × m matrices, and S22, Ŝ22 are n × n matrices, since S(λ) is a square matrix of size m + n. Also, it follows from the uniform convergence of the Neumann series, defined previously, that S11(λ) and Ŝ11(λ) are analytic at λ ∈ C+ and λ ∈ C−, respectively. Now, one can define the two unimodular generalized matrix Jost solutions G+ and G− accordingly, where v^1_k and v̂^1_k are column and row vectors of dimension m, respectively, while v^2_k and v̂^2_k are column and row vectors of dimension n, respectively. To present N-soliton solutions for the reduced nonlocal matrix integrable mKdV type Equations (54), one needs to check whether G+_1, determined by (136), possesses the involution properties in (138).