Article

Long-Time Asymptotics of a Three-Component Coupled mKdV System

1 Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China
2 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3 Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620, USA
4 College of Mathematics and Physics, Shanghai University of Electric Power, Shanghai 200090, China
5 College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
6 International Institute for Symmetry Analysis and Mathematical Modelling, Department of Mathematical Sciences, North-West University, Mafikeng Campus, Private Bag X2046, Mmabatho 2735, South Africa
Mathematics 2019, 7(7), 573; https://doi.org/10.3390/math7070573
Submission received: 2 June 2019 / Revised: 10 June 2019 / Accepted: 20 June 2019 / Published: 27 June 2019

Abstract: We present an application of the nonlinear steepest descent method to a three-component coupled mKdV system associated with a 4 × 4 matrix spectral problem. An integrable coupled mKdV hierarchy with three potentials is first generated. Based on the corresponding oscillatory Riemann-Hilbert problem, the leading asymptotics of the three-component mKdV system is then evaluated by using the nonlinear steepest descent method.

1. Introduction

Asymptotic analysis has become an important area in soliton theory, and its basic approaches are used to determine limiting behaviors of Cauchy problems of integrable systems. Significant work on long-time asymptotics of integrable systems was carried out by Manakov [1] and Ablowitz and Newell [2]. Zakharov and Manakov [3] wrote down precise expressions, depending explicitly on initial data, for the leading asymptotics of the nonlinear Schrödinger equation in the physically interesting region x = O(t). A complete description was presented for the leading asymptotics of the Cauchy problem of the Korteweg-de Vries (KdV) equation by Ablowitz and Segur [4]. The method of Zakharov and Manakov starts with an ansatz for the asymptotic form of the solution and utilizes some techniques which are removed from the classical framework of Riemann-Hilbert (RH) problems. Segur and Ablowitz [5] began with the similarity solution form to derive the leading two terms in each of the asymptotic expansions of the amplitude and phase for the nonlinear Schrödinger equation, based on conservation laws. However, all those results were presented without applying the method of stationary phase.
Its [6] used the stationary phase idea to conjugate the RH problem associated with the nonlinear Schrödinger equation, up to small errors which decay as $t \to \infty$, by an appropriate parametrix to a model RH problem solvable by techniques from the theory of isomonodromic deformations. Deift and Zhou [7] determined the long-time asymptotics of the mKdV equation by manipulating an associated oscillatory RH problem systematically and rigorously, in the spirit of the stationary phase method. Their technique, further developed in [8,9] and also in [10], opens a nonlinear steepest descent method to explore asymptotics of integrable systems through analyzing oscillatory RH problems associated with matrix spectral problems. A crucial ingredient of the Deift-Zhou approach is the asymptotic analysis of singular integrals on contours by deformations. Applications have been made to a number of integrable equations, including the KdV equation, the nonlinear Schrödinger equation, the sine-Gordon equation, the derivative nonlinear Schrödinger equation, the Camassa-Holm equation and the Kundu-Eckhaus equation (see, e.g., [11,12,13,14,15,16,17,18]). McLaughlin and Miller [19] generalized the steepest descent method to the case when the jump matrix fails to be analytic, and their method is now called the nonlinear $\bar{\partial}$ steepest descent method. One important factor in the nonlinear steepest descent method is the order of the spectral matrices involved in the RH problems, or equivalently in the inverse scattering theory. However, only 2 × 2 spectral matrices and their RH problems have been considered systematically (see, e.g., [20]), which lead to algebro-geometric solutions to integrable systems expressed by hyperelliptic functions [21]. There are only a few 3 × 3 spectral matrices whose long-time asymptotics or RH problems have been considered (see, e.g., [22,23]) and whose associated inverse scattering transforms have been solved (see, e.g., [24,25]). Associated trigonal curves exhibit much more diverse asymptotic behaviors and algebro-geometric solutions than hyperelliptic curves [26,27]. There has been no application of the nonlinear steepest descent method to 4th-order or higher-order matrix spectral problems so far.
In this paper, we would like to present an application of the nonlinear steepest descent method to a 4 × 4 matrix spectral problem
i ϕ x = U ϕ , U = k p 1 p 2 p 3 p 1 * k 0 0 p 2 * 0 k 0 p 3 * 0 0 k ,
where i denotes the unit imaginary number, k is the spectral parameter and the superscript ∗ denotes the complex conjugate. This spectral problem generates a three-component coupled mKdV system
$$p_{j,t} = p_{j,xxx} + 3\bigl(|p_1|^2 + |p_2|^2 + |p_3|^2\bigr)p_{j,x} + 3\bigl(p_1^*p_{1,x} + p_2^*p_{2,x} + p_3^*p_{3,x}\bigr)p_j, \quad 1 \le j \le 3,$$
where
$$|p_i|^2 = p_ip_i^*, \quad 1 \le i \le 3.$$
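As a quick sanity check on the form of system (2) as displayed above, one can verify symbolically that the single-component real reduction $p_1 = u$, $p_2 = p_3 = 0$ collapses it to the classical mKdV equation $u_t = u_{xxx} + 6u^2u_x$. A minimal SymPy sketch (an illustration only, not part of the derivation):

```python
# Check: with p_2 = p_3 = 0 and p_1 = u real (so u* = u), the right-hand side of
# system (2), as displayed above, reduces to that of u_t = u_xxx + 6 u^2 u_x.
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)          # real-valued potential

rhs_coupled = u.diff(x, 3) + 3*u**2*u.diff(x) + 3*(u*u.diff(x))*u   # j = 1 component
rhs_mkdv = u.diff(x, 3) + 6*u**2*u.diff(x)
print(sp.simplify(rhs_coupled - rhs_mkdv))   # 0
```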
We will compute the leading asymptotics of this mKdV system by analyzing an associated oscillatory RH problem. Symmetry constraints and RH problems have been established for an unreduced combined mKdV system in [28,29]. It is worth noting that this mKdV system (2) has a slightly different matrix spectral problem from the one for the multiple wave interaction equations [30].
Let $|M|$ denote an equivalent matrix norm for a matrix M (not necessarily square):
$$|M| = \bigl[\mathrm{tr}(MM^\dagger)\bigr]^{\frac{1}{2}},$$
where $M^\dagger$ stands for the Hermitian transpose of M, and let
$$\mathcal{S}^3 = \bigl\{(f_1,f_2,f_3)\,\big|\,f_i \in \mathcal{S},\ 1\le i\le 3\bigr\},$$
where S denotes the Schwartz space. The primary result of the paper can be stated as follows.
Theorem 1.
Let $p(x,t) = (p_1(x,t), p_2(x,t), p_3(x,t))$ solve the Cauchy problem of the three-component coupled mKdV system (2) with initial data in $\mathcal{S}^3$. Suppose that $\gamma(k) = (\gamma_1(k), \gamma_2(k), \gamma_3(k))$ is the reflection coefficient vector in $\mathcal{S}^3$ associated with the initial data and it satisfies
$$\gamma^*(k^*) = \gamma(k), \qquad \sup_{k\in\mathbb{R}}|\gamma(k)| < 1.$$
Then, in the physically interesting region x = O ( t ) , the leading asymptotics of the solution for x < 0 is given by
$$p(x,t) = (p_1(x,t), p_2(x,t), p_3(x,t)) = i\sqrt{\frac{\nu\,e^{\pi\nu}}{6\,t\,k_0\,\sinh(\pi\nu)}}\,\Bigl(|\gamma_1(k_0)|\cos\varphi_1,\ |\gamma_2(k_0)|\cos\varphi_2,\ |\gamma_3(k_0)|\cos\varphi_3\Bigr) + O\Bigl(\frac{\ln t}{t}\Bigr),$$
where
$$k_0 = \sqrt{-\frac{x}{12t}}, \quad \nu = -\frac{1}{2\pi}\ln\bigl(1 - |\gamma(k_0)|^2\bigr), \quad \varphi_j = \varphi_0 + \arg\gamma_j(k_0),\ 1\le j\le 3,$$
$$\varphi_0 = \nu\ln\bigl(192\,t\,k_0^3\bigr) - 2i\chi(k_0) + 16\,t\,k_0^3 + \frac{\pi}{4} + \arg\Gamma(i\nu), \quad \chi(k_0) = \frac{1}{2\pi i}\int_{-k_0}^{k_0}\ln\Bigl(\frac{1-|\gamma(\xi)|^2}{1-|\gamma(k_0)|^2}\Bigr)\frac{d\xi}{\xi - k_0},$$
Γ ( · ) being the Gamma function.
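To make the quantities in Theorem 1 concrete, the following short numerical sketch evaluates $k_0$, $\nu$, $\chi(k_0)$ and the leading-order amplitude factor for a hypothetical reflection coefficient vector (the choice of $\gamma$ and the sample point $(x,t)$ are illustrative assumptions, not data from the paper):

```python
# Evaluate the quantities appearing in Theorem 1 for a hypothetical reflection
# coefficient gamma_j(k) = c_j * exp(-k**2), which satisfies |gamma(k)| < 1.
import numpy as np

c = np.array([0.3, 0.2, 0.1])
gamma_abs2 = lambda k: np.sum(c**2) * np.exp(-2.0 * k**2)      # |gamma(k)|^2

x, t = -50.0, 100.0                      # a sample point with x < 0 and x = O(t)
k0 = np.sqrt(-x / (12.0 * t))
nu = -np.log(1.0 - gamma_abs2(k0)) / (2.0 * np.pi)

# chi(k0): the integrand is bounded, since the logarithm vanishes at xi = k0
xi = np.linspace(-k0, k0, 20001)[:-1]
f = np.log((1.0 - gamma_abs2(xi)) / (1.0 - gamma_abs2(k0))) / (xi - k0)
chi_k0 = np.sum(f) * (xi[1] - xi[0]) / (2.0j * np.pi)

# leading-order amplitude factor multiplying (|gamma_1|cos(phi_1), ...) in Theorem 1
amp = np.sqrt(nu * np.exp(np.pi * nu) / (6.0 * t * k0 * np.sinh(np.pi * nu)))
print(k0, nu, chi_k0, amp * c * np.exp(-k0**2))
```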
The rest of the paper is organized as follows. In Section 2, within the zero-curvature formulation, we derive an integrable coupled hierarchy with three potentials and furnish its bi-Hamiltonian structure, based on the 4 × 4 matrix spectral problem (1) suited for the RH theory. In Section 3, taking the three-component coupled mKdV system (2) as an example, we present an associated oscillatory RH problem. In Section 4, we explore long-time asymptotics for the three-component coupled mKdV system through manipulating the oscillatory RH problem by the nonlinear steepest descent method. In the last section, we give a summary of our conclusions, together with some discussions.

2. An Integrable Three-Component Coupled mKdV Hierarchy

2.1. Zero Curvature Formulation

Let us first recall the zero curvature formulation to construct integrable hierarchies [31]. Let u be a vector potential, k be a spectral parameter, and I n stand for the n-th order identity matrix. Choose a square spectral matrix U = U ( u , k ) from a given matrix loop algebra. Suppose that
$$W = W(u,k) = \sum_{m=0}^{\infty}W_mk^{-m} = \sum_{m=0}^{\infty}W_m(u)k^{-m}$$
presents a solution to the corresponding stationary zero curvature equation
W x = i [ U , W ] .
By using the solution W, we define an infinite sequence of Lax matrices
$$V^{[r]} = V^{[r]}(u,k) = (k^rW)_+ + \Delta_r, \quad r \ge 0,$$
where the subscript + denotes the operation of taking a polynomial part in k, and Δ r , r 0 , are appropriate modification terms, and then construct an integrable hierarchy
$$u_t = K_r(u) = K_r(x,t,u,u_x,\ldots), \quad r \ge 0,$$
from an infinite sequence of zero curvature equations
$$U_t - V^{[r]}_x + i\bigl[U, V^{[r]}\bigr] = 0, \quad r \ge 0.$$
The two matrices U and V [ r ] here are called a Lax pair [32] of the r-th integrable system in the soliton hierarchy (9). The zero curvature equations in (10) present the compatibility requirements of the spatial and temporal matrix spectral problems
$$i\phi_x = U\phi = U(u,k)\phi, \qquad i\phi_t = V^{[r]}\phi = V^{[r]}(u,k)\phi, \quad r \ge 0,$$
with ϕ being the matrix eigenfunction.
To show the Liouville integrability of the soliton hierarchy (9), we usually furnish a bi-Hamiltonian structure [33]:
$$u_t = K_r = J_1\frac{\delta\tilde{H}_{r+1}}{\delta u} = J_2\frac{\delta\tilde{H}_r}{\delta u}, \quad r \ge 1,$$
where J 1 and J 2 constitute a Hamiltonian pair and δ δ u is the variational derivative. The Hamiltonian structures are normally achieved through the trace identity [31]:
$$\frac{\delta}{\delta u}\int \mathrm{tr}\Bigl(W\frac{\partial U}{\partial k}\Bigr)dx = k^{-\gamma}\frac{\partial}{\partial k}\,k^{\gamma}\,\mathrm{tr}\Bigl(W\frac{\partial U}{\partial u}\Bigr), \qquad \gamma = -\frac{k}{2}\frac{d}{dk}\ln\bigl|\mathrm{tr}(W^2)\bigr|,$$
or more generally, the variational identity [34]:
$$\frac{\delta}{\delta u}\int \Bigl\langle W, \frac{\partial U}{\partial k}\Bigr\rangle dx = k^{-\gamma}\frac{\partial}{\partial k}\,k^{\gamma}\Bigl\langle W, \frac{\partial U}{\partial u}\Bigr\rangle, \qquad \gamma = -\frac{k}{2}\frac{d}{dk}\ln\bigl|\langle W, W\rangle\bigr|,$$
with $\langle\cdot,\cdot\rangle$ being a symmetric and ad-invariant non-degenerate bilinear form on the underlying matrix loop algebra [35]. The bi-Hamiltonian structure often guarantees the existence of infinitely many commuting Lie symmetries $\{K_n\}_{n=0}^{\infty}$ and conserved quantities $\{\tilde{H}_n\}_{n=0}^{\infty}$:
$$[K_{n_1}, K_{n_2}] = K_{n_1}'[K_{n_2}] - K_{n_2}'[K_{n_1}] = 0,$$
and
$$\{\tilde{H}_{n_1}, \tilde{H}_{n_2}\}_N = \int\Bigl(\frac{\delta\tilde{H}_{n_1}}{\delta u}\Bigr)^{T}N\,\frac{\delta\tilde{H}_{n_2}}{\delta u}\,dx = 0,$$
where $n_1, n_2 \ge 0$, $N = J_1$ or $J_2$, and $K'$ denotes the Gateaux derivative of K with respect to the potential u:
$$K'(u)[S] = \frac{\partial}{\partial\varepsilon}\Bigl|_{\varepsilon=0}K(u + \varepsilon S, u_x + \varepsilon S_x, \ldots).$$
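As an elementary illustration of this definition (an added sketch with an example flow chosen only for simplicity; it is not one of the flows constructed in this paper), the Gateaux derivative of $K(u) = u_{xxx} + 6u^2u_x$ can be computed symbolically:

```python
# Gateaux derivative K'(u)[S] = d/d(eps) K(u + eps*S) |_{eps=0} for the sample
# flow K(u) = u_xxx + 6 u^2 u_x; the expected result is S_xxx + 12 u u_x S + 6 u^2 S_x.
import sympy as sp

x, eps = sp.symbols('x epsilon')
u = sp.Function('u')(x)
S = sp.Function('S')(x)

K = lambda f: sp.diff(f, x, 3) + 6*f**2*sp.diff(f, x)
gateaux = sp.diff(K(u + eps*S), eps).subs(eps, 0)
print(sp.expand(gateaux))
```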
Such Abelian algebras of symmetries and conserved quantities can be generated directly from Lax pairs (see, e.g., [36,37]). We also know that for a system of evolution equations,
$$\tilde{H} = \int H\,dx$$
is a conserved functional if and only if δ H ˜ δ u is an adjoint symmetry [38], and thus, Hamiltonian structures connect conserved functionals with adjoint symmetries and further symmetries. Pairs of adjoint symmetries and symmetries actually correspond to conservation laws [39].
When the underlying matrix loop algebra in the zero curvature formulation is simple, semisimple and non-semisimple, the associated zero curvature equations generate classical integrable hierarchies [40,41], a collection of different integrable hierarchies, and hierarchies of integrable couplings [42], respectively. Integrable couplings require extra care in presenting lumps and solitons.

2.2. Three-Component mKdV Hierarchy

Let us consider a 4 × 4 matrix spectral problem
i ϕ x = U ϕ = U ( u , k ) ϕ , U = ( U j l ) 4 × 4 = k p 1 p 2 p 3 p 1 * k 0 0 p 2 * 0 k 0 p 3 * 0 0 k ,
where k is a spectral parameter and $u = p^T = (p_1, p_2, p_3)^T$ is a three-component potential. A special case of $p_3 = 0$ transforms (17) into the Manakov type spectral problem [43]. Since the constant leading coefficient matrix has a multiple eigenvalue (its last three diagonal entries coincide), the spectral problem (17) is degenerate, which is different from the case of the multiple wave interaction equations [30].
To derive an associated integrable coupled mKdV hierarchy, we first solve the stationary zero curvature Equation (7) corresponding to (17). We assume that a solution W is determined by
W = a b b d ,
where a is a real scalar, b is a three-dimensional row, and d is a 3 × 3 Hermitian matrix. It is direct to observe that the stationary zero curvature Equation (7) becomes
a x = i ( b p p b ) , b x = i ( 2 k b + p d a p ) , d x = i ( b p p b ) .
We search for a formal series solution as follows:
W = a b b d = m = 0 W m k m , W m = W m ( u ) = a [ m ] b [ m ] b [ m ] d [ m ] , m 0 ,
with b [ m ] and d [ m ] being assumed to be
b [ m ] = ( b 1 [ m ] , b 2 [ m ] , b 3 [ m ] ) , d [ m ] = ( d j l [ m ] ) 3 × 3 , m 0 .
Obviously, the system (19) is equivalent to the following recursion relations:
b [ 0 ] = 0 , a x [ 0 ] = 0 , d x [ 0 ] = 0 ,
b [ m + 1 ] = 1 2 ( i b x [ m ] + p d [ m ] a [ m ] p ) , m 0 ,
a x [ m ] = i ( b [ m ] p p b [ m ] ) , d x [ m ] = i ( b [ m ] p p b [ m ] ) , m 1 .
The three-component mKdV system (2) can correspond to the specific initial values:
a [ 0 ] = 6 , d [ 0 ] = 2 I 3 .
Besides those initial values, we set all constants of integration in (22c) to be zero, that is, demand
W m | u = 0 = 0 , m 1 .
This way, with a [ 0 ] and d [ 0 ] given by (23), all matrices W m , m 1 , will be uniquely determined. For instance, by a direct computation, (22) generates
b j [ 1 ] = 4 p j , a [ 1 ] = 0 , d j l [ 1 ] = 0 ;
b j [ 2 ] = 2 i p j , x , a [ 2 ] = 2 l = 1 3 | p l | 2 , d j l [ 2 ] = 2 p l p j * ;
b j [ 3 ] = p j , x x + 2 ( l = 1 3 | p l | 2 ) p j ,
a [ 3 ] = i l = 1 3 ( p l p l , x * p l , x p l * ) , d j l [ 3 ] = i ( p l , x p j * p l p j , x * ) ;
b j [ 4 ] = 1 2 i [ p j , x x x 3 ( l = 1 3 | p l | 2 ) p j , x 3 ( l = 1 3 p l , x p l * ) p j ] ,
a [ 4 ] = 1 2 [ 3 ( l = 1 3 | p l | 2 ) 2 l = 1 3 ( p l p l , x x * p l , x p l , x * + p l , x x p l * ) ] ,
d j l [ 4 ] = 1 2 [ 3 p l ( l = 1 3 | p l | 2 ) p j * p l , x x p j * + p l , x p j , x * p l p j , x x * ] ;
where 1 j , l 3 . Based on (22b) and (22c), we can obtain a recursion relation for b [ m ] and c [ m ] = b [ m ] :
c [ m + 1 ] b [ m + 1 ] T = Ψ c [ m ] b [ m ] T , m 1 ,
where Ψ is a 6 × 6 matrix operator
Ψ = i 2 ( + l = 1 3 q l 1 p l ) I 3 + q 1 p q 1 q T ( q 1 q T ) T p T 1 p + ( p T 1 p ) T ( + l = 1 3 p l 1 q l ) I 3 p T 1 q T ,
with q = p . Obviously, this tells
b [ m + 1 ] T = i 2 [ p T 1 p + ( p T 1 p ) T ] b [ m ] + [ ( l = 1 3 p l 1 p l * ) I 3 p T 1 p * ] b [ m ] T , m 1 .
To generate an integrable coupled mKdV hierarchy with three components, we introduce, for all integers r 0 , the following Lax matrices
$$V^{[r]} = V^{[r]}(u,k) = \bigl(V^{[r]}_{jl}\bigr)_{4\times4} = (k^rW)_+ = \sum_{s=0}^{r}W_sk^{r-s}, \quad r \ge 0,$$
where the modification terms are chosen as zero. The compatibility requirements of (11), i.e., the zero curvature equations (10), lead to the integrable coupled mKdV hierarchy with three components:
u t = p t T = K r = 2 i b [ r + 1 ] T , r 0 .
The first two nonlinear systems in the above integrable coupled mKdV hierarchy (30) are the three-component coupled nonlinear Schrödinger system
p j , t = 2 i p j , x x 4 i ( | p 1 | 2 + | p 2 | 2 + | p 3 | 2 ) p j , 1 j 3 ,
and the three-component coupled mKdV system (2). Under a reduction $p_3 = 0$, the three-component coupled system (31) is reduced to the Manakov system [43], for which a decomposition into finite-dimensional integrable Hamiltonian systems was made in [44], whereas the three-component coupled system (2) contains various examples of mKdV equations, for which there are diverse kinds of integrable decompositions through symmetry constraints (see, e.g., [45,46]).
The three-component coupled mKdV hierarchy (30) with an extended six-component potential u = ( p , q T ) T = ( p , p * ) T possesses a Hamiltonian structure [29,38], which can be computed through applying the trace identity [31], or more generally, the variational identity [34]. A direct computation tells
i tr ( W U k ) = a + tr ( d ) = m = 0 ( a [ m ] + d 11 [ m ] + tr d 22 [ m ] ) k m ,
and
i tr ( W U u ) = c b T = m 0 G m 1 k m ,
where W = a b c d and c = b . Plugging these into the corresponding trace identity and considering the case of m = 2 tell γ = 0 , and thus
δ H ˜ m δ u = i G m 1 , H ˜ m = i m ( a [ m + 1 ] + d 11 [ m + 1 ] + tr d 22 [ m + 1 ] ) d x , G m 1 = c [ m ] b [ m ] T , m 1 ,
where c [ m ] = b [ m ] . A bi-Hamiltonian structure for the extended six-component coupled mKdV systems, consisting of (30) and its conjugate compartment, then follows:
u t = K r = J 1 G r = J 1 δ H ˜ r + 1 δ u = J 2 δ H ˜ r δ u , r 1 ,
where the Hamiltonian pair ( J 1 , J 2 = J 1 Ψ ) is defined by
J 1 = 0 2 I 3 2 I 3 0 ,
J 2 = i p T 1 p + ( p T 1 p ) T ( + k = 1 3 p k 1 q k ) I 3 p T 1 q T ( + k = 1 3 p k 1 q k ) I 3 q 1 p q 1 q T + ( q 1 q T ) T ,
where q = p again. Adjoint symmetry constraints (or equivalently symmetry constraints) decompose a four-component nonlinear Schrödinger system with p 3 = q 3 = 0 into two commuting finite-dimensional Liouville integrable Hamiltonian systems in [38]. In the next section, we will concentrate on the three-component coupled mKdV system (2).

3. An Associated Oscillatory Riemann-Hilbert Problem

For a matrix M = M ( k ; x , t ) , we define
$$\|M\|_p = \bigl\||M|\bigr\|_p,$$
where $|M|$ is the matrix norm of M given by (3) and $\|f\|_p$ is the $L^p$-norm of a function f of k. We assume that, as $t \to \infty$,
$$A(t) \lesssim B(t) \ \text{ means that there exist } C, T > 0 \text{ such that } |A(t)| \le CB(t), \text{ when } t \ge T.$$
If the constant C depends on a few parameters $\alpha_1, \alpha_2, \ldots, \alpha_n$, then we write $A(t) \lesssim_{\alpha_1,\alpha_2,\ldots,\alpha_n} B(t)$, but in some situations where the dependence of C on some parameters is not important, we will often suppress those parameters for brevity.
For an oriented contour, we denote the left-hand side by + and the right-hand side by −, as one travels on the contour in the direction of the arrow. A Riemann-Hilbert (RH) problem ( Γ , J ) on an oriented contour Γ C (open or closed) with a jump matrix J defined on Γ reads
$$M_+(k) = M_-(k)J(k), \quad k \in \Gamma, \qquad M(k) \to J_0, \ \text{as } k \to \infty,$$
where J 0 is a given matrix determining a boundary condition at infinity, and M ± are analytic in the ± side regions and continuous to Γ from the ± sides, and M = M ± in the ± side regions, respectively.
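For orientation, it is worth recalling the scalar prototype of such problems (a standard fact, recorded here under the simplest assumptions of a continuous, nonvanishing jump of index zero with suitable decay): if $X_+(k) = X_-(k)g(k)$ on an oriented contour $\Gamma \subset \mathbb{R}$ and $X(k) \to 1$ as $k \to \infty$, then taking logarithms turns the multiplicative jump into an additive one, and the Plemelj formula yields
$$X(k) = \exp\Bigl\{\frac{1}{2\pi i}\int_{\Gamma}\frac{\ln g(\xi)}{\xi - k}\,d\xi\Bigr\}, \quad k \notin \Gamma.$$
This is exactly the mechanism used for $\det\delta(k)$ in Section 4.1 below.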

3.1. An Equivalent Matrix Spectral Problem

In the previous section, we have seen that the spectral problems of the three-component coupled mKdV system (2) read
$$i\phi_x = U\phi = U(u,k)\phi, \qquad i\phi_t = V^{[3]}\phi = V^{[3]}(u,k)\phi,$$
with the Lax pair being of the form
$$U(u,k) = k\Lambda + P, \qquad V^{[3]}(u,k) = k^3\Omega + Q,$$
where u = p T = ( p 1 , p 2 , p 3 ) T and
$$\Lambda = \mathrm{diag}(1,-1,-1,-1), \qquad \Omega = \mathrm{diag}(6,-2,-2,-2).$$
The other two matrices P and Q are given by
P = 0 p p 0 , Q = a [ 1 ] k 2 + a [ 2 ] k + a [ 3 ] b [ 1 ] k 2 + b [ 2 ] k + b [ 3 ] b [ 1 ] k 2 b [ 2 ] k b [ 3 ] d [ 1 ] k 2 + d [ 2 ] k + d [ 3 ] = Q 11 Q 12 Q 21 Q 22 ,
where a [ m ] , b [ m ] , d [ m ] , 1 m 3 , are defined in (25), and thus
Q 11 = 2 p p k + i ( p p x p x p ) , Q 12 = 4 p k 2 + 2 i p x k p x x + 2 p p p , Q 21 = 4 p k 2 + 2 i p x k + p x x 2 p p p , Q 22 = 2 p p k + i ( p p x p x p ) .
As normal, for the spectral problems in (38), upon making the variable transformation
$$\phi = \psi E_g, \qquad E_g = e^{ik\Lambda x + ik^3\Omega t},$$
we can impose the canonical normalization condition:
$$\psi_{\pm} \to I_4, \quad \text{when } x, t \to \pm\infty.$$
The equivalent pair of matrix spectral problems reads
ψ x = i k [ Λ , ψ ] + P ˇ ψ ,
ψ t = i k 3 [ Ω , ψ ] + Q ˇ ψ ,
where P ˇ = i P and Q ˇ = i Q . Noting tr ( P ˇ ) = tr ( Q ˇ ) = 0 , we have
det ψ = 1 ,
by a generalized Liouville’s formula [47].
Applying the method of variation in parameters, and using the canonical normalization condition (44), we can transform the x-part of (38) into the following Volterra integral equations for ψ ± [48]:
ψ ( k , x ) = I 4 + x e i k Λ ( x y ) P ˇ ( y ) ψ ( k , y ) e i k Λ ( y x ) d y ,
ψ + ( k , x ) = I 4 x e i k Λ ( x y ) P ˇ ( y ) ψ + ( λ , y ) e i k Λ ( y x ) d y .
Similarly, we can turn the t-part of (38) into the following Volterra integral equations:
ψ ( k , t ) = I 4 + t e i k 3 Ω ( t s ) Q ˇ ( s ) ψ ( k , s ) e i k 3 Ω ( s t ) d s ,
ψ + ( k , t ) = I 4 t e i k 3 Ω ( t s ) Q ˇ ( s ) ψ + ( k , s ) e i k 3 Ω ( s t ) d s .
Observing the structures of $\Lambda$ and $\Omega$, we see that the first column of $\psi_-$ and the last three columns of $\psi_+$ consist of functions analytic in the upper half-plane $\mathbb{C}_+$, and the first column of $\psi_+$ and the last three columns of $\psi_-$ consist of functions analytic in the lower half-plane $\mathbb{C}_-$ (see also [29,49]). All this will help us formulate an associated RH problem for the three-component coupled mKdV system (2).
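Although the analyticity argument is carried out with the Volterra equations, the x-part itself is easy to integrate numerically, and doing so gives a quick check of the unimodularity $\det\psi = 1$ established above. A minimal sketch (assuming, for concreteness, the x-part in the form $\psi_x = ik[\Lambda,\psi] + \check{P}\psi$ with $\check{P} = -iP$ and sample Schwartz-class potentials; these concrete choices are assumptions of this sketch only):

```python
# Integrate the x-part of the spectral problem with a classical RK4 scheme and
# confirm that det(psi) stays equal to 1, since tr(P_check) = 0.
import numpy as np

Lam = np.diag([1.0, -1.0, -1.0, -1.0])
k = 0.7                                             # a sample real spectral parameter

def P_check(xv):
    p = np.array([0.5, 0.3, 0.2]) * np.exp(-xv**2)  # sample Schwartz-class potentials
    P = np.zeros((4, 4), dtype=complex)
    P[0, 1:] = p
    P[1:, 0] = np.conj(p)
    return -1j * P

def rhs(xv, psi):
    return 1j * k * (Lam @ psi - psi @ Lam) + P_check(xv) @ psi

L, n = 8.0, 4000
h = 2 * L / n
psi = np.eye(4, dtype=complex)
xv = -L
for _ in range(n):
    k1 = rhs(xv, psi)
    k2 = rhs(xv + h/2, psi + h/2 * k1)
    k3 = rhs(xv + h/2, psi + h/2 * k2)
    k4 = rhs(xv + h, psi + h * k3)
    psi = psi + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    xv += h

print(abs(np.linalg.det(psi) - 1.0))                # ~ 0, up to discretization error
```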

3.2. An Oscillatory Riemann-Hilbert Problem

The scattering matrix S is determined by
$$\psi_- = \psi_+\,e^{ikx\hat{\Lambda} + ik^3t\hat{\Omega}}S(k),$$
where $e^{\alpha\hat{M}}X = e^{\alpha M}Xe^{-\alpha M}$ for a scalar $\alpha$ and two square matrices M and X of the same order. We also adopt the simple expressions
M ( k * ) = ( M ( k ) ) , M 1 ( k * ) = ( M ( k * ) ) 1 ,
for a matrix M depending on k C . Because
U ( k * ) = T U ( k * ) T 1 , V [ 3 ] ( k * ) = T V [ 3 ] ( k * ) T 1 , T = diag ( 1 , 1 , 1 , 1 ) ,
we have
ψ ( k * ) = T ψ 1 ( k * ) T 1 ,
and
S ( k * ) = T S 1 ( k * ) T 1 .
Notice that det S ( λ ) = 1 due to det ψ ± = 1 . It then follows from det S = 1 and (55) that
S 11 * ( k * ) = det [ S 22 ( k ) ] , S 21 ( k * ) = S 12 ( k ) adj [ S 22 ( k ) ] ,
where adj ( M ) is the adjoint matrix of M, and S is the block matrix
S = ( S j l ) 2 × 2 ,
with S 22 being a 3 × 3 matrix. Thus, the scattering matrix S can be written as
S ( k ) = det [ a ̲ ( k * ) ] b ̲ ( k ) adj [ a ̲ ( k * ) ] b ̲ ( k * ) a ̲ ( k ) ,
where a ̲ is a 3 × 3 matrix.
Introduce
M ( k ; x , t ) = ( ψ , L ( k ) det [ a ̲ ( k * ) ] , ψ + , R ( k ) ) , k C + , ( ψ + , L ( k ) , ψ , R ( k ) a ̲ 1 ( k ) ) , k C ,
where ψ ± , L and ψ ± , R denote the first column and the rest columns of ψ ± . This matrix M solves an associated oscillatory RH problem
$$M_+(k;x,t) = M_-(k;x,t)J(k;x,t), \quad k \in \mathbb{R}, \qquad M(k;x,t) \to I_4, \ \text{as } k \to \infty,$$
where M ± ( k ; x , t ) = lim ε 0 + M ( k ± i ε , x , t ) and
J ( k ; x , t ) = 1 γ ( k ) γ ( k * ) e 2 i t θ ( k ) γ ( k ) e 2 i t θ ( k ) γ ( k * ) I 3 ,
with
$$\theta(k) = \theta(k;x,t) = \frac{kx}{t} + 4k^3, \qquad \gamma(k) = \underline{b}(k)\,\underline{a}^{-1}(k).$$
In the above RH problem, we assume that the matrix a ( k ) is invertible, and so we will only analyze the solitonless case. The behavior of the oscillatory factor
e 2 i t θ ( k )
depends on the increasing and decreasing of θ ( k ) and the signature of Re i θ ( k ) (see Figure 1 and Figure 2).
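The decay of this factor off the real axis can be made completely explicit on the deformed contours used below. The following short numerical sketch (with sample values of x and t, chosen only for illustration) confirms the identity $\mathrm{Re}\,i\theta(k) = 4k_0^3w^2\bigl(3 - w/\sqrt{2}\bigr)$ on the segment $k = k_0 + k_0we^{3\pi i/4}$, which is the computation used later in the proof of Proposition 1:

```python
# Verify Re(i*theta(k)) = 4*k0**3*w**2*(3 - w/sqrt(2)) on k = k0 + k0*w*exp(3*pi*i/4),
# so that exp(2*i*t*theta(k)) decays along that segment.
import numpy as np

x, t = -50.0, 100.0                      # sample values with x < 0
k0 = np.sqrt(-x / (12 * t))
theta = lambda k: k * x / t + 4 * k**3

w = np.linspace(0.0, np.sqrt(2.0), 9)
k = k0 + k0 * w * np.exp(3j * np.pi / 4)
lhs = np.real(1j * theta(k))
rhs = 4 * k0**3 * w**2 * (3 - w / np.sqrt(2))
print(np.max(np.abs(lhs - rhs)))         # ~ 0 (machine precision)
```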
In what follows, we assume that γ lies in S 3 and satisfies
$$\gamma^*(k^*) = \gamma(k), \qquad \sup_{k\in\mathbb{R}}|\gamma(k)| < 1.$$
The analysis of RH problems in [50] shows that the above RH problem (58) is equivalent to a Fredholm integral equation of the second kind. For such Fredholm equations, the existence and uniqueness of solutions can be guaranteed by the vanishing lemma [51]. A direct computation also shows [29] that one can recover the potential p(x,t) from the solution $M(k;x,t)$ of the RH problem (58) as follows.
Theorem 2.
Assume that γ lies in S 3 and satisfies the conditions in (61). Then there exists a unique solution M ( k ; x , t ) to the Riemann-Hilbert problem (58), and the solution of the three-component coupled mKdV system (2) is recovered via
$$p(x,t) = (p_1(x,t), p_2(x,t), p_3(x,t)) = 2\lim_{k\to\infty}\bigl(kM(k;x,t)\bigr)_{12},$$
where we partition M into a block matrix M = ( M j l ) 2 × 2 with M 22 being a 3 × 3 matrix.

4. Long-Time Asymptotic Behavior

We will first deal with the Riemann-Hilbert (RH) problem (58) and then compute the leading term of the long-time asymptotics of the three-component coupled mKdV system (2), by using the nonlinear steepest descent method [7]. We will concentrate on the physically interesting region $|\frac{x}{t}| \le C$, where C is a positive constant.

4.1. Transformation of the RH Problem

Obviously, the jump matrix J has the following upper-lower and lower-upper factorizations:
J = 1 e 2 i t θ ( k ) γ ( k ) 0 I 3 1 0 e 2 i t θ ( k ) γ ( k * ) I 3 ,
and
J = 1 0 e 2 i t θ ( k ) γ ( k * ) 1 γ ( k ) γ ( k * ) I 3 1 γ ( k ) γ ( k * ) 0 0 ( I 3 γ ( k * ) γ ( k ) ) 1 1 e 2 i t θ ( k ) γ ( k ) 1 γ ( k ) γ ( k * ) 0 I 3 .
We would like to introduce another RH problem to replace the upper-lower factorization with the lower-upper factorization to make the analytic and decay properties of the two factorizations consistent.
We easily see that the stationary points of θ are $\pm k_0$, where $k_0 = \sqrt{-\frac{x}{12t}}$, by computing $\frac{d\theta}{dk}\big|_{k=\pm k_0} = 0$. Let $\delta(k)$ solve the RH problem
$$\delta_+(k) = \bigl(I_3 - \gamma^\dagger(k^*)\gamma(k)\bigr)\delta_-(k), \quad |k| < k_0; \qquad \delta_+(k) = \delta_-(k), \quad |k| > k_0; \qquad \delta(k) \to I_3, \ \text{as } k \to \infty,$$
which implies a scalar RH problem
$$\det\delta_+(k) = \bigl(1 - |\gamma(k)|^2\bigr)\det\delta_-(k), \quad |k| < k_0; \qquad \det\delta_+(k) = \det\delta_-(k), \quad |k| > k_0; \qquad \det\delta(k) \to 1, \ \text{as } k \to \infty,$$
noting that
$$\det\bigl(I_3 - \gamma^\dagger(k^*)\gamma(k)\bigr) = 1 - \gamma(k)\gamma^\dagger(k^*) = 1 - |\gamma(k)|^2.$$
Due to (61), the jump matrix $I_3 - \gamma^\dagger(k^*)\gamma(k)$ is positive definite. Thus, it follows from the vanishing lemma (see, e.g., [51]) that the RH problem (65) has a unique solution $\delta(k)$. In addition, the Plemelj formula [51] gives the unique solution of the above scalar RH problem (66):
$$\det\delta(k) = \Bigl(\frac{k-k_0}{k+k_0}\Bigr)^{i\nu}e^{\chi(k)},$$
where
$$\nu = -\frac{1}{2\pi}\ln\bigl(1 - |\gamma(k_0)|^2\bigr), \qquad \chi(k) = \frac{1}{2\pi i}\int_{-k_0}^{k_0}\ln\Bigl(\frac{1-|\gamma(\xi)|^2}{1-|\gamma(k_0)|^2}\Bigr)\frac{d\xi}{\xi - k}.$$
By the uniqueness, we obtain
$$\delta(k) = \bigl(\delta^\dagger(k^*)\bigr)^{-1}.$$
It then follows that
| δ + ( k ) | 2 = 3 | γ ( k ) | 2 , | k | < k 0 , 3 , | k | > k 0 ,
| δ ( k ) | 2 = 3 + | γ ( k ) | 2 1 | γ ( k ) | 2 , | k | < k 0 , 3 , | k | > k 0 ,
and
$$|\det\delta_+(k)|^2 \le 1, \qquad |\det\delta_-(k)|^2 \le \Bigl(1 - \sup_{\zeta\in\mathbb{R}}|\gamma(\zeta)|^2\Bigr)^{-1} < \infty.$$
Therefore, by the maximum principle for analytic functions, we can have
$$|\delta(k)| < \infty, \quad |\det\delta(k)| < \infty, \quad k \in \mathbb{C},$$
from the canonical normalization condition of the above two RH problems.
Let us now introduce
$$\Delta(k) = \begin{pmatrix}\det\delta(k) & 0\\ 0 & \delta^{-1}(k)\end{pmatrix}, \quad k \in \mathbb{C},$$
and a vector-valued spectral induced function
$$\rho(k) = \begin{cases}\dfrac{\gamma(k)}{1 - \gamma(k)\gamma^\dagger(k^*)}, & |k| < k_0,\\[2mm] \gamma(k), & |k| \ge k_0.\end{cases}$$
Denote M Δ by
M Δ ( k ; x , t ) = M ( k ; x , t ) Δ 1 ,
and reverse the orientation for | k | > k 0 as shown in Figure 3.
Then, we see that M Δ presents the solution of the RH problem on R oriented as in Figure 3:
M + Δ ( k ; x , t ) = M Δ ( k ; x , t ) J Δ ( k ; x , t ) , k R , M Δ ( k ; x , t ) I 4 , as k ,
where the jump matrix is defined by
J Δ ( k ; x , t ) = Δ ( k ) J ( k ; x , t ) Δ + 1 ( k ) = 1 0 e 2 i t θ ( k ) δ 1 ( k ) ρ ( k * ) det δ ( k ) I 3 1 e 2 i t θ ( k ) [ det δ + ( k ) ] ρ ( k ) δ + ( k ) 0 I 3 .
We will deform this RH problem to evaluate the asymptotics of the three-component coupled mKdV system (2).

4.2. Decomposition of the Spectral Induced Function

To determine the required deformation, we first make a decomposition of the spectral induced function ρ ( k ) . By L, we denote the contour
$$L: \Bigl\{k = k_0 + k_0\alpha e^{\frac{3\pi i}{4}}: -\infty < \alpha \le \sqrt{2}\Bigr\} \cup \Bigl\{k = -k_0 + k_0\alpha e^{\frac{\pi i}{4}}: -\infty < \alpha \le \sqrt{2}\Bigr\}$$
and by Σ , denote the contour
$$\Sigma = L \cup L^* \cup \mathbb{R}$$
with the orientation in Figure 4.
Further let L ε denote the following part of the contour L:
$$L_\varepsilon: \Bigl\{k = k_0 + k_0\alpha e^{\frac{3\pi i}{4}}: \varepsilon < \alpha \le \sqrt{2}\Bigr\} \cup \Bigl\{k = -k_0 + k_0\alpha e^{\frac{\pi i}{4}}: \varepsilon < \alpha \le \sqrt{2}\Bigr\},$$
where $0 < \varepsilon < \sqrt{2}$. We will focus on the contour Σ, though any other contour of the same shape as Σ, which lies in the region where $\mathrm{Re}\,i\theta(k)$ is positive, will work.
Proposition 1.
The vector-valued spectral induced function ρ ( k ) has the following decomposition on the real axis:
ρ ( k ) = R ( k ) + h 1 ( k ) + h 2 ( k ) , k R ,
where R ( k ) is a polynomial on | k | < k 0 and a rational function on | k | k 0 , h 2 ( k ) has an analytic continuation from R to L in the region where Re i θ ( k ) > 0 , and h 1 , h 2 and R have the estimates as t :
$$|e^{2it\theta(k)}h_1(k)| \lesssim \frac{1}{(1+|k|^2)\,t^l}, \quad k \in \mathbb{R},$$
$$|e^{2it\theta(k)}h_2(k)| \lesssim \frac{1}{(1+|k|^2)\,t^l}, \quad k \in L,$$
$$|e^{2it\theta(k)}R(k)| \lesssim e^{-16\varepsilon^2k_0^3t}, \quad k \in L_\varepsilon,$$
where l is an arbitrary positive integer. Taking the Hermitian conjugate
ρ ( k * ) = R ( k * ) + h 1 ( k * ) + h 2 ( k * )
yields the same estimates for e 2 i t θ ( k ) h 1 ( k * ) , e 2 i t θ ( k ) h 2 ( k * ) and e 2 i t θ ( k ) R ( k * ) on R , L * and L ε * , respectively.
Proof. 
Throughout the proof, we use the differential notation $\bar{d}s = \frac{1}{2\pi}ds$ for brevity while dealing with the Fourier transform. We only consider the physically interesting region $x = O(t)$, and so $k_0$ is bounded. Let r be a fixed positive integer.
(a) First, we consider the case of | k | < k 0 . In this case, we have ρ ( k ) = γ ( k ) ( 1 γ ( k ) γ ( k * ) ) 1 . Splitting it into even and odd parts leads to
ρ ( k ) = H e ( k 2 ) + k H o ( k 2 ) ,
where H e and H o are two vector functions in S 3 . By Taylor’s theorem with the integral form of the remainder, we have
H e ( k 2 ) = j = 0 r μ j e ( k 2 k 0 2 ) j + 1 r ! k 0 2 k 2 H e ( r + 1 ) ( ξ ) ( k 2 ξ ) r d ξ ,
and
H o ( k 2 ) = j = 0 r μ j o ( k 2 k 0 2 ) j + 1 r ! k 0 2 k 2 H o ( r + 1 ) ( ξ ) ( k 2 ξ ) r d ξ .
Set
R ( k ) = R r ( k ) = j = 0 r μ j e ( k 2 k 0 2 ) j + k j = 0 r μ j o ( k 2 k 0 2 ) j .
Then, we have
d j ρ ( k ) d k j | k = ± k 0 = d j R ( k ) d k j | k = ± k 0 , 0 j r ,
and the coefficients
μ j e ( k 0 2 ) = 1 j ! d j H e ( w ) d w j | w = k 0 2 , μ j o ( k 0 2 ) = 1 j ! d j H o ( w ) d w j | w = k 0 2 , 0 j r ,
decay rapidly as k 0 .
We assume that r is of form
r = 4 q + 1 , q Z + ,
with an even number q. Express
ρ ( k ) = h ( k ) + R ( k ) , | k | < k 0 .
By the characteristic of R in (89), we have
d j h ( k ) d k j | k = ± k 0 = 0 , 0 j r .
Based on this property, we will try to split h into two parts
h ( k ) = h 1 ( k ) + h 2 ( k ) ,
where h 1 is small and h 2 has an analytic continuation from R to L in the region where Re i θ ( k ) > 0 . This way, we obtain the required splitting of ρ .
Let us introduce
α ( k ) = ( k 2 k 0 2 ) q .
We consider the Fourier transform with respect to θ . Because k θ ( k ) is one-to-one in | k | < k 0 (see Figure 1), we define
( h / α ) ( k ) = h ( k ( θ ) ) / α ( k ( θ ) ) , 8 k 0 3 = θ ( k 0 ) < θ < θ ( k 0 ) = 8 k 0 3 , = 0 , | θ | 8 k 0 3 .
Based on (94), we have
( h / α ) ( θ ) = O ( ( k 2 ( θ ) k 0 2 ) r + 1 q ) , | θ | 8 k 0 3 , | θ | < 8 k 0 3 ;
and as d k / d θ = [ 12 ( k 2 ( θ ) k 0 2 ) ] 1 , we see that
h / α H j ( < θ < ) , 0 j 3 q + 2 2 ,
where the H j s are Hilbert spaces. It follows from the Fourier inversion theorem that
( h / α ) ( k ) = e i s θ ( k ) ( h / α ^ ) ( s ) d ¯ s , | k | < k 0 ,
where h / α ^ is the Fourier transform
( h / α ^ ) ( s ) = k 0 k 0 e i s θ ( k ) ( h / α ) ( k ) d ¯ θ ( k ) , s R .
By the formulae (86), (87), (88) and (93) we have
( h / α ) ( k ) = ( k 2 k 0 2 ) 3 q + 2 r ! 0 1 H e ( r + 1 ) ( k 0 2 + w ( k 2 k 0 2 ) ) ( 1 w ) r d w + k 0 1 H o ( r + 1 ) ( k 0 2 + w ( k 2 k 0 2 ) ) ( 1 w ) r d w .
Then, it follows that
k 0 k 0 d d θ j h α ( k ) 2 | d ¯ θ ( k ) | = k 0 k 0 1 12 ( k 2 k 0 2 ) d d k j h α ( k ) 2 | 12 ( k 2 k 0 2 ) | d ¯ k 1 , 0 j 3 q + 2 2 ,
from which, by the Plancherel theorem, we know
( 1 + s 2 ) j | ( h / α ^ ) ( s ) | 2 d s 1 , 0 j 3 q + 2 2 .
Now make a splitting for h as follows:
h ( k ) = α ( k ) t e i s θ ( k ) ( h / α ^ ) ( s ) d ¯ s + α ( k ) t e i s θ ( k ) ( h / α ^ ) ( s ) d ¯ s = h 1 ( k ) + h 2 ( k ) .
On one hand, based on (104) and noting | k | < k 0 , we have
| e 2 i t θ ( k ) h 1 ( k ) | | α ( k ) | t | ( h / α ^ ) ( s ) | d ¯ s | α ( k ) | t ( 1 + s 2 ) p d ¯ s 1 2 t ( 1 + s 2 ) p | ( h / α ^ ) ( s ) | d ¯ s 1 2 | α ( k ) | t s 2 p d ¯ s 1 2 ( 1 + s 2 ) p | ( h / α ^ ) ( s ) | d ¯ s 1 2 t s 2 p d ¯ s 1 2 = 1 t p 1 2 , 1 p 3 q + 2 2 .
On the other hand, we know that Re i θ ( k ) is positive in the hatched region in Figure 5, and thus that h 2 ( k ) has an analytic continuation from R to the line segments in the upper half k-plane:
k ( w ) = k 0 + k 0 w e 3 π i 4 , 0 w 2 ,
k ( w ) = k 0 + k 0 w e π i 4 , 0 w 2 .
Let k be on the line segment (107). Then, we can have that
| e 2 i t θ ( k ) h 2 ( k ) | | k + k 0 | q ( k 0 w ) q e t Re i θ ( k ) t e ( s t ) Re i θ ( k ) | ( h / α ^ ) ( s ) | d ¯ s k 0 2 q w q e t Re i θ ( k ) t d ¯ s 1 + s 2 1 2 t ( 1 + s 2 ) | ( h / α ^ ) ( s ) | 2 d ¯ s 1 2 k 0 2 q w q e t Re i θ ( k ) ,
again based on (104). But
$$\mathrm{Re}\,i\theta(k) = 4k_0^3w^2\Bigl(3 - \frac{w}{\sqrt{2}}\Bigr) \ge 8k_0^3w^2.$$
Therefore, we have
| e 2 i t θ ( k ) h 2 ( k ) | k 0 2 q w q e 8 t k 0 3 w 2 = k 0 3 q 2 t q 2 w q e 8 t k 0 3 w 2 ( k 0 t ) q 2 t q 2 .
Similarly, we can show that the same estimate holds on the line segment (108).
Finally fix 0 < ε < 2 . Then on the parts of the line segments (107) and (108) with ε < w 2 away from ± k 0 , we have
| e 2 i t θ ( k ) R ( k ) | e 16 t k 0 3 w 2 e 16 ε 2 τ ,
(by, e.g., (110) on the line segment (107) and R ( k ) is a polynomial), where τ = t k 0 3 .
(b) Second, we consider the case of | k | k 0 . We only consider the sub-case k k 0 , since the other sub-case k k 0 is completely similar. In this case, we have
ρ ( k ) = γ ( k ) , | k | k 0 .
Once more, we use Taylor’s theorem to obtain
( k i ) r + 6 ρ ( k ) = j = 0 r μ j ( k k 0 ) j + 1 r ! k 0 k ( ( · i ) r + 6 ρ ( · ) ) ( r + 1 ) ( ξ ) ( k ξ ) r d ξ .
Define
R ( k ) = 1 ( k i ) r + 6 j = 0 r μ j ( k k 0 ) j ,
and set
h ( k ) = ρ ( k ) R ( k ) .
As before, we know that
d j h ( k ) d k j | k = k 0 = 0 , 0 j r ,
and
μ j = μ j ( k 0 ) = 1 j ! d j d k j ( k i ) r + 6 ρ ( k ) | k = k 0 , 0 j r ,
decay rapidly as k 0 .
Similarly, introduce
β ( k ) = ( k k 0 ) q ( k i ) q + 2 .
Following the Fourier inversion theorem, we have
( h / β ) ( k ) = e i s θ ( k ) ( h / β ^ ) ( s ) d ¯ s , k k 0 ,
where h / β ^ is the Fourier transform
( h / β ^ ) ( s ) = k 0 e i s θ ( k ) ( h / β ) ( k ) d ¯ θ ( k ) , s R .
Based on the formulae (114), (116) and (119), we see that
( h / β ) ( k ) = ( k k 0 ) 3 q + 2 ( k i ) 3 q + 5 g ( k , k 0 ) ,
where
g ( k , k 0 ) = 1 r ! 0 1 ( · i ) r + 6 ρ ( · ) ( r + 1 ) ( k 0 + w ( k k 0 ) ) ( 1 w ) r d w ,
from which it follows that
d j g ( k , k 0 ) d k j 1 , k k 0 , j 0 .
Noting that
k k 0 k + k 0 1 , k k 0 ,
we have
k 0 d d θ j h β ( k ) 2 d ¯ θ ( k ) = k 0 1 12 ( k 2 k 0 2 ) d d k j h β ( k ) 2 [ 12 ( k 2 k 0 2 ) ] d ¯ k k 0 ( k k 0 ) 3 q + 2 3 j ( k i ) 3 q + 5 2 ( k 2 k 0 2 ) d ¯ k 1 , 0 j < 3 q + 2 3 .
By the Plancherel theorem,
( 1 + s 2 ) j | ( h / β ^ ( s ) | 2 d s < , 0 j < 3 q + 2 3 .
Again, we split
h ( k ) = β ( k ) t e i s θ ( k ) ( h / β ^ ) ( s ) d ¯ s + β ( k ) t e i s θ ( k ) ( h / β ^ ) ( s ) d ¯ s = h 1 ( k ) + h 2 ( k ) .
On one hand, for k k 0 , similarly as in the previous example (106), we see that
| e 2 i t θ ( k ) h 1 ( k ) | 1 | k i | 2 t p 1 2 , 0 p < 3 q + 2 3 ,
upon noting
k k 0 k i 1 , k k 0 .
On the other hand, h 2 ( k ) has an analytic continuation from R to the ray
k ( w ) = k 0 + k 0 w e π i 4 , w 0 ,
in two parts of the lower half k-plane, where Re i θ ( k ) > 0 , as shown in Figure 6.
Let k be on the ray (130). Similarly as in the previous case | k | < k 0 , we can show that
| e 2 i t θ ( k ) h 2 ( k ) | k 0 q w q e t Re i θ ( k ) | k i | q + 2 .
However, we have
Re i θ ( k ) 4 k 0 3 w 2 ( w 2 + 3 ) 2 2 k 0 3 w 3 ,
and thus, we have
| e 2 i t θ ( k ) h 2 ( k ) | k 0 q [ ( ( t k 0 3 ) 1 3 w ) q e 2 2 t k 0 3 w 3 ] | k i | q + 2 ( t k 0 3 ) q 3 1 | k i | q + 2 t q 3 1 | k i | 2 t q 3 ,
since k 0 is bounded. This completes the proof of Proposition 1.  □
We are now ready to make a deformation of the RH problem (76).

4.3. Deformation of the RH Problem

Let us first explain what a deformation of a RH problem is. Suppose that we have a RH problem ( Γ , J ) on an oriented contour Γ (see Figure 7):
M + ( k ) = M ( k ) J ( k ) , k Γ , M ( k ) J 0 , as k ,
and that on a part (which could be the whole contour Γ ) of Γ from k 1 to k 2 in the direction of Γ , denoted by Γ k 1 k 2 , we have
J ( k ) = b 1 ( k ) J 1 ( k ) b + ( k ) , k Γ k 1 k 2 ,
where b ± have invertible and analytic continuations to the ± sides of a region D (see Figure 7) supported by k 1 and k 2 , respectively. Define an extended oriented contour Γ (see Figure 7):
Γ = Γ D , Γ = Γ 1 Γ 2 Γ k 1 k 2 , D = B + B ,
and an extended jump matrix J :
J ( k ) = J ( k ) , k Γ Γ k 1 k 2 , J ( k ) = J 1 ( k ) , k Γ k 1 k 2 , J ( k ) = b + ( k ) , k B + , J ( k ) = b 1 ( k ) , k B .
Then the RH problem (133) on Γ is equivalent to the following RH problem on Γ :
M + ( k ) = M ( k ) J ( k ) , k Γ , M ( k ) J 0 , as k ,
and the relation between the two solutions reads
M ( k ) = M ( k ) , k C D , M ( k ) = M ( k ) b + 1 ( k ) , k D + , M ( k ) = M ( k ) b 1 ( k ) , k D .
It is clear that one RH problem is solvable if and only if the other RH problem is solvable, and the solution to the one problem gives the solution to the other problem automatically by (138). We call $(\Gamma', J')$ a deformation of the original RH problem $(\Gamma, J)$. If D is not bounded, then we need to require that $b_+(k)$ and $b_-(k)$ tend to the identity matrix as $k \to \infty$, in order to keep the same normalization condition.
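As a quick consistency check of this construction (a standard verification, spelled out here with $B_+$ oriented so that $D_+$ lies on its minus side): on $\Gamma_{k_1k_2}$ one has $M'_+ = M_+b_+^{-1} = M_-Jb_+^{-1} = M_-b_-^{-1}J_1 = M'_-J_1$, while on $B_+$ one has $M'_+ = M$ and $M'_- = Mb_+^{-1}$, so that $M'_+ = M'_-b_+$; the jumps of the deformed problem are therefore exactly those prescribed above.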
We now deform the RH problem (76) from R to the augmented contour Σ . The first important step is to observe that the jump matrix J Δ ( k ; , x , t ) can be rewritten as
J Δ ( k ; x , t ) = ( b ) 1 b + , b ± = I 4 ± ω ± , k R ,
where
ω + = 0 e 2 i t θ ( k ) [ det δ + ( k ) ] ρ ( k ) δ + ( k ) 0 0 , ω = 0 0 e 2 i t θ ( k ) δ 1 ( k ) ρ ( k * ) det δ ( k ) 0 .
Moreover, noting the decomposition (81), we have
ω + = ω + o + ω + a = 0 e 2 i t θ ( k ) [ det δ + ( k ) ] h 1 ( k ) δ + ( k ) 0 0 + 0 e 2 i t θ ( k ) [ det δ + ( k ) ] [ h 2 ( k ) + R ( k ) ] δ + ( k ) 0 0 ,
ω = ω o + ω a = 0 0 e 2 i t θ ( k ) δ 1 ( k ) h 1 ( k * ) det δ ( k ) 0 + 0 0 e 2 i t θ ( k ) δ 1 ( k ) [ h 2 ( k * ) + R ( k * ) ] det δ ( k ) 0 ,
and thus, the decompositions for b ± :
b + = b + o b + a = ( I 4 + ω + o ) ( I 4 + ω + a ) = 1 e 2 i t θ ( k ) [ det δ + ( k ) ] h 1 ( k ) δ + ( k ) 0 I 3 1 e 2 i t θ ( k ) [ det δ + ( k ) ] [ h 2 ( k ) + R ( k ) ] δ + ( k ) 0 I 3 , b = b o b a = ( I 4 ω o ) ( I 4 ω a )
= 1 0 e 2 i t θ ( k ) δ 1 ( k ) h 1 ( k * ) det δ ( k ) I 3 1 0 e 2 i t θ ( k ) δ 1 ( k ) [ h 2 ( k * ) + R ( k * ) ] det δ ( k ) I 3 .
We can also state that
b ± o = ω ± | ρ = h 1 , b ± a = ω ± | ρ = h 2 + R .
It finally follows that the jump matrix J Δ has the following factorization:
J Δ ( k ; x , t ) = ( b a ) 1 [ ( b o ) 1 b + o ] b + a , k R .
It is now standard to introduce
M ( k ; x , t ) = M Δ ( k ; x , t ) , k Ω 1 Ω 2 , M Δ ( k ; x , t ) ( b a ) 1 , k Ω 3 Ω 4 Ω 5 , M Δ ( k ; x , t ) ( b + a ) 1 , k Ω 6 Ω 7 Ω 8 ,
and then the RH problem (76) on R becomes the following new RH problem on Σ :
M + ( k ; x , t ) = M ( k ; x , t ) J ( k ; x , t ) , k Σ , M ( k ; x , t ) I 4 , k ,
where
J = ( b ) 1 b + , b + = b + o , k R , b + a , k L , I 4 , k L * , b = b o , k R , I 4 , k L , b a , k L * .
The canonical normalization condition in (148) can be verified, indeed. For example, ( b + a ) 1 converges to I 4 as k in Ω 6 Ω 8 , because we observe that for fixed x , t , by the definition of h 2 in (128) and the boundedness of det δ and δ in (72), we have
| e 2 i t θ ( k ) [ det δ ( k ) ] h 2 ( k ) δ ( k ) | | β ( k ) | e t Re i θ ( k ) t e ( s t ) Re i θ ( k ) | ( h / β ^ ) ( s ) | d ¯ s | k k 0 | q | k i | q + 2 t | ( h / β ^ ) ( s ) | d ¯ s 1 | k i | 2 ;
and by the definition of R in (115) and the boundedness of det δ and δ in (72), we have
| e 2 i t θ ( k ) [ det δ ( k ) ] R ( k ) δ ( k ) | | j = 0 r μ j ( k k 0 ) j | | k i | r + 6 1 | k i | 6 ;
both of which converge to 0 as k in Ω 6 Ω 8 .
The above RH problem (148) can be solved by using the Cauchy operators as follows (see [52,53]). Let us denote by C ± the two Cauchy operators
( C ± f ) ( k ) = Σ f ( ξ ) ξ k ± d ξ 2 π i , k Σ , f L 2 ( Σ )
on Σ. As is well known, these two operators $C_\pm$ are bounded from $L^2(\Sigma)$ to $L^2(\Sigma)$, and satisfy $C_+ - C_- = 1$. Define
C ω f = C + ( f ω ) + C ( f ω + )
for a 4 × 4 matrix-valued function f, where
ω ± = ± ( b ± I 4 ) , ω = ω + + ω .
Assume that μ = μ ( k ; x , t ) L 2 ( Σ ) + L ( Σ ) solves the fundamental inverse equation
μ = I 4 + C ω μ .
Then, we can see that
M ( k ; x , t ) = I 4 + Σ μ ( k ; x , t ) ω ( k ; x , t ) ξ k d ξ 2 π i , k C Σ ,
presents the unique solution of the RH problem (148).
Theorem 3.
The solution p ( x , t ) of the three-component coupled mKdV system (2) is expressed by
p ( x , t ) = ( p 1 ( x , t ) , p 2 ( x , t ) , p 3 ( x , t ) ) = 2 lim k ( k M ( k ; , x , t ) ) 12 = i ( Σ μ ( ξ ; x , t ) ω ( ξ ) d ξ π ) 12 = i ( Σ ( ( 1 C ω ) 1 I 4 ) ( ξ ) ω ( ξ ) d ξ π ) 12 .
Proof. 
This statement can be shown by Theorem 2, (75) and (147).  □

4.4. Contour Truncation

Let $\Sigma'$ be the truncated contour $\Sigma' = \Sigma\setminus(\mathbb{R} \cup L_\varepsilon \cup L_\varepsilon^*)$, with the orientation as in Figure 8. We will reduce the RH problem (148) from Σ to $\Sigma'$, and estimate the difference between the two RH problems, one on Σ and the other on $\Sigma'$.
Let ω e : Σ M ( 4 , C ) be a sum of three terms
ω e = ω a + ω b + ω c ,
where ω a = ω R is supported on R and is composed of the contributions to ω from terms of type h 1 ( k ) and h 1 ( k * ) , ω b = ω L L * is supported on L L * and is composed of terms of type h 2 ( k ) and h 2 ( k * ) , and ω c = ω L ε L ε * is supported on L ε L ε * and is composed of terms of type R ( k ) and R ( k * ) .
Define
ω = ω ω e .
Obviously ω = 0 on Σ Σ , and hence ω is supported on Σ from terms of type R ( k ) and R ( k * ) .
Lemma 1.
For sufficiently small ε, we have
ω a L 1 ( R ) L 2 ( R ) L ( R ) t l ,
ω b L 1 ( L L * ) L 2 ( L L * ) L ( L L * ) t l ,
ω c L 1 ( L ε L ε * ) L 2 ( L ε L ε * ) L ( L ε L ε * ) e 16 ε 2 τ ,
ω L 2 ( Σ ) ε τ 1 4 , ω L 1 ( Σ ) ε τ 1 2 ,
where τ = t k 0 3 .
Proof. 
The estimates (158), (159) and (160) can be derived, similarly to Proposition 1. Based on the early definition of R ( k ) , we can see that
| R ( k ) | ( 1 + | k | 6 ) 1
on the contour { k = k 0 + k 0 α e 3 π i 4 : < α ε } , where ε < 2 . On this contour, by (110), we have
Re i θ ( k ) 8 k 0 3 α 2 ,
and then using (72), we find that
| e 2 i t θ ( k ) [ det δ ( k ) ] R ( k ) δ ( k ) | e 16 k 0 3 α 2 t ( 1 + | k | 6 ) 1 .
It then follows from a direct computation that the estimate (161) holds.  □
Similarly to Proposition 2.23 in [7], we can show the following estimate.
Proposition 2.
In the physically interesting region x = O ( t ) , as τ , there exists the inverse of the operator 1 C ω : L 2 ( Σ ) L 2 ( Σ ) , which is uniformly bounded:
( 1 C ω ) 1 L 2 ( Σ ) 1 .
Corollary 1.
In the physically interesting region x = O ( t ) , as τ , there exists the inverse of the operator 1 C ω : L 2 ( Σ ) L 2 ( Σ ) , which is uniformly bounded:
( 1 C ω ) 1 L 2 ( Σ ) 1 .
Proof. 
Noting C ω = C ω + C ω e , we have
C ω C ω L 2 ( Σ ) = C ω e L 2 ( Σ ) ω e L ( Σ ) .
In addition, it follows from Lemma 1 that
ω e L ( Σ ) ω a L ( Σ ) + ω b L ( Σ ) + ω c L ( Σ ) τ l ,
in the physically interesting region x = O ( t ) (where k 0 is bounded and thus t = O ( τ ) ). Therefore, we have
C ω C ω L 2 ( Σ ) τ l .
Now first from 1 C ω = ( 1 C ω ) ( C ω C ω ) , we know, based on (164), that ( 1 C ω ) 1 exists.
Second, the second resolvent identity tells
( 1 C ω ) 1 ( C ω C ω ) ( 1 C ω ) 1 = ( 1 C ω ) 1 ( 1 C ω ) 1 .
Again based on (164), we see from this identity that the estimate in the corollary is just a consequence of Proposition 2.  □
Theorem 4.
As τ , we have
Σ ( 1 C ω ) 1 I 4 ( ξ ) ω ( ξ ) d ξ = Σ ( 1 C ω ) 1 I 4 ( ξ ) ω ( ξ ) d ξ + O ( τ l ) .
Proof. 
First, from (165), we can obtain
( 1 C ω ) 1 I 4 ω = ( 1 C ω ) 1 I 4 ω + ω e + ( 1 C ω ) 1 ( C ω e I 4 ω + ( 1 C ω ) 1 ( C ω I 4 ) ω e + ( 1 C ω ) 1 C ω e ( 1 C ω ) 1 ( C ω I 4 ) ω .
Second, it follows directly from Lemma 1 that
ω L 1 ( Σ ) ω a L 1 ( Σ ) + ω b L 1 ( L L * ) + ω c L 1 ( L ε L ε * ) τ l , ( 1 C ω ) 1 ( C ω e I 4 ) ω L 1 ( Σ ) ( 1 C ω ) 1 L 2 ( Σ ) C ω e I 4 L 2 ( Σ ) ω L 2 ( Σ ) ω e L 2 ( Σ ) ω L 2 ( Σ ) ω e L 2 ( Σ ) ( ω e L 2 ( Σ ) + ω L 2 ( Σ ) ) τ l 1 4 , ( 1 C ω ) 1 ( C ω e I 4 ) ω e L 1 ( Σ ) ( 1 C ω ) 1 L 2 ( Σ ) C ω I 4 L 2 ( Σ ) ω e L 2 ( Σ ) ω L 2 ( Σ ) ω e L 2 ( Σ ) τ l 1 4 , ( 1 C ω ) 1 C ω e ( 1 C ω ) 1 ( C ω I 4 ) ω L 1 ( Σ ) ( 1 C ω ) 1 L 2 ( Σ ) C ω e L 2 ( Σ ) ( 1 C ω ) 1 L 2 ( Σ ) C ω I 4 L 2 ( Σ ) ω L 2 ( Σ ) ω e L ( Σ ) ω L 2 ( Σ ) 2 τ l 1 2 .
This proves the theorem, together with (167).  □
Note that as k Σ Σ , ω ( k ) = 0 , we can reduce C ω from L 2 ( Σ ) to L 2 ( Σ ) , and for simplicity’s sake, we still denote this reduced operator by C ω . Then
Σ ( 1 C ω ) 1 I 4 ( ξ ) ω ( ξ ) d ξ = Σ ( 1 C ω ) 1 I 4 ( ξ ) ω ( ξ ) d ξ ,
and thus it follows from Theorems 3 and 4 that the following statement holds.
Theorem 5.
As τ , we have
p ( x , t ) = i Σ ( 1 C ω ) 1 I 4 ( ξ ) ω ( ξ ) d ξ π 12 + O ( τ l ) .
Let L = L L ε , which yields Σ = L L * , and denote μ = ( 1 C ω ) 1 I 4 . Then, we see that
M ( k ; x , t ) = I 4 + Σ μ ( k ; x , t ) ω ( k ; x , t ) ξ k d ξ 2 π i
solves the following RH problem
M + ( k ; x , t ) = M ( k ; x , t ) J ( k ; x , t ) , k Σ , M ( k ; x , t ) I 4 , as k ,
where the jump matrix is defined by
J = ( b ) 1 b + , b ± = I 4 ± ω ± , ω = ω + + ω ,
ω + ( k ) = 0 e 2 i θ ( k ) [ det δ ( k ) ] R ( k ) δ ( k ) 0 0 , ω ( k ) = 0 , k L ,
ω + ( k ) = 0 , ω ( k ) = 0 0 e 2 i θ ( k ) δ 1 ( k ) R ( k * ) det δ ( k ) 0 , k L * .

4.5. Disconnecting Contour Components

Denote the contour Σ = Σ A Σ B (see Figure 8), and set
ω ± = ω A ± + ω B ± ,
where
ω A ± ( k ) = ω ± , k Σ A , 0 , k Σ B ,
ω B ± ( k ) = ω ± , k Σ B , 0 , k Σ A .
Further define
ω A = ω A + + ω A , ω B = ω B + + ω B ,
and thus
ω = ω + + ω = ω A + ω B .
The two Cauchy operators C ω A and C ω B (see (150) for definition) are bounded operators from L 2 ( Σ ) L 2 ( Σ ) .
One can prove a similar lemma to Lemma 3.5 in [7].
Lemma 2.
We have the estimates
C ω A C ω B L 2 ( Σ ) = C ω B C ω A L 2 ( Σ ) k 0 τ 1 2 ,
and
C ω A C ω B L ( Σ ) L 2 ( Σ ) , C ω B C ω A L ( Σ ) L 2 ( Σ ) k 0 τ 3 4 .
Proposition 3.
As τ , we have
Σ ( 1 C ω ) 1 I 4 ( ξ ) ω ( ξ ) d ξ = Σ A ( 1 C ω A ) 1 I 4 ( ξ ) ω A ( ξ ) d ξ + Σ B ( 1 C ω B ) 1 I 4 ( ξ ) ω B ( ξ ) d ξ + O ( 1 τ ) .
Proof. 
From the identity
( 1 C ω A C ω B ) [ 1 + C ω A ( 1 C ω A ) 1 + C ω B ( 1 C ω B ) 1 ] = 1 C ω B C ω A ( 1 C ω A ) 1 C ω A C ω B ( 1 C ω B ) 1 ,
it follows that
( 1 C ω ) { [ 1 + C ω A ( 1 C ω A ) 1 + C ω B ( 1 C ω B ) 1 ] [ 1 + ( 1 E ) 1 ] E } = ( 1 C ω ) [ 1 + C ω A ( 1 C ω A ) 1 + C ω B ( 1 C ω B ) 1 ] + E = 1 ,
which implies that
( 1 C ω ) 1 = [ 1 + C ω A ( 1 C ω A ) 1 + C ω B ( 1 C ω B ) 1 ] + E .
In the above computation, we denote
E = C ω B C ω A ( 1 C ω A ) 1 + C ω A C ω B ( 1 C ω B ) 1
for simplicity's sake. Note that if an operator T is bounded and the inverse of $1 - T$ exists, then $(1-T)^{-1}$ is also bounded. Further, based on Lemmas 1 and 2 and Proposition 2, we can derive the estimate (179).  □
Now, from Theorem 5 and Proposition 3, we see that the following result holds.
Theorem 6.
As τ , we have
p ( x , t ) = i Σ A ( 1 C ω A ) 1 I 4 ( ξ ) ω A ( ξ ) d ξ π 12 + i Σ B ( 1 C ω B ) 1 I 4 ( ξ ) ω B ( ξ ) d ξ π 12 + O ( 1 τ ) .

4.6. Rescaling and Reduction of the Disconnected RH Problem

Extend the two contours Σ A and Σ B to the following two contours
Σ ^ A = { k = k 0 + k 0 α e ± π i 4 : α R } , Σ ^ B = { k = k 0 + k 0 α e ± π i 4 : α R } ,
and define ω ^ A , ω ^ B on Σ ^ A , Σ ^ B as
ω ^ A = ω ^ A + + ω ^ A , ω ^ B = ω ^ B + + ω ^ B ,
respectively, where
ω ^ A ± = ω A ± ( k ) , k Σ A , 0 , k Σ ^ A Σ A ,
ω ^ B ± = ω B ± ( k ) , k Σ B , 0 , k Σ ^ B Σ B .
Let Σ A and Σ B denote the contours
{ k = k 0 α e ± π i 4 : α R }
oriented outward as in Σ A , Σ ^ A and inward as in Σ B , Σ ^ B , respectively (see Figure 9).
Motivated by the method of stationary phase, define the scaling transformations
N A : L 2 ( Σ ^ A ) L 2 ( Σ A ) , f N A f , ( N A f ) ( k ) = f ( k 48 t k 0 k 0 ) ,
N B : L 2 ( Σ ^ B ) L 2 ( Σ B ) , f N B f , ( N B f ) ( k ) = f ( k 48 t k 0 + k 0 ) ,
and denote
ω A = N A ω ^ A , ω B = N B ω ^ B .
A direct change-of-variable argument tells that
C ω ^ A = N A 1 C ω A N A , C ω ^ B = N B 1 C ω B N B ,
where C ω A (or C ω B ) is a bounded operator from L 2 ( Σ A ) (or L 2 ( Σ B ) into L 2 ( Σ A ) (or L 2 ( Σ B ) ). On the part
L A = { k = α k 0 48 t k 0 e 3 π i 4 : ε < α < }
of the contour Σ A , we have
ω A = ω A + = 0 N A s 1 0 0 ,
and on the conjugate part L A * , we have
ω A = ω A = 0 0 N A s 2 0 ,
where
s 1 ( k ) = e 2 i t θ ( k ) [ det δ ( k ) ] R ( k ) δ ( k ) , s 2 ( k ) = e 2 i t θ ( k ) δ 1 ( k ) R ( k * ) det δ ( k ) .
Lemma 3.
As t , we have the estimates
| ( N A δ ˜ ) ( k ) | t l + 1 , k L A ,
where δ ˜ ( k ) = e 2 i t θ ( k ) [ R ( k ) δ ( k ) ( det δ ( k ) ) R ( k ) ] , and
| ( N A δ ^ ) ( k ) | t l + 1 , k L A * ,
where δ ^ ( k ) = e 2 i t θ ( k ) [ δ 1 ( k ) R ( k * ) ( det δ ) 1 ( k ) R ( k * ) ] .
Proof. 
We only prove the estimate (193) and the proof of the other estimate is completely similar.
From (65) and (66), we see that δ ˜ solves the following RH problem
δ ˜ + ( k ) = δ ˜ ( k ) ( 1 | γ ( k ) | 2 ) + e 2 i t θ ( k ) f ( k ) , | k | < k 0 , δ ˜ + ( k ) = δ ˜ ( k ) , | k | > k 0 , δ ˜ ( k ) 0 , as k ,
where
f ( k ) = R ( k ) ( | γ ( k ) | 2 I 3 γ ( k * ) γ ( k ) ) δ ( k ) .
The solution of this vector RH problem is given by
δ ˜ ( k ) = X ( k ) k 0 k 0 e 2 i t θ ( ξ ) f ( ξ ) X + ( ξ ) ( ξ k ) d ξ 2 π i ,
with
X ( k ) = e 1 2 π i k 0 k 0 ln ( 1 | γ ( ξ ) | 2 ) ξ k d ξ .
Noting that
R | γ | 2 I 3 R γ γ = ( R ρ ) | γ | 2 I 3 ( R ρ ) γ γ = ( h 1 + h 2 ) | γ | 2 I 3 ( h 1 + h 2 ) γ γ ,
we can have f ( k ) = O ( ( k 2 k 0 2 ) l + 1 ) when k ± k 0 , upon noting the definition of h = h 1 + h 2 , and decompose f ( k ) into two parts: f ( k ) = f 1 ( k ) + f 2 ( k ) , where f 1 ( k ) satisfies
| e 2 i t θ ( k ) f 1 ( k ) | 1 ( 1 + | k | 2 ) t l , k R ,
and f 2 ( k ) has an analytical continuation from R to L t (see Figure 10):
L t = { k = k 0 + k 0 α e 3 π i 4 : 0 α 2 ( 1 1 2 t ) } { k = k 0 t k 0 + k 0 α e π i 4 : 0 α 2 ( 1 1 2 t ) }
and satisfies
| e 2 i t θ ( k ) f 2 ( k ) | 1 ( 1 + | k | 2 ) t l , k L t .
Let k L A , and we decompose
( N A δ ˜ ) ( k ) = X ( k 48 t k 0 k 0 ) k 0 k 0 t k 0 e 2 i t θ ( ξ ) f ( ξ ) X + ( ξ ) ( ξ + k 0 k 48 t k 0 ) d ξ 2 π i + X ( k 48 t k 0 k 0 ) k 0 t k 0 k 0 e 2 i t θ ( ξ ) f 1 ( ξ ) X + ( ξ ) ( ξ + k 0 k 48 t k 0 ) d ξ 2 π i + X ( k 48 t k 0 k 0 ) k 0 t k 0 k 0 e 2 i t θ ( ξ ) f 2 ( ξ ) X + ( ξ ) ( ξ + k 0 k 48 t k 0 ) d ξ 2 π i = T 1 + T 2 + T 3 .
As t , for the first and second terms, we can have
| T 1 | k 0 k 0 t k 0 | f ( ξ ) | | ξ + k 0 k 48 t k 0 | d ξ k 0 k 0 t k 0 | ξ + k 0 | l + 1 | ξ + k 0 k 48 t k 0 | d ξ = 0 k 0 t η l + 1 | η k 48 t k 0 | d η t l 0 k 0 t η | η k 48 t k 0 | d η t l t 5 2 t l + 1 , if k 0 , = t l k 0 t t l + 1 , if k = 0 , | T 2 | k 0 t k 0 k 0 | e 2 i t θ ( ξ ) f 1 ( ξ ) | | ξ + k 0 k 48 t k 0 | d ξ t l 2 t k 0 ( 2 k 0 k 0 t ) t l + 1 ,
which are due to f ( k ) = O ( ( k + k 0 ) l + 1 ) and
| ξ + k 0 k 48 t k 0 | k 0 2 t , ξ ( k 0 t k 0 , k 0 ) , k L A ,
respectively. By using Cauchy’s Theorem, we can have a similar estimate | T 3 | t l + 1 through integrating along the contour L t in C + instead of the interval ( k 0 t k 0 , k 0 ) on R . Now, the estimate (193) is a consequence of those three estimates. The proof is finished.  □
Express
J A o = ( I 4 ω A o ) 1 ( I 4 + ω A o + ) ,
where
ω A o = ω A o + = 0 ( δ A ) 2 ( k ) 2 ν i e i k 2 2 γ ( k 0 ) 0 0 , k Σ A 4 , 0 ( δ A ) 2 ( k ) 2 ν i e i k 2 2 γ ( k 0 ) 1 | γ ( k 0 ) | 2 0 0 , k Σ A 2 ,
ω A o = ω A o = 0 0 ( δ A ) 2 ( k ) 2 ν i e i k 2 2 γ ( k 0 ) 0 , k Σ A 3 , 0 0 ( δ A ) 2 ( k ) 2 ν i e i k 2 2 γ ( k 0 ) 1 | γ ( k 0 ) | 2 0 , k Σ A 1 ,
with
δ A = e χ ( k 0 ) 8 τ i ( 192 τ ) ν i 2 .
Based on (193), (194) and Lemma 3.35 in [7], we can obtain
ω A ω A o L ( Σ A ) L 1 ( Σ A ) L 2 ( Σ A ) k 0 ln t t .
Therefore, we have
Σ A ( 1 C ω A ) 1 I 4 ( ξ ) ω A ( ξ ) d ξ = Σ ^ A ( 1 C ω A ) 1 I 4 ( ξ ) ω ^ A ( ξ ) d ξ = Σ ^ A N A 1 ( 1 C ω A ) 1 N A I 4 ( ξ ) ω ^ A ( ξ ) d ξ = Σ ^ A ( 1 C ω A ) 1 I 4 ( ( ξ + k 0 ) 48 t k 0 ) N A ω ^ A ( ( ξ + k 0 ) 48 t k 0 ) d ξ = 1 48 t k 0 Σ A ( 1 C ω A ) 1 I 4 ( ξ ) ω A ( ξ ) d ξ = 1 48 t k 0 Σ A ( 1 C ω A o ) 1 I 4 ( ξ ) ω A o ( ξ ) d ξ + O ( ln t t ) .
Together with a similar argument for B, we can obtain
p ( x , t ) = i 1 48 t k 0 Σ A ( 1 C ω A o ) 1 I 4 ( ξ ) ω A o ( ξ ) d ξ π 12 + i 1 48 t k 0 Σ B ( 1 C ω B o ) 1 I 4 ( ξ ) ω B o ( ξ ) d ξ π 12 + O ( ln t t ) .
For k C Σ A , we set
M A o ( k ) = I 4 + Σ A ( 1 C ω A o ) 1 I 4 ( ξ ) ω A o ( ξ ) ξ k d ξ 2 π i ,
which solves the following RH problem
M + A o ( k ; x , t ) = M A o ( k ; x , t ) J A o ( k ; x , t ) , k Σ A , M A o ( k ; x , t ) I 4 , as k .
Particularly, from an asymptotic expansion
M A o ( k ) = I 4 + M 1 A o k + O ( k 2 ) , k ,
we get
M 1 A o = Σ A ( 1 C ω A o ) 1 I 4 ( ξ ) ω A o ( ξ ) d ξ 2 π i .
An analogous RH problem for B o on Σ B reads
M + B o ( k ; x , t ) = M B o ( k ; x , t ) J B o ( k ; x , t ) , k Σ B , M B o ( k ; x , t ) I 4 , as k ,
and its solution is given by
M B o ( k ) = I 4 + Σ B ( 1 C ω B o ) 1 I 4 ( ξ ) ω B o ( ξ ) ξ k d ξ 2 π i .
The above jump matrix is defined by
J B o = ( I 4 ω B o ) 1 ( I 4 + ω B o + ) ,
where
ω B o = ω B o + = 0 ( δ B ) 2 k 2 ν i e i k 2 2 γ ( k 0 ) 0 0 , k Σ B 1 , 0 ( δ B ) 2 k 2 ν i e i k 2 2 γ ( k 0 ) 1 | γ ( k 0 ) | 2 0 0 , k Σ B 3 ,
ω B o = ω B o = 0 0 ( δ B ) 2 k 2 ν i e i k 2 2 γ ( k 0 ) 0 , k Σ B 2 , 0 0 ( δ B ) 2 k 2 ν i e i k 2 2 γ ( k 0 ) 1 | γ ( k 0 ) | 2 0 , k Σ B 4 ,
with
δ B = e χ ( k 0 ) + 8 τ i ( 192 τ ) ν i 2 .
Based on (201)–(204) and (211)–(214), we can show that
J A o ( k ) = ( J B o ) * ( k * ) .
By uniqueness,
M A o ( k ) = ( M B o ) * ( k * )
and thus, we have
M 1 A o = ( M 1 B o ) * .
Now from (206), we obtain
p ( x , t ) = 1 12 t k 0 ( M 1 A o ( M 1 A o ) * ) 12 + O ( ln t t ) .

4.7. The Model RH Problem and Its Solution

To determine ( M A o ) 12 explicitly, we solve a model RH problem
Φ + ( k ) = Φ ( k ) v ( k 0 ) , v ( k 0 ) = ( k ) i ν Λ ^ e 1 4 i k 2 Λ ^ ( δ A ) Λ ^ J A o .
The solution to this RH problem is given by
Φ ( k ) = G ( k ) ( k ) i ν Λ e 1 4 i k 2 Λ , G ( k ) = ( δ A ) Λ ^ M A o = ( δ A ) Λ M A o ( k ) ( δ A ) Λ ,
where ( δ A ) Λ = diag ( ( δ A ) 1 , δ A I 3 ) and ( δ A ) Λ = ( ( δ A ) Λ ) 1 .
Since the jump matrix v ( k 0 ) does not depend on k along each ray of Σ A , we have
d Φ + ( k ) d k = d Φ ( k ) d k v ( k 0 ) .
Together with (216), this implies that d Φ ( k ) d k Φ 1 ( k ) has no jump discontinuity along any of the four rays. Directly from the solution, we obtain
d Φ ( k ) d k Φ 1 ( k ) = d G ( k ) d k G 1 ( k ) 1 2 i k G ( k ) Λ G 1 ( k ) + i ν k G ( k ) Λ G 1 ( k ) = O ( 1 k ) 1 2 i k Λ + 1 2 i δ A Λ [ Λ , M 1 A o ] δ A Λ .
It then follows from Liouville’s theorem that
d Φ ( k ) d k + 1 2 i k Λ Φ ( k ) = η Φ ( k ) ,
where
η = 1 2 i δ A Λ [ Λ , M 1 A o ] δ A Λ = 0 η 12 η 21 0 .
Particularly, we have
( M 1 A o ) 12 = i δ A 2 η 12 .
We partition Φ ( k ) into the following form
Φ ( k ) = Φ 11 ( k ) Φ 12 ( k ) Φ 21 ( k ) Φ 22 ( k ) ,
where Φ 11 ( k ) is a scalar and Φ 22 ( k ) is a 3 × 3 matrix. From the differential Equation (219) for Φ , we obtain
d 2 Φ 11 ( k ) d k 2 = η 12 η 21 + i 2 k 2 4 Φ 11 ( k ) , η 12 Φ 21 ( k ) = d Φ 11 ( k ) d k i 2 k Φ 11 ( k ) , d 2 η 12 Φ 22 ( k ) d k 2 = η 12 η 21 i 2 k 2 4 η 12 Φ 22 ( k ) , Φ 12 ( k ) = 1 η 12 η 21 d η 12 Φ 22 ( k ) d k + i 2 k η 12 Φ 22 ( k ) ,
where
η 12 η 21 = ν > 0
provided that $\gamma(k_0) \neq 0$ (note that the case of $\gamma(k_0) = 0$ is, of course, trivial).
As is well known, the following Weber’s equation
$$\frac{d^2g(\zeta)}{d\zeta^2} + \Bigl(a + \frac{1}{2} - \frac{\zeta^2}{4}\Bigr)g(\zeta) = 0$$
has a general solution
$$g(\zeta) = c_1D_a(\zeta) + c_2D_a(-\zeta),$$
where c 1 , c 2 are arbitrary constants and D a ( ζ ) denotes the standard (entire) parabolic-cylinder function which satisfies
$$\frac{dD_a(\zeta)}{d\zeta} + \frac{\zeta}{2}D_a(\zeta) - aD_{a-1}(\zeta) = 0,$$
$$D_a(\pm\zeta) = \frac{\Gamma(a+1)\,e^{\frac{i\pi a}{2}}}{\sqrt{2\pi}}D_{-a-1}(\pm i\zeta) + \frac{\Gamma(a+1)\,e^{-\frac{i\pi a}{2}}}{\sqrt{2\pi}}D_{-a-1}(\mp i\zeta),$$
Γ ( · ) being the Gamma function. From the textbook [54] (see pp. 347–349), we know that as ζ ,
$$D_a(\zeta) = \begin{cases}\zeta^ae^{-\frac{\zeta^2}{4}}\bigl(1+O(\zeta^{-2})\bigr), & |\arg\zeta| < \frac{3\pi}{4},\\[1mm] \zeta^ae^{-\frac{\zeta^2}{4}}\bigl(1+O(\zeta^{-2})\bigr) - \dfrac{\sqrt{2\pi}}{\Gamma(-a)}e^{a\pi i}\zeta^{-a-1}e^{\frac{\zeta^2}{4}}\bigl(1+O(\zeta^{-2})\bigr), & \frac{\pi}{4} < \arg\zeta < \frac{5\pi}{4},\\[1mm] \zeta^ae^{-\frac{\zeta^2}{4}}\bigl(1+O(\zeta^{-2})\bigr) - \dfrac{\sqrt{2\pi}}{\Gamma(-a)}e^{-a\pi i}\zeta^{-a-1}e^{\frac{\zeta^2}{4}}\bigl(1+O(\zeta^{-2})\bigr), & -\frac{5\pi}{4} < \arg\zeta < -\frac{\pi}{4}.\end{cases}$$
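The recurrence for $D_a$ stated above can be checked directly with arbitrary-precision numerics; the following sketch (an illustration only, with a sample purely imaginary order as in the present setting) uses mpmath:

```python
# Check the recurrence D_a'(zeta) + (zeta/2) D_a(zeta) - a D_{a-1}(zeta) = 0
# numerically for a purely imaginary order a = i*nu.
import mpmath as mp

nu = 0.35
a = 1j * nu
for z in [mp.mpc(0.4, 0.7), mp.mpc(-1.2, 0.3)]:
    lhs = (mp.diff(lambda s: mp.pcfd(a, s), z)
           + z/2 * mp.pcfd(a, z) - a * mp.pcfd(a - 1, z))
    print(abs(lhs))   # ~ 0
```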
Denote a = i η 12 η 21 and then
η 12 η 21 ± i 2 = ± i ( ± a + 1 2 ) .
Thus we find
Φ 11 ( k ) = d 1 D a ( e π i 4 k ) + d 2 D a ( e 3 π i 4 k ) , η 12 Φ 22 ( k ) = d 3 D a ( e π i 4 k ) + d 4 D a ( e 3 π i 4 k ) ,
where d 1 and d 2 are constants, and d 3 and d 4 are row vectors of constants. Note that as arg k ( 3 π 4 , 7 π 4 ) and k ,
Φ 11 ( k ) ( k ) ν i e i k 2 4 1 , Φ 22 ( k ) ( k ) ν i e i k 2 4 I 3 .
Thus, for arg k ( 3 π 4 , 5 π 4 ) , we have
Φ 11 ( k ) = e π ν 4 D a ( e 3 π i 4 k ) , η 12 Φ 22 ( k ) = η 12 e π ν 4 D a ( e 3 π i 4 k ) ,
and further,
η 12 Φ 21 = a e π ( ν i ) 4 D a 1 ( e 3 π i 4 k ) , Φ 12 = η 12 e π ( ν + 3 i ) 4 D a 1 ( e 3 π i 4 k ) .
For arg k ( 5 π 4 , 7 π 4 ) , we have
Φ 11 ( k ) = e π ν 4 D a ( e 3 π i 4 k ) , η 12 Φ 22 ( k ) = η 12 e 3 π ν 4 D a ( e π i 4 k ) ,
and further,
η 12 Φ 21 = a e π ( ν i ) 4 D a 1 ( e 3 π i 4 k ) , Φ 12 = η 12 e π ( i + 3 ν ) 4 D a 1 ( e π i 4 k ) .
Along the ray arg k = 5 π 4 , we have
Φ + ( k ) = Φ ( k ) 1 γ ( k 0 ) 0 I 3 ,
from which it follows that
η 12 e π ( i + 3 ν ) 4 D a 1 ( e π i 4 k ) = e π ν 4 D a ( e 3 π i 4 k ) γ ( k 0 ) + η 12 e π ( ν + 3 i ) 4 D a 1 ( e 3 π i 4 k ) .
In addition, based on (225), we obtain
$$D_a\big(e^{\frac{3\pi i}{4}}k\big) = \frac{\Gamma(a+1)}{\sqrt{2\pi}}\,e^{-\frac{\pi a i}{2}}D_{-a-1}\big(e^{\frac{\pi i}{4}}k\big) + \frac{\Gamma(a+1)}{\sqrt{2\pi}}\,e^{\frac{\pi a i}{2}}D_{-a-1}\big(e^{-\frac{3\pi i}{4}}k\big).$$
Upon plugging (228) into (227) and separating the coefficients of the two independent special functions, we obtain
$$\eta_{12} = \frac{\Gamma(a+1)}{\sqrt{2\pi}}\,e^{\frac{\pi\nu}{2}+\frac{\pi i}{4}}\,\gamma(k_0) = \frac{\nu\,\Gamma(-i\nu)}{\sqrt{2\pi}}\,e^{\frac{\pi\nu}{2}-\frac{\pi i}{4}}\,\gamma(k_0).$$
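To spell out the intermediate step, substituting (228) into (227) and collecting the coefficient of $D_{-a-1}\big(e^{-\frac{3\pi i}{4}}k\big)$, which does not appear on the left-hand side, gives
$$e^{\frac{\pi\nu}{4}}\,\frac{\Gamma(a+1)}{\sqrt{2\pi}}\,e^{\frac{\pi a i}{2}}\,\gamma(k_0) + \eta_{12}\,e^{\frac{\pi(\nu+3i)}{4}} = 0.$$
Since $e^{\frac{\pi a i}{2}} = e^{\frac{\pi\nu}{2}}$ and $-e^{-\frac{3\pi i}{4}} = e^{\frac{\pi i}{4}}$ for $a = -i\nu$, this is the first equality in (229); the second equality then follows from $\Gamma(1-i\nu) = -i\nu\,\Gamma(-i\nu)$ together with $-i\,e^{\frac{\pi i}{4}} = e^{-\frac{\pi i}{4}}$. The coefficient of $D_{-a-1}\big(e^{\frac{\pi i}{4}}k\big)$ leads to the same expression, which serves as a consistency check.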
Finally, we conclude that (4) is a consequence of (215), (221) and (229).

5. Concluding Remarks

This paper has been devoted to determining the long-time asymptotics of the three-component coupled mKdV system, based on an associated Riemann-Hilbert (RH) problem. The crucial step is to deform the associated oscillatory RH problem into one that is explicitly solvable, while estimating the small errors between solutions of the successive deformed RH problems. This provides an example of applying the nonlinear steepest descent method to asymptotics in the case of 4 × 4 matrix spectral problems.
The nonlinear steepest descent method is a powerful tool for exploring long-time asymptotics of integrable systems, including nonlocal integrable systems (see, e.g., [55]). Moreover, it has been generalized to determine asymptotics of initial-boundary value problems of integrable systems on the half-line (see, e.g., [56,57,58]), as well as asymptotics of integrable systems whose RH problems possess rational phases (see, e.g., [16]) or non-analytic jumps (see, e.g., [19]).
There are various other approaches in the field of integrable systems, including the Hirota direct method [59], the generalized bilinear technique [60], the Wronskian technique [61,62] and the Darboux transformation [63], and exploring connections between these different approaches would be interesting. Concerning coupled mKdV equations, there are many other studies, such as integrable couplings [64], super hierarchies [65] and fractional analogues [66,67]; an important topic for further study is the long-time asymptotics of those generalized integrable counterparts via the nonlinear steepest descent method. It is hoped that our result could be helpful in computing limiting behaviors of solutions that incorporate features of other exact solutions, such as lumps [68,69], from the perspective of steepest descent based on RH problems.

Funding

This research was funded in part by NSFC under the grants 11371326, 11301331 and 11371086, and NSF under the grant DMS-1664561.

Acknowledgments

The author would like to thank Ahmed A., Adjiri A., Gu X., Hao X.Z., McAnally M., Wang F.D., Wang H., and Zhou Y. for their valuable discussions.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Manakov, S.V. Nonlinear Fraunhofer diffraction. Sov. Phys. JETP 1974, 38, 693–696. [Google Scholar]
  2. Ablowitz, M.J.; Newell, A.C. The decay of the continuous spectrum for solutions of the Korteweg-de Vries equation. J. Math. Phys. 1973, 14, 1277–1284. [Google Scholar] [CrossRef]
  3. Zakharov, V.E.; Manakov, S.V. Asymptotic behavior of non-linear wave systems integrated by the inverse scattering method. Sov. Phys. JETP 1976, 44, 106–112. [Google Scholar]
  4. Ablowitz, M.J.; Segur, H. Asymptotic solutions of the Korteweg-de Vries equation. Stud. Appl. Math. 1977, 57, 13–44. [Google Scholar] [CrossRef]
  5. Segur, H.; Ablowitz, M.J. Asymptotic solutions and conservation laws for the nonlinear Schrödinger equation I. J. Math. Phys. 1973, 17, 710–713. [Google Scholar] [CrossRef]
  6. Its, A.R. Asymptotics of solutions of the nonlinear Schrödinger equation and isomonodromic deformations of systems of linear differential equations. Sov. Math. Dokl. 1981, 24, 452–456. [Google Scholar]
  7. Deift, P.; Zhou, X. A steepest descent method for oscillatory Riemann-Hilbert problems: Asymptotics for the MKdV equation. Ann. Math. 1993, 137, 295–368. [Google Scholar] [CrossRef]
  8. Deift, P.; Venakides, S.; Zhou, X. The collisionless shock region for the long-time behavior of solutions of the KdV equation. Commun. Pure Appl. Math. 1994, 47, 199–206. [Google Scholar] [CrossRef]
  9. Deift, P.; Zhou, X. Asymptotics for the Painlevé II equation. Commun. Pure Appl. Math. 1995, 48, 277–337. [Google Scholar] [CrossRef]
  10. Deift, P.; Venakides, S.; Zhou, X. New results in small dispersion KdV by an extension of the steepest descent method for Riemann-Hilbert problems. Int. Math. Res. Not. 1997, 1997, 286–299. [Google Scholar] [CrossRef]
  11. Kamvissis, S. Long time behavior for the focusing nonlinear Schroedinger equation with real spectral singularities. Commun. Math. Phys. 1996, 180, 325–341. [Google Scholar] [CrossRef]
  12. Kitaev, A.V.; Vartanian, A.H. Leading-order temporal asymptotics of the modified nonlinear Schrödinger equation: Solitonless sector. Inverse Probl. 1997, 13, 1311–1339. [Google Scholar] [CrossRef]
  13. Cheng, P.J.; Venakides, S.; Zhou, X. Long-time asymptotics for the pure radiation solution of the sine-Gordon equation. Commun. Part. Differ. Equ. 1999, 24, 1195–1262. [Google Scholar] [CrossRef]
  14. Grunert, K.; Teschl, G. Long-time asymptotics for the Korteweg-de Vries equation via nonlinear steepest descent. Math. Phys. Anal. Geom. 2009, 12, 287–324. [Google Scholar] [CrossRef]
  15. Boutet de Monvel, A.; Kostenko, A.; Shepelsky, D.; Teschl, G. Long-time asymptotics for the Camassa-Holm equation. SIAM J. Math. Anal. 2009, 41, 1559–1588. [Google Scholar] [CrossRef]
  16. Xu, J.; Fan, E.G. Long-time asymptotics for the Fokas-Lenells equation with decaying initial value problem: Without solitons. J. Differ. Equ. 2015, 259, 1098–1148. [Google Scholar] [CrossRef]
  17. Andreiev, K.; Egorova, I.; Lange, T.L.; Teschl, G. Rarefaction waves of the Korteweg-de Vries equation via nonlinear steepest descent. J. Differ. Equ. 2016, 261, 5371–5410. [Google Scholar] [CrossRef]
  18. Wang, D.S.; Wang, X.L. Long-time asymptotics and the bright N-soliton solutions of the Kundu-Eckhaus equation via the Riemann-Hilbert approach. Nonlinear Anal. Real World Appl. 2018, 41, 334–361. [Google Scholar] [CrossRef]
  19. McLaughlin, K.T.-R.; Miller, P.D. The ¯ steepest descent method and the asymptotic behavior of polynomials orthogonal on the unit circle with fixed and exponentially varying nonanalytic weights. Int. Math. Res. Pap. 2006, 2006, 1–77. [Google Scholar] [CrossRef]
  20. Varzugin, G.G. Asymptotics of oscillatory Riemann-Hilbert problems. J. Math. Phys. 1996, 37, 5869–5892. [Google Scholar] [CrossRef]
  21. Geng, X.G.; Xue, B. Quasi-periodic solutions of mixed AKNS equations. Nonlinear Anal. Theory Meth. Appl. 2010, 73, 3662–3674. [Google Scholar] [CrossRef]
  22. Boutet de Monvel, A.; Shepelsky, D. A Riemann-Hilbert approach for the Degasperis-Procesi equation. Nonlinearity 2013, 26, 2081–2107. [Google Scholar] [CrossRef]
  23. Geng, X.G.; Liu, H. The nonlinear steepest descent method to long-time asymptotics of the coupled nonlinear Schrödinger equation. J. Nonlinear Sci. 2018, 28, 739–763. [Google Scholar] [CrossRef]
  24. Wu, J.P.; Geng, X.G. Inverse scattering transform and soliton classification of the coupled modified Korteweg-de Vries equation. Commun. Nonlinear Sci. Numer. Simul. 2017, 53, 83–93. [Google Scholar] [CrossRef]
  25. Ma, W.X. The inverse scattering transform and soliton solutions of a combined modified Korteweg-de Vries equation. J. Math. Anal. Appl. 2019, 471, 796–811. [Google Scholar] [CrossRef]
  26. Ma, W.X. Trigonal curves and algebro-geometric solutions to soliton hierarchies I. Proc. R. Soc. A 2017, 473, 20170232. [Google Scholar] [CrossRef] [PubMed]
  27. Ma, W.X. Trigonal curves and algebro-geometric solutions to soliton hierarchies II. Proc. R. Soc. A 2017, 473, 20170233. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Ma, W.X. Binary Bargmann symmetry constraints of soliton equations. Nonlinear Anal. Theory Meth. Appl. 2001, 47, 5199–5211. [Google Scholar] [CrossRef] [Green Version]
  29. Ma, W.X. Riemann-Hilbert problems of a six-component mKdV system and its soliton solutions. Acta Math. Sci. 2019, 39, 509–523. [Google Scholar]
  30. Ma, W.X.; Zhou, Z.X. Binary symmetry constraints of N-wave interaction equations in 1+1 and 2+1 dimensions. J. Math. Phys. 2001, 42, 4345–4382. [Google Scholar] [CrossRef] [Green Version]
  31. Tu, G.Z. On Liouville integrability of zero-curvature equations and the Yang hierarchy. J. Phys. A Math. Gen. 1989, 22, 2375–2392. [Google Scholar]
  32. Lax, P.D. Integrals of nonlinear equations of evolution and solitary waves. Commun. Pure Appl. Math. 1968, 21, 467–490. [Google Scholar] [CrossRef]
  33. Magri, F. A simple model of the integrable Hamiltonian equation. J. Math. Phys. 1978, 19, 1156–1162. [Google Scholar] [CrossRef]
  34. Ma, W.X.; Chen, M. Hamiltonian and quasi-Hamiltonian structures associated with semi-direct sums of Lie algebras. J. Phys. A Math. Gen. 2006, 39, 10787–10801. [Google Scholar] [CrossRef] [Green Version]
  35. Ma, W.X. Variational identities and applications to Hamiltonian structures of soliton equations. Nonlinear Anal. Theor. Meth. Appl. 2009, 71, e1716–e1726. [Google Scholar] [CrossRef]
  36. Ma, W.X. Generators of vector fields and time dependent symmetries of evolution equations. Sci. China A 1991, 34, 769–782. [Google Scholar]
  37. Ma, W.X. The algebraic structure of zero curvature representations and application to coupled KdV systems. J. Phys. A Math. Gen. 1993, 26, 2573–2582. [Google Scholar] [CrossRef]
  38. Ma, W.X.; Zhou, R.G. Adjoint symmetry constraints leading to binary nonlinearization. J. Nonlinear Math. Phys. 2002, 9 (Suppl. 1), 106–126. [Google Scholar] [CrossRef]
  39. Ma, W.X. Conservation laws by symmetries and adjoint symmetries. Discrete Contin. Dyn. Syst. S 2018, 11, 707–721. [Google Scholar] [CrossRef]
  40. Drinfel’d, V.G.; Sokolov, V.V. Equations of Korteweg-de Vries type, and simple Lie algebras. Sov. Math. Dokl. 1982, 23, 457–462. [Google Scholar]
  41. Terng, C.L.; Uhlenbeck, K. The n × n KdV hierarchy. J. Fixed Point Theory Appl. 2011, 10, 37–61. [Google Scholar] [CrossRef]
  42. Ma, W.X.; Xu, X.X.; Zhang, Y.F. Semi-direct sums of Lie algebras and continuous integrable couplings. Phys. Lett. A 2006, 351, 125–130. [Google Scholar] [CrossRef] [Green Version]
  43. Manakov, S.V. On the theory of two-dimensional stationary self-focusing of electromagnetic waves. Sov. Phys. JETP 1974, 38, 248–253. [Google Scholar]
  44. Chen, S.T.; Zhou, R.G. An integrable decomposition of the Manakov equation. Comput. Appl. Math. 2012, 31, 1–18. [Google Scholar]
  45. Ma, W.X. Symmetry constraint of MKdV equations by binary nonlinearization. Physica A 1995, 219, 467–481. [Google Scholar] [CrossRef] [Green Version]
  46. Yu, J.; Zhou, R.G. Two kinds of new integrable decompositions of the mKdV equation. Phys. Lett. A 2006, 349, 452–461. [Google Scholar] [CrossRef]
  47. Ma, W.X.; Yong, X.L.; Qin, Z.Y.; Gu, X.; Zhou, Y. A generalized Liouville’s formula. Appl. Math. B-A J. Chin. Univ. 2016. submitted. [Google Scholar]
  48. Novikov, S.P.; Manakov, S.V.; Pitaevskii, L.P.; Zakharov, V.E. Theory of Solitons: The Inverse Scattering Method; Consultants Bureau: New York, NY, USA, 1984. [Google Scholar]
  49. Ma, W.X. Application of the Riemann-Hilbert approach to the multicomponent AKNS integrable hierarchies. Nonlinear Anal. Real World Appl. 2019, 47, 1–17. [Google Scholar] [CrossRef]
  50. Zhou, X. The Riemann-Hilbert problem and inverse scattering. SIAM J. Math. Anal. 1989, 20, 966–986. [Google Scholar] [CrossRef]
  51. Ablowitz, M.J.; Fokas, A.S. Complex Variables: Introduction and Applications; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  52. Beals, R.; Coifman, R.R. Scattering and inverse scattering for first order systems. Commun. Pure Appl. Math. 1984, 37, 39–90. [Google Scholar] [CrossRef]
  53. Clancey, K.; Gohberg, I. Factorization of Matrix Functions and Singular Integral Operators; Birkhäuser: Basel, Switzerland, 1981. [Google Scholar]
  54. Whittaker, E.T.; Watson, G.N. A Course of Modern Analysis, 4th ed.; Cambridge University Press: Cambridge, UK, 1927. [Google Scholar]
  55. Rybalko, Y.; Shepelsky, D. Long-time asymptotics for the integrable nonlocal nonlinear Schrödinger equation. J. Math. Phys. 2019, 60, 031504. [Google Scholar] [CrossRef]
  56. Boutet de Monvel, A.; Fokas, A.S.; Shepelsky, D. The mKdV equation on the half-line. J. Inst. Math. Jussieu 2004, 3, 139–164. [Google Scholar] [CrossRef]
  57. Lenells, J. The nonlinear steepest descent method: Asymptotics for initial-boundary value problems. SIAM J. Math. Anal. 2016, 48, 2076–2118. [Google Scholar] [CrossRef]
  58. Guo, B.L.; Liu, N. Long-time asymptotics for the Kundu-Eckhaus equation on the half-line. J. Math. Phys. 2018, 59, 061505. [Google Scholar] [CrossRef]
  59. Hirota, R. The Direct Method in Soliton Theory; Cambridge University Press: New York, NY, USA, 2004. [Google Scholar]
  60. Ma, W.X. Generalized bilinear differential equations. Stud. Nonlinear Sci. 2011, 2, 140–144. [Google Scholar]
  61. Freeman, N.C.; Nimmo, J.J.C. Soliton solutions of the Korteweg-de Vries and Kadomtsev-Petviashvili equations: The Wronskian technique. Phys. Lett. A 1983, 95, 1–3. [Google Scholar] [CrossRef]
  62. Ma, W.X.; You, Y. Solving the Korteweg-de Vries equation by its bilinear form: Wronskian solutions. Trans. Am. Math. Soc. 2005, 357, 1753–1778. [Google Scholar] [CrossRef]
  63. Matveev, V.B.; Salle, M.A. Darboux Transformations and Solitons; Springer: Berlin, Germany, 1991. [Google Scholar]
  64. Xu, X.X. An integrable coupling hierarchy of the MKdV integrable systems, its Hamiltonian structure and corresponding nonisospectral integrable hierarchy. Appl. Math. Comput. 2010, 216, 344–353. [Google Scholar]
  65. Dong, H.H.; Zhao, K.; Yang, H.W.; Li, Y.Q. Generalised (2+1)-dimensional super MKdV hierarchy for integrable systems in soliton theory. East Asian J. Appl. Math. 2015, 5, 256–272. [Google Scholar] [CrossRef]
  66. Dong, H.H.; Guo, B.Y.; Yin, B.S. Generalized fractional supertrace identity for Hamiltonian structure of NLS-MKdV hierarchy with self-consistent sources. Anal. Math. Phys. 2016, 6, 199–209. [Google Scholar] [CrossRef]
  67. Guo, M.; Fu, C.; Zhang, Y.; Liu, J.X.; Yang, H.W. Study of ion-acoustic solitary waves in a magnetized plasma using the three-dimensional time-space fractional Schamel-KdV equation. Complexity 2018, 2018, 6852548. [Google Scholar] [CrossRef]
  68. Ma, W.X.; Zhou, Y. Lump solutions to nonlinear partial differential equations via Hirota bilinear forms. J. Differ. Equ. 2018, 264, 2633–2659. [Google Scholar] [CrossRef]
  69. Ma, W.X.; Li, J.; Khalique, C.M. A study on lump solutions to a generalized Hirota-Satsuma-Ito equation in (2+1)-dimensions. Complexity 2018, 2018, 9059858. [Google Scholar] [CrossRef]
Figure 1. Increasing and decreasing of $\theta$.
Figure 2. The signature table of $\operatorname{Re} i\theta$ when $x < 0$.
Figure 3. The oriented contour on $\mathbb{R}$.
Figure 4. The oriented jump contour $\Sigma$.
Figure 5. Part of $\operatorname{Re} i\theta(k) > 0$ in the upper half $k$-plane.
Figure 6. Part of $\operatorname{Re} i\theta(k) > 0$ in the lower half $k$-plane.
Figure 7. Deformation of a RH problem.
Figure 8. The oriented contour $\Sigma = \Sigma_A \cup \Sigma_B$.
Figure 9. Oriented contours $\Sigma_A$ and $\Sigma_B$ (reoriented).
Figure 10. The oriented contour $L_t$.
