1. Introduction
To make the introduction as accessible as possible to the reader, we put the topic of the present paper in a historical context, thereby highlighting the results relevant for its content. The study of integrable or exactly solvable systems has a long history. Classical mathematicians such as Euler, Lagrange, Liouville, Riemann and Poincaré, amongst many others, investigated nonlinear systems which could be integrated more or less explicitly. We start at the beginning of the era of soliton theory, of which we give a short description based on [
1]. The first recorded observation of a solitary wave is the one made by Russell in 1834 [
2]. He observed along a narrow channel in Scotland that, when a barge was suddenly stopped, the water mass put in motion by the boat formed at the prow of the barge a solitary wave that moved forward for quite some time at a constant speed while maintaining its height. After returning from this trip, he started careful experiments with water waves to classify them, and after some time he arrived empirically at a number of results: the existence of solitary waves, a formula for their speed and a nonlinear differential equation they had to satisfy. The astronomer Airy, based on his own work on water waves, denied the existence of solitary waves, and this led to bitter disputes. This controversy around the validity of Russell’s claims went on till 1895, when Korteweg and his student de Vries published the paper [
3] on propagation of shallow water waves in a narrow rectangular canal. It settled the controversy entirely in favor of Russell. In their paper, they argued that the height 
 of the waves in the canal should satisfy the differential equation
      where 
x is a space coordinate along the canal and 
t a time coordinate. Moreover, they gave a set of solitary wave solutions of this equation, the so-called 
cnoidal waves. Since then, Equation (
1) has been called the KdV equation.
Interest in the KdV equation reignited when Zabusky and Kruskal [
4] found remarkable interaction properties for the solitary wave solutions to KdV. They coined the name 
solitons for these solutions. Soon after that, Gardner, Greene, Kruskal and Miura [
5] introduced 
inverse scattering transform, or IST, to solve the Cauchy problem for KdV, which turned out to be applicable to many more equations. As a result, research on both the theoretical and practical aspects of these equations intensified. For example, in 1995, at the centennial of the eponymous KdV paper, the practical applications [
6] of the KdV equation alone were recognized in various fields: fluid mechanics, optics, oceanography, plasma physics, astronomy and electrical transmission lines, among others.
An alternative formulation of the KdV equation was found by Lax [
7]. He considered the Schrödinger operator 
 and the third order operator 
, where 
u is a function depending on 
x, 
t and possibly other parameters and 
∂ denotes the partial derivative 
. Then, a direct calculation yields that their commutator is a zero-th order operator in 
∂
      and we see that the evolution equation for 
      is equivalent to 
u satisfying the KdV equation. Equation (
2) was soon called the 
Lax form of the KdV. This form suggests that 
 is obtained by conjugating a 
t-independent operator like 
 with a 
t-dependent one, and we will see it coming back for all kinds of integrable equations. For example, the Lax form of the nonlinear Schrödinger equation was found by Zakharov and Shabat in [
8] and NLS was the starting point in [
9] to treat Hamiltonian methods for soliton equations. It laid the foundation for the quantum version of IST developed by the Faddeev school in St. Petersburg, see [
10]. This also brought solvable models from statistical mechanics into the picture and likewise quantum groups made their appearance in the integrable world. A rich collection of contributions to these topics and integrability can be found in [
11].
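To make the commutator computation behind the Lax form (2) concrete, the following sympy sketch verifies it for one common normalization of the KdV Lax pair, namely L = ∂² + u and B = 4∂³ + 6u∂ + 3u_x; the constants in Equations (1) and (2) may differ from this choice, so the sketch is an illustration rather than a transcription of the formulas above.

```python
# Verify that for L = d^2 + u and B = 4 d^3 + 6 u d + 3 u_x (one standard
# normalization of the KdV Lax pair) the commutator [B, L] is a zeroth order
# operator, namely multiplication by 6*u*u_x + u_xxx, so that L_t = [B, L]
# is equivalent to the KdV equation u_t = 6*u*u_x + u_xxx.
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)      # the potential in the Schroedinger operator
f = sp.Function('f')(x)         # arbitrary test function

def L(g):                        # L = d^2/dx^2 + u
    return sp.diff(g, x, 2) + u * g

def B(g):                        # B = 4 d^3/dx^3 + 6 u d/dx + 3 u_x
    return 4 * sp.diff(g, x, 3) + 6 * u * sp.diff(g, x) + 3 * sp.diff(u, x) * g

commutator = sp.expand(B(L(f)) - L(B(f)))
kdv_rhs = (6 * u * sp.diff(u, x) + sp.diff(u, x, 3)) * f
print(sp.expand(commutator - kdv_rhs))   # prints 0
```
Since the result contains no derivatives of the test function f, the commutator is indeed of order zero in ∂.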
The third nonlinear equation that needs to be mentioned is a two-dimensional version of KdV that was used in plasma physics by Kadomtsev and Petviashvili and is therefore called the KP equation. For a function 
, it reads
	  Let 
 be another function, 
 the operator 
 and 
 the operator 
. Consider the following linear system for 
:
	  Then, the compatibility of this system is 
. Eliminating 
v from these equations shows that 
u satisfies the KP Equation (
3). Krichever searched for an analogue for the KP equation of the results in [
12,
13,
14,
15] for the KdV equation and succeeded [
16,
17]. The algebraic geometric data he needed were the following: a complete irreducible algebraic curve 
X, a rank 1 torsion free coherent sheaf 
 over 
X, a non-singular point 
, a local coordinate 
 around 
 such that 
z is an isomorphism between a closed neighbourhood 
 of 
 and the unit disc around infinity on the Riemann sphere and finally a trivialization 
 of 
 over 
. If 
X is nonsingular, then 
X is a compact Riemann surface and 
 is a line bundle over 
X.
The fourth type of system that exhibits soliton-like behavior was found by M. Toda [
18]. He was inspired by results of Fermi [
19] and Ford [
20], and his goal was to elucidate certain characteristic features of nonlinear waves using nonlinear lattice models. He found lattice models that possess multi-soliton solutions and periodic waves, and whose initial value problem can be solved with IST. Since then, these lattices have carried his name. We mention one of them, the 
finite Toda lattice. For more examples, we refer the reader to [
21]. The finite Toda lattice describes the motion of 
n particles, all with mass equal to 1, on a line with only nearest neighbor interaction. Let 
 denote the position of the 
j-th particle and 
 its momentum. The equations of motion of the 
n particles are:
      where we put 
 and 
. Introduce the new coordinates
      and recall that 
. Then, the equations of motion (
5) transform into
	  It was H. Flaschka [
22] who showed that Equation (
6) can be written in Lax form. Consider the following 
-matrices
	  Then, one verifies directly that Equation (
6) amounts to the Lax equation
	  With the help of this equation, one shows that the eigenvalues of 
L are time-independent and polynomials in 
 and 
. Moreover, the functions
      are 
n linearly independent integrals for the vector field
      for the Hamiltonian 
 and 
 are in involution with respect to the standard Poisson bracket 
 on 
, i.e., 
. For proofs, we refer the reader to Chapter 3 in [
23]. Note that the matrix 
B in the Lax form of the finite Toda lattice is the anti-symmetric part of 
L in the splitting of the 
-matrices into an anti-symmetric part and a lower triangular part. Hence, (
7) is a Lax equation coupled to this decomposition. An integrable hierarchy, coupled to an infinite dimensional variant of this decomposition, is described in [
24]. This decomposition is different from the one on which the discrete KP hierarchy [
25] is based.
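The conservation of the spectrum of L under the Lax flow (7) is easy to check numerically. The sketch below integrates dL/dt = [B, L], with B the anti-symmetric part of L as described above, for an arbitrary symmetric tridiagonal initial matrix (a hypothetical choice of data with n = 5) and compares the eigenvalues at the initial and final times.

```python
# Numerical check that the Lax equation (7), dL/dt = [B, L] with B the
# anti-symmetric part of L, preserves the eigenvalues of the tridiagonal
# Lax matrix of the finite Toda lattice.
import numpy as np
from scipy.integrate import solve_ivp

n = 5
rng = np.random.default_rng(0)
b = rng.normal(size=n)            # diagonal entries of L
a = rng.uniform(0.5, 1.5, n - 1)  # off-diagonal entries of L
L0 = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

def skew_part(L):
    """Anti-symmetric part of L in the splitting into an anti-symmetric part
    and a lower triangular part: strictly upper part minus its transpose."""
    U = np.triu(L, 1)
    return U - U.T

def rhs(t, y):
    L = y.reshape(n, n)
    B = skew_part(L)
    return (B @ L - L @ B).ravel()

sol = solve_ivp(rhs, (0.0, 5.0), L0.ravel(), rtol=1e-10, atol=1e-12)
L_end = sol.y[:, -1].reshape(n, n)

print(np.sort(np.linalg.eigvalsh(L0)))
print(np.sort(np.linalg.eigvalsh(L_end)))   # same spectrum up to integration error
```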
Gelfand and Dickey realized that the third order operator 
A occurring in the Lax form (
2) of KdV was nothing but the differential operator part of the third power of the square root 
 of 
 in the pseudo-differential operators Psd in 
∂. Here, Psd is an extension of the ring of differential operators; its coefficients are functions on which 
∂ acts via Leibniz. It consists of possibly infinite series 
 satisfying 
 and 
 and it has the property that each monic element of Psd of order 
 possesses an 
m-th root in Psd. It was then clear how to generalize the construction. In [
26], the authors considered, instead of the Schrödinger operator 
, a monic differential operator 
 of order 
 in 
∂ of the form 
 and for all 
 the monic differential operator 
 consisting of the differential operator part of the 
m-th power of the 
n-th root 
 of 
 in the pseudo-differential operators. Take the infinite set of Lax equations for 
	  Gelfand and Dickey showed that this system of equations is compatible and determined its Hamiltonian structure. For 
, this system is usually called the 
KdV hierarchy and for the generalization one uses the name 
n-th Gelfand–Dickey hierarchy or 
n-KdV hierarchy. The strict version of the n-th Gelfand–Dickey hierarchy is a wider deformation of 
 and is treated in [
27].
The next big step was the appearance of a long series of papers by members of the Sato school in Kyoto on transformation groups for soliton equations. They corresponded to various infinite dimensional groups and were full of all kinds of intriguing ideas and connections. We focus on the main example: the KP hierarchy, that is, the one associated with the group 
. Date, Jimbo, Kashiwara and Miwa introduced it in [
28] and considered elements 
L in Psd of the form 
 and we see 
 as a deformation of the commutative algebra 
. Note that any 
 is of this form. In that case, we are dealing with a deformation of 
 by conjugating with the dressing operator 
K. We assume that the coefficients in the pseudo-differential operators are differentiable with respect to the parameters 
 and that each 
 commutes with 
∂. Let 
 be the differential operator part of 
, then the evolution equations for 
L are
	  These Lax equations are equivalent to the zero curvature relations for 
	  Now, the first two nontrivial powers of 
L start as follows
	  Hence, 
 and 
 and from the calculation done when the KP equation was introduced, it follows that 
 is a solution of the KP equation. This justifies the name of the hierarchy. Each of the Gelfand–Dickey hierarchies is a subsystem of the KP hierarchy. Any solution 
L of the KP hierarchy for which 
 is a differential operator delivers the solution 
 of the 
n-th Gelfand–Dickey hierarchy, and in terms of deformations this corresponds to deforming the commutative algebra 
 inside the differential operators. The geometric picture sketched in [
28] is that the group 
 acts in the unspecified space of 
-functions, that the 
-orbit of the vacuum is the Sato Grassmannian and this 
-orbit corresponds to the solutions of the KP hierarchy. This picture could use some clarification. In a later paper, the authors specified the space of 
-functions as the polynomials in 
 and the picture became better, but not good enough for, e.g., the solutions found by Krichever. Segal and Wilson followed a different path in [
29] to obtain solutions of the KP hierarchy. They used the linearization of KP. This is a set of relations in the Psd-module of oscillating functions
	  If you find for 
L and all the 
 a suitable 
 in that module such that these relations hold, then 
L is a solution of the KP hierarchy. Segal and Wilson introduced a Grassmann manifold based on a space of Hilbert–Schmidt operators from which they constructed solutions of the KP hierarchy, and for each point of this variety they introduced a 
-function that relates to the wave function in the way the Sato school predicted. The solutions found by Krichever also fall within this framework. The strict KP hierarchy is a wider deformation of 
 introduced in [
30]. The geometric picture is a flag variety covering the Grassmann manifold of Segal and Wilson and can be found in [
31]. In this case, one needs two 
-functions to describe the relevant wave function [
32]. It will not come as a surprise that the strict versions of the Gelfand–Dickey hierarchies are subsystems of the strict KP hierarchy.
We have seen how research in integrable systems has developed from single equations to infinite dimensional integrable hierarchies. They play an important role in both mathematics and theoretical physics, and this common ground is a strong stimulus for cooperation and success. We mention a few areas in physics where they appear. First of all, they often occur in matrix models in the application of large 
N-techniques, see, e.g., Refs. [
33,
34,
35]. The next subject combines a wide range of disciplines: two-dimensional quantum field theory, intersection theory on moduli spaces of Riemann surfaces, integrable hierarchies, matrix integrals and random surfaces to name a few. At the beginning of the 1990s, Edward Witten [
36] started a program [
37] to search for a relationship between random surfaces and the algebraic topology of moduli space and formulated a series of conjectures based on his findings. The first of these conjectures was proved by Kontsevich [
38], who showed that the string partition function is a 
-function of the KdV hierarchy. Another topic is topological strings on Calabi–Yau geometries. It provides a unifying picture connecting non-critical (super)strings, integrable hierarchies and various matrix models, see [
39,
40].
In [
41], we studied integrable hierarchies related to deforming various commutative subalgebras of the 
-matrices with coefficients in 
k, 
 or 
. We also constructed solutions of these hierarchies by conjugating the original matrices with matrices corresponding to operators of the form identity plus a Hilbert–Schmidt operator. In operator terms, this is a mild perturbation, as Hilbert–Schmidt operators are compact. In a later paper [
42], we took the main example of a commutative subalgebra and studied the scaling properties of the corresponding hierarchy. Here, we stick to the basic example and present a far wider class of dressing matrices for both hierarchies with matrices corresponding to bounded operators on suitable Banach spaces. To be more concrete, we first need some notation.
Let 
R be a commutative 
k-algebra over the field 
k. We write 
 for the 
-matrices with coefficients from 
R and similarly 
 for the space of 
-matrices with coefficients from 
R. The transpose 
 of any matrix 
 or any 
 matrix 
A with coefficients from 
R is defined as in the finite dimensional case. Let 
V be the 
R-module of all 
-matrices with coefficients from 
R, i.e.,
	  Inside 
V, we consider the 
R-submodule
	  The space 
 is a free 
R-module with 
 as the basis, where 
 is the vector with the 
i-th coordinate equal to one and the remaining ones equal to zero. For each 
 and each 
, the product 
 is well defined and determines a vector in 
V. Hence, if we write
      then 
 is an 
R-linear map in 
. Inside 
, we have the basic matrices 
 and 
, whose matrix entries, in Kronecker notation, are given by
	  It is convenient to use the notation 
 for an 
. A central role in this paper is played by the shift matrix 
S, its transpose 
 and their powers, where 
S is the matrix 
 that corresponds to the shift operator 
 defined by
The way the two deformations of the commutative algebra  are performed is by conjugating them with invertible lower triangular -matrices with coefficients from R. For the first deformation, we conjugate with lower triangular matrices with the identity on the central diagonal. The evolution equations of the deformed generators are Lax equations, determined by the decomposition of matrices in  into their upper triangular and strict lower triangular parts. In this case, these evolution equations are called the Lax equations of the -hierarchy.
For the second, wider deformation, the central diagonal is merely assumed to be invertible. Also in this case, the evolution equations of the deformed generators are Lax equations, only now based on the splitting of matrices in  into their strict upper triangular and lower triangular parts. The evolution equations of these deformations of  are called the Lax equations of the strict -hierarchy.
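On a finite truncation, the two splittings just mentioned are simply the decompositions of a matrix into its upper triangular plus strict lower triangular parts and into its strict upper triangular plus lower triangular parts, as in the following small numpy sketch (a hypothetical 4×4 example).

```python
# The two decompositions underlying the Lax equations, on a 4x4 truncation:
#   A = (upper triangular) + (strict lower triangular)   for the first hierarchy,
#   A = (strict upper triangular) + (lower triangular)   for its strict version.
import numpy as np

A = np.arange(16.0).reshape(4, 4)

A_up,  A_slow = np.triu(A, 0), np.tril(A, -1)   # first splitting
A_sup, A_low  = np.triu(A, 1), np.tril(A, 0)    # second splitting

assert np.allclose(A, A_up + A_slow)
assert np.allclose(A, A_sup + A_low)
```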
In the case of the -hierarchy, the dressing matrix of the deformation turns out to be unique, and in the strict case it is determined up to a multiple of the identity. The uniqueness of the dressing operator enables one to prove directly the equivalence of the Lax form of the k[S]-hierarchy with a set of Sato–Wilson equations. There exists an analogue of the Sato–Wilson equations for the strict -hierarchy. It always implies the Lax equations of this hierarchy and it suffices for the geometric solutions that we produce further on. However, we show how they can fail.
Solutions of both hierarchies are constructed by producing wave matrices in the linearization module of each hierarchy. Therefore, we recall the essentials of this approach. We start the last section with the description of a family of Banach spaces B that all have a countable Schauder basis; they have the approximation property and the elements with only a finite number of coordinates with respect to the Schauder basis are dense in B. For these spaces, we show how LU factorizations in  can be used to construct for a nonempty open subset of  solutions of the -hierarchy and its strict version. We show that the dressing matrix coefficients of these solutions are quotients of analytic functions on the group of commuting flows of the hierarchies. Next, we introduce a subgroup  of  that is the analogue of the restricted linear group  that was used in cases in which B is a Hilbert space to construct solutions of the KP hierarchy. We conclude by showing that  is contained in this open subset and that leads to a description of each set of solutions in terms of a homogeneous space of .
The content of the various sections is as follows: 
Section 2 describes the scene of the deformations, the algebra 
, the maximality of 
 and the properties required later on. The next section is devoted to a description of the two deformations; it contains a discussion of the Lax equations they have to satisfy and we describe there the link with their Sato–Wilson form. The form of the relevant 
-module, the equations of the linearization and a characterization of the special vectors, called wave matrices, can all be found in 
Section 4. In the last section, we present the role of LU factorization in constructing solutions of both hierarchies, we discuss properties of the dressing matrix coefficients and prove that 
 is contained in the open set of 
, where LU factorization is effective in producing solutions of the hierarchies.
  3. The -Hierarchy and Its Strict Version
In this section, we discuss the two deformations of 
 that we consider and the evolution equations we want the deformations of 
S to satisfy. In the first deformation, each 
 in 
 is deformed into 
, where 
 is an element of the form
	  One directly checks that any element 
, with 
, has this form and we call 
 a 
-
deformation of 
S. We also call 
U the 
dressing matrix of 
. In the second deformation, we transform each matrix 
 into 
, where 
 is an element of the form
	  Also in this case, one can easily verify that any matrix 
, with 
, possesses the form (
18) and therefore it is called a 
-
deformation of 
S. Likewise, we also call 
P the 
dressing matrix of the deformation 
. Moreover, we showed in [
42].
Lemma 2. Conversely, the following holds for the deformations (17) and (18): - (a) 
 Any  of the form (17) can uniquely be written in the form  with , i.e.,  is a -deformation of S. - (b) 
 Any  of the form (18) can be written in the form , where  is unique and  is determined up to a factor from . In particular,  is a -deformation of S. 
 Next, we discuss the evolution equations that an -deformation  of S has to satisfy and those for a -deformation  of S. Thereby, each  is seen as an infinitesimal generator of a flow. In that light, we assume in both cases that R is equipped with a set of commuting k-linear derivations , where each  should be seen as an algebraic substitute for the derivative with respect to the flow parameter corresponding to the flow generated by each . By letting each  act coefficient-wise on matrices in , we obtain a set of derivations of , also denoted by . The data  we call a setting for both deformations.
For each 
-deformation 
 of 
S and all 
, we consider the cut-offs
	  Note that, since all 
 commute, the 
 satisfy for all 
      where the right-hand side is of degree 
 or lower, like 
. This shows that it makes sense to unite the following set of Lax equations for the 
 in one combined system, the so-called 
-hierarchy:
	  It suffices to prove the equations just for 
. For, since 
 and 
 are both 
k-linear derivations of 
, all basis elements 
 of the deformation 
 of 
 satisfy the same Lax equations. Equation (
20) itself is called the 
Lax equations of the -hierarchy. Note that the Lax Equation (
20) shows that the action of each 
 on the coefficients of 
 expresses each of them as a polynomial expression in the coefficients of 
. Any 
-deformation 
 of 
S in 
 that satisfies all Equation (
20), 
, is called a 
solution of the 
-hierarchy in the setting 
. Note that in each setting there is at least one solution of the 
-hierarchy, namely 
, the 
trivial solution of the 
-hierarchy. We can express the condition that a 
-deformation 
 is a solution of the 
-hierarchy in terms of 
U. For, there holds
Lemma 3. Any  with  is a solution of the -hierarchy if and only if U satisfies the relations: for all   Proof.  Since 
, we obtain for 
 that
		If 
U satisfies (
21), then 
 yields the Lax equations for 
. Conversely, if 
 is a solution of the 
-hierarchy, then 
 commutes with 
 and thus 
 commutes with 
S. The element 
 only has strict negative diagonals and Lemma 1 implies that 
 and this proves the claim.    □
 Since the relations (
21) are the analogue of the Sato–Wilson equations for the KP hierarchy, we call them the 
Sato–Wilson form of the -hierarchy. This is the form that comes out of the approach in 
Section 4 and that we will use at the construction in 
Section 5.
Remark 1. Let  be the zero-th diagonal of , then the zero curvature form of the -hierarchy, see [42], implies that the commuting diagonal matrices  satisfy the compatibility conditions  Next, we treat the evolution equations for the 
-deformations 
. In that case, we consider for each 
 the strict cut-off
	  Since all the 
 commute, there holds
      which shows that the 
 have the common property that the commutator with 
 has degree 
m or lower. The same holds for the matrix 
, so it makes sense to unite the following set of Lax equations for the 
 in one combined system
	  Because of the form of the 
 and the similarity with the Lax Equation (
20), we call this system the 
strict -hierarchy. Equation (
24) itself is called the 
Lax equations of the strict -hierarchy. Note that also in the strict case the Lax Equation (
24) shows that the action of each 
 on the coefficients of 
 expresses each of them as a polynomial expression in the matrix coefficients of 
. Any 
-deformation 
 of 
S in 
 that satisfies Equation (
24) is called a 
solution of the strict 
-hierarchy in the setting 
. By the same argument as for the 
 case, it suffices to prove Equation (
24) for 
, since all basis elements 
 of the wider deformation 
 of 
 satisfy the same Lax equations. Note that in each setting there is at least one solution of the strict 
-hierarchy, namely 
, the 
trivial solution of the strict 
-hierarchy.
Next, we discuss the Sato–Wilson equations for a 
-deformation 
, with 
, where 
 and 
. The analogues for 
P of Equation (
21) are
	  These are called the 
Sato–Wilson equations for 
P. If 
P satisfies Equation (
25), then
      yields the Lax Equation (
24) for 
. Hence, Equations (
25) are sufficient for 
 to be a solution of the strict 
-hierarchy. They have a characterization in terms of the components 
d and 
U.
Proposition 1. Take a -deformation , with , and let . Then, the Sato–Wilson equations for P are equivalent to  being a solution of the -hierarchy and d being a solution of the system
which is a compatible system of equations if  is a solution of the -hierarchy, see (22).  Proof.  We express 
 in the components 
d and 
U
		Since 
 has only strict negative diagonals, the zero-th diagonal of 
 is 
 and the sum of all strict negative diagonals of 
 is 
. Similarly, we decompose
		Hence, the zero-th diagonal parts of 
 and 
 are equal if and only if 
d satisfies the system (
26) and the equality of the sum of all their strict negative diagonals is equivalent with the Sato–Wilson equations for 
.    □
Remark 2. Note that the choice of the dressing operator P of  is crucial in (25). Assume that for  Equation (25) holds. Then,  is in any case a solution of the strict -hierarchy. We have seen that all  are also dressing operators of . The right-hand side of Equation (25) is the same for all dressing operators . However, the left-hand side is equal to
and that is only equal to the right-hand side if  for all . In other words, r should be constant for all the . Hence, if one were to pick a dressing operator  with  for some i, then the Sato–Wilson equations would not hold for that . Nevertheless, the Lax equations of the strict -hierarchy hold for .
  4. Wave Matrices
Let 
 be a setting where one looks for solutions to both hierarchies. The homogeneous spaces of solutions of both hierarchies are constructed by producing special vectors, called wave matrices, in a suitable 
-module that we briefly recall. We start with the upper triangular matrix
	  Here, 
t is the shorthand notation for 
, 
 and each 
 is a homogeneous polynomial of degree 
j in the 
, where each 
 has degree 
i. Note that 
 commutes with 
S and satisfies for all 
, 
. Recall that each 
 was the algebraic substitute on 
R for the partial derivative with respect to the flow parameter of 
. Therefore, we write 
. The 
-module that we need consists of formal products of a perturbation factor from 
 and 
. The products will be formal, since making sense of the product of a matrix from 
 and the matrix 
 as an actual matrix requires convergence conditions, and we want to give an algebraic description of the module. Consider therefore the space 
 consisting of the formal products
	  Addition (resp. scalar multiplication) is defined on 
 by adding up the perturbation factors of two elements (resp. by applying the scalar multiplication on the perturbation factor). Something similar is done with the 
-module structure: for each 
, define
	  Clearly, this makes 
 into a free 
-module with generator 
. Besides the 
-action, each 
 also acts on 
 by the formula
	  Here, we impose a Leibniz rule on the formal product. Finally there is a right-hand action of 
S on 
. Since 
S and 
 commute, we can define it by
	  Analogous to the terminology in the function case, see, e.g., Ref. [
28], we call the elements of 
 oscillating -matrices. Note that any 
 with 
 invertible is a generator of the free 
-module 
. Examples are the choices 
 resp. 
 in which case we call 
 an oscillating 
-matrix of type 
 resp. 
. With the three actions just defined, we can introduce inside 
 two systems of equations leading to solutions of the respective hierarchies.
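The polynomials p_j occurring in γ(t) can be generated explicitly. Assuming, as is standard, that γ(t) = exp(Σ_{i≥1} t_i S^i) = Σ_{j≥0} p_j(t) S^j, the p_j are the elementary Schur polynomials; the sympy sketch below computes the first few of them and exhibits the homogeneity of degree j when each t_i is given degree i.

```python
# Elementary Schur polynomials p_j(t), assuming the standard form
#   gamma(t) = exp(sum_i t_i S^i) = sum_j p_j(t) S^j.
# They are read off as the coefficients of z^j in exp(sum_i t_i z^i).
import sympy as sp

N = 5
z = sp.symbols('z')
t = sp.symbols('t1:%d' % (N + 1))            # t1, ..., t5

expo = sp.exp(sum(t[i] * z**(i + 1) for i in range(N)))
poly = sp.expand(sp.series(expo, z, 0, N + 1).removeO())

for j in range(N + 1):
    print('p_%d =' % j, poly.coeff(z, j))
# p_0 = 1, p_1 = t1, p_2 = t2 + t1**2/2, p_3 = t3 + t1*t2 + t1**3/6, ...
```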
For the 
-hierarchy, this system is as follows: consider a 
-
deformation of 
S in 
 with the set of projections 
. The goal is now to find an oscillating 
-matrix 
 of type 
 such that in 
 the following set of equations holds
	  Since 
 is a free 
-module with generator 
, the first equation 
 implies 
 and thus 
. By Lemma 2, this determines 
 uniquely. The same argument allows one to translate each 
 into an identity in 
:
	  Thus, we obtain that 
 satisfies the Sato–Wilson Equation (
21) and 
 is a solution of the 
-hierarchy. The system (
28) is called the 
linearization of the -hierarchy and 
 a 
wave matrix of the -hierarchy. Note that 
 is the wave matrix corresponding to the trivial solution of the 
-hierarchy, 
.
In the case of the strict 
-hierarchy, we start with a 
-
deformation of 
S together with the projections 
. Now, we look for an oscillating 
-matrix 
 of type 
 that satisfies in 
 the following set of equations:
	  Also, 
 is a generator of 
 and again we can translate Equation (
29) into identities in 
. Thus, the first equation 
 becomes 
 and the second set of equations in (
29) yields the Sato–Wilson Equation (
25) of the strict 
-hierarchy. In particular, 
 is a solution of that hierarchy. The system (
29) is called the 
linearization of the strict -hierarchy and a 
 satisfying this system a 
wave matrix of the strict -hierarchy.
For both hierarchies, we use in the sequel a milder property that oscillating 
-matrices of a certain type have to satisfy in order to become a wave matrix of that hierarchy. For a proof, see [
41].
Proposition 2. Let  be an oscillating -matrix of type  in  and  the corresponding potential solution of the -hierarchy. Similarly, let  be an oscillating -matrix of type  in  with potential solution  of the strict version.
- (a) 
 Assume there exists for each  an element  such that Then, each  and ψ is a wave matrix for the -hierarchy.
- (b) 
 Suppose there exists for each  an element  such that Then, each  and φ is a wave matrix for the strict -hierarchy.
 Since you do not meet formal products of lower triangular -matrices and upper triangular -matrices in real life, the only way to construct wave matrices of both hierarchies is to give an analytic framework, where you can produce well-defined products of such matrices. This is done in the next section.
  5. LU Factorizations and Solutions of Both Hierarchies
All the relevant 
-matrices that will be produced in the sequel correspond to bounded operators acting on a separable real or complex Banach space with a countable Schauder basis. Since 
 or 
, we have on each 
k the standard norm 
 The Banach spaces 
B we will work with are different spaces of 
 matrices. In all cases the Schauder basis consists of the 
 introduced in 
Section 2. The choice we make for 
B is either one of the spaces 
 of 
 matrices, defined by
      with the norm
      or the space 
 defined by
      with the supremum norm 
. Any 
, the space of all bounded 
k-linear maps from 
B to itself, corresponds to left multiplication 
 with an 
-matrix 
 with respect to 
, i.e.,
	  The invertible transformations in 
 are denoted by 
. For example, the operator 
 as defined in 
Section 2 by Formula (
12) maps each of these Banach spaces 
 to itself and defines an operator in 
 with operator norm equal to 1.
All the Banach spaces 
 have two properties in common that we need later on: first of all, they have the approximation property, i.e., the bounded finite dimensional operators in 
 are dense with respect to the operator norm in the compact operators 
 in 
, see, e.g., Ref. [
43]. Secondly, each vector in 
 is the limit of a Cauchy sequence of vectors with only a finite number of nonzero coordinates with respect to 
.
Now, we come to the description of the role of LU factorization in 
 for the construction of solutions of both hierarchies. Two decompositions of 
 play a role in this process. The first splits 
 as 
, with the corresponding matrices
	  It gives rise to the decomposition of the Lie algebra 
 as 
, where
	  The Lie groups corresponding to 
 and 
 are, respectively
	  Since the exponential map is a local diffeomorphism from an open neighbourhood of 0 in 
 resp. 
 to an open neighbourhood of the identity element in 
 resp. 
, it follows that 
 is a neighbourhood of the identity in 
. As any point 
 of 
 can be reached with left multiplication with elements of 
 and right multiplication with elements of 
, the set 
 is open in 
. It is called the 
big cell in 
 with respect to the groups 
 and 
. Since the groups 
 and 
 intersect only in the identity, the splitting of an 
 from 
 into the product of a unipotent lower triangular matrix and an upper triangular matrix is unique. We use a slightly twisted version of this 
LU factorization of 
: each 
 splits uniquely as
	  The component 
 we call the 
unipotent component of the LU factorization (
32).
The second decomposition is a variation of the foregoing and consists of splitting a 
 as 
, where the matrices of both components are given by
	  This leads to the decomposition 
 of 
, where
	  The Lie groups corresponding to 
 and 
 are, respectively
	  Let 
 denote the diagonal matrices in 
. Then, we have 
 and the big cell 
 with respect to 
 and 
 is also equal to 
. This yields an alternative LU factorization of an element in 
 into the product of a lower triangular matrix and a unipotent upper triangular matrix. We will again use a twisted version of this factorization: each 
 in 
 splits uniquely as follows:
	  The component 
 we call the 
parabolic component of the LU factorization (
35).
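On a finite truncation, both factorizations can be computed by Gaussian elimination without pivoting, which succeeds precisely when all leading principal minors are nonzero, the finite analogue of lying in the big cell. The sketch below produces the unipotent-lower times upper form and derives the lower times unipotent-upper form from it by splitting off the diagonal; the twisted versions (32) and (35) used in the text differ from these plain forms only in where an inverse is placed.

```python
# Plain LU factorizations behind (32) and (35) on a finite truncation:
#   A = L1 @ U1  with L1 unipotent lower triangular, U1 upper triangular,
#   A = L2 @ U2  with L2 lower triangular,           U2 unipotent upper triangular.
# Gaussian elimination without pivoting; valid when all leading principal
# minors of A are nonzero.
import numpy as np

def lu_unipotent_lower(A):
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, :] -= L[i, k] * U[k, :]
    return L, U

A = np.array([[4., 1., 2.],
              [2., 5., 1.],
              [1., 2., 6.]])

L1, U1 = lu_unipotent_lower(A)            # first form
D = np.diag(np.diag(U1))
L2, U2 = L1 @ D, np.linalg.solve(D, U1)   # second form

assert np.allclose(A, L1 @ U1) and np.allclose(A, L2 @ U2)
```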
Basically, the group of commuting flows that is relevant for both hierarchies is described in the generator 
 of the module 
 of oscillating matrices. What needs to be done still is to make a proper choice for the 
 such that for those parameters 
 is the matrix of an element in 
. We have seen that the operator norm 
 is equal to 1. Hence, the following choice defines a subgroup 
 in 
:
	  For each 
 there holds 
, with the polynomials 
 as in (
27), and 
 is thus a subgroup of 
.
Now, we construct for each 
g in the open set 
 a solution of the 
-hierarchy and the strict 
-hierarchy. For each 
, consider the open subset 
 of 
 defined by
	  The subset 
 is nonempty if and only if 
, which we assume from now on. The appropriate setting in both cases is the algebra
      with the derivations 
 We start with the construction of the solutions of the 
-hierarchy. Then, we have by definition for all 
 that
      and thus on the matrix level
	  Note that all matrix coefficients of 
 and 
 belong to 
, since the map 
 is a diffeomorphism between 
 and 
. Equation (
37) leads to the following identity
	  Clearly, 
 is an oscillating matrix in 
 for which the products between the different factors are real. To show that 
 is a wave matrix for the 
-hierarchy, it suffices to prove the property in Proposition 2. Thus, we compute for all 
, the matrix 
 using both the left- and the right-hand sides of expression (
38). We start with the right-hand side. Since for all 
, 
, we obtain
	  Now, the matrix 
 is of the form 
 with all 
. Next, we use the left-hand side of (
38) to compute 
. This yields
	  In this formula, the expression 
 possesses only negative diagonals and 
 has the form
      with all 
 and 
. Combining this with the expression found for the right-hand side gives for all 
	  Thus, 
 satisfies the conditions in part (a) of Proposition 2 and hence it is a wave matrix of the 
-hierarchy. In other words, 
 is a solution of the linearization of the 
-hierarchy. The corresponding solution 
 of the 
-hierarchy is
      and 
. Note that, since the factor 
 plays no role in the construction of 
, multiplying 
g from the right with an element of 
 does not affect the solution 
.
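The construction just given can be tested on a finite truncation. The numerical sketch below uses conventions that are in the spirit of Formulas (37)–(39) but are assumptions rather than transcriptions of them: S is taken to be the N×N matrix with ones on the first superdiagonal, γ(t) = exp(Σ_i t_i S^i), the dressing matrix is read off from the factorization γ(t)g = U(t)^{-1} P_+(t) with U(t) unipotent lower triangular, and the solution is taken to be U(t) S U(t)^{-1}. A central finite difference then confirms the first Lax equation, with the cut-off equal to the upper triangular part including the diagonal, up to discretization error.

```python
# Finite-truncation sanity check of the construction of solutions of the
# hierarchy via LU factorization.  Assumed conventions (hypothetical, in the
# spirit of Formulas (37)-(39)): S = N x N shift with ones on the first
# superdiagonal, gamma(t) = exp(sum_i t_i S^i), the dressing matrix Uhat(t)
# comes from gamma(t) g = Uhat(t)^{-1} P_plus(t) with Uhat unipotent lower
# triangular, and the solution is Lcal(t) = Uhat(t) S Uhat(t)^{-1}.  We check
# d/dt1 Lcal = [ (Lcal)_{upper incl. diagonal}, Lcal ] by a finite difference.
import numpy as np
from scipy.linalg import expm

N = 8
S = np.eye(N, k=1)                                   # shift matrix
rng = np.random.default_rng(1)
g = np.eye(N) + 0.1 * rng.standard_normal((N, N))    # generic point of the big cell

def lu_unipotent_lower(A):
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, :] -= L[i, k] * U[k, :]
    return L, U               # A = L @ U, L unipotent lower, U upper triangular

def Lcal(t1, t2=0.3):
    gamma = expm(t1 * S + t2 * (S @ S))
    Lfac, _ = lu_unipotent_lower(gamma @ g)
    Uhat = np.linalg.inv(Lfac)            # dressing matrix (assumed convention)
    return Uhat @ S @ np.linalg.inv(Uhat)

t1, h = 0.5, 1e-5
dL = (Lcal(t1 + h) - Lcal(t1 - h)) / (2 * h)         # numerical t1-derivative
L = Lcal(t1)
B1 = np.triu(L, 0)                                   # cut-off: upper incl. diagonal
print(np.max(np.abs(dL - (B1 @ L - L @ B1))))        # ~ 0 up to finite-difference error
```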
Secondly, we present for a 
 the construction of the solution of the strict 
-hierarchy. We proceed similarly, but now we use the LU factorization (
35) of 
. By definition, we have for all 
 that
      and thus on the matrix level
	  Note that all matrix coefficients of 
 and 
 belong to 
, since the map 
 is a diffeomorphism between 
 and 
. Equation (
40) leads to the following identity
	  Clearly, 
 is an oscillating matrix in 
 for which the product between the different factors is real. The idea is again to show that 
 is a wave matrix for the strict 
-hierarchy and that is done by proving property (b) in Proposition 2. Thus, we compute for all 
, the matrix 
 using both the left- and the right-hand sides of expression (
41). We start with the right-hand side. Again all the 
 are zero; hence,
	  The matrix 
 only has strict positive diagonals. Thus, the expression 
 is equal to a matrix of the form 
 with all 
. Next, we use the left-hand side of (
41) to compute 
 once more. This yields
	  In this formula, the expression 
 does not possess any strict positive diagonals and the matrix 
 has the form
      with all 
 and 
. Combining this with the first expression found yields for all 
	  Thus, 
 satisfies the conditions in part (b) of Proposition 2 and hence is a wave matrix of the strict 
-hierarchy. The corresponding solution 
 of the strict 
-hierarchy is
      and 
. Also, here the factor 
 plays no role in the construction of 
. Hence, multiplying 
g from the right with an element of 
 does not affect the solution 
. For completeness, we summarize the foregoing results.
Theorem 1. Let g be an element in the open set .
- (a) 
 For any γ in , let  be the unipotent component of  in the LU factorization (32) of . Let the oscillating matrix  be defined by Formula (38). Then,  is the wave matrix of the -hierarchy with respect to the matrix  defined by Formula (39). The solution  of the -hierarchy satisfies for all  and all  that . - (b) 
 For any γ in , let  be the parabolic component of  in the LU factorization (35) of . Let the oscillating matrix  be defined by Formula (41). Then,  is the wave matrix of the strict -hierarchy with respect to the matrix  defined by Formula (42). The solution  of the strict -hierarchy satisfies for all  and all  that . 
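An analogous finite-truncation check can be run for part (b) of Theorem 1, now using the second factorization. Again, the conventions below are assumptions in the spirit of Formulas (40)–(42): γ(t)g = P(t)^{-1} Q_+(t) with P(t) lower triangular with invertible diagonal and Q_+(t) unipotent upper triangular, and the deformation P(t) S P(t)^{-1} is checked against the Lax equation with the strict upper triangular cut-off.

```python
# Analogous finite-truncation check for the strict hierarchy.  Assumed
# conventions (hypothetical, in the spirit of Formulas (40)-(42)):
# gamma(t) g = P(t)^{-1} Q_plus(t) with P lower triangular (invertible
# diagonal) and Q_plus unipotent upper triangular; the solution is
# Pi(t) = P(t) S P(t)^{-1} and the first Lax equation uses the strict
# upper triangular cut-off:  d/dt1 Pi = [ (Pi)_{strict upper}, Pi ].
import numpy as np
from scipy.linalg import expm

N = 8
S = np.eye(N, k=1)
rng = np.random.default_rng(2)
g = np.eye(N) + 0.1 * rng.standard_normal((N, N))

def lu_unipotent_upper(A):
    """A = L @ U with L lower triangular and U unipotent upper triangular."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, :] -= L[i, k] * U[k, :]
    D = np.diag(np.diag(U))
    return L @ D, np.linalg.solve(D, U)

def Pi(t1, t2=0.3):
    gamma = expm(t1 * S + t2 * (S @ S))
    Lfac, _ = lu_unipotent_upper(gamma @ g)   # Lfac = P(t)^{-1}
    P = np.linalg.inv(Lfac)
    return P @ S @ np.linalg.inv(P)

t1, h = 0.5, 1e-5
dPi = (Pi(t1 + h) - Pi(t1 - h)) / (2 * h)
M = Pi(t1)
C1 = np.triu(M, 1)                            # strict upper triangular cut-off
print(np.max(np.abs(dPi - (C1 @ M - M @ C1))))  # ~ 0 up to finite-difference error
```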
 In the sequel, we need the decomposition 
, where the subspaces 
 of 
B and their complements 
 are defined by
	  Each 
 splits as follows with respect to 
:
	  Next, we have a more detailed look at the matrix coefficients of the dressing matrices constructed in Theorem 1. Consider 
, with 
, and a 
. Then, 
, with
	  Hence, all the matrix coefficients 
 belong to the analytic functions 
 on 
. If 
, then 
 and 
 with 
 and 
. In particular, we have
	  Using the decomposition (
43) for 
 and 
, we obtain for all 
 the finite dimensional LU factorization 
, which enables you to express each 
 and 
 as the quotient of two polynomial expressions in 
. For a number of matrix coefficients this can be seen directly: for example, the first column of 
 has to equal that of 
. So, 
 The LU factorization of 
 shows that
	  In particular, we obtain that for 
, 
. With these data, one can prove by induction on the size of the matrix that all other matrix coefficients of 
 and 
 are determined uniquely and have the mentioned form. This can be seen as follows: consider the matrix
	  Now, all 
 are nonzero and we apply the induction hypothesis to the submatrix 
 to obtain the unique coefficients 
 and 
 with 
i and 
j varying between 0 and 
n. Now, we know already 
 and 
 so that we merely have to find 
 and 
. For the first set, we have
      and since 
 is invertible, this determines the first set completely and the whole matrix 
. The product of the last row of 
 with the second column of 
 yields the equation 
 and that determines 
 uniquely. From the product of the last row of 
 with the third column of 
 one obtains the equation 
, which fixes 
 and so on. Thus, we obtain the remaining coefficients from the second set. This procedure can best be carried out by a computer if you want to see how the coefficients evolve. Note that the matrix 
 also has the property that all its matrix coefficients are quotients of polynomial expressions in 
. Now, we go back to the dressing matrices 
 and 
 from Theorem 1. Then, 
 and 
 and taking the inverse in both cases maintains the property that we want. So there holds
Theorem 2. The matrix coefficients of the dressing matrices  and  from Theorem 1 are quotients of polynomial expressions in the analytic functions .
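A small symbolic example conveys the flavour of Theorem 2: for a generic matrix, the entries of both triangular factors produced by Gaussian elimination without pivoting are quotients of polynomial expressions (ratios of minors) in the entries of the factored matrix; in the situation of the theorem that matrix is γ(t)g, whose entries are analytic in the flow parameters.

```python
# Symbolic 3x3 illustration of the mechanism behind Theorem 2: the entries of
# the triangular factors are quotients of polynomial expressions in the
# entries of the factored matrix.
import sympy as sp

n = 3
a = sp.Matrix(n, n, lambda i, j: sp.Symbol('a_%d%d' % (i, j)))

L, U = sp.eye(n), a.copy()
for k in range(n - 1):                      # Gaussian elimination, no pivoting
    for i in range(k + 1, n):
        L[i, k] = sp.cancel(U[i, k] / U[k, k])
        U[i, :] = (U[i, :] - L[i, k] * U[k, :]).applyfunc(sp.cancel)

assert (a - L * U).applyfunc(sp.cancel) == sp.zeros(n, n)
print(L[2, 1])   # (a_00*a_21 - a_01*a_20)/(a_00*a_11 - a_01*a_10): a ratio of 2x2 minors
print(U[2, 2])   # equals det(a) divided by the leading 2x2 minor
```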
Remark 3. For the -hierarchy and its strict version, the result in Theorem 2 is the analogue of the property that the dressing operators for KP and strict KP, as constructed in [29,31], have coefficients that depend meromorphically on the group of commuting flows.
Let 
 denote the subspace of compact operators in 
. In the group 
, we consider
	  Clearly, 
 is contained in 
. Note that for each 
 the operator 
 is a Fredholm operator of index zero. Using the fact that 
 is a two-sided ideal in 
 and some properties of Fredholm operators [
44], one shows that 
 is a group. One can see 
 as the analogue for the 
-hierarchy and its strict version of the restricted linear group 
 used to construct solutions of the KP hierarchy and its strict version in the Hilbert setting, see [
29,
31]. For, there holds
Proposition 3. The group  lies in the open set .
 Proof.  We prove the statement in the proposition by using the geometry of 
. Since 
B has the approximation property, the finite-dimensional operators 
 in 
 are dense in 
. So it suffices to prove the statement for a 
 with 
 Let 
 be the notation for the topological dual of 
B. Recall that the elements of 
 are the image of the map 
 defined by:
		As vectors in 
B can be approximated arbitrarily closely by vectors with a finite number of nonzero coordinates, we may assume that all the 
 have that property and then we have reduced the claim to proving the statement for elements of 
 that decompose for some 
 with respect to the splitting 
 as
		The matrix of each element 
 we split similarly:
		Then, the matrix of the operator 
 has the form
        where 
 Let 
 be the matrix of the identity map in 
. Note that the map from 
 defined by
        is a continuous surjection. Hence, if we find a nonempty open subset of 
 such that for each vector 
 in that open set, it holds that
        with 
 as an invertible upper triangular 
-matrix and 
 a unipotent lower triangular matrix of the same size, then we have on an open subset of 
        and this is the decomposition we are looking for. Clearly, in order that we have the desired splitting of 
, the condition
        for the vector 
 is also necessary. We will show by induction on 
N that there are 
N nonzero polynomials 
 such that for all 
 in the complement of the union of the zero sets of all the 
 one has the decomposition 
. For 
, the matrix 
 is upper triangular and the desired decomposition holds for all 
. Now, we take 
; we split off the first row of 
 in 
 as follows:
        where 
 is the matrix of the identity map in 
 We focus for the moment on the product of the last two matrices in (
45). Since the first column of 
 is nonzero, the polynomial 
 is nonzero. Now, we work on the complement of the zero set of 
, so 
 is invertible. Define for all 
 the polynomials 
. Note that 
 Then, the product of the last two matrices in (
45) is equal to
        where each 
 and all 
 with 
 and 
 defined by 
. Next, we push the top row of the right matrix to the right
		The matrix 
 at the right has determinant 
 and will be part of 
. Next, we move the matrix 
 in product (
45) to the right by using
        where 
 is the column of length 
N equal to 
 with 
. The matrix 
 will be part of 
. Thus, we have reduced the problem to the product
        where the matrix at the right has determinant 
. The induction hypothesis gives us then the sequence of nonzero polynomials 
, so that on the complement of the union of all their zeros we have the desired decomposition. This proves the claim in the proposition.    □
 From Theorem 1 and Proposition 3, we may conclude
Corollary 2. The manifold  describes solutions of the -hierarchy and the manifold  describes solutions of the strict -hierarchy. As such, these manifolds are in the present context the analogues of the Grassmann manifold Gr(B) and its cover, the flag variety , used in [29,31] to construct solutions of the KP hierarchy and the strict KP hierarchy.