A Proof of the Standard Completeness for the Involutive Uninorm Logic

In this paper, we solve a long-standing open problem in the field of fuzzy logics, namely, the standard completeness of the involutive uninorm logic IUL. In fact, we present a uniform method of density elimination for several semilinear substructural logics. In particular, density elimination is proved for IUL; the standard completeness of IUL then follows as a lemma, by virtue of previous work by Metcalfe and Montagna.


Introduction
The problem of the completeness of Łukasiewicz infinite-valued logic (Ł, for short) was posed by Łukasiewicz and Tarski in the 1930s. It was twenty-eight years later that it was solved syntactically by Rose and Rosser [1]. At almost the same time, Chang [2] developed a theory of algebraic systems for Ł, called MV-algebras, with the aim of making MV-algebras correspond to Ł as Boolean algebras correspond to classical two-valued logic. Chang [3] subsequently gave another proof of the completeness of Ł by means of his MV-algebras.
It was Chang who observed that the key role in the structure theory of MV-algebras is played not by locally finite MV-algebras but by linearly ordered ones. This observation was formalized by Hájek [4], who showed the completeness of his basic fuzzy logic (BL, for short) with respect to linearly ordered BL-algebras. Starting from the structure of BL-algebras, Hájek [5] reduced the problem of the standard completeness of BL to the provability of two formulas in BL. Here and hereafter, by standard completeness we mean that a logic is complete with respect to algebras with lattice reduct [0, 1]. Cignoli et al. [6] subsequently proved the standard completeness of BL, i.e., that BL is the logic of continuous t-norms and their residua.
Hájek's approach to fuzzy logic was extended by Esteva and Godo in [7], where the authors introduced the logic MTL, which aims at capturing the tautologies of left-continuous t-norms and their residua. The standard completeness of MTL was proved by Jenei and Montagna in [8]; the major step is to embed linearly ordered MTL-algebras into dense ones, at a time when the structure of MTL-algebras was still unknown.
Esteva and Godo's work was further advanced by Metcalfe and Montagna [9], who introduced the uninorm logic UL and the involutive uninorm logic IUL, which aim at capturing the tautologies of left-continuous uninorms and their residua and those of involutive left-continuous ones, respectively. Recently, Cintula and Noguera [10] introduced semilinear substructural logics, which are substructural logics complete with respect to linearly ordered models. Almost all well-known families of fuzzy logics, such as Ł, BL, MTL, UL and IUL, belong to the class of semilinear substructural logics.
Metcalfe and Montagna's method for proving standard completeness for UL and its extensions is proof-theoretic in nature and consists of two key steps. First, they extended UL with the density rule (D) of Takeuti and Titani [11], in which the eigenvariable p does not occur in Γ, A, B or C, and proved that the logics with (D) are complete with respect to algebras with lattice reduct [0, 1]. Second, they gave a syntactic elimination of (D), formulated as a rule of the corresponding hypersequent calculus.
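The rule referred to above can be sketched as follows; this is the standard formulation of the Takeuti–Titani density rule from the literature (a reconstruction, not the paper's own lost display), first in Hilbert style and then as a hypersequent rule:

```latex
% Density rule (D), Hilbert style: p is a fresh variable not occurring
% in A, B or C; eliminating it yields the direct implication.
\[
\frac{\vdash (A \to p) \lor (p \to B) \lor C}{\vdash (A \to B) \lor C}\,(D)
\]
% Hypersequent formulation, as used in density elimination:
\[
\frac{\mathcal{G} \mid \Gamma \Rightarrow p \mid \Sigma, p \Rightarrow \Delta}
     {\mathcal{G} \mid \Gamma, \Sigma \Rightarrow \Delta}\,(D)
\]
```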
Hypersequents are a natural generalization of sequents; they were introduced independently by Avron [12] and Pottinger [13] and have proved particularly suitable for logics with prelinearity [9,14]. Following the spirit of Gentzen's cut elimination, Metcalfe and Montagna succeeded in eliminating the density rule for GUL and several extensions of GUL by induction on the height of a derivation of the premise, shifting applications of the rule upwards, but they failed for GIUL and therefore left it as an open problem.
Several works are relevant to the standard completeness of IUL. In an attempt to prove it, we generalized Jenei and Montagna's method [15] for IMTL in [16], but our effort was only partially successful. The subtle reason why this does not work for UL and IUL seems to be the failure of the finite model property for these systems [17]. Jenei [18] constructed several classes of involutive FLe-algebras in order, as he said, to gain a better insight into the algebraic semantics of the substructural logic IUL and into the long-standing open problem of its standard completeness. Ciabattoni and Metcalfe [19] introduced the method of density elimination by substitutions, which is applicable to a general class of (first-order) hypersequent calculi but fails in the case of GIUL.
We reconsidered Metcalfe and Montagna's proof-theoretic method in order to investigate the standard completeness of IUL, because they proved the standard completeness of UL by this method, whereas we cannot prove such a result by Jenei and Montagna's model-theoretic method. In order to prove density elimination for GUL, they proved that a generalized density rule (D_1) is admissible for GUL, subject to two constraints on the form of G_0: (i) n, m ⩾ 1 and λ_i ⩾ 1 for some 1 ⩽ i ⩽ n; (ii) p does not occur in Γ_i, ∆_i, Π_j, Σ_k for i = 1, …, n, j = 1, …, m, k = 1, …, o. We may regard (D_1) as a procedure whose input and output are the premise and conclusion of (D_1), respectively. We denote the conclusion of (D_1) by D_1(G_0) when its premise is G_0. Observe that Metcalfe and Montagna succeeded in defining a suitable conclusion for an almost arbitrary premise in (D_1), but this seems impossible for GIUL (see Section 3 for an example). We therefore define the following generalized density rule (D_0) for GL ∈ {GUL, GIUL, GMTL, GIMTL} and prove its admissibility in Section 9.
Theorem 1 (Main theorem). Let n, m ⩾ 1 and suppose that p does not occur in G′, Γ_i, ∆_i, Π_j or Σ_j for all 1 ⩽ i ⩽ n, 1 ⩽ j ⩽ m. Then the strong density rule (D_0) is admissible in GL.
In proving the admissibility of (D_1), Metcalfe and Montagna imposed a restriction on the proof τ of G_0, i.e., they converted τ into an r-proof. The reason they need an r-proof is that they impose constraint (i) on G_0. We may picture the restriction on τ and the constraints on G_0 as the two pans of a balance: one is strong if the other is weak, and vice versa. Observe that we select the weakest form of G_0 in (D_0) that guarantees the validity of (D); it is then natural that we must impose the strongest restriction on the proof τ of G_0. But it seems extremely difficult to prove the admissibility of (D_0) along this route. In order to overcome this difficulty, we first of all choose Avron-style hypersequent calculi as the underlying systems (see Appendix A.1). Let τ be a cut-free proof of G_0 in GL. Starting with τ, we construct a proof τ* of G | G* in a restricted subsystem GL_Ω of GL by systematic novel manipulations in Section 4. Roughly speaking, each sequent of G is a copy of some sequent of G_0, and each sequent of G* is a copy of some contraction sequent in τ. In Section 5, we define the generalized density rule (D) in GL_Ω and prove that it is admissible. Then, starting with G | G* and its proof τ*, we construct a proof of G_I in GL_Ω such that each sequent of G_I is a copy of some sequent of G_0. Then ⊢_GL_Ω D(G_I) by the admissibility of (D), and ⊢_GL D_0(G_0) by Lemma 29. Hence the density elimination theorem holds in GL. In particular, the standard completeness of IUL follows by Theorem 62 of [9].
G_I is constructed by eliminating the (pEC)-sequents in G | G* one by one. In order to control the process, we introduce the set I = {H^c_{i_1}, ⋯, H^c_{i_m}} of (pEC)-nodes of τ* and the set of branches relative to I, and construct G_I such that G_I does not contain (pEC)-sequents lower than any node in I, i.e., S^c_j ∈ G_I implies that H^c_j is not lower than H^c_i for any H^c_i ∈ I. This procedure is called the separation algorithm of branches; in it we introduce another novel manipulation, which we call the derivation-grafting operation, in Sections 7 and 8.

Preliminaries
In this section, we recall the basic definitions and results involved, which are mainly from [9]. Substructural fuzzy logics are based on a countable propositional language with formulas built inductively as usual from a set VAR of propositional variables, binary connectives ⊙, →, ∧, ∨, and constants ⊥, ⊺, t, f, with the definable connective ¬A := A → f.

Definition 1. ([9,12]) A sequent is an ordered pair (Γ, ∆) of finite (possibly empty) multisets of formulas, which we denote by Γ ⇒ ∆. Γ and ∆ are called the antecedent and succedent, respectively, of the sequent, and each formula in Γ and ∆ is called a sequent-formula. A hypersequent G is a finite multiset of the form Γ_1 ⇒ ∆_1 | ⋯ | Γ_n ⇒ ∆_n, where each Γ_i ⇒ ∆_i is a sequent, called a component of G, for 1 ⩽ i ⩽ n. If each ∆_i contains at most one formula for i = 1, …, n, then the hypersequent is single-conclusion; otherwise it is multiple-conclusion.

Definition 2. Let S be a sequent and G = S_1 | ⋯ | S_m a hypersequent. We say that S ∈ G if S is one of S_1, ⋯, S_m.

Notation 1. Let G_1 and G_2 be two hypersequents. We assume from now on that all set terminology refers to multisets, adopting the conventions of writing Γ, ∆ for the multiset union of Γ and ∆, A for the singleton multiset {A}, and λΓ for the multiset union of λ copies of Γ for λ ∈ N. By G_1 ⊆ G_2 we mean that S ∈ G_2 for all S ∈ G_1 and that the multiplicity of S in G_1 is not more than that of S in G_2. We use G_1 = G_2, G_1 ∩ G_2, G_1 ∪ G_2, G_1 ∖ G_2 with their standard meanings for multisets by default, and we will declare when we use them for sets. We sometimes write S_1 | ⋯ | S_m as {S_1, ⋯, S_m}, and G | S | ⋯ | S (n copies of S) as G | S^n (or G | {S}^n).
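Since all set terminology in Notation 1 refers to multisets, the conventions can be made concrete with Python's `collections.Counter` (an illustrative sketch; the names `sequent`, `hypersequent` and `hs_subseteq` are ours, not the paper's):

```python
from collections import Counter

# A sequent Gamma => Delta as a pair of multisets of formulas (strings here);
# sorting gives a canonical, hashable representation.
def sequent(antecedent, succedent):
    return (tuple(sorted(antecedent)), tuple(sorted(succedent)))

# A hypersequent is a finite multiset of sequents:
# Counter maps each sequent to its multiplicity.
def hypersequent(*sequents):
    return Counter(sequents)

# G1 ⊆ G2: every sequent of G1 occurs in G2 with at least its G1-multiplicity.
def hs_subseteq(g1, g2):
    return all(g2[s] >= n for s, n in g1.items())

s1 = sequent(["A"], ["B"])
s2 = sequent(["A", "A"], ["B"])     # multiset antecedent: two copies of A
g = hypersequent(s1, s1, s2)        # s1 occurs with multiplicity 2 in g
print(hs_subseteq(hypersequent(s1), g))      # True
print(hs_subseteq(hypersequent(s2, s2), g))  # False: g has only one copy of s2
```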

Definition 3. ([12]) A hypersequent rule is an ordered pair consisting of a sequence of hypersequents G_1, ⋯, G_n, called the premises (upper hypersequents) of the rule, and a hypersequent G, called the conclusion (lower hypersequent), written G_1 ⋯ G_n / G. If n = 0, then the rule has no premises and is called an initial sequent.
The single-conclusion version of a rule adds the restriction that both the premises and conclusion must be single-conclusion; otherwise the rule is multiple-conclusion.
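As concrete instances of Definition 3, the external structural rules used later in the paper, external weakening (EW) and external contraction (EC), have the standard hypersequent form (sketched here from the literature cited above, not reproduced from the paper's own displays):

```latex
\[
\frac{\mathcal{G}}{\mathcal{G} \mid \Gamma \Rightarrow \Delta}\,(EW)
\qquad
\frac{\mathcal{G} \mid \Gamma \Rightarrow \Delta \mid \Gamma \Rightarrow \Delta}
     {\mathcal{G} \mid \Gamma \Rightarrow \Delta}\,(EC)
\]
```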

Definition 8. ([12]) A derivation τ of a hypersequent G from hypersequents G_1, ⋯, G_n in a hypersequent calculus GL is a labeled tree with the root labeled by G, leaves labeled by initial sequents or by some of G_1, ⋯, G_n, and, for each node labeled G′_0 with parent nodes labeled G′_1, ⋯, G′_m (where possibly m = 0), G′_1 ⋯ G′_m / G′_0 is an instance of a rule of GL. We need the following definition to give each node of τ an identification number, which is used in Construction 3 to differentiate the sequents of a hypersequent in a proof.

Definition 11. A rule is admissible for a calculus GL if, whenever its premises are derivable in GL, so is its conclusion.

Proof of the Main Theorem: A Computational Example
In this section, we present an example to illustrate the proof of the main theorem.
G_0 is a theorem of IUL, and a cut-free proof τ of G_0 is shown in Figure 1, where we use an additional rule. Note that we denote the three applications of (EC) in τ by (EC)_1, (EC)_2 and (EC)_3, and the three applications of (⊙_r) by (⊙_r)_1, (⊙_r)_2 and (⊙_r)_3.
Symmetry 2019, xx, 5

It also supports the validity of the generalized density rule (D_0) of Section 1, as an instance of (D_0).

By applying (D) to free combinations of all sequents in
Our task is to construct ρ, starting from τ. The tree structure of ρ is more complicated than that of τ. Compared with UL, MTL and IMTL, there is no one-to-one correspondence between the nodes of τ and those of ρ.
Following the method given by G. Metcalfe and F. Montagna, we need to define a generalized density rule for IUL. We denote this expected, as yet unknown, rule by (D_x) for convenience. Then D_x(H) must be definable for all H ∈ τ. However, we could not find a suitable way to define D_x(H_××) and D_x(H_×) for the nodes H_×× and H_× in τ (see Figure 1). This is the biggest difficulty we encounter in the case of IUL, and it is what makes density elimination for IUL hard to prove. A possible way is to define D_x(⇒ p, p, ¬A ⊙ ¬A | p, p ⇒ A ⊙ A) as ⇒ t, A ⊙ A, ¬A ⊙ ¬A. Unfortunately, the latter is not a theorem of IUL.
Notice that the two upper hypersequents of (⊙_r)_3, containing ⇒ p, ¬A and p, p ⇒ A ⊙ A, are permissible inputs of (D_x). Why is H_×× an invalid input? One reason is that the two applications (EC)_1 and (EC)_2 cut off two sequents A ⇒ p, so that two p's disappear in all nodes lower than the upper hypersequent of (EC)_1 or (EC)_2, including H_××. This makes the occurrences of p in H_×× incomplete. We then perform the following operation in order to obtain complete occurrences of p in H_××.
Step 1 (preprocessing of τ). Firstly, we replace H with H | S′ for all relevant nodes H. Then we construct a proof without (EC), which we denote by τ_1, as shown in Figure 3. We call such manipulations sequent-inserting operations; they eliminate the applications of (EC) in τ.
However, we also cannot define D_x(H′_××). The reason is that the origins of the p's in H′_×× are indistinguishable if we regard all leaves of the form p ⇒ p as the origins of the p's occurring in inner nodes. For example, for the two occurrences of p in ⇒ p, p, ¬A ⊙ ¬A ∈ H′_××, we do not know which p comes from the left subtree of τ_1(H′_××) and which from the right subtree. We then perform the following operation in order to make all occurrences of p in H′_×× distinguishable. We assign a unique identification number to each leaf of the form p ⇒ p ∈ τ_1 and transfer these identification numbers from the leaves to the root, as shown in Figure 4. We denote the resulting proof of G | G* by τ*, where each sequent of G* is a copy of some external contraction sequent in an (EC)-node of τ. We call such manipulations eigenvariable-labeling operations; they enable us to trace eigenvariables in τ.
Then all occurrences of p in τ* are distinguishable, and we regard them as distinct eigenvariables (see Definition 18(i)). Firstly, by selecting p_1 as the eigenvariable and applying (D) to G | G*, we get G′. Secondly, by selecting p_2 and applying (D) to G′, we get the next hypersequent. Repeating this, we finally get D(G | G*). We define such iterative applications of (D) as the D-rule (see Definition 20). Lemma 10 shows that ⊢_GL D(G | G*). A miracle happens here! The difficulty that we encountered in GIUL is overcome by converting τ_1 into τ*. Why do we assign a unique identification number to each p ⇒ p ∈ τ_1? Because we would return to the same situation as that of τ_1 if we assigned the same index to all p ⇒ p ∈ τ_1, or replaced p_3 ⇒ p_3 and p_4 ⇒ p_4 by p_2 ⇒ p_2 in τ*.
Note that D(G | G*) = H_1. So we have built up a one-to-one correspondence between the proof τ* of G | G* and that of H_1. Observe that no sequent in G* is a copy of any sequent in G_0. In the following steps, we work on eliminating these sequents in G*.
Step 2 (extraction of elimination rules). We select A ⇒ p_2 as the focus sequent in H^c_1 in τ* and keep A ⇒ p_1 unchanged from H^c_1 downward to G | G* (see Figure 4). We thus extract a derivation from A ⇒ p_2 by pruning some sequents (or hypersequents) in τ*, which we denote by τ*_{H^c_1 : A⇒p_2}, as shown in Figures 5 and 6. Notice that we assign new identification numbers to the new occurrences of p. Next, we apply this derivation and construct a proof τ^(1)_1 of G | G*, as shown in Figure 7, where ¬A ⊙ ¬A | A ⇒ p_5 | p_5, p_6 ⇒ A ⊙ A contains more copies of sequents from G* and seems more complex than G | G*. We will present a unified method to tackle this in the following steps. Other derivations are shown in Figures 8-11.

Step 3 (separation of one branch). A proof τ^(2)_1 of G | G* is constructed by applying these derivations sequentially, as shown in Figure 12, where G″ ≡ G^(1). Then it is permissible to cut off the redundant part. We regard this manipulation as a constrained contraction rule applied to G^(2), which guarantees its validity under the stated condition. A change happens here! There is now only one sequent in the result which is a copy of a sequent in G*; the result is simpler than G | G*. So we are making progress. The above procedure is called the separation of G | G* as a branch of H^c_1 and is reformulated as follows (see Section 7 for details).
The separation of G | G* as a branch of H^c_2 is constructed by a similar procedure, as follows.
Step 4 (separation algorithm of multiple branches). We will prove ⊢_GIUL D_0(G_0) directly, i.e., only the major step of Theorem 2 is presented in the following (see Appendix A.5.4 for a complete illustration). By reassigning identification numbers to occurrences of p, a great change happens here! We have eliminated all sequents which are copies of sequents in G* and converted G | G* into G_I, in which each sequent is a copy of some sequent in G_0.
Then ⊢_GIUL D(G_I) by Lemma 8. So we have built up a one-to-one correspondence between the proof of G_I and that of H_0, i.e., the proof of H_0 can be constructed by applying (D) to the proof of G_I. The major steps of constructing G_I are shown in the figure below. In the above example, D(G_I) = D_0(G_0), but that is not always the case. In general, we can prove that ⊢_GL D_0(G_0) if ⊢_GL D(G_I), as shown in the proof of the main theorem on page 42. This example shows that the proof of the main theorem essentially presents an algorithm for constructing a proof of D_0(G_0) from τ.

Preprocessing of Proof Tree
Let τ be a cut-free proof in GL of the hypersequent G_0 of the main theorem, which exists by Lemma 1. Starting with τ, we construct in this section a proof τ* which contains no application of (EC) and has several further properties.
(ii) is proved by a procedure similar to that of (i) and is omitted.
We introduce two new rules by Lemma 2.
are called the generalized (∧_r) and (∨_l) rules, respectively. Now we begin to process τ as follows.
Step 1. A proof τ_1 is constructed by inductively replacing all applications of the relevant rules. The replacements in Step 1 are local, and the root of τ_1 is also labeled by G_0.

Definition 13. We may sometimes regard G′ / G′ as a structural rule of GL and denote it by (ID_Ω) for convenience. The focus sequent for (ID_Ω) is undefined.
Proof. The proof is by induction on n. Since the replaced inference is an application of the same rule (II) (or (I)), τ′(H_n | S^{m−1}) is a proof.

Definition 14. The manipulation described in Lemma 3 is called a sequent-inserting operation.
Clearly, the number of (EC*)-applications in τ′ is smaller than in τ_1. Next, we continue to process τ.
Step 2. By repeatedly applying sequent-inserting operations, we construct a proof of G_0 | G*_0 in GL without applications of (EC*), and denote it by τ_2.
We need the following construction to eliminate applications of (EW) in τ 2 .
and τ_{2 H:H′}(⟨H_{k+1}⟩_{H:H′}) is constructed by combining the trees. Proof. The proof is by induction on k. For the base step, the construction is direct. For the induction step, suppose that ⟨H_k⟩_{H:H′} and τ_{2 H:H′}(⟨H_k⟩_{H:H′}) have been constructed such that (i) and (ii) hold for some 0 ⩽ k ⩽ n − 1. There are two cases to consider.
The case of applications of a two-premise rule is proved by a similar procedure and omitted.

Definition 15. The manipulation described in Construction 1 is called a derivation-pruning operation.
Then Lemma 4 shows the desired property. Now, we continue to process τ as follows. Step 3. By repeatedly applying the procedure above, we construct a proof τ_3. Step 4. A proof τ_4 is then constructed by replacing, top-down, p in each Γ′.
Repeatedly applying the procedure above, we construct a proof. Define two operations σ_l and σ_r on sequents. Then G_2 | G*_2 is obtained by applying σ_l and σ_r to some designated sequents in G_1 | G*_1.

Definition 16. The manipulation described in Step 4 is called the eigenvariable-replacing operation.
Step 5. A proof τ* is constructed from τ_4 by inductively assigning a unique identification number to each occurrence of p in τ_4, as follows.
A unique identification number, which is a positive integer, is assigned to each leaf of the form p ⇒ p in τ_4, corresponding to p_k ⇒ p_k in τ*. The other nodes of τ_4 are processed as follows.
Suppose that all occurrences of p in G_1 | Γ, λp ⇒ µp, ∆ have been assigned identification numbers and have the indicated form. All applications of (∨_lw) are processed by a procedure similar to that of (∧_rw).
All applications of (→_l) are processed by a procedure similar to that of (⊙_r).
Suppose that G′ and G″ have the indicated forms.

Definition 17. The manipulation described in Step 5 is called the eigenvariable-labeling operation.
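The leaf-labeling idea of Step 5 can be sketched on a bare proof tree: give each leaf of the form p ⇒ p a fresh positive integer and transfer the numbers from the leaves toward the root. The sketch below is illustrative only; the `Node` class and `assign_ids` are our names, and the real operation also rewrites the p-occurrences inside each rule application:

```python
from itertools import count

class Node:
    def __init__(self, label, children=()):
        self.label = label          # e.g. "p => p" or a rule conclusion
        self.children = list(children)
        self.p_ids = []             # identification numbers of p-occurrences

def assign_ids(root, counter=None):
    """Assign a fresh id to every leaf 'p => p'; inner nodes collect the
    ids of their premises, so ids are transferred from leaves to root."""
    counter = counter or count(1)
    if not root.children:
        if root.label == "p => p":
            root.p_ids = [next(counter)]    # p_i => p_i
    else:
        for child in root.children:
            assign_ids(child, counter)
            root.p_ids.extend(child.p_ids)
    return root

# Two p => p leaves feeding one conclusion: the root sees ids [1, 2],
# so its two occurrences of p remain distinguishable.
leaf1, leaf2 = Node("p => p"), Node("p => p")
root = assign_ids(Node("=> p, p, ...", [leaf1, leaf2]))
print(root.p_ids)   # [1, 2]
```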
Notation 4. Let G_2 and G*_2 be converted to G and G* in τ*, respectively. Then τ* is a proof of G | G*.
In the preprocessing of τ, designated sequents are revised in Step 4, and each occurrence of p in τ_4 is assigned a unique identification number in Step 5. The whole preprocessing is depicted in Figure 13.
Note that there are no identification numbers for some occurrences of the variable p in τ*; in this case we denote the focus sequent by S^c_{i1} and the others by S^c_{iv}. We then write G* as {S^c_{iv}}_{v=2,…,m_i; i=1,…,N}. We call H^c_i and S^c_{iu} the i-th pseudo-(EC) node of τ* and a pseudo-(EC) sequent, respectively. We abbreviate pseudo-EC as pEC. Let H ∈ τ*; by S^c_i ∈ H we mean that S^c_{iu} ∈ H for some u. If some H^c_i is undefined, it has no effect on our argument to treat all such S^c_i as members of G, so we assume that all H^c_i are defined. Now, we locally replace each instance of G′ / G′ (ID_Ω) in τ* with G′ and denote the resulting proof also by τ*; this makes no essential difference to the original but simplifies subsequent arguments. We introduce the system GL_Ω as follows.
Definition 18. GL_Ω is a restricted subsystem of GL such that:

(i) p is designated as the unique eigenvariable, by which we mean that it is not used to build up any formula containing logical connectives and is used only as a sequent-formula.
(ii) Each occurrence of p on each side of every component of a hypersequent in GL is assigned a unique identification number i and written as p_i in GL_Ω. The initial sequent p ⇒ p of GL has the form p_i ⇒ p_i in GL_Ω.
(iii) Each sequent S of GL of the form Γ, λp ⇒ µp, ∆ has the corresponding indexed form. Here, the l and r in v_l and v_r indicate the left-hand side and right-hand side of a sequent, respectively.
(iv) G″ is a copy of G′ if they are disjoint and there exist two bijections σ_l and σ_r such that G″ can be obtained by applying σ_l to the antecedents of the sequents in G′ and σ_r to their succedents, i.e., G″ = σ_r ○ σ_l(G′).
(v) A closed hypersequent G′ | G″ | G‴ can be contracted to G′ | G″ in GL_Ω under the condition that G″ and G‴ are closed and G‴ is a copy of G″. We call this the constrained external contraction rule and denote it by (EC_Ω). Furthermore, if there do not exist two closed hypersequents H′, H″ ⊆ G′ | G″ such that H″ is a copy of H′, then we call it the fully constrained contraction rule.

(vi) (CUT) and certain other rules of GL are forbidden. (EC), (∧_r) and (∨_l) of GL are replaced with (EC_Ω), (∧_rw) and (∨_lw) in GL_Ω, respectively.
(vii) G_1 | S_1 and G_2 | S_2 are closed and disjoint for each two-premise rule.

(viii) p does not occur in Γ or ∆ for each initial sequent Γ, ⊥ ⇒ ∆ or Γ ⇒ ⊺, ∆, and p does not act as the eigenvariable there.

Let τ be a cut-free proof of G_0 in GL and τ* the tree resulting from the preprocessing of τ.
Proof. Claims (i) to (iv) follow immediately from Step 5 of the preprocessing of τ and Definition 18. Claim (v) follows from Notation 4 and Definition 18. Only (vi) is proved, as follows; the remaining case is proved by a similar procedure and omitted.

The Generalized Density Rule (D) for GL Ω
In this section, we define the generalized density rule (D) for GL Ω and prove that it is admissible in GL Ω .
Clearly, [S]_G = S if v_l(S) = v_r(S) or p does not occur in S. The following construction gives a procedure for constructing [S]_G for any given S ∈ G.

Construction 2. Let G and S be as above. If the condition of Definition 18 holds, then let G_{k+1} = G_k | S_{k+1}; otherwise the procedure terminates and n := k.
(iii) This holds immediately from Construction 2 and (i). (iv) The proof is by induction on k.

Definition 20. Let G = S_1 | ⋯ | S_r and let S_l be of the indicated form. (iii) We call (D) the generalized density rule of GL_Ω, whose conclusion D(G) is defined by (ii) when its premise is G.
Lemma 8. (Appendix A.5.1) If there exists a proof τ of G in GL_Ω, then there exists a proof of D(G) in GL, i.e., (D) is admissible in GL_Ω.
Proof. We proceed by induction on the height of τ. For the induction step, the following cases are considered.
Case 1. Other rules of type (I) are processed by a procedure similar to the above.
All applications of (→ l ) are processed by a procedure similar to that of ⊙ r and omitted.
All applications of (∨_lw) are processed by a procedure similar to that of (∧_rw) and omitted. Hence we may assume, without loss of generality, the stated form. The following two lemmas are corollaries of Lemma 8.

Lemma 9. If there exists a derivation of G_0 from G_1, ⋯, G_r in GL_Ω, then there exists a derivation of D(G_0) from D(G_1), ⋯, D(G_r) in GL.
Lemma 10. Let τ be a cut-free proof of G_0 in GL and τ* the proof of G | G* in GL_Ω resulting from the preprocessing of τ. Then ⊢_GL D(G | G*).

Extraction of Elimination Rules
In this section, we investigate Construction 1 further in order to extract more derivations from τ*. Any two sequents in a hypersequent seem independent of one another, in the sense that they can be contracted into one only by (EC), when it is applicable. Note that one-premise logical rules modify just one sequent of a hypersequent, while two-premise rules associate a sequent in one hypersequent with a sequent in a different hypersequent.
τ* (or any proof without (EC_Ω) in GL_Ω) has an essential property, which we call the distinguishability of τ*: any variable, formula, sequent or hypersequent occurring at a node H of τ* inevitably occurs, in some form, at every H′ < H.
Let H ≡ G′ | S′ | S″ ∈ τ*. If S′ is equal to S″ as sequents, then it may happen that τ*_{H:S′} is equal to τ*_{H:S″} as derivations. This would mean that both S′ and S″ are the focus sequent of one node in τ* when G*_{H:S′} ≠ S′ and G*_{H:S″} ≠ S″, contradicting the fact that each node has a unique focus sequent in any derivation. Thus we need to differentiate S′ from S″ for all G′ | S′ | S″ ∈ τ*. Define the node P(S′) ∈ τ* such that G′ | S′ | S″ ⩽ P(S′), S′ ∈ P(S′) and S′ is a principal sequent of P(S′). If P(S′) has a unique principal sequent, set N_{S′} := 0; otherwise set N_{S′} := 1 (or N_{S′} := 2) to indicate that S′ is one designated principal sequent (or, accordingly, the other) of an application such as (COM), (∧_rw) or (∨_lw). Then we may regard S′ as (S′; P(S′), N_{S′}). Thus S′ is always different from S″, either by P(S′) ≠ P(S″), or by P(S′) = P(S″) and N_{S′} ≠ N_{S″}. We formulate this in the following construction.

Construction 3. (Appendix A.5.2) A labeled tree τ**, which has the same tree structure as τ*, is constructed as follows.
Throughout the paper, we treat τ* as τ** without explicit mention of τ**. Note that in the preprocessing of τ, some logical applications could also be converted to (ID_Ω) in Step 3, and we then need to fix the focus sequent at each node H and subsequently assign valid identification numbers at each H′ < H by the eigenvariable-labeling operation.
Proof. The proof is by induction on i for 0 ⩽ i < n. Only (i) is proved, as follows; (ii) follows by a similar procedure and is omitted.
For the induction step, suppose that ⟨H_i⟩_{H:G_3} = ⟨H_i⟩_{H:G_1} ∩ ⟨H_i⟩_{H:G_2} for some 0 ⩽ i < n. Only the case of a one-premise rule is given in the following; the other cases are omitted. The case S′ ∉ ⟨H_i⟩_{H:G_2}, S′ ∈ ⟨H_i⟩_{H:G_1} is proved by a similar procedure and omitted.
Proof. (i) and (ii) follow immediately from Lemma 11.

Notation 6. We write the derivations above simply as τ*, for the sake of simplicity, for all 2 ⩽ u ⩽ m_j.

Proof. Then Lemma 13(vi) gives the claim for S^c_{i1}. We generalize this by introducing derivations from multiple premises in the following. In the remainder of this section, let I_l = {H^c_{l_1}, ⋯, H^c_{l_m(l)}} and I_r = {H^c_{r_1}, ⋯, H^c_{r_m(r)}}, which occur in the left subtree τ*(G′ | S′) and the right subtree, respectively. τ*_I is constructed by induction on |I|. The base case |I| = 1 is handled by Construction 1. For the induction case, suppose that derivations τ*_{I_l} of ⟨G | G*⟩_{I_l} from S^c_{l_1 1}, ⋯, S^c_{l_m(l) 1} and τ*_{I_r} of ⟨G | G*⟩_{I_r} from S^c_{r_1 1}, ⋯, S^c_{r_m(r) 1} have been constructed. Then τ*_I of ⟨G | G*⟩_I from S^c_{i_1 1}, ⋯, S^c_{i_m 1} is constructed as follows.
Construction 4. (Appendix A.5.2) (i)-(ii) are as before; (iii) the other nodes of τ*_I are built up by Construction 1(ii). The following lemma is a generalization of Lemma 13.

Lemma 16. The corresponding claim for Th(H) holds by a procedure similar to the above; the other claims hold immediately from Construction 4.

Proof. (i), (ii) and (iii) follow immediately from Lemma 16; (iv) holds by (i) and Lemma 13(vi).
Lemma 17(iv) shows that there exists no copy of S^c_{i_k} in G*_I for any 1 ⩽ k ⩽ m. We may thus regard these sequents as eliminated in τ*_I, and we therefore call τ*_I an elimination derivation. Then G′ and G″ are disjoint and there exist two suitable bijections; we denote the resulting derivation by τ*_{I′}.

Separation of One Branch
In the remainder of this paper, we assume that p occurs at most once in each sequent of G_0, as in the main theorem; let τ be a cut-free proof of G_0 in GL and τ* the proof of G | G* in GL_Ω resulting from the preprocessing of τ. Then v_l(S) + v_r(S) ⩽ 1 for all S ∈ G, which plays a key role in discussing the separation of branches.
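The condition v_l(S) + v_r(S) ⩽ 1 simply says that the eigenvariable p occurs at most once in S, counting both sides. With v_l and v_r as occurrence counters in the sense of Definition 18(iii), a minimal sketch (our function names, not the paper's):

```python
# v_l(S), v_r(S): number of occurrences of the eigenvariable p in the
# antecedent and succedent of a sequent S = (antecedent, succedent).
def v_l(s, p="p"):
    return s[0].count(p)

def v_r(s, p="p"):
    return s[1].count(p)

# Sequents of G in this section satisfy v_l(S) + v_r(S) <= 1:
s_ok = (["A"], ["p"])           # p occurs once, on the right
s_bad = (["p"], ["p", "B"])     # two occurrences: excluded here
print(v_l(s_ok) + v_r(s_ok))    # 1
print(v_l(s_bad) + v_r(s_bad))  # 2
```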
Definition 24. By S′ ∈_c G′ we mean that there exists some copy of S′ in G′, or that H^c_j ∥ H^c_i for all H^c_i ∈ I.
imply H^c_j ∉ I. In this section, let I = {H^c_i} and I = {⌈S^c_i⌉_I}; we give an algorithm to eliminate all S^c_j ∈ ⌈S^c_i⌉_I satisfying H^c_j ⩽ H^c_i.

Construction 5. (Appendix A.3) A sequence of hypersequents G^(q)_I and their derivations τ^(q)_I from ⌈S^c_i⌉_I, for all q ⩾ 0, are constructed inductively as follows.

For the base case, define G^(0)_I and τ^(0)_I. Suppose that G^(q)_I and τ^(q)_I have been constructed for some q ⩾ 0. If there exists no S^c_j ∈ G^(q)_I such that H^c_j ⩽ H^c_i, then the procedure terminates and we define J_I to be q; otherwise define H^c_{i_q} such that S^c_{i_q} ∈ G^(q)_I, H^c_{i_q} ⩽ H^c_i, and H^c_j ⩽ H^c_{i_q} for all S^c_j ∈ G^(q)_I with H^c_j ⩽ H^c_i. Let S^c_{i_q 1}, ⋯, S^c_{i_q m_q} be all copies of S^c_{i_q} in G^(q)_I, and apply the corresponding derivation to S^c_{i_q 1}, ⋯, S^c_{i_q m_q} in G^(q)_I, respectively. Notice that we assign new identification numbers to the new occurrences of p in τ*_{S^c_{i_q u}} for all 0 ⩽ q ⩽ J_I − 1, 1 ⩽ u ⩽ m_q.
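The iteration of Construction 5 can be sketched abstractly: while some (pEC)-node below H^c_i remains, pick the greatest such node, eliminate it, and count the rounds; the count at termination is J_I. In the sketch below the nodes are abstracted to integers ordered by ⩽, and `construction5` and `eliminate` are our illustrative stand-ins for the paper's operations:

```python
# Abstract skeleton of the iteration in Construction 5. The (pEC)-nodes are
# abstracted to integers ordered by <=, and `eliminate` stands in for the
# paper's replacement of the copies attached to the chosen node.
def construction5(nodes, h_ci, eliminate):
    q = 0
    while True:
        candidates = [h for h in nodes if h <= h_ci]
        if not candidates:
            return q               # the procedure terminates; J_I := q
        h_ciq = max(candidates)    # h <= h_ciq for every remaining candidate
        nodes = eliminate(nodes, h_ciq)
        q += 1

# Toy elimination: each round simply removes the chosen node.
def eliminate(nodes, h):
    return [x for x in nodes if x != h]

print(construction5([1, 2, 5, 7], h_ci=5, eliminate=eliminate))  # 3
```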
Lemma 18. (i) H^c_{i_0} = H^c_i and H^c_{i_{q+1}} < H^c_{i_q} for all 0 ⩽ q ⩽ J_I − 2; the remaining claims follow by Lemma 13(vi).

The derivation ⟨τ^(0)_I, ⋯, τ^(J_I−1)_I⟩ is constructed by linking up the conclusion of each derivation to the premise of its successor in the sequence, as shown in Figure 14.
Then H^c_{i_q} ≰ H^c_j by Lemma 13(vi). Thus H^c_i ≰ H^c_j, since H^c_{i_q} ⩽ H^c_i. Hence H^c_j ∥ H^c_i.
Lemma 18 shows that Construction 5 yields a derivation τ^(J_I)_I in which all S^c_j ∈ ⌈S^c_i⌉_I satisfying H^c_j ⩽ H^c_i have been eliminated. We generalize this procedure as follows.
G^(q)_{H:H_l} and its derivation τ^(q)_{H:H_l} are constructed by procedures similar to those of Construction 5, such that H^c_j ≰ H for all S^c_j ∈ G^(J)_{H:H_l}, where τ^(0)_{H:H_1} := τ*_{H:H_1} as defined by Construction 1. We sometimes write J_I and J_{H:H_l} simply as J. Then the following lemma clearly holds.
i1 and H c i ≰ H, then G (J) H∶H ′′ by suitable assignments of identification numbers to new occurrences of p in constructing τ (J) . Proof. Part (i) is proved by a procedure similar to that of Lemma 18(iii) and (iv), and omitted.
(ii) Since S c i1 is the focus sequent of H c i , it is revised by some rule at a node lower than H c i . Thus S c i ∈ H is some copy of S c i1 by H c i > H. Hence S c i has the form S c iu for some u ≥ 2. Therefore it is transferred downward to G G * , i.e., S c i ∈ G G * . Then G H∶H ′′ for some q ≥ 0. Then all copies H∶H ′ H ′′ are divided into two subsets H∶H ′′ . Thus we can construct G (q+1) . Note that the requirement that distinct occurrences of p have distinct identification numbers is imposed only on one derivation. We permit G (q+1) H∶H ′ in the proof above, which has no essential effect on the proof of the claim.
Lemma 19(v) shows that G (J) I could be constructed by applying τ (J) I . The condition H c i q+1 < H c i q in Construction 5 is not necessary, but it makes the termination of the procedure obvious.
Proof.The proof follows immediately from Lemma 18.
(i) For any sequent-formula A of S ′ , define Â to be the sequent S of G (J) I such that A is a sequent-formula of S or a subformula of a sequent-formula of S.
(ii) Let S ′ be of the form A 1 , ⋯, A n ⇒ B 1 , ⋯, B m ; define Ŝ′ to be the hypersequent which consists of all distinct sequents among Â 1 , ⋯, Â n , B̂ 1 , ⋯, B̂ m . (iii) Let H ′ be of the form S 1 ⋯ S m ; define Ĥ′ to be Ŝ1 ⋯ Ŝm .
(iv) We call H ′ separable if Ĥ′ ⊆ c G , and say that it is separated into Ĥ′ .

Note that τ (J) I is a derivation without (EC Ω ) in GL Ω . Then we can extract elimination derivations from it by Construction 1.
τ (J) I {G ′ ∶H ′ } denotes the derivation from H ′ which is extracted from τ (J) I . The following two lemmas show that Constructions 5 and 6 force some sequents in ⌈S c i ⌉ I or H ′ to be separable.

I
and there is a unique copy of S ′′ in G (J) . We write ⩽ τ (J) and ⩽ τ * respectively as ⩽ and ⩽ for simplicity. Since G (J) H∶H ′ and G * (J) H∶H ′ , then H c j ≰ H by Construction 6. We prove that H c j H ′ in τ (J) H∶H ′ and H c j ≰ H. Thus we assume that S c j ∉ G * H∶H ′ in the following. Then, by Lemma 18(iv), there exists some τ * H∶H ′ . Therefore G * (J) . (ii) Clearly, G (J) H∶G ′′ H ′ and τ (J) H∶G ′′ H ′ agree except for some applications of (ID Ω ) and identification numbers of some p ′ s.
τ (J) I {H 1 ∶H ′ } for the same reason as that of (i). Then S ′ , S ′′ are separated into Ŝ′ and Ŝ′′ in τ (J) I , respectively. Then S ′′ G (J) . Proof. Parts (i) and (ii) are proved by a procedure similar to that of Lemma 21 and are omitted.
Definition 27. The skeleton of τ I , which we denote by τ I , is constructed by replacing each subderivation τ * G b S c iq u in τ I by a single node. Definition 28. We call Construction 5 together with Construction 7 the separation algorithm of one branch, and Construction 6 the separation algorithm along H.

Separation Algorithm of Multiple Branches
In this section, let for all 1 ⩽ k < l ⩽ m. We will generalize the separation algorithm of one branch to that of multiple branches. Roughly speaking, we give an algorithm to eliminate all and the fully constrained contraction rules, say G 2 , where τ I is the skeleton of τ I defined in Definition 27. Then ∈ τ I and it is constructed by applying the separation algorithm along H V I to H, and is an upper hypersequent of either . Note that in Claim (i), the bold j in I j , I j or I j indicates the w-tuple (j 1 , ⋯, j w ) in S c j 1 , ⋯, S c j w . Claim (iv) states the final aim of Theorem 2, i.e., that there exists no S c j ∈ G I such that H c j ⩽ H c i for some H c i ∈ I. It is almost impossible to construct τ I in a non-recursive way. Thus we use Claims (i)-(iii) of Theorem 2 to characterize the structure of τ I in order to construct it recursively.
Proof. τ I is constructed by induction on I . For the base case, let I = 1. Then τ I is constructed by Constructions 5 and 7. Here, Claim (i) holds by Lemma 20(ii), Lemma 18(i) and Lemma 13(vi); Claim (ii) holds by Lemma 18(i); Claim (iii) is clear; and Claim (iv) holds by Lemma 18(iv).
For the induction case, let . By the induction hypothesis, = ∅ by Lemmas 11 and 12. Then Claims (ii) and (iii) follow directly from the induction hypothesis.

•
For Claim (iv), let S c j ∈ G I . It follows from the induction hypothesis that H c j H c i for all H c i ∈ I l and S c j ∈ G * I j l for some τ * . Then H c j H c i for all H c i ∈ I by the definition of branches to I. Thus we assume that S c j ∈ G * I j l for some τ * . Case 2. S ′′ ∉ ⟨G ′′ S ′′ ⟩ I jr for all τ * I jr ∈ τ I r . Then τ I ∶= τ I r and G I ∶= G I r . This case is proved by a procedure similar to that of Case 1 and omitted.
Case 3. S ′ ∈ ⟨G ′ S ′ ⟩ I j l for some τ * , S c j r2 , ⋯, S c j rv }. Before proceeding to deal with Case 3, we present the following properties of τ I , which are derived from Claims (i)-(iv) and applicable to τ I l or τ I r under the induction hypothesis.
Lemma 25. (1) τ I is an m-ary tree and τ I is a binary tree; (2) for some H c , and . Proof. The proof is by induction on n. Let n = 1; then w 1 = 1 by Lemma 25(5), and ) ⩽ H V I for some 1 < i ⩽ n; then w i = 1 by Lemma 25(5). Then ) ⩽ H V I by w i = 1. Thus w i−1 = 1 by Lemma 25(5).
The module of τ I at G 2 , which we denote by τ I∶G 2 , is defined as follows: each node of τ I∶G 2 is determined bottom-up, starting with G 2 ; its root is G 2 , and its leaves may be branches, leaves of τ * , or lower hypersequents of ⟨EC * Ω ⟩-applications. In contrast, each node of τ * H∶H ′ is determined top-down, starting with H ′ ; its root is a subset of G G * and its leaves contain H ′ and some leaves of τ * . with τ * I j l ⋃ I jr in post-order. However, the ordinary post-order traversal algorithm cannot be used to construct τ I l (τ * I jr ), because the tree structure of τ I l (τ * I jr ) is generally different from that of τ I l at some nodes. Thus we construct a sequence of trees for all q ⩾ 0 inductively as follows.
For the base case, we mark all ⟨EC * Ω ⟩-applications in τ I l as unprocessed and define the marked derivation so obtained to be τ (0) . For the induction case, let τ . If all ⟨EC * Ω ⟩-applications are marked as processed, we first delete the root of the tree resulting from the procedure, then apply ⟨EC * Ω ⟩ to the root of the resulting derivation if it is applicable (otherwise we add an ⟨ID Ω ⟩-application to it), and finally terminate the procedure. Otherwise we select one of the , and perform the following steps to construct τ (q+1) , which is obtained by locally revising τ (q) and leaving the other nodes of τ (q) unchanged, for some m q+1 ⩾ 1.

Remark 3. By the two superscripts ○ and ⋅ in ⟨EC * Ω ⟩, we indicate the unprocessed state and the processed state, respectively. This procedure determines an ordering for all ⟨EC * Ω ⟩-applications in τ I l , and the subscript q + 1 indicates that it is the (q + 1)-th application of ⟨EC * Ω ⟩ in a post-order traversal of τ I l . G ○○ q+1 and G ○ q+1 (G ⋅⋅ q+1 and G ⋅ q+1 ) are the premise and conclusion of the corresponding application, respectively.
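As a side illustration, the post-order ordering of ⟨EC * Ω ⟩-applications described in Remark 3 can be mimicked on an abstract tree whose marked nodes stand for those applications. The Node class and the labels below are hypothetical and serve only to show the ordering; the actual construction operates on hypersequent derivations.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    marked: bool = False          # stands in for an <EC*_Omega>-application
    children: list = field(default_factory=list)

def postorder_marks(root):
    """Number the marked nodes of a tree in post-order, mimicking how
    Remark 3 orders the <EC*_Omega>-applications of tau_{I_l}: the
    (q+1)-th processed application is the (q+1)-th marked node met when
    children are visited before their parent."""
    order = []
    def visit(n):
        for c in n.children:
            visit(c)
        if n.marked:
            order.append(n.label)
    visit(root)
    return order
```

Visiting children before parents is exactly what makes the root's application the last one processed, matching the termination step of the construction.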
Step 1 (Delete). Take the module τ (q) by its choice criteria, τ . Thus it is a derivation. If , delete all internal nodes of τ (q) . We denote the structure resulting from the deletion operation above by τ (q) , which is a tree by Lemma 26. Thus it is also a derivation.
Step 2 (Update). For each G ○ q ′ ∈ τ (q) with S ′ ∈ ⟨G ′ S ′ ⟩ I j l for some τ * I l , we have q ′ ⩽ q and ⟨EC * Ω ⟩ ⋅ q ′ has been processed. Thus Claims (b) and (c) hold for τ (q) I l (G ⋅ q ′ ) by the induction hypothesis. Otherwise all applications between G l ′ and H are one-premise rules by Lemma 26. Then H c . Since ∂ τ I l (H) ⩾ G ′ S ′ by Lemma 28 and H c j G ′ S ′ for each S c j ∈ G † by Lemma 24, then G † ⊆ H as a side-hypersequent of H. Thus this step updates the revision of G ⋅⋅ q ′ downward to G l ′ .
Let m ′ be the number of G ○ q ′ satisfying the above conditions, τ (q) , respectively. Then τ (q) (2) satisfying the replacement conditions above, τ (q) . Step 4 (Separation along H V I ). Apply the separation algorithm along H V I to G l ′′′ and denote the resulting derivation by τ (q) I l ∶G ○○ q+1 (4), whose root is labeled by G ⋅⋅ q+1 . Then all G * H ′ , S ′ and S ′′ are separable in τ (q) I l ∶G ○○ q+1 (4) by a procedure similar to that of Lemma 21. Let S ′ and S ′′ be separated into Ŝ′ and Ŝ′′ , respectively. By Claim (iii), G (J) , where m q+1 ∶= m ′ + m ′′ .
Define the tree resulting from Step 5 to be τ (q+1) . Then Claims (a), (b) and (c) hold for q + 1 by the above construction.
Finally, we construct a derivation of , ⋯, G b rv S c j rv in GL Ω , which we denote by τ I l (τ ), whose premises are G b r2 S c j r2 , ⋯, G b rv S c j rv and whose conclusion is . Stage 2. Construction of routine τ I r (τ I l (τ * I jr )). A sequence τ (q) I r of trees for all q ⩾ 0 is constructed inductively as follows. τ for all τ * I j r . Step 1 (Delete). τ (q) I r ∶G ○○ q+1 and τ (q) I r ∶G ○○ q+1 (1) are defined as before.
Step 2 (Update). For all G ○ q ′ ∈ τ (q) I r ∶G ○○ q+1 (1) which satisfy and S ′′ ∈ ⟨G ′′ S ′′ ⟩ I j r for some τ * . Then Claims (a) and (b) are proved by a procedure as before. Let m ′ be the number of G ⋅ q ′ satisfying the above conditions. τ . Step 3 (Replace). All τ * I jr ∈ τ Ω(q) I r ∶G ○○ q+1 (2) are processed in post-order. If H c i ↝ H c j for all H c i ∈ I j r and H c j ∈ I l , it proceeds by the following procedure; otherwise it remains unchanged. Let τ * I jr be in the form . Then there exists the unique , and let S c j be the focus sequent of some . Let m ′′ be the number of τ * I jr ∈ τ Ω(q) I r ∶G ○○ q+1 (2) satisfying the replacement conditions as above, τ , where . Step 4 (Separation along H V I ). Apply the separation algorithm along H V I to G r ′′′ and denote the resulting derivation by τ (q) I r ∶G ○○ q+1 (4), whose root is labeled by G ⋅⋅ q+1 .
Step 5 (Put back). Replace τ (q) . Define the tree resulting from Step 5 to be τ ; then Claims (a), (b) and (c) hold for q + 1 by the above construction. Finally, we construct a derivation of . • For Claims (i) and (ii): Let . Thus Claim (i) holds, and Claim (ii) holds by Lemma 25(5) and Lemma 19(i). Note that Lemma 25(5) is independent of Claims (ii) to (iv).
and the induction hypothesis from τ * . The case of τ * I j built up from τ * I jr is proved by a procedure similar to the above and omitted. Hence S c j ∈ G (J) . Hence H c j H c i for all H c i ∈ I. This completes the proof of Theorem 2. Definition 31. The manipulation described in Theorem 2 is called a derivation-grafting operation.

The Proof of the Main Theorem
Recall the setting of the main theorem. Thus ⊢ GL D 0 (G 0 ) holds by applying (EW) to G ′′ .
(i ′ ), (ii ′ ) and (iii ′ ) are proved by procedures respectively similar to those of (i), (ii) and (iii), and omitted. For the induction step, suppose that m ⩾ 1 and that there exists G I such that ⊢ GL Ω G I for all I ⩽ m − 1. Then there exist } and } and the claim holds clearly. Otherwise there , where we define . Then G I ′ is constructed by applying the separation algorithm of multiple branches (or of one branch if

Final Remarks and Open Problems
Recently, we have generalized the method described in this paper to the non-commutative substructural logic GpsUL * in [20]. This result shows that GpsUL * is the logic of pseudo-uninorms and their residua and answers a question posed by Metcalfe, Olivetti, Gabbay and Tsinakis in [21,22].
It has often been the case in the past that metamathematical proofs of standard completeness have corresponding algebraic ones, and vice versa. In particular, Baldi and Terui [23] gave an algebraic proof of the standard completeness of UL. A natural problem is whether there is an algebraic proof corresponding to our proof-theoretic one. It seems difficult to obtain one using the insights gained from the approach described in this paper, because the ideas and syntactic manipulations introduced here are complicated and specialized. In addition, Baldi and Terui [23] also mentioned some open problems. Whether our method could be applied to their problems is another research direction.
On 21 March 2014, I found the way to deal with the example in Section 3. I finished the one-branch algorithm in Section 7 in late April 2014, and devised the multi-branch algorithm in Section 8 in early November 2014. Since I submitted my paper to Transactions of the American Mathematical Society on 20 January 2015, it has been reviewed successively by Annals of Pure and Applied Logic, Fuzzy Sets and Systems, and the Journal of Logic and Computation. For a mathematician, the greatest anxiety is that his work is never taken seriously by his academic circle during his career, but that after his death someone would say: sir, your proof is wrong. Acknowledgments: I am grateful to Lluis Godo, Arnon Avron, Jean-Yves Girard, George Metcalfe and Agata Ciabattoni for valuable discussions. I would like to thank the anonymous reviewers for carefully reading earlier versions of this article and for many instructive suggestions.

Conflicts of Interest:
The author declares no conflict of interest.

Notations
The symbol G 1 temporarily denotes a complex hypersequent G 2 for convenience. Then the separation algorithm τ H c 1 ∶G G * is abbreviated as , where 2 ′ and 3 ′ are abbreviations of A ⇒ p 5 and p 5 , p 6 ⇒ A ⊙ A, respectively. We also write 2 ′ and 3 ′ as 2 and 3, respectively, for simplicity. Then the whole separation derivation is given as follows.
1 2 3 , where ∅ is an abbreviation of G ′′ on page 14 and means that all sequents in it are copies of sequents in G 0 . Note that the simplified notations become intractable when we decide whether ⟨EC Ω ⟩ is applicable to the resulting hypersequents. If no application of ⟨EC Ω ⟩ is used, all resulting hypersequents fall into the set {1 2 3 ⋯ 3 11 , and we call such a procedure the separation algorithm. It is the starting point of the separation algorithm. We introduce branches in order to tackle the case of multiple-premise separation derivations, for which it is necessary to apply (EC Ω ) to the resulting hypersequents.
τ * (G ′ S ′ ) of τ * is either contained as a whole in τ * I jr or not contained in it. It is such a division of I into I l and I r that makes the whole algorithm possible.
Claim (i) of Theorem 2 asserts that H c i ≰ H c j for all S c j ∈ G * I j and H c i ∈ I. It guarantees that τ * I j is not far from the final aim of Theorem 2 but rather close to it, if we define some complexity measure to calculate it. If H c i ⩽ H c j , the complexity of G * I j is greater than or equal to that of ⌈S c i ⌉ I under such a definition of complexity, and thus such an application of τ * I j is at least redundant. Claim (iii) of Theorem 2 guarantees the validity of Step 4 of Stages 1 and 2.
The tree structure of the skeleton of τ I l (τ * I jr ) can be obtained by deleting some nodes H ∈ τ I l satisfying ∂ τ I l (H) ⩽ H V I . The same is true for τ I if τ I l (τ * I jr ) is treated as a rule or a subroutine whose premises are the same as those of τ * I jr . However, it is extremely difficult to imagine or describe the structure of τ I if one wants to expand it as a normal derivation, i.e., a binary tree.
All syntactic manipulations in constructing τ I are performed on the skeletons of τ I l or τ I r . The structure of the proof of Theorem 2 is depicted in Figure A1.

We define P(H) ∶= ∑ k=n k=0 2 k b k and call it the position of H in τ.
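For illustration, if b k ∈ {0, 1} records which premise is taken at depth k on the path from the root to H, then P(H) = ∑ k 2 k b k is just a binary encoding of that path. The sketch below makes this reading concrete; the interpretation of b k as a branch bit is an assumption made here for illustration, not the paper's Definition 10 itself.

```python
def position(path):
    """Encode a root-to-node path as an integer, in the spirit of
    P(H) = sum_{k=0}^{n} 2^k * b_k: path[k] is 0 or 1 according to
    which premise is taken at depth k (an illustrative assumption)."""
    return sum(b << k for k, b in enumerate(path))
```

Distinct paths yield distinct integers, so the position determines the node uniquely within the tree.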
single node S c iu for all u ⩾ 2.
Claims (i) to (v) follow immediately from Construction 1 and Lemma 4. (vi) Since S c j ∈ G * S c i1 ⊆ G G * , S c j has the form S c ju for some u ≥ 2 by Notation 5. Then G * . Suppose that H c i ⩽ H c j . Then S c j is transferred from H c j downward to H c i and lies in the side-hypersequent of H c i by Notation 5 and G G * < H c i ⩽ H c j . Thus {S c i1 } ⋂ {S c j } = ∅ at H c i , since S c i1 is the unique focus sequent of H c i . Hence S c j ∉ G * S c i1 by Lemma 11 and (iii), a contradiction; therefore H c i ≰ H c j . We impose a restriction on (I I) such that each sequent in H ′ is different from S ′ or S ′′ ; otherwise we treat it as an (EW)-application. Since S c j ∈ G * H∶H ′ ⊆ G G * , S c j has the form S c ju for some u ≥ 2 by Notation 5. Thus G * S c j = S c j . Suppose that H c j > H. Then S c j is transferred from H c j downward to H. Thus S c j ∈ H ′ by G * S c j = S c j ∈ G * H∶H ′ and Lemma 11. Hence S c j = S ′ or S c j = S ′′ , a contradiction with the restriction above. Therefore H c j ⩽ H or H c j H. (ii) Let S c j ∈ G * H∶G ′′ . If H c j > H then S c j ∈ H by Proposition 2(i), and thus S c j ∈ G ′′ by Lemma 11; hence H

for all 1 ⩽ u ⩽ m j by Lemma 11, i.e., H c i H c j , a contradiction, and hence S ′ ∈ ⟨G ′ S ′ ⟩ S c i1 . Lemma 13(ii) shows that τ *

Construction 7. Apply (EC * Ω ) to G (J) I and denote the resulting hypersequent by G I and its derivation by τ I . It is possible that (EC * Ω ) is not applicable to G (J) I , in which case we apply ⟨ID Ω ⟩ to it for the regularity of the derivation. Lemma 20. (i) ⌈S c i ⌉ I G I ⟨τ I ⟩, G I is closed and H c j H c i for all S c j ∈ G I ; (ii) τ I is constructed by applying elimination rules, say,

iq u in τ I . Lemma 23. The skeleton τ I is a linear structure with the lowest node G I and the highest node ⌈S c i ⌉ I . Proof. It holds because all τ * G b S c iq u and (EC * Ω ) in τ I are one-premise rules.

H c r m(r) }, which occur in the left subtree τ * (G ′ S ′ ) and the right subtree τ * (G ′′ S ′′ ) of τ * (H V I ), respectively. Then m(l) + m(r) = m. Let I l = {⌈S c l 1 ⌉ I , ⋯, ⌈S c l m(l) ⌉ I } and I r = {⌈S c r 1 ⌉ I , ⋯, ⌈S c r m(r) ⌉ I }. Suppose that derivations τ I l of G I l and τ I r of G I r are constructed such that Claims (i) to (iv) hold. There are three cases to be considered in the following. Case 1. S ′ ∉ ⟨G ′ S ′ ⟩ I j l for all τ * I j l ∈ τ I l . Then τ I ∶= τ I l and G I ∶= G I l . • For Claim (i), let τ * I j l ∈ τ I l and S c j ∈ G * I j l ; I j ⊆ I. (3) holds by Proposition 1(iii), (2) and H ⩾ H ′ . Proof. Part (1) is clear and (2) immediately follows from Lemma 26. Now, we continue to deal with Case 3 in the following. Stage 1. Construction of Subroutine τ I l (τ * I jr ). Roughly speaking, τ I l (τ * I jr ) is constructed by replacing some nodes τ * I j l ∈ τ I l as τ I l ∶G ○○ q+1 by Claim (a).

are processed in post-order. If H c i ↝ H c j for all H c i ∈ I j l and H c j ∈ I j r , it proceeds by the following procedure; otherwise it remains unchanged. Let τ * . G ′ S ′ for all 1 ⩽ k ⩽ u by Lemma 28, G b lk S c j lk > G l ′′ . Firstly, replace τ * I j l with τ * I j l ∪I jr . We may rewrite the roots of τ *

○⋅ q+1 are defined as those of Stage 1. Then we perform the following steps to construct τ (q+1) such that Claims (a) and (b) are the same as those of Stage 1 and (c) . Otherwise τ * I j is built up from τ * I jr ∈ τ I r , τ * I j l or τ * I j l ∪I j r ∈ τ I l (τ * I jr ) by keeping their focus and principal sequents unchanged and allowing their side-hypersequents to be modified, which has no effect on the discussion of Claim (ii); then Claim (ii) holds for τ I by the induction hypothesis on Claim (ii) of τ I l or τ I r . If τ * I j is from τ * I j l ∪I j r , then S ′ ∈ ⟨G ′ S ′ ⟩ I j l and S ′′ ∈ ⟨G ′′ S ′′ ⟩ I jr by the choice of τ * I j l and τ * I jr at Stage 1. By the induction hypothesis, H

I
and H c i ∈ I. Only (1) is proved as follows; (2) follows by a similar procedure and is omitted. Let S c j ∈ G I l r . Then S c j ∈ G I l and S c j ∉ S ′′ G ∶H ′ { Ŝ′ S ′′ } by the definition of G I l r . By a procedure similar to that of Claim (iv) in Case 1, we get H c j ≰ H V I ; assume that S c j ∈ G * I j l for some τ * I j l ∈ τ I l and let G ′ S ′ ≰ H c j in the following. Suppose that G ′′ S ′′ ⩽ H c j . Then S c j ∈ G * H V I ∶G ′′ and S ′ ∈ ⟨G ′ S ′ ⟩ I j l by S c j ∈ G * I j l , and H c j H c i for all S c j ∈ G I and H c i ∈ I. Lemma 30. There exists G I such that ⊢ GL Ω G I for all I ⊆ {H c 1 , ⋯, H c N }. Proof. The proof is by induction on m. For the base step, let m = 0; then I = ∅, G I ∶= G G * and ⊢ GL Ω G I by Lemma 5(v).

The proof of Theorem 1:
Theorem 2 (or Lemma 20(i) for one branch). Let G I ∶= G I ′ ; then ⊢ GL Ω G I clearly. Let I = {H c 1 , ⋯, H c N } in Lemma 30. Then there exists G I such that ⊢ GL Ω G I , G I ⊆ c G G * and H c j H c i for all S c j ∈ G I and H c i ∈ I. Then ⊢ GL D(G I ) by Lemma 8. Suppose that S c j ∈ G I . Then H c j H c i for all H c i ∈ I. Thus H c j H c j by H c j ∈ I, a contradiction with H c j ⩽ H c j , and hence there does not exist S c j ∈ G I . Therefore G I ⊆ c G by G I ⊆ c G G * . By removing the identification number of each occurrence of p in G, we obtain the sub-hypersequent G 2 of G 2 G * 2 , which is the root of τ 4 resulting from Step 4 in Section 4. Then ⊢ GL D 0 (G 2 ) by ⊢ GL D(G I ) and G I ⊆ c G. Since G 2 is constructed by adding or removing some

by Lemma 29. This completes the proof of the main theorem. ◻
Theorem 3. Density elimination holds for all GL in {GUL, GIUL, GMTL, GIMTL}. Proof. It follows immediately from the main theorem.

Funding:
This research was funded by the National Natural Science Foundation of China (Grant Nos. 61379018, 61662044, 11571013 and 11671358).

i⟩.
X ∶= Y Define X as Y, for two hypersequents (sets or derivations) X and Y
G 0 The upper hypersequent of the strong density rule in Theorem 1, page 2
τ A cut-free proof of G 0 in GL, in Theorem 1, page 3
P(H) The position of H ∈ τ, Def. 10, Construction 3, pages 5, 24
⟨H k ⟩ H∶H ′ and τ 2 H∶H ′ (⟨H k ⟩ H∶H ′ ) Construction 1, page 15
G 2 H∶H ′ and τ 2 H∶H ′ Notation 3, page 16
τ * The proof of G G * in GL Ω resulting from preprocessing of τ, Notation 4, page 17
G G * The root of τ * corresponding to the root G 0 of τ, Notation 4, page 17
H c i The i-th (pEC)-node in τ * ; the superscript ′ c ′ means contraction, Notation 5, page 18
S c i1 The focus sequent of H c i , Notation 5, page 18
S c i or S c iu S c i1 or one copy of S c i1 , Notation 5, page 18
{H c 1 , ⋯, H c N } The set of all (pEC)-nodes in τ * , Notation 5, page 18
GL Ω A restricted subsystem of GL, Definition 18, page 18
[S] G , G ′ G The minimal closed unit of S and G ′ in G, respectively, Definition 19, page 19
(D) The generalized density rule of GL Ω , Definition 20, page 20
of I and that of H c i and H c j , Notation 7, page 26
I ′ = {S c i1u1 , ⋯, S c imum } A subset of (pEC)-sequents to I, Definition 22, page 27
I ′ = {G b1 S c i1u1 , ⋯, G bm S c imum } A set of closed hypersequents to I, Definition 22, page 27
⟨H⟩ I , τ * I and G * I The elimination derivation, Construction 4, Lemma 17, pages 26, 27
τ * I ′ The elimination rule, Definition 22, page 27
⌈S c i k ⌉ I A branch of H c i k to I, Definition 25, page 28
G The skeleton of τ I , Definition 27, page 32
∂ τ I (H) Theorem 2(ii), page 33
τ I∶G2 The module of τ I at G 2 , Definition 30, page 35
=⇒ p 2 , B B ⇒ p 4 , ¬A ⊙ ¬A A ⇒ p 3 p 3 , p 4 ⇒ A ⊙ A. We denote the derivation . Since we focus on sequents in G * in the separation algorithm, we abbreviate A ⇒ p 2

l , 2 2 3 ⋯ 3 m, 1 1 3 ⋯ 3 n 31 ⟩ 31 ⟩ 1 .
∶ l ≥ 0, m ≥ 0, n ≥ 0} and ∅ is never obtained. Appendix A.3. Why Do We Need the Separation of Branches? In Figure 11, p 1 and p 2 in the premise of p 1 , p 2 ⇒ A ⊙ A p 1 ⇒ C C, p 2 ⇒ A ⊙ A ⟨τ * S c ⟩ could be viewed as being tangled in one sequent p 1 , p 2 ⇒ A ⊙ A, but in the conclusion of ⟨τ * S c ⟩ they are separated into two sequents p 1 ⇒ C and C, p 2 ⇒ A ⊙ A, which are copies of sequents in G 0 . In Figure 5, p 2 in A ⇒ p 2 falls into ⇒ p 2 , B in the root of τ * H c 1 ∶A⇒p 2 , and ⇒ p 2 , B is a copy of a sequent in G 0 . The same is true for p 4 in A ⇒ p 4 in Figure 8. But this is not always the case. Lemma 13(vi) shows that in the elimination rule S i , each occurrence of p ′ s in S c 11 falls into a unique sequent which is a copy of a sequent in G 0 . Otherwise there exists S . Then each occurrence of p ′ s in S c 11 falls into a unique sequent in G , a copy of a sequent in G 0 . In such a case, we say that the occurrences of p ′ s in S c 11 are separated in G
denotes the subtree of τ rooted at H; (iv) τ determines a partial order ⩽ τ with the root as the least element. H 1 H 2 denotes H 1 ≰ τ H 2 and H 2 ≰ τ H 1 for any H 1 , H 2 ∈ τ. By H 1 = τ H 2 we mean that H 1 is the same node as H 2 in τ. We sometimes write ⩽ τ as ⩽; (v) An inference of the form G ′ S n G ′ S ∈ τ is called the full external contraction and denoted by (EC * ) if n ⩾ 2, G ′ S n is not a lower hypersequent of an application of (EC) whose contraction sequent is S, and G ′ S is not an upper one in τ. Definition 9. Let τ be a derivation of G and H ∈ τ. The thread Th τ (H) of τ at H is a sequence H 0 , ⋯, H n of node hypersequents of τ such that H 0 . constructed by replacing p 2 with p 1 , p 3 with p 5 and p 4 B B ⇒ p 3 , ¬A ⊙ ¬A.
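For illustration, the order ⩽ τ with the root as least element, and the induced incomparability of two nodes, can be checked on a tree given by a parent map; the node names and the representation below are hypothetical stand-ins for the nodes of a derivation.

```python
def leq(parent, h1, h2):
    """h1 <=_tau h2 iff h1 lies on the path from the root to h2
    (the root is the least element).  `parent` maps each node to its
    parent; the root maps to None."""
    node = h2
    while node is not None:
        if node == h1:
            return True
        node = parent[node]
    return False

def incomparable(parent, h1, h2):
    """H1 || H2 in the sense above: neither node is below the other."""
    return not leq(parent, h1, h2) and not leq(parent, h2, h1)
```

Two nodes are incomparable exactly when they lie in disjoint subtrees, which is the situation the separation algorithm aims to establish for the (pEC)-nodes.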
Proof. (i) is proved by induction on I . For the base step, let I = 1; then the claim holds clearly. For the induction step, let I ⩾ 2; then I l ⩾ 1 and I r ⩾ 1. Then S ′ ∈ ⟨G ′ S ′ ⟩ S c . We regard Construction 1 as a procedure F whose inputs are τ 2 , H, H ′ and whose output is τ 2 H∶H ′ . With such a viewpoint, we write τ 2 H∶H ′ as F H∶H ′ (τ 2 ). Then τ * I can be constructed by iteratively applying F to τ * , i.e., τ * I = F H c (⋯ F(τ * )⋯). Definition 22. We will use all τ * I ′ as rules of GL Ω and call them elimination rules. Further, we call S c i 1 u 1 , ⋯, S c i m u m focus sequents, all sequents in G * I ′ principal sequents, and G b 1 , ⋯, G b m side-hypersequents of τ * I ′ . Remark 2.
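The view of Construction 1 as a procedure F, with τ * I obtained by iterated application F(⋯ F(τ * )⋯), is a fold over a list of parameters, which can be sketched as follows; F and the parameter list below are illustrative stand-ins only, not the paper's actual construction.

```python
from functools import reduce

def iterate(F, params, tau):
    """Build tau*_I = F_{p_n}(... F_{p_1}(tau*) ...) by folding the
    procedure F over the parameter list.  Here each parameter p stands
    in for a pair H : H' of Construction 1, and tau for a derivation."""
    return reduce(lambda t, p: F(p, t), params, tau)
```

With a symbolic F that merely records its application, `iterate(F, ["a", "b"], "tau")` produces the innermost-first nesting F_b(F_a(tau)), matching the left-to-right order of the parameter list.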
then each (pEC)-sequent in G * I has the form S c jv for some 1 ⩽ j ⩽ N, 2 ⩽ v ⩽ m j by Proposition 2(ii).Then we introduce the following definition.
* H c i . We usually write ⩽ τ * I as ⩽.
r and H c i ∈ I l by S ′′ ∈ ⟨G ′′ S ′′ ⟩ I jr and Construction 4. For each τ * I jr ∈ τ I r above, we construct a derivation τ I l (τ * I jr ), in which one may regard τ I l as a subroutine and τ * I jr as its input, in the following Stage 1. Then a derivation τ I r (τ I l ) . All elimination rules used in constructing τ I l are extracted from τ * . Since τ * I jr is a derivation in GL Ω without (EC Ω ), we may extract elimination rules from τ * I jr , which we may use to construct τ I l (τ * I jr ).
I r satisfying S ′′ ∈ ⟨G ′′ S ′′ ⟩ • Claim (iii) holds by Step 4 at Stages 1 and 2. Note that in the whole of Stage 1, we treat {G b rk } v k=1 as a side-hypersequent. But it is possible that there exists S c j ∈ {G b rk } v k=1 such that H c j ⩽ H V I . Since we have not applied the separation algorithm to {G b rk } v k=1 in Step 4 at Stage 1, this could make Claim (iii) invalid. But it is not difficult to see that we simply move the separation of such S c j to Step 4 at Stage 2. Of course, we could move it to Step 4 at Stage 1, but that would make the discussion complicated.