Density Elimination for Semilinear Substructural Logics

We present a uniform method of density elimination for several semilinear substructural logics. In particular, density elimination for the involutive uninorm logic IUL is proved. The standard completeness of IUL then follows as a corollary, by virtue of previous work of Metcalfe and Montagna.


Introduction
The problem of the completeness of Łukasiewicz infinite-valued logic (Ł for short) was posed by Łukasiewicz and Tarski in the 1930s. It was twenty-eight years later that it was solved syntactically by Rose and Rosser [18]. At almost the same time, Chang [4] developed a theory of algebraic systems for Ł, called MV-algebras, with the aim of making MV-algebras correspond to Ł as Boolean algebras correspond to classical two-valued logic. Chang [5] subsequently gave another proof of the completeness of Ł by means of his MV-algebras.
It was Chang who observed that the key role in the structure theory of MV-algebras is played not by locally finite MV-algebras but by linearly ordered ones. The observation was formalized by Hájek [12], who showed the completeness of his basic fuzzy logic (BL for short) with respect to linearly ordered BL-algebras. Starting from the structure of BL-algebras, Hájek [13] reduced the problem of the standard completeness of BL to the provability in BL of two formulas. Here and hereafter, by standard completeness we mean that a logic is complete with respect to algebras with lattice reduct [0, 1]. Cignoli et al. [6] subsequently proved the standard completeness of BL, i.e., BL is the logic of continuous t-norms and their residua.
Hájek's approach to fuzzy logic was extended by Esteva and Godo in [9], where the authors introduced the logic MTL, which aims at capturing the tautologies of left-continuous t-norms and their residua. The standard completeness of MTL was proved by Jenei and Montagna in [15]; the major step there is to embed linearly ordered MTL-algebras into dense ones, even though the structure of MTL-algebras was as yet unknown.
Esteva and Godo's work was further promoted by Metcalfe and Montagna [16] who introduced the uninorm logic UL and involutive uninorm logic IUL which aim at capturing tautologies of left-continuous uninorms and their residua and those of involutive left-continuous ones, respectively. Recently, Cintula and Noguera [8] introduced semilinear substructural logics which are substructural logics complete with respect to linearly ordered models. Almost all well-known families of fuzzy logics such as Ł, BL, MTL, UL and IUL belong to the class of semilinear substructural logics.
Metcalfe and Montagna's method for proving standard completeness for UL and its extensions is proof-theoretic in nature and consists of two key steps. Firstly, they extended UL with the density rule (D) of Takeuti and Titani [19], where p does not occur in Γ, A, B or C, and proved that the logics with (D) are complete with respect to algebras with lattice reduct [0,1]. Secondly, they gave a syntactic elimination of (D), formulated as a rule of the corresponding hypersequent calculus.
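For the reader's convenience, the Takeuti–Titani density rule can be sketched in the form matching the side condition just stated (a reconstruction from [16,19]; the exact display there should be consulted):

```latex
\[
\frac{\Gamma \Rightarrow (A \to p) \lor (p \to B) \lor C}
     {\Gamma \Rightarrow (A \to B) \lor C}\;(D)
\qquad\text{where $p$ does not occur in $\Gamma$, $A$, $B$ or $C$}
\]
```

Intuitively, p acts as a fresh witness lying between A and B, and density elimination shows that such a witness is never essential.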
Hypersequents are a natural generalization of sequents; they were introduced independently by Avron [1] and Pottinger [17] and have proved particularly suitable for logics with prelinearity [2,16]. Following the spirit of Gentzen's cut elimination, Metcalfe and Montagna succeeded in eliminating the density rule for GUL and several extensions of GUL by induction on the height of a derivation of the premise, shifting applications of the rule upwards; but the argument failed for GIUL, which they therefore left as an open problem.
There are several relevant works on the standard completeness of IUL. In an attempt to prove the standard completeness of IUL, we generalized Jenei and Montagna's method for IMTL in [20], but our effort was only partially successful. The subtle reason why it does not work for UL and IUL seems to be the failure of the finite model property (FMP) for these systems [21]. Jenei [14] constructed several classes of involutive FL e -algebras in order, as he said, to gain a better insight into the algebraic semantics of the substructural logic IUL and into the longstanding open problem of its standard completeness. Ciabattoni and Metcalfe [7] introduced the method of density elimination by substitutions, which is applicable to a general class of (first-order) hypersequent calculi but fails in the case of GIUL.
We reconsidered Metcalfe and Montagna's proof-theoretic method in order to investigate the standard completeness of IUL, because they proved the standard completeness of UL by their method, whereas we cannot prove such a result by Jenei and Montagna's model-theoretic method. In order to prove density elimination for GUL, they proved that the following generalized density rule (D 1 ): is admissible for GUL, where they imposed two constraints on the form of G 0 : (i) n, m ⩾ 1 and λ i ⩾ 1 for some 1 ⩽ i ⩽ n and (ii) p does not occur in Γ i , ∆ i , Π j , Σ k for i = 1⋯n, j = 1⋯m, k = 1⋯o. In this paper, G 1 ≡ G 2 means that the symbol G 1 temporarily denotes a complex hypersequent G 2 , for convenience. We may regard (D 1 ) as a procedure whose input and output are the premise and conclusion of (D 1 ), respectively. We denote the conclusion of (D 1 ) by D 1 (G 0 ) when its premise is G 0 . Observe that Metcalfe and Montagna succeeded in defining a suitable conclusion for an almost arbitrary premise in (D 1 ), but this seems impossible for GIUL (see Section 3 for an example). We then define the following generalized density rule (D 0 ) for GL ∈ {GUL, GIUL, GMTL, GIMTL} and prove its admissibility in Section 9.
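The rule-as-procedure reading can be made concrete on the basic hypersequent density rule: from G | Γ ⇒ p | p, Σ ⇒ ∆ infer G | Γ, Σ ⇒ ∆. The following minimal Python sketch (our illustration only; it implements the plain rule (D), not the generalized rules (D 0 ) or (D 1 )) takes the premise as input and returns the conclusion:

```python
from collections import Counter

# A sequent is a pair (antecedent, succedent) of multisets (Counters);
# a hypersequent is a list of sequents.  This toy procedure applies the
# basic hypersequent density rule
#     G | Gamma => p | p, Sigma => Delta
#     ----------------------------------- (D)
#     G | Gamma, Sigma => Delta
def apply_density(hyperseq, p):
    left = right = None
    rest = []
    for ant, suc in hyperseq:
        if left is None and suc == Counter([p]) and ant[p] == 0:
            left = (ant, suc)            # component  Gamma => p
        elif right is None and ant[p] == 1 and suc[p] == 0:
            right = (ant, suc)           # component  p, Sigma => Delta
        else:
            rest.append((ant, suc))
    if left is None or right is None:
        raise ValueError("density rule not applicable")
    gamma = left[0]
    sigma = right[0] - Counter([p])      # delete the eigenvariable p
    delta = right[1]
    return rest + [(gamma + sigma, delta)]

# Premise  A => p | p => B   gives conclusion  A => B.
premise = [(Counter(["A"]), Counter(["p"])), (Counter(["p"]), Counter(["B"]))]
conclusion = apply_density(premise, "p")
```

Viewing (D 1 ) and (D 0 ) this way, the paper's question is for which shapes of input premise a suitable output conclusion is definable at all.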
Lemma 1.1 (Main Lemma). Let n, m ⩾ 1 and suppose that p does not occur in G ′ , Γ i , ∆ i , Π j or Σ j for all 1 ⩽ i ⩽ n, 1 ⩽ j ⩽ m. Then the strong density rule is admissible in GL.
In proving the admissibility of (D 1 ), Metcalfe and Montagna imposed a restriction on the proof τ of G 0 , i.e., they converted τ into an r-proof. The reason they need an r-proof is that they set constraint (i) on G 0 . We may picture the restriction on τ and the constraints on G 0 as the two pans of a balance: one is strong if the other is weak, and vice versa. Observe that in (D 0 ) we select the weakest form of G 0 that guarantees the validity of (D). It is then natural that we must place the strongest restriction on the proof τ of G 0 . But it seems extremely difficult to prove the admissibility of (D 0 ) along these lines.
In order to overcome this difficulty, we first of all choose Avron-style hypersequent calculi as the underlying systems (see Appendix 1). In Section 4, a proof τ * in a restricted subsystem GL Ω is built up from a given proof τ in GL by systematic novel manipulations such as the sequent-inserting, derivation-pruning, eigenvariable-replacing and eigenvariable-labeling operations. In Section 5, we define the generalized density rule (D) in GL Ω and prove that it is admissible. Further derivations are constructed from τ * by the derivation-pruning operation in Section 6 and applied to build up much more complicated derivations via the separation algorithm for one branch in Section 7. We give the separation algorithm for multiple branches via the derivation-grafting operation in Section 8 and prove the density elimination theorem for GL in Section 9. In particular, the standard completeness of IUL follows from Theorem 62 of [16].

Global notations
G 0 − Upper hypersequent of the strong density rule in Lemma 1.1.
τ − A cut-free proof of G 0 in GL, in Lemma 1.1.
P(H) − The position of H ∈ τ, Def. 2.13.
τ * − The proof of G G * in GL Ω resulting from preprocessing of τ, Notation 4.13.
G G * − The root of τ * corresponding to the root G 0 of τ, Notation 4.13.
H c i − The i-th (pEC)-node in τ * ; the superscript 'c' means contraction, Notation 4.14.
S c i1 − The focus sequent of H c i , Notation 4.14.
S c i or S c iu − S c i1 or one copy of S c i1 , Notation 4.14.
− A set of closed hypersequents to I, Def. 6.14.
X ∶= Y − Define X as Y, for two hypersequents (sets or derivations) X and Y.

Preliminaries
In this section, we recall the basic definitions and results involved, which are mainly from [16]. Substructural fuzzy logics are based on a countable propositional language with formulas FOR built inductively as usual from a set of propositional variables VAR, binary connectives ⊙, →, ∧, ∨, and constants ⊥, ⊺, t, f, with definable connectives: Definition 2.1. ([1,16]) A sequent is an ordered pair (Γ, ∆) of finite multisets (possibly empty) of formulas, which we denote by Γ ⇒ ∆. Γ and ∆ are called the antecedent and the succedent, respectively, of the sequent, and each formula in Γ and ∆ is called a sequent-formula. A hypersequent G is a finite multiset of the form Γ 1 ⇒ ∆ 1 ⋯ Γ n ⇒ ∆ n , where each Γ i ⇒ ∆ i is a sequent, called a component of G, for 1 ⩽ i ⩽ n. If ∆ i contains at most one formula for i = 1⋯n, then the hypersequent is single-conclusion; otherwise it is multiple-conclusion.
Definition 2.2. Let S be a sequent and G = S 1 ⋯ S m a hypersequent. We say that S ∈ G if S is one of S 1 , ⋯, S m . Notation 2.3. Let G 1 and G 2 be two hypersequents. We assume from now on that all set terminology refers to multisets, adopting the conventions of writing Γ, ∆ for the multiset union of Γ and ∆, A for the singleton multiset {A}, and λΓ for the multiset union of λ copies of Γ for λ ∈ N. By G 1 ⊆ G 2 we mean that S ∈ G 2 for all S ∈ G 1 and that the multiplicity of S in G 1 is not more than that of S in G 2 . We use G 1 = G 2 , G 1 ⋂ G 2 , G 1 ⋃ G 2 , G 1 G 2 with their standard meanings for multisets by default, and we will declare when we use them for sets. We sometimes write S 1 ⋯ S m as {S 1 , ⋯, S m }, and G followed by n copies of S as G S n (or G {S } n ). Definition 2.4. ([1]) A hypersequent rule is an ordered pair consisting of a sequence of hypersequents G 1 , ⋯, G n , called the premises (upper hypersequents) of the rule, and a hypersequent G, called the conclusion (lower hypersequent), written G 1 ⋯ G n G. If n = 0, then the rule has no premises and is called an initial sequent. The single-conclusion version of a rule adds the restriction that both the premises and the conclusion must be single-conclusion; otherwise the rule is multiple-conclusion.
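The multiset conventions of Notation 2.3 can be illustrated with Python's Counter (a sketch; the variable names are ours):

```python
from collections import Counter

# "Gamma, Delta" is the multiset union (multiplicities add),
# "A" is the singleton multiset, and "3 Gamma" is the union of 3 copies.
Gamma = Counter(["A", "B"])
Delta = Counter(["B"])

union = Gamma + Delta                                        # Gamma, Delta
three_gamma = Counter({x: 3 * n for x, n in Gamma.items()})  # 3 Gamma

# G1 ⊆ G2 for multisets: each element of G1 occurs in G2 with at least
# the same multiplicity.
def msubseteq(g1, g2):
    return all(g2[x] >= n for x, n in g1.items())

assert union == Counter({"A": 1, "B": 2})
assert three_gamma == Counter({"A": 3, "B": 3})
assert msubseteq(Delta, Gamma) and not msubseteq(union, Gamma)
```

The same conventions apply to hypersequents read as multisets of sequents, which is what makes (EC) a genuine contraction of multiplicities.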
Definition 2.5. ([16]) GUL and GIUL consist of the single-conclusion and multiple-conclusion versions, respectively, of the following initial sequents and rules: Initial Sequents We sometimes write ⩽ τ as ⩽; (v) G ′ S n G ′ S (EC * ) ∈ τ denotes full external contraction, where n ≥ 2, G ′ S results from G ′ S n by multiple applications of (EC), and G ′ S n is not the lower hypersequent of an application of (EC) whose contraction sequent is S, nor G ′ S an upper one, in τ.
Definition 2.11. Let τ be a derivation of G and H ∈ τ. The thread T h τ (H) of τ at H is a sequence for all 0 ≤ k ≤ n − 1. Then P(H) ∶= Σ_{k=0}^{n} 2^k b_k , which we call the position of H in τ.
We denote the three applications of (EC) by (EC) 1 , (EC) 2 , (EC) 3 , respectively, and the three applications of (⊙ r ) by (⊙ r ) 1 , (⊙ r ) 2 and (⊙ r ) 3 . H 0 is a theorem of IUL and a cut-free proof ρ of H 0 is shown in Figure 2. It supports the validity of the generalized density rule (D 0 ) of Section 1, as an instance of (D 0 ).
Our task is to construct ρ starting from τ. The tree structure of ρ is more complicated than that of τ. In contrast with the cases of UL, MTL and IMTL, there is no one-to-one correspondence between the nodes of τ and those of ρ.
Following the method given by G. Metcalfe and F. Montagna, we need to define a generalized density rule for IUL. We denote this expected, as yet unknown, rule by (D x ) for convenience. Then D x (H) must be definable for all H ∈ τ. However, we could not find a suitable way to define D x (H ×× ) and D x (H × ) for the nodes H ×× and H × in τ; see Figure 1. This is the biggest difficulty we encounter in the case of IUL, and it is what makes density elimination for IUL hard to prove. A possible way is to define D x (⇒ p, p, ¬A⊙¬A p, p ⇒ A⊙A) as ⇒ t, A ⊙ A, ¬A ⊙ ¬A. Unfortunately, the latter is not a theorem of IUL.
Notice that the two upper hypersequents ⇒ p, ¬A p, p ⇒ A ⊙ A of (⊙ r ) 3 are permissible inputs of (D x ). Why is H ×× an invalid input? One reason is that the two applications (EC) 1 and (EC) 2 cut off two sequents A ⇒ p, so that two p's disappear in all nodes below the upper hypersequents of (EC) 1 and (EC) 2 , including H ×× . This makes the occurrences of p in H ×× incomplete. We then perform the following operation in order to obtain complete occurrences of p in H ×× .
Step 1 (Preprocessing of τ). Firstly, we replace H with H S ′ for all Then we construct a proof without (EC), which we denote by τ 1 , as shown in Figure 3. We call such manipulations sequent-inserting operations; they eliminate applications of (EC) in τ.
The reason is that the origins of the p's in H ′ ×× are indistinguishable if we regard all leaves of the form p ⇒ p as the origins of the p's occurring in inner nodes. For example, we do not know which p comes from the left subtree of τ 1 (H ′ ×× ) and which from the right subtree among the two occurrences of p in ⇒ p, p, ¬A ⊙ ¬A ∈ H ′ ×× . We then perform the following operation in order to make all occurrences of p in H ′ ×× distinguishable. We assign a unique identification number to each leaf of the form p ⇒ p ∈ τ 1 and transfer these identification numbers from the leaves to the root, as shown in Figure 4. We denote the proof of G G * resulting from this step by in which each sequent is a copy of some external contraction sequent in an (EC)-node of τ. We call such manipulations eigenvariable-labeling operations; they enable us to trace eigenvariables in τ.
Then all occurrences of p in τ * are distinguishable and we regard them as distinct eigenvariables (see Definition 4.16 (i)). Firstly, by selecting p 1 as the eigenvariable and applying (D) to G G * , we get Next, we apply τ * H c 1 ∶A⇒p1 to A ⇒ p 1 in G G * . Then we construct a proof τ Why do we reassign identification numbers to the occurrences of p in G H c 2 ∶G G * ? Doing so assigns different identification numbers to different occurrences of p in the two nodes G H c A great change happens here: we have eliminated all sequents which are copies of sequents in G * and converted G G * into G I , in which each sequent is a copy of a sequent in G 0 . Then So we have built up a one-to-one correspondence between the proof of G I and that of H 0 , i.e., the proof of H 0 can be constructed by applying (D) to the proof of G I . The major steps of the construction of G I are shown in the following figure, where In the above example, D(G I ) = D 0 (G 0 ), but this is not always the case. In general, we can prove that ⊢ GL D 0 (G 0 ) if ⊢ GL D(G I ), as shown in the proof of the Main Lemma on Page 46. This example shows that the proof of the Main Lemma essentially presents an algorithm for constructing a proof of D 0 (G 0 ) from τ.

Preprocessing of Proof Tree
Let τ be a cut-free proof of G 0 of the Main Lemma in GL, which exists by Lemma 2.15. Starting with τ, in this section we construct a proof τ * which contains no application of (EC) and has certain other properties.
(ii) is proved by a procedure similar to that of (i) and is omitted.
We introduce two new inference rules, justified by Lemma 4.1. Now we begin to process τ as follows.
Step 1. A proof τ ′ is constructed by inductively replacing all applications of (accordingly ). The replacements in Step 1 are local, and the root of τ ′ is also labeled by G 0 . Definition 4.3. We may sometimes regard G ′ G ′ as a structural rule of GL, denoted (ID Ω ) for convenience. The focus sequent of (ID Ω ) is undefined.
Proof. The proof is by induction on n. Since is an application of the same rule (II) (or (I)), τ 1 (H n S m−1 ) is a proof. Clearly, the number of (EC * )-applications in τ 1 is less than that in τ ′ . Next, we continue to process τ. Step 2. By repeatedly applying the sequent-inserting operation, we construct a proof of G 0 G * 0 in GL without applications of (EC * ) and denote it by τ ′′ . For the induction step, suppose that ⟨H k ⟩ H∶H ′ and τ ′′ H∶H ′ (⟨H k ⟩ H∶H ′ ) have been constructed such that (i) and (ii) hold for some 0 ⩽ k ⩽ n − 1. There are two cases to consider.
The case of an application of a two-premise rule is proved by a similar procedure and is omitted.
Definition 4.9. The manipulation described in Construction 4.7 is called the derivation-pruning operation.
Lemma 4.8 then shows that we may continue to process τ as follows.
Step 3. Let . By repeatedly applying the procedure above, we construct a proof τ ′′′ without (EW). Step 4. By repeatedly applying the procedure above, we construct a proof τ ′′′′ of . Define two operations σ l and σ r on sequents by . Then G 2 G * 2 is obtained by applying σ l and σ r to certain designated sequents in G 1 G * 1 .
Definition 4.11. The manipulation described in Step 4 is called the eigenvariable-replacing operation.
Step 5. A proof τ * is constructed from τ ′′′′ by inductively assigning a unique identification number to each occurrence of p in τ ′′′′ as follows.
A unique identification number, a positive integer, is assigned to each leaf of the form p ⇒ p in τ ′′′′ , which then corresponds to p k ⇒ p k in τ * . The other nodes of τ ′′′′ are processed as follows.
All applications of (∨ lw ) are processed by a procedure similar to that of (∧ rw ).
All applications of (→ l ) are processed by a procedure similar to that of (⊙ r ). Notation 4.13. Let G 2 and G * 2 be converted to G and G * in τ * , respectively. Then τ * is a proof of G G * .
Step 4. Each occurrence of p in τ ′′′′ is assigned a unique identification number in Step 5. The whole preprocessing above is depicted in Figure 13.
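The leaf-labeling of Step 5 can be sketched on a toy proof tree, where a node is a pair (conclusion, subtrees) and each leaf p ⇒ p receives a fresh index (an illustration only; real nodes are hypersequents and the inner nodes propagate the labels as described above):

```python
import itertools

counter = itertools.count(1)

def label(tree):
    """Relabel every leaf 'p => p' as 'p_k => p_k' with a fresh k."""
    concl, subs = tree
    if not subs and concl == "p => p":
        k = next(counter)
        return (f"p_{k} => p_{k}", [])
    return (concl, [label(s) for s in subs])

toy = ("G", [("p => p", []), ("H", [("p => p", [])])])
labeled = label(toy)
```

Since every leaf gets a distinct index, the two occurrences of p that were previously indistinguishable can now be traced to their origins.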
Now we replace locally each G ′ G ′ (ID Ω ) in τ * with G ′ and denote the resulting proof also by τ * ; this differs inessentially from the original but simplifies subsequent arguments. We introduce the system GL Ω as follows.
Definition 4.16. GL Ω is a restricted subsystem of GL such that: (i) p is designated as the unique eigenvariable, by which we mean that it is not used to build up any formula containing logical connectives and is used only as a sequent-formula.
(ii) Each occurrence of p in a hypersequent of GL is assigned a unique identification number i and written as p i in GL Ω . The initial sequent p ⇒ p of GL takes the form p i ⇒ p i in GL Ω .
(iii) Each sequent S of GL of the form Γ, λp ⇒ µp, ∆ has the form such that G ′′ can be obtained by applying σ l to the antecedents of sequents in G ′ and σ r to the succedents of sequents in G ′ , i.e., G ′′ = σ r ○ σ l (G ′ ).
(v) A closed hypersequent G ′ G ′′ G ′′′ can be contracted to G ′ G ′′ in GL Ω under the condition that G ′′ and G ′′′ are closed and G ′′′ is a copy of G ′′ . We call this the constrained external contraction rule and denote it by (vii) G 1 S 1 and G 2 S 2 are closed and disjoint for each two-premise inference rule (viii) p does not occur in Γ or ∆ for any initial sequent Γ, ⊥ ⇒ ∆ or Γ ⇒ ⊺, ∆, and p does not act as the weakening formula A in .
Lemma 4.17. Let τ be a cut-free proof of G 0 in GL and let τ * be the tree resulting from the preprocessing of τ.
Proof. Claims (i) to (iv) follow immediately from Step 5 in the preprocessing of τ and Definition 4.16. (v) follows from Notation 4.13 and Definition 4.16. Only (vi) is proved, as follows.
proved by a similar procedure and omitted.

The generalized density rule (D) for GL Ω
In this section, we define the generalized density rule (D) for GL Ω and prove that it is admissible in GL Ω .

Construction 5.2. Let G and S be as above. A sequence G 1 , G 2 , ⋯, G n of hypersequents is constructed recursively as follows: (i) G 1 = {S }; (ii) suppose that G k has been constructed; if by v l (G) = v r (G) and Definition 4.16, then let G k+1 = G k S k+1 ; otherwise the procedure terminates and n ∶= k.
Proof. (i) Since G k ⊆ G k+1 for 1 ≤ k ≤ n − 1 and S ∈ G 1 , we have S ∈ G n ⊆ G; thus [S ] G ⊆ G n by v l (G n ) = v r (G n ). We prove G k ⊆ [S ] G for 1 ≤ k ≤ n by induction on k in the following. Clearly, (iii) follows immediately from Construction 5.2 and (i).
(iv) The proof is by induction on k. For the base step, let k = 1; then |G k | = 1, and thus v lr (G k ) + 1 ⩾ |G k | by v lr (G k ) ⩾ 0. For the induction step, suppose that v lr (G k ) + 1 ⩾ |G k | for some 1 ⩽ k < n.
(iii) We call (D) the generalized density rule of GL Ω ; its conclusion D(G) is defined by (ii) when its premise is G. Proof. We proceed by induction on the height of τ. For the base step, if G is p k ⇒ p k then D(G) is ⇒ t; otherwise D(G) is G; in either case ⊢ GL D(G) holds. For the induction step, the following cases are considered.
• Let . The proof is constructed by combining the proof of D(G ′ S ′ ) and . Other rules of type (I) are processed by a procedure similar to the above.
All applications of (→ l ) are processed by a procedure similar to that of (⊙ r ) and are omitted.
All applications of (∨ lw ) are processed by a procedure similar to that of (∧ rw ) and are omitted.
• Let . Hence we may assume, without loss of generality, that . Thus the proof of is constructed by . Then G ′ , G ′′ and G ′′′ are closed and G ′′′ is a copy of G ′′ , thus . The following two lemmas are corollaries of Lemma 5.6.
Lemma 5.8. Let τ be a cut-free proof of G 0 in GL and τ * be the proof of G G * in GL Ω resulting from preprocessing of τ. Then ⊢ GL D(G G * ).

Extraction of Elimination Rules
In this section, we will investigate Construction 4.7 further to extract more derivations from τ * .
Any two sequents in a hypersequent appear independent of one another, in the sense that they can be contracted into one by (EC) only when that rule is applicable. Note that one-premise logical rules modify just one sequent of a hypersequent, while two-premise rules associate a sequent in one hypersequent with a sequent in a different hypersequent. τ * (or any proof without (EC Ω ) in GL Ω ) has an essential property, which we call the distinguishability of τ * : any variable, formula, sequent or hypersequent occurring at a node H of τ * inevitably occurs at each H ′ < H in some form.
Let H ≡ G ′ S ′ S ′′ ∈ τ * . If S ′ is equal to S ′′ as sequents, then it may happen that τ * H∶S ′ is equal to τ * H∶S ′′ as derivations. This would mean that both S ′ and S ′′ are the focus sequent of one node in T h τ * (H) when G * H∶S ′ ≠ S ′ and G * H∶S ′′ ≠ S ′′ , which contradicts the fact that each node of a derivation has a unique focus sequent. Thus we need to distinguish S ′ from S ′′ for all Define S ′ ∈ τ * such that G ′ S ′ S ′′ ⩽ S ′ , S ′ ∈ S ′ and S ′ is the principal sequent of S ′ ; define N S ′ = 0 if S ′ has a unique principal sequent, and otherwise N S ′ = 1 to indicate that S ′ is the left principal sequent (accordingly N S ′ = 2 for the right one) of an application such as (COM), (∧ rw ) or (∨ lw ). Then we may regard S ′ as the triple (S ′ , P(S ′ ), N S ′ ). Thus S ′ is always distinguished from S ′′ , either by P(S ′ ) ≠ P(S ′′ ) or by P(S ′ ) = P(S ′′ ) and N S ′ ≠ N S ′′ . We formulate this in the following construction. Construction 6.1. A labeled tree τ * * , which has the same tree structure as τ * , is constructed as follows.
(i) If S is a leaf of τ * , define S = S , N S = 0, and label the node P(S ) of τ * * by (S , P(S ), N S ). Throughout the paper we treat τ * as τ * * without further mention of τ * * . Note that in the preprocessing of τ, some logical applications could also be converted to (ID Ω ) in Step 3, and we need to fix the focus sequent at each node H and subsequently assign valid identification numbers to each H ′ < H by the eigenvariable-labeling operation.
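The effect of Construction 6.1 can be seen in miniature: two syntactically equal sequents become distinguishable once each is packaged with the position of its principal node and its side tag N (a toy illustration with hypothetical values):

```python
# Two occurrences of the same sequent "A => B" in one hypersequent,
# labeled (sequent, position of principal node, N) as in Construction 6.1.
S1 = ("A => B", 12, 1)   # left principal sequent of the node at position 12
S2 = ("A => B", 12, 2)   # right principal sequent of the same node
S3 = ("A => B", 9, 0)    # unique principal sequent of another node

assert S1[0] == S2[0] == S3[0]           # equal as sequents
assert len({S1, S2, S3}) == 3            # distinct as labeled occurrences
```

Equality of the underlying sequents is thus decoupled from identity of their occurrences, which is exactly what the uniqueness of focus sequents requires.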
For the induction step, suppose that ⟨H i ⟩ H∶G3 = ⟨H i ⟩ H∶G1 ⋂ ⟨H i ⟩ H∶G2 for some 0 ⩽ i < n. Only the case of a one-premise rule is given in the following; the other cases are omitted.
The case S ′ ∉ ⟨H i ⟩ H∶G2 , S ′ ∈ ⟨H i ⟩ H∶G1 is proved by a similar procedure and omitted.
(i) and (ii) follow immediately from Lemma 6.3.
, respectively, for the sake of simplicity.
by Lemma 6.3 and (iii), a contradiction; therefore H c i ≰ H c j .
We impose a restriction on (II) such that each sequent in H ′ is different from S ′ and S ′′ ; otherwise we treat it as an (EW)-application. Since S c j ∈ G * H∶H ′ ⊆ G G * , S c j has the form S c ju for some u ≥ 2 by Notation 4.14. Thus G * from one premise S c i1 . We generalize this by introducing derivations from multiple premises in the following. In the remainder of this section, let . (iii) Other nodes of τ * I are built up by Construction 4.7 (ii).
The following lemma is a generalization of Lemma 6.6.
Then, for all 0 ⩽ u ⩽ n ik , Proof. (i) is proved by induction on |I|. For the base step, let |I| = 1; then the claim clearly holds. For the induction step, let |I| ⩾ 2; then |I l | ⩾ 1 and |I r | ⩾ 1.
for all H c i ∈ I l by Lemma 6.9, and holds by a procedure similar to the above; then . The other claims follow immediately from Construction 6.11.
Proof. (i), (ii) and (iii) follow immediately from Lemma 6.12. (iv) holds by (i) and Lemma 6.6 (vi). Lemma 6.13 (iv) shows that there exists no copy of S c ik in G * I for any 1 ≤ k ≤ m. We may therefore regard these sequents as eliminated in τ * I , and we call τ * I an elimination derivation. Let I ′ = {S c i1u1 , ⋯, S c imum } be another set of sequents to I such that G ′ ≡ S c i1u1 ⋯ S c imum is a copy of G ′′ ≡ S c i11 ⋯ S c im1 . Then G ′ and G ′′ are disjoint and there exist two bijections σ l ∶ v l (G ′ ) → v l (G ′′ ) and σ r ∶ v r (G ′ ) → v r (G ′′ ) such that σ r ○ σ l (G ′ ) = G ′′ . By applying σ r ○ σ l to τ * I , we construct a derivation from S c i1u1 , ⋯, S c imum , which we denote by τ * I ′ , with root G * I ′ . Let I ′ = {G b1 S c i1u1 , ⋯, G bm S c imum } be a set of hypersequents to I, where G bk S c ikuk is closed for all 1 ≤ k ≤ m. By applying τ * I ′ to S c i1u1 , ⋯, S c imum in G b1 S c i1u1 , ⋯, G bm S c imum , we construct a derivation from G b1 S c i1u1 , ⋯, G bm S c imum , which we also denote by τ * I ′ , with root G * I ′ .
Definition 6.14. We will use all the τ * I ′ as inference rules of GL Ω and call them elimination rules. Further, we call S c i1u1 , ⋯, S c imum the focus sequents, all sequents in G * I ′ the principal sequents, and G b1 , ⋯, G bm the side-hypersequents of τ * I ′ .
Remark 6.15. We regard Construction 4.7 as a procedure F whose inputs are τ ′′ , H, H ′ and whose output is τ ′′ H∶H ′ . From this viewpoint, we write τ ′′ H∶H ′ as F H∶H ′ (τ ′′ ). Then τ * I can be constructed by iteratively applying F to τ * , i.e., τ * We replace locally each G ′ G ′ (ID Ω ) in τ * I with G ′ and denote the resulting derivation also by τ * I . Then each non-root node of τ * I has a focus sequent. Since the relative position of any two nodes of τ * remains unchanged in the construction of τ * I ,
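The iterative reading of Remark 6.15 — a derivation-transforming procedure F applied repeatedly, where each output is fed to the next application — amounts to function composition. In this sketch F is a stand-in function, not Construction 4.7 itself:

```python
from functools import reduce

# Each "application of F" is modeled as a function from a derivation
# (here just a list of node labels) to a transformed derivation.
def F(node):
    return lambda deriv: deriv + [node]   # stand-in for F_{H:H'}

def iterate(fs, deriv):
    """Apply the procedures fs in order, feeding each output to the next."""
    return reduce(lambda d, f: f(d), fs, deriv)

result = iterate([F("H1"), F("H2"), F("H3")], ["tau*"])
```

The point is only that the order of the applications matters while the relative positions of untouched nodes are preserved at each step.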

Separation of one branch
In the remainder of this paper, we assume that p occurs at most once in each sequent of G 0 , as in the Main Lemma; let τ be a cut-free proof of G 0 in GL and τ * the proof of G G * in GL Ω resulting from the preprocessing of τ. Then v l (S ) + v r (S ) ≤ 1 for all S ∈ G, which plays a key role in the discussion of the separation of branches.
I ⟩ is constructed by linking the conclusion of each derivation in the sequence to the premise of its successor, as shown in the following figure.
We sometimes write J I and J H∶Hl as J for simplicity. Then the following lemma clearly holds.
Proof. (i) is proved by a procedure similar to that of Lemma 7.4 (iii), (iv) and is omitted.
(ii) Since S c i1 is the focus sequent of H c i , it is revised by some inference rule at a node lower than H c i . Thus S c i ∈ H is some copy of S c i1 by H c i > H. Hence S c i has the form S c iu for some u ≥ 2. Therefore it is transferred downward to G G * , i.e., S c i ∈ G G * . Then G H∶H ′′ . Note that the requirement that distinct occurrences of p have different identification numbers is imposed only within one derivation. We permit G (q+1) H∶H ′ in the proof above, which has no essential effect on the proof of the claim.
(v) follows immediately from (iv). is generally different from that of τ Ω Il at some nodes H ∈ τ Ω Il that ∂ τ Ω are marked as processed; we first delete the root of the tree resulting from the procedure, then apply ⟨EC * Ω ⟩ to the root of the resulting derivation if it is applicable, otherwise add an ⟨ID Ω ⟩-application to it, and finally terminate the procedure. Otherwise we select one of the outermost unprocessed ⟨EC * Ω ⟩-applications in τ, leaving the other nodes of τ Ω(q) Il unchanged, particularly including G ○ q+1 ; j l for all τ * for some m q+1 ≥ 1.