A Proof of the Standard Completeness for the Involutive Uninorm Logic

Faculty of Science, Zhejiang Sci-Tech University, Hangzhou 310018, China
Symmetry 2019, 11(4), 445; https://doi.org/10.3390/sym11040445
Submission received: 20 February 2019 / Revised: 15 March 2019 / Accepted: 22 March 2019 / Published: 27 March 2019

Abstract
In this paper, we solve a long-standing open problem in the field of fuzzy logics, namely the standard completeness of the involutive uninorm logic IUL. In fact, we present a uniform method of density elimination for several semilinear substructural logics. In particular, density elimination for IUL is proved. The standard completeness of IUL then follows as a lemma by virtue of previous work by Metcalfe and Montagna.

1. Introduction

The problem of the completeness of the Łukasiewicz infinite-valued logic (Ł, for short) was posed by Łukasiewicz and Tarski in the 1930s. It was twenty-eight years later that it was solved syntactically by Rose and Rosser [1]. At almost the same time, Chang [2] developed a theory of algebraic systems for Ł, called MV-algebras, with the aim of making MV-algebras correspond to Ł as Boolean algebras correspond to classical two-valued logic. Chang [3] subsequently gave another proof of the completeness of Ł by means of his MV-algebras.
It was Chang who observed that the key role in the structure theory of MV-algebras is played not by locally finite MV-algebras but by linearly ordered ones. The observation was formalized by Hájek [4], who showed the completeness of his basic fuzzy logic (BL, for short) with respect to linearly ordered BL-algebras. Starting from the structure of BL-algebras, Hájek [5] reduced the problem of the standard completeness of BL to the provability of two formulas in BL. Here and thereafter, by standard completeness we mean that a logic is complete with respect to algebras with lattice reduct [0, 1]. Cignoli et al. [6] subsequently proved the standard completeness of BL, i.e., that BL is the logic of continuous t-norms and their residua.
Hájek's approach to fuzzy logic was extended by Esteva and Godo in [7], where the authors introduced the logic MTL, which aims to capture the tautologies of left-continuous t-norms and their residua. The standard completeness of MTL was proved by Jenei and Montagna in [8]; the major step there is to embed linearly ordered MTL-algebras into dense ones, even though the structure of MTL-algebras was, and remains, unknown.
Esteva and Godo's work was further advanced by Metcalfe and Montagna [9], who introduced the uninorm logic UL and the involutive uninorm logic IUL, which aim to capture the tautologies of left-continuous uninorms and their residua, and those of involutive left-continuous ones, respectively. Recently, Cintula and Noguera [10] introduced semilinear substructural logics, which are substructural logics complete with respect to linearly ordered models. Almost all well-known families of fuzzy logics, such as Ł, BL, MTL, UL and IUL, belong to the class of semilinear substructural logics.
Metcalfe and Montagna's method of proving standard completeness for UL and its extensions is proof-theoretic in nature and consists of two key steps. Firstly, they extended UL with the density rule of Takeuti and Titani [11]:
$$\frac{\Gamma \Rightarrow (A \to p) \vee (p \to B) \vee C}{\Gamma \Rightarrow (A \to B) \vee C}\,(D),$$
where p does not occur in Γ, A, B or C, and proved that the logics extended with (D) are complete with respect to algebras with lattice reduct [0, 1]. Secondly, they gave a syntactic elimination of (D), formulated as a rule of the corresponding hypersequent calculus.
Hypersequents are a natural generalization of sequents; they were introduced independently by Avron [12] and Pottinger [13] and have proved to be particularly suitable for logics with prelinearity [9,14]. Following the spirit of Gentzen's cut elimination, Metcalfe and Montagna succeeded in eliminating the density rule for GUL and several extensions of GUL by induction on the height of a derivation of the premise and by shifting applications of the rule upwards, but they failed for GIUL and therefore left it as an open problem.
There are several relevant works on the standard completeness of IUL. With an attempt to prove the standard completeness of IUL, we generalized Jenei and Montagna's method [15] for IMTL in [16], but our effort was only partially successful. The subtle reason why it does not work for UL and IUL seems to be the failure of the finite model property for these systems [17]. Jenei [18] constructed several classes of involutive FLₑ-algebras in order, as he said, to gain a better insight into the algebraic semantics of the substructural logic IUL and into the long-standing open problem of its standard completeness. Ciabattoni and Metcalfe [19] introduced the method of density elimination by substitutions, which is applicable to a general class of (first-order) hypersequent calculi but fails in the case of GIUL.
We reconsidered Metcalfe and Montagna's proof-theoretic method in order to investigate the standard completeness of IUL, because they proved the standard completeness of UL by this method, whereas we cannot prove such a result by Jenei and Montagna's model-theoretic method. In order to prove density elimination for GUL, they proved that the following generalized density rule (D₁)
$$\frac{G_0 \equiv \{\Gamma_i, \lambda_i p \Rightarrow \Delta_i\}_{i=1}^{n} \mid \{\Sigma_k, (\mu_k + 1)p \Rightarrow p\}_{k=1}^{o} \mid \{\Pi_j \Rightarrow p\}_{j=1}^{m}}{D_1(G_0) \equiv \{\Gamma_i, \lambda_i \Pi_j \Rightarrow \Delta_i\}_{i=1\ldots n,\, j=1\ldots m} \mid \{\Sigma_k, \mu_k \Pi_j \Rightarrow t\}_{k=1\ldots o,\, j=1\ldots m}}\,(D_1)$$
is admissible for GUL, where they set two constraints on the form of G₀: (i) n, m ≥ 1 and λᵢ ≥ 1 for some 1 ≤ i ≤ n; (ii) p does not occur in Γᵢ, Δᵢ, Πⱼ or Σₖ for i = 1, …, n, j = 1, …, m, k = 1, …, o.
We may regard (D₁) as a procedure whose input and output are the premise and the conclusion of (D₁), respectively; we denote the conclusion of (D₁) by D₁(G₀) when its premise is G₀. Observe that Metcalfe and Montagna succeeded in defining the suitable conclusion for an almost arbitrary premise in (D₁), but this seems impossible for GIUL (see Section 3 for an example). We then define the following generalized density rule (D₀) for GL ∈ {GUL, GIUL, GMTL, GIMTL} and prove its admissibility in Section 9.
Theorem 1 (Main theorem).
Let n, m ≥ 1 and suppose p does not occur in G, Γᵢ, Δᵢ, Πⱼ or Σⱼ for all 1 ≤ i ≤ n, 1 ≤ j ≤ m. Then the strong density rule
$$\frac{G_0 \equiv G \mid \{\Gamma_i, p \Rightarrow \Delta_i\}_{i=1}^{n} \mid \{\Pi_j \Rightarrow p, \Sigma_j\}_{j=1}^{m}}{D_0(G_0) \equiv G \mid \{\Gamma_i, \Pi_j \Rightarrow \Delta_i, \Sigma_j\}_{i=1\ldots n;\, j=1\ldots m}}\,(D_0)$$
is admissible in GL.
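Read as an operation on hypersequents, (D₀) simply pairs every p-left component with every p-right component. The following is a minimal Python sketch of this free combination under our own tuple encoding of sequents; the function name `d0` and the formula strings are ours, not the paper's.

```python
from itertools import product

# A sequent is a pair (antecedent, succedent) of tuples of formula strings;
# a hypersequent is a list of sequents.

def d0(side, p_left, p_right):
    """Conclusion of (D0): keep the side hypersequent G and add the free
    combination Gamma_i, Pi_j => Delta_i, Sigma_j of every p-left component
    (Gamma_i, p => Delta_i) with every p-right component (Pi_j => p, Sigma_j).
    p_left stores the pairs (Gamma_i, Delta_i); p_right the pairs (Pi_j, Sigma_j)."""
    return side + [(gamma + pi, delta + sigma)
                   for (gamma, delta), (pi, sigma) in product(p_left, p_right)]

# The shape of the example of Section 3, with schematic formula strings
# "AA" and "nAnA" standing for the two compound formulas there:
p_left  = [((), ("C",)), (("C",), ("AA",))]     # p => C   and  C, p => AA
p_right = [((), ("B",)), (("B",), ("nAnA",))]   # => p, B  and  B => p, nAnA
h0 = d0([], p_left, p_right)
# h0 lists the four components  => C, B | B => C, nAnA | C => AA, B | C, B => AA, nAnA
```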
In proving the admissibility of (D₁), Metcalfe and Montagna imposed a restriction on the proof τ of G₀, i.e., they converted τ into an r-proof. The reason why they need an r-proof is that they set constraint (i) on G₀. We may imagine the restriction on τ and the constraints on G₀ as the two pans of a balance: one is strong if the other is weak, and vice versa. Observe that in (D₀) we select the weakest form of G₀ that guarantees the validity of (D). It is then natural that we need to impose the strongest restriction on the proof τ of G₀. But it seems extremely difficult to prove the admissibility of (D₀) in this way.
In order to overcome this difficulty, we first of all choose Avron-style hypersequent calculi as the underlying systems (see Appendix A.1). Let τ be a cut-free proof of G₀ in GL. Starting with τ, we construct a proof τ* of G | G* in a restricted subsystem GL_Ω of GL by systematic novel manipulations in Section 4. Roughly speaking, each sequent of G is a copy of some sequent of G₀, and each sequent of G* is a copy of some contraction sequent in τ. In Section 5, we define the generalized density rule (D) in GL_Ω and prove that it is admissible.
Now, starting with G | G* and its proof τ*, we construct in GL_Ω a proof of a hypersequent G′ such that each sequent of G′ is a copy of some sequent of G₀. Then GL_Ω ⊢ D(G′) by the admissibility of (D), and GL ⊢ D₀(G₀) by Lemma 29. Hence the density elimination theorem holds in GL. In particular, the standard completeness of IUL follows from Theorem 62 of [9].
G′ is constructed by eliminating the (pEC)-sequents of G | G* one by one. In order to control the process, we introduce the set I = {H^c_{i₁}, …, H^c_{iₘ}} of (pEC)-nodes of τ* and the set of branches relative to I, and construct G_I such that G_I does not contain (pEC)-sequents lower than any node of I, i.e., S^c_j ∈ G_I implies H^c_j ∥ H^c_i for all H^c_i ∈ I. The procedure is called the separation algorithm of branches; in it we introduce another novel manipulation, called the derivation-grafting operation, in Section 7 and Section 8.

2. Preliminaries

In this section, we recall the basic definitions and results involved, which are mainly from [9]. Substructural fuzzy logics are based on a countable propositional language with formulas built inductively as usual from a set of propositional variables VAR, the binary connectives ∧, ∨, ⊙, →, and the constants ⊥, ⊤, t, f, with the definable connective ¬A := A → f.
Definition 1.
([9,12]) A sequent is an ordered pair (Γ, Δ) of finite multisets (possibly empty) of formulas, which we denote by Γ ⇒ Δ. Γ and Δ are called the antecedent and the succedent, respectively, of the sequent, and each formula in Γ and Δ is called a sequent-formula. A hypersequent G is a finite multiset of the form Γ₁ ⇒ Δ₁ | ⋯ | Γₙ ⇒ Δₙ, where each Γᵢ ⇒ Δᵢ is a sequent, called a component of G, for 1 ≤ i ≤ n. If Δᵢ contains at most one formula for i = 1, …, n, then the hypersequent is single-conclusion; otherwise it is multiple-conclusion.
Definition 2.
Let S be a sequent and G = S₁ | ⋯ | Sₘ a hypersequent. We say that S ∈ G if S is one of S₁, …, Sₘ.
Notation 1.
Let G₁ and G₂ be two hypersequents. We will assume from now on that all set terminology refers to multisets, adopting the conventions of writing Γ, Δ for the multiset union of Γ and Δ, A for the singleton multiset {A}, and λΓ for the multiset union of λ copies of Γ, for λ ∈ ℕ. By G₁ ⊆ G₂ we mean that S ∈ G₂ for all S ∈ G₁ and that the multiplicity of S in G₁ is not greater than that of S in G₂. We use G₁ = G₂, G₁ ∪ G₂, G₁ ∩ G₂ and G₁ \ G₂ with their standard meanings for multisets by default, and we will declare when we use them for sets. We sometimes write S₁ | ⋯ | Sₘ as {S₁, …, Sₘ}, and G | S | ⋯ | S (n copies of S) as G | Sⁿ (or G | {S}ⁿ).
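The multiset conventions above can be mirrored directly in code. A minimal sketch using Python's `collections.Counter` (the encoding is ours, for illustration only):

```python
from collections import Counter

# A hypersequent is a Counter mapping components (any hashable encoding of a
# sequent) to multiplicities, so G | S^n records S with multiplicity n.

def included(g1, g2):
    """Multiset inclusion G1 in G2: every component of G1 occurs in G2
    with at least the same multiplicity."""
    return all(g2[s] >= n for s, n in g1.items())

g = Counter({"S1": 1, "S2": 3})            # the hypersequent S1 | S2 | S2 | S2
assert included(Counter({"S2": 2}), g)
assert not included(Counter({"S1": 2}), g)
# Counter subtraction is exactly the multiset difference G \ G':
assert g - Counter({"S2": 1}) == Counter({"S1": 1, "S2": 2})
```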
Definition 3.
([12]) A hypersequent rule is an ordered pair consisting of a sequence of hypersequents G₁, …, Gₙ, called the premises (upper hypersequents) of the rule, and a hypersequent G, called the conclusion (lower hypersequent), written $\frac{G_1 \cdots G_n}{G}$. If n = 0, then the rule has no premises and is called an initial sequent. The single-conclusion version of a rule adds the restriction that both the premises and the conclusion must be single-conclusion; otherwise the rule is multiple-conclusion.
Definition 4.
([9]) GUL and GIUL consist of the single-conclusion and multiple-conclusion versions, respectively, of the following initial sequents and rules:
Initial sequents
$$A \Rightarrow A\,(ID) \qquad \Gamma \Rightarrow \top, \Delta\,({\top}r) \qquad \Gamma, \bot \Rightarrow \Delta\,({\bot}l) \qquad {\Rightarrow}\, t\,(tr) \qquad f \Rightarrow\,(fl)$$
Structural rules
$$\frac{G \mid \Gamma \Rightarrow A \mid \Gamma \Rightarrow A}{G \mid \Gamma \Rightarrow A}\,(EC) \qquad \frac{G}{G \mid \Gamma \Rightarrow A}\,(EW)$$

$$\frac{G_1 \mid \Gamma_1, \Pi_1 \Rightarrow \Sigma_1, \Delta_1 \quad G_2 \mid \Gamma_2, \Pi_2 \Rightarrow \Sigma_2, \Delta_2}{G_1 \mid G_2 \mid \Gamma_1, \Gamma_2 \Rightarrow \Delta_1, \Delta_2 \mid \Pi_1, \Pi_2 \Rightarrow \Sigma_1, \Sigma_2}\,(COM)$$
Logical rules
$$\frac{G \mid \Gamma \Rightarrow \Delta}{G \mid \Gamma, t \Rightarrow \Delta}\,(tl) \qquad \frac{G_1 \mid \Gamma_1 \Rightarrow A, \Delta_1 \quad G_2 \mid \Gamma_2, B \Rightarrow \Delta_2}{G_1 \mid G_2 \mid \Gamma_1, \Gamma_2, A \to B \Rightarrow \Delta_1, \Delta_2}\,({\to}l) \qquad \frac{G \mid \Gamma, A, B \Rightarrow \Delta}{G \mid \Gamma, A \odot B \Rightarrow \Delta}\,({\odot}l)$$

$$\frac{G \mid \Gamma, A \Rightarrow \Delta}{G \mid \Gamma, A \wedge B \Rightarrow \Delta}\,({\wedge}l_r) \qquad \frac{G_1 \mid \Gamma \Rightarrow A, \Delta \quad G_2 \mid \Gamma \Rightarrow B, \Delta}{G_1 \mid G_2 \mid \Gamma \Rightarrow A \wedge B, \Delta}\,({\wedge}r) \qquad \frac{G \mid \Gamma \Rightarrow B, \Delta}{G \mid \Gamma \Rightarrow A \vee B, \Delta}\,({\vee}r_l) \qquad \frac{G \mid \Gamma \Rightarrow \Delta}{G \mid \Gamma \Rightarrow f, \Delta}\,(fr)$$

$$\frac{G \mid \Gamma, A \Rightarrow B, \Delta}{G \mid \Gamma \Rightarrow A \to B, \Delta}\,({\to}r) \qquad \frac{G_1 \mid \Gamma_1 \Rightarrow A, \Delta_1 \quad G_2 \mid \Gamma_2 \Rightarrow B, \Delta_2}{G_1 \mid G_2 \mid \Gamma_1, \Gamma_2 \Rightarrow A \odot B, \Delta_1, \Delta_2}\,({\odot}r) \qquad \frac{G \mid \Gamma, B \Rightarrow \Delta}{G \mid \Gamma, A \wedge B \Rightarrow \Delta}\,({\wedge}l_l)$$

$$\frac{G \mid \Gamma \Rightarrow A, \Delta}{G \mid \Gamma \Rightarrow A \vee B, \Delta}\,({\vee}r_r) \qquad \frac{G_1 \mid \Gamma, A \Rightarrow \Delta \quad G_2 \mid \Gamma, B \Rightarrow \Delta}{G_1 \mid G_2 \mid \Gamma, A \vee B \Rightarrow \Delta}\,({\vee}l)$$
Cut rule
$$\frac{G_1 \mid \Gamma_1, A \Rightarrow \Delta_1 \quad G_2 \mid \Gamma_2 \Rightarrow A, \Delta_2}{G_1 \mid G_2 \mid \Gamma_1, \Gamma_2 \Rightarrow \Delta_1, \Delta_2}\,(CUT)$$
Definition 5.
([9]) GMTL and GIMTL are GUL and GIUL plus the single-conclusion and multiple-conclusion versions, respectively, of:
$$\frac{G \mid \Gamma \Rightarrow \Delta}{G \mid \Gamma, A \Rightarrow \Delta}\,(WL) \qquad \frac{G \mid \Gamma \Rightarrow \Delta}{G \mid \Gamma \Rightarrow A, \Delta}\,(WR)$$
Definition 6.
(i) (I) := {(tl), (fr), (→r), (⊙l), (∧lᵣ), (∧lₗ), (∨rᵣ), (∨rₗ), (WL), (WR)} and
(II) := {(→l), (⊙r), (∧r), (∨l), (COM)};
(ii) By $\frac{G'|S' \quad G''|S''}{G'|G''|H}\,(II)$ (or $\frac{G'|S'}{G'|H}\,(I)$) we denote an instance of a two-premise rule (II) (or a one-premise rule (I)) of GL, where S′ and S″ are its focus sequents and H is its principal sequent (for (→l), (⊙r), (∧r) and (∨l)) or principal hypersequent (for (COM), (∧rw) and (∨lw); see Definition 12).
Definition 7.
([9]) GL^D is GL extended with the following density rule:
$$\frac{G \mid \Gamma_1, p \Rightarrow \Delta_1 \mid \Gamma_2 \Rightarrow p, \Delta_2}{G \mid \Gamma_1, \Gamma_2 \Rightarrow \Delta_1, \Delta_2}\,(D)$$
where p does not occur in G, Γ₁, Γ₂, Δ₁ or Δ₂.
Definition 8.
([12]) A derivation τ of a hypersequent G from hypersequents G₁, …, Gₙ in a hypersequent calculus GL is a labeled tree with the root labeled by G, leaves labeled by initial sequents or by some of G₁, …, Gₙ, such that, for each node labeled H₀ with parent nodes labeled H₁, …, Hₘ (where possibly m = 0), $\frac{H_1 \cdots H_m}{H_0}$ is an instance of a rule of GL.
Notation 2.
(i) $\frac{G_1 \cdots G_n}{G_0}\,\tau$ denotes that τ is a derivation of G₀ from G₁, …, Gₙ;
(ii) Let H be a hypersequent. H ∈ τ denotes that H is a node of τ. We call H a leaf hypersequent if H is a leaf of τ, and the root hypersequent if it is the root of τ. $\frac{G_1 \cdots G_m}{G_0} \in \tau$ denotes that G₀ ∈ τ and its parent nodes are G₁, …, Gₘ;
(iii) Let H ∈ τ; then τ(H) denotes the subtree of τ rooted at H;
(iv) τ determines a partial order ≤_τ with the root as the least element. H₁ ∥ H₂ denotes that H₁ ≰_τ H₂ and H₂ ≰_τ H₁, for any H₁, H₂ ∈ τ. By H₁ =_τ H₂ we mean that H₁ is the same node as H₂ in τ. We sometimes write ≤_τ simply as ≤;
(v) An inference of the form $\frac{G|S^n}{G|S} \in \tau$ is called a full external contraction and denoted by (EC*) if n ≥ 2, G|Sⁿ is not the lower hypersequent of an application of (EC) whose contraction sequent is S, and G|S is not an upper one in τ.
Definition 9.
Let τ be a derivation of G and H ∈ τ. The thread Th_τ(H) of τ at H is a sequence H₀, …, Hₙ of node hypersequents of τ such that H₀ =_τ H, Hₙ =_τ G, and, for all 0 ≤ k ≤ n−1, either $\frac{H_k}{H_{k+1}} \in \tau$, or there exists G′ ∈ τ such that $\frac{H_k \ \ G'}{H_{k+1}} \in \tau$ or $\frac{G' \ \ H_k}{H_{k+1}} \in \tau$.
Proposition 1.
Let H₁, H₂, H₃ ∈ τ. Then
(i) H₁ ≤ H₂ if and only if H₁ ∈ Th_τ(H₂);
(ii) H₁ ≤ H₂ and H₁ ∥ H₃ imply H₂ ∥ H₃;
(iii) H₁ ∥ H₃ and H₂ ≥ H₃ imply H₁ ∥ H₂.
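These order-theoretic facts can be sanity-checked on the ancestor order of a finite tree. In the following sketch (our own encoding, for illustration only) a node is identified with its path of branch choices from the root, and H₁ ≤ H₂ iff the path of H₁ is a prefix of the path of H₂, so that the root is the least element:

```python
def leq(h1, h2):
    """h1 <= h2 in the tree order: h1 lies on the path from h2 to the root."""
    return h2[:len(h1)] == h1

def parallel(h1, h2):
    """h1 || h2: the two nodes are incomparable."""
    return not leq(h1, h2) and not leq(h2, h1)

root, a, b, aa = (), (0,), (1,), (0, 0)
assert leq(root, aa) and leq(a, aa)   # ancestors of aa lie on its thread
assert parallel(a, b)                 # siblings are incomparable
assert parallel(aa, b)                # a <= aa and a || b force aa || b
```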
We need the following definition to give each node of τ an identification number, which is used in Construction 3 to differentiate sequents in a hypersequent in a proof.
Definition 10.
(Appendix A.5.2) Let H ∈ τ and Th(H) = (H₀, …, Hₙ). Let bₙ := 1 and, for all 0 ≤ k ≤ n−1,
$$b_k := \begin{cases} 1 & \text{if } \frac{G \ \ H_k}{H_{k+1}} \in \tau, \\ 0 & \text{if } \frac{H_k}{H_{k+1}} \in \tau \text{ or } \frac{H_k \ \ G}{H_{k+1}} \in \tau. \end{cases}$$
Then P(H) := $\sum_{k=0}^{n} 2^k b_k$ is called the position of H in τ.
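The position is just the binary number whose bits record, going from H toward the root, whether each node is the right premise (1) or the left or only premise (0) of the inference below it. A direct sketch (the function name is ours):

```python
def position(bits):
    """P(H) = sum over k of 2^k * b_k, for bits = (b_0, ..., b_n) read along
    the thread from H to the root; the last bit b_n is always 1."""
    return sum(2 ** k * b for k, b in enumerate(bits))

assert position([1]) == 1            # the root itself
assert position([0, 1, 0, 1]) == 10  # 2^1 + 2^3
```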
Definition 11.
A rule is admissible for a calculus GL if, whenever its premises are derivable in GL, so is its conclusion.
Lemma 1.
([9]) Cut-elimination holds for GL , i.e., proofs using ( C U T ) can be transformed syntactically into proofs not using ( C U T ) .

3. Proof of the Main Theorem: A Computational Example

In this section, we present an example to illustrate the proof of the main theorem.
Let G₀ ≡ (⇒ p, B | B ⇒ p, ¬A⊙¬A | p ⇒ C | C, p ⇒ A⊙A). G₀ is a theorem of IUL, and a cut-free proof τ of G₀ is shown in Figure 1, where for simplicity we use the additional rule $\frac{\Gamma, A \Rightarrow \Delta}{\Gamma \Rightarrow \neg A, \Delta}\,(\neg r)$. Note that we denote the three applications of (EC) in τ by (EC)₁, (EC)₂, (EC)₃ and the three applications of (⊙r) by (⊙r)₁, (⊙r)₂ and (⊙r)₃.
By applying (D) to the free combinations of all sequents in (⇒ p, B | B ⇒ p, ¬A⊙¬A) with all sequents in (p ⇒ C | C, p ⇒ A⊙A), we get H₀ ≡ (⇒ B, C | C ⇒ A⊙A, B | B ⇒ C, ¬A⊙¬A | C, B ⇒ A⊙A, ¬A⊙¬A). H₀ is a theorem of IUL, and a cut-free proof ρ of H₀ is shown in Figure 2. This supports the validity of the generalized density rule (D₀) of Section 1: H₀ = D₀(G₀) is an instance of its conclusion.
Our task is to construct ρ starting from τ. The tree structure of ρ is more complicated than that of τ: in contrast with the cases of UL, MTL and IMTL, there is no one-to-one correspondence between the nodes of τ and those of ρ.
Following the method given by G. Metcalfe and F. Montagna, we would need to define a generalized density rule for IUL. We denote this expected, as yet unknown, rule by (Dₓ) for convenience. Then Dₓ(H) must be definable for all H ∈ τ. Naturally,
Dₓ(p ⇒ p) = (⇒ t);
Dₓ(A ⇒ p | p ⇒ A) = (A ⇒ A);
Dₓ(⇒ p, ¬A | p, p ⇒ A⊙A) = (⇒ ¬A, ¬A, A⊙A);
Dₓ(⇒ p, B | B ⇒ p, ¬A⊙¬A | p, p ⇒ A⊙A) =
(⇒ B, B, A⊙A | B, B ⇒ A⊙A, ¬A⊙¬A, ¬A⊙¬A | B ⇒ A⊙A, B, ¬A⊙¬A);
Dₓ(G₀) = D₀(G₀) = H₀.
However, we could not find a suitable way to define Dₓ(H××) and Dₓ(H×) for the nodes H×× and H× of τ shown in Figure 1. This is the biggest difficulty we encounter in the case of IUL, and it is what makes density elimination for IUL hard to prove. A possible way is to define Dₓ(⇒ p, p, ¬A⊙¬A | p, p ⇒ A⊙A) as (⇒ t, A⊙A, ¬A⊙¬A). Unfortunately, the latter is not a theorem of IUL.
Notice that the two upper hypersequents (⇒ p, ¬A | p, p ⇒ A⊙A) of (⊙r)₃ are permissible inputs of (Dₓ). Why is H×× an invalid input? One reason is that the two applications (EC)₁ and (EC)₂ cut off two sequents A ⇒ p, so that two occurrences of p disappear in all nodes lower than the upper hypersequents of (EC)₁ and (EC)₂, including H××. This makes the occurrences of p in H×× incomplete. We therefore perform the following operation in order to obtain complete occurrences of p in H××.
Step 1 (preprocessing of τ). Firstly, for each application $\frac{G|S|S}{G|S}\,(EC)_k$ of τ (k = 1, 2, 3), we replace H with H | S for every node H ≤ G | S, and then replace the resulting application $\frac{G|S|S}{G|S|S}\,(EC)_k$ with its premise G | S | S. In this way we construct a proof without (EC), which we denote by τ₁, as shown in Figure 3. We call such manipulations sequent-inserting operations; they eliminate the applications of (EC) in τ.
However, we still cannot define Dₓ(H××) for the node H×× ∈ τ₁ with (⇒ p, p, ¬A⊙¬A | p, p ⇒ A⊙A) ⊆ H××. The reason is that the origins of the occurrences of p in H×× are indistinguishable if we regard all leaves of the form p ⇒ p as the origins of the occurrences of p in an inner node. For example, for the two occurrences of p in (⇒ p, p, ¬A⊙¬A) ∈ H××, we do not know which p comes from the left subtree of τ₁(H××) and which from the right subtree. We then perform the following operation in order to make all occurrences of p in H×× distinguishable.
We assign a unique identification number to each leaf of the form p ⇒ p in τ₁ and transfer these identification numbers from the leaves toward the root, as shown in Figure 4. We denote the resulting proof of G | G* by τ*, where G ≡ (⇒ p₂, B | B ⇒ p₄, ¬A⊙¬A | p₁ ⇒ C | C, p₂ ⇒ A⊙A), in which each sequent is a copy of some sequent of G₀, and G* ≡ (A ⇒ p₁ | A ⇒ p₃ | p₃, p₄ ⇒ A⊙A), in which each sequent is a copy of some external contraction sequent of an (EC)-node of τ. We call such manipulations eigenvariable-labeling operations; they allow us to trace eigenvariables in τ.
Then all occurrences of p in τ* are distinguishable, and we regard them as distinct eigenvariables (see Definition 18(i)). Firstly, by selecting p₁ as the eigenvariable and applying (D) to G | G*, we get
(A ⇒ C | ⇒ p₂, B | B ⇒ p₄, ¬A⊙¬A | C, p₂ ⇒ A⊙A | A ⇒ p₃ | p₃, p₄ ⇒ A⊙A).
Secondly, by selecting p₂ and applying (D) to the result, we get
(A ⇒ C | B ⇒ p₄, ¬A⊙¬A | C ⇒ B, A⊙A | A ⇒ p₃ | p₃, p₄ ⇒ A⊙A).
Proceeding in the same way with p₃ and p₄, we get
(A ⇒ C | A, B ⇒ A⊙A, ¬A⊙¬A | C ⇒ A⊙A, B).
We define such iterative applications of (D) as the D-rule (see Definition 20). Lemma 10 shows that GIUL ⊢ D(G | G*) whenever GIUL ⊢ G | G*. We thus obtain GIUL ⊢ D(G | G*), i.e., the last hypersequent displayed above is provable in GIUL.
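The iterated, one-eigenvariable-at-a-time use of (D) sketched above can be mimicked mechanically. The following is our own bookkeeping sketch under the tuple encoding of sequents, with schematic strings "AA" and "nAnA" standing for the two compound formulas of the example; `apply_d` is our name, not the paper's:

```python
def apply_d(hyperseq, p):
    """Apply (D) for eigenvariable p: combine the unique component with p in
    its antecedent and the unique component with p in its succedent, removing
    p from both.  Sequents are (antecedent, succedent) tuples of strings."""
    (left,) = [s for s in hyperseq if p in s[0]]
    (right,) = [s for s in hyperseq if p in s[1]]
    rest = [s for s in hyperseq if s is not left and s is not right]
    gamma = tuple(x for x in left[0] if x != p)    # antecedent minus p
    sigma = tuple(x for x in right[1] if x != p)   # succedent minus p
    return rest + [(gamma + right[0], left[1] + sigma)]

g = [((), ("p2", "B")), (("B",), ("p4", "nAnA")), (("p1",), ("C",)),
     (("C", "p2"), ("AA",)), (("A",), ("p1",)), (("A",), ("p3",)),
     (("p3", "p4"), ("AA",))]                       # G | G* of the passage above
for p in ("p1", "p2", "p3", "p4"):
    g = apply_d(g, p)
# g is now  A => C | C => AA, B | A, B => AA, nAnA
```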
A miracle happens here! The difficulty that we encountered in GIUL is overcome by converting H×× = (A ⇒ p | ⇒ p, p, ¬A⊙¬A | p, p ⇒ A⊙A | A ⇒ p | p, p ⇒ A⊙A) into (A ⇒ p₁ | ⇒ p₂, p₄, ¬A⊙¬A | p₁, p₂ ⇒ A⊙A | A ⇒ p₃ | p₃, p₄ ⇒ A⊙A) and using (D) to replace (Dₓ).
Why do we assign a unique identification number to each leaf p ⇒ p of τ₁? We would be back in the same situation as that of τ₁ if we assigned the same index to all leaves p ⇒ p of τ₁, or if we replaced p₃ ⇒ p₃ and p₄ ⇒ p₄ by p₂ ⇒ p₂ in τ*.
Note that D(G | G*) = H₁. So we have built up a one-to-one correspondence between the proof τ* of G | G* and a proof of H₁. Observe that no sequent of G* is a copy of a sequent of G₀. In the following steps, we work on eliminating the sequents of G*.
Step 2 (extraction of elimination rules). We select A ⇒ p₂ as the focus sequent in H₁ᶜ in τ* and keep A ⇒ p₁ unchanged from H₁ᶜ downward to G | G* (see Figure 4). We thus extract a derivation from A ⇒ p₂ by pruning some sequents (or hypersequents) of τ*, which we denote by τ*_{H₁ᶜ: A⇒p₂}, as shown in Figure 5.
A derivation τ*_{H₁ᶜ: A⇒p₁} from A ⇒ p₁ is constructed by replacing p₂ with p₁, p₃ with p₅ and p₄ with p₆ in τ*_{H₁ᶜ: A⇒p₂}, as shown in Figure 6. Notice that we assign new identification numbers to the new occurrences of p in τ*_{H₁ᶜ: A⇒p₁}.
Next, we apply τ*_{H₁ᶜ: A⇒p₁} to A ⇒ p₁ in G | G*. We then obtain a proof τ^{(1)}_{H₁ᶜ: G|G*}, as shown in Figure 7, in which the part (G | G*) \ {A ⇒ p₁} is kept unchanged.
However, G^{(1)}_{H₁ᶜ: G|G*} = (⇒ p₂, B | B ⇒ p₄, ¬A⊙¬A | p₁ ⇒ C | C, p₂ ⇒ A⊙A | A ⇒ p₃ | p₃, p₄ ⇒ A⊙A | ⇒ p₁, B | B ⇒ p₆, ¬A⊙¬A | A ⇒ p₅ | p₅, p₆ ⇒ A⊙A) contains even more copies of sequents from G* and seems more complex than G | G*. We will present a unified method to tackle this in the following steps. The other derivations are shown in Figure 8, Figure 9, Figure 10 and Figure 11.
Step 3 (separation of one branch). A proof τ^{(2)}_{H₁ᶜ: G|G*} is constructed by applying sequentially τ*_{H₃ᶜ: p₃,p₄⇒A⊙A} and τ*_{H₃ᶜ: p₅,p₆⇒A⊙A} to p₃, p₄ ⇒ A⊙A and p₅, p₆ ⇒ A⊙A in G^{(1)}_{H₁ᶜ: G|G*} (the remaining part G^{(1)}_{H₁ᶜ: G|G*} \ {p₃, p₄ ⇒ A⊙A, p₅, p₆ ⇒ A⊙A} is kept unchanged), as shown in Figure 12, where
G^{(2)}_{H₁ᶜ: G|G*} = (⇒ p₂, B | B ⇒ p₄, ¬A⊙¬A | p₁ ⇒ C | C, p₂ ⇒ A⊙A | A ⇒ p₃ | ⇒ p₁, B |
B ⇒ p₆, ¬A⊙¬A | A ⇒ p₅ | p₃ ⇒ C | C, p₄ ⇒ A⊙A | p₅ ⇒ C | C, p₆ ⇒ A⊙A).
Notice that
D(B ⇒ p₄, ¬A⊙¬A | A ⇒ p₃ | p₃ ⇒ C | C, p₄ ⇒ A⊙A) = D(B ⇒ p₆, ¬A⊙¬A | A ⇒ p₅ | p₅ ⇒ C | C, p₆ ⇒ A⊙A) = (A ⇒ C | C, B ⇒ A⊙A, ¬A⊙¬A).
It is then permissible to cut off the part
(B ⇒ p₆, ¬A⊙¬A | A ⇒ p₅ | p₅ ⇒ C | C, p₆ ⇒ A⊙A)
of G^{(2)}_{H₁ᶜ: G|G*}, which corresponds to applying (EC) to D(G^{(2)}_{H₁ᶜ: G|G*}). We regard such a manipulation as a constrained contraction rule applied to G^{(2)}_{H₁ᶜ: G|G*} and denote it by (EC_Ω). Define G_{H₁ᶜ: G|G*} to be
(⇒ p₂, B | B ⇒ p₄, ¬A⊙¬A | p₁ ⇒ C | C, p₂ ⇒ A⊙A |
A ⇒ p₃ | ⇒ p₁, B | p₃ ⇒ C | C, p₄ ⇒ A⊙A).
Then we construct a proof of G_{H₁ᶜ: G|G*} by $\frac{G^{(2)}_{H_1^c:G|G^*}}{G_{H_1^c:G|G^*}}\,(EC_\Omega)$, which guarantees the validity of GIUL ⊢ D(G_{H₁ᶜ: G|G*}) under the condition GIUL ⊢ D(G^{(2)}_{H₁ᶜ: G|G*}).
A change happens here! Only one sequent of G_{H₁ᶜ: G|G*} is a copy of a sequent of G*. It is simpler than G | G*. So we are moving forward. The above procedure is called the separation of G | G* as a branch of H₁ᶜ and is reformulated as follows (see Section 7 for details).
$$G|G^* \xrightarrow{\ \tau^*_{H_1^c:A\Rightarrow p_1}\ } G^{(1)}_{H_1^c:G|G^*} \xrightarrow{\ \tau^*_{H_3^c:p_3,p_4\Rightarrow A\odot A},\ \tau^*_{H_3^c:p_5,p_6\Rightarrow A\odot A}\ } G^{(2)}_{H_1^c:G|G^*} \xrightarrow{\ (EC_\Omega)\ } G_{H_1^c:G|G^*}$$
The separation of G | G* as a branch of H₂ᶜ is carried out by a similar procedure:
$$G|G^* \xrightarrow{\ \tau^*_{H_2^c:A\Rightarrow p_3}\ } G^{(1)}_{H_2^c:G|G^*} \xrightarrow{\ \tau^*_{H_3^c:p_3,p_4\Rightarrow A\odot A}\ } G^{(2)}_{H_2^c:G|G^*} \xrightarrow{\ (EC_\Omega)\ } G_{H_2^c:G|G^*}$$
Note that D(G_{H₁ᶜ: G|G*}) = H₂ and D(G_{H₂ᶜ: G|G*}) = H₃. So we have built up one-to-one correspondences between the proofs of G_{H₁ᶜ: G|G*} and G_{H₂ᶜ: G|G*} and those of H₂ and H₃.
Step 4 (separation algorithm of multiple branches). We prove GIUL ⊢ D₀(G₀) in a direct way, i.e., only the major step of Theorem 2 is presented in the following (see Appendix A.5.4 for a complete illustration). Recall that
G_{H₁ᶜ: G|G*} = (⇒ p₂, B | B ⇒ p₄, ¬A⊙¬A | p₁ ⇒ C | C, p₂ ⇒ A⊙A |
A ⇒ p₃ | ⇒ p₁, B | p₃ ⇒ C | C, p₄ ⇒ A⊙A),
G_{H₂ᶜ: G|G*} = (A ⇒ p₁ | ⇒ p₂, B | B ⇒ p₄, ¬A⊙¬A | p₁ ⇒ C | C, p₂ ⇒ A⊙A |
B ⇒ p₃, ¬A⊙¬A | p₃ ⇒ C | C, p₄ ⇒ A⊙A).
By reassigning identification numbers to the occurrences of p in G_{H₂ᶜ: G|G*}, we get
G_{H₂ᶜ: G|G*} = (A ⇒ p₅ | ⇒ p₆, B | B ⇒ p₈, ¬A⊙¬A | p₅ ⇒ C | C, p₆ ⇒ A⊙A |
B ⇒ p₇, ¬A⊙¬A | p₇ ⇒ C | C, p₈ ⇒ A⊙A).
By applying τ*_{\{H₁ᶜ: A⇒p₅, H₂ᶜ: A⇒p₃\}} to A ⇒ p₃ in G_{H₁ᶜ: G|G*} and to A ⇒ p₅ in G_{H₂ᶜ: G|G*}, we get GIUL ⊢ G⁺, where
G⁺ ≡ (⇒ p₂, B | B ⇒ p₄, ¬A⊙¬A | p₁ ⇒ C | C, p₂ ⇒ A⊙A | ⇒ p₁, B |
p₃ ⇒ C | C, p₄ ⇒ A⊙A | ⇒ p₆, B | B ⇒ p₈, ¬A⊙¬A | p₅ ⇒ C | C, p₆ ⇒ A⊙A |
B ⇒ p₇, ¬A⊙¬A | p₇ ⇒ C | C, p₈ ⇒ A⊙A | ⇒ p₅, B | B ⇒ p₃, ¬A⊙¬A).
Why reassign identification numbers to the occurrences of p in G_{H₂ᶜ: G|G*}? This ensures that different occurrences of p receive different identification numbers in the two nodes G_{H₁ᶜ: G|G*} and G_{H₂ᶜ: G|G*} of the proof of G⁺.
By applying (EC_Ω*) to G⁺, we get GIUL_Ω ⊢ G_I, where
G_I ≡ (⇒ p₂, B | B ⇒ p₄, ¬A⊙¬A | p₁ ⇒ C | C, p₂ ⇒ A⊙A | ⇒ p₁, B |
p₃ ⇒ C | C, p₄ ⇒ A⊙A | B ⇒ p₃, ¬A⊙¬A).
A great change happens here! We have eliminated all sequents which are copies of sequents of G* and converted G | G* into G_I, in which each sequent is a copy of some sequent of G₀.
Then GIUL ⊢ D(G_I) by Lemma 8, where D(G_I) = H₀ =
(⇒ C, B | C ⇒ B, A⊙A | B ⇒ C, ¬A⊙¬A | C, B ⇒ A⊙A, ¬A⊙¬A).
So we have built up a one-to-one correspondence between the proof of G_I and that of H₀, i.e., the proof of H₀ can be constructed by applying (D) to the proof of G_I. The major steps of constructing G_I are summarized in the following figure, where D(G | G*) = H₁, D(G_{H₁ᶜ: G|G*}) = H₂, D(G_{H₂ᶜ: G|G*}) = H₃ and D(G_I) = H₀.
In the above example, D(G_I) = D₀(G₀), but that is not always the case. In general, we can prove that GL ⊢ D₀(G₀) if GL ⊢ D(G_I), which is shown in the proof of the main theorem on page 42. This example shows that the proof of the main theorem essentially presents an algorithm for constructing a proof of D₀(G₀) from τ.

4. Preprocessing of Proof Tree

Let τ be a cut-free proof in GL of the hypersequent G₀ of the main theorem, which exists by Lemma 1. Starting with τ, in this section we construct a proof τ* which contains no application of (EC) and has some further properties.
Lemma 2.
(i) If GL ⊢ Γ₁ ⇒ A, Δ₁ and GL ⊢ Γ₂ ⇒ B, Δ₂,
then GL ⊢ Γ₁ ⇒ A∧B, Δ₁ | Γ₂ ⇒ A∧B, Δ₂;
(ii) If GL ⊢ Γ₁, A ⇒ Δ₁ and GL ⊢ Γ₂, B ⇒ Δ₂,
then GL ⊢ Γ₁, A∨B ⇒ Δ₁ | Γ₂, A∨B ⇒ Δ₂.
Proof. 
(i)
$$\dfrac{\Gamma_2 \Rightarrow B, \Delta_2 \quad \dfrac{\Gamma_1 \Rightarrow A, \Delta_1 \quad \dfrac{B \Rightarrow B \quad \dfrac{A \Rightarrow A \quad \dfrac{A \Rightarrow A \quad B \Rightarrow B}{A \Rightarrow B \mid B \Rightarrow A}\,(COM)}{A \Rightarrow A \wedge B \mid B \Rightarrow A}\,({\wedge}r)}{A \Rightarrow A \wedge B \mid B \Rightarrow A \wedge B}\,({\wedge}r)}{\Gamma_1 \Rightarrow A \wedge B, \Delta_1 \mid B \Rightarrow A \wedge B}\,(CUT)}{\Gamma_1 \Rightarrow A \wedge B, \Delta_1 \mid \Gamma_2 \Rightarrow A \wedge B, \Delta_2}\,(CUT)$$
(ii) is proved by a procedure similar to that of (i) and is omitted.  □
We introduce two new rules by Lemma 2.
Definition 12.
$$\frac{G_1 \mid \Gamma_1 \Rightarrow A, \Delta_1 \quad G_2 \mid \Gamma_2 \Rightarrow B, \Delta_2}{G_1 \mid G_2 \mid \Gamma_1 \Rightarrow A \wedge B, \Delta_1 \mid \Gamma_2 \Rightarrow A \wedge B, \Delta_2}\,({\wedge}rw) \qquad \frac{G_1 \mid \Gamma_1, A \Rightarrow \Delta_1 \quad G_2 \mid \Gamma_2, B \Rightarrow \Delta_2}{G_1 \mid G_2 \mid \Gamma_1, A \vee B \Rightarrow \Delta_1 \mid \Gamma_2, A \vee B \Rightarrow \Delta_2}\,({\vee}lw)$$
are called the generalized (∧r) and (∨l) rules, respectively.
Now, we begin to process τ as follows.
Step 1. A proof τ 1 is constructed by replacing inductively all applications of
$$\frac{G_1 \mid \Gamma \Rightarrow A, \Delta \quad G_2 \mid \Gamma \Rightarrow B, \Delta}{G_1 \mid G_2 \mid \Gamma \Rightarrow A \wedge B, \Delta}\,({\wedge}r) \qquad \left(\text{or } \frac{G_1 \mid \Gamma, A \Rightarrow \Delta \quad G_2 \mid \Gamma, B \Rightarrow \Delta}{G_1 \mid G_2 \mid \Gamma, A \vee B \Rightarrow \Delta}\,({\vee}l)\right)$$
in τ with
$$\frac{\dfrac{G_1 \mid \Gamma \Rightarrow A, \Delta \quad G_2 \mid \Gamma \Rightarrow B, \Delta}{G_1 \mid G_2 \mid \Gamma \Rightarrow A \wedge B, \Delta \mid \Gamma \Rightarrow A \wedge B, \Delta}\,({\wedge}rw)}{G_1 \mid G_2 \mid \Gamma \Rightarrow A \wedge B, \Delta}\,(EC)$$
(accordingly
$$\frac{\dfrac{G_1 \mid \Gamma, A \Rightarrow \Delta \quad G_2 \mid \Gamma, B \Rightarrow \Delta}{G_1 \mid G_2 \mid \Gamma, A \vee B \Rightarrow \Delta \mid \Gamma, A \vee B \Rightarrow \Delta}\,({\vee}lw)}{G_1 \mid G_2 \mid \Gamma, A \vee B \Rightarrow \Delta}\,(EC)$$
for (∨l)).
The replacements in Step 1 are local and the root of τ 1 is also labeled by G 0 .
Definition 13.
We may sometimes regard $\frac{G}{G}$ as a structural rule of GL and denote it by (ID_Ω) for convenience. The focus sequent of (ID_Ω) is undefined.
Lemma 3.
Let $\frac{G|S^m}{G|S}\,(EC^*) \in \tau_1$ and Th_{τ₁}(G|S) = (H₀, H₁, …, Hₙ), where H₀ = G|S and Hₙ = G₀. A tree τ′ is constructed by replacing each H_k in τ₁ with H_k | S^{m−1} for all 0 ≤ k ≤ n. Then τ′ is a proof of G₀ | S^{m−1}.
Proof. 
The proof is by induction on n. Since τ₁(G | S^m) is a proof and $\frac{G|S^m}{H_0|S^{m-1}}\,(ID_\Omega)$ is valid in GL (note that H₀ | S^{m−1} = G | S^m), τ′(H₀ | S^{m−1}) is a proof. Suppose that τ′(H_{n−1} | S^{m−1}) is a proof. Since $\frac{H_{n-1} \ \ G'}{H_n}\,(II)$ (or $\frac{H_{n-1}}{H_n}\,(I)$) is in τ₁, $\frac{H_{n-1}|S^{m-1} \ \ G'}{H_n|S^{m-1}}$ (or $\frac{H_{n-1}|S^{m-1}}{H_n|S^{m-1}}$) is an application of the same rule (II) (or (I)). Thus τ′(H_n | S^{m−1}) is a proof.  □
Definition 14.
The manipulation described in Lemma 3 is called a sequent-inserting operation.
Clearly, the number of (EC*)-applications in τ′ is smaller than that in τ₁. Next, we continue to process τ′.
Step 2. Let $\frac{G_1|\{S_1^c\}^{m_1}}{G_1|S_1^c}\,(EC^*), \dots, \frac{G_N|\{S_N^c\}^{m_N}}{G_N|S_N^c}\,(EC^*)$ be all the applications of (EC*) in τ₁ and let G₀* := {S₁ᶜ}^{m₁−1} | ⋯ | {S_Nᶜ}^{m_N−1}. By repeatedly applying sequent-inserting operations, we construct a proof of G₀ | G₀* in GL without applications of (EC*) and denote it by τ₂.
Remark 1.
(i) τ₂ is constructed by converting (EC) into (ID_Ω); (ii) Each node of τ₂ has the form H₀ | H₀*, where H₀ ∈ τ₁ and H₀* is a (possibly empty) subset of G₀*.
We need the following construction to eliminate applications of ( E W ) in τ 2 .
Construction 1.
Let H ∈ τ₂, H′ ⊆ H and Th_{τ₂}(H) = (H₀, …, Hₙ), where H₀ = H and Hₙ = G₀ | G₀*. The hypersequents H_k^{H:H′} and trees τ²_{H:H′}(H_k^{H:H′}), for all 0 ≤ k ≤ n, are constructed inductively as follows.
(i) H₀^{H:H′} := H′, and τ²_{H:H′}(H₀^{H:H′}) consists of the single node H′;
(ii) Let $\frac{G'|S' \quad G''|S''}{G'|G''|H''}\,(II)$ (or $\frac{G'|S'}{G'|S''}\,(I)$) be in τ₂ with H_k = G′|S′ and H_{k+1} = G′|G″|H″ (accordingly H_{k+1} = G′|S″ for (I)) for some 0 ≤ k ≤ n−1.
If S′ ∈ H_k^{H:H′}, then
H_{k+1}^{H:H′} := (H_k^{H:H′} \ {S′}) | G″ | H″
(accordingly H_{k+1}^{H:H′} := (H_k^{H:H′} \ {S′}) | S″ for (I)),
and τ²_{H:H′}(H_{k+1}^{H:H′}) is constructed by combining the trees
τ²_{H:H′}(H_k^{H:H′}) and τ₂(G″|S″) with $\frac{H_k^{H:H'} \quad G''|S''}{H_{k+1}^{H:H'}}\,(II)$
(accordingly τ²_{H:H′}(H_k^{H:H′}) with $\frac{H_k^{H:H'}}{H_{k+1}^{H:H'}}\,(I)$ for (I));
otherwise H_{k+1}^{H:H′} := H_k^{H:H′}, and τ²_{H:H′}(H_{k+1}^{H:H′}) is constructed by combining
τ²_{H:H′}(H_k^{H:H′}) with $\frac{H_k^{H:H'}}{H_{k+1}^{H:H'}}\,(ID_\Omega)$.
(iii) Let $\frac{G'}{G'|S}\,(EW) \in \tau_2$ with H_k = G′ and H_{k+1} = G′|S; then H_{k+1}^{H:H′} := H_k^{H:H′}, and τ²_{H:H′}(H_{k+1}^{H:H′}) is constructed by combining τ²_{H:H′}(H_k^{H:H′}) with $\frac{H_k^{H:H'}}{H_{k+1}^{H:H'}}\,(ID_\Omega)$.
Lemma 4.
(i) H_k^{H:H′} ⊆ H_k for all 0 ≤ k ≤ n;
(ii) τ²_{H:H′}(H_k^{H:H′}) is a derivation of H_k^{H:H′} from H′ without (EC).
Proof. 
The proof is by induction on k. For the base step, H 0 H : H = H and τ H : H 2 ( H 0 H : H ) consists of a single node H . Then H 0 H : H H 0 = H , τ H : H 2 ( H 0 H : H ) is a derivation of H 0 H : H from H without ( E C ) .
For the induction step, suppose that H k H : H and τ H : H 2 ( H k H : H ) be constructed such that (i) and (ii) hold for some 0 k n 1 . There are two cases to be considered.
Case 1. Let G | S G | S ( I ) τ 2 , H k = G | S and H k + 1 = G | S . If S H k H : H then H k H : H \ { S } G by H k H : H H k = G | S . Thus H k + 1 H : H = ( H k H : H \ { S } ) | S G | S = H k + 1 . Otherwise S H k H : H then H k H : H G by H k H : H H k = G | S . Thus H k + 1 H : H H k + 1 by H k + 1 H : H = H k H : H G H k + 1 . τ H : H 2 ( H k + 1 H : H ) is a derivation of H k + 1 H : H from H without ( E C ) since τ H : H 2 ( H k H : H ) is such one and H k H : H H k + 1 H : H ( I ) is a valid instance of a rule ( I ) of GL . The case of applications of the two-premise rule is proved by a similar procedure and omitted.
Case 2. Let G G | S ( E W ) τ 2 , H k = G and H k + 1 = G | S . Then H k + 1 H : H H k + 1 by H k + 1 H : H = H k H : H H k H k + 1 . τ H : H 2 ( H k + 1 H : H ) is a derivation of H k + 1 H : H from H without ( E C ) since τ H : H 2 ( H k H : H ) is such a derivation and H k H : H H k + 1 H : H ( I D Ω ) is valid by Definition 13.   □
Definition 15.
The manipulation described in Construction 1 is called a derivation-pruning operation.
Notation 3.
We denote H n H : H by G H : H 2 , τ H : H 2 ( H n H : H ) by τ H : H 2 and say that H is transformed into G H : H 2 in τ 2 .
Then Lemma 4 shows that H ̲ G H : H 2 τ H : H 2 , G H : H 2 G 0 | G 0 * . Now, we continue to process τ as follows.
Step 3. Let G G | S ( E W ) τ 2 then τ G | S : G 2 ( H n G | S : G ) is a derivation of H n G | S : G from G thus a proof of H n G | S : G is constructed by combining τ 2 ( G ) and τ G | S : G 2 ( H n G | S : G ) with G G ( I D Ω ) . By repeatedly applying the procedure above, we construct a proof τ 3 of G 1 | G 1 * without ( E W ) in GL , where G 1 G 0 , G 1 * G 0 * by Lemma 4(i).
Step 4. Let Γ , p , Δ τ 3 (or Γ , p , Δ , G | Γ Δ G | Γ , p Δ ( W L ) ) then there exists Γ Δ H such that p Γ for all H T h τ 3 ( Γ , p , Δ ) (accordingly H T h τ 3 ( Γ , p , Δ ) , H T h τ 3 ( Γ , p Δ ) ) thus a proof is constructed by replacing top-down p in each Γ with ⊤.
If Γ , p , Δ (or Γ , p , Δ , G | Γ Δ G | Γ p , Δ ( W R ) ) is a leaf of τ 3 , then there exists Γ Δ H such that p Δ for all H T h τ 3 ( Γ , p , Δ ) (accordingly H T h τ 3 ( Γ , p , Δ ) or H T h τ 3 ( Γ p , Δ ) ), and thus a proof is constructed by replacing top-down p in each Γ with ⊥.
Repeatedly applying the procedure above, we construct a proof τ 4 of G 2 | G 2 * in GL such that there is no occurrence of p in Γ or Δ at any leaf labeled by Γ , Δ or Γ , Δ , or p is not the weakening formula A in G | Γ Δ G | Γ A , Δ ( W R ) or G | Γ Δ G | Γ , A Δ ( W L ) when ( W R ) or ( W L ) is available. Define two operations σ l and σ r on sequents by σ l ( Γ , p Δ ) : = Γ , Δ and σ r ( Γ p , Δ ) : = Γ , Δ . Then G 2 | G 2 * is obtained by applying σ l and σ r to some designated sequents in G 1 | G 1 * .
Definition 16.
The manipulation described in Step 4 is called the eigenvariable-replacing operation.
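As a toy rendering of the operations σ l and σ r of Step 4, each simply deletes the one displayed occurrence of the eigenvariable p from the designated side of a sequent. The encoding below (a sequent as a pair of formula lists, with p written as the string 'p') is ours, for illustration only:

```python
# sigma_l(Gamma, p |- Delta) := Gamma |- Delta and
# sigma_r(Gamma |- p, Delta) := Gamma |- Delta: each deletes the single
# displayed occurrence of the eigenvariable p from the designated side.
def sigma_l(seq):
    ante, succ = seq
    i = ante.index('p')                    # the displayed occurrence of p
    return (ante[:i] + ante[i + 1:], succ)

def sigma_r(seq):
    ante, succ = seq
    i = succ.index('p')
    return (ante, succ[:i] + succ[i + 1:])
```

For instance, sigma_l applied to (['A', 'p'], ['B']) yields (['A'], ['B']); other occurrences of p, if any, are left untouched.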
Step 5. A proof τ * is constructed from τ 4 by assigning inductively one unique identification number to each occurrence of p in τ 4 as follows.
One unique identification number, which is a positive integer, is assigned to each leaf of the form p p in τ 4 which corresponds to p k p k in τ * . Other nodes of τ 4 are processed as follows.
  • Let G 1 | Γ , λ p μ p , Δ G 1 | Γ , λ p μ p , Δ ( I ) τ 4 . Suppose that all occurrences of p in G 1 | Γ , λ p μ p , Δ are assigned identification numbers and have the form G 1 | Γ , p i 1 , , p i λ p j 1 , , p j μ , Δ in τ * , which we often write as G 1 | Γ , { p i k } k = 1 λ { p j k } k = 1 μ , Δ . Then G 1 | Γ , λ p μ p , Δ has the form G 1 | Γ , { p i k } k = 1 λ { p j k } k = 1 μ , Δ .
  • Let G G G ( r w ) τ 4 , where G G 1 | Γ , λ p μ p , A , Δ , G G 2 | Γ , λ p μ p , B , Δ , G G 1 | G 2 | Γ , λ p μ p , A B , Δ | Γ , λ p μ p , A B , Δ . Suppose that G and G have the forms G 1 | Γ , { p i 1 k } k = 1 λ { p j 1 k } k = 1 μ , A , Δ and G 2 | Γ , { p i 2 k } k = 1 λ { p j 2 k } k = 1 μ , B , Δ in τ * , respectively. Then G has the form G 1 | G 2 | Γ , { p i 1 k } k = 1 λ { p j 1 k } k = 1 μ , A B , Δ | Γ , { p i 2 k } k = 1 λ { p j 2 k } k = 1 μ , A B , Δ . All applications of ( l w ) are processed by the procedure similar to that of ( r w ) .
  • Let G G G ( r ) τ 4 , where G G 1 | Γ 1 , λ 1 p μ 1 p , A , Δ 1 ,
    G G 2 | Γ 2 , λ 2 p μ 2 p , B , Δ 2 , G G 1 | G 2 | Γ 1 , Γ 2 , ( λ 1 + λ 2 ) p ( μ 1 + μ 2 ) p , A B , Δ 1 , Δ 2 . Suppose that G and G have the forms G 1 | Γ 1 , { p i 1 k } k = 1 λ 1 { p j 1 k } k = 1 μ 1 , A , Δ 1 and G 2 | Γ 2 , { p i 2 k } k = 1 λ 2 { p j 2 k } k = 1 μ 2 , B , Δ 2 in τ * , respectively. Then G has the form G 1 | G 2 | Γ 1 , Γ 2 , { p i 1 k } k = 1 λ 1 , { p i 2 k } k = 1 λ 2 { p j 1 k } k = 1 μ 1 , { p j 2 k } k = 1 μ 2 , A B , Δ 1 , Δ 2 . All applications of ( l ) are processed by the procedure similar to that of ( r ) .
  • Let G G G ( C O M ) τ 4 , where G G 1 | Γ 1 , Π 1 , λ 1 p μ 1 p , Σ 1 , Δ 1 ,
    G G 2 | Γ 2 , Π 2 , λ 2 p μ 2 p , Σ 2 , Δ 2 , G G 1 | G 2 | Γ 1 , Γ 2 , ( λ 11 + λ 21 ) p ( μ 11 + μ 21 ) p , Δ 1 , Δ 2 |
    Π 1 , Π 2 , ( λ 12 + λ 22 ) p ( μ 12 + μ 22 ) p , Σ 1 , Σ 2 , where λ 11 + λ 12 = λ 1 , λ 21 + λ 22 = λ 2 , μ 11 + μ 12 = μ 1 , μ 21 + μ 22 = μ 2 .
Suppose that G and G have the forms G 1 | Γ 1 , Π 1 , { p i k 1 } k = 1 λ 1 { p j k 1 } k = 1 μ 1 , Σ 1 , Δ 1 and G 2 | Γ 2 , Π 2 , { p i k 2 } k = 1 λ 2 { p j k 2 } k = 1 μ 2 , Σ 2 , Δ 2 in τ * , respectively. Then G has the form
G 1 | G 2 | Γ 1 , Γ 2 , { p i 1 k 1 } k = 1 λ 11 , { p i 1 k 2 } k = 1 λ 21 { p j 1 k 1 } k = 1 μ 11 , { p j 1 k 2 } k = 1 μ 21 , Δ 1 , Δ 2 |
Π 1 , Π 2 , { p i 2 k 1 } k = 1 λ 12 , { p i 2 k 2 } k = 1 λ 22 { p j 2 k 1 } k = 1 μ 12 , { p j 2 k 2 } k = 1 μ 22 , Σ 1 , Σ 2 ,
where
{ p i k w } k = 1 λ w = { p i 1 k w } k = 1 λ w 1 { p i 2 k w } k = 1 λ w 2 , { p j k w } k = 1 μ w = { p j 1 k w } k = 1 μ w 1 { p j 2 k w } k = 1 μ w 2
for w = 1 , 2 .
Definition 17.
The manipulation described in Step 5 is called the eigenvariable-labeling operation.
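The labeling of Step 5 can be viewed as a single depth-first pass over τ 4 : every leaf p p consumes a fresh positive integer, and every inference forwards the (disjoint) union of its premises' identification numbers to its conclusion. A minimal sketch under this reading; the tree encoding ('leaf',) / ('rule', premises) and the function names are ours, not the paper's:

```python
# Sketch of the eigenvariable-labeling operation: a depth-first pass in which
# each leaf p |- p receives a fresh positive integer and each inference
# passes the union of its premises' numbers downward.
from itertools import count

_fresh = count(1)

def label(node):
    """Return the set of identification numbers in the labeled conclusion."""
    if node[0] == 'leaf':                  # p |- p  becomes  p_k |- p_k
        return {next(_fresh)}
    ids = set()
    for premise in node[1]:
        new = label(premise)
        assert ids.isdisjoint(new)         # premises carry disjoint numbers
        ids |= new
    return ids
```

The disjointness assertion reflects the fact, recorded later as Lemma 5(vi), that incomparable nodes of τ * never share identification numbers.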
Notation 4.
Let G 2 and G 2 * be converted to G and G * in τ * , respectively. Then τ * is a proof of G | G * .
In the preprocessing of τ , each G i | { S i c } m i G i | S i c ( E C * ) i is converted into G i | { S i c } m i G i | { S i c } m i ( I D Ω ) i in Step 2, where G i G i by Lemma 3. G G | S ( E W ) τ 2 is converted into G G ( I D Ω ) in Step 3, where G G by Lemma 4(i). Some G | Γ , p Δ τ 3 (or G | Γ p , Δ ) is revised as G | Γ , Δ (or G | Γ , Δ ) in Step 4. Each occurrence of p in τ 4 is assigned the unique identification number in Step 5. The whole preprocessing above is depicted by Figure 13.
Notation 5.
Let G i | { S i c } m i G i | S i c ( E C * ) i , 1 i N be all ( E C * ) -nodes of τ 1 and let G i | { S i c } m i be converted to G i | { S i c } m i in τ * . Note that there are no identification numbers for occurrences of variable p in S i c G i | { S i c } m i , whereas they are assigned to p in S i c G i | { S i c } m i . But we use the same notations for S i c G i | { S i c } m i and S i c G i | { S i c } m i for simplicity.
In the whole paper, let H i c = G i | { S i c } m i denote the unique node of τ * such that H i c G i | { S i c } m i and S i c is the focus sequent of H i c in τ * , in which case we denote the focus sequent among { S i c } m i by S i 1 c and the others by S i 2 c | | S i m i c . We sometimes denote H i c also by G i | { S i v c } v = 1 m i or G i | S i 1 c | { S i v c } v = 2 m i . We then write G * as { S i v c } i = 1 N v = 2 m i .
We call H i c , S i u c the i-th pseudo- ( E C ) node of τ * and pseudo- ( E C ) sequent, respectively. We abbreviate pseudo- E C as p E C . Let H τ * , by S i c H we mean that S i u c H for some 1 u m i .
It is possible that there does not exist H i c G i | { S i c } m i such that S i c is the focus sequent of H i c in τ * , in which case { S i c } m i G | G * ; then it has no effect on our argument to treat all such S i c as members of G. So we assume that all H i c are always defined for all G i | { S i c } m i in τ * , i.e., H i c > G | G * .
Proposition 2.
(i) { S i v c } v = 2 m i H for all H H i c ; (ii) G * = { S i v c } i = 1 N v = 2 m i .
Now, we replace locally each G G ( I D Ω ) in τ * with G and denote the resulting proof also by τ * , which differs inessentially from the original one but simplifies subsequent arguments. We introduce the system GL Ω as follows.
Definition 18.
GL Ω is a restricted subsystem of GL such that
(i) p is designated as the unique eigenvariable, by which we mean that it is not used to build up any formula containing logical connectives and is used only as a sequent-formula.
(ii) Each occurrence of p on each side of every component of a hypersequent in GL is assigned one unique identification number i and written as p i in GL Ω . Initial sequent p p of GL has the form p i p i in GL Ω .
(iii) Each sequent S of GL in the form Γ , λ p μ p , Δ has the form
Γ , { p i k } k = 1 λ { p j k } k = 1 μ , Δ
in GL Ω , where p does not occur in Γ or Δ, i k i l for all 1 k < l λ , j k j l for all 1 k < l μ . Define v l ( S ) = { i 1 , , i λ } and v r ( S ) = { j 1 , , j μ } . Let G be a hypersequent of GL Ω in the form S 1 | | S n then v l ( S k ) v l ( S l ) = and v r ( S k ) v r ( S l ) = for all 1 k < l n . Define v l ( G ) = k = 1 n v l ( S k ) , v r ( G ) = k = 1 n v r ( S k ) . Here, l and r in v l and v r indicate the left side and right side of a sequent, respectively.
(iv) A hypersequent G of GL Ω is called closed if v l ( G ) = v r ( G ) . Two hypersequents G and G of GL Ω are called disjoint if v l ( G ) v l ( G ) = , v l ( G ) v r ( G ) = , v r ( G ) v l ( G ) = and v r ( G ) v r ( G ) = . G is a copy of G if they are disjoint and there exist two bijections σ l : v l ( G ) v l ( G ) and σ r : v r ( G ) v r ( G ) such that G can be obtained by applying σ l to antecedents of sequents in G and σ r to succedents of sequents in G , i.e., G = σ r σ l ( G ) .
(v) A closed hypersequent G | G | G can be contracted as G | G in GL Ω under the condition that G and G are closed and G is a copy of G . We call it the constraint external contraction rule and denote it by
G | G | G G | G ( E C Ω ) .
Furthermore, if there do not exist two closed hypersequents H , H G | G such that H is a copy of H then we call it the fully constraint contraction rule and denote by G | G | G ̲ G | G E C Ω * .
(vi) ( E W ) and ( C U T ) of GL are forbidden. ( E C ) , ( r ) and ( l ) of GL are replaced with ( E C Ω ) , ( r w ) and ( l w ) in GL Ω , respectively.
(vii) G 1 | S 1 and G 2 | S 2 are closed and disjoint for each two-premise rule
G 1 | S 1 G 2 | S 2 G 1 | G 2 | H ( I I ) of GL Ω , and G | S is closed for each one-premise rule G | S G | S ( I ) .
(viii) p does not occur in Γ or Δ for each initial sequent Γ , Δ or Γ , Δ , and p does not act as the weakening formula A in G | Γ Δ G | Γ A , Δ ( W R ) or G | Γ Δ G | Γ , A Δ ( W L ) when ( W R ) or ( W L ) is available.
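The closedness and disjointness conditions of Definition 18 depend only on the index sets v l , v r . The sketch below models a sequent by just these two sets; the class and helper names are illustrative, not part of the calculus:

```python
# Toy model of Definition 18(iii)-(iv): a sequent is reduced to the index
# sets v_l, v_r of its labeled eigenvariable occurrences p_i.
from dataclasses import dataclass

@dataclass(frozen=True)
class Sequent:
    vl: frozenset  # identification numbers of p on the antecedent side
    vr: frozenset  # identification numbers of p on the succedent side

def vl(G):
    return frozenset().union(*(S.vl for S in G)) if G else frozenset()

def vr(G):
    return frozenset().union(*(S.vr for S in G)) if G else frozenset()

def closed(G):
    """A hypersequent G is closed iff v_l(G) = v_r(G)."""
    return vl(G) == vr(G)

def disjoint(G1, G2):
    """G1 and G2 are disjoint iff all four pairwise intersections of
    v_l(G1), v_r(G1) with v_l(G2), v_r(G2) are empty."""
    return not ((vl(G1) | vr(G1)) & (vl(G2) | vr(G2)))
```

For instance, the two-sequent hypersequent with index sets ( { 1 } , { 2 } ) and ( { 2 } , { 1 } ) is closed, while either sequent alone is not.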
Lemma 5.
Let τ be a cut-free proof of G 0 in L and τ * be the tree resulting from preprocessing of τ.
(i) If G | S G | S ( I ) τ * then v l ( G | S ) = v r ( G | S ) = v r ( G | S ) = v l ( G | S ) ;
(ii) If G | S G | S G | G | H ( I I ) τ * then v l ( G | G | H ) = v l ( G | S ) v l ( G | S ) = v r ( G | G | H ) = v r ( G | S ) v r ( G | S ) ;
(iii) If H τ * and k v l ( H ) then k v r ( H ) ;
(iv) If H τ * and k v l ( H ) (or k v r ( H ) ) then H p k p k ;
(v) τ * is a proof of G | G * in GL Ω without ( E C Ω ) ;
(vi) If H , H τ * and H H then v l ( H ) v l ( H ) = , v r ( H ) v r ( H ) = .
Proof. 
Claims from (i) to (iv) follow immediately from Step 5 in the preprocessing of τ and Definition 18. Claim (v) follows from Notation 4 and Definition 18. Only (vi) is proved, as follows.
Suppose that k v l ( H ) v l ( H ) . Then H p k p k , H p k p k by Claim (iv). Thus H H or H H , a contradiction with H H ; hence v l ( H ) v l ( H ) = .
v r ( H ) v r ( H ) = is proved by a similar procedure and omitted.  □

5. The Generalized Density Rule ( D ) for GL Ω

In this section, we define the generalized density rule ( D ) for GL Ω and prove that it is admissible in GL Ω .
Definition 19.
Let G be a closed hypersequent of GL Ω and S G . Define S G = { H : S H G , v l ( H ) = v r ( H ) } , i.e., S G is the minimal closed unit of G containing S . In general, for G G , define G G = { H : G H G , v l ( H ) = v r ( H ) } .
Clearly, S G = S if v l ( S ) = v r ( S ) or p does not occur in S. The following construction gives a procedure to construct S G for any given S G .
Construction 2.
Let G and S be as above. A sequence G 1 , G 2 , , G n of hypersequents is constructed recursively as follows. (i) G 1 = { S } ; (ii) Suppose that G k is constructed for k 1 . If v l ( G k ) v r ( G k ) then there exists i k + 1 v l ( G k ) \ v r ( G k ) (or i k + 1 v r ( G k ) \ v l ( G k ) ) thus there exists the unique S k + 1 G \ G k such that i k + 1 v r ( S k + 1 ) \ v l ( S k + 1 ) (or i k + 1 v l ( S k + 1 ) \ v r ( S k + 1 ) ) by v l ( G ) = v r ( G ) and Definition 18 then let G k + 1 = G k | S k + 1 ; otherwise the procedure terminates and n : = k .
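Construction 2 is in effect a worklist closure: starting from { S } , one keeps adjoining the unique sequent of G that carries an unmatched identification number, until the left and right index sets of the unit coincide. A minimal sketch under an illustrative encoding (a sequent as a pair ( v l , v r ) of frozensets of identification numbers; the function name is ours):

```python
# Worklist reading of Construction 2: adjoin, one at a time, the unique
# sequent of G carrying an identification number still unmatched in the
# growing unit, until v_l(unit) = v_r(unit).
def minimal_closed_unit(G, S):
    unit = [S]
    while True:
        l = frozenset().union(*(s[0] for s in unit))
        r = frozenset().union(*(s[1] for s in unit))
        unmatched = (l - r) | (r - l)
        if not unmatched:
            return unit                    # unit is <S>_G
        i = next(iter(unmatched))
        # identification numbers are unique per side, so exactly one sequent
        # of G outside the unit carries i (on the opposite side)
        nxt = next(s for s in G if s not in unit and i in (s[0] | s[1]))
        unit.append(nxt)
```

Since v l ( G ) = v r ( G ) and each pass strictly enlarges the unit within the finite G, the loop terminates; the unit it returns also witnesses the counting bound of Lemma 6(iv).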
Lemma 6.
(i) G n = S G ;
(ii) Let S S G then S G = S G ;
(iii) Let G G | H , G G | H , v l ( G ) = v r ( G ) , v l ( G ) = v r ( G ) , v l ( H ) v r ( H ) = v l ( H ) v r ( H ) then H G \ H = H G \ H , where A B is the symmetric difference of two multisets A , B ;
(iv) Let v l r ( G k ) = v l ( G k ) v r ( G k ) then v l r ( G k ) + 1 G k for all 1 k n ;
(v) v l ( S G ) + 1 S G .
Proof. 
(i) Since G k G k + 1 for 1 k n 1 and S G 1 then S G n G thus S G G n by v l ( G n ) = v r ( G n ) . We prove G k S G for 1 k n by induction on k in the following. Clearly, G 1 S G . Suppose that G k S G for some 1 k n 1 . Since i k + 1 v l ( G k ) \ v r ( G k ) (or i k + 1 v r ( G k ) \ v l ( G k ) ) and i k + 1 v r ( S k + 1 ) (or i k + 1 v l ( S k + 1 ) ) then S k + 1 S G by G k S G and v l ( S G ) = v r ( S G ) thus G k + 1 S G . Then G n S G thus G n = S G .
(ii) By (i), S G = S 1 | S 2 | | S n , where S 1 = S . Then S = S k for some 1 k n thus i k v r ( S k ) \ v l ( S k ) (or i k v l ( S k ) \ v r ( S k ) ) hence there exists the unique k < k such that i k v l ( S k ) \ v r ( S k ) (or i k v r ( S k ) \ v l ( S k ) ) if k 2 hence S k S k G . Repeatedly, S 1 S k G , i.e., S S G then S G S G . S G S G by S S G then S G = S G .
(iii) It holds immediately from Construction 2 and (i).
(iv) The proof is by induction on k. For the base step, let k = 1 then G k = 1 thus v l r ( G k ) + 1 G k by v l r ( G k ) 0 . For the induction step, suppose that v l r ( G k ) + 1 G k for some 1 k < n . Then v l r ( G k + 1 ) v l r ( G k ) + 1 by i k + 1 v l r ( G k + 1 ) \ v l r ( G k ) and v l r ( G k ) v l r ( G k + 1 ) . Then v l r ( G k + 1 ) + 1 G k + 1 by G k + 1 = G k + 1 = k + 1 .
(v) It holds by (iv) and v l r ( S G ) = v l ( S G ) .  □
Definition 20.
Let G = S 1 | | S r and S l be in the form Γ l , { p i k l } k = 1 λ l { p j k l } k = 1 μ l , Δ l for 1 l r .
(i) If S G and S G be S k 1 | | S k u then D G ( S ) is defined as
Γ k 1 , , Γ k u ( v l ( S G ) S G + 1 ) t , Δ k 1 , , Δ k u ;
(ii) Let k = 1 v S q k G = G and S q k G S q l G = for all 1 k < l v then D ( G ) is defined as D G ( S q 1 ) | | D G ( S q v ) .
(iii) We call ( D ) the generalized density rule of GL Ω , whose conclusion D ( G ) is defined by (ii) if its premise is G.
Clearly, D ( p k p k ) is t and D ( S ) = S if p does not occur in S.
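The only numeric datum that D G extracts from a unit S G , besides the side formulas, is the number of padding t's prescribed by Definition 20(i), namely v l ( S G ) S G + 1 . The sketch below (our illustrative encoding: a unit is a list of ( v l , v r ) frozenset pairs) checks that this count is additive when two disjoint units are fused into one, which is the counting identity at the heart of Lemma 7:

```python
# Numeric content of Definition 20(i): the number of t's padded into D_G(S)
# is |v_l(<S>_G)| - |<S>_G| + 1.  merged_t_count mirrors Lemma 7, where two
# sequents S1, S2 are fused into a single sequent S, so the v_l sizes add
# while the unit shrinks by one.
def t_count(unit):
    vl_size = len(frozenset().union(*(s[0] for s in unit)))
    return vl_size - len(unit) + 1

def merged_t_count(u1, u2):
    vl_size = len(frozenset().union(*(s[0] for s in u1 + u2)))
    return vl_size - (len(u1) + len(u2) - 1) + 1
```

For disjoint units, merged_t_count(u1, u2) equals t_count(u1) + t_count(u2), which is exactly the displayed equation in the proof of Lemma 7.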
Lemma 7.
Let G G | S and G G | S 1 | S 2 be closed and S 1 G S 2 G = , where S 1 = Γ 1 , { p i k 1 } k = 1 λ 1 { p j k 1 } k = 1 μ 1 , Δ 1 ; S 2 = Γ 2 , { p i k 2 } k = 1 λ 2 { p j k 2 } k = 1 μ 2 , Δ 2 ; S = Γ 1 , Γ 2 , { p i k 1 } k = 1 λ 1 , { p i k 2 } k = 1 λ 2 { p j k 1 } k = 1 μ 1 , { p j k 2 } k = 1 μ 2 , Δ 1 , Δ 2 ; D G ( S 1 ) = Γ 1 , Σ 1 Π 1 , Δ 1 and D G ( S 2 ) = Γ 2 , Σ 2 Π 2 , Δ 2 . Then D G ( S ) = Γ 1 , Σ 1 , Γ 2 , Σ 2 Π 1 , Δ 1 , Π 2 , Δ 2 .
Proof. 
Since S 1 G S 2 G = then S G = S 1 G \ { S 1 } S 2 G \ { S 2 } { S } by v l ( S ) = v l ( S 1 | S 2 ) , v r ( S ) = v r ( S 1 | S 2 ) and Lemma 6 (iii). Thus v l ( S G ) = v l ( S 1 G ) + v l ( S 2 G ) , S G = S 1 G + S 2 G 1 . Hence
v l ( S G ) S G + 1 = v l ( S 1 G ) S 1 G + 1 + v l ( S 2 G ) S 2 G + 1 .
Therefore D G ( S ) = Γ 1 , Σ 1 , Γ 2 , Σ 2 Π 1 , Δ 1 , Π 2 , Δ 2 by
Π 1 = ( v l ( S 1 G ) S 1 G + 1 ) t , Π 1 \ ( v l ( S 1 G ) S 1 G + 1 ) t
Π 2 = ( v l ( S 2 G ) S 2 G + 1 ) t , Π 2 \ ( v l ( S 2 G ) S 2 G + 1 ) t
D G ( S ) = Γ 1 , Σ 1 , Γ 2 , Σ 2 ( v l ( S G ) S G + 1 ) t ,
Π 1 \ ( v l ( S 1 G ) S 1 G + 1 ) t , Δ 1 , Π 2 \ ( v l ( S 2 G ) S 2 G + 1 ) t , Δ 2
where λ t = { t , , t } λ .  □
Lemma 8.
(Appendix A.5.1) If there exists a proof τ of G in GL Ω then there exists a proof of D ( G ) in GL , i.e., ( D ) is admissible in GL Ω .
Proof. 
We proceed by induction on the height of τ . For the base step, if G is p k p k then D ( G ) is t ; otherwise D ( G ) is G . In both cases GL D ( G ) holds. For the induction step, the following cases are considered.
Case 1. Let
G | S G | S ( r ) τ
where
S A , Γ , { p i k } k = 1 λ { p j k } k = 1 μ , Δ , B ,
S Γ , { p i k } k = 1 λ { p j k } k = 1 μ , Δ , A B .
Then S G | S = S G | S \ { S } | S by v l ( S ) = v l ( S ) , v r ( S ) = v r ( S ) and Lemma 6(iii). Let D G | S ( S ) = A , Γ , Γ Δ , Δ , B then D G | S ( S ) = Γ , Γ Δ , Δ , A B thus a proof of D ( G | S ) is constructed by combining the proof of D ( G | S ) and D G | S ( S ) D G | S ( S ) ( r ) . Other rules of type ( I ) are processed by a procedure similar to above.
Case 2. Let
G 1 | S 1 G 2 | S 2 G 1 | G 2 | S 3 ( r ) τ
where
S 1 Γ 1 , { p i k 1 } k = 1 λ 1 { p j k 1 } k = 1 μ 1 , A , Δ 1
S 2 Γ 2 , { p i k 2 } k = 1 λ 2 { p j k 2 } k = 1 μ 2 , B , Δ 2
S 3 Γ 1 , Γ 2 , { p i k 1 } k = 1 λ 1 , { p i k 2 } k = 1 λ 2 { p j k 2 } k = 1 μ 2 , { p j k 1 } k = 1 μ 1 , A B , Δ 1 , Δ 2 .
Let
D G 1 | S 1 ( S 1 ) = Γ 1 , Γ 11 Δ 11 , ( v l ( S 1 G 1 | S 1 ) S 1 G 1 | S 1 + 1 ) t , A , Δ 1 ,
D G 2 | S 2 ( S 2 ) = Γ 2 , Γ 21 Δ 21 , ( v l ( S 2 G 2 | S 2 ) S 2 G 2 | S 2 + 1 ) t , B , Δ 2 .
Then D G 1 | G 2 | S 3 ( S 3 ) is
Γ 1 , Γ 2 , Γ 11 , Γ 21 Δ 11 , Δ 21 , A B , Δ 1 , Δ 2 ,
( v l ( S 1 G 1 | S 1 ) + v l ( S 2 G 2 | S 2 ) S 1 G 1 | S 1 S 2 G 2 | S 2 + 2 ) t
by S 3 G 1 | G 2 | S 3 = ( S 1 G 1 | S 1 \ { S 1 } ) ( S 2 G 2 | S 2 \ { S 2 } ) { S 3 } . Then the proof of D ( G 1 | G 2 | S 3 ) is constructed by combining GL D ( G 1 | S 1 ) and
GL D ( G 2 | S 2 ) with D G 1 | S 1 ( S 1 ) D G 2 | S 2 ( S 2 ) D G 1 | G 2 | S 3 ( S 3 ) ( r ) . All applications of ( l ) are processed by a procedure similar to that of ( r ) and omitted.
Case 3. Let
G G G ( r w ) τ
where
G G 1 | S 1 , G G 2 | S 2 , G G 1 | G 2 | S 1 | S 2 ,
S w Γ w , { p i k w } k = 1 λ w { p j k w } k = 1 μ w , A w , Δ w ,
S w Γ w , { p i k w } k = 1 λ w { p j k w } k = 1 μ w , A 1 A 2 , Δ w
for w = 1 , 2 . Then S 1 G = S 1 G \ { S 1 } | S 1 , S 2 G = S 2 G \ { S 2 } | S 2 by Lemma 6 (iii). Let
D G w | S w ( S w ) = Γ w , Γ w 1 Δ w 1 , ( v l ( S w G w | S w ) S w G w | S w + 1 ) t , A w , Δ 1
for w = 1 , 2 . Then
D G ( S w ) = Γ w , Γ w 1 Δ w 1 , ( v l ( S w G w | S w ) S w G w | S w + 1 ) t , A 1 A 2 , Δ w
for w = 1 , 2 . Then the proof of D ( G ) is constructed by combining GL D ( G ) and GL D ( G ) with D G ( S 1 ) D G ( S 2 ) D G ( S 1 | S 2 ) ( r w ) . All applications of ( l w ) are processed by a procedure similar to that of ( r w ) and omitted.
Case 4. Let
G G G ( C O M ) τ
where
G G 1 | S 1 , G G 2 | S 2 , G G 1 | G 2 | S 3 | S 4
S 1 Γ 1 , Π 1 , { p i k 1 } k = 1 λ 1 { p j k 1 } k = 1 μ 1 , Σ 1 , Δ 1 ,
S 2 Γ 2 , Π 2 , { p i k 2 } k = 1 λ 2 { p j k 2 } k = 1 μ 2 , Σ 2 , Δ 2 ,
S 3 Γ 1 , Γ 2 , { p i 1 k 1 } k = 1 λ 11 , { p i 1 k 2 } k = 1 λ 21 { p j 1 k 1 } k = 1 μ 11 , { p j 1 k 2 } k = 1 μ 21 , Δ 1 , Δ 2 ,
S 4 Π 1 , Π 2 , { p i 2 k 1 } k = 1 λ 12 , { p i 2 k 2 } k = 1 λ 22 { p j 2 k 1 } k = 1 μ 12 , { p j 2 k 2 } k = 1 μ 22 , Σ 1 , Σ 2
where { p i k w } k = 1 λ w = { p i 1 k w } k = 1 λ w 1 { p i 2 k w } k = 1 λ w 2 , { p j k w } k = 1 μ w = { p j 1 k w } k = 1 μ w 1 { p j 2 k w } k = 1 μ w 2 for w = 1 , 2 .
Case 4.1. S 3 S 4 G . Then S 3 G = S 4 G by Lemma 6(ii) and
S 3 G = S 1 G | S 2 G | S 3 | S 4 \ { S 1 , S 2 } by Lemma 6(iii). Then
v l ( S 3 G ) S 3 G + 1 = v l ( S 1 G ) + v l ( S 2 G ) S 1 G S 2 G + 1 0 .
Thus v l ( S 1 G ) S 1 G + 1 1 or v l ( S 2 G ) S 2 G + 1 1 . Hence we assume that, without loss of generality,
D G ( S 1 ) = Γ 1 , Π 1 , Γ Δ , t , Σ 1 , Δ 1 ,
D G ( S 2 ) = Γ 2 , Π 2 , Γ Δ , Σ 2 , Δ 2 .
Then
D G ( S 3 | S 4 ) = Γ 1 , Π 1 , Γ , Γ 2 , Π 2 , Γ Δ , Σ 1 , Δ 1 , Δ , Σ 2 , Δ 2 .
Thus the proof of D G ( S 1 ) D G ( S 2 ) ̲ D G ( S 3 | S 4 ) is constructed by
Γ 1 , Π 1 , Γ Δ , t , Σ 1 , Δ 1 Γ 2 , Π 2 , Γ Δ , Σ 2 , Δ 2 Γ 2 , Π 2 , Γ , t Δ , Σ 2 , Δ 2 ( t l ) Γ 1 , Π 1 , Γ , Γ 2 , Π 2 , Γ Δ , Σ 1 , Δ 1 , Δ , Σ 2 , Δ 2 ( C U T ) .
Case 4.2. S 3 S 4 G . Then S 3 G S 4 G = by Lemma 6(ii). Let
S 3 w Γ w , { p i 1 k w } k = 1 λ w 1 { p j 1 k w } k = 1 μ w 1 , Δ w ,
S 4 w Π w , { p i 2 k w } k = 1 λ w 2 { p j 2 k w } k = 1 μ w 2 , Σ w ,
for w = 1 , 2 . Then
S 3 G = S 31 G 1 | S 31 | S 41 \ { S 31 } S 32 G 2 | S 32 | S 42 \ { S 32 } { S 3 } ,
S 4 G = S 41 G 1 | S 31 | S 41 \ { S 41 } S 42 G 2 | S 32 | S 42 \ { S 42 } { S 4 }
by v l ( S 3 ) = v l ( S 31 | S 32 ) , v l ( S 1 ) = v l ( S 31 | S 41 ) , v l ( S 2 ) = v l ( S 32 | S 42 ) and v l ( S 4 ) = v l ( S 41 | S 42 ) . Let
D G w | S 3 w | S 4 w ( S 3 w ) = Γ w , X 3 w Ψ 3 w , Δ w ,
D G w | S 3 w | S 4 w ( S 4 w ) = Π w , X 4 w Ψ 4 w , Σ w
for w = 1 , 2 . Then
D G ( S 1 ) = Γ 1 , Π 1 , X 31 , X 41 Ψ 31 , Ψ 41 , Σ 1 , Δ 1 ,
D G ( S 2 ) = Γ 2 , Π 2 , X 32 , X 42 Ψ 32 , Ψ 42 , Σ 2 , Δ 2 ,
D G ( S 3 ) = Γ 1 , X 31 , Γ 2 , X 32 Ψ 31 , Δ 1 , Ψ 32 , Δ 2 ,
D G ( S 4 ) = Π 1 , X 41 , Π 2 , X 42 Ψ 41 , Σ 1 , Ψ 42 , Σ 2
by Lemma 7, S 3 G S 4 G = , S 31 G 1 | S 31 | S 41 S 41 G 1 | S 31 | S 41 = , S 32 G 2 | S 32 | S 42 S 42 G 2 | S 32 | S 42 = . Then the proof of D G ( S 3 | S 4 ) is constructed by combining the proofs of D G ( S 1 ) and D G ( S 2 ) with D G ( S 1 ) D G ( S 2 ) D G ( S 3 | S 4 ) ( C O M ) .
Case 5. Let G | G | G G | G ( E C Ω ) τ . Then G , G and G are closed and G is a copy of G thus D G | G | G ( G ) = D G | G | G ( G ) hence a proof of D ( G | G ) is constructed by combining the proof of D ( G | G | G ) and D ( G | G | G ) D ( G | G ) ( E C * ) .  □
The following two lemmas are corollaries of Lemma 8.
Lemma 9.
If there exists a derivation of G 0 from G 1 , , G r in GL Ω then there exists a derivation of D ( G 0 ) from D ( G 1 ) , , D ( G r ) in GL .
Lemma 10.
Let τ be a cut-free proof of G 0 in GL and τ * be the proof of G | G * in GL Ω resulting from preprocessing of τ. Then GL D ( G | G * ) .

6. Extraction of Elimination Rules

In this section, we will investigate Construction 1 further to extract more derivations from τ * .
Any two sequents in a hypersequent seem independent of one another in the sense that they can only be contracted into one by ( E C ) when it is applicable. Note that one-premise logical rules just modify one sequent of a hypersequent and two-premise rules associate a sequent in a hypersequent with one in a different hypersequent.
τ * (or any proof without ( E C Ω ) in GL Ω ) has an essential property, which we call the distinguishability of τ * , i.e., any variables, formulas, sequents or hypersequents which occur at the node H of τ * occur inevitably at H < H in some form.
Let H G | S | S τ * . If S is equal to S as two sequents then the case that τ H : S * is equal to τ H : S * as two derivations could happen. This means that both S and S are the focus sequent of one node in τ * when G H : S * S and G H : S * S , which contradicts that each node has a unique focus sequent in any derivation. Thus we need to differentiate S from S for all G | S | S τ * .
Define S ¯ τ * such that G | S | S S ¯ , S S ¯ and S is the principal sequent of S ¯ . If S ¯ has a unique principal sequent, set N S : = 0 ; otherwise set N S : = 1 (or N S : = 2 ) to indicate that S is one designated principal sequent (or accordingly the other) of an application such as ( C O M ) , ( r w ) or ( l w ) . Then we may regard S as ( S ; P ( S ¯ ) , N S ) . Thus S is always different from S by P ( S ¯ ) P ( S ¯ ) or, P ( S ¯ ) = P ( S ¯ ) and N S N S . We formalize this by the following construction.
Construction 3.
(Appendix A.5.2) A labeled tree τ * * , which has the same tree structure as τ * , is constructed as follows.
(i) If S is a leaf of τ * , define S ¯ = S , N S = 0 and the node P ( S ) of τ * * is labeled by ( S ; P ( S ¯ ) , N S ) ;
(ii) If G | S H G | S ( I ) τ * and P ( G | S ) is labeled by G | ( S ; P ( S ¯ ) , N S ) in τ * * , then define S ¯ = H , N S = 0 and the node P ( H ) of τ * * is labeled by G | ( S ; P ( S ¯ ) , N S ) ;
(iii) If G | S G | S H G | G | H ( I I ) τ * , P ( G | S ) and P ( G | S ) are labeled by G | ( S ; P ( S ¯ ) , N S ) and G | ( S ; P ( S ¯ ) , N S ) in τ * * , respectively. If H = S 1 | S 2 then define S 1 ¯ = S 2 ¯ = H , N S 1 = 1 , N S 2 = 2 and the node P ( H ) of τ * * is labeled by G | G | ( S 1 ; P ( S 1 ¯ ) , N S 1 ) | ( S 2 ; P ( S 2 ¯ ) , N S 2 ) . If H = S 1 then define S 1 ¯ = H , N S 1 = 0 and P ( H ) is labeled by G | G | ( S 1 ; P ( S 1 ¯ ) , N S 1 ) .
In the whole paper, we treat τ * as τ * * without mention of τ * * . Note that in the preprocessing of τ , some logical applications could also be converted to ( I D Ω ) in Step 3, and we need to fix the focus sequent at each node H and subsequently assign valid identification numbers to each H < H by the eigenvariable-labeling operation.
Proposition 3.
(i) G | S | S τ * implies { S } { S } = ; (ii) H τ * and H | H H imply H H = ; (iii) Let H τ * and S i c H then H H i c or H i c H .
Proof. 
(iii) Let S i c H then S i c = S i u c for some 1 u m i by Notation 5. Thus S i c H i c also by Notation 5. Hence H S i c ¯ and H i c S i c ¯ by Construction 3. Therefore H H i c or H i c H .  □
Lemma 11.
Let H τ * and T h ( H ) = ( H 0 , , H n ) , where H 0 = H , H n = G | G * , G k H for 1 k 3 .
(i) If G 3 = G 1 G 2 then H i H : G 3 = H i H : G 1 H i H : G 2 for all 0 i n ;
(ii) If G 3 = G 1 | G 2 then H i H : G 3 = H i H : G 1 | H i H : G 2 for all 0 i n .
Proof. 
The proof is by induction on i for 0 i < n . Only (i) is proved below; (ii) is proved by a similar procedure and omitted.
For the base step, H 0 H : G 3 = H 0 H : G 1 H 0 H : G 2 holds by H 0 H : G 1 = G 1 , H 0 H : G 2 = G 2 , H 0 H : G 3 = G 3 and G 3 = G 1 G 2 .
For the induction step, suppose that H i H : G 3 = H i H : G 1 H i H : G 2 for some 0 i < n . Only the case of a one-premise rule is given below; other cases are omitted.
Let G | S G | S ( I ) τ * , H i = G | S and H i + 1 = G | S .
Let S H i H : G 3 . Then H i + 1 H : G 3 = ( H i H : G 3 \ { S } ) | S , H i + 1 H : G 1 = ( H i H : G 1 \ { S } ) | S by S H i H : G 1 and
H i + 1 H : G 2 = ( H i H : G 2 \ { S } ) | S   by   S H i H : G 2 .
Thus
H i + 1 H : G 3 = H i + 1 H : G 1 H i + 1 H : G 2   by   H i H : G 3 = H i H : G 1 H i H : G 2 .
Let S H i H : G 1 and S H i H : G 2 . Then H i + 1 H : G 1 = H i H : G 1 ,
H i + 1 H : G 2 = H i H : G 2   and   H i + 1 H : G 3 = H i H : G 3 .
Thus
H i + 1 H : G 3 = H i + 1 H : G 1 H i + 1 H : G 2   by   H i H : G 3 = H i H : G 1 H i H : G 2 .
Let S H i H : G 1 , S H i H : G 2 . Then H i + 1 H : G 1 = H i H : G 1 ,
H i + 1 H : G 3 = H i H : G 3   and   H i + 1 H : G 2 = ( H i H : G 2 \ { S } ) | S .
Thus
H i + 1 H : G 3 = H i + 1 H : G 1 H i + 1 H : G 2   by   H i H : G 3 = H i H : G 1 H i H : G 2 , S H i + 1 H : G 1 .
The case of S H i H : G 2 , S H i H : G 1 is proved by a similar procedure and omitted.  □
Lemma 12.
(i) Let G | S τ * then G G | S : S * G G | S : G * = , G G | S : G * | G G | S : S * = G | G * ;
(ii) H τ * , H | H H then G H : H | H * = G H : H * | G H : H * .
Proof. 
(i) and (ii) follow immediately from Lemma 11.  □
Notation 6.
We write τ H i c : S i 1 c * , G H i c : S i 1 c * as τ S i 1 c * , G S i 1 c * , respectively, for the sake of simplicity.
Lemma 13.
(i) G S i 1 c * G | G * ;
(ii) τ S i 1 c * is a derivation of G S i 1 c * from S i 1 c , which we denote by S i 1 c ̲ G S i 1 c * τ S i 1 c * ;
(iii) G S i u c * = S i u c and τ S i u c * consists of a single node S i u c for all 2 u m i ;
(iv) v l ( G S i 1 c * ) \ v l ( S i 1 c ) = v r ( G S i 1 c * ) \ v r ( S i 1 c ) ;
(v) H S i 1 c τ S i 1 c * implies H H i c . Note that H S i 1 c is undefined for any H > H i c or H H i c .
(vi) S j c G S i 1 c * implies H i c H j c .
Proof. 
Claims from (i) to (v) follow immediately from Construction 1 and Lemma 4.
(vi) Since S j c G S i 1 c * G | G * then S j c has the form S j u c for some u 2 by Notation 5. Then G S j c * = S j c by (iii). Suppose that H i c H j c . Then S j c is transferred from H j c downward to H i c and lies in the side-hypersequent of H i c by Notation 5 and G | G * < H i c H j c . Thus { S i 1 c } { S j c } = at H i c since S i 1 c is the unique focus sequent of H i c . Hence S j c G S i 1 c * by Lemma 11 and (iii), a contradiction; therefore H i c H j c .  □
Lemma 14.
Let G | S G | S H G | G | H ( I I ) τ * . (i) If S j c G H : H * then H j c H or H j c H ; (ii) If S j c G H : G * then H j c H or H j c G | S .
Proof. 
(i) We impose a restriction on ( I I ) such that each sequent in H is different from S or S otherwise we treat it as an ( E W ) -application. Since S j c G H : H * G | G * then S j c has the form S j u c for some u 2 by Notation 5. Thus G S j c * = S j c . Suppose that H j c > H . Then S j c is transferred from H j c downward to H. Thus S j c H by G S j c * = S j c G H : H * and Lemma 11. Hence S j c = S or S j c = S , a contradiction with the restriction above. Therefore H j c H or H j c H .
(ii) Let S j c G H : G * . If H j c > H then S j c H by Proposition 2(i) and thus S j c G by Lemma 11, and hence H j c G | S by H j c G | S , G | S G | S . If H j c H then H j c G | S by H G | S . Thus H j c H or H j c G | S .  □
Definition 21.
(i) By H i c H j c we mean that S j u c G S i 1 c * for some 2 u m j ; (ii) By H i c H j c we mean that H i c H j c and H j c H i c ; (iii) H i c H j c means that S j u c G S i 1 c * for all 2 u m j .
Then Lemma 13(vi) shows that H i c H j c implies H i c H j c .
Lemma 15.