1. Introduction
The problem of the completeness of Łukasiewicz infinite-valued logic (Ł, for short) was posed by Łukasiewicz and Tarski in the 1930s. It was twenty-eight years later that it was syntactically solved by Rose and Rosser [
1]. Chang [
2] developed, at almost the same time, a theory of algebraic systems for Ł, called
MV-algebras, with the aim of making
MV-algebras correspond to Ł as Boolean algebras correspond to classical two-valued logic. Chang [
3] subsequently gave another proof of the completeness of Ł by means of his
MV-algebras.
It was Chang who observed that the key role in the structure theory of
MV-algebras is played not by locally finite
MV-algebras but by linearly ordered ones. The observation was formalized by Hájek [
4] who proved the completeness of his basic fuzzy logic (
BL for short) with respect to linearly ordered
BL-algebras. Starting with the structure of
BL-algebras, Hájek [
5] reduced the problem of the standard completeness of
BL to the provability of two formulas in
BL. Here and hereafter, by standard completeness we mean that a logic is complete with respect to algebras with lattice reduct [0, 1]. Cignoli et al. [
6] subsequently proved the standard completeness of
BL, i.e.,
BL is the logic of continuous t-norms and their residua.
Hájek’s approach toward fuzzy logic has been extended by Esteva and Godo in [
7], where the authors introduced the logic
MTL which aims at capturing the tautologies of left-continuous t-norms and their residua. The standard completeness of
MTL was proved by Jenei and Montagna in [
8], where the major step is to embed linearly ordered
MTL-algebras into dense ones, at a time when the structure of
MTL-algebras was still unknown.
Esteva and Godo’s work was further developed by Metcalfe and Montagna [
9] who introduced the uninorm logic
UL and involutive uninorm logic (
IUL), which aim at capturing the tautologies of left-continuous uninorms and their residua and those of involutive left-continuous ones, respectively. Recently, Cintula and Noguera [
10] introduced semilinear substructural logics which are substructural logics complete with respect to linearly ordered models. Almost all well-known families of fuzzy logics such as Ł,
BL,
MTL,
UL and
IUL belong to the class of semilinear substructural logics.
Metcalfe and Montagna’s method for proving standard completeness for
UL and its extensions is proof-theoretic in nature and consists of two key steps. Firstly, they extended
UL with the density rule of Takeuti and Titani [
11]:
where
p does not occur in
or
C, and then proved that the logics with
are complete with respect to algebras with lattice reduct [0, 1]. Secondly, they gave a syntactic elimination of
that was formulated as a rule of the corresponding hypersequent calculus.
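For the reader's convenience, we recall the standard formulations of this rule following [9,11]; the displays omitted above may differ slightly in notation. In Hilbert style and in hypersequent form, respectively:

```latex
% Takeuti--Titani density rule, standard presentations (cf. [9,11]).
\[
  \frac{\vdash (A \to p) \lor (p \to B) \lor C}{\vdash (A \to B) \lor C}
  \ (\mathrm{density})
  \qquad\qquad
  \frac{G \mid \Gamma \Rightarrow p \mid p, \Delta \Rightarrow C}
       {G \mid \Gamma, \Delta \Rightarrow C}\ (\mathrm{density})
\]
% In each case the eigenvariable p must not occur in the conclusion.
```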
Hypersequents, which were introduced independently by Avron [
12] and Pottinger [
13], are a natural generalization of sequents and have proved to be particularly suitable for logics with prelinearity [
9,
14]. Following the spirit of Gentzen’s cut elimination, Metcalfe and Montagna succeeded in eliminating the density rule for
GUL and several extensions of
GUL by induction on the height of a derivation of the premise, shifting applications of the rule upwards, but they failed for
GIUL and therefore left it as an open problem.
Several works relevant to the standard completeness of
IUL are as follows. In an attempt to prove the standard completeness of
IUL, we generalized Jenei and Montagna’s method [
15] for
IMTL in [
16], but our effort was only partially successful. It seems that the subtle reason why it does not work for
UL and
IUL is the failure of the finite model property of these systems [
17]. Jenei [
18] constructed several classes of involutive FL
-algebras, as he said, in order to gain a better insight into the algebraic semantics of the substructural logic
IUL, and also into the long-standing open problem of its standard completeness. Ciabattoni and Metcalfe [
19] introduced the method of density elimination by substitutions which is applicable to a general class of (first-order) hypersequent calculi but fails in the case of
GIUL.
We reconsidered Metcalfe and Montagna’s proof-theoretic method to investigate the standard completeness of
IUL, because they proved the standard completeness of
UL by this method, whereas we could not obtain such a result by Jenei and Montagna’s model-theoretic method. In order to prove density elimination for
GUL, they proved that the following generalized density rule
:
is admissible for
GUL, where they set two constraints on the form of
: (i)
and
for some
; (ii)
p does not occur in
,
,
,
for
,
,
.
We may regard
as a procedure whose input and output are the premise and conclusion of
, respectively. We denote the conclusion of
by
when its premise is
. Observe that Metcalfe and Montagna had succeeded in defining a suitable conclusion for an almost arbitrary premise in
, but it seems impossible for
GIUL (see
Section 3 for an example). We then define the following generalized density rule
for
and prove its admissibility in
Section 9.
Theorem 1 (Main theorem).
Let , where p does not occur in or for all , . Then the strong density rule is admissible in .

In proving the admissibility of , Metcalfe and Montagna imposed some restriction on the proof of , i.e., they converted into an r-proof. The reason why they need an r-proof is that they set constraint (i) on . We may picture the restriction on and the constraints on as the two pans of a balance: one is strong if the other is weak, and vice versa. Observe that we select the weakest form of in that guarantees the validity of . It is then natural that we need to make the strongest restriction on the proof of . But it seems extremely difficult to prove the admissibility of in this way.
In order to overcome such a difficulty, we first of all choose Avron-style hypersequent calculi as the underlying systems (see
Appendix A.1). Let
be a cut-free proof of
in
. Starting with
, we construct a proof
of
in a restricted subsystem
of
by a series of systematic, novel manipulations in
Section 4. Roughly speaking, each sequent of
G is a copy of some sequent of
, and each sequent of
is a copy of some contraction sequent in
. In
Section 5, we define the generalized density rule
in
and prove that it is admissible.
Now, starting with
and its proof
, we construct a proof
of
in
such that each sequent of
is a copy of some sequent of
G. Then
by the admissibility of
. Then
by Lemma 29. Hence the density elimination theorem holds in
. In particular, the standard completeness of
IUL follows from Theorem 62 of [
9].
is constructed by eliminating
-sequents in
one by one. In order to control the process, we introduce the set
of
-nodes of
and the set
of the branches relative to
I and construct
such that
does not contain
-sequents lower than any node in
I, i.e.,
implies
for all
. The procedure is called the separation algorithm of branches, in which we introduce another novel manipulation, called the derivation-grafting operation, in
Section 7 and
Section 8.
2. Preliminaries
In this section, we recall the basic definitions and results involved, which are mainly from [
9]. Substructural fuzzy logics are based on a countable propositional language with formulas built inductively as usual from a set of propositional variables VAR, binary connectives
and constants
with definable connective
Definition 1. ([9,12]) A sequent is an ordered pair of (possibly empty) finite multisets of formulas, which we denote by . Γ and Δ are called the antecedent and the succedent, respectively, of the sequent, and each formula in Γ and Δ is called a sequent-formula. A hypersequent G is a finite multiset of the form , where each is a sequent and is called a component of G for each . If contains at most one formula for each , then the hypersequent is single-conclusion; otherwise it is multiple-conclusion.
Definition 2. Let S be a sequent and a hypersequent. We say that if S is one of .
Notation 1. Let and be two hypersequents. We will assume from now on that all set terminology refers to multisets, adopting the conventions of writing for the multiset union of Γ and Δ, A for the singleton multiset , and for the multiset union of λ copies of Γ for . By we mean that for all and that the multiplicity of S in is not more than that of S in . We will use , , , with their standard meaning for multisets by default, and we will state explicitly when we use them for sets. We sometimes write and as , (or ), respectively.
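To make the multiset conventions above concrete, here is a small sketch in Python (our own illustration, not part of the paper), with formulas as strings and multisets as counters:

```python
from collections import Counter

def munion(gamma: Counter, delta: Counter) -> Counter:
    """The multiset union 'Gamma, Delta': multiplicities are added."""
    return gamma + delta

def copies(gamma: Counter, lam: int) -> Counter:
    """lam copies of Gamma (lam >= 0); 0 copies is the empty multiset."""
    out = Counter()
    for _ in range(lam):
        out += gamma
    return out

def submultiset(g1: Counter, g2: Counter) -> bool:
    """G1 is contained in G2: every element of G1 occurs in G2
    with at least the same multiplicity."""
    return all(g2[x] >= n for x, n in g1.items())

gamma = Counter(["A", "A", "B"])   # the multiset {A, A, B}
delta = Counter(["B"])             # the singleton multiset {B}
assert munion(gamma, delta) == Counter(["A", "A", "B", "B"])
assert copies(delta, 3) == Counter({"B": 3})
assert submultiset(delta, gamma) and not submultiset(gamma, delta)
```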
Definition 3. ([12]) A hypersequent rule is an ordered pair consisting of a sequence of hypersequents, called the premises (upper hypersequents) of the rule, and a hypersequent G, called the conclusion (lower hypersequent), written as If , then the rule has no premises and is called an initial sequent. The single-conclusion version of a rule adds the restriction that both the premises and the conclusion must be single-conclusion; otherwise the rule is multiple-conclusion.
Definition 4. ([9]) GUL and GIUL consist of the single-conclusion and multiple-conclusion versions of the following initial sequents and rules, respectively:
Definition 5. ([9]) GMTL and GIMTL are GUL and GIUL plus the single-conclusion and multiple-conclusion versions, respectively, of:
Definition 6. (i) and
;
(ii) By (or we denote an instance of a two-premise rule (or one-premise rule ) of , where and are its focus sequents and is its principal sequent (for , , and or hypersequent (for , and , see Definition 12).
Definition 7. ([9]) is extended with the following density rule: where p does not occur in or .
Definition 8. ([12]) A derivation τ of a hypersequent G from hypersequents in a hypersequent calculus is a labeled tree with the root labeled by G, leaves labeled by initial sequents or by some , and, for each node labeled with parent nodes labeled (where possibly ), is an instance of a rule of .
Notation 2. (i) denotes that τ is a derivation of from ;
(ii) Let H be a hypersequent. denotes that H is a node of τ. We call H a leaf hypersequent if H is a leaf of τ, the root hypersequent if it is the root of τ. denotes that and its parent nodes are ;
(iii) Let then denotes the subtree of τ rooted at H;
(iv) τ determines a partial order with the root as the least element. denotes and for any . By we mean that is the same node as in τ. We sometimes write as ⩽;
(v) An inference of the form is called the full external contraction and denoted by , if , is not a lower hypersequent of an application of whose contraction sequent is S, and not an upper one in τ.
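As a side illustration of Notation 2(iv), the partial order induced by a derivation can be pictured with a toy parent-pointer representation (our own sketch, with hypersequents abbreviated as strings); the root is the least element, so a node precedes exactly the nodes of the subtree above it:

```python
class Node:
    """A node of a derivation tree; the root has parent None."""
    def __init__(self, label, parent=None):
        self.label, self.parent = label, parent

def leq(h1, h2):
    """H1 <= H2 in the induced order: H1 is H2 itself or an ancestor of H2."""
    while h2 is not None:
        if h2 is h1:
            return True
        h2 = h2.parent
    return False

root = Node("G")
inner = Node("G | A => A", parent=root)
leaf = Node("A => A", parent=inner)
assert leq(root, leaf) and leq(inner, leaf) and not leq(leaf, root)
```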
Definition 9. Let τ be a derivation of G and . The thread of τ at H is a sequence of node hypersequents of τ such that , , or there exists such that or in τ for all .
Proposition 1. Let . Then
(i) if and only if ;
(ii) and imply ;
(iii) and imply .
We need the following definition to give each node an identification number, which is used in Construction 3 to differentiate the sequents in a hypersequent in a proof.
Definition 10. (Appendix A.5.2) Let and . Let , for all . Then , and we call it the position of H in τ.
Definition 11. A rule is admissible for a calculus GL if, whenever its premises are derivable in GL, so is its conclusion.
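One standard way to realize the positions of Definition 10 is as sequences of child indices along the path from the root; the following sketch (an assumption on our part, the encoding in Appendix A.5.2 may differ in detail) enumerates all node positions of a small tree:

```python
def positions(tree, prefix=()):
    """tree = (label, list_of_subtrees); yield (position, label) pairs,
    where a position is the tuple of child indices from the root."""
    label, children = tree
    yield prefix, label
    for i, child in enumerate(children, start=1):
        yield from positions(child, prefix + (i,))

proof = ("G0", [("G1", [("A => A", [])]), ("G2", [])])
for pos, lab in positions(proof):
    print(pos, lab)
# ()      G0
# (1,)    G1
# (1, 1)  A => A
# (2,)    G2
```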
Lemma 1. ([9]) Cut-elimination holds for , i.e., proofs using can be transformed syntactically into proofs not using .
3. Proof of the Main Theorem: A Computational Example
In this section, we present an example to illustrate the proof of the main theorem.
Let
.
is a theorem of
and a cut-free proof
of
is shown in
Figure 1, where we use an additional rule
for simplicity. Note that we denote three applications of
in
respectively by
and three
by
and
.
By applying (D) to free combinations of all sequents in
and in
, we get that
.
is a theorem of
and a cut-free proof
of
is shown in
Figure 2. It supports the validity of the generalized density rule
in
Section 1, as an instance of
.
Our task is to construct , starting from . The tree structure of is more complicated than that of . Compared with the cases of UL, MTL and IMTL, there is no one-to-one correspondence between the nodes of and .
Following the method given by G. Metcalfe and F. Montagna, we need to define a generalized density rule for
IUL. We denote such an expected unknown rule by
for convenience. Then
must be definable for all
. Naturally,
However, we could not find a suitable way to define
and
for
and
in
, see
Figure 1. This is the biggest difficulty we encounter in the case of
, and it is the reason why it is hard to prove density elimination for
IUL. A possible way is to define
as
. Unfortunately, it is not a theorem of
.
Notice that the two upper hypersequents of are permissible inputs of . Why is an invalid input? One reason is that two applications and cut off two sequents such that the two disappear in all nodes lower than the upper hypersequent of or , including . This makes the occurrences of incomplete in . We then perform the following operation in order to obtain complete occurrences of in .
Step 1 (preprocessing of ). Firstly, we replace
H with
for all
,
then replace
with
for all
. Then we construct a proof without
, which we denote by
, as shown in
Figure 3. We call such manipulations sequent-inserting operations, which eliminate applications of
in
.
However, we still cannot define for in that . The reason is that the origins of in are indistinguishable if we regard all leaves of the form as the origins of the occurring in an inner node. For example, we do not know which p comes from the left subtree of and which from the right subtree in the two occurrences of in . We then perform the following operation in order to make all occurrences of in distinguishable.
We assign a unique identification number to each leaf of the form
and transfer these identification numbers from leaves to the root, as shown in
Figure 4. We denote the proof of
resulting from this step by
, where
in which each sequent is a copy of some sequent in
and
in which each sequent is a copy of some external contraction sequent in
-node of
. We call such manipulations eigenvariable-labeling operations, which enable us to trace eigenvariables in
.
Then all occurrences of
p in
are distinguishable and we regard them as distinct eigenvariables (See Definition 18 (i)). Firstly, by selecting
as the eigenvariable and applying
to
, we get
Secondly, by selecting
and applying
to
, we get
We define such iterative applications of as the -rule (see Definition 20). Lemma 10 shows that if . Then we obtain , i.e., .
A miracle happens here! The difficulty that we encountered in GIUL is overcome by converting into and using to replace .
Why do we assign a unique identification number to each ? We would return to the same situation as that of if we assigned the same indices to all or replaced and by in .
Note that . So we have established a one-to-one correspondence between the proof of and that of . Observe that no sequent in is a copy of any sequent in . In the following steps, we work on eliminating these sequents in .
Step 2 (extraction of elimination rules). We select
as the focus sequent in
in
and keep
unchanged from
downward to
(See
Figure 4). So we extract a derivation from
by pruning some sequents (or hypersequents) in
, which we denote by
, as shown in
Figure 5.
A derivation
from
is constructed by replacing
with
,
with
and
with
in
, as shown in
Figure 6. Notice that we assign new identification numbers to new occurrences of
p in
.
Next, we apply
to
in
. Then we construct a proof
, as shown in
Figure 7, where
.
However,
contains more copies of sequents from
and seems more complex than
. We will present a unified method to tackle it in the following steps. Other derivations are shown in
Figure 8,
Figure 9,
Figure 10 and
Figure 11.
Step 3 (separation of one branch). A proof
is constructed by applying sequentially
to
and
in
, as shown in
Figure 12, where
Then it is permissible to cut off the part
of
, which corresponds to applying
to
. We regard such a manipulation as a constrained contraction rule applied to
and denote it by
. Define
to be
Then we construct a proof of
by
, which guarantees the validity of
under the condition
A change happens here! There is only one sequent which is a copy of a sequent in
in
. It is simpler than
. So we are moving forward. The above procedure is called the separation of
as a branch of
and reformulated as follows (See
Section 7 for details).
The separation of
as a branch of
is constructed by a similar procedure as follows.
Note that and . So we have established one-to-one correspondences between the proofs of and those of .
Step 4 (separation algorithm of multiple branches). We will prove
in a direct way, i.e., only the major step of Theorem 2 is presented in the following. (See
Appendix A.5.4 for a complete illustration.) Recall that
By reassigning identification numbers to occurrences of
in
,
By applying
to
in
and
in
, we get
, where
Why do we reassign identification numbers to occurrences of in ? It ensures that different occurrences of are assigned different identification numbers in the two nodes and of the proof of .
By applying
to
, we get
, where
A great change happens here! We have eliminated all sequents which are copies of some sequents in and converted into , in which each sequent is a copy of some sequent in .
Then
by Lemma 8, where
=
So we have established a one-to-one correspondence between the proof of
and that of
, i.e., the proof of
can be constructed by applying
to the proof of
. The major steps of constructing
are shown in the following figure, where
,
,
and
.
[Inline figure: major steps of the construction (image not reproduced).]
In the above example, . But that is not always the case. In general, we can prove that if , as shown in the proof of the main theorem on page 42. This example shows that the proof of the main theorem essentially presents an algorithm for constructing a proof of from .
4. Preprocessing of Proof Tree
Let be a cut-free proof of in the main theorem in , which exists by Lemma 1. Starting with , in this section we will construct a proof which contains no application of and has some other properties.
Lemma 2. (i) If and (ii) If and Proof. (ii) is proved by a procedure similar to that of (i) and omitted. □
We introduce two new rules by Lemma 2.
Definition 12. and are called the generalized and rules, respectively.
Now, we begin to process as follows.
Step 1. A proof
is constructed by replacing inductively all applications of
in
with
The replacements in Step 1 are local and the root of is also labeled by .
Definition 13. We may sometimes regard as a structural rule of and denote it by for convenience. The focus sequent for is undefined.
Lemma 3. Let , , where and . A tree is constructed by replacing each in with for all . Then is a proof of .
Proof. The proof is by induction on n. Since is a proof and is valid in , then is a proof. Suppose that is a proof. Since (or in , then (or is an application of the same rule (or ). Thus is a proof. □
Definition 14. The manipulation described in Lemma 3 is called a sequent-inserting operation.
Clearly, the number of -applications in is less than . Next, we continue to process .
Step 2. Let be all applications of in and . By repeatedly applying sequent-inserting operations, we construct a proof of in without applications of and denote it by .
Remark 1. (i) is constructed by converting into ; (ii) Each node of has the form , where and is a (possibly empty) subset of .
We need the following construction to eliminate applications of in .
Construction 1. Let , and , where , . Hypersequents and trees for all are constructed inductively as follows.
(i) and consists of a single node ;
(ii) Let (or be in , and (accordingly for ) for some .
If and is constructed by combining treesotherwise and is constructed by combining (iii) Let , and then and is constructed by combining
Lemma 4. (i) for all ;
(ii) is a derivation of from without .
Proof. The proof is by induction on k. For the base step, and consists of a single node . Then , is a derivation of from without .
For the induction step, suppose that and be constructed such that (i) and (ii) hold for some . There are two cases to be considered.
Case 1. Let , and . If then by . Thus . Otherwise, then by . Thus by . is a derivation of from without since is such a derivation and is a valid instance of a rule of . The case of applications of the two-premise rule is proved by a similar procedure and omitted.
Case 2. Let , and . Then by . is a derivation of from without since is such a derivation and is valid by Definition 13. □
Definition 15. The manipulation described in Construction 1 is called a derivation-pruning operation.
Notation 3. We denote by , by and say that is transformed into in .
Then Lemma 4 shows that , . Now, we continue to process as follows.
Step 3. Let then is a derivation of from thus a proof of is constructed by combining and with . By repeatedly applying the procedure above, we construct a proof of without in , where by Lemma 4(i).
Step 4. Let (or , then there exists such that for all (accordingly , thus a proof is constructed by replacing top-down p in each with ⊤.
Let (or , is a leaf of then there exists such that for all (accordingly or thus a proof is constructed by replacing top-down p in each with ⊥.
Repeatedly applying the above procedure, we construct a proof of in such that there is no occurrence of p in or at any leaf labeled by or , and p is not the weakening formula A in or when or is available. Define two operations and on sequents by and . Then is obtained by applying and to some designated sequents in .
Definition 16. The manipulation described in Step 4 is called an eigenvariable-replacing operation.
Step 5. A proof is constructed from by assigning inductively one unique identification number to each occurrence of p in as follows.
One unique identification number, which is a positive integer, is assigned to each leaf of the form in which corresponds to in . Other nodes of are processed as follows.
Let . Suppose that all occurrences of p in are assigned identification numbers and have the form in , which we often write as Then has the form
Let , where Suppose that and have the forms and in , respectively. Then has the form All applications of are processed by the procedure similar to that of .
Let , where
Suppose that and have the forms and in , respectively. Then has the form All applications of are processed by the procedure similar to that of .
Let , where
where , .
Suppose that
and
have the forms
and
in
, respectively. Then
has the form
where
for
.
Definition 17. The manipulation described in Step 5 is called an eigenvariable-labeling operation.
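As a toy illustration of the eigenvariable-labeling operation (a simplification of Step 5, not the full construction), one may hand out a fresh positive integer to every occurrence of p at a leaf, so that distinct occurrences can later be traced separately:

```python
import itertools

fresh = itertools.count(1)   # source of fresh identification numbers

def label_leaf(formulas):
    """Replace each occurrence of the eigenvariable 'p' in a leaf
    sequent by an indexed copy p_i with a fresh number i."""
    return [f"p_{next(fresh)}" if x == "p" else x for x in formulas]

print(label_leaf(["p", "A"]))    # ['p_1', 'A']
print(label_leaf(["p", "p"]))    # ['p_2', 'p_3']
```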
Notation 4. Let and be converted to G and in , respectively. Then is a proof of .
In the preprocessing of
, each
is converted into
in Step 2, where
by Lemma 3.
is converted into
in Step 3, where
by Lemma 4(i). Some
(or
) is revised as
(or
) in Step 4. Each occurrence of
p in
is assigned a unique identification number in Step 5. The whole preprocessing above is depicted in
Figure 13.
Notation 5. Let be all -nodes of and be converted to in . Note that there are no identification numbers for occurrences of the variable p in , while they are assigned to p in . But we use the same notation for and for simplicity.
Throughout this paper, let denote the unique node of such that and is the focus sequent of in , in which case we denote the focus one and the others among . We sometimes also denote by or . We then write as .
We call , the i-th pseudo- node of and pseudo- sequent, respectively. We abbreviate pseudo- as Let , by we mean that for some .
It is possible that there does not exist such that is the focus sequent of in , in which case ; then it has no effect on our argument to treat all such as members of G. So we assume that all are always defined for all in , i.e., .
Proposition 2. (i) for all ; (ii) .
Now, we replace locally each in with and denote the resulting proof also by , which is not essentially different from the original one but simplifies subsequent arguments. We introduce the system as follows.
Definition 18. is a restricted subsystem of such that
(i) p is designated as the unique eigenvariable, by which we mean that it is not used to build up any formula containing logical connectives and is used only as a sequent-formula.
(ii) Each occurrence of p on each side of every component of a hypersequent in is assigned one unique identification number i and written as in . Initial sequent of has the form in .
(iii) Each sequent S of in the form has the formin , where p does not occur in Γ or Δ, for all , for all . Define and . Let G be a hypersequent of in the form then and for all . Define , . Here, l and r in and indicate the left side and right side of a sequent, respectively. (iv) A hypersequent G of is called closed if . Two hypersequents and of are called disjoint if , , and . is a copy of if they are disjoint and there exist two bijections and such that can be obtained by applying to antecedents of sequents in and to succedents of sequents in , i.e., .
(v) A closed hypersequent can be contracted as in under the condition that and are closed and is a copy of . We call it the constrained external contraction rule and denote it by Furthermore, if there do not exist two closed hypersequents such that is a copy of , then we call it the fully constrained contraction rule and denote it by .
(vi) and of are forbidden. , and of are replaced with , and in , respectively.
(vii) and are closed and disjoint for each two-premise rule
of , and is closed for each one-premise rule .
(viii) p does not occur in Γ or Δ for each initial sequent or , and p does not act as the weakening formula A in or when or is available.
Lemma 5. Let τ be a cut-free proof of in and be the tree resulting from preprocessing of τ.
(i) If then ;
(ii) If then ;
(iii) If and then ;
(iv) If and (or ) then ;
(v) is a proof of in without ;
(vi) If and then , .
Proof. Claims (i) to (iv) follow immediately from Step 5 in the preprocessing of and Definition 18. Claim (v) follows from Notation 4 and Definition 18. Only (vi) is proved, as follows.
Suppose that . Then , by Claim (iv). Thus or , a contradiction with hence .
is proved by a similar procedure and omitted. □
5. The Generalized Density Rule for
In this section, we define the generalized density rule for and prove that it is admissible in .
Definition 19. Let G be a closed hypersequent of and . Define , i.e., is the minimal closed unit of G containing . In general, for , define .
Clearly, if or p does not occur in S. The following construction gives a procedure to construct for any given .
Construction 2. Let G and S be as above. A sequence of hypersequents is constructed recursively as follows.
(i) ;
(ii) Suppose that is constructed for . If then there exists (or thus there exists the unique such that (or by and Definition 18 then let otherwise the procedure terminates and .
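The following sketch mirrors Construction 2 under our reading of Definition 18: a sequent is abstracted to the identification numbers of p in its antecedent and succedent, a fragment is closed when both sides carry the same numbers, and starting from S we repeatedly add the sequent carrying a still unmatched number. The representation is our own assumption, chosen only to illustrate the fixpoint character of the construction:

```python
def closure(hypersequent, s):
    """hypersequent: list of (left_ids, right_ids) pairs of sets;
    s: index of the starting sequent S.  Returns the indices of the
    minimal closed fragment of the hypersequent containing S."""
    fragment = {s}
    while True:
        left = set().union(*(hypersequent[i][0] for i in fragment))
        right = set().union(*(hypersequent[i][1] for i in fragment))
        unmatched = left ^ right          # numbers seen on one side only
        if not unmatched:
            return fragment               # closed: same numbers on both sides
        i = unmatched.pop()
        # the sequent outside the fragment carrying the number i
        j = next(k for k, (l, r) in enumerate(hypersequent)
                 if k not in fragment and i in l | r)
        fragment.add(j)

# S0 has p_1 on the right, S1 has p_1 on the left and p_2 on the right,
# S2 has p_2 on the left, S3 contains no occurrence of p.
G = [(set(), {1}), ({1}, {2}), ({2}, set()), (set(), set())]
print(sorted(closure(G, 0)))   # [0, 1, 2]
```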
Lemma 6. (i) ;
(ii) Let then ;
(iii) Let , , , , then , where is the symmetric difference of two multisets ;
(iv) Let then for all ;
(v)
Proof. (i) Since for and then thus by . We prove for by induction on k in the following. Clearly, . Suppose that for some . Since (or and (or then by and thus . Then thus .
(ii) By (i), , where . Then for some thus (or hence there exists the unique such that (or if hence . Repeatedly, , i.e., then . by then .
(iii) It follows immediately from Construction 2 and (i).
(iv) The proof is by induction on k. For the base step, let then thus by . For the induction step, suppose that for some . Then by and . Then by .
(v) It follows from (iv) and . □
Definition 20. Let and be in the form for .
(i) If and be then is defined as
(ii) Let and for all then is defined as .
(iii) We call the generalized density rule of , whose conclusion is defined by (ii) if its premise is G.
Clearly, is and if p does not occur in S.
Lemma 7. Let and be closed and , where and Then
Proof. Since
then
by
,
and Lemma 6 (iii). Thus
,
Hence
Therefore
by
where
. □
Lemma 8. (Appendix A.5.1) If there exists a proof τ of G in then there exists a proof of in , i.e., is admissible in .
Proof. We proceed by induction on the height of . For the base step, if G is then is ; otherwise is G and then holds. For the induction step, the following cases are considered.
Then by , and Lemma 6(iii). Let then thus a proof of is constructed by combining the proof of and . Other rules of type are processed by a procedure similar to above.
Then
is
by
. Then the proof of
is constructed by combining
and
with . All applications of are processed by a procedure similar to that of and omitted.
Case 3 Let
where
for
Then
,
by Lemma 6 (iii). Let
for
Then
for
Then the proof of
is constructed by combining
and
with
. All applications of
are processed by a procedure similar to that of
and omitted.
Case 4 Let
where
where
for
Case 4.1.. Then by Lemma 6(ii) and
by Lemma 6(iii). Then
Thus
or
. Hence we assume that, without loss of generality,
Thus the proof of
is constructed by
Case 4.2.. Then
by Lemma 6(ii). Let
for
Then
by
,
,
and
. Let
for
Then
by Lemma 7,
. Then the proof of
is constructed by combining the proofs of
and
with
Case 5. Then and are closed and is a copy of thus hence a proof of is constructed by combining the proof of and . □
The following two lemmas are corollaries of Lemma 8.
Lemma 9. If there exists a derivation of from in then there exists a derivation of from in .
Lemma 10. Let τ be a cut-free proof of in and be the proof of in resulting from preprocessing of τ. Then .
6. Extraction of Elimination Rules
In this section, we will investigate Construction 1 further to extract more derivations from .
Any two sequents in a hypersequent seem independent of one another in the sense that they can only be contracted into one by when it is applicable. Note that one-premise logical rules just modify one sequent of a hypersequent and two-premise rules associate a sequent in a hypersequent with one in a different hypersequent.
(or any proof without in ) has an essential property, which we call the distinguishability of , i.e., any variables, formulas, sequents or hypersequents which occur at the node H of occur inevitably at in some form.
Let . If is equal to as two sequents, then it could possibly happen that is equal to as two derivations. This would mean that both and are the focus sequent of one node in when and , which contradicts the fact that each node has a unique focus sequent in any derivation. Thus we need to differentiate from for all .
Define such that , and is the principal sequent of . If has a unique principal sequent, then ; otherwise (or to indicate that is one designated principal sequent (or accordingly for another) of such an application as or . Then we may regard as . Thus is always different from by or, and . We formulate this in the following construction.
Construction 3. (Appendix A.5.2) A labeled tree , which has the same tree structure as , is constructed as follows. (i) If S is a leaf , define , and the node of is labeled by ;
(ii) If and is labeled by in , then define , and the node of is labeled by ;
(iii) If , and are labeled by and in , respectively. If then define , , and the node of is labeled by . If then define , and is labeled by .
Throughout the paper, we treat as without mention of . Note that in the preprocessing of , some logical applications could also be converted into in Step 3, and we need to fix the focus sequent at each node H and subsequently assign valid identification numbers to each by the eigenvariable-labeling operation.
Proposition 3. (i) implies ; (ii) and imply ; (iii) Let and then or .
Proof. (iii) Let then for some by Notation 5. Thus also by Notation 5. Hence and by Construction 3. Therefore or . □
Lemma 11. Let and , where , , for .
(i) If then for all ;
(ii) If then for all .
Proof. The proof is by induction on i for . Only (i) is proved in the following; (ii) is proved by a similar procedure and omitted.
For the base step, holds by , , and .
For the induction step, suppose that for some . Only the case of a one-premise rule is given in the following; other cases are omitted.
Let , and .
Let
. Then
,
by
and
Let
and
. Then
,
Let
. Then
,
The case of is proved by a similar procedure and omitted. □
Lemma 12. (i) Let then
(ii) , then .
Proof. (i) and (ii) follow immediately from Lemma 11. □
Notation 6. We write , as , , respectively, for the sake of simplicity.
Lemma 13. (i) ;
(ii) is a derivation of from , which we denote by ;
(iii) and consists of a single node for all ;
(iv) ;
(v) implies . Note that is undefined for any or .
(vi) implies .
Proof. Claims from (i) to (v) follow immediately from Construction 1 and Lemma 4.
(vi) Since then has the form for some by Notation 5. Then by (iii). Suppose that . Then is transferred from downward to and in the side-hypersequent of by Notation 5 and . Thus at since is the unique focus sequent of . Hence by Lemma 11 and (iii), a contradiction; therefore . □
Lemma 14. Let . (i) If then or ; (ii) If then or .
Proof. (i) We impose a restriction on such that each sequent in is different from or otherwise we treat it as an -application. Since then has the form for some by Notation 5. Thus . Suppose that . Then is transferred from downward to H. Thus by and Lemma 11. Hence or , a contradiction with the restriction above. Therefore or .
(ii) Let . If then by Proposition 2(i) and thus by Lemma 11 and, hence by , . If then by . Thus or . □
Definition 21. (i) By we mean that for some ; (ii) By we mean that and ; (iii) means that for all .
Then Lemma 13(vi) shows that implies .