Article

Binary Context-Free Grammars

1 Department of Computer Science & Software Engineering, College of Information Technology, United Arab Emirates University, Al Ain 15551, UAE
2 Department of Computer Science, Faculty of Information and Communication Technology, International Islamic University Malaysia, Gombak, Selangor 53100, Malaysia
3 Faculty of Engineering and Natural Sciences, International University of Sarajevo, 71210 Sarajevo, Bosnia and Herzegovina
4 Department of Management Information Systems, King Faisal University, Al-Ahsa 31982, Saudi Arabia
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(8), 1209; https://doi.org/10.3390/sym12081209
Submission received: 17 June 2020 / Revised: 9 July 2020 / Accepted: 21 July 2020 / Published: 24 July 2020
(This article belongs to the Section Computer)

Abstract

A binary grammar is a relational grammar with two nonterminal alphabets, two terminal alphabets, a set of pairs of productions and a pair of initial nonterminals; it generates a binary relation, i.e., a set of pairs of strings over the terminal alphabets. This paper investigates binary context-free grammars as mutually controlled grammars: two context-free grammars generate strings by imposing restrictions on the production rules that can be applied in each other's derivations. The paper shows that binary context-free grammars can generate matrix languages, whereas binary regular and linear grammars have the same power as Chomskyan regular and linear grammars.

1. Introduction

A “traditional” phrase-structure grammar (also known as a Chomskyan grammar) is a generative computational mechanism that produces strings (words) over some alphabet starting from the initial symbol and sequentially applying production rules that rewrite sequences of symbols [1,2,3]. According to the forms of production rules, phrase-structure grammars and their languages are divided into four families: regular, context-free, context-sensitive, and recursively enumerable [4,5].
Regular and context-free grammars, which have good computational and algorithmic properties, are widely used in modeling and studying phenomena appearing in linguistics, computer science, artificial intelligence, biology, etc. [6,7]. However, many complex structures such as duplication ( w w ), multiple agreements ( aⁿbⁿcⁿ ) and crossed agreements ( aⁿbᵐcⁿdᵐ ) found in natural languages, programming languages, molecular biology, and many other areas cannot be represented by context-free grammars [6]. Context-sensitive grammars, which can model these and other “non-context-free” structures, are too powerful to be used in applications [5,8]. In addition, many computational problems related to context-sensitive grammars are undecidable, and the known algorithms for decidable problems concerning these grammars have exponential complexities [4,5,6,9].
An approach to overcoming this problem is to define “in-between” grammars that are more powerful than context-free grammars but have similar computational properties. Regulated grammars are such a type of grammar, defined by adding control mechanisms to underlying context-free grammars in order to select specific strings for their languages [5]. Several variants of regulated grammars, such as matrix, programmed, ordered, context-conditional, random-context, and tree-controlled grammars, have been defined according to the control mechanisms used with the grammars [5,6,8]. Regulated grammars are classified into two main categories: rule-based regulated grammars, which generate their languages under various production-related restrictions, and context-based regulated grammars, which produce their languages under different context-related restrictions [10].
Though this classification of regulated grammars allows us to understand the nature of the restrictions imposed on the grammars, it does not clarify the role of the control mechanisms from the aspect of computational power. If we observe the control mechanisms used in both categories of regulated grammars, we can see that they consist of two parts [6]: (1) a “regular part”, represented by a regular language of the labels of production rules (in the rule-based case) or a regular language over the nonterminal and/or terminal alphabets (in the context-based case), and (2) an “irregular part”, represented by appearance checking (in the rule-based case) or forbidding context (in the context-based case), which provides additional power to the regular part.
If only the regular part is used, the regulated grammars can generate small subsets of the context-sensitive languages. On the other hand, if both regulations are used, then most regulated grammars generate all context-sensitive languages [6,10]. At this point, we again return to the same computational problems related to context-sensitive grammars that we discussed earlier. Thus, we need to consider regulation mechanisms that extend the family of context-free languages only to the ranges required to cover the necessary aspects of the modeled phenomena. One possibility for realizing this idea is the combination of several regular control mechanisms. One can consider a matrix of matrices or a matrix conditional context as a combined regulation mechanism. For instance, paper [11] studied simple-semi-conditional versions of matrix grammars with this approach. The problem in this case is that the combinations of such control mechanisms are, firstly, not natural and, secondly, too complex.
We propose, as a solution, the idea of imposing multiple regulations on a context-free grammar without combining them. We realize this idea by using relational grammars. A relational grammar is an n-ary grammar, i.e., a system of n terminal alphabets, n nonterminal alphabets, a set of n-tuples of productions and an initial n-tuple of nonterminals that generates a language of relations, i.e., tuples of strings over the terminal alphabets. On the other hand, a relational grammar can be considered as a system of n grammars in which each grammar generates its own language using the corresponding productions enclosed in the n-tuples. Thus, we can redefine a relational grammar as a system of n mutually controlled grammars, where the grammars in the relation generate their languages imposing restrictions on the applications of the productions of the other grammars. If we specify one grammar as the main grammar, the other grammars in the system can be considered to be regulation mechanisms controlling the generative processes of the main grammar.
This paper is a preliminary step in the study of mutually controlled grammars. In this paper, we define binary context-free grammars and study their generative power. We show that even two mutually controlled grammars can be as powerful as matrix grammars or other regulated grammars without appearance checking or forbidding context.
The paper is organized as follows. Section 2 surveys regulated, parallel and relational grammars that are related to the introduced grammars. Section 3 contains the necessary notions and notations used throughout the paper. Section 4 defines binary strings, languages and grammars. Section 5 introduces synchronized normal forms for binary grammars and shows that for any binary context-free (regular, linear) grammar there exists an equivalent binary grammar in synchronized form. Section 6 investigates the generative powers of binary regular, linear and context-free grammars. Section 7 discusses the results of the paper and the power of mutually controlled grammars, and indicates possible topics for future research.

2. Regulated, Parallel and Relational Grammars

In this section, we briefly survey some variants of regulated grammars with respect to the control mechanisms associated with them, parallel grammars as well as relational grammars, which are related to the introduced mutually controlled grammars.
The purpose of regulation is to restrict the use of the productions in a context-free grammar so that only specific terminal derivations are successful, and hence to obtain a subset of the context-free language generated in the usual way. The various regulation mechanisms used in regulated grammars can be classified into general types by their common features.
Control by prescribed sequences of production rules where the sequence of productions applied in a derivation belong to a regular language associated with the grammar:
  • matrix grammars [12]—the set of production rules is divided into matrices; once the application of a matrix has started, a second matrix can only be started after the application of the first one has finished, and the rules have to be applied in the order given in the matrix;
  • vector grammars [13]—in which a new matrix can be started before finishing those which have been started earlier;
  • regularly controlled grammars [14]—the sequence of production rules applied in a derivation belong to a given regular language associated with the grammar.
Control by computed sequences of production rules where a derivation is accompanied by a computation, which selects the allowed derivations:
  • programmed grammars [15]—after applying a production rule, the next production rule has to be chosen from its success field, and if the left hand side of the rule does not occur in the sentential form, a rule from its failure field has to be chosen;
  • valence grammars [16]—with each sentential form an element of a monoid is associated, which is computed during the derivation; derivations where the element associated with the terminal word is the neutral element of the monoid are accepted.
Control by context conditions where the applicability of a rule depends on the current sentential form and with any rule some restrictions are associated for sentential forms which have to be satisfied in order to apply the rule:
  • random context grammars [17]—the restriction is membership in a regular set associated with the rule;
  • conditional grammars [18]—the restriction is to special regular sets;
  • semi-conditional grammars [19]—the restriction is to words of length one in the permitting and forbidden contexts;
  • ordered grammars [18]—a production rule can be applied only if there is no greater applicable production rule.
Control by memory where with any nonterminal in a sentential form, its derivation is associated:
  • indexed grammars [20]—the application of production rules gives sentential forms where the nonterminal symbols are followed by sequences of indexes (stack of special symbols), and indexes can be erased only by rules contained in these indexes but erasing of the indexes is done in reverse order of their appearance.
Control by external mechanism where a mechanism used to select derivations does not belong to the grammar:
  • graph-controlled grammars [21,22]—the sequence of productions applied in a derivation to obtain a string corresponds to a path, whose nodes represent the production rules, in an associated bicolored digraph;
  • Petri net controlled grammars [23,24]—the sequence of productions used to obtain a string of the language of a grammar corresponds to a firing sequence of transitions, which are labeled by the productions, from the initial marking to a final marking.
Parallelism is another nontraditional approach used with grammars where, instead of rewriting a single symbol in each derivation step, several symbols can be rewritten simultaneously. There are two main variants of parallel mechanisms associated with grammars. The first is total parallelism, used in the broad varieties of (Deterministic Extended Tabled Zero-Sided) Lindenmayer systems [5,25], where all symbols of a string, including terminals, are rewritten in each step. The second is partial parallelism, where all or some nonterminal symbols (but not terminal symbols) are rewritten in each step of the derivation:
  • absolutely parallel grammars [26]—all nonterminals of the sentential form are rewritten in one derivation step;
  • Indian parallel grammars [27]—all occurrences of one letter are replaced (according to one rule);
  • Russian parallel grammars [28]—which combine the context-free and Indian parallel features;
  • scattered context grammars [29]—in which only a fixed number of symbols can be replaced in a step but the symbols can be different;
  • concurrently controlled grammars [30]—the control over a parallel application of the productions is realized by a Petri net with different parallel firing strategies.
Another perspective on using the notion of parallelism with grammars is a grammar system, which is a system of several phrase-structure grammars, with their own axioms, symbols and rewriting productions, that can work simultaneously and generate their own strings. One such grammar system is a parallel communicating grammar system [31,32], where the grammars start from separate axioms, work in parallel rewriting their own sentential forms, and communicate with each other by request. The language of one distinguished grammar in the system is considered the language of the system.
A relational grammar (an n-ary grammar) can be considered to be another type of grammar system consisting of several grammars that work by applying productions synchronously or asynchronously [33,34]. More precisely, an n-ary grammar (where n is a positive integer) is a system of n terminal alphabets, n nonterminal alphabets, a set of productions and an initial n-tuple of nonterminals. Each production is an n-tuple of common productions or empty places. An n-ary grammar generates a language of relations, i.e., n-tuples of strings over the terminal alphabets. Work [33] showed that the classes of languages generated by relational grammars form a hierarchy between the family of context-free languages and the family of context-sensitive languages. Paper [34] studied closure, projective and other properties of relational grammars, and generalized Chomsky’s classification to n-ary grammars. Several other papers [35,36,37,38,39,40] also investigated the properties of relational grammars and applied them to problems appearing in natural and visual language processing.

3. Notions and Notations

Throughout the paper, we assume that the reader is familiar with the basic concepts and results of the theory of formal languages, Petri nets and relations; for details we refer to [4,9,41] (formal languages, automata, computation), [6,10] (regulated rewriting systems), [42,43] (Petri nets), [23,24,44] (Petri net controlled grammars), and [33,34,45,46,47,48] (finitary relations). Nevertheless, in this section, we recall all the notions and notations that are important for understanding this paper.
Basic conventions: inclusion is denoted by ⊆ and strict (proper) inclusion by ⊂. The symbol ∅ denotes the empty set. The powerset of a set X is denoted by P(X), and its cardinality by |X|. An ordered sequence of two elements a, b is called a pair and denoted by (a, b). Two pairs (a1, a2) and (b1, b2) are equal iff a1 = b1 and a2 = b2. Let X, Y be sets. The set of all pairs (a, b), where a ∈ X and b ∈ Y, is called the Cartesian product of X and Y, and denoted by X × Y. Further, X × X = X². A binary relation on sets X, Y is a subset of the Cartesian product X × Y.

Strings, Languages and Grammars

We first recall the fundamental concepts of formal language theory, such as an alphabet, a string and a language, from [41]:
Definition 1.
An alphabet is a nonempty set of abstract symbols.
Definition 2.
A string (or a word) over an alphabet Σ is a finite sequence of symbols from Σ. The sequence of zero symbols is called the empty string and denoted by λ. The set of all strings over Σ is denoted by Σ*. The set Σ* ∖ {λ} is denoted by Σ+.
Definition 3.
A subset of Σ * is called a language.
Definition 4.
The number of occurrences of symbols in w ∈ Σ* is called its length and denoted by |w|. The number of occurrences of a symbol x in a string w is denoted by |w|_x.
Example 1.
Let Σ = {a, b, c} be an alphabet. Then, w = aaabbbcc is a string over Σ with |w| = 8 and |w|_a = |w|_b = 3, |w|_c = 2. We can notice that w belongs to L = { aⁿbⁿcᵐ ∣ n ≥ 1, m ≥ 1 }, which is a language over Σ.
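The length and occurrence counts, and membership in L, are easy to check programmatically; a minimal sketch (the helper `in_L` is our own illustration, not from the paper):

```python
import re

def in_L(w: str) -> bool:
    # Membership test for L = { a^n b^n c^m : n >= 1, m >= 1 }.
    m = re.fullmatch(r"(a+)(b+)(c+)", w)
    return m is not None and len(m.group(1)) == len(m.group(2))

w = "aaabbbcc"
assert len(w) == 8 and w.count("a") == w.count("b") == 3 and w.count("c") == 2
assert in_L(w) and not in_L("aabbb")
```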
Next, we cite the definitions of context-free and matrix grammars and the related notations, which are given in more detail in [4,6].
Definition 5.
A context-free grammar is a quadruple G = (V, Σ, S, R) where V and Σ are disjoint alphabets of nonterminal and terminal symbols, respectively, S ∈ V is the start symbol and R ⊆ V × (V ∪ Σ)* is a finite set of (production) rules. Usually, a rule (A, x) is written as A → x. A rule of the form A → λ is called an erasing rule, and a rule of the form A → x, where x ∈ Σ*, is called terminal.
Definition 6.
Let G = (V, Σ, S, R) be a context-free grammar. If R ⊆ V × Σ*(V ∪ {λ}), then G is called regular, and if R ⊆ V × Σ*(V ∪ {λ})Σ*, then it is called linear.
The families of regular, linear and context-free languages are denoted by L ( REG ) , L ( LIN ) and L ( CF ) , respectively.
Definition 7.
Let G = ( V , Σ , S , R ) be a context-free grammar.
  • The string x ∈ (V ∪ Σ)+ directly derives y ∈ (V ∪ Σ)*, written as x ⇒ y, if and only if there is a rule r = A → α ∈ R such that x = x1Ax2 and y = x1αx2.
  • The reflexive and transitive closure of the relation ⇒ is denoted by ⇒*.
  • A derivation using the sequence of rules τ = r1r2⋯rn is denoted by ⇒τ or ⇒r1r2⋯rn.
  • The language generated by a grammar G is defined by L(G) = { w ∈ Σ* ∣ S ⇒* w }.
Example 2.
G 1 = ( { S , A , B } , { a , b , c } , S , R ) where R contains the productions:
r0 : S → AB, r1 : A → aAb, r2 : A → ab, r3 : B → cB, r4 : B → c
is a context-free grammar, and it generates the language L in Example 1.
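The derivation of w = aaabbbcc in G1 can be replayed mechanically; a small sketch (leftmost rewriting and the rule labels of Example 2; the function name is ours):

```python
def derive(n: int, m: int) -> str:
    # Apply r0, then r1 (n-1 times), r2, then r3 (m-1 times), r4.
    rules = {"r0": ("S", "AB"), "r1": ("A", "aAb"), "r2": ("A", "ab"),
             "r3": ("B", "cB"), "r4": ("B", "c")}
    sequence = ["r0"] + ["r1"] * (n - 1) + ["r2"] + ["r3"] * (m - 1) + ["r4"]
    w = "S"
    for label in sequence:
        lhs, rhs = rules[label]
        w = w.replace(lhs, rhs, 1)  # rewrite the leftmost occurrence of lhs
    return w

assert derive(3, 2) == "aaabbbcc"  # a^3 b^3 c^2, as in Example 1
```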
Definition 8.
A matrix grammar is a quadruple G = (V, Σ, S, M) where V, Σ, S are defined as for a context-free grammar and M is a finite set of matrices, which are finite strings over a set of context-free rules (i.e., finite sequences of context-free rules). The language generated by a matrix grammar G is defined by L(G) = { w ∈ Σ* ∣ S ⇒π w and π ∈ M* }.
Example 3.
G = ( { S , A , B } , { a , b , c } , S , M ) where M contains the matrices:
m0 : (S → AB), m1 : (A → aAb, B → cB), m2 : (A → ab, B → c)
is a matrix grammar, and it generates the language { aⁿbⁿcⁿ ∣ n ≥ 1 }.
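The in-order application of the rules of a matrix can be simulated directly; a minimal sketch (our own encoding of the matrices of Example 3):

```python
def derive_matrix(n: int) -> str:
    # Matrices of Example 3; m0 : (S -> AB) is applied implicitly first.
    m1 = [("A", "aAb"), ("B", "cB")]
    m2 = [("A", "ab"), ("B", "c")]
    w = "AB"
    for matrix in [m1] * (n - 1) + [m2]:
        for lhs, rhs in matrix:  # rules of a matrix, applied in the given order
            w = w.replace(lhs, rhs, 1)
    return w

assert derive_matrix(3) == "aaabbbccc"
```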
The family of languages generated by matrix grammars is denoted by L ( MAT ) .
Lastly, we recall the notions of a Petri net, a context-free Petri net and a Petri net controlled grammar from [23,24,44].
Definition 9.
A Petri net is a construct N = (P, T, F, ϕ) where P and T are disjoint finite sets of places and transitions, respectively, F ⊆ (P × T) ∪ (T × P) is a set of directed arcs, and ϕ : (P × T) ∪ (T × P) → {0, 1, 2, …} is a weight function with ϕ(x, y) = 0 for all (x, y) ∈ ((P × T) ∪ (T × P)) ∖ F.
A Petri net can be represented by a bipartite directed graph with the node set P ∪ T where places are drawn as circles, transitions as boxes and arcs as arrows with labels ϕ(p, t) or ϕ(t, p). If ϕ(p, t) = 1 or ϕ(t, p) = 1, the label is omitted. A mapping μ : P → {0, 1, 2, …} is called a marking. For each place p ∈ P, μ(p) gives the number of tokens in p.
Definition 10.
A context-free Petri net (in short, a cf Petri net) with respect to a context-free grammar G = (V, Σ, S, R) is a tuple N = (P, T, F, ϕ, β, γ, ι) where
(1)
(P, T, F, ϕ) is a Petri net;
(2)
the labeling functions β : P → V and γ : T → R are bijections;
(3)
there is an arc from place p to transition t if and only if γ(t) = A → α and β(p) = A; the weight of the arc (p, t) is 1;
(4)
there is an arc from transition t to place p if and only if γ(t) = A → α and β(p) = x where x ∈ V and |α|_x > 0; the weight of the arc (t, p) is |α|_x;
(5)
the initial marking ι is defined by ι(β⁻¹(S)) = 1 and ι(p) = 0 for all p ∈ P ∖ {β⁻¹(S)}.
The following example ([23]) explains the construction of a cf Petri net.
Example 4.
Let G 1 be a context-free grammar defined in Example 2. Figure 1 illustrates a cf Petri net N with respect to the grammar G 1 .
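Conditions (3) and (4) of Definition 10 determine the arc structure of a cf Petri net directly from the rules; a small sketch (the dict encoding and the upper-case convention for nonterminals are our own):

```python
def cf_petri_arcs(rules):
    # rules: dict transition-label -> (A, alpha) for a context-free grammar;
    # places are the nonterminals (upper-case letters in this sketch).
    arcs = {}
    for t, (a, alpha) in rules.items():
        arcs[(a, t)] = 1                       # condition (3): arc p -> t, weight 1
        for x in set(alpha):
            if x.isupper():                    # nonterminal occurrence in alpha
                arcs[(t, x)] = alpha.count(x)  # condition (4): weight |alpha|_x
    return arcs

net = cf_petri_arcs({"r0": ("S", "AB"), "r1": ("A", "aAb"), "r4": ("B", "c")})
assert net[("S", "r0")] == 1 and net[("r0", "A")] == 1
assert net[("r1", "A")] == 1 and ("r4", "B") not in net
```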
Definition 11.
A Petri net controlled grammar is a tuple G = (V, Σ, S, R, N, γ, M) where V, Σ, S, R are defined as for a context-free grammar, N = (P, T, F, ϕ, ι) is a Petri net with initial marking ι, γ : T → R ∪ {λ} is a labeling function and M is a set of final markings. The language generated by a Petri net controlled grammar G, denoted by L(G), consists of all strings w ∈ Σ* such that there is a derivation S ⇒r1r2⋯rk w and an occurrence sequence ν = t1t2⋯ts which is successful for M such that r1r2⋯rk = γ(t1t2⋯ts).
The family of languages generated by Petri net controlled grammars is denoted by L ( PN ) .
The hierarchical relationships of the language families defined above are summarized as follows.
Theorem 1.
L(REG) ⊂ L(LIN) ⊂ L(CF) ⊂ L(MAT) = L(PN).
The correctness of the inclusions L(REG) ⊂ L(LIN) ⊂ L(CF) was first shown in [1,3]. The proof of the strict inclusion L(CF) ⊂ L(MAT) can be found in [41]. The equality L(MAT) = L(PN) was established in [23].

4. Binary Strings, Languages and Grammars

In this section, we define binary strings, languages and grammars by modifying the n-ary counterparts initially studied in [33,34].
Definition 12.
A pair of strings u1, u2 over an alphabet V is called a binary string and denoted by u = (u1, u2). The binary empty string is denoted by (λ, λ).
Definition 13.
A subset L of (V*)² is called a binary language.
Definition 14.
The concatenation of binary strings u = (u1, u2) ∈ (V*)² and v = (v1, v2) ∈ (V*)² is defined as uv = (u1v1, u2v2).
Definition 15.
For two binary languages L1, L2 ⊆ (V*)², their
  • union is defined as L1 ∪ L2 = { w ∣ w ∈ L1 or w ∈ L2 },
  • concatenation is defined as L1L2 = { uv ∣ u ∈ L1 and v ∈ L2 }.
Definition 16.
For L ⊆ (V*)², its Kleene star is defined as
L* = L⁰ ∪ L¹ ∪ L² ∪ ⋯
where L⁰ = { (λ, λ) } and Lⁱ = Lⁱ⁻¹L, i ≥ 1.
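Definitions 14–16 can be made concrete in a few lines; a sketch encoding binary strings as Python pairs (the function names are illustrative):

```python
def bconcat(u, v):
    # Definition 14: componentwise concatenation uv = (u1 v1, u2 v2).
    return (u[0] + v[0], u[1] + v[1])

def lconcat(L1, L2):
    # Definition 15: concatenation of binary languages.
    return {bconcat(u, v) for u in L1 for v in L2}

def kleene(L, k):
    # Definition 16, truncated: L^0 ∪ L^1 ∪ ... ∪ L^k approximates L*.
    result = {("", "")}           # ("", "") plays the role of (λ, λ)
    power = {("", "")}
    for _ in range(k):
        power = lconcat(power, L)
        result = result | power
    return result

assert bconcat(("ab", "c"), ("a", "bb")) == ("aba", "cbb")
assert kleene({("a", "b")}, 2) == {("", ""), ("a", "b"), ("aa", "bb")}
```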
Definition 17.
A binary context-free grammar is a quadruple
G = ( V 1 × V 2 , Σ 1 × Σ 2 , ( S 1 , S 2 ) , R )
where
(1)
V1 and V2 are sets of nonterminal symbols,
(2)
Σ1 and Σ2, with Vi ∩ Σi = ∅, i = 1, 2, are sets of terminal symbols,
(3)
(S1, S2) ∈ V1 × V2 is the start (initial) pair, and
(4)
R is a finite nonempty set of binary productions (rules).
A binary production is a pair (r1, r2) where each ri, i = 1, 2, is either empty (ri = ∅) or a context-free production, i.e., ri ∈ Vi × (Vi ∪ Σi)*.
Remark 1.
A binary production (r1, r2) can be written as u → v where u = (u1, u2), v = (v1, v2) and ui → vi if ri ≠ ∅, and ui = vi = λ if ri = ∅. For a production ri = ui → vi, to indicate its left-hand side and right-hand side, we also use the notations lhs(ri) and rhs(ri), i.e., lhs(ri) = ui and rhs(ri) = vi.
Remark 2.
In Definition 17, if each nonempty ri is a regular production, then the grammar G is called regular, and if each nonempty ri is a linear production, then it is called linear.
Remark 3.
We say that Gl = (V1, Σ1, S1, R1) and Gr = (V2, Σ2, S2, R2) are the left and right grammars with respect to the binary grammar G, respectively, where
R1 = { r1 ∣ (r1, r2) ∈ R and r1 ≠ ∅ } and R2 = { r2 ∣ (r1, r2) ∈ R and r2 ≠ ∅ }.
Definition 18.
Let G be a binary context-free grammar, and let u, v be pairs in (V1 ∪ Σ1)* × (V2 ∪ Σ2)*. We say that u directly derives v, written as u → v, if there exist pairs x, z ∈ (V1 ∪ Σ1)* × (V2 ∪ Σ2)* and a production y → w ∈ R such that u = xyz and v = xwz. The reflexive and transitive closure of → is denoted by →*.
Definition 19.
The binary language generated by a binary context-free grammar G is defined as
L(G) = { (w1, w2) ∈ Σ1* × Σ2* ∣ (S1, S2) →* (w1, w2) }.
Definition 20.
The left and right languages are defined as
Ll(G) = { w1 ∣ (w1, w2) ∈ L(G) } and Lr(G) = { w2 ∣ (w1, w2) ∈ L(G) },
i.e., the sets of left and right strings in all binary strings of L ( G ) , respectively.
Example 5.
Consider the binary grammar
G 2 = ( { S 1 } × { S 2 } , { a , b } × { a , b } , ( S 1 , S 2 ) , R )
where R consists of the following productions
(S1, S2) → (S1S1, aS2), (S1, S2) → (λ, bS2), (S1, S2) → (λ, λ).
It is not difficult to see that n applications of the first production produce the pair (S1ⁿ⁺¹, aⁿS2). Then, in order to eliminate the S1s, we apply the second production n times, and we terminate the derivation by applying the third production, which yields the binary string (λ, aⁿbⁿ). Thus, L(G2) = { (λ, aⁿbⁿ) ∣ n > 0 }, and Lr(G2) = { aⁿbⁿ ∣ n > 0 } ∈ L(CF).
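The derivation scheme of Example 5 can be replayed step by step; a minimal sketch (the tokens "S1" and "S2" encode the nonterminals, and the function name is ours):

```python
def derive_G2(n: int):
    left, right = "S1", "S2"
    for _ in range(n):                        # (S1 -> S1 S1, S2 -> a S2)
        left = left.replace("S1", "S1S1", 1)
        right = right.replace("S2", "aS2", 1)
    for _ in range(n):                        # (S1 -> λ, S2 -> b S2)
        left = left.replace("S1", "", 1)
        right = right.replace("S2", "bS2", 1)
    left = left.replace("S1", "", 1)          # (S1 -> λ, S2 -> λ)
    right = right.replace("S2", "", 1)
    return (left, right)

assert derive_G2(2) == ("", "aabb")  # the binary string (λ, a^2 b^2)
```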
Example 6.
The grammar
G 3 = ( { S 1 , A , B , C } × { S 2 , X , Y , Z } , { a , b , c } 2 , ( S 1 , S 2 ) , R )
where R consists of the productions
(S1 → ABC, S2 → X), (A → aA, X → Y), (B → bB, Y → Z), (C → cC, Z → X), (A → a, X → Z), (B → b, Z → Y), (C → c, Y → λ),
generates the language L(G3) = { (aⁿbⁿcⁿ, λ) ∣ n > 0 }, whose left language is Ll(G3) = { aⁿbⁿcⁿ ∣ n > 0 } ∉ L(CF). (Note that the right grammar of G3 derives only λ and serves purely as a control.)
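In G3, the second components of the rule pairs act as a finite control cycling through X, Y, Z, which forces the numbers of a's, b's and c's to agree; a minimal sketch (the tuple encoding of the rule pairs is our own):

```python
def derive_G3(n: int):
    # Each tuple is (lhs, rhs, control-before, control-after) of a rule pair.
    cycle = [("A", "aA", "X", "Y"), ("B", "bB", "Y", "Z"), ("C", "cC", "Z", "X")]
    final = [("A", "a", "X", "Z"), ("B", "b", "Z", "Y"), ("C", "c", "Y", "")]
    word, state = "ABC", "X"       # after applying (S1 -> ABC, S2 -> X)
    for lhs, rhs, src, dst in cycle * (n - 1) + final:
        assert state == src        # the pair is applicable only in state src
        word = word.replace(lhs, rhs, 1)
        state = dst
    return (word, state)

assert derive_G3(3) == ("aaabbbccc", "")
```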
Example 6 illustrates that binary context-free grammars can generate non-context-free languages, which implies that binary context-free grammars are more powerful than Chomskyan context-free grammars. Thus, binary context-free grammars can be used to study non-context-free structures, such as the cross-serial dependencies appearing in natural and programming languages, with “context-free” tools such as parsing (derivation) trees.
We denote the families of binary languages generated by binary grammars whose left and right grammars are of types X, Y ∈ {REG, LIN, CF} by B(X, Y). We also denote the families of left and right languages generated by binary grammars by B(X), X ∈ {REG, LIN, CF}.

5. Synchronized Forms for Binary Grammars

In a binary grammar, the derivation of a string in the left or right grammar can pause or stop while the other still continues, because of rules of the forms (∅, r2) and (r1, ∅). However, we show that the first and second derivations of binary context-free grammars can be synchronized, i.e., in each derivation step, a pair of nonempty productions is applied, and both derivations stop at the same time.
Definition 21.
A binary context-free (regular, linear) grammar G is called synchronized if it does not have any production of the form (∅, r2) or (r1, ∅).
Lemma 1.
For every binary context-free grammar, there exists an equivalent synchronized binary context-free grammar.
Proof. 
Let G = (V1 × V2, Σ1 × Σ2, (S1, S2), R) be a binary context-free grammar. Let
R̄ = { (r1, r2) ∈ R ∣ r1 = ∅ or r2 = ∅ }.
We define the binary cf grammar G′ = (V1′ × V2′, Σ1 × Σ2, (S1′, S2′), R′) where Vi′ = Vi ∪ {X, S1′, S2′}, i = 1, 2, with X, S1′ and S2′ new nonterminals, and
R′ = (R ∖ R̄) ∪ { (X → X, r2) ∣ (∅, r2) ∈ R̄ } ∪ { (r1, X → X) ∣ (r1, ∅) ∈ R̄ } ∪ { (S1′ → S1X, S2′ → S2X), (X → λ, X → λ) }.
Then, the equality L(G′) = L(G) is obvious. □
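The construction in the proof of Lemma 1 can be sketched as a rule-set transformation (rules encoded as (lhs, rhs) pairs with None for the empty slot; all names are illustrative):

```python
def synchronize(rules):
    # New start pair (S1' -> S1 X, S2' -> S2 X) and erasing pair (X -> λ, X -> λ);
    # every empty slot of an original pair is padded with the dummy rule X -> X.
    out = [(("S1'", "S1X"), ("S2'", "S2X")), (("X", ""), ("X", ""))]
    for r1, r2 in rules:
        if r1 is None:
            out.append((("X", "X"), r2))
        elif r2 is None:
            out.append((r1, ("X", "X")))
        else:
            out.append((r1, r2))
    return out

sync = synchronize([(("S1", "aS1"), None), (None, ("S2", "bS2"))])
assert all(r1 is not None and r2 is not None for r1, r2 in sync)
```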
The proof of Lemma 1 cannot be used to show that there is also a synchronized form for a binary regular or linear grammar, since the rules introduced there (e.g., S1′ → S1X) are neither regular nor linear. The next lemma establishes the existence of an equivalent synchronized form for any binary regular grammar too.
Lemma 2.
For every binary regular grammar, there exists an equivalent synchronized binary regular grammar.
Proof. 
Let G = (V1 × V2, Σ1 × Σ2, (S1, S2), R) be a binary regular grammar. Let R̄ = { (r1, r2) ∈ R ∣ r1 = ∅ or r2 = ∅ } and
R1 = { (r1, r2) ∈ R ∣ rhs(r1) ∈ Σ1*, rhs(r2) ∈ Σ2* }, R2 = { (r1, r2) ∈ R ∣ rhs(r1) ∈ Σ1*V1, rhs(r2) ∈ Σ2*V2 }.
The proof of the lemma consists of two parts. First, we replace all binary productions of the form (A1 → x1, r2) and (r1, A2 → x2), where x1 ∈ Σ1*, x2 ∈ Σ2* and rhs(r1) ∉ Σ1*, rhs(r2) ∉ Σ2*, with the productions (A1 → x1E, r2) and (r1, A2 → x2E), where E is a new nonterminal. By this change, the early stop of one of the derivations by the left and right grammars is prevented. Let
R3 = { (r1, r2) ∈ R ∣ rhs(r1) ∈ Σ1*, rhs(r2) ∉ Σ2* }, R4 = { (r1, r2) ∈ R ∣ rhs(r1) ∉ Σ1*, rhs(r2) ∈ Σ2* }.
We set V1′ = V1 ∪ {E}, V2′ = V2 ∪ {E} and
R3′ = { (A1 → x1E, r2) ∣ (A1 → x1, r2) ∈ R3 }, R4′ = { (r1, A2 → x2E) ∣ (r1, A2 → x2) ∈ R4 },
and replace R with (R ∖ (R3 ∪ R4)) ∪ R3′ ∪ R4′.
Second, in order to eliminate empty productions, we replace all binary production rules of the form (A1 → x1B1, ∅) and (∅, A2 → x2B2) with the pairs of production rules (A1 → Ar, A2 → Ar), (Ar → x1B1, Ar → A2) and (A1 → Ar, A2 → Ar), (Ar → A1, Ar → x2B2), respectively, where the Ar are new nonterminals. Thus, we define the following sets of new productions:
R5 = { (A1 → Ar, A2 → Ar), (Ar → A1, Ar → xB2) ∣ r = (∅, A2 → xB2) ∈ R̄ and (A1, A2) ∈ V1 × V2 }, R6 = { (A1 → Ar, A2 → Ar), (Ar → xB1, Ar → A2) ∣ r = (A1 → xB1, ∅) ∈ R̄ and (A1, A2) ∈ V1 × V2 },
where each Ar is a new nonterminal symbol introduced for the production r = (A1 → α1, ∅) or r = (∅, A2 → α2) in R̄. Let
VR,1 = { Ar ∣ r = (∅, A2 → xB2) ∈ R̄ }
and
VR,2 = { Ar ∣ r = (A1 → xB1, ∅) ∈ R̄ }.
We define the binary regular grammar G′ = (V1″ × V2″, Σ1 × Σ2, (S1, S2), R′) as follows:
V1″ = V1′ ∪ VR,1, V2″ = V2′ ∪ VR,2, R′ = R1 ∪ R2 ∪ R5 ∪ R6 ∪ { (E → λ, E → λ) },
where R1 and R2 are taken with respect to the rule set modified in the first part.
First, we show that L(G) ⊆ L(G′). Consider a derivation
(S1, S2) = (w1,0, w2,0) → (w1,1, w2,1) → ⋯ → (w1,n, w2,n) = (w1, w2)
in G where (w1, w2) ∈ Σ1* × Σ2*.
For a derivation step (w1,i−1, w2,i−1) → (w1,i, w2,i), 1 ≤ i ≤ n, the following cases are possible:
Case 1: the step applies a production from R1 ∪ R2. Then, the same rule is also applied in the corresponding step of the simulating derivation in G′.
Case 2: the step applies a production of the form r = (∅, A2 → α2) or r = (A1 → α1, ∅). Then, the production sequence (A1 → Ar, A2 → Ar), (Ar → A1, Ar → xB2) ∈ R5 or the production sequence (A1 → Ar, A2 → Ar), (Ar → xB1, Ar → A2) ∈ R6 is applied in the corresponding steps of the simulating derivation in G′.
Case 3: the step applies a rule of the form (A1 → x1, r2) with rhs(r2) ∉ Σ2*, or (r1, A2 → x2) with rhs(r1) ∉ Σ1*. Then, the rule (A1 → x1E, r2) or (r1, A2 → x2E) is applied in the corresponding step of the simulating derivation in G′, and the derivation terminates with the application of (E → λ, E → λ).
The inclusion L(G′) ⊆ L(G) is obvious:
(1) the application of a rule of the form (A1 → x1E, r2), rhs(r2) ∉ Σ2*, or (r1, A2 → x2E), rhs(r1) ∉ Σ1*, can be immediately replaced with the pair (A1 → x1, r2) or (r1, A2 → x2), respectively;
(2) if a rule of the form (A1 → Ar, A2 → Ar) is applied at some derivation step, the only pair of productions applicable next is (Ar → A1, Ar → xB2) or (Ar → xB1, Ar → A2), respectively, since Ar is unique for each pair of productions. Thus, the sequence of pairs of productions (A1 → Ar, A2 → Ar) and (Ar → A1, Ar → xB2), or (A1 → Ar, A2 → Ar) and (Ar → xB1, Ar → A2), is replaced with r = (∅, A2 → α2) or r = (A1 → α1, ∅), respectively. □
Using the same arguments as in the proof of Lemma 2, one can show that a similar fact also holds for binary linear grammars.
Lemma 3.
For every binary linear grammar, there exists an equivalent synchronized binary linear grammar.

6. Generative Capacities of Binary Grammars

In this section, we discuss the generative capacities of binary regular, linear and context-free grammars.
The following two lemmas immediately follow from the definitions of binary languages.
Lemma 4.
B(X, Y) = B(Y, X), X, Y ∈ {REG, LIN, CF}.
Lemma 5.
B(REG) ⊆ B(LIN) ⊆ B(CF).
Lemma 6.
L(REG) ⊆ B(REG), L(LIN) ⊆ B(LIN), L(CF) ⊆ B(CF).
Proof. 
Let G = (V, Σ, S, R) be a context-free (regular, linear) grammar. Then, we define the binary context-free (regular, linear) grammar G′ = (V × V, Σ × Σ, (S, S), R′) by setting, for each production r = A → α ∈ R, the production r′ = (−, A → α) in R′. Then, it is not difficult to see that L(G) = L_r(G′). In the same way, we can also show that L(G) = L_l(G′). □
Now we show that binary regular and linear grammars generate regular and linear languages, respectively.
Lemma 7.
B(LIN) ⊆ L(LIN).
Proof. 
Let G = (V₁ × V₂, Σ₁ × Σ₂, (S₁, S₂), R) be a binary linear grammar. Without loss of generality, we can assume that the grammar G is synchronized. We set V = {[A₁, A₂] | (A₁, A₂) ∈ V₁ × V₂}, where the [A₁, A₂], (A₁, A₂) ∈ V₁ × V₂, are new nonterminals, and we define
R′ = {[A₁, A₂] → x₂[B₁, B₂]y₂ | (A₁ → x₁B₁y₁, A₂ → x₂B₂y₂) ∈ R} ∪ {[A₁, A₂] → x₂ | (A₁ → x₁, A₂ → x₂) ∈ R}.
Then, G′ = (V, Σ₂, [S₁, S₂], R′) is a linear grammar, and L_r(G) = L(G′) is obvious. Hence, L_r(G) is linear, i.e., B(LIN) ⊆ L(LIN). □
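The product construction in this proof is easy to mechanize. The following Python sketch is our own illustration (not code from the paper); the rule encoding is an assumption of ours: a linear rule A → xBy is the tuple (A, x, B, y), with B = None for a terminal rule A → x.

```python
def binary_linear_to_linear(pairs, start):
    """pairs: iterable of (rule1, rule2) of a synchronized binary linear
    grammar; start: the pair (S1, S2) of initial nonterminals.
    Returns (new_start, new_rules) of the simulating linear grammar over
    paired nonterminals [A1, A2], encoded as Python tuples (A1, A2)."""
    new_rules = []
    for (a1, x1, b1, y1), (a2, x2, b2, y2) in pairs:
        lhs = (a1, a2)                          # the paired nonterminal [A1, A2]
        if b1 is None and b2 is None:
            # (A1 -> x1, A2 -> x2)  becomes  [A1, A2] -> x2
            new_rules.append((lhs, x2, None, ""))
        elif b1 is not None and b2 is not None:
            # (A1 -> x1 B1 y1, A2 -> x2 B2 y2)  becomes  [A1, A2] -> x2 [B1, B2] y2
            new_rules.append((lhs, x2, (b1, b2), y2))
        # a synchronized grammar has no mixed pairs, so no other case arises
    return start, new_rules
```

For instance, the synchronized binary linear grammar with rule pairs (S₁ → aS₁, S₂ → bS₂) and (S₁ → a, S₂ → b) is mapped to the linear grammar with rules [S₁, S₂] → b[S₁, S₂] and [S₁, S₂] → b, whose language is the right projection {bⁿ | n ≥ 1}.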
Corollary 1.
B(REG) ⊆ L(REG).
Next, we show that binary context-free grammars are more powerful than Chomskyan context-free grammars.
Lemma 8.
L(CF) ⊊ B(CF).
Proof. 
By Lemma 6, L(CF) ⊆ B(CF). Let us consider the binary context-free grammar G₄ = ({S, C, D, E} × {S, A, B}, {a, b, c}², (S, S), R) where R consists of the following productions:
(1) (S → S, S → AB),
(2) (S → C, A → aA), (C → S, B → aB),
(3) (S → D, A → bA), (D → S, B → bB),
(4) (S → E, A → λ), (E → a, B → λ).
Any successful derivation starts with production (1). For each production sequence (i), 2 ≤ i ≤ 4, if the first production is applied, then the only applicable production in the next derivation step is the second one. If production sequence (4) is applied right after (1), then the derivation generates the binary string (a, λ). Otherwise, after some steps, the derivation results in (S, wAwB), w ∈ {a, b}⁺, by applying production sequences (2) and/or (3), and then it terminates by applying production sequence (4). Thus,
L(G₄) = {(a, ww) | w ∈ {a, b}*}.
Since L_r(G₄) ∉ L(CF), we have the strict inclusion L(CF) ⊊ B(CF). □
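The behavior of G₄ can also be checked mechanically. The sketch below is our own illustrative harness, not code from the paper: a bounded breadth-first search over the pair sentential forms of G₄, rewriting the leftmost occurrence of each left-hand side, which suffices here because every nonterminal occurs at most once in any reachable sentential form.

```python
from collections import deque

# Productions of G4 as ((lhs1, rhs1), (lhs2, rhs2)); "" plays the role of lambda.
G4 = [(("S", "S"), ("S", "AB")),
      (("S", "C"), ("A", "aA")), (("C", "S"), ("B", "aB")),
      (("S", "D"), ("A", "bA")), (("D", "S"), ("B", "bB")),
      (("S", "E"), ("A", "")), (("E", "a"), ("B", ""))]

def terminal(s):
    return not any(c.isupper() for c in s)   # nonterminals are uppercase

def binary_language(rules, start, max_len=12):
    """Bounded BFS over pairs of sentential forms; returns all terminal pairs
    whose total length stays below max_len."""
    seen, result, queue = {start}, set(), deque([start])
    while queue:
        u, v = queue.popleft()
        for (l1, r1), (l2, r2) in rules:
            if l1 in u and l2 in v:          # both productions of the pair apply
                w = (u.replace(l1, r1, 1), v.replace(l2, r2, 1))
                if len(w[0]) + len(w[1]) > max_len or w in seen:
                    continue
                seen.add(w)
                if terminal(w[0]) and terminal(w[1]):
                    result.add(w)
                else:
                    queue.append(w)
    return result
```

Every terminal pair found has the form (a, ww), e.g. (a, abab), in line with L(G₄) above.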
Next, we show that binary context-free grammars are at least as powerful as matrix grammars.
Lemma 9.
L(MAT) ⊆ B(CF).
Proof. 
Let G = (V, Σ, S, M) be a matrix grammar where M consists of matrices m₁, m₂, …, mₙ with mᵢ : (r_{i,1}, r_{i,2}, …, r_{i,k(i)}), 1 ≤ i ≤ n. We set the following sets of new nonterminals:
V₁ = {Yᵢ | 1 ≤ i ≤ n} ∪ {Z_{i,j} | 1 ≤ i ≤ n, 1 ≤ j ≤ k(i)} ∪ {S₁, X}, V₂ = V ∪ {S₂}.
We construct the following binary productions:
(1) the start production: (S₁ → X, S₂ → S);
(2) the matrix entry productions: (X → Yᵢ, −), 1 ≤ i ≤ n;
(3) the matrix processing productions:
(Yᵢ → Z_{i,1}, r_{i,1}), (Z_{i,1} → Z_{i,2}, r_{i,2}), …, (Z_{i,k(i)−1} → Z_{i,k(i)}, r_{i,k(i)})
where 1 ≤ i ≤ n;
(4) the matrix exit productions: (Z_{i,k(i)} → X, −), 1 ≤ i ≤ n;
(5) the terminating production: (X → λ, −).
We define the binary context-free grammar G′ = (V₁ × V₂, Σ × Σ, (S₁, S₂), R) where R consists of all productions (1)–(5) constructed above.
Claim 1: L(G) ⊆ L_r(G′). Let
D : S = w₀ ⇒_{m_{i_1}} w₁ ⇒_{m_{i_2}} w₂ ⇒ ⋯ ⇒_{m_{i_k}} wₖ = w ∈ Σ*
be a derivation in G. We construct a derivation D′ in G′ that simulates D. The derivation starts with the step:
(S₁, S₂) ⇒ (X, S).
Since m_{i_1} is the first matrix applied in derivation D, the next step in D′ is
(X, S) ⇒ (Y_{i_1}, S).
Furthermore,
(Y_{i_1}, S) ⇒_{(Y_{i_1} → Z_{i_1,1}, r_{i_1,1})} ⋯ ⇒_{(Z_{i_1,k(i_1)−1} → Z_{i_1,k(i_1)}, r_{i_1,k(i_1)})} (Z_{i_1,k(i_1)}, w₁).
By applying (Z_{i_1,k(i_1)} → X, −),
(Z_{i_1,k(i_1)}, w₁) ⇒ (X, w₁),
we return X to the derivation, and then we can continue simulating the application of the matrix m_{i_2} in the same manner. Thus, D′ simulates D, and L(G) ⊆ L_r(G′).
Claim 2: L_r(G′) ⊆ L(G). Any successfully terminating derivation in G′ starts by applying the production (S₁ → X, S₂ → S), followed by (X → Yᵢ, −) for some 1 ≤ i ≤ n. Then the only productions that can be applied are the matrix processing productions of form (3). Once the application of the productions of the currently active sequence has started, the productions of another sequence of form (3) cannot be applied. In order to switch to another sequence of productions of form (3), the corresponding production of form (4) must be applied after all productions of the current sequence have been applied in the given order. To terminate the derivation successfully, productions of forms (4) and (5) must be applied. Since, by construction, each sequence of productions of form (3) simulates some matrix of G, any successful derivation in G′ can be simulated by a successful derivation in G. Thus, L_r(G′) ⊆ L(G). □
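The rule set (1)–(5) can be produced mechanically from any matrix grammar. The Python sketch below is our own illustration, not code from the paper; the matrix rules are copied verbatim into the right components (a left rule is encoded as a pair (lhs, rhs-tuple)), and None stands for the empty component written "−" above.

```python
def matrix_to_binary(matrices):
    """matrices: list of matrices; each matrix is a list of context-free
    rules r_{i,j} in any encoding. Returns the binary productions of the
    Lemma 9 construction as (left rule, right rule) pairs."""
    rules = [(("S1", ("X",)), ("S2", ("S",)))]              # (1) start
    for i, m in enumerate(matrices, 1):
        # left-hand chain Y_i, Z_{i,1}, ..., Z_{i,k(i)} for matrix m_i
        chain = [f"Y{i}"] + [f"Z{i},{j}" for j in range(1, len(m) + 1)]
        rules.append((("X", (chain[0],)), None))            # (2) entry
        for j, r in enumerate(m):                           # (3) processing
            rules.append(((chain[j], (chain[j + 1],)), r))
        rules.append(((chain[-1], ("X",)), None))           # (4) exit
    rules.append((("X", ()), None))                         # (5) terminate
    return rules
```

Note that the left components alone form a regular control skeleton: at any moment exactly one of X, Yᵢ, Z_{i,j} is present, which is what forces the right component to apply whole matrices.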
The lemma above shows that any matrix language can be generated by a binary grammar in which one component grammar is regular and the other is context-free. Here, the natural question arises whether there is a binary grammar with both component grammars context-free that generates a non-matrix language. The next lemma shows that binary context-free grammars generate only matrix languages even if both of their component grammars are context-free.
Lemma 10.
B(CF) ⊆ L(MAT).
Proof. 
Let G = (V₁ × V₂, Σ₁ × Σ₂, (S₁, S₂), R) be a binary context-free grammar. Without loss of generality, we assume that G is in synchronized form. Let R₂ = {r₂ | (r₁, r₂) ∈ R}. The proof idea is as follows: first, we construct a context-free Petri net N with respect to G. Second, we define a Petri net controlled grammar G′ where the underlying right grammar G_r = (V₂, Σ₂, S₂, R₂) is controlled by the Petri net N. Then we show that L_r(G) = L(G′), i.e., B(CF) ⊆ L(PN).
Part 1: We construct the cf Petri net N = (P, T, F, φ, β, γ, ι) with respect to the nonterminals of the left grammar and the productions of the grammar G by setting its components in the following way:
  • (P, T, F, φ) is a Petri net;
  • the labeling functions β : P → V₁ and γ : T → R are bijections;
  • there is an arc from place p to transition t if and only if γ(t) = (r₁, r₂) and β(p) = lhs(r₁); the weight of the arc (p, t) is 1;
  • there is an arc from transition t to place p if and only if γ(t) = (r₁, r₂) and β(p) = X where X ∈ V₁ and |rhs(r₁)|_X > 0; the weight of the arc (t, p) is |rhs(r₁)|_X;
  • the initial marking ι is defined by ι(β⁻¹(S₁)) = 1 and ι(p) = 0 for all p ∈ P ∖ {β⁻¹(S₁)}.
Part 2: Using the right grammar (V₂, Σ₂, S₂, R₂), we define the PN controlled grammar G′ = (V₂, Σ₂, R₂, S₂, N, η, M) where N = (P, T, F, φ, β, γ, ι) is the cf Petri net defined above, η : T → R₂ is a labeling function and M is a set of final markings. We set M = {∅}, where ∅ denotes the empty marking, and η(t) = r₂ if and only if γ(t) = (r₁, r₂).
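Parts 1 and 2 are fully constructive, and can be sketched in Python (our own illustration; the encodings are assumptions, not the paper's): each left-component nonterminal becomes a place, each binary rule becomes one transition whose input arc reads lhs(r₁) with weight 1 and whose output arcs carry the occurrence counts of the nonterminals in rhs(r₁), and the labeling η(t) = r₂ is stored with the transition.

```python
from collections import Counter

def cf_petri_net(rules):
    """rules: list of pairs ((A1, rhs1), (A2, rhs2)) of a synchronized binary
    context-free grammar, with rhs given as strings whose uppercase letters
    are the nonterminals. Returns pre/post arc-weight maps per transition
    (indexed by rule position) and the control labeling eta."""
    pre, post, eta = {}, {}, {}
    for t, ((a1, rhs1), r2) in enumerate(rules):
        pre[t] = {a1: 1}                                   # arc (p, t), weight 1
        post[t] = dict(Counter(x for x in rhs1 if x.isupper()))
        eta[t] = r2                                        # eta(t) = r_2
    return pre, post, eta

def enabled(marking, pre_t):
    return all(marking.get(p, 0) >= w for p, w in pre_t.items())

def fire(marking, pre_t, post_t):
    m = dict(marking)
    for p, w in pre_t.items():
        m[p] -= w
    for p, w in post_t.items():
        m[p] = m.get(p, 0) + w
    return {p: n for p, n in m.items() if n > 0}
```

A production r₂ may then be applied in the controlled grammar exactly when its transition is enabled, and reaching the empty marking corresponds to the left component having derived a terminal string.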
Part 3: Now we show that L_r(G) = L(G′). Let
D : (S₁, S₂) ⇒_{(r_{1,1}, r_{2,1})} (w_{1,1}, w_{2,1}) ⇒_{(r_{1,2}, r_{2,2})} ⋯ ⇒_{(r_{1,k}, r_{2,k})} (w_{1,k}, w_{2,k})
be a derivation in G where (w_{1,k}, w_{2,k}) ∈ Σ₁* × Σ₂*. We show that the derivation D can be simulated by a derivation D′ in the grammar G′ constructed as follows. D′ starts with S₂, and by the definition of N, ι(β⁻¹(S₁)) = 1; thus, transition t₁ = γ⁻¹((r_{1,1}, r_{2,1})) is enabled. In the first step, we obtain
D′ : S₂ ⇒_{r_{2,1}} w_{2,1} where η(t₁) = r_{2,1}.
When transition t₁ occurs, the place in N corresponding to each nonterminal in rhs(r_{1,1}) = w_{1,1} receives as many tokens as there are occurrences of that nonterminal in rhs(r_{1,1}).
Suppose that, for some 1 ≤ i < k, we have constructed the first i steps of the derivation D′:
D′ : S₂ ⇒_{r_{2,1}} w_{2,1} ⇒_{r_{2,2}} ⋯ ⇒_{r_{2,i}} w_{2,i}
with r_{2,1} r_{2,2} ⋯ r_{2,i} = η(t₁t₂⋯tᵢ) where (r_{1,1}, r_{2,1})(r_{1,2}, r_{2,2}) ⋯ (r_{1,i}, r_{2,i}) = γ(t₁t₂⋯tᵢ), which corresponds to the first i steps of D. By definition, γ(tᵢ) = (r_{1,i}, r_{2,i}) and η(tᵢ) = r_{2,i}. When transition tᵢ fires, a token moves from the input place of tᵢ, which is labeled by the left-hand side of r_{1,i}, to its output places, which are labeled by the nonterminals occurring in the right-hand side of r_{1,i}. Thus, these nonterminals also occur in w_{1,i}. The next step in D is obtained by applying the pair (r_{1,i+1}, r_{2,i+1}):
(w_{1,i}, w_{2,i}) ⇒_{(r_{1,i+1}, r_{2,i+1})} (w_{1,i+1}, w_{2,i+1}).
Since production r_{1,i+1} is applicable in the current step, its left-hand side occurs in w_{1,i}. It follows that the transition t_{i+1} with γ(t_{i+1}) = (r_{1,i+1}, r_{2,i+1}) can fire. Consequently, we choose the production r_{2,i+1} with η(t_{i+1}) = r_{2,i+1} in D′, and obtain
w_{2,i} ⇒_{r_{2,i+1}} w_{2,i+1}.
The last step in D results in (w_{1,k}, w_{2,k}) ∈ Σ₁* × Σ₂*, which is obtained by applying the pair (r_{1,k}, r_{2,k}). Then, we choose r_{2,k} with η(tₖ) = r_{2,k} in D′. Since w_{1,k} ∈ Σ₁*, i.e., it does not contain nonterminals, all places of N have no tokens, i.e., the empty marking in M is reached. This shows that L_r(G) ⊆ L(G′).
Let
D′ : S₂ ⇒_{r_{2,1}} w_{2,1} ⇒_{r_{2,2}} ⋯ ⇒_{r_{2,k}} w_{2,k}
with r_{2,1} r_{2,2} ⋯ r_{2,k} = η(t₁t₂⋯tₖ) be a successful derivation in G′. By definition, we immediately obtain γ(t₁t₂⋯tₖ) = (r_{1,1}, r_{2,1})(r_{1,2}, r_{2,2}) ⋯ (r_{1,k}, r_{2,k}) for some (r_{1,i}, r_{2,i}) ∈ R, 1 ≤ i ≤ k. Then, we can construct the derivation D in the grammar G:
D : (S₁, S₂) ⇒_{(r_{1,1}, r_{2,1})} (w_{1,1}, w_{2,1}) ⇒_{(r_{1,2}, r_{2,2})} ⋯ ⇒_{(r_{1,k}, r_{2,k})} (w_{1,k}, w_{2,k}),
which shows that L(G′) ⊆ L_r(G). Thus, B(CF) ⊆ L(PN), and by Theorem 1, B(CF) ⊆ L(MAT). □
We summarize the results obtained above in the following theorem.
Theorem 2.
B(REG) = L(REG) ⊊ B(LIN) = L(LIN) ⊊ L(CF) ⊊ L(MAT) = B(CF).

7. Conclusions

In this paper, we redefined binary grammars as mutually controlled grammars, where each grammar in a relation generates its own language while imposing restrictions on the other.
Though binary grammars are asynchronous systems by definition, we showed that they can also work in a synchronized mode (Lemmas 1–3), i.e., both grammars in a binary relation generate strings via derivations in which each grammar applies a production in every step, and both stop at the same time. This feature of binary grammars allows one grammar in a relation to be used as a regulation mechanism for the other.
We have studied the generative capacity of binary context-free grammars and showed that binary regular and linear grammars have the same power as their Chomskyan counterparts, i.e., traditional regular and linear grammars, respectively (Lemmas 6 and 7 and Corollary 1). On the other hand, we have proved that binary context-free grammars are strictly more powerful than traditional context-free grammars (Lemma 8); in fact, they generate all matrix languages even when the binary grammars consist of regular and context-free pairs (Lemma 9). Moreover, we established that using context-free grammars as both components of the relations does not increase the computational power of binary context-free grammars, i.e., they remain equivalent to matrix grammars (Lemma 10). Using the inclusion hierarchies in Theorem 1 and the results of this paper, we obtained the comparative hierarchy for binary regular, linear and context-free grammars (Theorem 2). We have also illustrated that binary grammars have practical significance: Example 6 and Lemma 8 show that cross-serial dependencies such as duplication and multiple agreements (non-context-free syntactic structures appearing in natural and programming languages) can be expressed with binary grammars.
Here, we emphasize that ternary or higher-degree relational context-free grammars are more powerful than binary ones and can be used in modeling "nested" cross-serial dependencies. Let us assume that a ternary context-free grammar is defined similarly to binary grammars. Then, the reader can verify that the following language,
{ (a₁^{n₁} b₁^{n₁} c₁^{n₁})ⁿ (a₂^{n₂} b₂^{n₂} c₂^{n₂})ⁿ (a₃^{n₃} b₃^{n₃} c₃^{n₃})ⁿ | n, n₁, n₂, n₃ ≥ 1 },
the language of nested mutual agreements, can be generated by a ternary context-free grammar
G = ({S₁, X₁, Y₁, Z₁} × {S₂, X₂, Y₂, Z₂} × {S₃, A₁, A₂, B₁, B₂, C₁, C₂}, {aᵢ, bᵢ, cᵢ | i = 1, 2, 3}, (S₁, S₂, S₃), R)
where R consists of the following tuples of productions:
(S₁ → X₁, S₂ → X₂, S₃ → A₁A₂B₁B₂C₁C₂),
(X₁ → X₁, X₂ → X₂, A₁ → a₁A₁b₁), (X₁ → X₁, X₂ → X₂, A₂ → c₁A₂), (X₁ → Y₁, X₂ → Y₂, A₂ → c₁A₂),
(Y₁ → Y₁, Y₂ → Y₂, B₁ → a₂B₁b₂), (Y₁ → Y₁, Y₂ → Y₂, B₂ → c₂B₂), (Y₁ → Z₁, Y₂ → Z₂, B₂ → c₂B₂),
(Z₁ → Z₁, Z₂ → Z₂, C₁ → a₃C₁b₃), (Z₁ → Z₁, Z₂ → Z₂, C₂ → c₃C₂), (Z₁ → X₁, Z₂ → X₂, C₂ → c₃C₂),
(X₁ → X₁, X₂ → X₂, A₁ → a₁b₁), (X₁ → Y₁, X₂ → Y₂, A₂ → c₁),
(Y₁ → Y₁, Y₂ → Y₂, B₁ → a₂b₂), (Y₁ → Z₁, Y₂ → Z₂, B₂ → c₂),
(Z₁ → Z₁, Z₂ → Z₂, C₁ → a₃b₃), (Z₁ → λ, Z₂ → λ, C₂ → c₃).
The detailed study of higher-degree relational grammars as mutually controlled grammars will be the topic of our next investigation.

Author Contributions

Conceptualization, S.T. and A.A.A. (Ali Amer Alwan); methodology, S.T. and R.A.; validation, A.A.A. (Ali Abd Almisreb) and Y.G.; formal analysis, A.A.A. (Ali Abd Almisreb) and Y.G.; investigation, S.T. and R.A.; writing—original draft preparation, S.T. and R.A.; writing—review and editing, S.T. and A.A.A. (Ali Amer Alwan); supervision, S.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by the United Arab Emirates University Start-Up Grant 31T137.

Acknowledgments

We would like to thank the anonymous reviewers for their valuable comments and useful remarks about this paper.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study and in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CF   context-free
REG  regular
LIN  linear
MAT  matrix

References

  1. Chomsky, N. Three models for the description of languages. IRE Trans. Inf. Theory 1956, 2, 113–124. [Google Scholar] [CrossRef] [Green Version]
  2. Chomsky, N. Syntactic Structures; Mouton: 's-Gravenhage, The Netherlands, 1957. [Google Scholar]
  3. Chomsky, N. On certain formal properties of grammars. Inf. Control 1959, 2, 137–167. [Google Scholar] [CrossRef] [Green Version]
  4. Hopcroft, J.; Motwani, R.; Ullman, J. Introduction to Automata Theory, Languages, and Computation; Pearson: London, UK, 2007. [Google Scholar]
  5. Rozenberg, G.; Salomaa, A. (Eds.) Handbook of Formal Languages; Springer: Berlin/Heidelberg, Germany, 1997; Volume 1–3. [Google Scholar]
  6. Dassow, J.; Păun, G. Regulated Rewriting in Formal Language Theory; Springer: Berlin/Heidelberg, Germany, 1989. [Google Scholar]
  7. Pǎun, G.; Rozenberg, G.; Salomaa, A. DNA Computing. New Computing Paradigms; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
  8. Meduna, A.; Soukup, O. Modern Language Models and Computation. Theory with Applications; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  9. Sipser, M. Introduction to the Theory of Computation; Cengage Learning: Boston, MA, USA, 2013. [Google Scholar]
  10. Meduna, A.; Zemek, P. Regulated Grammars and Automata; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  11. Meduna, A.; Kopeček, T. Simple-Semi-Conditional Versions of Matrix Grammars with a Reduced Regulated Mechanism. Comput. Inform. 2004, 23, 287–302. [Google Scholar]
  12. Abraham, A. Some questions of phrase-structure grammars. Comput. Linguist. 1965, 4, 61–70. [Google Scholar] [CrossRef]
  13. Cremers, A.; Mayer, O. On vector languages. J. Comp. Syst. Sci. 1974, 8, 158–166. [Google Scholar] [CrossRef] [Green Version]
  14. Ginsburg, S.; Spanier, E. Control sets on grammars. Math. Syst. Theory 1968, 2, 159–177. [Google Scholar] [CrossRef]
  15. Rozenkrantz, D. Programmed grammars and classes of formal languages. J. ACM 1969, 16, 107–131. [Google Scholar] [CrossRef]
  16. Pǎun, G. A new generative device: Valence grammars. Rev. Roum. Math. Pures Appl. 1980, 25, 911–924. [Google Scholar]
  17. Cremers, A.; Maurer, H.; Mayer, O. A note on leftmost restricted random context grammars. Inform. Proc. Lett. 1973, 2, 31–33. [Google Scholar] [CrossRef]
  18. Fris, I. Grammars with partial ordering of the rules. Inform. Control 1968, 12, 415–425. [Google Scholar] [CrossRef] [Green Version]
  19. Kelemen, J. Conditional grammars: Motivations, definitions and some properties. In Proc. Conf. Automata, Languages and Mathematical Sciences; Peak, I., Szep, J., Eds.; Salgótarján, Hungary, 1984; pp. 110–123. [Google Scholar]
  20. Aho, A. Indexed grammars. An extension of context-free grammars. J. ACM 1968, 15, 647–671. [Google Scholar] [CrossRef]
  21. Wood, D. Bicolored Digraph Grammar Systems. RAIRO Inform. Théorique et Appl./Theor. Inform. Appl. 1973, 1, 145–150. [Google Scholar] [CrossRef] [Green Version]
  22. Wood, D. A Note on Bicolored Digraph Grammar Systems. IJCM 1973, 3, 301–308. [Google Scholar]
  23. Dassow, J.; Turaev, S. Petri net controlled grammars: The power of labeling and final markings. Rom. J. Inf. Sci. Technol. 2009, 12, 191–207. [Google Scholar]
  24. Dassow, J.; Turaev, S. Petri net controlled grammars: The case of special Petri nets. J. Univers. Comput. Sci. 2009, 15, 2808–2835. [Google Scholar]
  25. Prusinkiewicz, P.; Hanan, J. Lindenmayer Systems, Fractals, and Plants; Lecture Notes in Biomathematics; Springer: Berlin, Germany, 1980; Volume 79. [Google Scholar]
  26. Rajlich, V. Absolutely parallel grammars and two-way deterministic finite state transducers. J. Comput. Syst. Sci. 1972, 6, 324–342. [Google Scholar] [CrossRef] [Green Version]
  27. Siromoney, R.; Krithivasan, K. Parallel context-free languages. Inform. Control 1974, 24, 155–162. [Google Scholar] [CrossRef] [Green Version]
  28. Levitina, M. On some grammars with global productions. NTI Ser. 1972, 2, 32–36. [Google Scholar]
  29. Greibach, S.; Hopcroft, J. Scattered context grammars. J. Comput. Syst. Sci. 1969, 3, 232–247. [Google Scholar] [CrossRef] [Green Version]
  30. Mavlankulov, G.; Othman, M.; Turaev, S.; Selamat, M.; Zhumabayeva, L.; Zhukabayeva, T. Concurrently Controlled Grammars. Kybernetika 2018, 54, 748–764. [Google Scholar] [CrossRef]
  31. Păun, G.; Santean, L. Parallel communicating grammar systems: The regular case. Ann. Univ. Buc. Ser. Mat.-Inform. 1989, 37, 55–63. [Google Scholar]
  32. Csuhaj-Varjú, E.; Dassow, J.; Kelemen, J.; Păun, G. Grammar Systems: A Grammatical Approach to Distribution and Cooperation; Gordon and Beach Science Publishers: New York, NY, USA, 1994. [Google Scholar]
  33. Král, J. On Multiple Grammars. Kybernetika 1969, 5, 60–85. [Google Scholar]
  34. Čulík II, K. n-ary Grammars and the Description of Mapping of Languages. Kybernetika 1970, 6, 99–117. [Google Scholar]
  35. Bellert, I. Relational Phrase Structure Grammar and Its Tentative Applications. Inf. Control 1965, 8, 503–530. [Google Scholar] [CrossRef]
  36. Bellert, I. Relational Phrase Structure Grammar Applied to Mohawk Constructions. Kybernetika 1966, 3, 264–273. [Google Scholar]
  37. Crimi, C.; Guercio, A.; Nota, G.; Pacini, G.; Tortora, G.; Tucci, M. Relation Grammars and their Application to Multidimensional Languages. J. Vis. Lang. Comput. 1991, 4, 333–346. [Google Scholar] [CrossRef]
  38. Wittenburg, K. Earley-Style Parsing for Relational Grammars. In Proceedings of the IEEE Workshop on Visual Languages, Seattle, WA, USA, 15–18 September 1992; pp. 192–199. [Google Scholar]
  39. Wittenburg, K.; Weitzman, L. Relational Grammars: Theory and Practice in a Visual Language Interface for Process Modeling. In Visual Language Theory; Marriott, K., Meyer, B., Eds.; Springer Science & Business Media: New York, NY, USA, 1998; pp. 193–217. [Google Scholar]
  40. Johnson, D. On Relational Constraints on Grammars. In Grammatical Relations; Cole, P., Sadock, J., Eds.; BRILL: Leiden, The Netherlands, 2020; pp. 151–178. [Google Scholar]
  41. Martín-Vide, C.; Mitrana, V.; Păun, G. (Eds.) Formal Languages and Applications; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  42. Baumgarten, B. Petri-Netze. Grundlagen und Anwendungen; Wissensschaftverlag: Mannheim, Germany, 1990. [Google Scholar]
  43. Reisig, W.; Rozenberg, G. (Eds.) Lectures on Petri Nets I: Basic Models; Springer: Berlin, Germany, 1997; Volume 1441. [Google Scholar]
  44. Dassow, J.; Turaev, S. Petri net controlled grammars with a bounded number of additional places. Acta Cybernetica 2009, 19, 609–634. [Google Scholar]
  45. Novák, V.; Novotný, M. Binary and Ternary Relations. Math. Bohem. 1992, 117, 283–292. [Google Scholar]
  46. Novák, V.; Novotný, M. Pseudodimension of Relational Structures. Czechoslov. Math. J. 1999, 49, 541–560. [Google Scholar]
  47. Cristea, I.; Ştefănescu, M. Hypergroups and n-ary Relations. Eur. J. Comb. 2010, 31, 780–789. [Google Scholar] [CrossRef] [Green Version]
  48. Chaisansuk, N.; Leeratanavalee, S. Some Properties on the Powers of n-ary Relational Systems. Novi Sad J. Math. 2013, 43, 191–199. [Google Scholar]
Figure 1. A cf Petri net N associated with the grammar G 1 , where the places are labeled with the nonterminals and the transitions are labeled with the productions in one-to-one manner. Moreover, the input place of each transition corresponds to the left-hand side of the associated production and its output places correspond to the nonterminals in the right-hand side of the production. If a transition does not have output places, then the associated production is terminal.

Turaev, S.; Abdulghafor, R.; Amer Alwan, A.; Abd Almisreb, A.; Gulzar, Y. Binary Context-Free Grammars. Symmetry 2020, 12, 1209. https://doi.org/10.3390/sym12081209
