Mathematics
  • Article
  • Open Access

5 February 2021

Sequences of Groups, Hypergroups and Automata of Linear Ordinary Differential Operators

1 Department of Mathematics, Faculty of Electrical Engineering and Communication, Brno University of Technology, Technická 8, 616 00 Brno, Czech Republic
2 Department of Quantitative Methods, University of Defence in Brno, Kounicova 65, 662 10 Brno, Czech Republic
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Hypercompositional Algebra and Applications

Abstract

The main objective of this paper is the study of sequences (finite or countable) of groups and hypergroups of linear differential operators of decreasing orders. By using a suitable ordering or preordering of groups of linear differential operators we construct hypercompositional structures of linear differential operators. Moreover, we construct actions of groups of differential operators on rings of polynomials of one real variable, including diagrams of actions considered as special automata. Finally, we obtain sequences of hypergroups and automata. The examples chosen to illustrate our theoretical results fall within the theory of artificial neurons and infinite cyclic groups.

1. Introduction

This paper discusses sequences of groups, hypergroups and automata of linear differential operators. It is based on the algebraic approach to the study of linear ordinary differential equations. Its roots lie in the work of Otakar Borůvka, a Czech mathematician who tied together the algebraic, geometrical and topological approaches, and of his successor, František Neuman, who advocated the algebraic approach in his book [1]. Both of them (and their students) used classical group theory in their considerations. In several papers, published mainly as conference proceedings such as [2,3,4], the existing theory was extended by the use of hypercompositional structures in place of the usual algebraic structures. The use of hypercompositional generalizations has been tested in automata theory, where it has brought several interesting results; see, e.g., [5,6,7,8]. Naturally, this approach is not the only possible one. For another possible approach, the investigation of differential operators by means of orthogonal polynomials, see, e.g., [9,10].
Therefore, in the present paper we continue in the direction of [2,4], presenting results parallel to [11]. Our constructions, no matter how theoretical they may seem, are motivated by various practical issues of signal processing [12,13,14,15,16]. We construct sequences of groups and hypergroups of linear differential operators. This is because, in signal processing (but also in other real-life contexts), two or more connected systems create a standing higher system, the characteristics of which can be determined using characteristics of the original systems. Cascade (serial) and parallel connection of systems of signal transfers are used in this. Moreover, series of groups, motivated by the Galois theory of solvability of algebraic equations and the modern theory of field extensions, are often discussed in the literature. Notice also paper [11], where the theory of artificial neurons, used further on in some examples, is studied.
Another motivation for the study of sequences of hypergroups and their homomorphisms can be traced to ideas of classical homological algebra, which comes from the algebraic description of topological spaces. Homological algebra assigns to any topological space a family of abelian groups and to any continuous mapping of topological spaces a family of group homomorphisms. This allows us to express properties of spaces and their mappings (morphisms) by means of properties of the groups or modules or their homomorphisms. Notice that a substantial part of homology theory is devoted to the study of exact short and long sequences of the above mentioned structures.

2. Sequences of Groups and Hypergroups: Definitions and Theorems

2.1. Notation and Preliminaries

It is crucial that one understands the notation used in this paper. Recall that we study, by means of algebra, linear ordinary differential equations. Therefore, our notation, which follows the original model of Borůvka and Neuman [1], uses a mix of algebraic and functional notation.
First, we denote intervals by J and regard open intervals (bounded or unbounded). Systems of functions with continuous derivatives of order k on J are denoted by $C^k(J)$; for $k = 0$ we write $C(J)$ instead of $C^0(J)$. We treat $C^k(J)$ as a ring with respect to the usual addition and multiplication of functions. We denote by $\delta_{ij}$ the Kronecker delta, $i, j \in \mathbb{N}$, i.e., $\delta_{ii} = \delta_{jj} = 1$ and $\delta_{ij} = 0$ whenever $i \neq j$; by $\overline{\delta_{ij}}$ we mean $1 - \delta_{ij}$. Since we will be using some notions from the theory of hypercompositional structures, recall that by $\mathcal{P}(X)$ one means the power set of X, while $\mathcal{P}^*(X)$ means $\mathcal{P}(X) \setminus \{\emptyset\}$.
We regard linear homogeneous differential equations of order $n \geq 2$ with coefficients which are real and continuous on J and, for convenience, such that $p_0(x) > 0$ for all $x \in J$, i.e., equations
$$y^{(n)}(x) + p_{n-1}(x)\, y^{(n-1)}(x) + \dots + p_0(x)\, y(x) = 0.$$
By $A_n$ we, adopting the notation of Neuman [1], mean the set of all such equations.
Example 1.
The above notation can be explained on an example taken from [17], in which Neuman considers the third-order linear homogeneous differential equation
$$y'''(x) - \frac{q_1'(x)}{q_1(x)}\, y''(x) + \left( q_1^2(x) + 1 \right) y'(x) - \frac{q_1'(x)}{q_1(x)}\, y(x) = 0$$
on the open interval $J \subseteq \mathbb{R}$. One obtains this equation from the system
$$y_1' = y_2, \qquad y_2' = -y_1 + q_1(x)\, y_3, \qquad y_3' = -q_1(x)\, y_2.$$
Here $q_1 \in C^1(J)$ satisfies the condition $q_1(x) \neq 0$ on J. In the above differential equation we have $n = 3$, $p_0(x) = -\frac{q_1'(x)}{q_1(x)}$, $p_1(x) = q_1^2(x) + 1$ and $p_2(x) = -\frac{q_1'(x)}{q_1(x)}$. It is to be noted that the above three equations form what is known as a set of global canonical forms for the third-order equation on the interval J.
Denote by $L_n(p_{n-1}, \dots, p_0): C^n(J) \to C(J)$ the above linear differential operator defined by
$$L_n(p_{n-1}, \dots, p_0)\, y(x) = y^{(n)}(x) + \sum_{k=0}^{n-1} p_k(x)\, y^{(k)}(x),$$
where $y(x) \in C^n(J)$ and $p_0(x) > 0$ for all $x \in J$. Further, denote by $\mathrm{LA}_n(J)$ the set of all such operators, i.e.,
$$\mathrm{LA}_n(J) = \{ L(p_{n-1}, \dots, p_0) \mid p_k(x) \in C(J),\ p_0(x) > 0 \}.$$
By $\mathrm{LA}_n(J)_m$ we mean the subset of $\mathrm{LA}_n(J)$ of operators such that $p_m \in C_+(J)$, i.e., such that $p_m(x) > 0$ for all $x \in J$. If we want to explicitly emphasize the variable, we write $y(x)$, $p_k(x)$, etc. However, if there is no specific need to do this, we write $y$, $p_k$, etc. Using the vector notation $\vec p(x) = (p_{n-1}(x), \dots, p_0(x))$, we can write
$$L_n(\vec p)\, y = y^{(n)} + \sum_{k=0}^{n-1} p_k\, y^{(k)}.$$
Writing $L(\vec p) \in \mathrm{LA}_n(J)$ (or $L(\vec p) \in \mathrm{LA}_n(J)_m$) is a shortcut for writing $L_n(\vec p)\, y \in \mathrm{LA}_n(J)$ (or $L_n(\vec p)\, y \in \mathrm{LA}_n(J)_m$).
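To make the notation concrete, here is a small numerical sketch (ours, not from the paper) of applying an operator $L_n(\vec p)$ to a function whose derivatives are supplied analytically; all function names are our own illustration.

```python
import math

def apply_operator(p, derivs, x):
    """Apply L_n(p) to y at the point x.

    p      -- tuple (p_{n-1}, ..., p_0) of coefficient functions
    derivs -- tuple (y, y', ..., y^{(n)}) of callables
    Returns y^{(n)}(x) + sum_{k=0}^{n-1} p_k(x) * y^{(k)}(x).
    """
    n = len(p)
    result = derivs[n](x)  # leading term y^{(n)}(x)
    for k in range(n):
        # p is ordered (p_{n-1}, ..., p_0), so p[n-1-k] is p_k
        result += p[n - 1 - k](x) * derivs[k](x)
    return result

# Example: L_2(p_1, p_0) y = y'' + 0*y' + 1*y; for y = sin it vanishes identically.
p = (lambda x: 0.0, lambda x: 1.0)                     # (p_1, p_0), with p_0 > 0
derivs = (math.sin, math.cos, lambda x: -math.sin(x))  # y, y', y''
print(apply_operator(p, derivs, 1.234))                # ~0.0
```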
On the sets of linear differential operators, i.e., on sets LA n ( J ) , or their subsets LA n ( J ) m , we define some binary operations, hyperoperations or binary relations. This is possible because our considerations happen within a ring (of functions).
For an arbitrary pair of operators $L(\vec p), L(\vec q) \in \mathrm{LA}_n(J)_m$, where $\vec p = (p_{n-1}, \dots, p_0)$, $\vec q = (q_{n-1}, \dots, q_0)$, we define an operation "$\circ_m$" with respect to the m-th component by $L(\vec p) \circ_m L(\vec q) = L(\vec u)$, where $\vec u = (u_{n-1}, \dots, u_0)$ and
$$u_k(x) = p_m(x)\, q_k(x) + (1 - \delta_{km})\, p_k(x)$$
for all $k = n-1, \dots, 0$ and all $x \in J$. Obviously, such an operation is not commutative.
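As a quick sanity check of the definition of "$\circ_m$", the following sketch (ours, with constants standing in for coefficient functions) computes $\vec u$ and confirms noncommutativity:

```python
def circ_m(p, q, m):
    """L(p) o_m L(q) = L(u) with u_k = p_m*q_k + (1 - delta_{km})*p_k.
    Vectors are tuples (p_{n-1}, ..., p_0); constants stand in for
    coefficient functions to keep the example short."""
    n = len(p)
    i_m = n - 1 - m  # tuple index of the m-th component
    return tuple(p[i_m] * q[i] + (0 if i == i_m else p[i]) for i in range(n))

p = (2.0, 3.0, 5.0)      # (p_2, p_1, p_0), so n = 3 and p_0 = 5
q = (1.0, 4.0, 2.0)
print(circ_m(p, q, 0))   # (7.0, 23.0, 10.0)
print(circ_m(q, p, 0))   # (5.0, 10.0, 10.0) -- not commutative
```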
Moreover, apart from the above binary operation, we can also define a relation "$\leq_m$" comparing the operators by their m-th component, putting $L(\vec p) \leq_m L(\vec q)$ whenever, for all $x \in J$, there is
$$p_m(x) = q_m(x) \quad \text{and at the same time} \quad p_k(x) \leq q_k(x)$$
for all $k = n-1, \dots, 0$. Obviously, $(\mathrm{LA}_n(J)_m, \leq_m)$ is a partially ordered set.
At this stage, in order to simplify the notation, we write LA n ( J ) instead of LA n ( J ) m because the lower index m is kept in the operation and relation. The following lemma is proved in [2].
Lemma 1.
Triads $(\mathrm{LA}_n(J), \circ_m, \leq_m)$ are partially ordered (noncommutative) groups.
Now we can use Lemma 1 to construct a (noncommutative) hypergroup. In order to do this, we will need the following lemma, known as the Ends lemma; for details see, e.g., [18,19,20]. Notice that a join space is a special case of a hypergroup; in this paper we speak of hypergroups because we want to stress the parallel with groups.
Lemma 2.
Let $(H, \cdot, \leq)$ be a partially ordered semigroup. Then $(H, *)$, where $*: H \times H \to \mathcal{P}^*(H)$ is defined, for all $a, b \in H$, by
$$a * b = [a \cdot b)_{\leq} = \{ x \in H \mid a \cdot b \leq x \},$$
is a semihypergroup, which is commutative if and only if "$\cdot$" is commutative. Moreover, if $(H, \cdot)$ is a group, then $(H, *)$ is a hypergroup.
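The Ends lemma can be illustrated on the simplest partially ordered group $(\mathbb{Z}, +, \leq)$. The following sketch (ours) checks associativity of the induced hyperoperation on a finite window of integers, representing the infinite hyperproducts as membership predicates:

```python
def hyperprod(a, b):
    """Ends-lemma hyperproduct on (Z, +, <=): a * b = [a + b) = {x : a + b <= x},
    represented as a membership predicate because the set is infinite."""
    return lambda x: a + b <= x

a, b, c = 2, -5, 7
window = range(-20, 40)                          # finite window of Z
ab = [x for x in window if hyperprod(a, b)(x)]   # a * b within the window
bc = [x for x in window if hyperprod(b, c)(x)]   # b * c within the window
lhs = {x for x in window if any(hyperprod(s, c)(x) for s in ab)}  # (a*b)*c
rhs = {x for x in window if any(hyperprod(a, d)(x) for d in bc)}  # a*(b*c)
print(lhs == rhs, min(lhs) == a + b + c)         # both products are [a+b+c)
```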
Thus, to be more precise, defining
$$*_m : \mathrm{LA}_n(J) \times \mathrm{LA}_n(J) \to \mathcal{P}^*(\mathrm{LA}_n(J))$$
by
$$L(\vec p) *_m L(\vec q) = \{ L(\vec u) \mid L(\vec p) \circ_m L(\vec q) \leq_m L(\vec u) \}$$
for all pairs $L(\vec p), L(\vec q) \in \mathrm{LA}_n(J)_m$, lets us state the following lemma.
Lemma 3.
Triads $(\mathrm{LA}_n(J), *_m)$ are (noncommutative) hypergroups.
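Since each hyperproduct $L(\vec p) *_m L(\vec q)$ is an infinite set of operators, it is most easily handled as a membership test. The sketch below (ours, again with constant coefficients standing in for coefficient functions) decides whether a given $L(\vec u)$ lies in the hyperproduct:

```python
def circ_m(p, q, m):
    """u_k = p_m*q_k + (1 - delta_{km})*p_k on tuples (p_{n-1}, ..., p_0)."""
    n = len(p)
    i_m = n - 1 - m
    return tuple(p[i_m] * q[i] + (0 if i == i_m else p[i]) for i in range(n))

def in_hyperproduct(u, p, q, m):
    """L(u) in L(p) *_m L(q)  <=>  L(p) o_m L(q) <=_m L(u):
    equality in the m-th component, <= in all the others."""
    n = len(p)
    i_m = n - 1 - m
    w = circ_m(p, q, m)
    return w[i_m] == u[i_m] and all(w[i] <= u[i] for i in range(n))

p, q, m = (2.0, 3.0, 5.0), (1.0, 4.0, 2.0), 0    # L(p) o_0 L(q) = L(7, 23, 10)
print(in_hyperproduct((8.0, 23.0, 10.0), p, q, m))  # True: equal at m, larger elsewhere
print(in_hyperproduct((7.0, 23.0, 11.0), p, q, m))  # False: m-th component differs
```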
Notation 1.
Hypergroups $(\mathrm{LA}_n(J), *_m)$ will be denoted by $\mathrm{HLA}_n(J)_m$ for easier distinction.
Remark 1.
As a parallel to (2) and (3) we define
$$\bar{L}(q_n, \dots, q_0)\, y(x) = \sum_{k=0}^{n} q_k(x)\, y^{(k)}(x), \quad q_0 \neq 0,\ q_k \in C(J),$$
and
$$\overline{\mathrm{LA}}_n(J) = \{ \bar{L}(q_n, \dots, q_0) \mid q_0 \neq 0,\ q_k(x) \in C(J) \}$$
and, by defining the binary operation "$\circ_m$" and the relation "$\leq_m$" in the same way as for $\mathrm{LA}_n(J)_m$, it is easy to verify that $(\overline{\mathrm{LA}}_n(J), \circ_m, \leq_m)$ are also noncommutative partially ordered groups. Moreover, given a hyperoperation defined in a way parallel to (8), we obtain hypergroups $(\overline{\mathrm{LA}}_n(J)_m, *_m)$, which will be, in line with Notation 1, denoted $\overline{\mathrm{HLA}}_n(J)_m$.

2.2. Results

In this subsection we construct certain mappings between groups or hypergroups of linear differential operators of various orders. The results will take the form of sequences of groups or hypergroups.
Define mappings $F_n: \mathrm{LA}_n(J) \to \mathrm{LA}_{n-1}(J)$ by
$$F_n(L(p_{n-1}, \dots, p_0)) = L(p_{n-2}, \dots, p_0)$$
and $\phi_n: \mathrm{LA}_n(J) \to \overline{\mathrm{LA}}_{n-1}(J)$ by
$$\phi_n(L(p_{n-1}, \dots, p_0)) = \bar{L}(p_{n-2}, \dots, p_0).$$
It can be easily verified that both $F_n$ and $\phi_n$ are, for an arbitrary $n \geq 2$, group homomorphisms.
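That $F_n$ is compatible with "$\circ_m$" can be checked mechanically: dropping the leading coefficient commutes with the operation whenever the dropped component is not the m-th one. A sketch (ours, with constant coefficients):

```python
def circ_m(p, q, m):
    """u_k = p_m*q_k + (1 - delta_{km})*p_k on tuples (p_{n-1}, ..., p_0)."""
    n = len(p)
    i_m = n - 1 - m
    return tuple(p[i_m] * q[i] + (0 if i == i_m else p[i]) for i in range(n))

def F(p):
    """F_n: L(p_{n-1}, ..., p_0) -> L(p_{n-2}, ..., p_0): drop the leading coefficient."""
    return p[1:]

p, q, m = (2.0, 3.0, 5.0), (1.0, 4.0, 2.0), 0  # m <= n - 2, so F keeps the m-th component
print(F(circ_m(p, q, m)) == circ_m(F(p), F(q), m))  # True: F is a homomorphism
```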
Evidently, $\mathrm{LA}_n(J) \subset \overline{\mathrm{LA}}_n(J)$ and $\overline{\mathrm{LA}}_{n-1}(J) \subset \overline{\mathrm{LA}}_n(J)$ for all admissible $n \in \mathbb{N}$. Thus we obtain two complete sequences of ordinary linear differential operators with the linking homomorphisms $F_n$ and $\phi_n$:
[Diagram: the two sequences of groups $\mathrm{LA}_n(J)$ and $\overline{\mathrm{LA}}_n(J)$ with the linking homomorphisms $F_n$, $\phi_n$ and inclusion embeddings.]
where $\overline{id}_{k,k+1}$, $id_k$ are the corresponding inclusion embeddings.
Notice that this diagram, presented at the level of groups, can be lifted to the level of hypergroups. To do this, one can use Lemma 3 and Remark 1. However, this is not enough. Yet, as Lemma 4 suggests, it is possible to show that the assignment presented below is functorial, i.e., not only are objects mapped onto objects but also morphisms (isotone group homomorphisms) are mapped onto morphisms (hypergroup homomorphisms). Notice that Lemma 4 was originally proved in [4]. However, given the minimal impact of the proceedings and its very limited availability and accessibility, we include it here with a complete proof.
Lemma 4.
Let $(G_k, \cdot_k, \leq_k)$, $k = 1, 2$, be preordered groups and $f: (G_1, \cdot_1, \leq_1) \to (G_2, \cdot_2, \leq_2)$ a group homomorphism which is isotone, i.e., the mapping $f: (G_1, \leq_1) \to (G_2, \leq_2)$ is order-preserving. Let $(H_k, *_k)$, $k = 1, 2$, be the hypergroups constructed from $(G_k, \cdot_k, \leq_k)$, $k = 1, 2$, by Lemma 2, respectively. Then $f: (H_1, *_1) \to (H_2, *_2)$ is a homomorphism, i.e., $f(a *_1 b) \subseteq f(a) *_2 f(b)$ for any pair of elements $a, b \in H_1$.
Proof. 
Let $a, b \in H_1$ be a pair of elements and $c \in f(a *_1 b)$ an arbitrary element. Then there is $d \in a *_1 b = [a \cdot_1 b)_{\leq_1}$, i.e., $a \cdot_1 b \leq_1 d$, such that $c = f(d)$. Since the mapping f is an isotone homomorphism, we have $f(a) \cdot_2 f(b) = f(a \cdot_1 b) \leq_2 f(d) = c$, thus $c \in [f(a) \cdot_2 f(b))_{\leq_2}$. Hence
$$f(a *_1 b) = f\big( [a \cdot_1 b)_{\leq_1} \big) \subseteq [f(a) \cdot_2 f(b))_{\leq_2} = f(a) *_2 f(b). \qquad \square$$
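A minimal instance of Lemma 4 (our own example): on $(\mathbb{Z}, +, \leq)$ the doubling map is an isotone group endomorphism, and the induced map of Ends-lemma hypergroups satisfies the inclusion $f(a * b) \subseteq f(a) * f(b)$, in general properly:

```python
def f(x):
    """An isotone group endomorphism of (Z, +, <=)."""
    return 2 * x

a, b = 3, -1
window = range(-10, 30)
# f(a *_1 b): image under f of the upper set [a + b)
image_of_product = {f(x) for x in window if a + b <= x}
# f(a) *_2 f(b): the upper set [f(a) + f(b)) in the target
product_of_images = {y for y in range(-20, 60) if f(a) + f(b) <= y}
print(image_of_product < product_of_images)  # True: proper inclusion, as Lemma 4 allows
```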
Consider a sequence of partially ordered groups of linear differential operators
$$\mathrm{LA}_0(J) \xleftarrow{F_1} \mathrm{LA}_1(J) \xleftarrow{F_2} \mathrm{LA}_2(J) \xleftarrow{F_3} \cdots \xleftarrow{F_{n-2}} \mathrm{LA}_{n-2}(J) \xleftarrow{F_{n-1}} \mathrm{LA}_{n-1}(J) \xleftarrow{F_n} \mathrm{LA}_n(J) \xleftarrow{F_{n+1}} \mathrm{LA}_{n+1}(J) \longleftarrow \cdots$$
given above with their linking group homomorphisms $F_k: \mathrm{LA}_k(J) \to \mathrm{LA}_{k-1}(J)$ for $k = 1, 2, \dots$. Since the mappings $F_n: \mathrm{LA}_n(J) \to \mathrm{LA}_{n-1}(J)$, or rather
$$F_n: (\mathrm{LA}_n(J), \circ_m, \leq_m) \to (\mathrm{LA}_{n-1}(J), \circ_m, \leq_m),$$
for all $n \geq 2$, are group homomorphisms and obviously isotone with respect to the corresponding orderings, we immediately get the following theorem.
Theorem 1.
Suppose $J \subseteq \mathbb{R}$ is an open interval, $n \in \mathbb{N}$ is an integer, $n \geq 2$, and $m \in \mathbb{N}$ is such that $m < n$. Let $(\mathrm{HLA}_n(J)_m, *_m)$ be the hypergroup obtained from the group $(\mathrm{LA}_n(J)_m, \circ_m)$ by Lemma 2. Suppose that $F_n: (\mathrm{LA}_n(J)_m, \circ_m) \to (\mathrm{LA}_{n-1}(J)_m, \circ_m)$ are the above defined surjective group homomorphisms, $n \in \mathbb{N}$, $n \geq 2$. Then $F_n: (\mathrm{HLA}_n(J)_m, *_m) \to (\mathrm{HLA}_{n-1}(J)_m, *_m)$ are surjective homomorphisms of hypergroups.
Proof. 
See the reasoning preceding the theorem. □
Remark 2.
It is easy to see that the second sequence from (11) can be mapped onto the sequence of hypergroups
$$\mathrm{HLA}_0(J)_m \xleftarrow{F_1} \mathrm{HLA}_1(J)_m \xleftarrow{F_2} \mathrm{HLA}_2(J)_m \xleftarrow{F_3} \cdots \xleftarrow{F_{n-1}} \mathrm{HLA}_{n-1}(J)_m \xleftarrow{F_n} \mathrm{HLA}_n(J)_m \longleftarrow \cdots$$
This mapping is bijective and the linking mappings are the surjective homomorphisms $F_n$. Thus this mapping is functorial.

4. Practical Applications of the Sequences

In this section, we include several examples of the above reasoning. We apply the theoretical results in the area of artificial neurons, i.e., in a way, we continue the paper [11], which focuses on artificial neurons. For notation, recall [11]. Further on, we consider a generalization of the usual concept of artificial neurons. We assume that the inputs $x_i$ and weights $w_i$ are functions of an argument t, which belongs to a linearly ordered (tempus) set T with the least element 0. The index set is, in our case, the set $C(J)$ of all continuous functions defined on an open interval $J \subseteq \mathbb{R}$. Now, denote by W the set of all non-negative functions $w: T \to \mathbb{R}$. Obviously, W is a subsemiring of the ring of all real functions of one real variable $x: \mathbb{R} \to \mathbb{R}$. Further, denote by $Ne(\vec w_r) = Ne(w_{r1}, \dots, w_{rn})$, for $r \in C(J)$, $n \in \mathbb{N}$, the mapping
$$y_r(t) = \sum_{k=1}^{n} w_{r,k}(t)\, x_{r,k}(t) + b_r,$$
which will be called the artificial neuron with the bias $b_r \in \mathbb{R}$. By $\mathrm{AN}(T)$ we denote the collection of all such artificial neurons.
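For one fixed time instant t, the output of such a neuron is just a weighted sum plus a bias. A tiny sketch (ours), with the weight and input functions already evaluated at t:

```python
def neuron_output(w, x, b):
    """y_r(t) = sum_k w_{r,k}(t) x_{r,k}(t) + b_r, with the weights w and
    inputs x already evaluated at a fixed t."""
    return sum(wk * xk for wk, xk in zip(w, x)) + b

w = (0.5, 2.0, 1.0)   # (w_{r,1}(t), w_{r,2}(t), w_{r,3}(t))
x = (1.0, -1.0, 3.0)  # inputs at the same t
print(neuron_output(w, x, b=0.25))  # 0.5 - 2.0 + 3.0 + 0.25 = 1.75
```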

4.1. Cascades of Neurons Determined by Right Translations

Similarly as in the group of linear differential operators, we define a binary operation on the system $\mathrm{AN}(T)$ of artificial neurons $Ne(\cdot)$ and construct a noncommutative group.
Suppose $Ne(\vec w_r), Ne(\vec w_s) \in \mathrm{AN}(T)$ are such that $r, s \in C(J)$ and $\vec w_r = (w_{r,1}, \dots, w_{r,n})$, $\vec w_s = (w_{s,1}, \dots, w_{s,n})$, where $n \in \mathbb{N}$. Let $m \in \mathbb{N}$, $1 \leq m \leq n$, be such an integer that $w_{r,m} > 0$. We define
$$Ne(\vec w_r) \cdot_m Ne(\vec w_s) = Ne(\vec w_u),$$
where
$$\vec w_u = (w_{u,1}, \dots, w_{u,n}) = (w_{u,1}(t), \dots, w_{u,n}(t)),$$
$$w_{u,k}(t) = w_{r,m}(t)\, w_{s,k}(t) + (1 - \delta_{m,k})\, w_{r,k}(t), \quad t \in T,$$
and, of course, the neuron $Ne(\vec w_u)$ is defined as the mapping $y_u(t) = \sum_{k=1}^{n} w_{u,k}(t)\, x_k(t) + b_u$, $t \in T$, with $b_u = b_r b_s$.
The algebraic structure $(\mathrm{AN}(T), \cdot_m)$ is a noncommutative group. We proceed to the construction of the cascade of neurons. Let $(\mathbb{Z}, +)$ be the additive group of all integers and let $Ne(\vec w_s(t)) \in \mathrm{AN}(T)$ be an arbitrary but fixed artificial neuron with the output function
$$y_s(t) = \sum_{k=1}^{n} w_{s,k}(t)\, x_{s,k}(t) + b_s.$$
Denote by $\rho_s: \mathrm{AN}(T) \to \mathrm{AN}(T)$ the right translation within the group of time-varying neurons determined by $Ne(\vec w_s(t))$, i.e.,
$$\rho_s(Ne(\vec w_p(t))) = Ne(\vec w_p(t)) \cdot_m Ne(\vec w_s(t))$$
for any neuron $Ne(\vec w_p(t)) \in \mathrm{AN}(T)$. In what follows, denote by $\rho_s^r$ the r-th iteration of $\rho_s$ for $r \in \mathbb{Z}$. Define the projection $\pi_s: \mathrm{AN}(T) \times \mathbb{Z} \to \mathrm{AN}(T)$ by
$$\pi_s(Ne(\vec w_p(t)), r) = \rho_s^r(Ne(\vec w_p(t))).$$
One easily observes that we get a usual (discrete) transformation group, i.e., an action of $(\mathbb{Z}, +)$ (as the phase group) on the group $\mathrm{AN}(T)$. Thus the following two requirements are satisfied:
  • $\pi_s(Ne(\vec w_p(t)), 0) = Ne(\vec w_p(t))$ for any neuron $Ne(\vec w_p(t)) \in \mathrm{AN}(T)$.
  • $\pi_s(Ne(\vec w_p(t)), r + u) = \pi_s(\pi_s(Ne(\vec w_p(t)), r), u)$ for any integers $r, u \in \mathbb{Z}$ and any artificial neuron $Ne(\vec w_p(t))$.
Notice that the structure just obtained is called a cascade within the framework of dynamical system theory.
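The two action axioms can be verified directly on a small example. The following sketch (ours) fixes one time instant, represents a neuron as a (weights, bias) pair, and checks both requirements for nonnegative iterations (negative exponents would require the inverse neuron):

```python
def mul_m(Nr, Ns, m):
    """Ne(w_r) ._m Ne(w_s): w_{u,k} = w_{r,m} w_{s,k} + (1 - delta_{mk}) w_{r,k},
    bias b_u = b_r b_s. A neuron is a (weights, bias) pair at a fixed t;
    m is a 0-based index here (the text counts components from 1)."""
    (wr, br), (ws, bs) = Nr, Ns
    wu = tuple(wr[m] * ws[k] + (0 if k == m else wr[k]) for k in range(len(wr)))
    return (wu, br * bs)

def rho(Np, Ns, m):
    """Right translation rho_s by the fixed neuron Ne(w_s)."""
    return mul_m(Np, Ns, m)

def pi(Np, Ns, m, r):
    """pi_s(Ne(w_p), r) = rho_s^r(Ne(w_p)); nonnegative r only in this sketch."""
    N = Np
    for _ in range(r):
        N = rho(N, Ns, m)
    return N

Np = ((1.0, 2.0, 3.0), 2.0)   # Ne(w_p) with bias 2
Ns = ((0.5, 4.0, -1.0), 3.0)  # the fixed translating neuron, w_{s,m} > 0
m = 0
print(pi(Np, Ns, m, 0) == Np)                                  # identity axiom
print(pi(Np, Ns, m, 2 + 3) == pi(pi(Np, Ns, m, 2), Ns, m, 3))  # additivity axiom
```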

4.2. An Additive Group of Differential Neurons

As usual, denote by $\mathbb{R}_n[t]$ the ring of polynomials of variable t over $\mathbb{R}$ of degree at most $n \in \mathbb{N}_0$. Let $\vec w = (w_1(t), \dots, w_n(t))$ be a fixed vector of continuous functions $w_k: \mathbb{R} \to \mathbb{R}$ and let $b_p$ be the bias for any polynomial $p \in \mathbb{R}_n[t]$. For any such polynomial $p \in \mathbb{R}_n[t]$ we define a differential neuron $DNe(\vec w)$ given by the action
$$y(t) = \sum_{k=1}^{n} w_k(t)\, \frac{d^{k-1} p(t)}{dt^{k-1}} + b_0\, \frac{d^{n} p(t)}{dt^{n}}.$$
Considering the additive group of $\mathbb{R}_n[t]$, we obtain an additive commutative group $\mathrm{DN}(T)$ of differential neurons, which is assigned to the additive group of $\mathbb{R}_n[t]$. Thus for $DNe_p(\vec w), DNe_q(\vec w) \in \mathrm{DN}(T)$ with actions
$$y(t) = \sum_{k=1}^{n} w_k(t)\, \frac{d^{k-1} p(t)}{dt^{k-1}} + b_0\, \frac{d^{n} p(t)}{dt^{n}}$$
and
$$z(t) = \sum_{k=1}^{n} w_k(t)\, \frac{d^{k-1} q(t)}{dt^{k-1}} + b_0\, \frac{d^{n} q(t)}{dt^{n}},$$
we have $DNe_{p+q}(\vec w) = DNe_p(\vec w) + DNe_q(\vec w) \in \mathrm{DN}(T)$ with the action
$$u(t) = y(t) + z(t) = \sum_{k=1}^{n} w_k(t)\, \frac{d^{k-1} (p(t) + q(t))}{dt^{k-1}} + b_0\, \frac{d^{n} (p(t) + q(t))}{dt^{n}}.$$
Considering the chain of inclusions
$$\mathbb{R}_n[t] \subset \mathbb{R}_{n+1}[t] \subset \mathbb{R}_{n+2}[t] \subset \cdots,$$
we obtain the corresponding sequence of commutative groups of differential neurons.
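Additivity of differential neurons is easy to check numerically. The sketch below (ours) represents polynomials by coefficient lists $[c_0, c_1, \dots]$ and evaluates the action $y(t)$ defined above:

```python
def poly_eval(c, t):
    """Evaluate a polynomial given by coefficients c = [c_0, c_1, ...]."""
    return sum(ck * t**k for k, ck in enumerate(c))

def poly_deriv(c, order):
    """Coefficients of the order-th derivative."""
    for _ in range(order):
        c = [k * ck for k, ck in enumerate(c)][1:] or [0.0]
    return c

def dne(w, b0, p, t):
    """Differential neuron: y(t) = sum_{k=1}^n w_k(t) p^{(k-1)}(t) + b_0 p^{(n)}(t),
    with the weights pre-evaluated at t."""
    n = len(w)
    y = sum(w[k - 1] * poly_eval(poly_deriv(p, k - 1), t) for k in range(1, n + 1))
    return y + b0 * poly_eval(poly_deriv(p, n), t)

w, b0, t = (1.0, -2.0, 0.5), 4.0, 1.5
p = [1.0, 0.0, 3.0]                          # 1 + 3t^2
q = [0.0, 2.0, 0.0, 1.0]                     # 2t + t^3
pq = [a + b for a, b in zip(p + [0.0], q)]   # p + q, padded to equal length
print(abs(dne(w, b0, pq, t) - (dne(w, b0, p, t) + dne(w, b0, q, t))) < 1e-12)  # True
```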

4.3. A Cyclic Subgroup of the Group $\mathrm{AN}(T)_m$ Generated by the Neuron $Ne(\vec w_r) \in \mathrm{AN}(T)_m$

First of all, recall that if $Ne(\vec w_r), Ne(\vec w_s) \in \mathrm{AN}(T)_m$, $r, s \in C(J)$, where $\vec w_r(t) = (w_{r,1}(t), \dots, w_{r,n}(t))$ and $\vec w_s(t) = (w_{s,1}(t), \dots, w_{s,n}(t))$ are vector functions of weights such that $w_{r,m}(t) \neq 0 \neq w_{s,m}(t)$, $t \in T$, with outputs $y_r(t) = \sum_{k=1}^{n} w_{r,k}(t)\, x_k(t) + b_r$ and $y_s(t) = \sum_{k=1}^{n} w_{s,k}(t)\, x_k(t) + b_s$ (with inputs $x_k(t)$), then the product $Ne(\vec w_r) \cdot_m Ne(\vec w_s) = Ne(\vec w_u)$ has the vector of weights
$$\vec w_u(t) = (w_{u,1}(t), \dots, w_{u,n}(t))$$
with $w_{u,k}(t) = w_{r,m}(t)\, w_{s,k}(t) + (1 - \delta_{m,k})\, w_{r,k}(t)$, $t \in T$.
The binary operation "$\cdot_m$" is defined under the assumption that all values of the functions which are the m-th components of the corresponding vectors of weights are different from zero.
Let us denote by $\mathrm{ZAN}_r(T)$ the cyclic subgroup of the group $\mathrm{AN}(T)_m$ generated by the neuron $Ne(\vec w_r) \in \mathrm{AN}(T)_m$. Then, denoting the neutral element by $N_1(\vec e)_m$, we have
$$\mathrm{ZAN}_r(T) = \{ \dots, [Ne(\vec w_r)]^{-2}, [Ne(\vec w_r)]^{-1}, N_1(\vec e)_m, Ne(\vec w_r), [Ne(\vec w_r)]^2, \dots, [Ne(\vec w_r)]^p, \dots \}.$$
Now we describe in detail the objects
$$[Ne(\vec w_r)]^2, \quad [Ne(\vec w_r)]^p \ (p \in \mathbb{N},\ p \geq 2), \quad N_1(\vec e)_m \quad \text{and} \quad [Ne(\vec w_r)]^{-1},$$
the last being the inverse element of the neuron $Ne(\vec w_r)$.
Let us denote $[Ne(\vec w_r)]^2 = Ne(\vec w_s)$, with $\vec w_s(t) = (w_{s,1}(t), \dots, w_{s,n}(t))$. Then
$$w_{s,k}(t) = w_{r,m}(t)\, w_{r,k}(t) + (1 - \delta_{m,k})\, w_{r,k}(t) = (w_{r,m}(t) + 1 - \delta_{m,k})\, w_{r,k}(t).$$
Then the vector of weights of the neuron $[Ne(\vec w_r)]^2$ is
$$\vec w_s(t) = \left( (w_{r,m}(t) + 1)\, w_{r,1}(t), \dots, w_{r,m}^2(t), \dots, (w_{r,m}(t) + 1)\, w_{r,n}(t) \right)$$
and the output function is of the form
$$y_s(t) = \sum_{\substack{k=1 \\ k \neq m}}^{n} (w_{r,m}(t) + 1)\, w_{r,k}(t)\, x_k(t) + w_{r,m}^2(t)\, x_m(t) + b_r^2.$$
It is easy to calculate the vector of weights of the neuron $[Ne(\vec w_r)]^3$:
$$\left( (w_{r,m}^2(t) + w_{r,m}(t) + 1)\, w_{r,1}(t), \dots, w_{r,m}^3(t), \dots, (w_{r,m}^2(t) + w_{r,m}(t) + 1)\, w_{r,n}(t) \right).$$
Finally, putting $[Ne(\vec w_r)]^p = Ne(\vec w_v)$ for $p \in \mathbb{N}$, $p \geq 2$, the vector of weights of this neuron is
$$\vec w_v(t) = \left( (w_{r,m}^{p-1}(t) + \dots + w_{r,m}(t) + 1)\, w_{r,1}(t), \dots, w_{r,m}^{p}(t), \dots, (w_{r,m}^{p-1}(t) + \dots + w_{r,m}(t) + 1)\, w_{r,n}(t) \right).$$
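Iterating "$\cdot_m$" confirms the closed form: for $k \neq m$ the factor multiplying $w_{r,k}(t)$ is the geometric sum $w_{r,m}^{p-1}(t) + \dots + w_{r,m}(t) + 1$, while the m-th component is $w_{r,m}^p(t)$ and the bias is $b_r^p$. A numerical sketch (ours, at a fixed t, with a 0-based index m):

```python
def mul_m(Nr, Ns, m):
    """w_{u,k} = w_{r,m} w_{s,k} + (1 - delta_{mk}) w_{r,k}; biases multiply."""
    (wr, br), (ws, bs) = Nr, Ns
    wu = tuple(wr[m] * ws[k] + (0 if k == m else wr[k]) for k in range(len(wr)))
    return (wu, br * bs)

def power(N, p, m):
    """[Ne(w_r)]^p for p >= 1 by repeated ._m."""
    out = N
    for _ in range(p - 1):
        out = mul_m(out, N, m)
    return out

wr, br, m, p = (2.0, 3.0, 5.0), 1.5, 0, 4
geom = sum(wr[m] ** i for i in range(p))  # w_m^{p-1} + ... + w_m + 1
expected_w = tuple(wr[m] ** p if k == m else geom * wr[k] for k in range(len(wr)))
got_w, got_b = power((wr, br), p, m)
print(got_w == expected_w, abs(got_b - br ** p) < 1e-12)
```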
Now, consider the neutral element (the unit) $N_1(\vec e)_m$ of the cyclic group $\mathrm{ZAN}_r(T)$. Here the vector $\vec e$ of weights is $\vec e = (e_1, \dots, e_m, \dots, e_n)$, where $e_m = 1$ and $e_k = 0$ for each $k \neq m$. Moreover, the bias is $b = 1$.
We calculate the products $Ne(\vec w_s) \cdot_m N_1(\vec e)_m$ and $N_1(\vec e)_m \cdot_m Ne(\vec w_s)$. Denote by $Ne(\vec w_u)$ and $Ne(\vec w_v)$ the results of the corresponding products, respectively. We have $\vec w_u(t) = (w_{u,1}(t), \dots, w_{u,n}(t))$, where
$$w_{u,k}(t) = w_{s,m}(t)\, e_k(t) + (1 - \delta_{m,k})\, w_{s,k}(t) = w_{s,k}(t)$$
if $k \neq m$, and $w_{u,m}(t) = w_{s,m}(t)\, e_m(t) = w_{s,m}(t)$ for $k = m$. Note that, since the bias of $N_1(\vec e)_m$ is $b = 1$, its output is $y(t) = x_m(t) + 1$ and the bias of the product is $b_s \cdot 1 = b_s$. Thus $Ne(\vec w_u) = Ne(\vec w_s)$. Similarly, denoting $\vec w_v(t) = (w_{v,1}(t), \dots, w_{v,n}(t))$, we obtain $w_{v,k}(t) = e_m(t)\, w_{s,k}(t) + (1 - \delta_{m,k})\, e_k(t) = w_{s,k}(t)$ for $k \neq m$ and $w_{v,m}(t) = w_{s,m}(t)$ for $k = m$; thus $\vec w_v(t) = (w_{s,1}(t), \dots, w_{s,n}(t))$ and consequently $Ne(\vec w_v) = Ne(\vec w_s)$ again.
Consider now the inverse element $[Ne(\vec w_r)]^{-1}$ of the element $Ne(\vec w_r) \in \mathrm{ZAN}_r(T)$. Denote $Ne(\vec w_s) = [Ne(\vec w_r)]^{-1}$, $\vec w_s(t) = (w_{s,1}(t), \dots, w_{s,n}(t))$, $t \in T$. We have $Ne(\vec w_r) \cdot_m Ne(\vec w_s) = Ne(\vec w_r) \cdot_m [Ne(\vec w_r)]^{-1} = N_1(\vec e)_m$. Then
$$0 = e_1 = w_{r,m}(t)\, w_{s,1}(t) + w_{r,1}(t),$$
$$0 = e_2 = w_{r,m}(t)\, w_{s,2}(t) + w_{r,2}(t),$$
$$\vdots$$
$$1 = e_m = w_{r,m}(t)\, w_{s,m}(t),$$
$$\vdots$$
$$0 = e_n = w_{r,m}(t)\, w_{s,n}(t) + w_{r,n}(t).$$
From the above equalities we obtain
$$w_{s,1}(t) = -\frac{w_{r,1}(t)}{w_{r,m}(t)}, \quad \dots, \quad w_{s,m}(t) = \frac{1}{w_{r,m}(t)}, \quad \dots, \quad w_{s,n}(t) = -\frac{w_{r,n}(t)}{w_{r,m}(t)}.$$
Hence, for $[Ne(\vec w_r)]^{-1} = Ne(\vec w_s)$, we get
$$\vec w_s(t) = \left( -\frac{w_{r,1}(t)}{w_{r,m}(t)}, \dots, \frac{1}{w_{r,m}(t)}, \dots, -\frac{w_{r,n}(t)}{w_{r,m}(t)} \right) = \frac{1}{w_{r,m}(t)} \left( -w_{r,1}(t), \dots, 1, \dots, -w_{r,n}(t) \right),$$
where the number 1 is in the m-th position. Of course, the bias of the neuron $[Ne(\vec w_r)]^{-1}$ is $b_r^{-1}$, where $b_r$ is the bias of the neuron $Ne(\vec w_r)$.
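Finally, the inverse formula can be checked at a fixed t (our sketch; note that $w_{r,m}(t) \neq 0$ and $b_r \neq 0$ are required, and m is a 0-based index here):

```python
def mul_m(Nr, Ns, m):
    """w_{u,k} = w_{r,m} w_{s,k} + (1 - delta_{mk}) w_{r,k}; biases multiply."""
    (wr, br), (ws, bs) = Nr, Ns
    wu = tuple(wr[m] * ws[k] + (0 if k == m else wr[k]) for k in range(len(wr)))
    return (wu, br * bs)

def inverse(N, m):
    """[Ne(w_r)]^{-1}: w_s = (1/w_{r,m}) * (-w_{r,1}, ..., 1, ..., -w_{r,n}),
    bias 1/b_r."""
    w, b = N
    ws = tuple(1.0 / w[m] if k == m else -w[k] / w[m] for k in range(len(w)))
    return (ws, 1.0 / b)

wr, br, m = (2.0, 3.0, 5.0), 2.0, 0
unit = (tuple(1.0 if k == m else 0.0 for k in range(len(wr))), 1.0)  # N_1(e)_m
N = (wr, br)
print(mul_m(N, inverse(N, m), m) == unit)  # True
print(mul_m(inverse(N, m), N, m) == unit)  # True
```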

5. Conclusions

The scientific school of O. Borůvka and F. Neuman used, in their study of ordinary differential equations and their transformations [1,28,29,30], the algebraic approach with group theory as the main tool. In our study, we extended this existing theory by employing hypercompositional structures—semihypergroups and hypergroups. We constructed hypergroups of ordinary linear differential operators and certain sequences of such structures. This served as a background to investigate systems of artificial neurons and neural networks.

Author Contributions

Investigation, J.C., M.N., B.S.; writing—original draft preparation, J.C., M.N., D.S.; writing—review and editing, M.N., B.S.; supervision, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Neuman, F. Global Properties of Linear Ordinary Differential Equations; Academia Praha-Kluwer Academic Publishers: Dordrecht, The Netherlands; Boston, MA, USA; London, UK, 1991. [Google Scholar]
  2. Chvalina, J.; Chvalinová, L. Modelling of join spaces by n-th order linear ordinary differential operators. In Proceedings of the Fourth International Conference APLIMAT 2005, Bratislava, Slovakia, 1–4 February 2005; Slovak University of Technology: Bratislava, Slovakia, 2005; pp. 279–284. [Google Scholar]
  3. Chvalina, J.; Moučka, J. Actions of join spaces of continuous functions on hypergroups of second-order linear differential operators. In Proc. 6th Math. Workshop with International Attendance, FCI Univ. of Tech. Brno 2007. [CD-ROM]. Available online: https://math.fce.vutbr.cz/pribyl/workshop_2007/prispevky/ChvalinaMoucka.pdf (accessed on 4 February 2021).
  4. Chvalina, J.; Novák, M. Laplace-type transformation of a centralizer semihypergroup of Volterra integral operators with translation kernel. In XXIV International Colloquium on the Acquisition Process Management; University of Defense: Brno, Czech Republic, 2006. [Google Scholar]
  5. Chvalina, J.; Křehlík, Š.; Novák, M. Cartesian composition and the problem of generalising the MAC condition to quasi-multiautomata. Analele Universitatii "Ovidius" Constanta - Seria Matematica 2016, 24, 79–100. [Google Scholar] [CrossRef]
  6. Křehlík, Š.; Vyroubalová, J. The Symmetry of Lower and Upper Approximations, Determined by a Cyclic Hypergroup, Applicable in Control Theory. Symmetry 2020, 12, 54. [Google Scholar] [CrossRef]
  7. Novák, M.; Křehlík, Š. EL–hyperstructures revisited. Soft Comput. 2018, 22, 7269–7280. [Google Scholar] [CrossRef]
  8. Novák, M.; Křehlík, Š.; Staněk, D. n-ary Cartesian composition of automata. Soft Comput. 2020, 24, 1837–1849. [Google Scholar]
  9. Cesarano, C. A Note on Bi-orthogonal Polynomials and Functions. Fluids 2020, 5, 105. [Google Scholar] [CrossRef]
  10. Cesarano, C. Multi-dimensional Chebyshev polynomials: A non-conventional approach. Commun. Appl. Ind. Math. 2019, 10, 1–19. [Google Scholar] [CrossRef]
  11. Chvalina, J.; Smetana, B. Series of Semihypergroups of Time-Varying Artificial Neurons and Related Hyperstructures. Symmetry 2019, 11, 927. [Google Scholar] [CrossRef]
  12. Jan, J. Digital Signal Filtering, Analysis and Restoration; IEEE Publishing: London, UK, 2000. [Google Scholar]
  13. Koudelka, V.; Raida, Z.; Tobola, P. Simple electromagnetic modeling of small airplanes: Neural network approach. Radioengineering 2009, 18, 38–41. [Google Scholar]
  14. Krenker, A.; Bešter, J.; Kos, A. Introduction to the artificial neural networks. In Artificial Neural Networks: Methodological Advances and Biomedical Applications; Suzuki, K., Ed.; InTech: Rijeka, Croatia, 2011; pp. 3–18. [Google Scholar]
  15. Raida, Z.; Lukeš, Z.; Otevřel, V. Modeling broadband microwave structures by artificial neural networks. Radioengineering 2004, 13, 3–11. [Google Scholar]
  16. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  17. Neuman, F. Global theory of ordinary linear homogeneous differential equations in the real domain. Aequationes Math. 1987, 34, 1–22. [Google Scholar] [CrossRef]
  18. Křehlík, Š.; Novák, M. From lattices to Hv–matrices. An. Şt. Univ. Ovidius Constanţa 2016, 24, 209–222. [Google Scholar] [CrossRef]
  19. Novák, M. On EL-semihypergroups. Eur. J. Comb. 2015, 44, 274–286. [Google Scholar] [CrossRef]
  20. Novák, M. Some basic properties of EL-hyperstructures. Eur. J. Comb. 2013, 34, 446–459. [Google Scholar] [CrossRef]
  21. Bavel, Z. The source as a tool in automata. Inf. Control 1971, 18, 140–155. [Google Scholar] [CrossRef]
  22. Dörfler, W. The cartesian composition of automata. Math. Syst. Theory 1978, 11, 239–257. [Google Scholar] [CrossRef]
  23. Gécseg, F.; Peák, I. Algebraic Theory of Automata; Akadémia Kiadó: Budapest, Hungary, 1972. [Google Scholar]
  24. Massouros, G.G. Hypercompositional structures in the theory of languages and automata. An. Şt. Univ. A.I. Çuza Iaşi, Sect. Inform. 1994, III, 65–73. [Google Scholar]
  25. Massouros, G.G.; Mittas, J.D. Languages, Automata and Hypercompositional Structures. In Proceedings of the 4th International Congress on Algebraic Hyperstructures and Applications, Xanthi, Greece, 27–30 June 1990. [Google Scholar]
  26. Massouros, C.; Massouros, G. Hypercompositional Algebra, Computer Science and Geometry. Mathematics 2020, 8, 138. [Google Scholar] [CrossRef]
  27. Křehlík, Š. n-Ary Cartesian Composition of Multiautomata with Internal Link for Autonomous Control of Lane Shifting. Mathematics 2020, 8, 835. [Google Scholar] [CrossRef]
  28. Borůvka, O. Lineare Differentialtransformationen 2. Ordnung; VEB Deutscher Verlager der Wissenschaften: Berlin, Germany, 1967. [Google Scholar]
  29. Borůvka, O. Linear Differential Transformations of the Second Order; The English University Press: London, UK, 1971. [Google Scholar]
  30. Borůvka, O. Foundations of the Theory of Groupoids and Groups; VEB Deutscher Verlager der Wissenschaften: Berlin, Germany, 1974. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
