Article

Remarks on the Mathematical Modeling of Gene and Neuronal Networks by Ordinary Differential Equations

by Diana Ogorelova 1,* and Felix Sadyrbaev 1,2
1 Faculty of Natural Sciences and Mathematics, Daugavpils University, Vienibas Street 13, LV-5401 Daugavpils, Latvia
2 Institute of Mathematics and Computer Science, University of Latvia, Rainis boulevard 29, LV-1459 Riga, Latvia
* Author to whom correspondence should be addressed.
Axioms 2024, 13(1), 61; https://doi.org/10.3390/axioms13010061
Submission received: 6 November 2023 / Revised: 7 January 2024 / Accepted: 11 January 2024 / Published: 19 January 2024
(This article belongs to the Section Mathematical Analysis)

Abstract: In the theory of gene networks, the mathematical apparatus of dynamical systems is used fruitfully. The same is true for the theory of neural networks. In both cases, the purpose of the modeling is to study the properties of the phase space, as well as the types and properties of attractors. The paper compares both models, notes their similarities, and considers a number of illustrative examples. A local analysis is carried out in the vicinity of critical points, and the necessary formulas are derived.

1. Introduction

In this article, we study Neural Networks, also called Artificial Neural Networks (ANNs), and their mathematical models, which use ordinary differential equations. The motivation for the study of ANNs came from attempts to understand the principles and organization of the human brain. It became clear that human brains work differently from digital computers. Their effectiveness comes from high complexity, nonlinear modes of regulation, and parallelism of actions. The elements of the human brain are called neurons.
These elements still perform some computations faster than the fastest digital computers. The human brain is able to perceive information about the environment in the form of images and, moreover, it can process the received information needed for interaction with the environment.
At birth, the human brain has a structure ready for learning which, in familiar terms, is understood as gaining experience. So, a neural network is designed to model the way in which the human brain solves everyday problems and performs a particular task. A particular interest in ANNs stems from the fact that an important group of neural networks can solve computational problems through the process of learning. So, following [1], an ANN can generally be imagined as a parallel distributed processor, consisting of separate units, which is able to analyze experimental data and prepare them for use.
Many natural processes involve networks of elements that affect each other following a general pattern of conditions and updating rules. Both genomic networks and neuronal networks are of this kind. In mathematical models of networks of both types, the regulatory effect of one element on the outputs of other elements is defined by a weight matrix. Therefore, the models describing the evolution of these networks have a lot in common. However, there are also differences. This paper compares models using systems of ordinary differential equations. To distinguish between these systems, we use the designations GRN system and ANN system. At the same time, we realize that the term ANN system has too general a meaning. An ANN system in the established sense is understood as a network that operates according to certain rules and is focused on performing certain tasks. At the same time, such networks undergo training and thus improve their qualities. This article looks at neural networks from a different point of view. We are interested in the behavior of systems of both types for different forms of interaction of elements. The structure of both systems assumes the presence of attractors that determine future states. The description and comparison of possible attractors for the systems of both types is our result.
ANNs are made up of many interconnected elements. Weighted signals from different elements are received by a separate element and processed. A positive signal is understood as an excitatory connection, while a negative one means an inhibitory connection. The received signals are linearly summed and modified by a nonlinear sigmoidal function, called the activation function. The activation function controls the amplitude of an output. “Each neuron has a sigmoid transfer function, and a continuous positive and bounded output activity that evolves according to weighted sums of the activities in the networks. Neural networks with arbitrary connections are often called recurrent networks” [2]. The dynamics of the continuous time recurrent neural network with n units can be described by the system of ordinary differential equations (ODE) ([3])
$$x_i' = -b_i x_i + f_i\Big(\sum_j a_{ij} x_j\Big) + I_i(t),$$
where x_i is the internal state of the i-th unit, b_i is the time constant for the i-th unit, a_ij are connection weights, I_i(t) is the input to the i-th unit, and f_i is the response function of the i-th unit. Usually, f is taken as a sigmoidal function. There are particular response functions that are non-negative. For instance, the functions $f_i(z) = (1 + \exp(-\mu_i(z - \theta_i)))^{-1}$ were used in [4]. More general cases can be modeled by the system using the function $f_i(z) = \tanh(a_i z - \theta_i)$, which takes values in the open interval $(-1, 1)$. If recurrent neural networks without input are considered, the system
$$x_i' = f_i\Big(\sum_j a_{ij} x_j - \theta_i\Big) - b_i x_i$$
can be considered.
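The dynamics above can be sketched numerically. The following is a minimal illustration, not taken from the paper: a two-unit recurrent system x' = tanh(Wx) − x integrated by classical RK4, with hypothetical weights chosen so that the origin is a stable focus.

```python
import numpy as np

def ctrnn_rhs(x, W, b, theta):
    """Right-hand side of x_i' = tanh(sum_j a_ij x_j - theta_i) - b_i x_i."""
    return np.tanh(W @ x - theta) - b * x

def rk4(x0, W, b, theta, h=0.01, steps=5000):
    """Classical fourth-order Runge-Kutta integration up to t = h * steps."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        k1 = ctrnn_rhs(x, W, b, theta)
        k2 = ctrnn_rhs(x + 0.5 * h * k1, W, b, theta)
        k3 = ctrnn_rhs(x + 0.5 * h * k2, W, b, theta)
        k4 = ctrnn_rhs(x + h * k3, W, b, theta)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

W = np.array([[0.5, 1.0],
              [-1.0, 0.5]])      # hypothetical weights (not from the text)
b = np.array([1.0, 1.0])
theta = np.zeros(2)
x_end = rk4([0.2, -0.3], W, b, theta)   # spirals into the equilibrium at 0
```

With these weights the linearization at the origin has eigenvalues −0.5 ± i, so the trajectory spirals into the equilibrium while remaining in the bounded region dictated by the tanh nonlinearity.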
Applications of Artificial Neural Networks are numerous and span many fields. These fields can be categorized as function approximation, including time series prediction and modeling; pattern and sequence recognition, novelty detection, and sequential decision making; and data processing, including filtering and clustering. For applications in Machine Learning (ML), Deep Learning and related problems, consult the review article [5]. For neuroscience applications and their relation to ML, and machine learning using biologically realistic models of neurons to carry out the computation, consider the review [6]. The problems of pattern recognition by ANNs, including applications in manufacturing industries, were studied and analyzed in the review paper [7]. In the paper [8], the ANN approach is applied to the study of a genetic system.
In this article, we mainly study properties of the mathematical model of a three-dimensional ANN, but part of our results refer to two-dimensional or, more generally, to n-dimensional networks. In particular, we provide information on the types of possible attractors and their birth and evolution under changes in multiple parameters. The asymptotic properties of the system are important for the prediction of future states. This, in turn, can provide instruments for control and management of the modeled network. We use analytical tools for the study of the phase space and its elements. A set of formulas is obtained for the local analysis near equilibria. The necessary data for the analysis were collected by conducting computational experiments and constructing several examples. A broader study involves examining the model and interpreting the findings for the actual process being modeled. Examples of this approach are the works [9,10].
Let us describe the structure of the paper. The Problem formulation section provides the necessary material for the study. The Preliminary results section describes some basic properties of the main systems of ordinary differential equations. It also deals with technical details concerning nullclines, critical points, local analysis by linearization, and some special cases. The next two sections concern some particular but important cases: systems possessing critical points of the focus type, and systems exhibiting inhibition-activation behavior. Both types of systems can have periodic solutions, which means that cyclic processes can occur in the modeled network. The system of the special triangular structure is analyzed in Section 6. It is convenient for analysis, and the main conclusions can be transferred to systems of arbitrary dimensions. The process of birth of stable periodic trajectories from stable critical points of the focus type is considered in Section 7. The mechanism of the Andronov–Hopf bifurcation is illustrated for two-dimensional and three-dimensional neuronal systems. As a by-product, an example of a 3D system that has three limit cycles is constructed. Some suggestions on the management of neuronal systems are provided in Section 8. The possibility of effectively changing the properties of the system, and therefore of partially controlling the network in question, is emphasized. The last section summarizes the results obtained so far and outlines further studies in this direction.

2. Problem Formulation

The mathematical model using ordinary differential equations is
$$\frac{dx_1}{dt} = 2\,\frac{1}{1+e^{-2(a_{11}x_1 + a_{12}x_2 + a_{13}x_3 - \theta_1)}} - 1 - b_1 x_1,$$
$$\frac{dx_2}{dt} = 2\,\frac{1}{1+e^{-2(a_{21}x_1 + a_{22}x_2 + a_{23}x_3 - \theta_2)}} - 1 - b_2 x_2,$$
$$\frac{dx_3}{dt} = 2\,\frac{1}{1+e^{-2(a_{31}x_1 + a_{32}x_2 + a_{33}x_3 - \theta_3)}} - 1 - b_3 x_3.$$
The same system can be written as ([11])
$$\frac{dx_1}{dt} = \tanh(a_{11}x_1 + a_{12}x_2 + a_{13}x_3 - \theta_1) - b_1 x_1,$$
$$\frac{dx_2}{dt} = \tanh(a_{21}x_1 + a_{22}x_2 + a_{23}x_3 - \theta_2) - b_2 x_2,$$
$$\frac{dx_3}{dt} = \tanh(a_{31}x_1 + a_{32}x_2 + a_{33}x_3 - \theta_3) - b_3 x_3,$$
since
$$2\,\frac{1}{1+e^{-2z}} - 1 = \frac{1-e^{-2z}}{1+e^{-2z}} = \frac{e^{2z}-1}{e^{2z}+1} = \tanh(z).$$
The elements of this 3D network are called neurons. The connections between them are synapses (or nerves). There is an algorithm that describes how the impulses are propagated through the network. In the above model, this algorithm is encoded by the matrix
$$W = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$
Each neuron accepts signals from others and produces a single output. The extent to which the input of neuron i is driven by the output of neuron j is characterized by the synaptic weight a_ij. The dynamic evolution leads to attractors of System (4), and attractor dynamics have been experimentally observed in neural systems. In theoretical modeling, the emphasis is put on the attractors of a system. We wish to study them for System (4).
Similar systems arise in the theory of genetic regulatory networks. The difference is that the nonlinearity is represented by positive-valued sigmoidal functions. One such system is
$$\frac{dx_1}{dt} = \frac{1}{1+e^{-\mu_1(a_{11}x_1 + a_{12}x_2 + a_{13}x_3 - \theta_1)}} - b_1 x_1,$$
$$\frac{dx_2}{dt} = \frac{1}{1+e^{-\mu_2(a_{21}x_1 + a_{22}x_2 + a_{23}x_3 - \theta_2)}} - b_2 x_2,$$
$$\frac{dx_3}{dt} = \frac{1}{1+e^{-\mu_3(a_{31}x_1 + a_{32}x_2 + a_{33}x_3 - \theta_3)}} - b_3 x_3.$$
Notice that System (3), and therefore also System (4), can be obtained from System (6) with μ_i = 2, i = 1, 2, 3, by two arithmetic operations, namely multiplying the nonlinearity in (6) by 2 and subtracting 1. This changes the range of values of the nonlinearity in (3) to (−1, 1).
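The rescaling just described can be checked numerically; a minimal sketch:

```python
import math

def grn_sigmoid(z, mu):
    """The GRN-type response function 1/(1 + e^{-mu z})."""
    return 1.0 / (1.0 + math.exp(-mu * z))

# with mu = 2, multiplying the sigmoid by 2 and subtracting 1 yields tanh(z),
# so the range of values becomes (-1, 1) instead of (0, 1)
for z in [-3.0, -0.7, 0.0, 0.4, 2.5]:
    assert abs(2.0 * grn_sigmoid(z, 2.0) - 1.0 - math.tanh(z)) < 1e-12
```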
Systems of the form (6) were studied before by many authors. The interested reader may consult the works ([12,13,14,15,16,17,18,19,20]). Similar systems appear in the theory of telecommunication networks ([21]).
In this article, we study the different dynamic regimes of System (4) which can be observed under various conditions. In particular, we first speak about critical points of System (4) and evaluate their number. Then, we focus on periodic regimes and study their attractiveness for other trajectories. This can be performed, under some restrictions, for systems of relatively high dimensionality. Also, evidence of chaotic behavior is presented.

3. Preliminary Results

This section describes the basic properties of the systems under consideration and provides information about nullclines, critical points, and their role in the study.

3.1. Invariant Set

Consider the 3D system (4).
Proposition 1. 
System (4) has an invariant set $Q^3 = \{-1/b_1 < x_1 < 1/b_1,\; -1/b_2 < x_2 < 1/b_2,\; -1/b_3 < x_3 < 1/b_3\}$.
Proof. 
By inspection of the vector field generated by System (4) on the opposite faces of the three-dimensional box $Q^3$. Notice that the range of values of the function tanh z is $(-1, 1)$. □
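The inspection of the vector field can be reproduced numerically. The sketch below uses randomly generated (purely illustrative) weights and thresholds and checks that on every face of $Q^3$ the field points inward, because tanh takes values in (−1, 1):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))          # arbitrary illustrative weights
b = np.array([1.0, 0.5, 2.0])
theta = rng.normal(size=3)

def rhs(x):
    """Vector field of System (4)."""
    return np.tanh(W @ x - theta) - b * x

# on the face x_i = +1/b_i, the i-th component tanh(.) - 1 is negative;
# on the face x_i = -1/b_i, the i-th component tanh(.) + 1 is positive
for i in range(3):
    for sign in (+1.0, -1.0):
        for _ in range(100):
            x = rng.uniform(-1, 1, size=3) / b   # random point inside the box
            x[i] = sign / b[i]                    # project onto the face
            assert sign * rhs(x)[i] < 0.0         # field points inward
```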

3.2. Nullclines

The nullclines for the system are defined by the relations
$$x_1 = \frac{1}{b_1}\tanh(a_{11}x_1 + a_{12}x_2 + a_{13}x_3 - \theta_1),$$
$$x_2 = \frac{1}{b_2}\tanh(a_{21}x_1 + a_{22}x_2 + a_{23}x_3 - \theta_2),$$
$$x_3 = \frac{1}{b_3}\tanh(a_{31}x_1 + a_{32}x_2 + a_{33}x_3 - \theta_3).$$
Example 1. 
Consider the system with the matrix
$$W = \begin{pmatrix} 1.2 & 1.5 & 0 \\ 1.5 & 1.2 & 0 \\ 0 & 0 & 1.2 \end{pmatrix}$$
and b 1 = b 2 = b 3 = 1 , θ 1 = θ 2 = 0.5 , θ 3 = 1 .
The three nullclines for system (4) with matrix (8) are depicted in Figure 1.

3.3. Critical Points

The critical points, which are also called the equilibria, can be obtained from System (4). Geometrically, they are the intersection points of the nullclines. They satisfy the relations
$$x_1 - \frac{1}{b_1}\tanh(a_{11}x_1 + a_{12}x_2 + a_{13}x_3 - \theta_1) = 0,$$
$$x_2 - \frac{1}{b_2}\tanh(a_{21}x_1 + a_{22}x_2 + a_{23}x_3 - \theta_2) = 0,$$
$$x_3 - \frac{1}{b_3}\tanh(a_{31}x_1 + a_{32}x_2 + a_{33}x_3 - \theta_3) = 0.$$
Proposition 2. 
All critical points are in the invariant set.
The nullclines are located in the sets $\{-1/b_1 < x_1 < 1/b_1\}$, $\{-1/b_2 < x_2 < 1/b_2\}$, $\{-1/b_3 < x_3 < 1/b_3\}$, respectively, and these sets intersect exactly in the invariant set $Q^3$. □
Proposition 3. 
At least one critical point exists.
The invariant set $Q^3$ may be considered as a topological ball. Since the vector field on the boundary is directed inward, $Q^3$ is mapped into itself continuously by the flow. By Brouwer's theorem, a continuous mapping of $Q^3$ into itself has a fixed point. Any fixed point is a solution of the system (7). □
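A critical point can also be located constructively. The sketch below uses a hypothetical small-norm matrix (not one of the paper's examples), so that the map on the right-hand side of (7) is a contraction and Picard iteration converges to the fixed point guaranteed by Proposition 3:

```python
import numpy as np

W = np.array([[0.3, -0.4, 0.1],
              [0.2,  0.3, -0.2],
              [0.0,  0.5,  0.3]])   # hypothetical weights with small norm
b = np.ones(3)
theta = np.array([0.2, -0.1, 0.4])

# Picard iteration of x = (1/b) tanh(W x - theta); since |tanh'| <= 1 and
# the row sums of |W| are below 1, the map is a contraction here, and the
# iteration converges to the unique critical point inside Q^3
x = np.zeros(3)
for _ in range(200):
    x = np.tanh(W @ x - theta) / b

residual = x - np.tanh(W @ x - theta) / b   # should vanish at the fixed point
```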
Remark 1. 
The number of critical points may be greater than one, up to 27, but it is always finite.
Remark 2. 
Both Propositions 2 and 3 are valid for the n-dimensional case as well.
Example 2. 
Consider System (4) with the matrix
$$W = \begin{pmatrix} 1.2 & 2 & 0 \\ 2 & 1.2 & 0 \\ 0 & 0 & 1.2 \end{pmatrix}$$
and b 1 = b 2 = b 3 = 1 , θ 1 = 0.7 , θ 2 = 0.3 , θ 3 = 0.25 . There is one critical point ( 0.122 ; 0.362 ; 0.640 ) .
The three nullclines for system (4) with matrix (10) are depicted in Figure 2.
Example 3. 
Consider example of multiple critical points and the system (4) with the matrix
$$W = \begin{pmatrix} 1.2 & 2 & 0 \\ 2 & 1.2 & 0 \\ 0 & 0 & 1.2 \end{pmatrix}$$
and b 1 = b 2 = b 3 = 1 , θ 1 = 0.7 , θ 2 = 0.3 , θ 3 = 0.01 .
The three nullclines for system (4) with matrix (11) are depicted in Figure 3.
There are three critical points ( 0.122 ; 0.362 ; 0.640 ) , ( 0.122 ; 0.362 ; 0.050 ) and ( 0.122 ; 0.362 ; 0.675 ) .

3.4. Linearization at a Critical Point

Let ( x 1 * , x 2 * , x 3 * ) be a critical point. The linearization around it is given by the system
$$u_1' = -b_1 u_1 + a_{11}g_1 u_1 + a_{12}g_1 u_2 + a_{13}g_1 u_3,$$
$$u_2' = -b_2 u_2 + a_{21}g_2 u_1 + a_{22}g_2 u_2 + a_{23}g_2 u_3,$$
$$u_3' = -b_3 u_3 + a_{31}g_3 u_1 + a_{32}g_3 u_2 + a_{33}g_3 u_3,$$
where
$$g_1 = \frac{4e^{-2(a_{11}x_1^* + a_{12}x_2^* + a_{13}x_3^* - \theta_1)}}{\left[1 + e^{-2(a_{11}x_1^* + a_{12}x_2^* + a_{13}x_3^* - \theta_1)}\right]^2},$$
$$g_2 = \frac{4e^{-2(a_{21}x_1^* + a_{22}x_2^* + a_{23}x_3^* - \theta_2)}}{\left[1 + e^{-2(a_{21}x_1^* + a_{22}x_2^* + a_{23}x_3^* - \theta_2)}\right]^2},$$
$$g_3 = \frac{4e^{-2(a_{31}x_1^* + a_{32}x_2^* + a_{33}x_3^* - \theta_3)}}{\left[1 + e^{-2(a_{31}x_1^* + a_{32}x_2^* + a_{33}x_3^* - \theta_3)}\right]^2}.$$
One has
$$A - \lambda I = \begin{pmatrix} a_{11}g_1 - b_1 - \lambda & a_{12}g_1 & a_{13}g_1 \\ a_{21}g_2 & a_{22}g_2 - b_2 - \lambda & a_{23}g_2 \\ a_{31}g_3 & a_{32}g_3 & a_{33}g_3 - b_3 - \lambda \end{pmatrix}$$
and the characteristic equation for b 1 = b 2 = b 3 = 1 is
$$\det(A - \lambda I) = -\Lambda^3 + (a_{11}g_1 + a_{22}g_2 + a_{33}g_3)\Lambda^2 + \left[g_1g_2(a_{12}a_{21} - a_{11}a_{22}) + g_1g_3(a_{13}a_{31} - a_{11}a_{33}) + g_2g_3(a_{23}a_{32} - a_{22}a_{33})\right]\Lambda + g_1g_2g_3(a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} - a_{13}a_{22}a_{31}) = 0,$$
where Λ = λ + 1 .
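The expanded characteristic polynomial can be verified against a direct eigenvalue computation. In the sketch below (with random, purely illustrative entries of W and slopes g_i, and b_1 = b_2 = b_3 = 1), every eigenvalue λ of the Jacobian diag(g)W − I satisfies the displayed cubic in Λ = λ + 1:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 3))             # illustrative regulatory matrix
g = rng.uniform(0.1, 1.0, size=3)       # linearization slopes g_i in (0, 1]
A = np.diag(g) @ W - np.eye(3)          # Jacobian for b_1 = b_2 = b_3 = 1

a, (g1, g2, g3) = W, g
def char_poly(L):
    """det(diag(g) W - L I), written out as in the text (L = lambda + 1)."""
    return (-L**3
            + (a[0, 0]*g1 + a[1, 1]*g2 + a[2, 2]*g3) * L**2
            + (g1*g2*(a[0, 1]*a[1, 0] - a[0, 0]*a[1, 1])
               + g1*g3*(a[0, 2]*a[2, 0] - a[0, 0]*a[2, 2])
               + g2*g3*(a[1, 2]*a[2, 1] - a[1, 1]*a[2, 2])) * L
            + g1*g2*g3*np.linalg.det(W))

for lam in np.linalg.eigvals(A):
    assert abs(char_poly(lam + 1.0)) < 1e-9
```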

3.5. Regulatory Matrices With Zero Diagonal Elements

Set a 11 = a 22 = a 33 = 0 . The regulatory matrix is
$$W = \begin{pmatrix} 0 & a_{12} & a_{13} \\ a_{21} & 0 & a_{23} \\ a_{31} & a_{32} & 0 \end{pmatrix}$$
and the system of differential equations takes the form
$$x_1' = \tanh(a_{12}x_2 + a_{13}x_3 - \theta_1) - x_1,$$
$$x_2' = \tanh(a_{21}x_1 + a_{23}x_3 - \theta_2) - x_2,$$
$$x_3' = \tanh(a_{31}x_1 + a_{32}x_2 - \theta_3) - x_3.$$
Let ( x 1 * , x 2 * , x 3 * ) be a critical point. The respective linearized system around it is
$$u_1' = -u_1 + a_{12}g_1 u_2 + a_{13}g_1 u_3,$$
$$u_2' = -u_2 + a_{21}g_2 u_1 + a_{23}g_2 u_3,$$
$$u_3' = -u_3 + a_{31}g_3 u_1 + a_{32}g_3 u_2,$$
where g_1, g_2, g_3, given in (13) to (15), are computed for the regulatory matrix (18). The characteristic equation, in terms of Λ = λ + 1, takes the form
$$\Lambda^3 - B\Lambda - C = 0,$$
where
$$B = g_1g_2\,a_{12}a_{21} + g_1g_3\,a_{13}a_{31} + g_2g_3\,a_{23}a_{32},$$
$$C = g_1g_2g_3\,(a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32}).$$
Equation (21) has the form
$$y^3 + py + q = 0$$
with p = −B and q = −C.
Recall the Cardano formulas for Equation (24). This equation has one real root and two complex conjugate roots if
$$Q := \left(\frac{p}{3}\right)^3 + \left(\frac{q}{2}\right)^2$$
is positive. The complex roots can be obtained as
$$y_{2,3} = -\frac{a+b}{2} \pm i\,\frac{(a-b)\sqrt{3}}{2},$$
where
$$a = \sqrt[3]{-\frac{q}{2} + \sqrt{Q}}, \quad b = \sqrt[3]{-\frac{q}{2} - \sqrt{Q}}$$
are real cubic roots satisfying $a \cdot b = -p/3$. The real root of Equation (24) is $y_1 = a + b$.
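The Cardano formulas can be implemented directly. A minimal sketch for the case Q > 0, checked against numpy's polynomial root finder (the sample coefficients are illustrative, not from the paper):

```python
import math
import numpy as np

def real_cbrt(v):
    """Real cube root of a real number (handles negative arguments)."""
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def cardano(p, q):
    """One real and two complex conjugate roots of y^3 + p y + q = 0,
    in the case Q = (p/3)^3 + (q/2)^2 > 0."""
    Q = (p / 3.0) ** 3 + (q / 2.0) ** 2
    assert Q > 0
    a = real_cbrt(-q / 2.0 + math.sqrt(Q))
    b = real_cbrt(-q / 2.0 - math.sqrt(Q))
    re, im = -(a + b) / 2.0, (a - b) * math.sqrt(3.0) / 2.0
    return a + b, complex(re, im), complex(re, -im)

p, q = -1.0, 2.0                      # illustrative coefficients with Q > 0
y1, y2, y3 = cardano(p, q)
ref = np.roots([1.0, 0.0, p, q])      # independent check of the three roots
```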
Example 4. 
Consider System (19) with the matrix
$$W = \begin{pmatrix} 0 & 1.2 & 2 \\ 2 & 0 & 1.2 \\ 0.1 & 0.1 & 0 \end{pmatrix}$$
and b 1 = b 2 = b 3 = 1 , θ 1 = 0.3 , θ 2 = 0.3 , θ 3 = 0.01 .
The three nullclines for system (19) with matrix (27) are depicted in Figure 4.
There is a single critical point ( 0.496 ; 0.311 ; 0.308 ) . The characteristic numbers obtained by the linearization process are $\lambda_1 = -1.125$, $\lambda_{2,3} = -0.937 \pm 1.178\,i$.

4. Focus Type Critical Points

Consider again Equation (21). In our notation,
$$Q := -\left(\frac{B}{3}\right)^3 + \left(\frac{C}{2}\right)^2.$$
Suppose that Q > 0. Let $(x_1^*, x_2^*, x_3^*)$ be the critical point in question. The associated characteristic numbers λ are
$$\lambda_1 = -1 + (a+b), \quad \lambda_{2,3} = -1 - \frac{a+b}{2} \pm i\,\frac{(a-b)\sqrt{3}}{2},$$
where
$$a = \sqrt[3]{\frac{C}{2} + \sqrt{Q}}, \quad b = \sqrt[3]{\frac{C}{2} - \sqrt{Q}}$$
are the real values of the cubic roots, and Q is given by (28). We will call such a critical point a 3D-focus. It is unstable if the real part $-1 - \frac{a+b}{2}$ is positive. We arrive at the following assertion.
Proposition 4. 
Let ( x 1 * , x 2 * , x 3 * ) be a critical point of the system (19). Suppose that
$$\left(\frac{C}{2}\right)^2 > \left(\frac{B}{3}\right)^3.$$
Then, Q > 0 and this critical point is a 3D-focus.
Proof. 
Follows from (28) to (30). □
Corollary 1. 
Suppose the condition B < 0 holds for a critical point. Then, this point is a 3D-focus.
Proof. 
The relation (31) is fulfilled if B < 0, since then $(B/3)^3$ is negative while $(C/2)^2$ is non-negative. □
Proposition 5. 
Suppose $(x_1^*, x_2^*, x_3^*)$ is a critical point of type focus of the system (19). This point is an unstable focus if the condition $-1 - \frac{a+b}{2} > 0$ holds.
Proof. 
Follows from (29), since then the real part of λ 2 , 3 in (29) is positive. □
Example 5. 
Consider System (19) with the matrix
$$W = \begin{pmatrix} 0 & 1.5 & 3 \\ 3 & 0 & 1.5 \\ 3 & 0.1 & 0 \end{pmatrix}$$
and b 1 = b 2 = b 3 = 1 , θ 1 = 0.6 , θ 2 = 0.3 , θ 3 = 0.1 .
The three nullclines for system (19) with matrix (32) are depicted in Figure 5.
The system has three critical points: p 1 , p 2 and p 3 at ( 0.790 ; 0.836 ; 0.975 ) , ( 0.176 ; 0.248 ; 0.384 ) and ( 0.982 ; 0.819 ; 0.995 ) . The characteristic numbers λ are given in Table 1.

5. Inhibition-Activation

Consider the system
$$x_1' = \tanh(a_{12}x_2 + a_{13}x_3 - \theta_1) - x_1,$$
$$x_2' = \tanh(a_{21}x_1 + a_{23}x_3 - \theta_2) - x_2,$$
$$x_3' = \tanh(a_{31}x_1 + a_{32}x_2 - \theta_3) - x_3,$$
where $a_{12}, a_{13}, a_{23}$ are negative and $a_{21}, a_{31}, a_{32}$ are positive.
Let the regulatory matrix be
$$W = \begin{pmatrix} 0 & -1 & -1 \\ 1 & 0 & -1 \\ 1 & 1 & 0 \end{pmatrix},$$
and θ 1 = θ 2 = θ 3 = θ . There is a single critical point. Introduce
$$g_1 = \frac{4e^{-2(-x_2 - x_3 - \theta)}}{\left[1 + e^{-2(-x_2 - x_3 - \theta)}\right]^2},$$
$$g_2 = \frac{4e^{-2(x_1 - x_3 - \theta)}}{\left[1 + e^{-2(x_1 - x_3 - \theta)}\right]^2},$$
$$g_3 = \frac{4e^{-2(x_1 + x_2 - \theta)}}{\left[1 + e^{-2(x_1 + x_2 - \theta)}\right]^2}.$$
The range of values of g i is the interval ( 0 , 1 ) . The linearized system is
$$u_1' = -u_1 - g_1 u_2 - g_1 u_3,$$
$$u_2' = g_2 u_1 - u_2 - g_2 u_3,$$
$$u_3' = g_3 u_1 + g_3 u_2 - u_3.$$
One can obtain the matrix
$$A - \lambda I = \begin{pmatrix} -1-\lambda & -g_1 & -g_1 \\ g_2 & -1-\lambda & -g_2 \\ g_3 & g_3 & -1-\lambda \end{pmatrix}$$
and the characteristic equation
$$\lambda^3 + 3\lambda^2 + (g_1g_2 + g_1g_3 + g_2g_3 + 3)\lambda + (g_1g_2 + g_1g_3 + g_2g_3 + 1) = 0.$$
The roots of the characteristic equation are
$$\lambda_1 = -1, \quad \lambda_2 = -1 - i\sqrt{g_1g_2 + g_1g_3 + g_2g_3}, \quad \lambda_3 = -1 + i\sqrt{g_1g_2 + g_1g_3 + g_2g_3}.$$
Summing up, we arrive at the following assertion.
Proposition 6. 
A critical point of System (33) under the above conditions is a 3D-focus; that is, there is a 2D subspace with a stable focus, and attraction takes place in the remaining dimension.
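Proposition 6 can be confirmed numerically for arbitrary slope values. A sketch (the slopes below are chosen arbitrarily in (0, 1)):

```python
import numpy as np

g1, g2, g3 = 0.7, 0.4, 0.9            # illustrative slope values in (0, 1)
A = np.array([[-1.0, -g1, -g1],
              [ g2, -1.0, -g2],
              [ g3,  g3, -1.0]])      # Jacobian of the linearized system above
G = g1 * g2 + g1 * g3 + g2 * g3

# predicted spectrum: -1 (attraction in one dimension) and -1 -/+ i*sqrt(G)
# (a stable focus in the complementary 2D subspace)
lams = sorted(np.linalg.eigvals(A), key=lambda z: z.imag)
assert abs(lams[1] - (-1.0)) < 1e-9
assert abs(lams[0] - (-1.0 - 1j * np.sqrt(G))) < 1e-9
assert abs(lams[2] - (-1.0 + 1j * np.sqrt(G))) < 1e-9
```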

6. The Case of Triangular Regulatory Matrix

We consider the special case of the regulatory matrix being in triangular form,
$$W = \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ 0 & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ 0 & 0 & \ldots & a_{nn} \end{pmatrix}.$$
Since the presentation for the general case differs little from the three-dimensional case, let us consider the n-dimensional variant. The system of differential equations takes the form
$$x_1' = \tanh(a_{11}x_1 + a_{12}x_2 + \ldots + a_{1n}x_n - \theta_1) - x_1,$$
$$x_2' = \tanh(a_{22}x_2 + \ldots + a_{2n}x_n - \theta_2) - x_2,$$
$$\ldots$$
$$x_n' = \tanh(a_{nn}x_n - \theta_n) - x_n,$$
where n > 1 . Suppose that the coefficients a i j take values in the interval ( 0 ; 1 ] .

6.1. Critical Points

The critical points of System (43) can be determined from
$$x_1 = \tanh(a_{11}x_1 + a_{12}x_2 + \ldots + a_{1n}x_n - \theta_1),$$
$$x_2 = \tanh(a_{22}x_2 + \ldots + a_{2n}x_n - \theta_2),$$
$$\ldots$$
$$x_n = \tanh(a_{nn}x_n - \theta_n).$$
Since the right-hand sides in (44) are less than unity in modulus, all critical points are located in $(-1; 1) \times (-1; 1) \times \ldots \times (-1; 1)$. Due to the sigmoidal character of the function tanh z, the last equation in (44) may have one, two, or three roots.
Proposition 7. 
There are, at most, three values for x n in System (44).
Proposition 8. 
At most $3^n$ critical points are possible in System (43).
Proof. 
The last equation in (44) may have, at most, three roots, due to the S-shape of the graph of the sigmoidal function on the right-hand side. For each such root $x_n$, the penultimate equation in (44) may have, at most, three roots $x_{n-1}$; in total, at most $3 \times 3 = 9$ pairs. Proceeding in this way, we obtain, at most, $3^n$ roots for the very first equation in (44), and therefore at most $3^n$ critical points for System (43). This completes the proof. □
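The root counting behind Propositions 7 and 8 is easy to check for the scalar equation x = tanh(ax − θ). A small sketch with illustrative parameter values, counting sign changes of the difference on a fine grid:

```python
import numpy as np

def count_roots(a, theta, lo=-2.0, hi=2.0, n=40001):
    """Count sign changes of h(x) = x - tanh(a x - theta) on a fine grid."""
    x = np.linspace(lo, hi, n)
    h = x - np.tanh(a * x - theta)
    return int(np.sum(np.sign(h[:-1]) * np.sign(h[1:]) < 0))

# gentle slope a < 1: the line y = x crosses the sigmoid exactly once
assert count_roots(0.5, 0.1) == 1
# steep slope with a small threshold: three crossings, per the S-shape argument
assert count_roots(2.5, 0.1) == 3
```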

6.2. Linearized System

The linearized system is
$$u_1' = -u_1 + a_{11}g_1 u_1 + a_{12}g_1 u_2 + \ldots + a_{1n}g_1 u_n,$$
$$u_2' = -u_2 + a_{22}g_2 u_2 + \ldots + a_{2n}g_2 u_n,$$
$$\ldots$$
$$u_n' = -u_n + a_{nn}g_n u_n,$$
where
$$g_1 = \frac{4e^{-2(a_{11}x_1 + a_{12}x_2 + \ldots + a_{1n}x_n - \theta_1)}}{\left[1 + e^{-2(a_{11}x_1 + a_{12}x_2 + \ldots + a_{1n}x_n - \theta_1)}\right]^2},$$
$$g_2 = \frac{4e^{-2(a_{22}x_2 + \ldots + a_{2n}x_n - \theta_2)}}{\left[1 + e^{-2(a_{22}x_2 + \ldots + a_{2n}x_n - \theta_2)}\right]^2},$$
$$\ldots$$
$$g_n = \frac{4e^{-2(a_{nn}x_n - \theta_n)}}{\left[1 + e^{-2(a_{nn}x_n - \theta_n)}\right]^2}.$$
The values of g i are positive and not greater than unity. The characteristic values for a critical point are to be obtained from
$$A - \lambda I = \begin{pmatrix} a_{11}g_1 - 1 - \lambda & a_{12}g_1 & \ldots & a_{1n}g_1 \\ 0 & a_{22}g_2 - 1 - \lambda & \ldots & a_{2n}g_2 \\ \ldots & \ldots & \ldots & \ldots \\ 0 & 0 & \ldots & a_{nn}g_n - 1 - \lambda \end{pmatrix}$$
and
$$\det(A - \lambda I) = (a_{11}g_1 - 1 - \lambda)(a_{22}g_2 - 1 - \lambda)\cdots(a_{nn}g_n - 1 - \lambda) = 0.$$
Evidently,
$$\lambda_1 = -1 + a_{11}g_1, \quad \lambda_2 = -1 + a_{22}g_2, \quad \ldots, \quad \lambda_n = -1 + a_{nn}g_n.$$
Therefore, the characteristic values for any critical point are real, and the following assertion follows.
Proposition 9. 
The triangular system (43) cannot have critical points of type focus.
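Proposition 9 can be confirmed numerically: for a triangular regulatory matrix the Jacobian diag(g)W − I is triangular as well, so its spectrum is real and can be read off the diagonal. A sketch with randomly generated admissible entries:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
W = np.triu(rng.uniform(0.1, 1.0, size=(n, n)))   # upper triangular weights
g = rng.uniform(0.1, 1.0, size=n)                  # positive slopes g_i <= 1
A = np.diag(g) @ W - np.eye(n)                     # Jacobian with b_i = 1

lams = np.linalg.eigvals(A)
# diag(g) W is triangular, so the spectrum is -1 + a_ii * g_i: all real,
# hence no critical point of the focus type is possible
predicted = -1.0 + np.diag(W) * g
assert np.allclose(np.sort(lams.real), np.sort(predicted))
assert np.allclose(lams.imag, 0.0)
```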

7. Systems with Stable Periodic Solutions: Andronov–Hopf Type Bifurcations

7.1. 2D Case

We first study the second-order system
$$\frac{dx_1}{dt} = \tanh(kx_1 + bx_2 - \theta_1) - b_1 x_1,$$
$$\frac{dx_2}{dt} = \tanh(ax_1 + kx_2 - \theta_2) - b_2 x_2,$$
where $b = -a = 2$, and $k > 0$ is the parameter. Choose k small enough that the unique critical point is a stable focus. Then, increase k until the stable focus turns into an unstable one. A limit cycle then emerges, surrounding the critical point. This is the Andronov–Hopf bifurcation for 2D systems.
Example 6. 
Consider System (52) with the matrix
$$W = \begin{pmatrix} k & 2 \\ -2 & k \end{pmatrix}$$
and k = 0.5 , b 1 = b 2 = 1 , θ 1 = 0.1 , θ 2 = 0.3 .
The two nullclines and vector field for system (52) with matrix (53) are depicted in Figure 6.
There is one critical point: a stable focus. As the parameter k increases, the stable focus turns into an unstable one, and a limit cycle emerges, surrounding the critical point.
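The stable-focus regime is easy to reproduce numerically. The sketch below integrates System (52) with k = 0.5, taking a12·a21 < 0 (i.e., W = [[0.5, 2], [−2, 0.5]], the sign pattern needed for a focus); the trace of the linearization, k(g1 + g2) − 2, is then negative everywhere, so the trajectory settles at the equilibrium:

```python
import numpy as np

k = 0.5
W = np.array([[k, 2.0],
              [-2.0, k]])              # a12 * a21 < 0, as needed for a focus
theta = np.array([0.1, 0.3])

def rhs(x):
    """Vector field of System (52) with b1 = b2 = 1."""
    return np.tanh(W @ x - theta) - x

# plain RK4 up to t = 200; for k = 0.5 the unique critical point attracts
x = np.array([0.5, -0.5])
h = 0.01
for _ in range(20000):
    k1 = rhs(x)
    k2 = rhs(x + 0.5 * h * k1)
    k3 = rhs(x + 0.5 * h * k2)
    k4 = rhs(x + h * k3)
    x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

Increasing k past the bifurcation value (Example 7 uses k = 1.1) makes the trace at the critical point positive, and the same integration then approaches the emerging limit cycle instead of an equilibrium.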
Example 7. 
Consider System (52) with the matrix
$$W = \begin{pmatrix} k & 2 \\ -2 & k \end{pmatrix}$$
and k = 1.1 , b 1 = b 2 = 1 , θ 1 = 0.1 , θ 2 = 0.3 .
The two nullclines, vector field and limit cycle for system (52) with matrix (54) are depicted in Figure 7.

7.2. 3D Case

Consider now the 3D system with the matrix
$$W = \begin{pmatrix} k & 0 & b \\ 0 & a_{22} & 0 \\ a & 0 & k \end{pmatrix},$$
where a, b, k are as in the 2D system (52). The second nullcline is defined by the relation
$$x_2 = \frac{1}{b_2}\tanh(a_{22}x_2 - \theta_2).$$
Choose the parameters so that Equation (56) has three roots. Then, the second nullcline is a union of three parallel planes.
Example 8. 
Consider System (4) with the matrix
$$W = \begin{pmatrix} 1.5 & 0 & 2 \\ 0 & 2.5 & 0 \\ -2 & 0 & 1.5 \end{pmatrix}$$
and $b_1 = b_2 = b_3 = 1$, $\theta_1 = 0.1$, $\theta_2 = 0$, $\theta_3 = 0.2$. The nullclines are depicted in Figure 8. There are three periodic solutions; they are depicted in Figure 9.

8. Control and Management of ANN

First, a citation from [22]: “Models of ANN are specified by three basic entities: models of the neurons themselves–that is, the node characteristics; models of synaptic interconnections and structures–that is, net topology and weights; and training or learning rules—that is, the method of adjusting the weights or the way the network interprets the information it receives”.
In this section, we discuss the problem of changing the behavior of the trajectories of System (4). This may be interpreted as partial control over the system. The system has as parameters the coefficients a_ij, the values θ_i, and the coefficients b_i in the linear part. Properties of the system may be changed by varying any of these.
We would like to demonstrate how a system of the form (4) can be modified so that trajectories start to tend to one of the indicated attractors. For this, consider a system (4) that has three limit cycles as attractors. Such a system can be constructed via three operations: (1) put the entries of the 2D regulatory matrix, which corresponds to a 2D system with the limit cycle L, into the four corners of a 3D matrix A; (2) choose the middle element a_22 of the 3D matrix A so that the equation $x_2 = \tanh(a_{22}x_2 - \theta_2)$ with respect to $x_2$ has exactly three roots $r_1 < r_2 < r_3$; (3) set the four remaining values a_ij to zero. Set also the b_i to unity. After finishing these preparations, the second nullcline will consist of three parallel planes P_i passing through $x_2 = r_i$, i = 1, 2, 3. Each of these planes will contain a limit cycle. The two side limit cycles will attract trajectories from their neighborhoods. The middle limit cycle will attract only trajectories lying in the plane P_2.
Now, let us solve the problem of control. Let the limit cycle in P_3 be conditionally “bad”. The problem is to change the system so that all trajectories in $Q^3$ are attracted to the limit cycle which, at the beginning of the process, was in the plane P_1. Problems of this kind may arise often. In the paper [20], a similar problem was treated mathematically for genetic networks.
Solution: Change $\theta_2$ so that the equation $x_2 = \tanh(a_{22}x_2 - \theta_2)$ now has a unique root near $r_1$. The second nullcline is then a single plane passing near P_1. This operation is possible, since the graph of $\tanh(a_{22}x_2 - \theta_2)$ is sigmoidal, and changing $\theta_2$ shifts it, moving the roots. After that, only one attractor (a limit cycle) remains. The problem is solved.
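The three-step construction and the control step can be sketched in code. The 2D matrix L2 and the value a_22 = 2.5 below are illustrative (not the paper's exact values); the root count of x2 = tanh(a_22·x2 − θ2) gives the number of parallel planes in the second nullcline:

```python
import numpy as np

def build_A(L2, a22):
    """Steps (1)-(3): corners of A taken from the 2D matrix L2,
    middle entry a22, remaining four entries zero."""
    A = np.zeros((3, 3))
    A[0, 0], A[0, 2] = L2[0, 0], L2[0, 1]
    A[2, 0], A[2, 2] = L2[1, 0], L2[1, 1]
    A[1, 1] = a22
    return A

def n_middle_planes(a22, theta2, n=40001):
    """Number of planes in the second nullcline = number of roots of
    x2 = tanh(a22 * x2 - theta2), located by sign changes on a grid."""
    x = np.linspace(-2.0, 2.0, n)
    h = x - np.tanh(a22 * x - theta2)
    return int(np.sum(np.sign(h[:-1]) * np.sign(h[1:]) < 0))

L2 = np.array([[1.5, 2.0], [-2.0, 1.5]])   # hypothetical 2D matrix with a cycle
A = build_A(L2, 2.5)
assert n_middle_planes(2.5, 0.1) == 3      # three planes, three attractors
assert n_middle_planes(2.5, 3.0) == 1      # after shifting theta2, one remains
```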
In neuronal systems, the θ parameters express the threshold of the response function f ([4]). In genetic networks, θ_i stands for the influence of external input on gene i, which modulates the gene's sensitivity of response ([23]). The technique of changing the θ parameters, and thus shifting the nullclines, was applied in the work [24] for building partial control over a model of a genetic network.

9. Conclusions

The modeling of genetic and neural networks using dynamical systems is effective in both cases. The advantage of this approach, compared with other models, is the possibility of following the evolution of the modeled networks. Both systems have invariant sets trapping the trajectories; as a consequence, attracting sets exist. The structure and properties of attractors are important for the prediction of future states of networks. Both systems must have critical points. These points may be attracting (stable) or repelling. Limit cycles are possible in both cases. Attractors exhibiting sensitivity to the initial data are possible for three-dimensional GRN and ANN systems. Systems with specific structures can have predictable properties. For instance, triangular systems cannot have critical points of the focus type. In contrast, inhibition-activation systems typically have critical points of this type and can undergo bifurcations of Andronov–Hopf type. Partial control and management are possible for GRN and ANN systems. In particular, some realistically large-sized GRN systems allow for control and management by changing the adjustable parameters. This problem is relevant to modern medicine.

Author Contributions

Writing—review & editing, D.O. and F.S. The authors contributed equally to the creation of this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No data were created in this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Haykin, S. Neural networks. In A Comprehensive Foundation, 2nd ed.; Prentice Hall: Hoboken, NJ, USA, 1999. [Google Scholar]
  2. Chapeau-Blondeau, F.; Chauvet, G. Stable, Oscillatory, and Chaotic Regimes in the Dynamics of Small Neural Networks with Delay. Neural Netw. 1992, 5, 735–743. [Google Scholar]
Figure 1. The nullclines for System (4) with Matrix (8) ( x 1 —red, x 2 —green, x 3 —blue).
Figure 2. The nullclines for System (4) ( x 1 —red, x 2 —green, x 3 —blue) with Matrix (10).
Figure 3. The nullclines for System (4) ( x 1 —red, x 2 —green, x 3 —blue) with Matrix (11).
Figure 4. The nullclines for System (19) ( x 1 —red, x 2 —green, x 3 —blue) with Matrix (53).
Figure 5. The nullclines for System (19) ( x 1 —red, x 2 —green, x 3 —blue).
Figure 6. The nullclines and vector field for System (52) ( x 1 —blue, x 2 —red) with Matrix (53).
Figure 7. The limit cycle in System (52) ( x 1 —blue, x 2 —red) with Matrix (54).
Figure 8. The nullclines of System (56) with the regulatory matrix (57).
Figure 9. The three periodic solutions of System (56) with the regulatory matrix (57).
Table 1. The characteristic numbers λ.

Point | λ1      | λ2                | λ3
p1    | −0.9268 | −1.0366 − 0.6101i | −1.0366 + 0.6101i
p2    | 1.1972  | −2.0986 − 0.8406i | −2.0986 + 0.8406i
p3    | −0.9821 | −1.0090 − 0.2189i | −1.0090 + 0.2189i
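The stability of each critical point follows from the standard linearization criterion: a point is locally asymptotically stable if every characteristic number has a negative real part, and unstable if any has a positive real part. The following sketch (an illustrative check, not part of the original computation) applies this criterion to the values in Table 1.

```python
# Eigenvalues (characteristic numbers) of the linearization at each
# critical point, transcribed from Table 1.
eigenvalues = {
    "p1": [-0.9268, complex(-1.0366, -0.6101), complex(-1.0366, 0.6101)],
    "p2": [1.1972, complex(-2.0986, -0.8406), complex(-2.0986, 0.8406)],
    "p3": [-0.9821, complex(-1.0090, -0.2189), complex(-1.0090, 0.2189)],
}

def classify(lams):
    # Locally asymptotically stable iff all real parts are negative.
    return "stable" if all(complex(l).real < 0 for l in lams) else "unstable"

for name, lams in eigenvalues.items():
    print(name, classify(lams))
# p1 and p3 are stable foci (complex pairs with negative real parts);
# p2 has one positive real eigenvalue and is therefore unstable.
```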