Article

Mathematical Analysis of a Prey–Predator System: An Adaptive Back-Stepping Control and Stochastic Approach

Kalyan Das 1, M. N. Srinivas 2, V. Madhusudanan 3 and Sandra Pinelas 4
1 Department of Mathematics, National Institute of Food Technology Entrepreneurship and Management (NIFTEM), HSIIDC Industrial Estate, Kundli 131028, Haryana, India
2 Department of Mathematics, School of Advanced Sciences, VIT, Vellore 632014, Tamil Nadu, India
3 Department of Mathematics, S.A. Engineering College, Chennai 600077, Tamil Nadu, India
4 Departamento de Ciencias Exatas e Naturais, Academia Militar, Av. Conde Castro Guimaraes, 2720-113 Amadora, Portugal
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2019, 24(1), 22; https://doi.org/10.3390/mca24010022
Submission received: 4 December 2018 / Revised: 16 January 2019 / Accepted: 30 January 2019 / Published: 5 February 2019
(This article belongs to the Section Natural Sciences)

Abstract

In this paper, a stochastic analysis of a diseased prey–predator system with adaptive back-stepping control is carried out. The system is first investigated for its basic dynamical properties, namely boundedness and local stability, and the global stability of the interior equilibrium is derived using a Lyapunov function. A condition for uniform persistence of the system is obtained. The proposed system is then studied under adaptive back-stepping control, and it is proved that the nonlinear feedback control drives the system to its steady state. Environmental stochasticity is incorporated in the form of Gaussian white noise, and the resulting population fluctuation intensities are computed. We also establish conditions under which all positive solutions of the delayed system oscillate. Numerical simulations illustrate and support the analytical findings. We conclude that controlled harvesting of the susceptible and infected prey is able to control prey infection.

1. Introduction

Theoretical research and field observations have established the prevalence of various infectious diseases amongst the majority of ecosystem populations. In the ecological system, the impact of such infectious diseases is an important area of research for ecologists and mathematicians. The process of merging ecology and epidemiology over the past few decades has been challenging and interesting. By nature, a species always depends on other species for its food and living space; in doing so, it is exposed to infectious diseases and also competes against, and is predated by, other species. The dynamical behavior of such systems is analyzed using mathematical models that are described by differential equations. Mathematical epidemic models have gained much attention from researchers after the pioneering work of Kermack and McKendrick [1] on the SIRS (Susceptible-Infective-Removal-Susceptible) system, in which the evolution of a disease transmitted upon contact is described. The influence of epidemics on predation was first studied by Anderson and May [2,3]. Hadeler and Freedman [4] considered a prey–predator model in which predation is more likely on infected prey. In their model, they assumed that predators become infected only from infected prey, by predation. Haque and Venturino [5,6] discussed models of diseases spreading in symbiotic communities. Mukhopadhyay [7] studied the role of harvesting and switching on the dynamics of disease propagation and/or eradication. The role of prey infection on the stability aspects of prey–predator models with different functional responses was studied by Bairagi et al. [8]. Han et al. [9] analyzed four epidemiological models for SIS (Susceptible-Infectious-Susceptible) and SIR (Susceptible-Infectious-Recovered) diseases with standard and mass-action incidence. Das [10] showed that parasite infection in predator populations stabilizes prey–predator oscillations. Pal and Samanta [11] studied the dynamical behavior of a ratio-dependent prey–predator system with infection in the prey population. They proved that prey refuge has a stabilizing effect on the prey–predator interaction. Numerous examples of the prey–predator relationship with infection in the prey population can be found in various studies [12,13,14,15,16,17].
Adaptation is a fundamental characteristic of living organisms, such as those in prey–predator systems and other ecological models, since they attempt to maintain physiological equilibrium in the midst of changing environmental conditions. Adaptive control is an active field in the design of control systems that helps deal with uncertainties. Back-stepping is a technique for designing stabilizing controls for nonlinear dynamical systems; this approach is a recursive method for stabilizing the origin of the system, and the design process terminates when the final external control is reached. El-Gohary and Al-Ruzaiza [18,19] discussed the chaos and adaptive control of a three-species, continuous-time prey–predator model. Recently, Madhusudanan et al. [20] studied back-stepping control in a diseased prey–predator system. They proved that the system is globally asymptotically stable at the origin with the help of nonlinear feedback control. Numerous examples of control techniques in prey–predator systems can be found in various studies [21,22].
The rest of the paper is structured as follows. In Section 2, we formulate the mathematical model under suitable assumptions, and the positivity and boundedness of the deterministic model are discussed. Section 3 deals with the existence of equilibrium points and their feasibility conditions. In Section 4, the local stability analysis of the equilibrium points is discussed. Section 5 deals with the global stability analysis of the interior equilibrium point $E_3(x^*, y^*, z^*)$. We discuss the condition for permanence of the system in Section 6. In Section 7, we introduce adaptive back-stepping control into the prey–predator system. In Section 8, we compute the population fluctuation intensities due to the incorporation of noise, which can lead to chaotic-looking dynamics in reality. In Section 9, we propose and analyze a delayed prey–predator system. Numerical simulations of the proposed model are presented in Section 10, and conclusions are drawn in Section 11.

2. Mathematical Model

In this section, a continuous-time prey–predator system with susceptible prey, infected prey and a predator is considered. It is assumed that the susceptible prey population grows according to the logistic law and that only infected prey are predated. The disease spreads only within the prey population, and infected prey remain infected and do not recover.
The model becomes:
$\frac{dx}{dt} = x\left(1 - \frac{x}{K}\right) - \frac{axy}{1+x} - h_1 x,$
$\frac{dy}{dt} = \frac{axy}{1+x} - byz - h_2 y,$
$\frac{dz}{dt} = fyz - dz. \quad (1)$
Here, $x(t)$, $y(t)$ and $z(t)$ denote the susceptible prey, infected prey and predator populations, respectively. The parameters $a$, $d$, $b$, $f$, $h_1$ and $h_2$ denote the rate of transmission from the susceptible to the infected prey population, the death rate of predators, the searching efficiency of the predator, the conversion-efficiency rate of the predator, and the constant harvesting rates of susceptible prey and infected prey, respectively. We will analyze system (1) with the following initial conditions:
$x(0) \geq 0, \quad y(0) \geq 0, \quad z(0) \geq 0. \quad (2)$
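As an illustration (not part of the original analysis), the following short Python sketch integrates system (1) numerically with SciPy, using the parameter values and initial densities employed later in Section 10; the function and variable names are our own.

```python
# Minimal sketch: numerical integration of system (1).
import numpy as np
from scipy.integrate import solve_ivp

a, b, f, h1, h2, d, K = 1.5, 0.3, 0.65, 0.5, 0.3, 0.2, 8.0

def rhs(t, state):
    x, y, z = state
    dx = x * (1 - x / K) - a * x * y / (1 + x) - h1 * x
    dy = a * x * y / (1 + x) - b * y * z - h2 * y
    dz = f * y * z - d * z
    return [dx, dy, dz]

sol = solve_ivp(rhs, (0, 500), [2.0, 1.8, 1.0], rtol=1e-8)
print(sol.y[:, -1])   # long-run state; trajectories stay non-negative and bounded
```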

Positiveness and Boundedness of the System

In theoretical eco-epidemiology, boundedness of the system implies that the system is biologically well-behaved. The following theorems ensure the positivity and boundedness of the system (1):
Theorem 1.
All solutions $(x(t), y(t), z(t))$ of system (1) with the initial condition (2) are positive for all $t \geq 0$.
Proof. 
From (1), it is observed that
$\frac{dx}{x} = \left[1 - \frac{x}{K} - \frac{ay}{1+x} - h_1\right]dt = \phi_1(x, y)\,dt,$
where $\phi_1(x, y) = 1 - \frac{x}{K} - \frac{ay}{1+x} - h_1$.
Integrating over $[0, t]$, we get $x(t) = x(0)\exp\left(\int_0^t \phi_1(x, y)\,ds\right) > 0$ for all $t$. From (1), it is observed that
$\frac{dy}{y} = \left[\frac{ax}{1+x} - bz - h_2\right]dt = \phi_2(x, z)\,dt,$
where $\phi_2(x, z) = \frac{ax}{1+x} - bz - h_2$.
Integrating over $[0, t]$, we get $y(t) = y(0)\exp\left(\int_0^t \phi_2(x, z)\,ds\right) > 0$ for all $t$. From (1), it is observed that
$\frac{dz}{z} = [fy - d]\,dt = \phi_3(y)\,dt,$
where $\phi_3(y) = fy - d$.
Integrating over $[0, t]$, we get $z(t) = z(0)\exp\left(\int_0^t \phi_3(y)\,ds\right) > 0$ for all $t$. Hence, all solutions starting from the interior of the first octant $\mathrm{Int}(\mathbb{R}_+^3)$ remain positive in the future. □
Theorem 2.
All the non-negative solutions of the model system (1) that initiate in $\mathbb{R}_+^3$ are uniformly bounded.
Proof. 
Let $(x(t), y(t), z(t))$ be any solution of system (1). Since, from (1), $\frac{dx}{dt} \leq x\left(1 - \frac{x}{K}\right)$, we have $\limsup_{t\to\infty} x(t) \leq K$. Let $\xi = x + y + \frac{b}{f}z$; therefore,
$\frac{d\xi}{dt} = \frac{dx}{dt} + \frac{dy}{dt} + \frac{b}{f}\frac{dz}{dt}. \quad (3)$
Substituting Equation (1) into Equation (3) and choosing $0 < m \leq \min(h_2, d)$, we get
$\frac{d\xi}{dt} + m\xi = x(1 + m - h_1) - \frac{x^2}{K} + (m - h_2)y + \frac{b}{f}(m - d)z \leq x(1 + m - h_1) - \frac{x^2}{K},$
$\frac{d\xi}{dt} + m\xi \leq \mu, \quad \text{where } \mu = K(1 + m - h_1),$
with $m$ and $\mu$ positive constants. Applying the lemma on differential inequalities [23], we obtain
$0 \leq \xi(x, y, z) \leq \frac{\mu}{m}\left(1 - e^{-mt}\right) + \xi(x(0), y(0), z(0))\,e^{-mt},$
and, for $t \to \infty$, we have $0 \leq \xi(x, y, z) \leq \mu/m$. Thus, all solutions of system (1) enter the region
$\Gamma = \left\{(x, y, z) \in \mathbb{R}_+^3 : 0 \leq x \leq K,\; 0 \leq \xi \leq \frac{\mu}{m} + \epsilon,\; \epsilon > 0\right\}.$
This completes the proof. □

3. Equilibrium Points and Criteria for Existence

The possible steady states for system (1) and their existence conditions for each of them are as follows:
  • The vanishing equilibrium point $E_0 = (0, 0, 0)$ always exists.
  • The disease- and predator-free equilibrium point $E_1 = (\tilde{x}, 0, 0)$, where $\tilde{x} = K(1 - h_1)$, exists provided that $h_1 < 1$.
  • The predator-free equilibrium point $E_2 = (\hat{x}, \hat{y}, 0)$, where $\hat{x} = \frac{h_2}{a - h_2}$ and $\hat{y} = \frac{K(a - h_2)(1 - h_1) - h_2}{K(a - h_2)^2}$, exists provided that $a > h_2$, $h_1 < 1$ and $K(a - h_2)(1 - h_1) > h_2$.
  • The co-existence steady state $E_3 = (x^*, y^*, z^*)$, where $y^* = \frac{d}{f}$ and $z^* = \frac{(a - h_2)x^* - h_2}{b(1 + x^*)}$, exists provided that $(a - h_2)x^* > h_2$.
Here, $x^*$ is a positive root of (5),
$A x^{*2} + B x^* + C = 0, \quad (5)$
where $A = f$, $B = Kfh_1 + f - Kf$ and $C = Kfh_1 + adK - Kf$.
By Discard’s rule of sign, Equation (5) has a unique positive root, if K h 1 + 1 < K , f h 1 + a d > f .

4. Stability Analysis

In this section, we analyze the local stability of system (1) by constructing the Jacobian matrix at each equilibrium point. The Jacobian matrix of the system at any point $(x, y, z)$ is given by
$J(x, y, z) = \begin{pmatrix} 1 - \frac{2x}{K} - h_1 - \frac{ay}{(1+x)^2} & -\frac{ax}{1+x} & 0 \\ \frac{ay}{(1+x)^2} & \frac{ax}{1+x} - bz - h_2 & -by \\ 0 & fz & fy - d \end{pmatrix}.$
  • The variational matrix for the equilibrium point $E_0(0, 0, 0)$ is
    $J(E_0) = \begin{pmatrix} 1 - h_1 & 0 & 0 \\ 0 & -h_2 & 0 \\ 0 & 0 & -d \end{pmatrix}.$
    The eigenvalues of $J(E_0)$ are $\lambda_1 = 1 - h_1$, $\lambda_2 = -h_2$, $\lambda_3 = -d$. All the eigenvalues are negative if $h_1 > 1$. Hence, $E_0$ is locally asymptotically stable in the $xyz$ direction.
  • The variational matrix for the equilibrium point $E_1(K(1 - h_1), 0, 0)$ is
    $J(E_1) = \begin{pmatrix} h_1 - 1 & -\frac{aK(1 - h_1)}{1 + K(1 - h_1)} & 0 \\ 0 & \frac{aK(1 - h_1)}{1 + K(1 - h_1)} - h_2 & 0 \\ 0 & 0 & -d \end{pmatrix}.$
    The eigenvalues of $J(E_1)$ are $\lambda_1 = h_1 - 1$, $\lambda_2 = \frac{aK(1 - h_1)}{1 + K(1 - h_1)} - h_2$, $\lambda_3 = -d$. All the eigenvalues are negative if $h_1 < 1$ and $aK(1 - h_1) < h_2(1 + K(1 - h_1))$. Hence, $E_1$ is locally asymptotically stable in the $xyz$ direction.
  • The variational matrix for the equilibrium point $E_2(\hat{x}, \hat{y}, 0)$ is
    $J(E_2) = \begin{pmatrix} 1 - \frac{2\hat{x}}{K} - \frac{a\hat{y}}{(1+\hat{x})^2} - h_1 & -h_2 & 0 \\ \frac{K(a - h_2)(1 - h_1) - h_2}{Ka} & 0 & -b\hat{y} \\ 0 & 0 & f\hat{y} - d \end{pmatrix}.$
    The eigenvalues of $J(E_2)$ have negative real parts if $f\hat{y} < d$ and $K(1 - h_1)(1 + \hat{x})^2 < 2\hat{x}(1 + \hat{x})^2 + aK\hat{y}$. Hence, the equilibrium point $E_2$ is locally asymptotically stable in the $xyz$ direction.
Theorem 3.
Suppose the co-existence equilibrium point $E_3(x^*, y^*, z^*)$ of system (1) exists. Then $E_3$ is locally asymptotically stable if the following conditions are satisfied:
1. $ax^* < (bz^* + h_2)(1 + x^*)$, and
2. $K(1 - h_1)(1 + x^*)^2 < 2x^*(1 + x^*)^2 + aKy^*$.
Proof. 
The variational matrix at the interior equilibrium point $E_3(x^*, y^*, z^*)$ is
$J(E_3) = \begin{pmatrix} a_{11} & a_{12} & 0 \\ a_{21} & a_{22} & a_{23} \\ 0 & a_{32} & 0 \end{pmatrix},$
where
$a_{11} = 1 - \frac{2x^*}{K} - h_1 - \frac{ay^*}{(1+x^*)^2}, \quad a_{12} = -\frac{ax^*}{1+x^*}, \quad a_{21} = \frac{ay^*}{(1+x^*)^2}, \quad a_{22} = \frac{ax^*}{1+x^*} - bz^* - h_2, \quad a_{23} = -by^*, \quad a_{32} = fz^*.$
The characteristic equation of the Jacobian matrix $J(E_3)$ is given by $\lambda^3 + A_1\lambda^2 + A_2\lambda + A_3 = 0$, where
$A_1 = -(a_{11} + a_{22}), \quad A_2 = a_{11}a_{22} - a_{12}a_{21} - a_{23}a_{32}, \quad A_3 = a_{11}a_{23}a_{32}$
and
$A_1 A_2 - A_3 = a_{11}a_{12}a_{21} + a_{12}a_{21}a_{22} + a_{22}a_{23}a_{32} - a_{11}^2 a_{22} - a_{11}a_{22}^2.$
Sufficient conditions for $A_1 A_2 - A_3 > 0$ are $a_{11} < 0$ and $a_{22} < 0$, which imply
$a x^* < (b z^* + h_2)(1 + x^*), \quad K(1 - h_1)(1 + x^*)^2 < 2x^*(1 + x^*)^2 + aKy^*.$
Hence, by the Routh–Hurwitz criterion, $E_3$ is locally asymptotically stable. □
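As a numerical check of Theorem 3 (our own sketch, not from the paper), the script below evaluates the Jacobian $J(E_3)$ and the Routh–Hurwitz quantities at the coexistence equilibrium of Example 1.

```python
# Sketch: eigenvalues of J(E3) and the Routh-Hurwitz conditions of Theorem 3.
import numpy as np

a, b, f, h1, h2, d, K = 1.5, 0.3, 0.65, 0.5, 0.3, 0.2, 9.0
xs, ys, zs = 3.596254004, 0.307692308, 2.912157597   # E3 from Example 1

a11 = 1 - 2 * xs / K - h1 - a * ys / (1 + xs) ** 2
a12 = -a * xs / (1 + xs)
a21 = a * ys / (1 + xs) ** 2
a22 = a * xs / (1 + xs) - b * zs - h2
a23 = -b * ys
a32 = f * zs

J = np.array([[a11, a12, 0.0], [a21, a22, a23], [0.0, a32, 0.0]])
A1, A2, A3 = -(a11 + a22), a11 * a22 - a12 * a21 - a23 * a32, a11 * a23 * a32
print(np.linalg.eigvals(J))                 # negative real parts indicate local stability
print(A1 > 0, A3 > 0, A1 * A2 - A3 > 0)     # Routh-Hurwitz quantities
```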

5. Global Stability Analysis

In this section, we investigate the global stability behavior of system (1) at the interior equilibrium $E_3(x^*, y^*, z^*)$ by using the Lyapunov stability theorem.
Theorem 4.
If $\frac{yx^*}{y^*} < x < x^*$ or $x^* < x < \frac{yx^*}{y^*}$, then $E_3(x^*, y^*, z^*)$ is globally asymptotically stable.
Proof. 
Let us define
$V(x, y, z) = P\left[x - x^* - x^*\ln\frac{x}{x^*}\right] + \left[y - y^* - y^*\ln\frac{y}{y^*}\right] + Q\left[z - z^* - z^*\ln\frac{z}{z^*}\right], \quad (6)$
where P , Q are positive constants to be chosen later.
Differentiating (6) along the solution of the system (1) with respect to t , we get
$\frac{dV}{dt} = P\frac{x - x^*}{x}\frac{dx}{dt} + \frac{y - y^*}{y}\frac{dy}{dt} + Q\frac{z - z^*}{z}\frac{dz}{dt} = P(x - x^*)\left[1 - \frac{x}{K} - \frac{ay}{1+x} - h_1\right] + (y - y^*)\left[\frac{ax}{1+x} - bz - h_2\right] + Q(z - z^*)(fy - d) = P(x - x^*)\left[-\frac{1}{K}(x - x^*) - a\left(\frac{y}{1+x} - \frac{y^*}{1+x^*}\right)\right] + (y - y^*)\left[a\left(\frac{x}{1+x} - \frac{x^*}{1+x^*}\right) - b(z - z^*)\right] + Qf(z - z^*)(y - y^*).$
Choosing $P = 1$ and $Q = b/f$, this simplifies to
$\frac{dV}{dt} = -\frac{1}{K}(x - x^*)^2 - \frac{a(yx^* - xy^*)}{(1+x)(1+x^*)}(x - x^*).$
Now,
$\frac{dV}{dt} < 0 \quad \text{if} \quad x^* < x < \frac{yx^*}{y^*} \quad \text{or} \quad \frac{yx^*}{y^*} < x < x^*.$
Then, $\frac{dV}{dt}$ is negative definite. Consequently, $V$ is a Lyapunov function with respect to all solutions in the interior of the positive octant. □
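A quick numerical check of Theorem 4 (our own sketch) is to evaluate the Lyapunov function (6) with $P = 1$, $Q = b/f$ along a simulated orbit of system (1) and confirm that it decreases as the orbit approaches $E_3$; parameter values and the equilibrium are those of Example 1.

```python
# Sketch: V of (6) evaluated along a simulated trajectory of system (1).
import numpy as np
from scipy.integrate import solve_ivp

a, b, f, h1, h2, d, K = 1.5, 0.3, 0.65, 0.5, 0.3, 0.2, 9.0
xs, ys, zs = 3.596254004, 0.307692308, 2.912157597   # E3

def rhs(t, s):
    x, y, z = s
    return [x * (1 - x / K) - a * x * y / (1 + x) - h1 * x,
            a * x * y / (1 + x) - b * y * z - h2 * y,
            f * y * z - d * z]

def V(x, y, z):
    return ((x - xs - xs * np.log(x / xs))
            + (y - ys - ys * np.log(y / ys))
            + (b / f) * (z - zs - zs * np.log(z / zs)))

sol = solve_ivp(rhs, (0, 300), [2.0, 1.8, 1.0], t_eval=np.linspace(0, 300, 3001))
vals = V(sol.y[0], sol.y[1], sol.y[2])
print(vals[0], vals[-1])   # V shrinks as the orbit approaches E3
```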

6. Permanence of the System

In this section, our main interest is the long-term survival of species in an ecological system. Many notions and terms are used in the literature to discuss and analyze the long-term survival of populations; among these, permanence and persistence are the most suitable for analyzing our system. From an ecological point of view, permanence of a system means the long-term survival of all populations of the system.
Definition 1.
The system (1) is said to be permanent if there exist constants $M \geq m > 0$ such that, for any solution $(x(t), y(t), z(t))$ of system (1) with $(x(0), y(0), z(0)) > 0$,
$m \leq \liminf_{t\to\infty} x(t) \leq \limsup_{t\to\infty} x(t) \leq M,$
$m \leq \liminf_{t\to\infty} y(t) \leq \limsup_{t\to\infty} y(t) \leq M,$
$m \leq \liminf_{t\to\infty} z(t) \leq \limsup_{t\to\infty} z(t) \leq M.$
Now, we show that system (1) is uniformly persistent. From an ecological point of view, persistence means the survival of all populations of the system in future time; from a mathematical point of view, persistence of a system means that a strictly positive solution has no omega limit points on the boundary of the non-negative cone.
Definition 2.
A population $x(t)$ is said to be uniformly persistent if there exists $\delta > 0$, independent of $x(0) > 0$, such that
$\liminf_{t\to\infty} x(t) > \delta.$
Theorem 5.
The system (1) is uniformly persistent if
$f\hat{y} - d > 0 \quad (7)$
holds.
Proof. 
We will prove this theorem by the method of the average Lyapunov function.
Let the average Lyapunov function for system (1) be $\sigma(X) = x^p y^q z^r$, where $p$, $q$, $r$ are positive constants. Clearly, $\sigma(X)$ is a non-negative function defined in the interior of
$\mathbb{R}_+^3 = \{(x, y, z) : x > 0,\ y > 0,\ z > 0\}.$
Then, we have
$\Psi(X) = \frac{\dot{\sigma}(X)}{\sigma(X)} = p\frac{\dot{x}}{x} + q\frac{\dot{y}}{y} + r\frac{\dot{z}}{z},$
$\Psi(X) = p\left[1 - \frac{x}{K} - \frac{ay}{1+x} - h_1\right] + q\left[\frac{ax}{1+x} - bz - h_2\right] + r(fy - d).$
Furthermore, there are no periodic orbits in the interior of the positive quadrant of the $xy$-plane. Thus, to prove the uniform persistence of the system, it is enough to show that $\Psi(X) > 0$ in $\mathbb{R}_+^3$ for a suitable choice of $p, q, r > 0$:
$\Psi(E_0) = p(1 - h_1) - qh_2 - rd > 0, \quad (9)$
$\Psi(E_1) = q\left[\frac{aK(1 - h_1)}{1 + K(1 - h_1)} - h_2\right] - rd > 0, \quad (10)$
$\Psi(E_2) = r(f\hat{y} - d) > 0. \quad (11)$
We note that, when $h_1 < 1$, $\Psi(E_0)$ can be made positive by choosing $p$ large enough that $p(1 - h_1) > qh_2 + rd$; thus, inequality (9) holds. If $qaK(1 - h_1) > (qh_2 + rd)(1 + K(1 - h_1))$, then $\Psi(E_1)$ is positive and inequality (10) holds. If inequality (7) holds, then (11) is satisfied. □
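For a concrete parameter set, the choice of exponents $p$, $q$, $r$ required in the proof can be verified directly; the sketch below (ours) evaluates $\Psi$ at the boundary equilibria (9)–(11) for the parameters of Section 10 with $K = 9$ and one trial choice of $p$, $q$, $r$.

```python
# Sketch: checking that Psi is positive at E0, E1, E2 for a trial (p, q, r).
a, b, f, h1, h2, d, K = 1.5, 0.3, 0.65, 0.5, 0.3, 0.2, 9.0

x1 = K * (1 - h1)                                            # E1 component
y2 = (K * (a - h2) * (1 - h1) - h2) / (K * (a - h2) ** 2)    # E2 component

p, q, r = 10.0, 5.0, 1.0                                     # trial exponents
psi_E0 = p * (1 - h1) - q * h2 - r * d
psi_E1 = q * (a * x1 / (1 + x1) - h2) - r * d
psi_E2 = r * (f * y2 - d)
print(psi_E0 > 0, psi_E1 > 0, psi_E2 > 0)                    # all True => uniform persistence
```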

7. Introduction of Adaptive Back-Stepping Control in a Prey–Predator System

The adaptive back-stepping method combines back-stepping control with adaptive parameter-update laws. The back-stepping design starts from the first state equation, whose state variable is farthest (in integration order) from the control input. The second state variable is treated as a virtual control and replaced by a stabilizing function; this stabilizing function stabilizes the first state variable, and the error between the virtual control and the stabilizing function is denoted by η. For the second state equation, a new stabilizing law is then designed to replace the third state variable, and we step back towards the control signal. Thus, the term back-stepping refers to using the later state as a virtual control to stabilize the previous one. The direct Lyapunov method is used to stabilize the error between the virtual control and the stabilizing function, with a control Lyapunov function that is positive definite and quadratic in the errors. In this section, we study the system with susceptible prey, infected prey and a predator population controlled by back-stepping using a nonlinear feedback control approach. We begin by writing system (1) in a suitable form, introducing nonlinear feedback control inputs $u_1$, $u_2$, $u_3$ into the system to better analyze the prey–predator interactions:
$\frac{dx}{dt} = x\left(1 - \frac{x}{K}\right) - \frac{axy}{1+x} - h_1 x + u_1, \quad (12)$
$\frac{dy}{dt} = \frac{axy}{1+x} - byz - h_2 y + u_2, \quad (13)$
$\frac{dz}{dt} = fyz - dz + u_3, \quad (14)$
where $u_1$, $u_2$, $u_3$ are back-stepping nonlinear feedback controllers; they are functions of the state variables and are chosen to stabilize the trajectories of the whole system (12)–(14). As long as this feedback stabilizes system (1), $\lim_{t\to\infty}\|(x(t), y(t), z(t))\| = 0$.
Theorem 6.
The diseased prey–predator system (12)–(14) is globally asymptotically stable under the following adaptive nonlinear controls:
$u_1 = \frac{x^2}{K} - 2x + \hat{h}_1 x, \quad (15)$
$u_2 = \frac{ax^2}{1+x} - \frac{ax\eta_1}{1+x} + \hat{h}_2\eta_1 - \eta_1, \quad (16)$
$u_3 = b\eta_1^2 - f\eta_1\eta_2 + d\eta_2 - \eta_2, \quad (17)$
with the errors
$\eta_1 = y - \nu_1(x, y), \quad (18)$
$\eta_2 = z - \nu_2(x, y, z). \quad (19)$
Proof. 
Consider the parameter estimation errors
$e_a = a - \hat{a}, \quad e_b = b - \hat{b}, \quad e_{h_1} = h_1 - \hat{h}_1, \quad e_{h_2} = h_2 - \hat{h}_2, \quad e_d = d - \hat{d}, \quad e_f = f - \hat{f}. \quad (20)$
Considering Equation (12), the Lyapunov function of $x$ is given by
$V_1(x, e_a, e_{h_1}) = \tfrac{1}{2}x^2 + \tfrac{1}{2}e_a^2 + \tfrac{1}{2}e_{h_1}^2. \quad (21)$
Differentiating (20), we have
$\dot{e}_a = -\dot{\hat{a}}, \quad \dot{e}_b = -\dot{\hat{b}}, \quad \dot{e}_{h_1} = -\dot{\hat{h}}_1, \quad \dot{e}_{h_2} = -\dot{\hat{h}}_2, \quad \dot{e}_d = -\dot{\hat{d}}, \quad \dot{e}_f = -\dot{\hat{f}}.$
Differentiating (21) with respect to $t$, we have
$\dot{V}_1 = x\left[x\left(1 - \frac{x}{K}\right) - \frac{axy}{1+x} - h_1 x + u_1\right] + e_a(-\dot{\hat{a}}) + e_{h_1}(-\dot{\hat{h}}_1). \quad (22)$
Considering $y$ as a virtual controller, $y = \nu_1(x)$,
$\dot{V}_1 = x\left[x\left(1 - \frac{x}{K}\right) - \frac{ax\nu_1}{1+x} - h_1 x + u_1\right] + e_a(-\dot{\hat{a}}) + e_{h_1}(-\dot{\hat{h}}_1).$
Choosing $\nu_1 = 0$ and using the controller (15), (22) becomes
$\dot{V}_1 = x(-x - xe_{h_1}) + e_a(-\dot{\hat{a}}) + e_{h_1}(-\dot{\hat{h}}_1).$
The update laws for the unknown parameters are
$\dot{\hat{a}} = e_a \quad \text{and} \quad \dot{\hat{h}}_1 = -x^2 + e_{h_1}. \quad (24)$
Substituting (24) into (22), we get $\dot{V}_1 = -x^2 - e_a^2 - e_{h_1}^2$, which is a negative definite function, where
$\nu_1 = 0 \implies \eta_1 = y. \quad (25)$
Differentiating (25), we get
$\dot{\eta}_1 = \dot{y} = \frac{axy}{1+x} - byz - h_2 y + u_2. \quad (26)$
Now, from Equation (12), using the controller (15) together with (26), we get
$\dot{x} = -x - \frac{ax\eta_1}{1+x} - e_{h_1}x.$
Considering the Lyapunov function of $(x, \eta_1)$,
$V_2 = V_1 + \tfrac{1}{2}\eta_1^2 + \tfrac{1}{2}e_b^2 + \tfrac{1}{2}e_{h_2}^2. \quad (28)$
Differentiating (28) with respect to $t$, we get
$\dot{V}_2 = x\left[-x - \frac{ax\eta_1}{1+x} - e_{h_1}x\right] + \eta_1\left[\frac{ax\eta_1}{1+x} - b\eta_1 z - h_2\eta_1 + u_2\right] + e_a(-\dot{\hat{a}}) + e_{h_1}(-\dot{\hat{h}}_1) + e_b(-\dot{\hat{b}}) + e_{h_2}(-\dot{\hat{h}}_2). \quad (29)$
Again, considering a new virtual controller $z = \nu_2(x, y)$ with $\nu_2 = 0$, and using this in (29), we have
$\dot{V}_2 = -x^2 - e_{h_1}^2 - e_a^2 + e_{h_2}(-\dot{\hat{h}}_2) + e_b(-\dot{\hat{b}}) + \eta_1\left[-\frac{ax^2}{1+x} + \frac{ax\eta_1}{1+x} - h_2\eta_1 + u_2\right]. \quad (30)$
Now, choosing the controller (16) along with (30), we get
$\dot{V}_2 = -x^2 - e_a^2 - e_{h_1}^2 + e_{h_2}(-\dot{\hat{h}}_2) + e_b(-\dot{\hat{b}}) - e_{h_2}\eta_1^2 - \eta_1^2. \quad (31)$
The update laws for the unknown parameters $\hat{b}$ and $\hat{h}_2$ are
$\dot{\hat{b}} = e_b \quad \text{and} \quad \dot{\hat{h}}_2 = -\eta_1^2 + e_{h_2}. \quad (32)$
Substituting (32) into (31), we get
$\dot{V}_2 = -x^2 - e_b^2 - e_a^2 - e_{h_1}^2 - e_{h_2}^2 - \eta_1^2,$
which is again a negative definite function, where
$\nu_2 = 0 \implies \eta_2 = z. \quad (34)$
Differentiating (34) with respect to $t$, we have $\dot{\eta}_2 = \dot{z}$.
Now,
$\dot{\eta}_1 = \frac{ax\eta_1}{1+x} - b\eta_1 z - h_2\eta_1 + u_2, \quad (35)$
and the controller (16) along with (35) gives
$\dot{\eta}_1 = -b\eta_1\eta_2 - e_{h_2}\eta_1 - \eta_1 + \frac{ax^2}{1+x},$
$\dot{\eta}_2 = fyz - dz + u_3.$
Now, considering the Lyapunov function of $(x, \eta_1, \eta_2)$,
$V_3 = V_2 + \tfrac{1}{2}\eta_2^2 + \tfrac{1}{2}e_d^2 + \tfrac{1}{2}e_f^2. \quad (38)$
Differentiating (38) with respect to $t$ gives
$\dot{V}_3 = -x^2 - x^2 e_{h_1} - e_{h_2}\eta_1^2 - \eta_1^2 + \eta_2\left(-b\eta_1^2 + f\eta_1\eta_2 - d\eta_2 + u_3\right) + e_a(-\dot{\hat{a}}) + e_b(-\dot{\hat{b}}) + e_f(-\dot{\hat{f}}) + e_{h_1}(-\dot{\hat{h}}_1) + e_{h_2}(-\dot{\hat{h}}_2) + e_d(-\dot{\hat{d}}). \quad (39)$
The unknown parameters are updated by
$\dot{\hat{a}} = e_a, \quad \dot{\hat{b}} = e_b, \quad \dot{\hat{f}} = e_f, \quad \dot{\hat{h}}_2 = -\eta_1^2 + e_{h_2}, \quad \dot{\hat{d}} = e_d. \quad (40)$
Substituting the controller (17) together with the update laws (24) and (40) into Equation (39), we get $\dot{V}_3 = -x^2 - \eta_1^2 - \eta_2^2 - e_a^2 - e_b^2 - e_f^2 - e_{h_1}^2 - e_{h_2}^2 - e_d^2$, which is a negative definite function. Thus, by the Lyapunov stability theorem, system (12)–(14) is globally asymptotically stable with the adaptive back-stepping controllers. □
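The closed loop described by Theorem 6 can be simulated directly. The sketch below is our own reconstruction: it integrates (12)–(14) with the controllers (15)–(17) and the update laws (24), (32) and (40); the initial parameter estimates are arbitrary choices, and the error terms use the true parameter values exactly as in the Lyapunov argument above.

```python
# Sketch: closed-loop simulation of the back-stepping controlled system.
import numpy as np
from scipy.integrate import solve_ivp

a, b, f, h1, h2, d, K = 1.5, 0.3, 0.65, 0.5, 0.3, 0.2, 9.0

def controlled(t, s):
    x, y, z, ah, bh, fh, dh, h1h, h2h = s
    u1 = x**2 / K - 2 * x + h1h * x                      # controller (15)
    u2 = a * x**2 / (1 + x) - a * x * y / (1 + x) + h2h * y - y   # controller (16)
    u3 = b * y**2 - f * y * z + d * z - z                # controller (17)
    dx = x * (1 - x / K) - a * x * y / (1 + x) - h1 * x + u1
    dy = a * x * y / (1 + x) - b * y * z - h2 * y + u2
    dz = f * y * z - d * z + u3
    # update laws for the estimates (errors e = true value - estimate)
    return [dx, dy, dz,
            a - ah, b - bh, f - fh, d - dh,
            -x**2 + (h1 - h1h), -y**2 + (h2 - h2h)]

s0 = [2.0, 1.8, 1.0, 0.5, 0.5, 0.5, 0.5, 0.1, 0.1]
sol = solve_ivp(controlled, (0, 30), s0, rtol=1e-8)
print(sol.y[:3, -1])   # x, y, z driven towards the origin by the feedback
```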

8. Stochastic Analysis

Almost every natural process in an ecosystem is continuously subject to random environmental fluctuations. A stochastic treatment of an ecosystem gives additional insight into the dynamics of the populations by way of their variances. In a stochastic model, the model parameters oscillate about their average values [24,25,26,27]. Therefore, the steady state that we expected to be permanent will now oscillate around the mean state. A method to measure the mean-square fluctuations of the populations was proposed in [24] and applied effectively in [28]. Furthermore, several researchers, such as Samanta [29] and Maiti, Jana and Samanta [30], have used this stochastic approach to study local and global stability through the mean-square fluctuations of the population variances.
This section extends the deterministic model (1) by adding noise terms. There are several ways in which environmental noise may be incorporated into the model system (1); external noise may arise from random fluctuations of a finite number of parameters around their mean values, or of the population densities around some fixed values. Since an ecosystem is always subject to unsystematic environmental fluctuations, it is difficult to capture the usual phenomena with a purely deterministic model, and a stochastic investigation provides extra insight into the continuously changing dynamics of the ecological unit. The deterministic model (1) with the effect of random environmental noise leads to the stochastic system (41)–(43):
$\frac{dx}{dt} = x\left(1 - \frac{x}{K}\right) - \frac{axy}{1+x} - h_1 x + \alpha_1\xi_1(t), \quad (41)$
$\frac{dy}{dt} = \frac{axy}{1+x} - byz - h_2 y + \alpha_2\xi_2(t), \quad (42)$
$\frac{dz}{dt} = fyz - dz + \alpha_3\xi_3(t), \quad (43)$
where $\alpha_1$, $\alpha_2$, $\alpha_3$ are real constants and $\xi(t) = [\xi_1(t), \xi_2(t), \xi_3(t)]$ is a three-dimensional Gaussian white noise process with $E[\xi_i(t)] = 0$ for $i = 1, 2, 3$ and $E[\xi_i(t)\xi_j(t')] = \delta_{ij}\delta(t - t')$, where $\delta_{ij}$ is the Kronecker delta and $\delta$ is the Dirac delta function.
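Before linearizing, it is instructive to simulate (41)–(43) directly. The following Euler–Maruyama sketch is ours; the reflection at zero and the step size are ad hoc choices. It uses the "low-noise" intensities of Figure 5 and reports the sample variances of the three populations as a rough empirical counterpart of the variances computed below.

```python
# Sketch: Euler-Maruyama discretisation of the stochastic system (41)-(43).
import numpy as np

a, b, f, h1, h2, d, K = 1.5, 0.3, 0.65, 0.5, 0.3, 0.2, 9.0
alpha = np.array([1.0, 2.0, 3.0])          # low-noise intensities (Figure 5)
dt, n = 0.01, 20000
rng = np.random.default_rng(0)

s = np.array([2.0, 1.8, 1.0])
path = np.empty((n, 3))
for k in range(n):
    x, y, z = s
    drift = np.array([x * (1 - x / K) - a * x * y / (1 + x) - h1 * x,
                      a * x * y / (1 + x) - b * y * z - h2 * y,
                      f * y * z - d * z])
    s = np.maximum(s + drift * dt + alpha * np.sqrt(dt) * rng.standard_normal(3), 0.0)
    path[k] = s

print(path.var(axis=0))   # sample variances, a rough proxy for (67)-(69)
```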
Let
$x(t) = u_1(t) + S^*; \quad y(t) = u_2(t) + P^*; \quad z(t) = u_3(t) + T^*, \quad (44)$
$\frac{dx}{dt} = \frac{du_1(t)}{dt}; \quad \frac{dy}{dt} = \frac{du_2(t)}{dt}; \quad \frac{dz}{dt} = \frac{du_3(t)}{dt}. \quad (45)$
The linear parts of (41), (42) and (43) are (using (44) and (45))
$\dot{u}_1(t) = -S^* u_1(t) - aS^* u_2(t) + \alpha_1\xi_1(t), \quad (46)$
$\dot{u}_2(t) = aP^* u_1(t) - bP^* u_3(t) + \alpha_2\xi_2(t), \quad (47)$
$\dot{u}_3(t) = fT^* u_2(t) + \alpha_3\xi_3(t). \quad (48)$
Taking the Fourier transform of (46), (47) and (48), we get
$i\omega\tilde{u}_1(\omega) = -S^*\tilde{u}_1(\omega) - aS^*\tilde{u}_2(\omega) + \alpha_1\tilde{\xi}_1(\omega), \quad (49)$
$i\omega\tilde{u}_2(\omega) = aP^*\tilde{u}_1(\omega) - bP^*\tilde{u}_3(\omega) + \alpha_2\tilde{\xi}_2(\omega), \quad (50)$
$i\omega\tilde{u}_3(\omega) = fT^*\tilde{u}_2(\omega) + \alpha_3\tilde{\xi}_3(\omega). \quad (51)$
The matrix form of (49)–(51) is
$M(\omega)\tilde{u}(\omega) = \tilde{\xi}(\omega), \quad (52)$
where
$M(\omega) = \begin{pmatrix} i\omega + S^* & aS^* & 0 \\ -aP^* & i\omega & bP^* \\ 0 & -fT^* & i\omega \end{pmatrix}; \quad \tilde{u}(\omega) = \begin{pmatrix} \tilde{u}_1(\omega) \\ \tilde{u}_2(\omega) \\ \tilde{u}_3(\omega) \end{pmatrix}; \quad \tilde{\xi}(\omega) = \begin{pmatrix} \alpha_1\tilde{\xi}_1(\omega) \\ \alpha_2\tilde{\xi}_2(\omega) \\ \alpha_3\tilde{\xi}_3(\omega) \end{pmatrix}.$
Equation (52) can also be written as
$\tilde{u}(\omega) = [M(\omega)]^{-1}\tilde{\xi}(\omega), \quad (54)$
where
$[M(\omega)]^{-1} = \frac{1}{R(\omega) + iI(\omega)}\begin{pmatrix} A_1 & D_1 & G_1 \\ B_1 & E_1 & H_1 \\ C_1 & F_1 & I_1 \end{pmatrix} \quad (55)$
and
$A_1 = -\omega^2 + bfT^*P^*; \quad B_1 = i\omega aP^*; \quad C_1 = afP^*T^*; \quad D_1 = -i\omega aS^*; \quad E_1 = -\omega^2 + i\omega S^*; \quad F_1 = i\omega fT^* + fT^*S^*; \quad G_1 = abS^*P^*; \quad H_1 = -i\omega bP^* - bS^*P^*; \quad I_1 = -\omega^2 + i\omega S^* + a^2 S^*P^*.$
Here,
$|A_1|^2 = X_1^2 + Y_1^2; \quad |B_1|^2 = X_2^2 + Y_2^2; \quad |C_1|^2 = X_3^2 + Y_3^2; \quad |D_1|^2 = X_4^2 + Y_4^2; \quad |E_1|^2 = X_5^2 + Y_5^2; \quad |F_1|^2 = X_6^2 + Y_6^2; \quad |G_1|^2 = X_7^2 + Y_7^2; \quad |H_1|^2 = X_8^2 + Y_8^2; \quad |I_1|^2 = X_9^2 + Y_9^2,$
where
$X_1 = -\omega^2 + bfT^*P^*,\ Y_1 = 0; \quad X_2 = 0,\ Y_2 = a\omega P^*; \quad X_3 = afP^*T^*,\ Y_3 = 0; \quad X_4 = 0,\ Y_4 = -a\omega S^*; \quad X_5 = -\omega^2,\ Y_5 = \omega S^*; \quad X_6 = fT^*S^*,\ Y_6 = \omega fT^*; \quad X_7 = abS^*P^*,\ Y_7 = 0; \quad X_8 = -bS^*P^*,\ Y_8 = -\omega bP^*; \quad X_9 = -\omega^2 + a^2 S^*P^*,\ Y_9 = \omega S^*.$
Moreover, $|M(\omega)|^2 = [R(\omega)]^2 + [I(\omega)]^2$, where $R(\omega) = bfS^*T^*P^* - S^*\omega^2$ and $I(\omega) = -\omega^3 + \omega bfT^*P^* + a^2\omega S^*P^*$.
If the function $Y(t)$ has a zero mean value, then the fluctuation intensity (variance) of its components in the frequency interval $[\omega, \omega + d\omega]$ is $S_Y(\omega)\,d\omega$, where $S_Y(\omega)$ is the spectral density of $Y$, defined as
$S_Y(\omega) = \lim_{\tilde{T}\to\infty}\frac{\langle|\tilde{Y}(\omega)|^2\rangle}{\tilde{T}}.$
If $Y$ has a zero mean value, the inverse transform of $S_Y(\omega)$ is the autocovariance function
$C_Y(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_Y(\omega)e^{i\omega\tau}\,d\omega.$
The corresponding variance of fluctuations in $Y(t)$ is given by
$\sigma_Y^2 = C_Y(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_Y(\omega)\,d\omega,$
and the autocorrelation function is the normalized autocovariance
$P_Y(\tau) = \frac{C_Y(\tau)}{C_Y(0)}.$
For a Gaussian white noise process, it is
$S_{\xi_i\xi_j}(\omega) = \lim_{\tilde{T}\to\infty}\frac{E[\tilde{\xi}_i(\omega)\tilde{\xi}_j(\omega)]}{\tilde{T}} = \lim_{\tilde{T}\to\infty}\frac{1}{\tilde{T}}\int_{-\tilde{T}/2}^{\tilde{T}/2}\int_{-\tilde{T}/2}^{\tilde{T}/2}\langle\xi_i(t)\xi_j(t')\rangle\, e^{-i\omega(t - t')}\,dt\,dt' = \delta_{ij}.$
From (54), we have
$\tilde{u}_i(\omega) = \sum_{j=1}^{3}K_{ij}(\omega)\tilde{\xi}_j(\omega); \quad i = 1, 2, 3. \quad (59)$
From (59), we have
$S_{u_i}(\omega) = \sum_{j=1}^{3}\alpha_j|K_{ij}(\omega)|^2; \quad i = 1, 2, 3, \quad (61)$
where
$K_{ij}(\omega) = [M(\omega)]^{-1}_{ij}. \quad (62)$
Hence, by (61) and (62), the intensities of fluctuations in the variables $u_i$ ($i = 1, 2, 3$) are given by
$\sigma_{u_i}^2 = \frac{1}{2\pi}\sum_{j=1}^{3}\alpha_j\int_{-\infty}^{\infty}|K_{ij}(\omega)|^2\,d\omega; \quad i = 1, 2, 3, \quad (63)$
and, by (54), we obtain
$\sigma_{u_1}^2 = \frac{1}{2\pi}\left[\alpha_1\int_{-\infty}^{\infty}\frac{|\mathrm{Adj}(1)|^2}{|M(\omega)|^2}\,d\omega + \alpha_2\int_{-\infty}^{\infty}\frac{|\mathrm{Adj}(2)|^2}{|M(\omega)|^2}\,d\omega + \alpha_3\int_{-\infty}^{\infty}\frac{|\mathrm{Adj}(3)|^2}{|M(\omega)|^2}\,d\omega\right], \quad (64)$
$\sigma_{u_2}^2 = \frac{1}{2\pi}\left[\alpha_1\int_{-\infty}^{\infty}\frac{|\mathrm{Adj}(4)|^2}{|M(\omega)|^2}\,d\omega + \alpha_2\int_{-\infty}^{\infty}\frac{|\mathrm{Adj}(5)|^2}{|M(\omega)|^2}\,d\omega + \alpha_3\int_{-\infty}^{\infty}\frac{|\mathrm{Adj}(6)|^2}{|M(\omega)|^2}\,d\omega\right], \quad (65)$
$\sigma_{u_3}^2 = \frac{1}{2\pi}\left[\alpha_1\int_{-\infty}^{\infty}\frac{|\mathrm{Adj}(7)|^2}{|M(\omega)|^2}\,d\omega + \alpha_2\int_{-\infty}^{\infty}\frac{|\mathrm{Adj}(8)|^2}{|M(\omega)|^2}\,d\omega + \alpha_3\int_{-\infty}^{\infty}\frac{|\mathrm{Adj}(9)|^2}{|M(\omega)|^2}\,d\omega\right], \quad (66)$
where $\mathrm{Adj}(1), \ldots, \mathrm{Adj}(9)$ denote $A_1, B_1, C_1, D_1, E_1, F_1, G_1, H_1, I_1$, respectively.
From (55) and (64)–(66),
$\sigma_{u_1}^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{\alpha_1(X_1^2 + Y_1^2) + \alpha_2(X_2^2 + Y_2^2) + \alpha_3(X_3^2 + Y_3^2)}{R^2(\omega) + I^2(\omega)}\,d\omega, \quad (67)$
$\sigma_{u_2}^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{\alpha_1(X_4^2 + Y_4^2) + \alpha_2(X_5^2 + Y_5^2) + \alpha_3(X_6^2 + Y_6^2)}{R^2(\omega) + I^2(\omega)}\,d\omega, \quad (68)$
$\sigma_{u_3}^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{\alpha_1(X_7^2 + Y_7^2) + \alpha_2(X_8^2 + Y_8^2) + \alpha_3(X_9^2 + Y_9^2)}{R^2(\omega) + I^2(\omega)}\,d\omega, \quad (69)$
where
$|M(\omega)| = R(\omega) + iI(\omega).$
If we are interested in the dynamics of the stochastic system (41)–(43) with either $\alpha_1 = 0$, $\alpha_2 = 0$ or $\alpha_3 = 0$, the population variances are as follows. If $\alpha_1 = 0$, $\alpha_2 = 0$, then
$\sigma_{u_1}^2 = \frac{\alpha_3}{2\pi}\int_{-\infty}^{\infty}\frac{X_3^2 + Y_3^2}{R^2(\omega) + I^2(\omega)}\,d\omega; \quad \sigma_{u_2}^2 = \frac{\alpha_3}{2\pi}\int_{-\infty}^{\infty}\frac{X_6^2 + Y_6^2}{R^2(\omega) + I^2(\omega)}\,d\omega; \quad \sigma_{u_3}^2 = \frac{\alpha_3}{2\pi}\int_{-\infty}^{\infty}\frac{X_9^2 + Y_9^2}{R^2(\omega) + I^2(\omega)}\,d\omega.$
If $\alpha_2 = 0$, $\alpha_3 = 0$, then
$\sigma_{u_1}^2 = \frac{\alpha_1}{2\pi}\int_{-\infty}^{\infty}\frac{X_1^2 + Y_1^2}{R^2(\omega) + I^2(\omega)}\,d\omega; \quad \sigma_{u_2}^2 = \frac{\alpha_1}{2\pi}\int_{-\infty}^{\infty}\frac{X_4^2 + Y_4^2}{R^2(\omega) + I^2(\omega)}\,d\omega; \quad \sigma_{u_3}^2 = \frac{\alpha_1}{2\pi}\int_{-\infty}^{\infty}\frac{X_7^2 + Y_7^2}{R^2(\omega) + I^2(\omega)}\,d\omega.$
If $\alpha_3 = 0$, $\alpha_1 = 0$, then
$\sigma_{u_1}^2 = \frac{\alpha_2}{2\pi}\int_{-\infty}^{\infty}\frac{X_2^2 + Y_2^2}{R^2(\omega) + I^2(\omega)}\,d\omega; \quad \sigma_{u_2}^2 = \frac{\alpha_2}{2\pi}\int_{-\infty}^{\infty}\frac{X_5^2 + Y_5^2}{R^2(\omega) + I^2(\omega)}\,d\omega; \quad \sigma_{u_3}^2 = \frac{\alpha_2}{2\pi}\int_{-\infty}^{\infty}\frac{X_8^2 + Y_8^2}{R^2(\omega) + I^2(\omega)}\,d\omega.$
Equations (67)–(69) give the variances of the three populations; the integrals over the real line can be evaluated numerically, which yields the population fluctuation intensities.

9. Mathematical Model with Delay

In this section, we establish some conditions for oscillations of all positive solutions of the delay system
$\frac{dx}{dt}(t) = x(t)\left[1 - h_1 - \frac{x(t-\tau)}{K} - \frac{a\,y(t-\tau)}{1 + x(t-\tau)}\right], \quad (70)$
$\frac{dy}{dt}(t) = y(t)\left[-h_2 + \frac{a\,x(t-\tau)}{1 + x(t-\tau)} - b\,z(t-\tau)\right], \quad (71)$
$\frac{dz}{dt}(t) = z(t)\left[-d + f\,y(t-\tau)\right]. \quad (72)$
Here, the parameter $\tau \in \mathbb{R}^+$ is the delay. The proposed system depends not only on the present numbers of predators and prey but also on their numbers in the past: if $t$ is the present time, then $t - \tau$ is the past time.
According to Krisztin [31], a solution $(x(t), y(t), z(t))$ of (70)–(72) is called oscillatory if every component has arbitrarily large zeros; otherwise, $(x(t), y(t), z(t))$ is said to be a non-oscillatory solution. Whenever all solutions of (70)–(72) are oscillatory, we say that (70)–(72) is an oscillatory system.
In [32], Kubiaczyk and Saker studied the oscillatory behavior of the delay differential equation
$\dot{x}(t) + p\,x(t) - \frac{q\,x(t)}{r + x^n(t-\tau)} = 0,$
where $p, q, r, \tau \in \mathbb{R}^+$. Using similar methods to linearize each equation of the delay system, we will establish conditions for oscillations of all positive solutions of the system.
Now, we will analyze system (70)–(72) with the following initial conditions:
$x(\theta) \geq 0, \quad y(\theta) \geq 0, \quad z(\theta) \geq 0, \quad \theta \in (-\tau, 0), \quad \tau \geq 0. \quad (73)$
Using the same arguments as in Theorem 1, we can establish the following theorem:
Theorem 7.
All solutions $(x(t), y(t), z(t))$ of system (70)–(72) with the initial condition (73) are positive for all $t \geq 0$.
It is easy to see that the equilibrium points remain the same for the delayed system. However, it is important to know the oscillatory behavior of the solutions around the equilibrium points.
Theorem 8.
If
$\min_{\lambda\in\mathbb{R}}\left[\lambda^3 + \alpha\lambda^2 e^{-\lambda\tau} + (d\sigma + \beta\gamma)\lambda e^{-2\lambda\tau} + d\sigma\alpha\, e^{-3\lambda\tau}\right] > 0, \quad (74)$
where $\alpha = \frac{x^*}{K}$, $\beta = \frac{ad}{f(1+x^*)}$, $\gamma = \frac{ax^*}{1+x^*}$ and $\sigma = dz^*$, then all solutions of system (70)–(72) oscillate around $E_3$.
Proof. 
Let us consider system (70)–(72) and let
$x(t) = x^* e^{\phi_1(t)}, \quad (75)$
$y(t) = y^* e^{\phi_2(t)}, \quad (76)$
$z(t) = z^* e^{\phi_3(t)}. \quad (77)$
Then, $x(t)$, $y(t)$, $z(t)$ oscillate around $x^*$, $y^*$, $z^*$ if $\phi_1(t)$, $\phi_2(t)$, $\phi_3(t)$ oscillate around $0$. From (70)–(72) and (75)–(77), we have
$\frac{d\phi_1}{dt}(t) = 1 - h_1 - \frac{x^* e^{\phi_1(t-\tau)}}{K} - \frac{a y^* e^{\phi_2(t-\tau)}}{1 + x^* e^{\phi_1(t-\tau)}} = \left[1 - h_1 - \frac{x^*}{K} - \frac{a y^*}{1 + x^* e^{\phi_1(t-\tau)}}\right] - \frac{x^*}{K}\left(e^{\phi_1(t-\tau)} - 1\right) - \frac{a y^*}{1 + x^* e^{\phi_1(t-\tau)}}\left(e^{\phi_2(t-\tau)} - 1\right),$
$\frac{d\phi_2}{dt}(t) = -h_2 + \frac{a x^* e^{\phi_1(t-\tau)}}{1 + x^* e^{\phi_1(t-\tau)}} - b z^* e^{\phi_3(t-\tau)} = \left[-h_2 + \frac{a x^*}{1 + x^* e^{\phi_1(t-\tau)}} - b z^*\right] + \frac{a x^*}{1 + x^* e^{\phi_1(t-\tau)}}\left(e^{\phi_1(t-\tau)} - 1\right) - b z^*\left(e^{\phi_3(t-\tau)} - 1\right),$
$\frac{d\phi_3}{dt}(t) = -d + f y^* + f y^*\left(e^{\phi_2(t-\tau)} - 1\right).$
Let
$f(u) = e^u - 1.$
Note that
$u f(u) > 0 \text{ for } u \neq 0, \quad \text{and} \quad \lim_{u\to 0}\frac{f(u)}{u} = 1.$
Moreover, since $(x^*, y^*, z^*)$ is the equilibrium point $E_3$, we have
$1 - h_1 - \frac{x^*}{K} - \frac{a y^*}{1 + x^*} = 0, \quad -h_2 + \frac{a x^*}{1 + x^*} - b z^* = 0, \quad \text{and} \quad -d + f y^* = 0.$
Thus, the linearized system associated with system (75)–(77) is given by
$\frac{dm_1}{dt}(t) = -\frac{x^*}{K}m_1(t-\tau) - \frac{a y^*}{1 + x^*}m_2(t-\tau), \quad (79)$
$\frac{dm_2}{dt}(t) = \frac{a x^*}{1 + x^*}m_1(t-\tau) - b z^* m_3(t-\tau), \quad (80)$
$\frac{dm_3}{dt}(t) = f y^* m_2(t-\tau), \quad (81)$
and every solution of (79)–(81) oscillates if and only if the characteristic equation has no real roots (see Theorem 5.1.1 in [21]), i.e.,
$\det\left(\lambda I - A e^{-\lambda\tau}\right) \neq 0 \quad (82)$
for all $\lambda \in \mathbb{R}$, where $A$ is the coefficient matrix of (79)–(81). Equation (82) is equivalent to the equation
$\lambda^3 + \alpha\lambda^2 e^{-\lambda\tau} + (d\sigma + \beta\gamma)\lambda e^{-2\lambda\tau} + d\sigma\alpha\, e^{-3\lambda\tau} = 0, \quad (83)$
where $\alpha = \frac{x^*}{K}$, $\beta = \frac{ad}{f(1+x^*)}$, $\gamma = \frac{ax^*}{1+x^*}$ and $\sigma = dz^*$. In fact,
$\lim_{\lambda\to+\infty}\left[\lambda^3 + \alpha\lambda^2 e^{-\lambda\tau} + (d\sigma + \beta\gamma)\lambda e^{-2\lambda\tau} + d\sigma\alpha\, e^{-3\lambda\tau}\right] = +\infty$
and
$\lim_{\lambda\to-\infty}\left[\lambda^3 + \alpha\lambda^2 e^{-\lambda\tau} + (d\sigma + \beta\gamma)\lambda e^{-2\lambda\tau} + d\sigma\alpha\, e^{-3\lambda\tau}\right] = +\infty, \quad \text{since } d\sigma\alpha > 0.$
Then, by (74) and (78), Equation (83) has no real roots, so all solutions of system (79)–(81) oscillate, and hence all solutions of system (75)–(77) also oscillate. □
Example 1.
Let the parameters of system (70)–(72) be $a = 1.5$, $d = 0.2$, $b = 0.3$, $f = 0.65$, $h_1 = 0.5$, $h_2 = 0.3$, $K = 9$ and $\tau = 2$. In this case, condition (74) becomes
$\min_{\lambda\in\mathbb{R}}\left[\lambda^3 + 0.3995837778\,\lambda^2 e^{-\lambda\tau} + 0.234339529\,\lambda e^{-2\lambda\tau} + 0.046546037\,e^{-3\lambda\tau}\right] \approx 0.045 > 0,$
and consequently all solutions of the system oscillate around the equilibrium point $(3.596254004,\ 0.307692308,\ 2.912157597)$.
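The oscillation predicted by Theorem 8 for this parameter set can be observed numerically. The sketch below is ours: it integrates the delayed system (70)–(72) with a fixed-step Euler scheme and a history buffer; the constant history on $[-\tau, 0]$ and the step size are arbitrary choices.

```python
# Sketch: Euler scheme with a history buffer for the delayed system (70)-(72).
import numpy as np

a, b, f, h1, h2, d, K, tau = 1.5, 0.3, 0.65, 0.5, 0.3, 0.2, 9.0, 2.0
dt, T = 0.01, 400.0
lag = int(round(tau / dt))
n = int(round(T / dt))

hist = np.tile([3.0, 0.5, 2.0], (lag + 1, 1))    # constant history on [-tau, 0]
traj = np.empty((n, 3))
for k in range(n):
    x, y, z = hist[-1]                            # current state
    xd, yd, zd = hist[0]                          # delayed state at t - tau
    dx = x * (1 - h1 - xd / K - a * yd / (1 + xd))
    dy = y * (-h2 + a * xd / (1 + xd) - b * zd)
    dz = z * (-d + f * yd)
    new = hist[-1] + dt * np.array([dx, dy, dz])
    hist = np.vstack([hist[1:], new])
    traj[k] = new

print(traj[-2000:].min(axis=0), traj[-2000:].max(axis=0))  # band of the late-time oscillation around E3
```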

10. Numerical Simulations

Analytical studies can never be complete without numerical verification of the results. Moreover, since the parameters of the model are not based on real-world observations, the main features described by the simulations presented in this section should be considered from a qualitative rather than a quantitative point of view. We choose the parameters of system (1) as $a = 1.5$, $b = 0.3$, $f = 0.65$, $h_1 = 0.5$, $h_2 = 0.3$, $d = 0.2$, $K = 8$ with the initial densities $x(0) = 2$, $y(0) = 1.8$, $z(0) = 1$ and observe the dynamical behaviour of system (1). Figure 1a shows that the equilibrium point $E_1$ is locally asymptotically stable, and the corresponding phase portrait is shown in Figure 1b. Figure 2a shows that the equilibrium point $E_2$ is locally asymptotically stable, and the corresponding phase portrait is shown in Figure 2b. Figure 3a shows that the co-existence equilibrium point $E_3$ is locally asymptotically stable, and the corresponding phase portrait is shown in Figure 3b. From Figure 4a–d, keeping all other parameters fixed and increasing the transmission rate from $a = 1.5$ to $a = 2$, we observe that the oscillations settle down into a stable situation for all three species. Stability around this state implies extinction of the infected prey. This study interestingly suggests that harvesting of both prey classes prevents limit cycle oscillations, and that the combined effect of both harvests also prevents disease propagation in the system. We also conclude that the inclusion of stochastic perturbation creates a significant change in the intensity of the populations, because changing the noise parameters produces chaotic-looking dynamics with low, medium and high variances of oscillations (see Figure 5, Figure 6 and Figure 7).
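The effect of increasing the transmission rate described above can be reproduced with a short parameter sweep (our own sketch; the time window used to measure the late-time amplitude is an arbitrary choice).

```python
# Sketch: late-time amplitude envelope of system (1) for a = 1.5 and a = 2.
import numpy as np
from scipy.integrate import solve_ivp

b, f, h1, h2, d, K = 0.3, 0.65, 0.5, 0.3, 0.2, 9.0

def run(a):
    rhs = lambda t, s: [s[0] * (1 - s[0] / K) - a * s[0] * s[1] / (1 + s[0]) - h1 * s[0],
                        a * s[0] * s[1] / (1 + s[0]) - b * s[1] * s[2] - h2 * s[1],
                        f * s[1] * s[2] - d * s[2]]
    sol = solve_ivp(rhs, (0, 2000), [2.0, 1.8, 1.0],
                    t_eval=np.linspace(1500, 2000, 2000))
    return sol.y.min(axis=1), sol.y.max(axis=1)

for a in (1.5, 2.0):
    lo, hi = run(a)
    print(a, np.round(hi - lo, 4))   # a narrower envelope signals settled dynamics
```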

11. Conclusions

In this paper, we have studied the stability of a diseased prey–predator model with susceptible prey, infected prey and predators around an interior steady state. The positivity of the solutions and the boundedness of the system, together with the stability analysis of the boundary equilibria, provide all the information needed to establish persistence of the system. In the deterministic setting, theoretical epidemiologists are usually guided by the implicit assumption that the epidemic behaviour observed in nature corresponds to stable equilibria of the models. In Theorem 3, we gave the condition for stable co-existence. In Theorem 4, we proved global stability by using a Lyapunov function. We also worked out the condition under which all three species persist. The controllability conditions and the conditions for global asymptotic stability have been obtained using the adaptive back-stepping control approach with a suitable Lyapunov function. We also studied the stochastic perturbation of model (1), which generates a significant change in the intensity of the populations through low, medium and high variances of oscillations.

Author Contributions

Kalyan Das designed the mathematical model and did the entire analysis with compilation; M. N. Srinivas focused on the stochastic analysis part; V. Madhusudanan contributed to the section of adaptive back-stepping control analysis; Sandra Pinelas performed the analysis of the model with discrete time delay; the numerical simulations were carried out jointly by Kalyan Das and M. N. Srinivas.

Acknowledgments

The authors are grateful to the anonymous referee for their helpful comments and valuable suggestions, which led to the improvement of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kermack, W.O.; McKendrick, A.G. Contributions to the mathematical theory of epidemics. Part I. Proc. R. Soc. A 1927, 115, 700–721. [Google Scholar] [CrossRef]
  2. Anderson, R.M.; May, R.M. Population biology of infectious diseases: Part I. Nature 1979, 280, 361–367. [Google Scholar] [CrossRef] [PubMed]
  3. Anderson, R.M.; May, R.M. The invasion, persistence and spread of infectious diseases within animal and plant communities. Philos. Trans. R. Soc. Lon. 1986, B314, 533–570. [Google Scholar] [CrossRef] [PubMed]
  4. Hadeler, K.P.; Freedman, H.I. Predator–prey populations with parasitic infection. J. Math. Biol. 1989, 27, 609–631. [Google Scholar] [CrossRef] [PubMed]
  5. Haque, M. A prey–predator model with disease in the predator species only. Nonlinear Anal. Real World Appl. 2010, 11, 2224–2236. [Google Scholar] [CrossRef]
  6. Haque, M.; Venturino, E. An ecoepidemiological model with disease in predator: The ratio-dependent case. Math. Methods Appl. Sci. 2007, 30, 1791–1809. [Google Scholar] [CrossRef]
  7. Bhattacharyya, R.; Mukhopadhyay, B. On an eco-epidemiological model with prey harvesting and predator switching: local and global perspectives. Nonlinear Anal. Real World Appl. 2010, 11, 3824–3833. [Google Scholar] [CrossRef]
  8. Bairagi, N.; Roy, P.K.; Chattopadhyay, J. Role of infection on the stability of a prey–predator system with several functional responses—A comparative study. J. Theor. Biol. 2007, 248, 10–25. [Google Scholar] [CrossRef] [PubMed]
  9. Han, L.; Ma, Z.; Hethcote, H.W. Four predator prey models with infectious diseases. Math. Comput. Model. 2001, 34, 849–858. [Google Scholar] [CrossRef]
  10. Das, K. A Mathematical study of a predator-prey dynamics with disease in predator. ISRN Appl. Math. 2011, 2011. [Google Scholar] [CrossRef]
  11. Pal, A.K.; Samanta, G.P. Stability analysis of an eco-epidemiological model incorporating a prey refuge. Nonlinear Anal. Model. Control 2010, 15, 473–491. [Google Scholar]
  12. Naji, R.K.; Mustafa, A.N. The dynamics of an eco-epidemiological model with nonlinear incidence rate. J. Appl. Math. 2012, 852631. [Google Scholar] [CrossRef]
  13. Naji, R.K.; Naseen, R.M. Modeling and Stability analysis of eco-epidemiological model. Iraqi J. Sci. 2013, 54, 374–385. [Google Scholar]
  14. Haque, M.; Sarwardi, S.; Preston, S.; Venturino, E. Effect of delay in a Lotka–Volterra type prey–predator model with a transmissible disease in the predator species. Math. Biosci. 2011, 234, 47–57. [Google Scholar] [CrossRef] [PubMed]
  15. Hilker, F.M.; Malchow, H. Strange periodic attractors in a prey–predator system with infected prey. Math. Popul. Stud. 2006, 13, 119–134. [Google Scholar] [CrossRef]
  16. Rahman, S.; Chakravarthy, S. A predator–prey model with disease in prey. NonLinear Anal. Model. Control 2013, 18, 191–209. [Google Scholar]
  17. Samanta, G.P. Analysis of a delay nonautonomous predator–prey system with disease in the prey. Nonlinear Anal. Model. Control 2010, 15, 97–108. [Google Scholar]
  18. El-Gohary, A.; Al-Ruzaiza, A.S. Optimal control of the equilibrium state of a prey–predator model. Nonlinear Dyn. Psychol. Life Sci. 2002, 6, 27–36. [Google Scholar] [CrossRef]
  19. El-Gohary, A.; Al-Ruzaiza, A.S. Chaos and Adaptive control in two prey, one predator system with nonlinear feedback. Chaos Solitons Fractals 2007, 34, 443–453. [Google Scholar] [CrossRef]
  20. Madhusudanan, V.; Bapu, B.R.T. Dynamics of diseased prey predator model with nonlinear feedback control. Appl. Math. Inf. Sci. 2017, 11, 1–8. [Google Scholar] [CrossRef]
  21. Gholizadeh, A.; Nik, H.S. Analysis and control of a three-dimensional autonomous chaotic system. Appl. Math. Inf. Sci. 2015, 9, 739–747. [Google Scholar]
  22. Al-Ruzaiza, A.S. The chaos and control of prey–predator system with some unknown parameters. Appl. Math. Sci. 2009, 3, 1361–1374. [Google Scholar]
  23. Birkhoff, G.; Rota, G.C. Ordinary Differential Equations; Ginn and Company: Boston, MA, USA, 1982. [Google Scholar]
  24. Nisbet, R.M.; Gurney, W.S.C. The systematic formulation of population models for insects with dynamically varying instar duration. Theor. Popul. Biol. 1983, 23, 114–135. [Google Scholar] [CrossRef]
  25. Nisbet, R.M.; Gurney, W.S.C. Modelling Fluctuating Populations; John Wiley: New York, NY, USA, 1982. [Google Scholar]
  26. Sauer, T.D.; Alligood, K.T.; Yorke, J.A. Chaos: An Introduction to Dynamical Systems; Springer: New York, NY, USA, 2000. [Google Scholar]
  27. Smale, S.; Hirsch, M.W.; Devaney, R. Differential Equations, Dynamical Systems and an Introduction to Chaos; Academic Press: Cambridge, MA, USA, 2004. [Google Scholar]
  28. Tapaswi, P.K.; Mukhopadhyay, A. Effects of environmental fluctuation on plankton allelopathy. J. Math. Biol. 1999, 39, 39–58. [Google Scholar] [CrossRef]
  29. Samanta, G.P. Stochastic analysis of a prey–predator system. Int. J. Math. Edu. Sci. Technol. 1994, 25, 797–803. [Google Scholar] [CrossRef]
  30. Maiti, A.; Jana, M.M.; Samanta, G.P. Deterministic and stochastic analysis of a ratio dependent predator-prey system with delay. Nonlinear Anal. Model. Control 2007, 12, 383–398. [Google Scholar]
  31. Krisztin, T. Oscillations in linear functional differential systems. Differ. Equ. Dyn. Syst. 1994, 2, 99–112. [Google Scholar]
  32. Kubiaczyk, I.; Saker, S.H. Oscillation and stability in nonlinear delay differential equations of population dynamics. Math. Comput. Model. 2002, 35, 295–301. [Google Scholar] [CrossRef]
Figure 1. (a) time series evolution of the populations of the system (1); (b) phase-space trajectories corresponding to the stabilities of the population with the parameters a = 1.5 , b = 0.3 , f = 0.65 , h 1 = 0.5 , h 2 = 0.3 , d = 0.2 , K = 8 .
Figure 2. (a) time series evolution of the populations of system (1); (b) phase-space trajectories corresponding to the stabilities of the population with the parameters a = 1.5 , b = 0.3 , f = 0.65 , h 1 = 0.5 , h 2 = 0.3 , d = 0.2 , K = 9 .
Figure 3. (a) time series evolution of the populations of system (1); (b) phase-space trajectories corresponding to the stabilities of the population with the parameters a = 1.65 , b = 0.3 , f = 0.65 , h 1 = 0.5 , h 2 = 0.3 , d = 0.2 , K = 9 .
Figure 4. (ac) time series evolution of the populations of system (1); (d) phase-space trajectories corresponding to the stabilities of the population with the parameters a = 2 , b = 0.3 , f = 0.65 , h 1 = 0.5 , h 2 = 0.3 , d = 0.2 , K = 9 .
Figure 5. (a) the oscillations of populations against time with low intensities (low noise) of parameters α 1 = 1 , α 2 = 2 , α 3 = 3 ; (b) the phase-portrait diagram of populations under random low noise of parameters α 1 = 1 , α 2 = 2 , α 3 = 3 .
Figure 6. (a) the oscillations of populations against time with medium intensities (medium noise) of parameters α 1 = 6 , α 2 = 8 , α 3 = 8 ; (b) the phase-portrait diagram of populations under random medium noise of parameters α 1 = 6 , α 2 = 8 , α 3 = 8 .
Figure 7. (a) the oscillations of populations against time with high intensities (high noise) of parameters α 1 = 10 , α 2 = 20 , α 3 = 30 ; (b) the phase-portrait diagram of populations under random high noise of parameters α 1 = 10 , α 2 = 20 , α 3 = 30 .
