Article

Solving a System of One-Dimensional Hyperbolic Delay Differential Equations Using the Method of Lines and Runge-Kutta Methods

1 Department of Mathematics, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur 603203, Tamilnadu, India
2 Department of Mathematics, Texas A&M University-Kingsville, Kingsville, TX 78363, USA
3 Department of Mathematics and Systems Engineering, Florida Institute of Technology, Melbourne, FL 32901, USA
* Authors to whom correspondence should be addressed.
Computation 2024, 12(4), 64; https://doi.org/10.3390/computation12040064
Submission received: 23 February 2024 / Revised: 18 March 2024 / Accepted: 25 March 2024 / Published: 27 March 2024

Abstract
In this paper, we consider a system of one-dimensional hyperbolic delay differential equations (HDDEs) and their corresponding initial conditions. HDDEs are a class of differential equations that involve a delay term, which represents the effect of past states on the present state. The delay term poses a challenge for the application of standard numerical methods, which usually require the evaluation of the differential equation at the current step. To overcome this challenge, various numerical methods and analytical techniques have been developed specifically for solving systems of first-order HDDEs. In this study, we investigate these challenges and present some analytical results, such as the maximum principle and stability conditions. Moreover, we examine the propagation of discontinuities in the solution, which provides a comprehensive framework for understanding its behavior. To solve this problem, we employ the method of lines, a technique that converts a partial differential equation into a system of ordinary differential equations (ODEs). We then use the Runge–Kutta method, a numerical scheme that solves ODEs with high accuracy and stability. We prove the stability and convergence of our method, and we show that the error of our solution is of order $O(\Delta t + \bar{h}^4)$, where $\Delta t$ is the time step and $\bar{h}$ is the maximum spatial step. We also conduct numerical experiments to validate and evaluate the performance of our method.

1. Introduction

Delay Differential Equations (DDEs) are a type of differential equations that incorporate delays. They are useful for modeling complex systems that exhibit time- and space-dependent behaviors, such as epidemics, neuronal activity, and wave phenomena. By introducing delays into mathematical models, researchers can capture the effects of memory, feedback, or propagation that are present in real-world situations. This can lead to more accurate and realistic representations of the dynamics of the system, as well as better predictions and control strategies. Solving DDEs requires providing values for unknown functions within defined intervals, rather than just at initial points. This is because the value of an unknown function at a certain time depends on its value at some previous time, which reflects the delay effect. The delay can be constant or variable, and it can affect one or more terms in the equation. The presence of delays can significantly alter the qualitative behavior of the model, such as its stability, equilibrium, and periodicity. For example, consider an epidemic model that uses generalized logistic dynamics to describe the growth and decline of the susceptible and infected populations. If the duration of infection is constant, the model can exhibit periodic solutions, meaning that the populations oscillate between high and low levels. However, if the duration of infection is delayed, meaning that it depends on the past state of the system, the model can exhibit more complex behaviors, such as bifurcations, chaos, and extinction. This shows how delays can affect the outcome of the epidemic and the effectiveness of interventions [1,2,3]. Another example is a model that describes the activity of neurons in the brain. Neurons are cells that communicate with each other through electrical and chemical signals. The signals travel along the axons and synapses of the neurons, which introduce delays in the transmission. 
The delays can vary depending on the distance, the type, and the state of the neurons. The model also accounts for stochastic effects, meaning that the signals are subject to random fluctuations and noise. These effects can result from the excitation or inhibition of the neurons, which depend on the input from other neurons or external stimuli. The model can capture the inherent variability and unpredictability of the neuronal system, as well as its ability to adapt and learn [4,5,6,7]. Reference [8] investigates the intricate dynamic characteristics inherent in heat exchangers, pivotal components extensively employed in the chemical industry for thermal management. It not only delves into the theoretical foundations but also provides illuminating real-world examples. By elucidating the mathematical intricacies, the reference serves as a valuable resource for understanding and optimizing the dynamic behavior of heat exchangers, offering a comprehensive exploration of their applications in chemical engineering [8]. In a broader mathematical context, hyperbolic partial differential equations (HPDEs) are often encountered. These equations arise in various fields and play a key role in understanding and describing wave phenomena. Examples of HPDEs include the wave equation and the telegraph equation, which are used to study classical physics phenomena such as water waves, sound waves, and seismic waves [9]. These equations can also incorporate delays, which can represent the effects of dispersion, dissipation, or diffusion. Advanced numerical methods have been developed to solve hyperbolic delay partial differential equations, with special attention given to techniques such as the Forward Time Backward Space (FTBS) and Backward Time Backward Space (BTBS) methods. These methods can handle the challenges posed by the delays, such as non-linearity, instability, and boundary conditions [10,11].
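To make the FTBS idea concrete, here is a minimal sketch for a scalar constant-coefficient model problem $u_t + a\,u_x + c\,u(x-\delta,t) = 0$ with zero history; the coefficients, initial profile, and grid sizes are illustrative choices of ours, not taken from any of the cited works:

```python
import numpy as np

def ftbs_delay(a=1.0, c=0.5, delta=0.25, xf=1.0, T=0.5, N=200, M=400):
    """Forward Time Backward Space (upwind) scheme for
    u_t + a*u_x + c*u(x - delta, t) = 0 on [0, xf] x [0, T],
    with u(x, t) = 0 for x < 0 (history) and u(x, 0) = sin(pi x)^2.
    All parameter values are illustrative."""
    h, dt = xf / N, T / M
    assert a * dt / h <= 1.0, "CFL condition for the explicit upwind step"
    x = np.linspace(0.0, xf, N + 1)
    u = np.sin(np.pi * x) ** 2
    s = int(round(delta / h))  # the delay expressed as a shift in grid points
    for _ in range(M):
        # u(x - delta) at the current level; zero history to the left of x = 0
        shifted = np.concatenate([np.zeros(s), u[:N + 1 - s]])
        un = u.copy()
        un[1:] = u[1:] - a * dt / h * (u[1:] - u[:-1]) - dt * c * shifted[1:]
        un[0] = 0.0  # inflow boundary value
        u = un
    return x, u
```

With $a\,\Delta t/h \le 1$ the explicit step is stable, and the delayed term only shifts already-computed grid values; this is what makes the delay tractable for such schemes.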
Researchers have devoted extensive efforts to the analysis of convergence and numerical treatments for both ordinary delay differential equations (DDEs) and hyperbolic partial differential equations (PDEs) [12,13]. When dealing with DDEs, which involve delays in the state variables, the use of numerical techniques poses a substantial challenge. A widely used approach is the method of lines, which involves discretizing the spatial derivatives in hyperbolic equations and obtaining systems of ordinary differential equations (ODEs). The solution of these ODEs can be efficiently obtained using Runge–Kutta methods, which improve the performance of the numerical solution process. This allows for the effective approximation of DDE solutions. However, these numerical techniques are not without drawbacks. The computational demands can be high, especially for large-scale or complex problems. Furthermore, the convergence analysis for both DDEs and hyperbolic PDEs is a difficult task that requires careful attention and computational effort [14,15]. These techniques are useful for solving various problems in science and engineering, but they often require significant computational resources [16]. Another topic that has been explored in depth is the maximum principle, which reveals the implications and practical significance of applying hyperbolic, parabolic, and elliptical differential equations to various phenomena [17,18]. Implicit Runge–Kutta (IRK) methods are numerical approaches for solving ordinary differential equations (ODEs). Unlike explicit methods, IRK tackles stiff ODEs by involving algebraic equations at each stage. This makes it adept at handling problems where certain components evolve at distinct rates. In IRK, each step necessitates solving a system of equations, often using iterative methods. Its application shines in scenarios with varying timescales, like chemical reactions or electrical circuits. 
The method’s formulation includes coefficients dictating its accuracy and stability. Higher-order methods enhance precision, but stability analysis is crucial for reliable solutions. IRK is commonly employed in stiff systems where explicit methods become computationally demanding. It finds use in solving systems of differential-algebraic equations and time-dependent partial differential equations, offering accurate and stable results. The theoretical understanding involves delving into the roles of the coefficients, stability analysis, convergence properties, and practical implementation using iterative solvers. Implicit Runge–Kutta (IRK) methods are a sophisticated class of time discretization schemes that stand out for their advanced features compared to other methods. These schemes have higher orders of accuracy, which means they can produce more accurate solutions with fewer steps. They also exhibit desirable stability properties, which means they can handle stiff or oscillatory problems without numerical instability. Moreover, they have effective error estimators, which can provide an estimate of the local or global error of the numerical solution. These features make IRK methods suitable for optimizing the time step needed to ensure stable and accurate solutions while maintaining the dispersion and dissipation at fixed levels [19,20,21]. Implicit Runge–Kutta (IRK) methods are a cutting-edge group of schemes within the range of advanced time discretization schemes [22]. On a different track, Runge–Kutta methods have been carefully developed to overcome the difficulties of solving systems of ordinary differential equations that arise from discretizing the spatial derivatives in hyperbolic equations using the method of lines approach. Cubic Hermite Interpolation is a technique used to create a smooth curve between given data points, where both the function values and their derivatives are known. It ensures that the resulting curve is continuous and differentiable.
This method relies on cubic polynomials to connect adjacent data points, with coefficients determined to satisfy conditions at each point. The process involves setting up and solving a system of equations to find these coefficients. The interpolation polynomial preserves both the function values and their derivatives at each data point, creating a piecewise continuous and differentiable curve. Cubic Hermite Interpolation is commonly applied in computer graphics and computer-aided design to achieve accurate and visually pleasing interpolations of complex curves. Its versatility makes it valuable in situations where precise control over both function and derivative information is essential. The main goal of these methods is to accurately adjust the time step needed to achieve stable and accurate solutions while keeping the dispersion and dissipation constants unchanged. Implicit Runge–Kutta (IRK) methods, in particular, are known for their advanced features and efficiency. They have high orders of accuracy, which make them especially relevant in situations where precision is crucial. Their stability properties ensure the generation of reliable and robust numerical solutions, complemented by error estimators for rigorous accuracy assessment. The distinctive feature of IRK methods is their skillful use of the structure derived from carefully selected time discretization formulae, which enables the customization of the method based on the specific needs and characteristics of the problem. In this investigation, our computational endeavors were facilitated by a computer boasting an Intel Core i5 processor paired with 16 GB of RAM. The selection of this specific hardware configuration was driven by the need for an optimal blend of processing capability and memory capacity. The Intel Core i5 processor ensured efficient execution of our simulations, while the 16 GB of RAM proved instrumental in handling sizable datasets.
Notably, the computing time for our experiments was impressively brief, clocking in at an elapsed time of 1.312139 s. This swift computational performance underscores the efficiency of our chosen hardware setup and lays a foundation for the subsequent exploration of our research methodology and results.
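Since cubic Hermite interpolation is used later for the delayed terms, a minimal two-point version can be sketched as follows; this is the standard Hermite basis on a reference interval, and the function names are ours:

```python
def hermite_cubic(x0, x1, f0, f1, d0, d1, x):
    """Cubic Hermite interpolant on [x0, x1] matching the values f0, f1
    and the derivatives d0, d1 at the two endpoints."""
    h = x1 - x0
    t = (x - x0) / h  # map x to the reference interval [0, 1]
    h00 = (1 + 2 * t) * (1 - t) ** 2  # value basis at x0
    h10 = t * (1 - t) ** 2            # slope basis at x0
    h01 = t ** 2 * (3 - 2 * t)        # value basis at x1
    h11 = t ** 2 * (t - 1)            # slope basis at x1
    return h00 * f0 + h * h10 * d0 + h01 * f1 + h * h11 * d1
```

Because the interpolant is exact for cubics, its local error is fourth order in the spacing, which is the property the spatial accuracy of the delayed terms relies on.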
This work is organized as follows: Section 2 contains the problem statement. Section 3 presents the maximum principle and the stability result that follows from it. In Section 4, we describe the time semi-discrete problem obtained with a backward Euler scheme in the temporal direction. In Section 5, we discretize the spatial domain using the fourth-order Runge–Kutta method with piecewise cubic Hermite interpolation. In Section 6, we present some numerical results and compare them with the analytical solutions. Finally, conclusions are presented in Section 7.

2. Problem Statement

Works from [10,11] motivate us to study the following problem: We find $\bar{u} = (u_1, u_2, \ldots, u_n)$, $u_1, u_2, \ldots, u_n \in C(\bar{D}) \cap C^{(1,1)}(D)$, such that
$$\bar{L}\bar{u} := \bar{u}_t + A\bar{u}_x + B\bar{u}(x,t) + C\bar{u}(x-\delta,t) = \bar{f}(x,t), \quad (x,t)\in D, \tag{1}$$
$$\bar{u} = \bar{\phi}(x,t), \quad (x,t)\in[-\delta,0]\times[0,T], \tag{2}$$
$$\bar{u}(x,0) = \bar{u}_0(x), \quad x\in[0,x_f], \qquad \bar{\phi}(0,0) = \bar{u}_0(0), \tag{3}$$
where $\bar{L} = (L_1, L_2, \ldots, L_n)^T$, $\bar{f} = (f_1, f_2, \ldots, f_n)^T$, $\bar{\phi} = (\phi_1, \phi_2, \ldots, \phi_n)^T$, $\bar{u}_0 = (u_{0,1}, u_{0,2}, \ldots, u_{0,n})^T$,
$$A = \begin{pmatrix} a_{11}(x,t) & 0 & \cdots & 0 \\ 0 & a_{22}(x,t) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn}(x,t) \end{pmatrix}, \quad B = \begin{pmatrix} b_{11}(x,t) & b_{12}(x,t) & \cdots & b_{1n}(x,t) \\ b_{21}(x,t) & b_{22}(x,t) & \cdots & b_{2n}(x,t) \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1}(x,t) & b_{n2}(x,t) & \cdots & b_{nn}(x,t) \end{pmatrix}, \quad C = \begin{pmatrix} c_{11}(x,t) & c_{12}(x,t) & \cdots & c_{1n}(x,t) \\ c_{21}(x,t) & c_{22}(x,t) & \cdots & c_{2n}(x,t) \\ \vdots & \vdots & \ddots & \vdots \\ c_{n1}(x,t) & c_{n2}(x,t) & \cdots & c_{nn}(x,t) \end{pmatrix}.$$
The above Equation (1) can be written as
$$\bar{L}\bar{u} := \begin{cases} \bar{u}_t + A\bar{u}_x + B\bar{u} = \bar{f} - C\bar{\phi}(x-\delta,t), & (x,t)\in[0,\delta]\times(0,T],\\[2pt] \bar{u}_t + A\bar{u}_x + B\bar{u} = \bar{f} - C\bar{u}(x-\delta,t), & (x,t)\in(\delta,x_f]\times(0,T], \end{cases} \tag{4}$$
$$\bar{u}(0,t) = \bar{\phi}(0,t), \quad t\in[0,T], \qquad \bar{u}(x,0) = \bar{u}_0(x), \quad x\in[0,x_f], \tag{5}$$
where $a_{ii} \ge \alpha_i > 0$, $b_{ii} \ge \beta_i \ge 0$, $b_{ij} \le 0$ for $i \ne j$, $c_{ij} \le 0$, $D = (0,x_f]\times(0,T]$, and $\delta \le x_f$; here, $x_f$ and $\delta$ are fixed constants. The functions $a_{ij}$, $b_{ij}$, and $c_{ij}$ are sufficiently differentiable on their domains.
Note: If all the coefficients $a_{ij}$, $b_{ij}$, $c_{ij}$, and $f_k$ are continuous functions of $t$ on a compact set, then the above system has a solution; see [23].

3. Stability Analysis

In this section, we present the maximum principle for Problems (4) and (5), a system of partial differential equations with initial conditions, together with the stability result that follows from it.
Theorem 1
(Maximum Principle). Let $\bar{\psi} = (\psi_1, \psi_2, \ldots, \psi_n)$, $\psi_1, \psi_2, \ldots, \psi_n \in C(\bar{D}) \cap C^{(1,1)}(D)$, be any function satisfying $\bar{L}\bar{\psi} \ge \bar{0}$, $(x,t)\in D$, $\bar{\psi}(0,t) \ge \bar{0}$, $t\in[0,T]$, and $\bar{\psi}(x,0) \ge \bar{0}$, $x\in[0,x_f]$. Then, $\bar{\psi}(x,t) \ge \bar{0}$, $(x,t)\in\bar{D}$.
A consequence of the above theorem is the following stability result:
Theorem 2
(Stability Result). Let $\bar{\psi} = (\psi_1, \psi_2, \ldots, \psi_n)$, $\psi_1, \psi_2, \ldots, \psi_n \in C(\bar{D}) \cap C^{(1,1)}(D)$, be any function; then,
$$|\psi_k(x,t)| \le C_1 \max\Big\{ \max_{t} |\bar{\psi}(0,t)|,\; \max_{x} |\bar{\psi}(x,0)|,\; \max_{k}\, \sup_{(x,t)\in D} |L_k\bar{\psi}(x,t)| \Big\}, \quad (x,t)\in\bar{D},$$
where C 1 is a constant.

3.1. Propagation of Discontinuities

We recall that System (1)–(3) consists of n partial differential equations. Now, let us focus on the kth equation of the system, fixing the time variable t at a constant value, which we can write as
$$L_k\bar{u} = \frac{\partial u_k}{\partial t} + a_{kk}\frac{\partial u_k}{\partial x} + \sum_{l=1}^{n} b_{kl}\,u_l(x,t) + \sum_{l=1}^{n} c_{kl}\,u_l(x-\delta,t) = f_k(x,t).$$
It is assumed that $\phi_k(0,t) = u_k(0,t)$, $k = 1, \ldots, n$, for $t\in[0,T]$. Differentiating the equation partially with respect to $x$ gives
$$a_{kk}u_{k,xx} = f_{k,x} - u_{k,xt} - a_{kk,x}u_{k,x} - \sum_{l=1}^{n}\big[b_{kl,x}u_l + b_{kl}u_{l,x}\big] - \sum_{l=1}^{n}\big[c_{kl,x}u_l(x-\delta,t) + c_{kl}u_{l,x}(x-\delta,t)\big],$$
so that
$$\begin{aligned} \lim_{x\to\delta^-} a_{kk}u_{k,xx} &= f_{k,x}(\delta^-,t) - u_{k,xt}(\delta^-,t) - a_{kk,x}u_{k,x}(\delta^-,t) - \sum_{l=1}^{n}\big[b_{kl,x}(\delta^-,t)u_l(\delta^-,t) + b_{kl}(\delta^-,t)u_{l,x}(\delta^-,t)\big]\\ &\quad - \sum_{l=1}^{n}\big[c_{kl,x}(\delta^-,t)u_l(0^-,t) + c_{kl}(\delta^-,t)u_{l,x}(0^-,t)\big]\\ &= f_{k,x}(\delta^-,t) - u_{k,xt}(\delta^-,t) - a_{kk,x}u_{k,x}(\delta^-,t) - \sum_{l=1}^{n}\big[b_{kl,x}(\delta^-,t)u_l(\delta^-,t) + b_{kl}(\delta^-,t)u_{l,x}(\delta^-,t)\big]\\ &\quad - \sum_{l=1}^{n}\big[c_{kl,x}(\delta^-,t)\phi_l(0^-,t) + c_{kl}(\delta^-,t)\phi_{l,x}(0^-,t)\big], \end{aligned}$$
and
$$\begin{aligned} \lim_{x\to\delta^+} a_{kk}u_{k,xx} &= f_{k,x}(\delta^+,t) - u_{k,xt}(\delta^+,t) - a_{kk,x}u_{k,x}(\delta^+,t) - \sum_{l=1}^{n}\big[b_{kl,x}(\delta^+,t)u_l(\delta^+,t) + b_{kl}(\delta^+,t)u_{l,x}(\delta^+,t)\big]\\ &\quad - \sum_{l=1}^{n}\big[c_{kl,x}(\delta^+,t)\phi_l(0^+,t) + c_{kl}(\delta^+,t)\phi_{l,x}(0^+,t)\big]. \end{aligned}$$
Hence, $a_{kk}(\delta^+,t)u_{k,xx}(\delta^+,t) \ne a_{kk}(\delta^-,t)u_{k,xx}(\delta^-,t)$ in general. Similarly, we can show that $u_{k,xxx}(2\delta^-,t) \ne u_{k,xxx}(2\delta^+,t)$, $k = 1, 2, \ldots, n$. The points $\delta, 2\delta, 3\delta, \ldots$ are the primary discontinuities [2].

3.2. Derivative Bounds

From the given differential Equations (1)–(3), we can derive the following bounds on the derivatives:
Theorem 3.
Let $\bar{u}$ be the exact solution of the system of partial differential Equations (1)–(3). Then, the derivatives satisfy the estimate $\left|\dfrac{\partial^{\,i+j}u_k}{\partial x^i\,\partial t^j}(x,t)\right| \le C$, $0 \le i+j \le 2$, $k = 1, 2, \ldots, n$.

4. Semi-Discretization in Temporal Direction

We divide the time interval $[0,T]$ into $M$ subintervals of equal length and denote the resulting time grid by $\Omega_t^M = \{t_j = j\Delta t\}_{j=0}^{M}$, where $\Delta t = T/M$ is the time step size. Using this grid, we apply a finite difference method to discretize partial differential Equations (1)–(3) in the time variable. We assume that the initial conditions are given by $u_k^0(x) = u_{k,0}(x)$, $x\in[0,x_f]$. Moreover, we define $u_k^j(x)$ as the approximate value of $u_k(x,t_j)$ at spatial point $x$ and time level $t_j$.
$$L_k^j u_k^j(x) := D_{k,t}^{-} u_k^j(x) + a_{kk}(x,t_j)\frac{du_k^j}{dx}(x) + \sum_{l=1}^{n} b_{kl}(x,t_j)\,u_k^j(x) + \sum_{l=1}^{n} c_{kl}(x,t_j)\,u_l^j(x-\delta) = f_k(x,t_j), \tag{6}$$
with $u_k^j(x) = \phi_k(x,t_j)$ for $x\in[-\delta,0]$, $j = 1, 2, \ldots, M$,
where $D_{k,t}^{-} u_k^j(x) = \dfrac{u_k^j(x) - u_k^{j-1}(x)}{\Delta t}$.
For a fixed time point $t = t_j$, the equation presented earlier can be written in the following manner:
$$\Delta t\,a_{kk}(x,t_j)\frac{du_k^j}{dx}(x) + \Big(1 + \Delta t\sum_{l=1}^{n} b_{kl}(x,t_j)\Big)u_k^j(x) + \Delta t\sum_{l=1}^{n} c_{kl}(x,t_j)\,u_l^j(x-\delta) = \Delta t\,f_k(x,t_j) + u_k^{j-1}(x), \quad j = 1, 2, \ldots, M. \tag{7}$$
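The first-order temporal accuracy of this backward difference can be checked on a scalar test equation; the equation and parameters below are an illustrative stand-in of ours, not the paper's system:

```python
import math

def backward_euler(lam=-2.0, u0=1.0, T=1.0, M=100):
    """Backward (implicit) Euler for u'(t) = lam * u(t) on [0, T] with M steps.
    Each step solves (1 - dt*lam) * u_j = u_{j-1}, the scalar analogue of the
    implicit treatment of the reaction terms in the semi-discrete scheme."""
    dt = T / M
    u = u0
    for _ in range(M):
        u = u / (1.0 - dt * lam)
    return u

# the global error should shrink roughly linearly in dt (first order)
exact = math.exp(-2.0)
e1 = abs(backward_euler(M=100) - exact)
e2 = abs(backward_euler(M=200) - exact)
```

Halving $\Delta t$ roughly halves the error, consistent with the $O(\Delta t)$ truncation error of the scheme.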
Lemma 1.
Let $u_k$ be the solution of (1)–(3) and $u_k^j(x)$ be the solution of (6) at $t = t_j$; then, $\|u_k(\cdot,t_j) - u_k^j\| \le C\Delta t$.
Proof. 
We let $E_{k,j}(x) = u_k(x,t_j) - u_k^j(x)$, and we let $x$ be fixed. Then,
$$\begin{aligned} L_k^j E_{k,j}(x) &= D_{k,t}^{-}E_{k,j}(x) + a_{kk}(x,t_j)E_{k,j}'(x) + \sum_{l=1}^{n} b_{kl}(x,t_j)E_{k,j}(x) + \sum_{l=1}^{n} c_{kl}(x,t_j)E_{k,j}(x-\delta)\\ &= \frac{\partial u_k}{\partial t}(x,t_j) - D_{k,t}^{-}u_k(x,t_j); \end{aligned}$$
using ([24], Lemma 4.1), we have $|L_k^j E_{k,j}(x)| \le O(\Delta t)$, $j = 1, 2, \ldots, M$, for all $x$, which implies $|E_{k,j}(x)| \le C\Delta t$; therefore, $\|u_k(\cdot,t_j) - u_k^j\| \le C\Delta t$. □

5. Fully Discretized Problem

In this section, we apply spatial discretization to the semi-discrete problem defined by Equation (7). To achieve this, we use the fourth-order Runge–Kutta method to integrate the differential equations and piecewise cubic Hermite interpolation to approximate the solution over the interval $[0, x_f]$.
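For reference, one step of the classical fourth-order Runge–Kutta method can be sketched as follows, in the standard textbook form with the step size kept outside the stages (some formulations equivalently fold the step size into the stage values):

```python
import math

def rk4_step(f, x, u, h):
    """One classical fourth-order Runge-Kutta step of size h for du/dx = f(x, u)."""
    k1 = f(x, u)
    k2 = f(x + h / 2, u + h * k1 / 2)
    k3 = f(x + h / 2, u + h * k2 / 2)
    k4 = f(x + h, u + h * k3)
    return u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# sanity check on du/dx = u, u(0) = 1: ten steps of h = 0.1 reach x = 1,
# where the exact solution is e
u = 1.0
for i in range(10):
    u = rk4_step(lambda x, y: y, 0.1 * i, u, 0.1)
```

The fourth-order local accuracy of this step is what delivers the $\bar{h}^4$ contribution to the overall error bound.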

Spatial Mesh Points

In Section 3.1, it was shown that $\delta, 2\delta, \ldots$ serve as primary points of discontinuity. Consequently, we partition the domain $[0,x_f]$ as follows: $[0,\delta], [\delta,2\delta], \ldots, [(r-1)\delta, r\delta]$, and $[r\delta, x_f]$. Each of these sub-domains is further subdivided into $\frac{N}{r+1}$ segments. Thus, we define $\bar{\Omega}_x^N = \{x_i\}_{i=0}^{N}$, where $x_i = x_{i-1} + h_i$ and $h_i = x_i - x_{i-1}$ for $i = 1, 2, \ldots, N$. The formulation of Problem (7) can be expressed as follows:
$$f_k^*(x, u_k^j, u_k^{j-1}, t_j) = \frac{1}{\Delta t\,a_{kk}(x,t_j)}\Big[\Delta t\,f_k(x,t_j) - \Big(1 + \Delta t\sum_{l=1}^{n} b_{kl}(x,t_j)\Big)u_k^j(x) + u_k^{j-1}(x) - \Delta t\sum_{l=1}^{n} c_{kl}(x,t_j)\,u_k^{j,I}(x)\Big]. \tag{8}$$
For details on the piecewise cubic Hermite interpolation used to approximate the delayed solution $u(x_i - \delta, t_j)$ within the range $[\delta, x_f]$, one may consult [25]. Employing the fourth-order Runge–Kutta method alongside piecewise cubic Hermite interpolation on the spatial domain $[0, x_f]$, we obtain
$$U_{r,i+1}^j = U_{r,i}^j + \frac{1}{6}\big[K_{r,1} + 2K_{r,2} + 2K_{r,3} + K_{r,4}\big], \quad i = 0, 1, \ldots, N-1, \; j = 1, 2, \ldots, M, \tag{9}$$
where $r = 1, 2, \ldots, n$,
$$\begin{aligned} K_{r,1} &= \frac{h_{r,i}}{a_{rr}(x_i,t_j)\,\Delta t}\Big\{\Delta t\,f_r(x_i,t_j) + U_{r,i}^{j-1} - \Big(1 + \Delta t\sum_{l=1}^{n} b_{rl}(x_i,t_j)\Big)U_{r,i}^{j} - \Delta t\sum_{l=1}^{n} c_{rl}(x_i,t_j)\,U_{r}^{j,I}(x_i)\Big\},\\ K_{r,2} &= \frac{h_{r,i}}{a_{rr}\big(x_i + \tfrac{h_{r,i}}{2},t_j\big)\,\Delta t}\Big\{\Delta t\,f_r\big(x_i + \tfrac{h_{r,i}}{2},t_j\big) + \Big(U_{r,i}^{j-1} + \tfrac{K_{r,1}}{2}\Big) - \Big(1 + \Delta t\sum_{l=1}^{n} b_{rl}\big(x_i + \tfrac{h_{r,i}}{2},t_j\big)\Big)\Big(U_{r,i}^{j} + \tfrac{K_{r,1}}{2}\Big) - \Delta t\sum_{l=1}^{n} c_{rl}\big(x_i + \tfrac{h_{r,i}}{2},t_j\big)\,U_{r}^{j,I}\big(x_i + \tfrac{h_{r,i}}{2}\big)\Big\},\\ K_{r,3} &= \frac{h_{r,i}}{a_{rr}\big(x_i + \tfrac{h_{r,i}}{2},t_j\big)\,\Delta t}\Big\{\Delta t\,f_r\big(x_i + \tfrac{h_{r,i}}{2},t_j\big) + \Big(U_{r,i}^{j-1} + \tfrac{K_{r,2}}{2}\Big) - \Big(1 + \Delta t\sum_{l=1}^{n} b_{rl}\big(x_i + \tfrac{h_{r,i}}{2},t_j\big)\Big)\Big(U_{r,i}^{j} + \tfrac{K_{r,2}}{2}\Big) - \Delta t\sum_{l=1}^{n} c_{rl}\big(x_i + \tfrac{h_{r,i}}{2},t_j\big)\,U_{r}^{j,I}\big(x_i + \tfrac{h_{r,i}}{2}\big)\Big\},\\ K_{r,4} &= \frac{h_{r,i}}{a_{rr}(x_i + h_{r,i},t_j)\,\Delta t}\Big\{\Delta t\,f_r(x_i + h_{r,i},t_j) + \big(U_{r,i}^{j-1} + K_{r,3}\big) - \Big(1 + \Delta t\sum_{l=1}^{n} b_{rl}(x_i + h_{r,i},t_j)\Big)\big(U_{r,i}^{j} + K_{r,3}\big) - \Delta t\sum_{l=1}^{n} c_{rl}(x_i + h_{r,i},t_j)\,U_{r}^{j,I}(x_i + h_{r,i})\Big\}, \end{aligned}$$
$$U_r^{j,I}(x) = \begin{cases} \phi_r(x_i - \delta, t_j), & \text{if } x_i - \delta \le 0,\\[2pt] U_{r,p}^j A_p(x) + U_{r,p+1}^j A_{p+1}(x) + B_p(x)\,f_r^*(x_p, U_{r,p}^j, U_{r,p}^{j-1}, t_j) + B_{p+1}(x)\,f_r^*(x_{p+1}, U_{r,p+1}^j, U_{r,p+1}^{j-1}, t_j), & \text{if } x_i - \delta > 0, \end{cases}$$
and $p$ is an integer such that $x_i - \delta \in (x_p, x_{p+1})$.
$$A_p(x) = \Big(1 - 2\frac{x - x_p}{x_p - x_{p+1}}\Big)\Big(\frac{x - x_{p+1}}{x_p - x_{p+1}}\Big)^2, \qquad A_{p+1}(x) = \Big(1 - 2\frac{x - x_{p+1}}{x_{p+1} - x_p}\Big)\Big(\frac{x - x_p}{x_{p+1} - x_p}\Big)^2,$$
$$B_p(x) = \frac{(x - x_p)(x - x_{p+1})^2}{(x_p - x_{p+1})^2}, \qquad B_{p+1}(x) = \frac{(x - x_{p+1})(x - x_p)^2}{(x_{p+1} - x_p)^2}, \qquad 0 \le p < i \le N.$$
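The cardinal properties of these basis functions (value one or zero, derivative one or zero at $x_p$ and $x_{p+1}$) can be verified numerically; the code below transcribes $A_p$ and $B_p$ directly from the formulas above, with function names of our own choosing:

```python
def A_basis(x, xp, xq):
    """Hermite value basis: equals 1 at xp, 0 at xq; derivative 0 at both nodes."""
    return (1 - 2 * (x - xp) / (xp - xq)) * ((x - xq) / (xp - xq)) ** 2

def B_basis(x, xp, xq):
    """Hermite slope basis: equals 0 at both nodes; derivative 1 at xp, 0 at xq."""
    return (x - xp) * ((x - xq) / (xp - xq)) ** 2
```

The basis functions $A_{p+1}$ and $B_{p+1}$ are obtained by swapping the roles of the two nodes.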
Theorem 4
([2]). Let $u_k^j(x_i)$ be the solution of Problem (7) and $U_{k,i}^j$ represent the solution of Problem (9). Then, $|u_k^j(x_i) - U_{k,i}^j| \le C\bar{h}^4$ is established, where $C$ denotes a constant and $\bar{h} = \max_i h_i$.
This theorem provides an error estimate for the above method.
Theorem 5.
Let $u_k(x_i,t_j)$ be the exact solution of (1) at the point $(x_i,t_j)$ and $U_{k,i}^j$ be the numerical solution of (9); then, $|u_k(x_i,t_j) - U_{k,i}^j| \le C(\Delta t + \bar{h}^4)$.
Proof. 
Using Lemma 1 and Theorem 4, one can prove that
$$|u_k(x_i,t_j) - U_{k,i}^j| \le |u_k(x_i,t_j) - u_k^j(x_i)| + |u_k^j(x_i) - U_{k,i}^j| \le C(\Delta t + \bar{h}^4). \;\square$$
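The discontinuity-aligned partition of $[0,x_f]$ described at the start of this section can be sketched as follows; the even split of the $N$ sub-intervals across the sub-domains is our reading of the construction, not a verbatim transcription:

```python
import numpy as np

def build_mesh(delta, xf, N):
    """Spatial mesh aligned with the primary discontinuities delta, 2*delta, ...
    The interval [0, xf] is split at the multiples of delta lying inside it,
    and each sub-domain receives an equal share of the N sub-intervals."""
    r = int(np.floor(xf / delta))
    if np.isclose(r * delta, xf):  # xf itself is a multiple of delta
        r -= 1
    breaks = [k * delta for k in range(r + 1)] + [xf]
    n_sub = max(1, N // (len(breaks) - 1))  # sub-intervals per sub-domain
    pieces = [np.linspace(breaks[s], breaks[s + 1], n_sub + 1)
              for s in range(len(breaks) - 1)]
    # interface nodes appear in two adjacent pieces; unique() removes duplicates
    return np.unique(np.concatenate(pieces))
```

Placing mesh points exactly at $\delta, 2\delta, \ldots$ keeps the non-smooth points on the grid, so the Hermite interpolation never straddles a primary discontinuity.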

6. Numerical Examples

In order to demonstrate the effectiveness and accuracy of the numerical methods developed in this paper, we present two examples in this section. We compute the maximum error of our numerical solutions using the half-mesh principle, a technique that refines the mesh size and compares the solutions on the two grids:
$$E_k^{N,M} = \max_{0\le i\le N,\; 0\le j\le M}\big|U_{k,i}^j(\Delta x, \Delta t) - U_{k,i}^j(\Delta x/2, \Delta t/2)\big|, \qquad D_{k,x}^N = \max_M E_k^{N,M}, \qquad D_{k,t}^M = \max_N E_k^{N,M},$$
where $U_{k,i}^j(\Delta x, \Delta t)$ and $U_{k,i}^j(\Delta x/2, \Delta t/2)$ stand for the numerical outcomes at node $(x_i, t_j)$ for mesh sizes $(\Delta x, \Delta t)$ and $(\Delta x/2, \Delta t/2)$, respectively.
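The half-mesh estimate can be illustrated on a deliberately simple scheme, forward Euler for a scalar ODE (our choice for brevity): solve with $M$ steps and with $2M$ steps, then compare at the shared nodes:

```python
def solve(M, T=1.0):
    """Forward Euler for u'(t) = -u(t), u(0) = 1, with M steps on [0, T];
    returns the values at all M + 1 grid nodes."""
    dt = T / M
    u = [1.0]
    for _ in range(M):
        u.append(u[-1] * (1.0 - dt))
    return u

def double_mesh_error(M):
    """Half-mesh principle: compare the M-step solution with the 2M-step
    solution at the coarse nodes (coarse node j matches fine node 2j)."""
    coarse, fine = solve(M), solve(2 * M)
    return max(abs(coarse[j] - fine[2 * j]) for j in range(M + 1))

E1, E2 = double_mesh_error(100), double_mesh_error(200)
```

For a first-order scheme the estimate roughly halves when the mesh is halved, so the ratio of successive estimates also reveals the convergence order without knowing the exact solution.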
Example 1.
We consider the following first-order hyperbolic delay differential equation:
$$\bar{u}_t + A\bar{u}_x + B\bar{u}(x,t) + C\bar{u}(x-\delta,t) = \bar{0}, \quad (x,t)\in(0,4]\times(0,4], \tag{10}$$
$$\bar{u}(x,t) = (0,0)^T, \quad (x,t)\in[-\delta,0]\times[0,4], \tag{11}$$
$$u_k(x,0) = x\exp\!\big(-(6x-1)^2/4\big)(4-x), \quad k = 1, 2, \; x\in[0,4], \tag{12}$$
$$a_{11} = \frac{1 + x^3 + t^4}{1 + 2tx + 4x^2}, \quad a_{22} = \frac{1 + x^3 + t^4}{1 + 4tx + 4x^2}, \quad b_{11} = 1, \; b_{12} = -\tfrac{1}{2}, \; b_{21} = -1, \; b_{22} = \tfrac{1}{2},$$
$$c_{11} = -2, \; c_{12} = -1, \; c_{21} = -2, \; c_{22} = -1, \quad \delta = 1.$$
We take $\delta = 1$ in this case. The presence of the delay term results in additional wave propagation in the forward $x$-direction at a distance of $\delta$ units. Figure 1 and Figure 2 show the numerical solution obtained by the proposed method and the exact solution, respectively. We can compare the solution curves at different time levels in Figure 3 and Figure 4. Figure 5 and Figure 6 display the maximum error between the numerical and exact solutions at each time level. The maximum pointwise error for each case is also given in Table 1 and Table 2, where we can see that the error decreases as the mesh size decreases.
Example 2.
We consider Problems (10) and (11) with the following coefficients:
$$\bar{u}(x,0) = \Big(x\exp\!\big(-(6x-1)^2/2\big),\; x\exp\!\big(-(4x-1)^2/4\big)\Big), \quad x\in[0,4],$$
$$a_{11} = \frac{3 + x^3 + t^4}{1 + 3tx + 4x^3}, \quad a_{22} = \frac{4 + x^2 + t^4}{1 + 2tx + 4x^2}, \quad b_{11} = \tfrac{1}{2}, \; b_{12} = -\tfrac{1}{4}, \; b_{21} = -\tfrac{1}{2}, \; b_{22} = \tfrac{1}{4},$$
$$c_{11} = c_{12} = c_{21} = c_{22} = 0, \quad \delta = 0.$$
We take $\delta = 0$ in this case, so there is no delay and we observe no additional wave propagation in the solution. Figure 7 and Figure 8 show the numerical solution obtained by applying the proposed method. We can see the solution curves for different values of time in Figure 9 and Figure 10, which demonstrate the accuracy and stability of the method. Figure 11 and Figure 12 display the maximum error between the numerical and exact solutions at each time level. The maximum pointwise error for various values of M and N is also given in Table 3 and Table 4, which confirm the convergence and consistency of the method.

7. Conclusions

This article deals with a system of first-order hyperbolic delay differential equations that includes spatial delay terms. Such a system can model various phenomena in science, such as wave propagation, population dynamics, and neural networks. To obtain numerical solutions for this system, we adopt a semi-discretization technique in the time direction, using a backward finite difference formula on a uniform grid. At each time level, this reduces the original system to a system of ordinary differential equations in the spatial variable, with a truncation error of order $O(\Delta t)$ for fixed $x$. We then discretize the resulting system further by applying the fourth-order Runge–Kutta method, a well-known and efficient method for solving ordinary differential equations, and we use piecewise cubic Hermite interpolation to approximate the spatial delay terms. This method offers an overall error of order $O(\Delta t + \bar{h}^4)$, where $\Delta t$ is the time step and $\bar{h}$ is the maximum spatial step. We discuss how to handle Problem (1) with both smooth and non-smooth data functions, and we investigate the characteristics of the solutions. The theoretical results are also verified by numerical examples (Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12) and Table 1, Table 2, Table 3 and Table 4. From these examples, we observe that, for a fixed integer $M$, increasing the value of $N$ leads to a decrease in the maximum error, whereas, for a fixed $N$, increasing the value of $M$ causes the maximum error to increase. We also note the conditional stability of the method, which requires that $\bar{h} < 1$; furthermore, the method is stable provided $h_i \le C\Delta t$.

Author Contributions

Conceptualization, S.K., V.S. and R.P.A.; methodology, S.K., V.S. and R.P.A.; formal analysis, S.K., V.S. and R.P.A.; investigation, S.K., V.S. and R.P.A.; writing—original draft preparation, S.K., V.S. and R.P.A.; writing—review and editing, R.P.A., S.K. and V.S.; supervision, V.S. and R.P.A.; project administration, S.K., V.S. and R.P.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors sincerely thank DST-SERB for providing computational facilities through project TAR/2021/000053. The authors also wish to thank the referees for their valuable comments and suggestions, which helped improve the presentation.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Smith, H.L. An Introduction to Delay Differential Equations with Applications to the Life Sciences; Springer: New York, NY, USA, 2011; Volume 57, pp. 119–130. [Google Scholar]
  2. Bellen, A.; Zennaro, M. Numerical Methods for Delay Differential Equations; Oxford University Press: Oxford, UK, 2003. [Google Scholar]
  3. Kuang, Y. Delay Differential Equations with Applications in Population Dynamics; Academic Press: Cambridge, MA, USA, 1993. [Google Scholar]
  4. Stein, R.B. A theoretical analysis of neuronal variability. Biophys. J. 1965, 5, 173–194. [Google Scholar] [CrossRef]
  5. Stein, R.B. Some models of neuronal variability. Biophys. J. 1967, 7, 37–68. [Google Scholar] [CrossRef]
Figure 1. Surface plot of the U₁ numerical solution of Example 1.
Figure 2. U₂ numerical solution of Example 1 at different time levels.
Figure 3. U₁ numerical solution of Example 1 at different time levels.
Figure 4. U₂ numerical solution of Example 1 at different time levels.
Figure 5. U₁ maximum pointwise error for Example 1.
Figure 6. U₂ maximum pointwise error for Example 1.
Figure 7. Surface plot of the U₁ numerical solution of Example 2.
Figure 8. Surface plot of the U₂ numerical solution of Example 2.
Figure 9. U₁ numerical solution of Example 2 at different time levels.
Figure 10. U₂ numerical solution of Example 2 at different time levels.
Figure 11. U₁ maximum pointwise error for Example 2.
Figure 12. U₂ maximum pointwise error for Example 2.
Table 1. U₁ component: maximum pointwise errors for Example 1 using the conditional method (δ = 1).

M ↓ \ N →    64           128          256          512          1024         D_{1,t}^M
64           2.3551×10⁻³  1.1683×10⁻³  5.8188×10⁻⁴  2.9037×10⁻⁴  1.4505×10⁻⁴  5.8188×10⁻⁴
128          3.9796×10⁻³  1.9611×10⁻³  9.7352×10⁻⁴  4.8503×10⁻⁴  2.4209×10⁻⁴  9.7352×10⁻⁴
256          8.9886×10⁻³  4.2439×10⁻³  2.0649×10⁻³  1.0186×10⁻³  5.0590×10⁻⁴  8.9886×10⁻³
512          2.0254×10⁻²  9.2206×10⁻³  4.4098×10⁻³  2.1579×10⁻³  1.0675×10⁻³  9.2206×10⁻³
1024         4.3110×10⁻²  1.8317×10⁻²  8.5104×10⁻³  4.1089×10⁻³  2.0196×10⁻³  8.5104×10⁻³
D_{1,x}^N    8.9886×10⁻³  9.2206×10⁻³  9.7352×10⁻⁴  4.8503×10⁻⁴  5.0590×10⁻⁴  –
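The errors in Table 1 roughly halve each time the spatial mesh parameter N doubles. As an illustrative check (not part of the authors' code), the observed order along a row can be estimated with the standard formula p = log₂(e_N / e_{2N}); a minimal Python sketch using the M = 64 row of Table 1:

```python
import math

# U1 maximum pointwise errors for Example 1, row M = 64 of Table 1,
# at N = 64, 128, 256, 512, 1024 (values copied from the table).
errors = [2.3551e-3, 1.1683e-3, 5.8188e-4, 2.9037e-4, 1.4505e-4]

# Observed order between successive refinements: p = log2(e_N / e_{2N}).
rates = [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print([round(r, 2) for r in rates])  # each rate is close to 1 for this row
```

The same computation can be applied to any row (or column) of Tables 1–4 to read off the observed refinement behavior.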
Table 2. U₂ component: maximum pointwise errors for Example 1 using the conditional method (δ = 1).

M ↓ \ N →    64           128          256          512          1024         D_{2,t}^M
64           3.7129×10⁻³  1.8074×10⁻³  8.9174×10⁻⁴  4.4293×10⁻⁴  2.2074×10⁻⁴  8.9174×10⁻⁴
128          7.2100×10⁻³  3.4536×10⁻³  1.6906×10⁻³  8.3643×10⁻⁴  4.1602×10⁻⁴  8.3643×10⁻⁴
256          1.3467×10⁻²  6.3518×10⁻³  3.0862×10⁻³  1.5217×10⁻³  7.5559×10⁻⁴  7.5559×10⁻⁴
512          2.2033×10⁻²  1.0136×10⁻²  4.8746×10⁻³  2.3920×10⁻³  1.1850×10⁻³  4.8746×10⁻³
1024         3.4673×10⁻²  1.4911×10⁻²  7.0017×10⁻³  3.4002×10⁻³  1.6763×10⁻³  7.0017×10⁻³
D_{2,x}^N    7.2100×10⁻³  6.3518×10⁻³  8.9174×10⁻⁴  8.3643×10⁻⁴  7.5559×10⁻⁴  –
Table 3. U₁ component: maximum pointwise errors for Example 2 using the conditional method (δ = 0).

M ↓ \ N →    64           128          256          512          1024         D_{1,t}^M
64           1.5319×10⁻³  7.5986×10⁻⁴  3.7843×10⁻⁴  1.8885×10⁻⁴  9.4332×10⁻⁵  9.4332×10⁻⁵
128          2.5359×10⁻³  1.2540×10⁻³  6.2355×10⁻⁴  3.1093×10⁻⁴  1.5525×10⁻⁴  6.2355×10⁻⁴
256          4.7023×10⁻³  2.2893×10⁻³  1.1298×10⁻³  5.6127×10⁻⁴  2.7973×10⁻⁴  5.6127×10⁻⁴
512          9.7474×10⁻³  4.5751×10⁻³  2.2226×10⁻³  1.0958×10⁻³  5.4407×10⁻⁴  9.7474×10⁻³
1024         2.2033×10⁻²  9.5061×10⁻³  4.4622×10⁻³  2.1665×10⁻³  1.0678×10⁻³  9.5061×10⁻³
D_{1,x}^N    9.7474×10⁻³  9.5061×10⁻³  6.2355×10⁻⁴  5.6127×10⁻⁴  9.4332×10⁻⁵  –
Table 4. U₂ component: maximum pointwise errors for Example 2 using the conditional method (δ = 0).

M ↓ \ N →    64           128          256          512          1024         D_{2,t}^M
64           1.2625×10⁻³  6.2681×10⁻⁴  3.1231×10⁻⁴  1.5588×10⁻⁴  7.7871×10⁻⁵  7.7871×10⁻⁵
128          2.0230×10⁻³  1.0021×10⁻³  4.9875×10⁻⁴  2.4881×10⁻⁴  1.2426×10⁻⁴  4.9875×10⁻⁴
256          3.5823×10⁻³  1.7576×10⁻³  8.7066×10⁻⁴  4.3331×10⁻⁴  2.1616×10⁻⁴  8.7066×10⁻⁴
512          7.3619×10⁻³  3.5035×10⁻³  1.7114×10⁻³  8.4594×10⁻⁴  4.2067×10⁻⁴  8.4594×10⁻⁴
1024         1.7733×10⁻²  7.7339×10⁻³  3.6528×10⁻³  1.7789×10⁻³  8.7813×10⁻⁴  8.7813×10⁻⁴
D_{2,x}^N    7.3619×10⁻³  7.7339×10⁻³  8.7066×10⁻⁴  8.4594×10⁻⁴  8.7813×10⁻⁴  –
Karthick, S.; Subburayan, V.; Agarwal, R.P. Solving a System of One-Dimensional Hyperbolic Delay Differential Equations Using the Method of Lines and Runge-Kutta Methods. Computation 2024, 12, 64. https://doi.org/10.3390/computation12040064