

# On Fundamental Solution for Autonomous Linear Retarded Functional Differential Equations

39 Girard St Marlboro, NJ 07746, USA
Mathematics 2020, 8(9), 1418; https://doi.org/10.3390/math8091418
Received: 31 July 2020 / Revised: 18 August 2020 / Accepted: 20 August 2020 / Published: 24 August 2020
(This article belongs to the Special Issue Functional Differential Equations and Applications)

## Abstract

This document focuses attention on the fundamental solution of an autonomous linear retarded functional differential equation (RFDE) along with its supporting cast of actors: kernel matrix, characteristic matrix, resolvent matrix; and the Laplace transform. The fundamental solution is presented in the form of the convolutional powers of the kernel matrix in the manner of a convolutional exponential matrix function. The fundamental solution combined with a solution representation gives an exact expression in explicit form for the solution of an RFDE. Algebraic graph theory is applied to the RFDE in the form of a weighted loop-digraph to illuminate the system structure and system dynamics and to identify the strong and weak components. Examples are provided in the document to elucidate the behavior of the fundamental solution. The paper introduces fundamental solutions of other functional differential equations.

## 1. Introduction

This paper examines and characterizes the fundamental matrix solution for the n-vector autonomous linear retarded functional differential equation (RFDE)
$x ′ ( t ) = ∫ 0 a A ( d s ) x ( t − s ) + f ( t ) , t > 0 ,$
with forcing function $f ( t )$ and initial function $h ( θ )$
$x ( θ ) = h ( θ ) , − a ≤ θ ≤ 0 ,$
where $x ( t )$ is an n-vector, the kernel matrix $A ( d s )$ is an $n × n$ matrix of Borel measures with support on the interval $[ 0 , a ]$, and f and h belong to function spaces that will be described in Section 2.
We shall consider the case that
$A ( d s ) = ∑ i = 0 N A i δ ( s − θ i ) + A ( s ) d s ,$
where $0 = θ 0 < θ 1 ⋯ < θ N = a$, $A i$ are $n × n$ matrices over $R$, $δ ( s )$ is the Dirac delta function, and $A ( s )$ is an integrable $n × n$ matrix function on the interval $[ 0 , a ]$. Consequently, in terms of the Radon–Nikodym theorem, $A ( d s )$ has an absolutely continuous part $A ( s )$, a discrete singular part with a finite number of point masses (atoms) corresponding to Dirac delta measures, and a zero continuous singular part. It will be evident from the context whether the symbol A refers to an $n × n$ matrix over Borel measures, an $n × n$ matrix over the real numbers, or a matrix function. The terms $∑ i = 0 N A i δ ( s − θ i )$ and $A ( s ) d s$ respectively correspond to discrete delays and distributed delay of the kernel matrix $A ( d s )$.
The fundamental solution $Φ ( t )$ satisfies the RFDE
$Φ ′ ( t ) = ∫ 0 a A ( d s ) Φ ( t − s ) , t > 0 ,$
with initial conditions
$Φ ( 0 ) = I , Φ ( t ) = O for t < 0 ,$
where I and O are respectively the $n × n$ identity and zero matrices. Note that the fundamental solution $Φ ( t )$ also satisfies the Volterra integro-differential equation
$Φ ′ ( t ) = ∫ 0 t A ( d s ) Φ ( t − s ) , t > 0 ,$
with initial condition
$Φ ( 0 ) = I .$
Theorem 1.
The fundamental solution $Φ ( t )$ is given by
$Φ ( t ) = ∑ n = 0 ∞ 1 n ! ∫ 0 t A n ( d s ) ( t − s ) n ,$
where the convolutional powers of the kernel matrix $A n ( d s )$ are defined by
1.
$A 0 ( d s ) = I δ ( s )$,
2.
$A 1 ( d s ) = A ( d s )$, and
3.
$A n ( d s ) = A n − 1 ( d s ) ∗ A 1 ( d s )$ for $n ≥ 2$,
where ∗ denotes the convolution of two matrix measures.
Proof.
See the proof of Theorem 2. □
This expression for the fundamental solution in the scalar case $n = 1$ can be found in References [1,2,3]. Note that the computation of the fundamental solution $Φ ( t )$ is simplified by the restriction to a finite number of discrete atoms in the discrete singular component, and a zero continuous singular component.
The fundamental solution $Φ ( t )$ for the autonomous linear ordinary differential equation (ODE) $Φ ′ ( t ) = A Φ ( t )$ is given (see Reference [4]) by the exponential matrix function
$Φ ( t ) = ∑ n = 0 ∞ 1 n ! ( A t ) n = exp ( A t ) .$
Note that the fundamental solution $Φ ( t )$ in Equation (8) for the RFDE is analogous to and is a generalization of the fundamental solution in Equation (9) for the ODE, and that it has the form of a convolutional exponential matrix function.
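As a concrete illustration of the convolutional exponential, the series in Equation (8) can be evaluated directly for the scalar single-delay kernel $A ( d s ) = a δ ( s − 1 )$, where it reduces to $Φ ( t ) = ∑_{n} a^n ( t − n )_+^n / n!$. The following sketch is illustrative code, not from the paper; the parameter values are arbitrary choices:

```python
import math

def phi(t, a=1.0, delay=1.0):
    """Fundamental solution of x'(t) = a*x(t - delay), evaluated via the
    convolutional-exponential series Phi(t) = sum_n a^n (t - n*delay)_+^n / n!.
    For each fixed t the sum is finite, since (t - n*delay)_+ = 0 once
    n*delay >= t."""
    if t < 0:
        return 0.0          # Phi(t) = O for t < 0 by definition
    total, n = 0.0, 0
    while n * delay < t or n == 0:
        total += a**n * max(t - n * delay, 0.0)**n / math.factorial(n)
        n += 1
    return total
```

On $[ 0 , 1 ]$ the series gives $Φ ( t ) = 1$, and a central difference confirms $Φ ′ ( t ) = a Φ ( t − 1 )$ away from the knots at integer t.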
Equation (4) establishes a relationship between two actors in an RFDE: the fundamental solution $Φ ( t )$ and the kernel matrix $A ( d s )$. Let us consider the other actors and their roles.
For $z ∈ C$ the Laplace transform $L$ (see Reference [5]) of the fundamental solution $Φ ( t )$ is
$L ( Φ ( t ) ) ( z ) = Φ ^ ( z ) = ∫ 0 ∞ e − z t Φ ( t ) d t .$
Definition 1.
We define the characteristic matrix $Δ ( z ) = z I − ∫ 0 a A ( d s ) e − z s$, the characteristic determinant $det ( Δ ( z ) )$, and the resolvent matrix $Δ − 1 ( z )$.
We shall show in Section 4.1 that $Φ ^ ( z ) = Δ − 1 ( z )$ or alternatively $Φ ( t ) = L − 1 ( Δ − 1 ( z ) )$, so that the fundamental solution $Φ ( t )$ is the inverse Laplace transform of the resolvent matrix $Δ − 1 ( z )$.
The fundamental solution $Φ ( t )$ can also be expressed in the form of Laplace inverse as a contour integral in the z-plane
$Φ c i ( t ) = 1 2 π i ∫ ( c ) e z t Δ − 1 ( z ) d z ,$
where contributions to the integral arise from the characteristic roots ${ λ r }$ with multiplicities $m r$ of the characteristic determinant $D ( z ) = det ( Δ ( z ) ) = 0$. We shall show in Section 4 that this gives rise to a spectral representation of the fundamental solution $Φ ( t )$ in terms of the exponential solutions of the RFDE
$Φ c i ( t ) = ∑ r ∑ j = 0 m r − 1 Ψ r j t j exp ( λ r t ) .$
In the case that all characteristic roots $λ r$ are simple with multiplicity $m r = 1$, we have
$Φ c i ( t ) = ∑ r Ψ r exp ( λ r t ) ,$
where $Ψ r = C ( λ r ) / D ′ ( λ r )$ with $C ( z ) = adj ( Δ ( z ) )$. The characteristic roots ${ λ r }$ can be arranged in some appropriate order, such as decreasing real part $ℜ ( λ r )$ or increasing modulus $| λ r |$.
It is known (References [6,7,8,9]) that $Φ c i ( t )$ is convergent for a range of values of t, where $− a ≤ t < n a$. The case $t = − a$ arises in the scalar case in which the exponential solutions are complete and independent (see Reference [10]). Even where convergence is not an issue, we could have $Φ c i ( t ) ≠ Φ ( t )$ for $t < t 0$ and $Φ c i ( t ) ≡ Φ ( t )$ for $t ≥ t 0$, as shown in the example in Section 4.4, where $t 0 = 2 a$. An open question is to characterize the minimum time $t 0$ for the equality of $Φ c i ( t )$ and $Φ ( t )$ given by $t 0 = inf τ > − a { τ ; Φ c i ( t ) ≡ Φ ( t ) for t > τ }$. Note that $Φ c i ( t )$ plays the role of a spectral representation of the fundamental solution $Φ ( t )$ for $t > t 0$.
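For the scalar single-delay equation $x ′ ( t ) = a x ( t − 1 )$, the leading term of the spectral representation can be checked numerically: the dominant real root λ of $D ( z ) = z − a e^{−z} = 0$ gives $Φ ( t ) ≈ e^{λ t} / D ′ ( λ )$ for large t. An illustrative sketch (the Newton iteration and the choice $a = 1$ are ours, not from the paper):

```python
import math

def phi(t, a=1.0):
    # series form of the fundamental solution of x'(t) = a x(t-1)
    if t < 0:
        return 0.0
    s, n = 0.0, 0
    while n < t or n == 0:
        s += a**n * max(t - n, 0.0)**n / math.factorial(n)
        n += 1
    return s

a = 1.0
lam = 0.5                                # Newton iteration for the real root of
for _ in range(50):                      # D(z) = z - a e^{-z} = 0
    lam -= (lam - a * math.exp(-lam)) / (1.0 + a * math.exp(-lam))

# leading spectral term: Phi(t) ~ e^{lam t} / D'(lam), with D'(z) = 1 + a e^{-z}
t = 10.0
leading = math.exp(lam * t) / (1.0 + a * math.exp(-lam))
```

At $t = 10$ the remaining complex root pairs have strongly negative real parts, so the single leading term already matches the series value of $Φ ( t )$ to within a small relative error.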
Figure 1 shows the relationship between the RFDE actors: the kernel matrix $A ( d s )$, the fundamental solution $Φ ( t )$, the characteristic matrix $Δ ( z )$, and the resolvent matrix $Δ − 1 ( z )$. Note that Figure 1 shows two different routes to determine the fundamental solution $Φ ( t )$ from the kernel matrix $A ( d s )$: (i) the direct route from Equation (8), and (ii) the indirect route through the characteristic matrix, the resolvent matrix, and the Laplace inverse. Depending on circumstances, either route may be preferable.
On its own, the fundamental solution is a representative proxy for the solutions of the RFDE. For example, the fundamental solution has exponential bounds that determine the asymptotic behavior of the RFDE solutions. Also the fundamental solution combined with a solution representation gives an exact expression in explicit form for the solution of an RFDE. However, the explicit form of the fundamental solution does not in and of itself convey all the information on its properties. It is the diversity of actors and their interplay that brings a variety of perspectives and the full force to bear on extracting information on the RFDE.
Remark 1.
The right hand side of the linear autonomous RFDE (1) is expressed as
$∫_0^a A ( d s ) x ( t − s )$ instead of $∫_{−a}^0 A ′ ( d s ) x ( t + s ) ,$
where $A ′ ( d s ) = A ( − d s )$, to emphasize the convolution nature of the linear autonomous RFDE (see page 77 of Reference [11]), to highlight the convolutional composition of the fundamental solution, and to reflect the situation that for a linear autonomous RFDE essentially every operation is a convolution.
Remark 2.
The use of measures in the kernel matrix of the RFDE in Equation (1) is chosen over the alternatives of (i) distributions and (ii) functions of normalized bounded variation. This choice is mainly a matter of style. Whatever choice is made, the fundamental solution will involve convolutional powers of the kernel matrix.
Some monographs dealing with the study of functional differential equations and related Volterra integro-differential equations are References [7,11,12,13,14,15,16,17,18].
The remainder of the paper is organized as follows: Section 2 on preliminaries establishes the foundation for the paper. Section 3 applies algebraic graph theory to study RFDEs. It uses a weighted loop-digraph representation of an RFDE to illuminate the system structure, connectivity, and dynamics. The characteristics of the fundamental solution are explored in Section 4. Examples are considered in the main body of the document to elucidate the behavior of the fundamental solution to an RFDE. Section 5 extends the fundamental solution to other functional differential equations. Finally, the conclusions are presented in Section 6.

## 2. Preliminaries

#### 2.1. Ring of Borel Measures

Consider the set $B c s +$ of Borel measures on $R +$ with compact support of the form
$a ( d s ) = ∑ i = 0 N a i δ ( s − θ i ) + a ( s ) d s$
with finite number of atoms at ${ θ i }$, $0 = θ 0 < θ 1 ⋯ < θ N$, $a i ∈ R$, and integrable function $a ( s )$.
The addition operator + is given by
$( a + b ) ( d s ) = a ( d s ) + b ( d s ) = ∑ i = 0 N a i δ ( s − θ i ) + ∑ i = 0 M b i δ ( s − η i ) + a ( s ) d s + b ( s ) d s ,$
and convolution (aka multiplication) operator ∗ is given by
$( a ∗ b ) ( d s ) = a ( d s ) ∗ b ( d s ) = ∑_{i=0}^N ∑_{j=0}^M a_i b_j δ ( s − θ_i − η_j ) + ∑_{i=0}^N a_i b ( s − θ_i ) d s + ∑_{j=0}^M b_j a ( s − η_j ) d s + ( a ∗ b ) ( s ) d s ,$
where
$b ( d s ) = ∑ j = 0 M b j δ ( s − η j ) + b ( s ) d s .$
It is straightforward to show that $B c s + = B c s + ( + , ∗ )$ is a commutative ring, with additive identity $0 ( d s )$ such that $( 0 + a ) ( d s ) = a ( d s )$, and multiplicative identity the Dirac delta measure $δ ( d s )$ such that $( δ ∗ a ) ( d s ) = a ( d s )$.
Definition 2.
For $a ( d s ) ∈ B c s +$ define norm $| a ( d s ) | = sup | f ( x ) | ≤ 1 ∫ a ( d s ) f ( s )$.
Define the support $supp ( a ( d s ) )$ as the complement in $R +$ of the set $X = { x : ∫ N x a ( d s ) = 0 }$ for all neighborhoods $N x$ of x that are sufficiently small.
We have the following standard results:
• $| ( a ∗ b ) ( d s ) | ≤ | a ( d s ) | | b ( d s ) |$.
• $supp ( ( a + b ) ( d s ) ) ⊆ supp ( a ( d s ) ) ⋃ supp ( b ( d s ) )$.
• $supp ( ( a ∗ b ) ( d s ) ) ⊆ supp ( a ( d s ) ) + supp ( b ( d s ) )$.
In particular, if $supp ( a ( d s ) ) ⊆ [ 0 , d ]$, then $supp ( a n ( d s ) ) ⊆ [ 0 , n d ]$.
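The ring operations on the purely atomic part of $B c s +$ are easy to mechanize. The following sketch is illustrative (atoms only, ignoring the absolutely continuous part); a measure is represented as a dictionary mapping atom locations to masses:

```python
def conv(p, q):
    """Convolution of two purely atomic measures {location: mass}: the
    convolution places mass p_i * q_j at the summed atom locations,
    matching the discrete double sum in the convolution formula."""
    out = {}
    for u, pu in p.items():
        for v, qv in q.items():
            out[u + v] = out.get(u + v, 0.0) + pu * qv
    return out

delta = {0.0: 1.0}                 # Dirac delta, the multiplicative identity
a = {0.0: 2.0, 1.0: -0.5}          # 2*delta(s) - 0.5*delta(s - 1)
```

For atomic measures the norm of Definition 2 is the sum of the absolute masses, and the support bound $supp ( a ∗ a ) ⊆ supp ( a ) + supp ( a )$ is visible directly in the atom locations.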

#### 2.2. Matrix over Ring of Borel Measures

Definition 3.
An $n × n$ matrix over a ring of Borel measures $M n ( B c s + )$ is an $n × n$ array of elements of the ring $B c s +$.
We have matrix addition
$C ( d s ) = ( A + B ) ( d s ) = A ( d s ) + B ( d s )$ with $c_{ij} ( d s ) = a_{ij} ( d s ) + b_{ij} ( d s )$
and matrix convolution (multiplication)
$C ( d s ) = ( A ∗ B ) ( d s ) = A ( d s ) ∗ B ( d s )$ with $c_{ij} ( d s ) = ∑_{k=1}^n a_{ik} ( d s ) ∗ b_{kj} ( d s ) .$
Note that the kernel matrix $A ( d s ) ∈ M n ( B c s + )$, as are its convolutional powers $A n ( d s ) ∈ M n ( B c s + )$.
Definition 4.
A matrix norm $| | A | |$ for $A ∈ M n ( R )$, with addition and multiplication defined conventionally on the ring $R$, has the following properties:
1.
$| | A | | ≥ 0$ (nonnegativity),
2.
$| | A | | = 0 ⇔ A = 0$ (positivity),
3.
$| | c A | | = | c | \, | | A | |$ for scalar c (homogeneity),
4.
$| | A + B | | ≤ | | A | | + | | B | |$ (subadditivity),
5.
$| | A B | | ≤ | | A | | \, | | B | |$ (submultiplicativity).
Particular instances of matrix norms are the $l_1$-norm $| | A | |_1 = ∑_{i,j=1}^n | a_{ij} |$ and the Frobenius $l_2$-norm $| | A | |_2 = ( ∑_{i,j=1}^n | a_{ij} |^2 )^{1/2}$. See Section 5.6 of Reference [19].
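Both norms, together with the submultiplicativity property, can be checked in a few lines. An illustrative sketch (the sample matrix is an arbitrary choice):

```python
def l1_norm(A):
    # ||A||_1 = sum over all i, j of |a_ij|
    return sum(abs(x) for row in A for x in row)

def frobenius_norm(A):
    # ||A||_2 = (sum over all i, j of |a_ij|^2)^(1/2)
    return sum(x * x for row in A for x in row) ** 0.5

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1.0, -2.0], [3.0, 4.0]]      # arbitrary sample matrix
```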

#### 2.3. Fundamental Solution

We now turn our attention to the Volterra integro-differential equation of convolutional type satisfied by the fundamental solution for an RFDE
$Φ ′ ( t ) = ∫ 0 t A ( d s ) Φ ( t − s ) , t > 0 , Φ ( 0 ) = I .$
By integrating we obtain the equivalent Volterra integral equation of convolutional type
$Φ ( t ) = I + ∫ 0 t d s B ( s ) Φ ( t − s ) , t ≥ 0 ,$
where the kernel matrix function
$B ( t ) = ∫ 0 t A ( d s ) = ∑ i = 0 N A i H ( t − θ i ) + ∫ 0 min ( t , a ) d θ A ( θ ) ,$
with
$H ( t ) = 1 , t ≥ 0 , 0 , t < 0 .$
It is well known that this Volterra integral equation has a Liouville–Neumann series solution in the form
$Φ ( t ) = I + ∑ n = 1 ∞ ∫ 0 t B n ( s ) d s$
where the matrix functions $B n ( t )$ are defined by $B 1 = B$ and $B n = B n − 1 ∗ B 1$, where ∗ denotes the convolution of two matrix functions. This form of the fundamental solution is presented in References [14,15,20].
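The Liouville–Neumann series can also be computed numerically as the fixed point of Picard iteration for the Volterra integral equation. A sketch for the scalar single-delay kernel $A ( d s ) = a δ ( s − 1 )$, for which $B ( t ) = a H ( t − 1 )$ (the grid step and parameter values are our illustrative choices, not from the paper):

```python
# Picard iteration for the discretized Volterra integral equation
# Phi(t) = 1 + int_0^t B(t - s) Phi(s) ds, in the scalar single-delay case
# A(ds) = a*delta(s - 1), for which B(t) = a*H(t - 1).
a, T, h = 0.5, 3.0, 0.01
N = int(round(T / h))
B = [a if i * h >= 1.0 else 0.0 for i in range(N + 1)]
Phi = [1.0] * (N + 1)              # zeroth Liouville-Neumann iterate
# B_n is supported on [n, infinity), so on [0, T] only finitely many terms
# of the series contribute and a few Picard iterations reach the fixed point.
for _ in range(10):
    Phi = [1.0 + h * sum(B[i - j] * Phi[j] for j in range(i))
           for i in range(N + 1)]
```

At $t = 3$ the closed-form series value is $1 + a · 2 + a^2 / 2 = 2.125$ for $a = 0.5$, and the grid fixed point agrees up to the $O ( h )$ quadrature error.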
Proposition 1.
The two forms of the fundamental solution
$Φ ( t ) = ∑ n = 0 ∞ 1 n ! ∫ 0 t A n ( d s ) ( t − s ) n and Φ ( t ) = I + ∑ n = 1 ∞ ∫ 0 t B n ( s ) d s$
are equivalent.
Proof.
Since $B ( t ) = ∫ 0 t A ( d s )$, we have $L ( B ( t ) ) ( z ) = B ^ ( z ) = A ^ ( z ) / z$.
$L ∫ 0 t B n ( s ) d s = ( B ^ ( z ) ) n / z = ( A ^ ( z ) ) n / z n + 1 .$
$L 1 n ! ∫ 0 t A n ( d s ) ( t − s ) n = 1 n ! L ( A n ( d s ) ) L ( t n ) = ( A ^ ( z ) ) n / z n + 1 .$
Hence from equality of Laplace transforms $1 n ! ∫ 0 t A n ( d s ) ( t − s ) n = ∫ 0 t B n ( s ) d s$ for $n ≥ 1$. □
Theorem 2.
The solution of $Φ ′ ( t ) = ∫ 0 t A ( d s ) Φ ( t − s )$ with $Φ ( 0 ) = I$ exists in the form
$Φ ( t ) = ∑ n = 0 ∞ 1 n ! ∫ 0 t A n ( d s ) ( t − s ) n .$
The solution $Φ ( t )$ is unique and has continuous dependence on the kernel matrix $A ( d s )$.
Proof.
For $t ∈ [ 0 , T ]$ where $T < ∞$, let $K = | | A ( d s ) | |$, so that $| | A n ( d s ) | | ≤ K n$. We have
$| | ∑ n = 0 ∞ 1 n ! ∫ 0 t A n ( d s ) ( t − s ) n | | ≤ ∑ n = 0 ∞ 1 n ! K n T n = exp ( K T )$
so that the series converges absolutely and uniformly, thereby justifying the formal procedures in the following steps:
$Φ ′ ( t ) = ∑_{n=1}^∞ \frac{1}{(n−1)!} ∫_0^t A^n ( d s ) ( t − s )^{n−1} = ∑_{n=0}^∞ \frac{1}{n!} ∫_0^t A^{n+1} ( d s ) ( t − s )^n = ∑_{n=0}^∞ \frac{1}{n!} ∫_0^t A ( d u ) ∫_0^{t−u} A^n ( d s ) ( t − s − u )^n = ∫_0^t A ( d u ) ∑_{n=0}^∞ \frac{1}{n!} ∫_0^{t−u} A^n ( d s ) ( t − s − u )^n = ∫_0^t A ( d u ) Φ ( t − u ) .$
Intermediate steps are
$∫_0^t A^{n+1} ( d s ) ( t − s )^n = ∫_0^t ∫_0^s A ( d u ) A^n ( d ( s − u ) ) ( t − s )^n = ∫_0^t A ( d u ) ∫_u^t A^n ( d ( s − u ) ) ( t − s )^n = ∫_0^t A ( d u ) ∫_0^{t−u} A^n ( d s ) ( t − s − u )^n .$
Hence $Φ ( t )$ is a solution.
To show that the solution is unique, suppose that we have two solutions $Φ 1 ( t )$ and $Φ 2 ( t )$ so that for the Volterra integral equation we have
$Φ 1 ( t ) − Φ 2 ( t ) = ∫ 0 t B ( t − s ) [ Φ 1 ( s ) − Φ 2 ( s ) ] d s .$
Consequently
$| | Φ 1 ( t ) − Φ 2 ( t ) | | < ϵ + ∫ 0 t K | | Φ 1 ( s ) − Φ 2 ( s ) | | d s$
for some constant $K > 0$, arbitrarily small $ϵ > 0$, and $0 ≤ t ≤ T$. From Gronwall’s inequality (see page 24 of Reference [21]) we have $| | Φ 1 ( t ) − Φ 2 ( t ) | | ≤ ϵ exp ( K t )$. However, since $ϵ$ is arbitrarily small, we have $| | Φ 1 ( t ) − Φ 2 ( t ) | | = 0$ and consequently $Φ 1 ( t ) = Φ 2 ( t )$.
To demonstrate continuous dependence on the kernel matrix $A ( d s )$, consider two solutions $Φ 1 ( t )$ and $Φ 2 ( t )$ of
$Φ 1 ( t ) = I + ∫ 0 t B 1 ( t − s ) Φ 1 ( s ) d s and Φ 2 ( t ) = I + ∫ 0 t B 2 ( t − s ) Φ 2 ( s ) d s .$
We can write
$Φ 1 ( t ) − Φ 2 ( t ) = ∫ 0 t B 1 ( t − s ) [ Φ 1 ( s ) − Φ 2 ( s ) ] d s + ∫ 0 t [ B 1 ( t − s ) − B 2 ( t − s ) ] Φ 2 ( s ) d s ,$
so that
$| | Φ 1 ( t ) − Φ 2 ( t ) | | ≤ ∫ 0 t | | B 1 ( t − s ) | | | | Φ 1 ( s ) − Φ 2 ( s ) | | d s + ∫ 0 t | | B 1 ( t − s ) − B 2 ( t − s ) | | | | Φ 2 ( s ) | | d s .$
Choose $K 1 > 0$ so that $| | B 1 ( s ) | | < K 1$, $K 2 > 0$ so that $| | Φ 2 ( t ) | | < K 2$ and $δ = | | A 1 ( d s ) − A 2 ( d s ) | |$ so that $| | B 1 ( s ) − B 2 ( s ) | | ≤ T δ$. Then
$| | Φ 1 ( t ) − Φ 2 ( t ) | | ≤ ∫ 0 t K 1 | | Φ 1 ( s ) − Φ 2 ( s ) | | d s + K 2 T 2 δ ,$
so that by Gronwall’s inequality we have $| | Φ 1 ( t ) − Φ 2 ( t ) | | ≤ K 2 T 2 δ exp ( K 1 T )$. □
We have already seen an exponential bound $| | Φ ( t ) | | ≤ exp ( K T )$ on the interval $[ 0 , T ]$. A more precise exponential bound is provided by the supremum of the real part of the characteristic roots of the characteristic determinant $det ( Δ ( z ) ) = 0$.
Theorem 3
(Theorem 1.21 of Reference [15]). Let $α 0 = sup { ℜ ( z ) ; det ( Δ ( z ) ) = 0 }$. For each $α > α 0$ there exists $K > 0$ with $| | Φ ( t ) | | ≤ K exp ( α t )$ for $t ≥ 0$.
Remark 3.
The term fundamental solution $Φ ( t )$ has several synonyms in the literature depending on context:
A. Cauchy matrix $C ( t , s )$ for the nonautonomous RFDE
$x ′ ( t ) = ∫ s t A ( t , d τ ) x ( τ ) + f ( t ) , t > s .$
See page 51 of Reference [16]. For the autonomous equation $C ( t , s ) = Φ ( t − s )$.
B. Differential resolvent for the Volterra integro-differential equation
$x ′ ( t ) = ∫ 0 t A ( d s ) x ( t − s ) + f ( t ) , t > 0 .$
See page 77 of Reference [11].
C. Green’s function or impulse response function for RFDE
$x ′ ( t ) = ∫ 0 a A ( d s ) x ( t − s ) + f ( t ) , t > 0 ,$
with $h = 0$ and $f ( t ) = δ ( t )$. In some parts of the literature, for example Reference [16], a Green’s function is associated with a boundary value problem.
Proposition 2.
$∫ 0 t A ( d s ) Φ ( t − s ) = ∫ 0 t Φ ( t − s ) A ( d s ) ,$
so that the kernel matrix and the fundamental solution commute convolutionally.
Proof.
$∫_0^t A ( d s ) Φ ( t − s ) = A ∗ Φ = A ∗ ∑_{n=0}^∞ \frac{1}{n!} A^{∗n} ∗ ( t )^n = ∑_{n=0}^∞ \frac{1}{n!} A^{∗(n+1)} ∗ ( t )^n = ∑_{n=0}^∞ \frac{1}{n!} A^{∗n} ∗ ( t )^n ∗ A = Φ ∗ A = ∫_0^t Φ ( t − s ) A ( d s ) ,$
where the steps involving an infinite sum are justified by uniform convergence. □
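The convolutional commutation can be verified numerically for a single-delay matrix kernel $A ( d s ) = A_0 δ ( s − 1 )$, where $( A ∗ Φ ) ( t ) = A_0 Φ ( t − 1 )$ and $( Φ ∗ A ) ( t ) = Φ ( t − 1 ) A_0$: since $Φ ( t )$ is a power series in $A_0$, the two products agree even for a non-symmetric $A_0$. An illustrative sketch (the matrix $A_0$ and the time t are arbitrary choices):

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, p):
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(p):
        R = matmul(R, A)
    return R

def phi(t, A0):
    # Phi(t) = sum_k A0^k (t - k)_+^k / k!  for the kernel A(ds) = A0 delta(s-1)
    n = len(A0)
    out = [[0.0] * n for _ in range(n)]
    k = 0
    while k <= t:
        c = max(t - k, 0.0)**k / math.factorial(k)
        P = mat_pow(A0, k)
        out = [[out[i][j] + c * P[i][j] for j in range(n)] for i in range(n)]
        k += 1
    return out

A0 = [[0.0, 1.0], [2.0, 3.0]]          # deliberately non-symmetric
t = 3.5
left = matmul(A0, phi(t - 1.0, A0))    # (A * Phi)(t) = A0 Phi(t - 1)
right = matmul(phi(t - 1.0, A0), A0)   # (Phi * A)(t) = Phi(t - 1) A0
```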

#### 2.4. RFDE Solution

Standard choices for the initial function space are:
$F 1 = M 2 ( [ − a , 0 ] , R n )$; h is Lebesgue measurable on $[ − a , 0 ]$, $h ( 0 )$ is well defined, $∫ − a 0 | h ( t ) | 2 d t < ∞$, and $| | h | | 1 = { | h ( 0 ) | 2 + ∫ − a 0 | h ( t ) | 2 d t } 1 / 2$.
$F 2 = C ( [ − a , 0 ] , R n )$, $| | h | | 2 = sup − a ≤ t ≤ 0 | h ( t ) |$.
The standard choice for the forcing function space is $L l o c 1 ( [ 0 , ∞ ) , R n )$.
Results on the existence, uniqueness and continuous dependence on the initial data can be found in Reference [22] for function space $F 1$ and Reference [13] for function space $F 2$.
Theorem 4.
The representation of the solution $x ( t )$ for the RFDE in Equation (1) is
$x ( t ) = Φ ( t ) h ( 0 ) + ∫ − a 0 d α ∫ − α a A ( d s ) Φ ( t − s − α ) h ( α ) + ∫ 0 t Φ ( s ) f ( t − s ) d s .$
Proof.
We first consider the case $h ( 0 ) ≠ 0$, $h ( θ ) = 0$ for $− a ≤ θ < 0$, and $f ≠ 0$. Taking the Laplace transform we obtain
$x ^ ( z ) = Φ ^ ( z ) h ( 0 ) + Φ ^ ( z ) f ^ ( z ) ,$
and taking the Laplace inverse, we have
$x ( t ) = Φ ( t ) h ( 0 ) + ∫ 0 t Φ ( s ) f ( t − s ) d s .$
Now consider the case $h ( 0 ) = 0$, $h ≠ 0$, $f = 0$.
Define
$g ( t ) = ∫ t a A ( d s ) h ( t − s ) , if 0 ≤ t ≤ a , 0 , if t > a .$
We have the equivalent equation
$x ′ ( t ) = ∫ 0 t A ( d s ) x ( t − s ) + g ( t )$
with zero initial data $x ( θ ) = 0$ for $− a ≤ θ ≤ 0$.
This equation has the solution
$x ( t ) = ∫_0^t d s Φ ( t − s ) ∫_s^a A ( d u ) h ( s − u ) = ∫_0^a d s Φ ( t − s ) ∫_s^a A ( d u ) h ( s − u ) = ∫_{−a}^0 d α ∫_{−α}^a [ d s ] Φ ( t − α − s ) A ( d s ) h ( α ) = ∫_{−a}^0 d α ∫_{−α}^a A ( d s ) Φ ( t − α − s ) h ( α ) .$
From linearity, we can combine the two solutions to obtain the complete solution. □
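Theorem 4 can be checked against the classical method of steps in a simple case. For $x ′ ( t ) = a x ( t − 1 )$ with constant initial function $h ≡ 1$ and $f = 0$, the representation reduces to $x ( t ) = Φ ( t ) + a ∫_{−1}^0 Φ ( t − 1 − α ) d α$. A numerical sketch (the coefficient a and the quadrature step are our illustrative choices):

```python
import math

a = 0.7                                  # arbitrary coefficient

def phi(t):
    # fundamental solution of x'(t) = a x(t - 1), with Phi(t) = 0 for t < 0
    if t < 0:
        return 0.0
    s, n = 0.0, 0
    while n < t or n == 0:
        s += a**n * max(t - n, 0.0)**n / math.factorial(n)
        n += 1
    return s

def x_rep(t, h=1e-4):
    # Theorem 4 specialized to A(ds) = a*delta(s - 1), h(theta) = 1, f = 0:
    # x(t) = Phi(t) h(0) + a * int_{-1}^{0} Phi(t - 1 - alpha) d(alpha),
    # with the history integral evaluated by the midpoint rule
    total = sum(phi(t - 1.0 - (-1.0 + (k + 0.5) * h))
                for k in range(int(round(1.0 / h))))
    return phi(t) + a * total * h

def x_steps(t):
    # method of steps: exact piecewise-polynomial solution on [0, 2]
    if t <= 1.0:
        return 1.0 + a * t
    return 1.0 + a + a * (t - 1.0) + a**2 * (t - 1.0)**2 / 2.0
```

The two computations agree on $[ 0 , 2 ]$ up to quadrature error, illustrating that the representation gives the exact solution in explicit form.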
Remark 4.
Note that the fundamental solution in combination with the representation of the solution gives an exact expression in explicit form for the solution of an RFDE.
Remark 5.
Note that the initial function h and forcing function f enter into the three constituents of the RFDE solution $x ( t )$ as
1.
Initial function point value $h ( 0 )$ (with no history) in $Φ ( t ) h ( 0 )$,
2.
Initial function h (with history and hereditary information) as an integrand in $∫ − a 0 d α ∫ − α a A ( d s ) Φ ( t − s − α ) h ( α )$,
3.
Forcing function f as a convolution with the fundamental solution $Φ ( t )$ in $∫ 0 t Φ ( s ) f ( t − s ) d s$.
Remark 6.
The integration region R for the double integral is
$R = { ( α , s ) ; − s ≤ α ≤ 0 , 0 ≤ s ≤ a } = { ( α , s ) ; − α ≤ s ≤ a , − a ≤ α ≤ 0 } .$
The alternative expressions for the double integral are:
$∫_{−a}^0 d α ∫_{−α}^a A ( d s ) Φ ( t − s − α ) h ( α ) = ∫_{−a}^0 d α ∫_{−α}^a [ d s ] Φ ( t − s − α ) A ( d s ) h ( α ) = ∫_0^a [ d s ] ∫_{−s}^0 d α Φ ( t − s − α ) A ( d s ) h ( α ) = ∫_0^a [ d s ] ∫_{−s}^0 d α A ( d s ) Φ ( t − s − α ) h ( α ) .$
The change in the order of integration in the double integral is justified by the Fubini theorem. The change in the order of the kernel matrix and the fundamental solution is justified since they commute convolutionally.

## 3. Application of Algebraic Graph Theory

In this section we apply algebraic graph theory to examine and to provide a pictorial rendition of the system structure and system dynamics of an RFDE. We also use the theory to characterize the system connectivity of the RFDE, and to identify its strong and weak components. The main reference for the application of algebraic graph theory to RFDEs is the excellent book [23].

#### 3.1. Weighted Loop-Digraph Representation

We consider a weighted loop-digraph representation for the mathematical objects of the kernel matrix $A ( d s )$ and its convolutional powers $A n ( d s )$. These mathematical objects are prominent aspects of the RFDE and its fundamental solution $Φ ( t )$.
In component form the fundamental solution $Φ ( t )$ is
$Φ i j ( t ) = H ( t ) δ i j + ∑ n = 1 ∞ 1 n ! ∫ 0 t a i j n ( d s ) ( t − s ) n ,$
where
$δ i j = 1 , if i = j , 0 , if i ≠ j .$
The component form expression for the RFDE is
$x_i ′ ( t ) = ∑_{j=1}^n ∫_0^a a_{ij} ( d s ) x_j ( t − s ) = ⋯ + ∫_0^a a_{ii} ( d s ) x_i ( t − s ) + ⋯ + ∫_0^a a_{ij} ( d s ) x_j ( t − s ) + ⋯ .$
Likewise
$x j ′ ( t ) = ⋯ + ∫ 0 a a j j ( d s ) x j ( t − s ) + ⋯ + ∫ 0 a a j i ( d s ) x i ( t − s ) + … .$
Figure 2 depicts the network structure and interactive systems dynamics for a general RFDE in the form of a weighted loop-digraph, albeit for simplicity focusing only on two nodes i and j.
Definition 5.
A weighted loop-directed graph or loop-digraph is an ordered triple $G = ( V , E , W )$ where V is a set of elements called vertices (or nodes, points), and E is a set of ordered pairs of vertices called directed edges and loops. The directed edges connect two distinct vertices in the given order. Loops connect a specific vertex with itself. W is the weight assigned to each directed edge and loop.
Although for the most part the book [23] focuses on directed graphs without loops and without weights, the applied results in this paper are applicable to directed graphs with loops and weights.
Figure 2 shows a weighted loop-digraph for an n-vector RFDE. The weights assigned to the directed edge $e i j$ and the loop $e i i$ are the Borel measures $a i j ( d s )$ and $a i i ( d s )$ in $B c s +$, respectively. By convention the direction of the directed edge $e i j$ is from i to j. However, note that in a physical interpretation the influence of node j on the system dynamics at node i is represented by the Borel measure $a i j ( d s )$ in the opposite direction. This consideration should be borne in mind in interpreting the system dynamics from the loop-digraph.
For the components of the second convolutional power $A 2 ( d s )$ we have
$a i j 2 ( d s ) = ∑ k = 1 n ( a i k ∗ a k j ) ( d s ) .$
This has the interpretation that the Borel measure $a i j 2 ( d s )$ is the sum of convolutions of the Borel measures encountered on the directed edges and loops on a walk of length 2 from vertex i to vertex j. More generally the Borel measure $a i j n ( d s )$ is the sum of convolutions of the Borel measures encountered on the directed edges and loops on a walk of length n from vertex i to vertex j. The Borel measures $a i j n ( d s )$ thus have a straightforward interpretation in the language of algebraic graph theory.
The following examples illustrate the concepts.
Example 1.
We have the RFDE
$x 1 ′ ( t ) = x 2 ( t )$
$x 2 ′ ( t ) = a x 1 ( t − 1 ) .$
The kernel matrix and its convolutional powers are
$A ( d s ) = \begin{pmatrix} 0 & δ ( s ) \\ a δ ( s − 1 ) & 0 \end{pmatrix} ,$
$A^{2n} ( d s ) = \begin{pmatrix} a^n δ ( s − n ) & 0 \\ 0 & a^n δ ( s − n ) \end{pmatrix} ,$
$A^{2n+1} ( d s ) = \begin{pmatrix} 0 & a^n δ ( s − n ) \\ a^{n+1} δ ( s − n − 1 ) & 0 \end{pmatrix} .$
The fundamental solution is
$Φ ( t ) = \begin{pmatrix} Φ_{11} ( t ) & Φ_{12} ( t ) \\ Φ_{21} ( t ) & Φ_{22} ( t ) \end{pmatrix} ,$
where
$Φ_{11} ( t ) = Φ_{22} ( t ) = ∑_{n=0}^∞ \frac{1}{(2n)!} a^n ( t − n )_+^{2n} ,$
$Φ_{12} ( t ) = ∑_{n=0}^∞ \frac{1}{(2n+1)!} a^n ( t − n )_+^{2n+1} ,$
$Φ_{21} ( t ) = ∑_{n=0}^∞ \frac{1}{(2n+1)!} a^{n+1} ( t − n − 1 )_+^{2n+1} ,$
where
$( t )_+ = t , if t > 0 , 0 , if t ≤ 0 .$
The weighted loop-digraphs for the kernel matrix and its convolutional powers for the RFDE are shown in Figure 3.
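The convolutional powers in Example 1 can be reproduced mechanically with the atomic-measure arithmetic of Section 2. An illustrative sketch (entries are dictionaries mapping atom location to mass, and the value $a = 2$ is an arbitrary choice):

```python
from functools import reduce

def conv(p, q):
    # convolution of atomic measures {location: mass}
    out = {}
    for u, pu in p.items():
        for v, qv in q.items():
            out[u + v] = out.get(u + v, 0.0) + pu * qv
    return out

def add(p, q):
    # sum of atomic measures
    out = dict(p)
    for v, qv in q.items():
        out[v] = out.get(v, 0.0) + qv
    return out

def mat_conv(A, B):
    # c_ij(ds) = sum_k a_ik(ds) * b_kj(ds), the matrix convolution product
    n = len(A)
    return [[reduce(add, (conv(A[i][k], B[k][j]) for k in range(n)), {})
             for j in range(n)] for i in range(n)]

a = 2.0                                  # arbitrary coefficient for Example 1
A = [[{}, {0.0: 1.0}],                   # row 1: 0, delta(s)
     [{1.0: a}, {}]]                     # row 2: a delta(s-1), 0
A2 = mat_conv(A, A)
A3 = mat_conv(A2, A)
```

The output matches the displayed powers: $A^2 ( d s )$ is diagonal with atoms $a δ ( s − 1 )$, and $A^3 ( d s )$ has the atoms $a δ ( s − 1 )$ and $a^2 δ ( s − 2 )$ in the off-diagonal positions.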
Example 2.
The RFDE is
$x 1 ′ ( t ) = a 11 x 1 ( t − 1 )$
$x 2 ′ ( t ) = a 22 x 2 ( t − 1 ) .$
The kernel matrix and its convolutional powers are
$A ( d s ) = \begin{pmatrix} a_{11} δ ( s − 1 ) & 0 \\ 0 & a_{22} δ ( s − 1 ) \end{pmatrix} ,$
$A^n ( d s ) = \begin{pmatrix} a_{11}^n δ ( s − n ) & 0 \\ 0 & a_{22}^n δ ( s − n ) \end{pmatrix} .$
The fundamental solution is
$Φ ( t ) = \begin{pmatrix} Φ_{11} ( t ) & 0 \\ 0 & Φ_{22} ( t ) \end{pmatrix} ,$
where
$Φ 11 ( t ) = ∑ n = 0 ∞ 1 n ! a 11 n ( t − n ) + n ,$
$Φ 22 ( t ) = ∑ n = 0 ∞ 1 n ! a 22 n ( t − n ) + n .$
The weighted loop-digraphs for the kernel matrix and its convolutional powers for the RFDE are shown in Figure 4.

#### 3.2. Strong and Weak Connectivity

Notice that in the first example of the previous subsection the two components of the RFDE are interconnected and interact, whereas in the second example the two components are disconnected and do not interact. The nature of the interaction between the RFDE components is obvious when the number of components is small, but less so when the number is large.
In this subsection we apply material from Chapters 2, 3, 5, and 14 of Reference [23] to characterize the connectivity between RFDE components and to classify the nature of the connectivity. Notice that this approach focuses on the structure of the RFDE, and is not concerned with the dynamics produced by the Borel measures, i.e., the weights on the directed edges and loops.
We start with some definitions.
Definition 6.
A path from $v 1$ to $v n$ is an ordered collection of vertices $v 1 , v 2 ⋯ v n$ interspersed with an ordered collection of directed edges $v 1 v 2 , v 2 v 3 , ⋯ , v n − 1 v n$.
A semipath from $v 1$ to $v n$ is an ordered collection of vertices $v 1 , v 2 ⋯ v n$ interspersed with an ordered collection of $n − 1$ directed edges, one from each pair of directed edges $v 1 v 2$ or $v 2 v 1$, $v 2 v 3$ or $v 3 v 2$, ⋯, $v n − 1 v n$ or $v n v n − 1$.
A strict semipath is a semipath that is not a path.
A vertex v is reachable from a vertex u if there is a path from u to v.
Points u and v are 0-connected if they are not joined by a semipath; 1-connected if they are joined by a semipath but not a path; 2-connected if they are joined by a path in one direction but not the other; 3-connected if they are joined by paths in both directions.
A loop-digraph is strongly connected or strong if every two vertices are mutually reachable.
A loop-digraph is unilaterally connected or unilateral if for any two points, at least one is reachable from the other; it is strictly unilateral if it is unilateral but not strong.
A loop-digraph is weakly connected or weak if every two vertices are joined by a semipath; it is strictly weak if it is weak but not unilateral.
A subgraph of a loop-digraph D is a loop-digraph in which the vertices and loops/directed edges are vertices and loops/directed edges of D.
A strong component of a loop-digraph is a maximal strong loop-digraph.
A unilateral component of a loop-digraph is a maximal unilateral loop-digraph.
A weak component of a loop-digraph is a maximal weak loop-digraph.
Theorem 5
(Theorem 3.2 of Reference [23]). Every vertex and every directed edge is contained in exactly one weak component.
Every vertex and every directed edge is contained in at least one unilateral component.
Every vertex and every directed edge is contained in exactly one strong component.
We shall now proceed to demonstrate how to decompose a loop-digraph into strong and weak components.
The reachability matrix $R = [ r i j ]$ is defined as
$r i j = 1 , if vertex j is reachable from vertex i , 0 , if vertex j is not reachable from vertex i .$
In terms of the convolutional powers $A k ( d s )$ for $0 ≤ k ≤ n − 1$ of the kernel matrix $A ( d s )$, the reachability matrix $R = [ r i j ]$ is given by
$r i j = 1 , if a i j k ( d s ) ≠ 0 for some k , 0 ≤ k ≤ n − 1 , 0 , otherwise .$
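The reachability matrix can be computed from the Boolean pattern of the kernel matrix by accumulating Boolean matrix powers up to order $n − 1$, with the $k = 0$ (identity) term making every vertex reachable from itself. An illustrative sketch (the sample digraph is an arbitrary choice):

```python
def reachability(adj):
    """Boolean reachability: r_ij = 1 iff the (i, j) entry of some convolutional
    power A^k(ds), 0 <= k <= n-1, is nonzero. Here adj is the Boolean pattern
    of the kernel matrix, and the k = 0 identity term gives self-reachability."""
    n = len(adj)
    reach = [[int(i == j) for j in range(n)] for i in range(n)]   # k = 0
    power = [row[:] for row in reach]
    for _ in range(1, n):
        # Boolean matrix product: pattern of the next convolutional power
        power = [[int(any(power[i][k] and adj[k][j] for k in range(n)))
                  for j in range(n)] for i in range(n)]
        reach = [[int(reach[i][j] or power[i][j]) for j in range(n)]
                 for i in range(n)]
    return reach

chain = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]    # sample digraph v1 -> v2 -> v3
```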
Other variants of the reachability matrix come into play:
The transpose reachability matrix $R T = [ r i j T ]$ is given by
$r i j T = 1 , if vertex i is reachable from vertex j , 0 , if vertex i is not reachable from vertex j .$
The symmetric reachability matrix $R S = [ r i j S ]$ arises from the symmetrized kernel matrix $A S ( d s )$ where $a i j S ( d s ) = a i j ( d s ) + a j i ( d s )$. It is given by
$r_{ij}^S = 1$, if vertex $i$ is reachable from vertex $j$ or vertex $j$ is reachable from vertex $i$; $0$, otherwise.
The element-wise product $R × R^T = [ r_{ij}^P ] = [ r_{ij} r_{ji} ]$ is given by
$r_{ij}^P = 1$, if vertex $i$ is reachable from vertex $j$ and vertex $j$ is reachable from vertex $i$; $0$, otherwise.
The following theorem says that the strong components of the RFDE are determined by the elementwise product reachability matrix $R × R T$.
Theorem 6
(Theorem 5.8 of Reference [23]). The strong component containing the vertex $v i$ is given by the entries of 1 in the ith row (or column) in the elementwise product reachability matrix $R × R T$.
The following theorem says that the weak components of the RFDE are determined by the symmetric reachability matrix $R S$.
Theorem 7
(Corollary 5.15a of Reference [23]). The weak component containing the vertex $v i$ is given by the entries of 1 in the ith row (or column) in the symmetric reachability matrix $R S$.
The connectedness matrix $C = [ c i j ]$ is obtained from the reachability matrix $R = [ r i j ]$ as follows:
If $v i$ and $v j$ are in the same weak component $c i j = r i j + r j i + 1$.
Otherwise $c i j = 0$.
The connectedness matrix $C = [ c i j ]$ takes the values
$c_{ij} = 0$, if vertices i and j are not connected by a semipath (i.e., they are disconnected); $1$, if vertices i and j are connected by a strict semipath; $2$, if vertices i and j are connected by a path in one direction but not the other; $3$, if vertices i and j are connected by paths in both directions.
Definition 7.
A permutation matrix $P ∈ M n ( B c s + )$ has exactly one entry in each row and in each column set to $δ ( d s )$. All other entries are 0. A permutation matrix P can be construed as relabelling the vertices in a loop-digraph.
A $n × n$ matrix $M ∈ M n ( B c s + )$ is said to be decomposable if a permutation matrix $P ∈ M n ( B c s + )$ such that
$P^{T} M P = \begin{pmatrix} M_{11} & O_{r \times (n-r)} \\ O_{(n-r) \times r} & M_{22} \end{pmatrix},$
where $M_{11}$ and $M_{22}$ are square matrices of sizes $r \times r$ and $(n-r) \times (n-r)$ respectively, and $O_{r \times (n-r)}$ and $O_{(n-r) \times r}$ are zero matrices of the indicated sizes.
Theorem 8
(Theorem 5.17 of Reference [23]). The following conditions are equivalent.
1. The loop-digraph is disconnected.
2. Its kernel matrix is decomposable.
3. Its reachability matrix is decomposable.
Theorem 9.
Suppose that the loop-digraph for an RFDE is disconnected so that the kernel matrix $A ( d s )$ is decomposable in the form
$A(ds) = \begin{pmatrix} A_{11}(ds) & O_{r \times (n-r)} \\ O_{(n-r) \times r} & A_{22}(ds) \end{pmatrix}.$
Then the fundamental solution $Φ ( t )$ is also decomposable in the form
$\Phi(t) = \begin{pmatrix} \Phi_{11}(t) & O_{r \times (n-r)} \\ O_{(n-r) \times r} & \Phi_{22}(t) \end{pmatrix},$
where $Φ 11 ( t )$ and $Φ 22 ( t )$ are the fundamental solutions corresponding respectively to the kernel matrices $A 11 ( d s )$ and $A 22 ( d s )$.
In general, if the loop-digraph for an RFDE has p weak components, the kernel matrix $A ( d s )$ can be decomposed into the form
$A(ds) = \operatorname{diag}\big( A_{11}(ds),\, A_{22}(ds),\, \dots,\, A_{pp}(ds) \big),$
and the fundamental solution can also be decomposed into a corresponding form
$\Phi(t) = \operatorname{diag}\big( \Phi_{11}(t),\, \Phi_{22}(t),\, \dots,\, \Phi_{pp}(t) \big),$
where $Φ i i ( t )$ is the fundamental solution for the kernel matrix $A i i ( d s )$.
Example 3.
To illustrate the results, we consider the RFDE weighted loop-digraph shown in Figure 5.
The kernel matrix $A ( d s )$ is
$A(ds) = \begin{pmatrix} 0 & a_{12} & 0 & 0 & 0 & 0 \\ a_{21} & 0 & a_{23} & 0 & 0 & 0 \\ 0 & a_{32} & 0 & 0 & 0 & a_{36} \\ 0 & 0 & 0 & 0 & a_{45} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & a_{66} \end{pmatrix},$
where rows and columns are indexed by $v_1, \dots, v_6$, and the dummy variable $(ds)$ is suppressed within the matrix to reduce clutter.
The symmetric kernel matrix $A S ( d s )$ is
$A^{S}(ds) = \begin{pmatrix} 0 & a_{12}+a_{21} & 0 & 0 & 0 & 0 \\ a_{12}+a_{21} & 0 & a_{23}+a_{32} & 0 & 0 & 0 \\ 0 & a_{23}+a_{32} & 0 & 0 & 0 & a_{36} \\ 0 & 0 & 0 & 0 & a_{45} & 0 \\ 0 & 0 & 0 & a_{45} & 0 & 0 \\ 0 & 0 & a_{36} & 0 & 0 & 2\, a_{66} \end{pmatrix}.$
The reachability matrix $R$ is
$R = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$
The transpose reachability matrix $R T$ is
$R^{T} = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 \end{pmatrix}.$
The symmetric reachability matrix $R S$ is
$R^{S} = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 \end{pmatrix}.$
This yields the weak components ${ v 1 , v 2 , v 3 , v 6 }$ and ${ v 4 , v 5 }$.
The element-wise product reachability matrix $R × R T$ is
$R \times R^{T} = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$
This yields the strong components ${ v 1 , v 2 , v 3 }$, ${ v 4 }$, ${ v 5 }$, and ${ v 6 }$.
The connectedness matrix C is given by
$C = \begin{pmatrix} 3 & 3 & 3 & 0 & 0 & 2 \\ 3 & 3 & 3 & 0 & 0 & 2 \\ 3 & 3 & 3 & 0 & 0 & 2 \\ 0 & 0 & 0 & 3 & 2 & 0 \\ 0 & 0 & 0 & 2 & 3 & 0 \\ 2 & 2 & 2 & 0 & 0 & 3 \end{pmatrix}.$
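The matrices of Example 3 can be checked mechanically. The following Python sketch (an illustrative aid, not part of the original text) works with only the zero/nonzero pattern of the kernel matrix, recomputes $R$, $R \times R^{T}$, $R^{S}$, and $C$, and reads off the components.

```python
import numpy as np

def reachability(adj):
    # Warshall closure over the Boolean semiring
    n = len(adj)
    R = (np.eye(n, dtype=int) + np.asarray(adj) > 0).astype(int)
    for k in range(n):
        R = (R + np.outer(R[:, k], R[k, :]) > 0).astype(int)
    return R

# nonzero pattern of the Example 3 kernel matrix (rows/columns v1..v6)
A = np.array([[0, 1, 0, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [0, 1, 0, 0, 0, 1],
              [0, 0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 1]])

R  = reachability(A)
RS = reachability(((A + A.T) > 0).astype(int))
RP = R * R.T
C  = np.where(RS > 0, R + R.T + 1, 0)

def components(M):
    # vertices sharing identical rows of 1s form one component
    seen = []
    for i in range(len(M)):
        members = tuple(np.flatnonzero(M[i]))
        if members not in seen:
            seen.append(members)
    return seen

strong = components(RP)   # [(0, 1, 2), (3,), (4,), (5,)]  i.e. {v1,v2,v3}, {v4}, {v5}, {v6}
weak   = components(RS)   # [(0, 1, 2, 5), (3, 4)]         i.e. {v1,v2,v3,v6}, {v4,v5}
```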
By applying the appropriate permutation matrix, or simply interchanging the rows and columns of the kernel matrix $A(ds)$ in accordance with the weak components, the kernel matrix is expressed in decomposed form, as follows:
The kernel matrix $A(ds)$, with rows and columns reordered as $v_1, v_2, v_3, v_6, v_4, v_5$, is
$A(ds) = \begin{pmatrix} 0 & a_{12} & 0 & 0 & 0 & 0 \\ a_{21} & 0 & a_{23} & 0 & 0 & 0 \\ 0 & a_{32} & 0 & a_{36} & 0 & 0 \\ 0 & 0 & 0 & a_{66} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & a_{45} \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$
The kernel matrix $A ( d s )$ is expressed succinctly in the decomposed form
$A(ds) = \begin{pmatrix} A_{11}(ds) & O_{4 \times 2} \\ O_{2 \times 4} & A_{22}(ds) \end{pmatrix},$
where $A 11 ( d s )$ and $A 22 ( d s )$ are respectively $4 × 4$ and $2 × 2$ kernel matrices, and $O 2 × 4$ and $O 4 × 2$ are respectively $2 × 4$ and $4 × 2$ zero matrices.
The fundamental solution $Φ ( t )$ can be expressed in decomposed form
$\Phi(t) = \begin{pmatrix} \Phi_{11}(t) & O_{4 \times 2} \\ O_{2 \times 4} & \Phi_{22}(t) \end{pmatrix},$
where $Φ 11 ( t )$ and $Φ 22 ( t )$ are respectively the fundamental solutions for the kernel matrices $A 11 ( d s )$ and $A 22 ( d s )$.
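Theorem 8 can also be verified directly on the Example 3 pattern: relabelling the vertices in the order $(v_1, v_2, v_3, v_6, v_4, v_5)$ through a permutation matrix produces the block-diagonal form. A short Python sketch (illustrative only):

```python
import numpy as np

# zero/nonzero pattern of the Example 3 kernel matrix (rows/columns v1..v6)
M = np.array([[0, 1, 0, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [0, 1, 0, 0, 0, 1],
              [0, 0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 1]])

order = [0, 1, 2, 5, 3, 4]            # weak components {v1,v2,v3,v6} then {v4,v5}
P = np.eye(6, dtype=int)[:, order]    # permutation matrix: column j is e_{order[j]}
D = P.T @ M @ P                       # relabelled kernel pattern: D[i,j] = M[order[i], order[j]]

off_upper = D[:4, 4:]                 # should be the O_{4x2} zero block
off_lower = D[4:, :4]                 # should be the O_{2x4} zero block
```

The off-diagonal blocks vanish, so the relabelled pattern is decomposed into a $4 \times 4$ and a $2 \times 2$ diagonal block, in agreement with the decomposition displayed above.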

## 4. Characteristics of Fundamental Solution

#### 4.1. Fundamental Solution is Laplace Inverse of Resolvent Matrix

The fundamental solution satisfies the Volterra integro-differential equation of convolutional type
$Φ ′ ( t ) = ∫ 0 t A ( d s ) Φ ( t − s ) , t > 0 , Φ ( 0 ) = I .$
Taking the Laplace transform, we have
$z \hat{\Phi}(z) - I = \left( \int_0^a A(ds)\, e^{-zs} \right) \hat{\Phi}(z)$
$\Delta(z)\, \hat{\Phi}(z) = I$
$\hat{\Phi}(z) = \Delta^{-1}(z)$
$\Phi(t) = \mathcal{L}^{-1}\left[ \Delta^{-1}(z) \right].$
Consequently, the fundamental solution is the Laplace inverse of the resolvent matrix $Δ − 1 ( z )$. Alternatively, the Laplace transform of the fundamental solution $Φ ( t )$ is the resolvent matrix $Δ − 1 ( z )$.
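The identity $\hat{\Phi}(z) = \Delta^{-1}(z)$ can be checked numerically. The sketch below (illustrative, with hypothetical values $A = -1$, $a = 1$) treats the scalar delay equation $x'(t) = A x(t-a)$, whose fundamental solution is the truncated-power series $\Phi(t) = \sum_n A^n (t - na)_+^n / n!$, and compares a quadrature of the Laplace integral with the resolvent $1/\Delta(z)$.

```python
import numpy as np

A, a = -1.0, 1.0     # hypothetical scalar kernel: A(ds) = A * delta(s - a) ds

def phi(t, terms=60):
    """Fundamental solution of x'(t) = A x(t - a) as a truncated-power series."""
    t = np.asarray(t, dtype=float)
    s, c = np.zeros_like(t), 1.0      # c = A^n / n!, updated recursively
    for n in range(terms):
        s += c * np.maximum(t - n * a, 0.0)**n
        c *= A / (n + 1)
    return s

# numerical Laplace transform of Phi at z = 1 versus the resolvent 1 / Delta(z)
z = 1.0
t = np.linspace(0.0, 30.0, 30001)
y = phi(t) * np.exp(-z * t)
phi_hat = float(np.sum(y[1:] + y[:-1]) * 0.5 * (t[1] - t[0]))   # trapezoid rule
resolvent = 1.0 / (z - A * np.exp(-z * a))                      # Delta(z) = z - A e^{-za}
```

The two values agree to the accuracy of the quadrature; the delay $a = 1 < \pi/2$ keeps the equation stable, so the Laplace integral converges at $z = 1$.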

#### 4.2. Tauberian Behavior

From the previous subsection, we have
$\hat{\Phi}(z) = \int_0^\infty \Phi(t)\, e^{-zt}\, dt = \Delta^{-1}(z) = \left( zI - \int_0^a A(ds)\, e^{-zs} \right)^{-1}.$
Assume that all the characteristic roots of the characteristic determinant $\det(\Delta(z))$ have negative real part, so that $\alpha_0 < 0$. Choose $\alpha$ with $\alpha_0 < \alpha \le 0$. Then $\Phi(t)\exp(-\alpha t)$ is integrable and we have
$\int_0^\infty \Phi(t)\, e^{-\alpha t}\, dt = \left( \alpha I - \int_0^a A(ds)\, e^{-\alpha s} \right)^{-1}.$
In particular for $α = 0$ we have
$\int_0^\infty \Phi(t)\, dt = \left( - \int_0^a A(ds) \right)^{-1}.$
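As a hypothetical scalar check (not from the paper): for $x'(t) = -x(t-a)$ with $a = 0.1$, all characteristic roots satisfy $\mathrm{Re}\, z < 0$ (the dominant root is near $-1.118$), and the formula predicts $\int_0^\infty \Phi(t)\, dt = (-(-1))^{-1} = 1$.

```python
import numpy as np

A, a = -1.0, 0.1     # x'(t) = -x(t - 0.1): Phi decays exponentially

def phi(t, terms=220):
    # truncated-power series Phi(t) = sum_n A^n (t - n a)_+^n / n!
    t = np.asarray(t, dtype=float)
    s, c = np.zeros_like(t), 1.0      # c = A^n / n!, updated recursively in float
    for n in range(terms):
        s += c * np.maximum(t - n * a, 0.0)**n
        c *= A / (n + 1)
    return s

# trapezoid quadrature of Phi over [0, 20]; the tail beyond 20 is ~ e^{-22}
t = np.linspace(0.0, 20.0, 20001)
y = phi(t)
integral = float(np.sum(y[1:] + y[:-1]) * 0.5 * (t[1] - t[0]))
```

The computed integral is 1 to within the quadrature and truncation error, consistent with the Tauberian value above.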

#### 4.3. Contour Integral Version of Fundamental Solution

Let us consider the expression for the fundamental solution $Φ c i ( t )$ in the form of Laplace inverse as a contour integral in the z-plane
$Φ c i ( t ) = 1 2 π i ∫ ( c ) e z t Δ − 1 ( z ) d z = 1 2 π i ∫ ( c ) e z t adj ( Δ ( z ) ) det ( Δ ( z ) ) d z .$
For notational convenience, we put $C ( z ) = adj ( Δ ( z ) )$, $D ( z ) = det ( Δ ( z ) )$, and $E ( z ) = 1 / det ( Δ ( z ) )$.
To obtain the contribution to the integral of a characteristic root $λ r$ of $det ( Δ ( λ ) ) = 0$ with multiplicity $m r$, we consider the integral over a small circle centered at $λ r$ containing no other characteristic roots of $det ( Δ ( λ ) ) = 0$.
Following the approach in Reference [10], we have
$e z t = e λ r t e ( z − λ r ) t = e λ r t ∑ n = 0 t n n ! ( z − λ r ) n ,$
$C ( z ) = ∑ n = 0 1 n ! C ( n ) ( λ r ) ( z − λ r ) n ,$
$D ( z ) = ( z − λ r ) m r ∑ n = 0 1 ( n + m r ) ! D ( ( n + m r ) ) ( λ r ) ( z − λ r ) n .$
We have that $E ( z )$, the reciprocal function of $D ( z )$, is given by
$E ( z ) = ( z − λ r ) − m r ∑ n = 0 E n ( λ r ) ( z − λ r ) n ,$
where $E 0 ( λ r ) = m r ! / D ( ( m r ) ) ( λ r )$, and $E n ( λ r )$ for $n ≥ 1$ is obtained recursively from
$∑ m = 0 n 1 ( m + m r ) ! E n − m ( λ r ) D ( ( m + m r ) ) ( λ r ) = 0 .$
We have
$e^{zt}\, \Delta^{-1}(z) = e^{\lambda_r t}\, (z-\lambda_r)^{-m_r} \sum_{k_1, k_2, k_3 = 0}^{\infty} \frac{t^{k_1}}{k_1!\, k_2!}\, C^{(k_2)}(\lambda_r)\, E_{k_3}(\lambda_r)\, (z-\lambda_r)^{k_1+k_2+k_3}.$
The residue for the characteristic root $\lambda_r$ is the coefficient of $(z-\lambda_r)^{-1}$, corresponding to $k = k_1+k_2+k_3 = m_r - 1$. It is given by
$e^{\lambda_r t} \sum_{k_1+k_2+k_3 = m_r-1} \frac{t^{k_1}}{k_1!\, k_2!}\, C^{(k_2)}(\lambda_r)\, E_{k_3}(\lambda_r).$
The matrix coefficient $Ψ r j$ is given by
$\Psi_{rj} = \frac{1}{j!} \sum_{k_2+k_3 = m_r-1-j} \frac{1}{k_2!}\, C^{(k_2)}(\lambda_r)\, E_{k_3}(\lambda_r).$
For example, $\Psi_{r, m_r-1} = m_r\, C(\lambda_r) / D^{(m_r)}(\lambda_r)$, and for $m_r = 1$, $\Psi_{r,0} = C(\lambda_r) / D'(\lambda_r)$.
We have arrived at the following theorem:
Theorem 10.
Let ${ λ r }$ be the characteristic roots of the characteristic determinant $det ( Δ ( z ) ) = 0$ with multiplicity $m r$. The version of the fundamental solution $Φ c i ( t )$ arising from a contour integral has the form of spectral decomposition of exponential solutions
$Φ c i ( t ) = ∑ r ∑ j = 0 m r − 1 Ψ r j t j exp ( λ r t ) ,$
where
$Ψ r j = 1 j ! ∑ k 2 + k 3 = m r − 1 − j 1 k 2 ! C ( k 2 ) ( λ r ) E k 3 ( λ r ) .$
In the case that all characteristic roots $λ r$ are simple with multiplicity $m r = 1$ we have
$Φ c i ( t ) = ∑ r Ψ r exp ( λ r t ) ,$
where $Ψ r = C ( λ r ) / D ′ ( λ r )$. For the scalar case $n = 1$, $C ( z ) ≡ 1$ so that $Ψ r = 1 / D ′ ( λ r )$.
Example 4.
Consider
$x 1 ′ ( t ) = A 1 x 2 ( t − a )$
$x 2 ′ ( t ) = A 2 x 3 ( t − a )$
$x 3 ′ ( t ) = 0 .$
$A(ds) = \begin{pmatrix} 0 & A_1\, \delta(s-a) & 0 \\ 0 & 0 & A_2\, \delta(s-a) \\ 0 & 0 & 0 \end{pmatrix}.$
$A^{2}(ds) = \begin{pmatrix} 0 & 0 & A_1 A_2\, \delta(s-2a) \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$
$A n ( d s ) = 0$ for $n ≥ 3$ so that $A ( d s )$ is a nilpotent matrix with index 3. The weighted loop-digraphs for the kernel matrix and its convolutional powers for the RFDE are shown in Figure 6.
The fundamental matrix $\Phi(t)$ is
$\Phi(t) = \begin{pmatrix} H(t) & A_1\, (t-a)_+ & \frac{1}{2} A_1 A_2\, (t-2a)_+^2 \\ 0 & H(t) & A_2\, (t-a)_+ \\ 0 & 0 & H(t) \end{pmatrix},$
where $H(t)$ is the Heaviside step function and $(\cdot)_+$ denotes the truncated power.
$\Delta(z) = \begin{pmatrix} z & -A_1 e^{-za} & 0 \\ 0 & z & -A_2 e^{-za} \\ 0 & 0 & z \end{pmatrix}.$
$C(z) = \begin{pmatrix} z^2 & A_1 z\, e^{-za} & A_1 A_2\, e^{-2za} \\ 0 & z^2 & A_2 z\, e^{-za} \\ 0 & 0 & z^2 \end{pmatrix}.$
$D ( z ) = z 3$.
The contribution to the contour integral is
$∑ k 1 + k 2 = 2 t k 1 k 1 ! k 2 ! C ( k 2 ) ( 0 ) = 1 2 C ( 2 ) ( 0 ) + t C ( 1 ) ( 0 ) + 1 2 t 2 C ( 0 ) .$
The contour integral version of the fundamental solution $Φ c i ( t )$ is given by
$\Phi_{ci}(t) = \begin{pmatrix} 1 & A_1 (t-a) & \frac{1}{2} A_1 A_2\, (t-2a)^2 \\ 0 & 1 & A_2 (t-a) \\ 0 & 0 & 1 \end{pmatrix}.$
Note for this specific example that $Φ c i ( t ) ≡ Φ ( t )$ for $t ≥ 2 a$, but that $Φ c i ( t ) ≠ Φ ( t )$ for $0 ≤ t < 2 a$. As this example has only a finite number of terms in $Φ c i ( t )$ and $Φ ( t )$, convergence is not a factor in the discrepancy. The issue is that $Φ ( t )$ is compelled to contain truncated powers of t to reflect delay activations, whereas by its nature $Φ c i ( t )$ is compelled to contain untruncated powers of t.
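This comparison is easy to confirm numerically; the sketch below uses hypothetical values $A_1 = 2$, $A_2 = 3$, $a = 1$ (any values would do).

```python
import numpy as np

A1, A2, a = 2.0, 3.0, 1.0    # hypothetical parameter values for Example 4

def Phi(t):
    """Fundamental solution: Heaviside step H and truncated powers of t."""
    H = 1.0 if t >= 0 else 0.0
    p1 = max(t - a, 0.0)
    p2 = max(t - 2 * a, 0.0)
    return np.array([[H, A1 * p1, 0.5 * A1 * A2 * p2**2],
                     [0, H,       A2 * p1],
                     [0, 0,       H]])

def Phi_ci(t):
    """Contour-integral version: untruncated powers of t."""
    return np.array([[1, A1 * (t - a), 0.5 * A1 * A2 * (t - 2 * a)**2],
                     [0, 1,            A2 * (t - a)],
                     [0, 0,            1]])
```

Evaluating both at sample points shows $\Phi_{ci}(t) = \Phi(t)$ for $t \ge 2a$ and $\Phi_{ci}(t) \ne \Phi(t)$ for $0 \le t < 2a$, as asserted.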
An open question is to characterize the minimum time $t 0$ for the equality of $Φ c i ( t )$ and $Φ ( t )$ given by $t 0 = inf τ > − a { τ ; Φ c i ( t ) ≡ Φ ( t ) for t > τ }$. The matter has been encountered and considered in the literature (see References [6,7,8,9]) with values of $t 0$ ranging from $− a$ to $n a$.

#### 4.4. Semigroup Relations

The fundamental solution $Φ ( t )$ for the ODE $x ′ ( t ) = A x ( t )$ satisfies the semigroup relation $Φ ( t 1 + t 2 ) = Φ ( t 1 ) Φ ( t 2 )$. The generalization for the fundamental solution $Φ ( t )$ for an RFDE
$x ′ ( t ) = ∫ 0 a A ( d s ) x ( t − s )$
is as follows:
Theorem 11.
For $t_1 \ge 0$ and $t_2 \ge 0$,
$Φ ( t 1 + t 2 ) = Φ ( t 1 ) Φ ( t 2 ) + ∫ − a 0 d α ∫ − α a [ d s ] Φ ( t 1 − s − α ) A ( d s ) Φ ( t 2 + α ) .$
Proof.
This follows from the representation of the solution with $f = 0$ and the consideration that the translate of a solution of an autonomous linear RFDE is also a solution. □
Remark 7.
A related result is found in Reference [24].
Remark 8.
It follows from the representation of solutions that the double integral has the following equivalent forms (see Equation (58)):
$\int_{-a}^{0} d\alpha \int_{-\alpha}^{a} A(ds)\, \Phi(t_1-s-\alpha)\, \Phi(t_2+\alpha) = \int_{-a}^{0} d\alpha \int_{-\alpha}^{a} [ds]\, \Phi(t_1-s-\alpha)\, A(ds)\, \Phi(t_2+\alpha) = \int_{0}^{a} [ds] \int_{-s}^{0} d\alpha\, \Phi(t_1-s-\alpha)\, A(ds)\, \Phi(t_2+\alpha) = \int_{0}^{a} [ds] \int_{-s}^{0} d\alpha\, A(ds)\, \Phi(t_1-s-\alpha)\, \Phi(t_2+\alpha).$

#### 4.5. Semigroup of Solution Operators

In this subsection we consider the role played by the fundamental solution $Φ ( t )$ in solution operators mapping the initial function h to the solution $x ( t ; h )$ in the function spaces $M 2 ( [ − a , 0 ] , R n )$ and $C ( [ − a , 0 ] , R n )$.
The solution $x ( t ; h )$ is given by
$x(t;h) = \Phi(t)\, h(0) + \int_{-a}^{0} d\alpha \int_{-\alpha}^{a} A(ds)\, \Phi(t-s-\alpha)\, h(\alpha) = \Phi(t)\, h(0) + \int_{-a}^{0} d\alpha\, \tilde{\Phi}(t,\alpha)\, h(\alpha),$
where
$Φ ˜ ( t , α ) = ∫ − α a A ( d s ) Φ ( t − s − α ) .$
Remark 9.
Note that a solution operator has to accommodate two aspects of the initial function h: (i) the initial value $h ( 0 )$ acted on by the fundamental solution $Φ ( t )$, and (ii) the initial function h as an integrand with kernel $Φ ˜ ( t , α )$.
For the function space $M 2 ( [ − a , 0 ] , R n ) = R n × L 2 ( [ − a , 0 ] , R n )$ we use the notation $h ¯ = ( h ( 0 ) , h )$ and $x ¯ t ( h ) = ( x ( t ; h ) , x t ( h ) )$, where
$x t ( h ) ( θ ) = x ( t + θ ; h ) , − a ≤ θ ≤ 0 .$
The $M 2 ( [ − a , 0 ] , R n )$ solution operator $S ( t )$ is given by
$S ( t ) h ¯ = x ¯ t ( h ¯ )$
$S ( t ) ( h ( 0 ) , h ) = ( x ( t ; h ) , x t ( h ) ) .$
We write
$S(t)(h(0), h) = \begin{pmatrix} S_{00}(t)\, h(0) + S_{01}(t)\, h \\ S_{10}(t)\, h(0) + S_{11}(t)\, h \end{pmatrix},$
where
$S 00 ( t ) : R n → R n , S 00 ( t ) h ( 0 ) = Φ ( t ) h ( 0 ) ,$
$S 01 ( t ) : L 2 → R n , S 01 ( t ) h = ∫ − a 0 d α Φ ˜ ( t , α ) h ( α ) ,$
$S 10 ( t ) : R n → L 2 , ( S 10 ( t ) h ( 0 ) ) ( θ ) = Φ ( t + θ ) h ( 0 ) ,$
$S 11 ( t ) : L 2 → L 2 , ( S 11 ( t ) h ) ( θ ) = ∫ − a 0 d α Φ ˜ ( t + θ , α ) h ( α ) .$
The $C ( [ − a , 0 ] , R n )$ solution operator $T ( t )$ is given by
$T ( t ) h = x t ( h )$
$(T(t)h)(\theta) = x(t+\theta; h) = \Phi(t+\theta)\, h(0) + \int_{-a}^{0} d\alpha\, \tilde{\Phi}(t+\theta, \alpha)\, h(\alpha).$
The solution operator $S ( t )$ on the function space $M 2 ( [ − a , 0 ] , R n )$ is a strongly continuous semigroup of linear operators for $t ≥ 0$ satisfying: (i) $S ( 0 ) = I$, (ii) $S ( t 1 + t 2 ) = S ( t 1 ) S ( t 2 )$ for $t 1 , t 2 ≥ 0$, and (iii) $lim t ↓ 0 ∥ S ( t ) h − h ∥ = 0$. See References [22,25,26,27]. Likewise for the solution operator $T ( t )$ on the function space $C ( [ − a , 0 ] , R n )$. See References [8,13,15].
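To illustrate the two roles of $h$ concretely, the sketch below evaluates the solution representation for the hypothetical scalar problem $x'(t) = -x(t-1)$ with initial function $h(\theta) = \cos\theta$ (for a single discrete delay, $\tilde{\Phi}(t,\alpha) = A\,\Phi(t-a-\alpha)$), and cross-checks it against a direct method-of-steps (Euler) integration. On $[0, a]$ the representation collapses to $x(t) = h(0) + A\int_0^t h(s-a)\,ds$, so in particular $x(1) = 1 - \sin 1$.

```python
import numpy as np

A, a = -1.0, 1.0
h = np.cos                     # hypothetical initial function on [-a, 0]

def phi(t, terms=60):
    # fundamental solution of x'(t) = A x(t - a) as a truncated-power series
    t = np.asarray(t, dtype=float)
    s, c = np.zeros_like(t), 1.0
    for n in range(terms):
        s += c * np.maximum(t - n * a, 0.0)**n
        c *= A / (n + 1)
    return s * (t >= 0)

def x_rep(t, m=4001):
    """x(t) = Phi(t) h(0) + int_{-a}^0 Phi~(t, al) h(al) dal,
    with Phi~(t, al) = A Phi(t - a - al) for the single discrete delay."""
    al = np.linspace(-a, 0.0, m)
    g = A * phi(t - a - al) * h(al)
    return float(phi(t)) * float(h(0.0)) \
        + float(np.sum(g[1:] + g[:-1]) * 0.5 * (al[1] - al[0]))

# cross-check by the method of steps (Euler on a fine grid)
dt = 1e-4
tg = np.arange(-a, 2.5 + dt / 2, dt)
x = np.where(tg <= 0, h(tg), 0.0)
k = int(round(a / dt))
for i in range(k, len(tg) - 1):          # tg[k] = 0
    x[i + 1] = x[i] + dt * A * x[i - k]  # x'(t) = A x(t - a)
```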

#### 4.6. Picard Iteration

Definition 8.
The $n$-th Picard iteration $\{\Phi_n(t)\}$ for the fundamental solution $\Phi(t)$, based on its Volterra integral equation $\Phi(t) = I + \int_0^t B(s)\, \Phi(t-s)\, ds$, is
$Φ 0 ( t ) = I , Φ n + 1 ( t ) = I + ∫ 0 t B ( s ) Φ n ( t − s ) d s .$
Theorem 12.
The $n$-th Picard iterate $\Phi_n(t)$ is given by
$\Phi_n(t) = I + \sum_{j=1}^{n} \int_0^t B_j(s)\, ds = \sum_{j=0}^{n} \frac{1}{j!} \int_0^t A^{j}(ds)\, (t-s)^{j}.$
Proof.
Arguing by induction, it suffices to show that
$\int_0^t ds\, B(s) \int_0^{t-s} B_n(u)\, du = \int_0^t ds \int_0^{s} du\, B(s-u)\, B_n(u) = \int_0^t B_{n+1}(s)\, ds.$
The first equation in the proof follows from a change in dummy integration variables, and the second from the definition of $B n ( s )$. The second equation in the theorem follows from $∫ 0 t B n ( s ) d s = 1 n ! ∫ 0 t A n ( d s ) ( t − s ) n$. □
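For a single discrete delay the integral equation becomes $\Phi(t) = 1 + A\int_0^{(t-a)_+} \Phi(u)\, du$, and the iterates can be generated on a grid and compared with the closed form of Theorem 12 (a hypothetical scalar check with $A = -1$, $a = 1$):

```python
import numpy as np

A, a = -1.0, 1.0
dt = 1e-3
t = np.arange(0.0, 4.0 + dt / 2, dt)
k = int(round(a / dt))

def closed(n):
    # Theorem 12 closed form: Phi_n(t) = sum_{j=0}^n A^j (t - j a)_+^j / j!
    s, c = np.zeros_like(t), 1.0
    for j in range(n + 1):
        s += c * np.maximum(t - j * a, 0.0)**j
        c *= A / (j + 1)
    return s

Phi_n = np.ones_like(t)                  # Phi_0 = I (scalar: 1)
for _ in range(5):
    # cumulative trapezoid: cum[i] = int_0^{t_i} Phi_n(u) du
    cum = np.concatenate(([0.0], np.cumsum((Phi_n[1:] + Phi_n[:-1]) * 0.5 * dt)))
    delayed = np.concatenate((np.zeros(k), cum[:-k]))   # int_0^{(t-a)_+} Phi_n
    Phi_n = 1.0 + A * delayed

err = float(np.max(np.abs(Phi_n - closed(5))))
```

On $[0, 4]$ the fifth iterate already equals the fundamental solution itself, since the terms with $j \ge 5$ vanish there; `err` reflects only the quadrature error.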

## 5. Extensions to other Functional Differential Equations

#### 5.1. p-th Order RFDE

We consider the fundamental solution $\Phi(t)$ for the p-th order RFDE with n-vector $x(t)$
$x^{(p)}(t) = \int_0^a A(ds)\, x(t-s).$
By the standard procedure we convert the p-th order RFDE for n-vector $x ( t )$ into a first order RFDE for a $p n$-vector $y ( t )$
$y ′ ( t ) = ∫ 0 a A ( d s ) y ( t − s ) ,$
where the $p n × p n$ companion matrix $A ( d s )$ is given by
$A(ds) = \begin{pmatrix} 0 & I_n\, \delta(s) & 0 & \cdots & 0 \\ 0 & 0 & I_n\, \delta(s) & \cdots & 0 \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ A(ds) & 0 & 0 & \cdots & 0 \end{pmatrix}.$
The $p n × p n$ fundamental solution is $Φ ( t ) = ( Φ i j ( t ) )$ where
$Φ i j ( t ) = ∑ m = 0 ∞ 1 ( p m + k ) ! ∫ 0 t A m + l ( d s ) ( t − s ) p m + k ,$
where
$k = \begin{cases} j - i, & \text{if } j \ge i, \\ j - i + p, & \text{if } j < i, \end{cases} \qquad \text{and} \qquad l = \begin{cases} 1, & \text{if } i > j, \\ 0, & \text{if } i \le j. \end{cases}$
In particular, for $p = 2$
$Φ ( t ) = Φ 11 ( t ) Φ 12 ( t ) Φ 21 ( t ) Φ 22 ( t ) ,$
where
$\Phi_{11}(t) = \Phi_{22}(t) = \sum_{n=0}^{\infty} \frac{1}{(2n)!} \int_0^t A^{n}(ds)\, (t-s)^{2n},$
$Φ 12 ( t ) = ∑ n = 0 ∞ 1 ( 2 n + 1 ) ! ∫ 0 t A n ( d s ) ( t − s ) 2 n + 1 ,$
$Φ 21 ( t ) = ∑ n = 0 ∞ 1 ( 2 n + 1 ) ! ∫ 0 t A n + 1 ( d s ) ( t − s ) 2 n + 1 .$
See Reference [28].
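One can check from these series that the blocks are linked by the companion structure: $\frac{d}{dt}\Phi_{12} = \Phi_{11}$ and $\frac{d}{dt}\Phi_{11} = \Phi_{21}$. A numerical spot-check for the hypothetical scalar kernel $A(ds) = A\,\delta(s-a)\,ds$ with $A = -1$, $a = 1$ (so the $\int_0^t A^{n}(ds)(t-s)^{j}$ terms reduce to $A^{n}(t-na)_+^{j}$):

```python
import math
import numpy as np

A, a = -1.0, 1.0     # hypothetical scalar kernel A(ds) = A delta(s - a) ds
N = 20               # series truncation (exact for t < N a)

def phi11(t):
    t = np.asarray(t, dtype=float)
    return sum(A**n * np.maximum(t - n * a, 0.0)**(2 * n) / math.factorial(2 * n)
               for n in range(N))

def phi12(t):
    t = np.asarray(t, dtype=float)
    return sum(A**n * np.maximum(t - n * a, 0.0)**(2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(N))

def phi21(t):
    t = np.asarray(t, dtype=float)
    return sum(A**(n + 1) * np.maximum(t - (n + 1) * a, 0.0)**(2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(N))

# central finite differences of the series on a grid avoiding the kink points
t = np.linspace(0.3, 4.8, 16)
hstep = 1e-6
d11 = (phi11(t + hstep) - phi11(t - hstep)) / (2 * hstep)
d12 = (phi12(t + hstep) - phi12(t - hstep)) / (2 * hstep)
```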

#### 5.2. Infinite Delay

The RFDE with finite delay has a kernel matrix $A(ds) \in B_{cs}^{+}$ with compact support and an initial function space defined over a finite interval $[-a, 0]$. In contrast, the RFDE with infinite (or unbounded) delay has a kernel matrix with non-compact support and an initial function space defined on the infinite interval $(-\infty, 0]$. There is a wide variety of choices for this initial function space, and the theory for the RFDE with infinite delay is intricately intertwined with the choice made (see Reference [29]). However, the fundamental solution, with its simple initial condition of $\Phi(0) = I$ and $\Phi(t) = O$ for $t < 0$, is exempt from these intricacies.
Reference [29] considers the fundamental solution for the equation
$x ′ ( t ) = ∑ j = 0 ∞ A j x ( t − θ j ) + ∫ 0 t A ( t − s ) x ( s ) d s , t > 0 ,$
where $0 = θ 0 < θ 1 < ⋯ < θ N < ⋯$, $∑ j = 0 ∞ | | A j | | < ∞$, and $| | A ( s ) | | ∈ L 1 [ 0 , ∞ )$. We assume that there is no finite accumulation point and that there is some minimum separation $α = inf i ( θ i + 1 − θ i ) > 0$, so that given $0 < T < ∞$ we can find N so that $θ N ≤ T < θ N + 1$.
For tractability we split Equation (167) into two equations corresponding to discrete delays and a distributed delay, as follows:
$x ′ ( t ) = ∑ j = 0 ∞ A j x ( t − θ j ) , t > 0 ,$
and
$x ′ ( t ) = ∫ 0 t A ( t − s ) x ( s ) d s , t > 0 .$
The fundamental solution for Equation (168) for $t \in (0, T]$ is
$\Phi(t) = I + \sum_{n=1}^{\infty} \sum_{r_0 + \cdots + r_N = n} \frac{1}{r_0! \cdots r_N!}\, A_0^{r_0} \cdots A_N^{r_N}\, (t - r_0\theta_0 - \cdots - r_N\theta_N)_+^{n}.$
In this expression we assume either (i) that we are dealing with the scalar case, or (ii) that the matrices $A_0, \dots, A_N$ commute. Note that for fixed t we have a finite sum if $A_0 = 0$. The method of steps for computing the fundamental solution $\Phi(t)$ will work in this case.
The fundamental solution for Equation (169) is
$Φ ( t ) = ∑ n = 0 ∞ 1 n ! ∫ 0 t A n ( s ) ( t − s ) n d s , t > 0 .$
It is possible, as demonstrated in the following example, that in the case of an infinite delay that the fundamental solution satisfies an ODE—without a delay.
Example 5.
Consider the scalar case with $A ( t ) = exp ( − K t ) ∑ i = 0 p a i t i$ so that
$x ′ ( t ) = ∫ 0 t ∑ i = 0 p a i ( t − s ) i exp ( − K ( t − s ) ) x ( s ) d s , t > 0 .$
Multiplying Equation (172) by $\exp(Kt)$, differentiating $p+1$ times, and then multiplying by $\exp(-Kt)$, we obtain that the fundamental solution $\Phi(t)$ satisfies the $(p+2)$-th order ODE
$\sum_{j=0}^{p+1} \binom{p+1}{j} K^{j}\, x^{(p+2-j)}(t) = \sum_{j=0}^{p} K^{j} \sum_{i=0}^{p-j} i!\, a_i\, x^{(p-j-i)}(t).$
Consequently the fundamental solution satisfies an ODE with no delay.
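A quick numerical confirmation for $p = 0$ (hypothetical values $K = 2$, $a_0 = 1$): the ODE reduces to $\Phi'' + K\Phi' = a_0\Phi$ with $\Phi(0) = 1$, $\Phi'(0) = 0$, and its solution should also satisfy the original integro-differential equation.

```python
import numpy as np

K, a0 = 2.0, 1.0
disc = np.sqrt(K**2 + 4 * a0)
r1, r2 = (-K + disc) / 2, (-K - disc) / 2     # roots of r^2 + K r - a0 = 0

def Phi(t):
    # ODE solution with Phi(0) = 1, Phi'(0) = 0
    return (r2 * np.exp(r1 * t) - r1 * np.exp(r2 * t)) / (r2 - r1)

def dPhi(t):
    return r1 * r2 * (np.exp(r1 * t) - np.exp(r2 * t)) / (r2 - r1)

# check Phi'(t) = int_0^t a0 exp(-K (t - s)) Phi(s) ds at a sample point
t_chk = 1.7
s = np.linspace(0.0, t_chk, 4001)
g = a0 * np.exp(-K * (t_chk - s)) * Phi(s)
rhs = float(np.sum(g[1:] + g[:-1]) * 0.5 * (s[1] - s[0]))   # trapezoid rule
```

The quadrature of the memory integral matches $\Phi'(t)$ to the accuracy of the rule, confirming that the infinite-delay fundamental solution here is governed by a delay-free ODE.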

#### 5.3. Nonautonomous RFDE

We consider the fundamental solution $Φ ( t , s )$ for the nonautonomous RFDE
$x ′ ( t ) = ∫ s t d τ A ( t , τ ) x ( τ ) , t > s ,$
where $A ( t , τ )$ is measurable in t, of bounded variation in $τ$ for $s < τ < t$, constant for $τ ≥ t$, and
$∫ s T d t | ∫ s T d τ A ( t , τ ) | < ∞ .$
The fundamental solution $Φ ( t , s )$ (also known as the Cauchy function $C ( t , s )$) satisfies initial conditions $Φ ( s , s ) = I$, and $Φ ( t , s ) = O$ for $t < s$.
We integrate to obtain the equivalent Volterra integral equation
$Φ ( t , s ) = I + ∫ s t d τ 1 ∫ s τ 1 d τ 2 A ( τ 1 , τ 2 ) Φ ( τ 2 , s ) .$
Applying Picard iteration, we obtain the fundamental solution $Φ ( t , s )$ in the form
$Φ ( t , s ) = I + ∑ n = 1 ∞ ∫ s t d τ 1 ∫ s τ 1 d τ 2 A ( τ 1 , τ 2 ) ⋯ ∫ s τ 2 n − 2 d τ 2 n − 1 ∫ s τ 2 n − 1 d τ 2 n A ( τ 2 n − 1 , τ 2 n ) .$
Note that in the autonomous case the Picard iteration for the fundamental solution of the nonautonomous RFDE reduces to the established expression
$Φ ( t ) = ∑ n = 0 ∞ 1 n ! ∫ 0 t A n ( d s ) ( t − s ) n .$

## 6. Conclusions

This paper has extended the expression for and the treatment of the fundamental solution for autonomous linear RFDE from scalar to n-vector, in the following ways: The fundamental solution is presented in the form of a convolutional exponential matrix function. The paper demonstrates how RFDE analysis can be conducted through an interplay of the RFDE actors (fundamental solution, kernel matrix, characteristic matrix, characteristic determinant, resolvent matrix) and tools (Laplace transform and solution representation). Elements of algebraic graph theory are applied in the form of a weighted loop-digraph to illuminate the system structure and dynamics, and to identify the strong and weak components. This is potentially a powerful tool in the case that the RFDE has a special structure, such as coupled oscillators with delay interactions. The paper compares the fundamental solution $Φ ( t )$ of an RFDE with its contour integral version $Φ c i ( t )$ for a special example, and raises an open question on the minimum time $t 0$ for equivalence. For $t > t 0$, $Φ c i ( t )$ plays the role of a spectral representation for the fundamental solution $Φ ( t )$. In its explicit form and in tandem with the solution representation, the paper describes the key role played by the fundamental solution in the semigroup of solution operators. The paper considers the fundamental solution for other functional differential equations (p-th order RFDE, RFDE with infinite delay, nonautonomous RFDE), but stops short of Neutral and Partial functional differential equations. Finally, the paper opens the door to future work on fundamental solutions and associated analysis for (i) RFDEs with a special structure and (ii) Neutral and Partial functional differential equations.

## Funding

This research received no external funding.

## Conflicts of Interest

The author declares that there is no conflict of interest.

## References

1. McCalla, C. Exact Solutions of Some Functional Differential Equations. In An International Symposium; Cesari, L., Hale, J.K., LaSalle, J.P., Eds.; Academic Press: New York, NY, USA, 1976; Volume 2, pp. 163–168. [Google Scholar]
2. McCalla, C. Zeros of the Solutions of First Order Functional Differential Equations. SIAM J. Math. Anal. 1978, 9, 843–847. [Google Scholar] [CrossRef]
3. McCalla, C. Asymptotic Behavior of First Order Scalar Linear Autonomous Retarded Functional Differential Equations. Funct. Differ. Equ. 2018, 25, 155–187. [Google Scholar]
4. Coddington, E.A.; Levinson, N. The Theory of Ordinary Differential Equations; Robert E. Krieger Publishing Co.: Malabar, FL, USA, 1983. [Google Scholar]
5. Doetsch, G. Introduction to the Theory and Applications of the Laplace Transform; Springer: New York, NY, USA, 1974. [Google Scholar]
6. Banks, H.T.; Manitius, A. Projection Series for Retarded Functional Differential Equations with Applications to Optimal Control Problems. J. Differ. Equ. 1975, 18, 296–332. [Google Scholar] [CrossRef][Green Version]
7. Bellman, R.; Cooke, K.L. Differential-Difference Equations; Academic Press: New York, NY, USA, 1963. [Google Scholar]
8. Verduyn Lunel, S.M. Exponential Type Calculus for Linear Delay Equations; Centre for Mathematics and Computer Science, Tract No. 57, Amsterdam 1989; Springer: New York, NY, USA, 1993. [Google Scholar]
9. Verduyn Lunel, S.M. Series Expansions and Small Solutions for Volterra Equations of Convolution Type. J. Differ. Equ. 1995, 85, 17–53. [Google Scholar] [CrossRef][Green Version]
10. Levinson, N.; McCalla, C. Completeness and Independence of the Exponential Solutions of Some Functional Differential Equations. Stud. Appl. Math. 1974, 53, 1–15. [Google Scholar] [CrossRef]
11. Gripenberg, G.; Londen, S.-O.; Staffans, O. Volterra Integral and Functional Equations; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
12. Myshkis, A.D. Linear Differential Equations with Retarded Arguments; Nauka: Moscow, Russia, 1972. [Google Scholar]
13. Hale, J.K.; Verduyn Lunel, S.M. Introduction to Functional Differential Equations; Springer: New York, NY, USA, 1993. [Google Scholar]
14. Diekmann, O.; Gils, S.A.; Verduyn Lunel, S.M.; Walther, H.-O. Delay Equations, Functional-, Complex-, and Nonlinear Analysis; Springer: New York, NY, USA, 1995. [Google Scholar]
15. Kappel, F. Linear Autonomous Functional Differential Equations. In Delay Equations and Applications; Arino, O., Hbid, M.L., Ait Dads, E., Eds.; Springer: Dordrecht, The Netherlands, 2006; pp. 41–139. [Google Scholar]
16. Azbelev, N.V.; Maksimov, V.P.; Rakhmatullina, L.F. Introduction to the Theory of Functional Differential Equations: Methods and Applications; Hindawi Publishing Corporation: New York, NY, USA, 2007. [Google Scholar]
17. Vlasov, V.V.; Medvedev, D.A. Functional-Differential Equations in Sobolev Spaces and Related Problems of Spectral Theory. J. Math. Sci. 2010, 64, 659–841. [Google Scholar] [CrossRef]
18. Agarwal, R.; Berezansky, L.; Braverman, E.; Domoshnitsky, A. Nonoscillation Theory of Functional Differential Equation with Applications; Springer: New York, NY, USA, 2012. [Google Scholar]
19. Horn, R.A.; Johnson, C.R. Matrix Analysis, 2nd ed.; Cambridge University Press: New York, NY, USA, 2013. [Google Scholar]
20. Kappel, F. Laplace-Transform Methods and Linear Autonomous Functional-Differential Equations; Bericht Nr. 64, Mathematisch-Statistischen Sektion in Forschungszentrum Graz: Graz, Austria, 1976. [Google Scholar]
21. Hartman, P. Ordinary Differential Equations; Philip Hartman: Baltimore, MD, USA, 1973. [Google Scholar]
22. Delfour, M.C.; Mitter, S.K. Hereditary Differential Equations with Constant Delays II—A Class of Affine Systems and the Adjoint Problem. J. Differ. Equ. 1975, 18, 399–409. [Google Scholar] [CrossRef][Green Version]
23. Harary, F.; Norman, R.Z.; Cartwright, D. Structural Models: An Introduction to the Theory of Directed Graphs; John Wiley & Sons: New York, NY, USA, 1965. [Google Scholar]
24. Richardson, J.M. Quasi-Differential Equations and Generalized Semigroup Relations. J. Math. Anal. Appl. 1961, 2, 293–298. [Google Scholar] [CrossRef][Green Version]
25. Bernier, C.; Manitius, A. On Semigroups in Rn × Lp corresponding to Differential Equations with Delays. Can. J. Math. 1978, 30, 897–914. [Google Scholar] [CrossRef]
26. Delfour, M.C.; Manitius, A. The Structural Operator F and its Role in the Theory of Retarded Systems, I. J. Math. Anal. Appl. 1980, 73, 359–381. [Google Scholar] [CrossRef][Green Version]
27. Delfour, M.C.; Manitius, A. The Structural Operator F and its Role in the Theory of Retarded Systems, II. J. Math. Anal. Appl. 1980, 75, 466–490. [Google Scholar] [CrossRef][Green Version]
28. McCalla, C. Oscillatory and Asymptotic Behavior of Second Order Functional Differential Equations. Nonlinear Anal. Theory Methods Appl. 1979, 3, 283–291. [Google Scholar] [CrossRef]
29. Corduneanu, C.; Lakshmikantham, V. Equations with Unbounded Delay: A Survey. Nonlinear Anal. Theory Methods Appl. 1980, 4, 831–877. [Google Scholar] [CrossRef][Green Version]
Figure 1. Relationships between retarded functional differential equation (RFDE) actors.
Figure 2. Weighted loop-digraph for general RFDE.
Figure 3. Weighted loop-digraphs for Example 1.
Figure 4. Weighted loop-digraphs for Example 2.
Figure 5. Weighted loop-digraph for Example 3.
Figure 6. Weighted loop-digraphs for Example 4.

McCalla, C. On Fundamental Solution for Autonomous Linear Retarded Functional Differential Equations. Mathematics 2020, 8, 1418. https://doi.org/10.3390/math8091418