
Feedback Diagram Application for the Generation and Solution of Linear Differential Equations Solvable by Quadrature

Department of Electrical and Computer Engineering, Manhattan College, Riverdale, NY 10471, USA
*
Author to whom correspondence should be addressed.
Axioms 2020, 9(3), 91; https://doi.org/10.3390/axioms9030091
Received: 19 June 2020 / Revised: 25 July 2020 / Accepted: 27 July 2020 / Published: 29 July 2020
(This article belongs to the Collection Mathematical Analysis and Applications)

Abstract

A novel method for generating and providing quadrature solutions to families of linear, second-order, ordinary differential equations is presented in this paper. It is based upon a comparison of control system feedback diagrams: one representing the system and equation under study, and a second equalized to it and providing solutions. The resulting Riccati equation connection between them is utilized to generate and solve groups of equations parameterized by arbitrary functions and constants. This method also leads to a formal solution mechanism for all second-order linear differential equations, involving an infinite series of integrals of each equation's Schwarzian derivative. The practicality of this mechanism is strongly dependent on the rates of, and allowed regions for, series convergence. The feedback diagram method developed is shown to be equivalent to a comparable method based on the differential equation's normal form, and to another relying upon the grouping of terms for a reduction of the equation order, while augmenting their results. Applications are also made to the Helmholtz equation.
Keywords: ordinary differential equations; models of systems; systems of linear equations

1. Introduction

In Control and Systems Theory, feedback diagrams have long been used to provide clear pictorial representations of closed-loop systems and their corresponding differential equation descriptions. These diagrams have proven to be useful tools in the analysis of differential equations for both characterization and solutions [1]. In this paper, we construct and apply a method derived from competing diagrammatic representations to investigate the initial value problem of the linear, second-order, variable-coefficient, nonhomogeneous differential equation,
$$\ddot{y}(x) + p(x)\,\dot{y}(x) + q(x)\,y(x) = f_y(x), \tag{1}$$
for the solution interval $(x_0, x)$ subject to the initial values $y(x_0) = y_0$ and $\dot{y}(x_0) = \dot{y}_0$. The coefficients $p(x)$ and $q(x)$ and driving function $f_y(x)$ are assumed to be sufficiently smooth over the desired interval so as to guarantee the existence and uniqueness of solutions. We are simultaneously seeking quadrature, or integral form, solutions, as well as the functional forms of the coefficients $p(x)$ and $q(x)$ for which these solutions are valid. To achieve this purpose, two second-order linear systems are presented: the first portraying the system under investigation described by Equation (1), and the second modified to provide integral-form general solutions. The coefficients of this second system are then adjusted to correspond to those of the first, thereby making the two systems effectively identical and providing solutions for both. The coefficient relationship between the two systems necessitates the solution of a Riccati equation, which can be expressed so as to maintain the same form for all differential equations. This nonlinear relation can limit the range of equations that are exactly solvable by the constructed system correspondence. However, when coupled with series solution procedures applied to this unchanging Riccati equation, the feedback diagram can ideally demonstrate the solution to all equations in the form of Equation (1). Additionally, the feedback diagram method provides a generation mechanism for identifying and solving an unlimited sequence of second-order linear differential equations amenable to a quadrature solution.
A good number of formulations for second-order differential equations have been presented with methods for solutions for some classes of equations, often involving substitutions and regroupings leading to equation simplification. For example, the so-called normal form technique [2,3], which is discussed and compared in detail in Section 7, is seen to parallel the method here, but depends on a Helmholtz equation solution for completion. Another representative example is the work by Bougoffa [4], in which the presence of a constant condition among the coefficients is shown to lead to the solution of certain second-order equations falling within that category. Badani [5] has introduced a procedure for grouping terms so as to reduce linear second-order equations to first-order equations. As in the feedback diagram method, a similar Riccati equation must be solved to provide direct solutions to the original differential equation. A comparison of Badani’s approach and the method presented here is also discussed in Section 7. Lastly, a number of advanced studies (see, for example, [6]) consider Equation (1) as a special case within a more general formulation, often under specific circumstances. Such studies are not included in the following discussion.
A general iterative procedure, which is complementary to approaches such as those described above, is the Adomian Decomposition Method (ADM) [7,8]. This is a highly versatile series technique that has been applied to linear, nonlinear, and stochastic ordinary and partial differential equations in many areas of applied science and engineering. Its usefulness has been greatly enhanced by the development of several modified versions that avoid series divergence and/or accelerate convergence [9,10]. Although other methods could also serve as possible choices, the ADM is representative of such widely used current iteration techniques. Most importantly for this discussion, its incorporation and application to the unvarying-in-form Riccati connection between the two feedback diagram systems developed here allows for the solution, in principle, of any differential equation in the form of Equation (1), essentially only subject to successful series convergence.
The format of the presentation of this feedback diagram method for generating and solving groups of equations in the form of Equation (1) is as follows. In Section 2, the basic methodology is developed utilizing state-variable and state-transition matrix principles, which inherently and directly incorporate homogeneous and particular solutions together with initial conditions for nonhomogeneous problems. In Section 3, the feedback diagram mechanism is used to generate a wide class of second-order, linear equations with solutions, with these equations being governed by the choices for two arbitrary functions p ( x ) and v ( x ) . In Section 4, three particular solutions for the Riccati equation connection between the two alternative linear systems are established and utilized. Each of these solutions is in keeping with the ideal goal of solving Equation (1) for any p ( x )   and   q ( x ) , but the actual outcome in each case provides solutions for a specific family of q ( x ) coefficients for arbitrary p ( x ) . These families and solutions are governed by additional parameters and/or functions. In Section 5, the results of the previous two sections are applied to the solution of the one-dimensional Helmholtz equation. Additionally, in Section 6, the Adomian Decomposition Method is applied to the Riccati equation, producing a prescription for its solution and hence the corresponding solution for Equation (1) for any set of coefficients p ( x )   and   q ( x ) , provided that the series description converges. Moreover, in Section 7, comparisons are made with the normal form technique and the method presented in Badani’s work [5]. These methods turn out to be similar in mathematical phraseology and essentially identical in outcome to the feedback diagram approach, but are more limited in scope compared to the results stemming from the investigation of the universal Riccati equation presented here.

2. Descriptive Systems

2.1. System One

The feedback diagram of Figure 1 presents the usual depiction of the second-order, linear, differential Equation (1). It includes two integrators, a summing box on the left, two amplifiers representing the variable coefficients $p(x)$ and $q(x)$, and the nonhomogeneous driving function $f_y(x)$. This is a format widely used with simulation software. Note, for example, that at the output of the summing box, $\ddot{y}(x) = -p(x)\,\dot{y}(x) - q(x)\,y(x) + f_y(x)$, as required.
An inherent alternative description, known as the state variable approach, is simultaneously indicated in Figure 1. For this case, the second-order system is evaluated as two first-order equations in terms of the state variables u 1 and u 2 through the following transformation:
$$y(x) = u_1(x), \qquad \dot{y}(x) = \dot{u}_1(x) = u_2(x), \tag{2}$$
$$\dot{u}_2(x) = -q(x)\,u_1(x) - p(x)\,u_2(x) + f_y(x), \tag{3}$$
$$y(x_0) = y_0 = u_1(x_0), \qquad \dot{y}(x_0) = \dot{y}_0 = u_2(x_0), \tag{4}$$
or in matrix form:
$$\begin{bmatrix} \dot{u}_1(x) \\ \dot{u}_2(x) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -q(x) & -p(x) \end{bmatrix} \begin{bmatrix} u_1(x) \\ u_2(x) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} f_y(x). \tag{5}$$
By renaming the vectors and matrices within Equation (5), the standard, variable-coefficient linear system canonical form description for this single-input, single-output (SISO) case is expressed as
$$\dot{u}(x) = A^{I}(x)\,u(x) + b\,f_y(x). \tag{6}$$
Matrix $A^{I}(x)$ is the companion matrix of the corresponding characteristic polynomial of Equation (1), and the standard general solution to Equations (5) and (6) [11] (pp. 114–118), [12] (pp. 74–75) is obtained from the fundamental or state transition matrix $\Phi^{I}(x, x_0)$ as
$$u(x) = \Phi^{I}(x, x_0)\,u(x_0) + \int_{x_0}^{x} \Phi^{I}(x, x')\,b\,f_y(x')\,dx'. \tag{7}$$
The roman numeral superscript, I, emphasizes that the matrix relates to System One. This result in Equation (7) is also the variable-coefficient version of Duhamel's Formula [13] (p. 149) and exhibits the zero-input and zero-state responses for the state vector $u(x)$ and single input $f_y(x)$, respectively, in the two terms on the right-hand side. The $2 \times 2$ state transition matrix of Equation (7) for the initial-value problem of Equation (1) is, more specifically,
$$\Phi^{I}(x, x_0) = \begin{bmatrix} \Phi_{11}^{I}(x, x_0) & \Phi_{12}^{I}(x, x_0) \\ \Phi_{21}^{I}(x, x_0) & \Phi_{22}^{I}(x, x_0) \end{bmatrix}, \tag{8}$$
where the four elements $\Phi_{ij}^{I}(x, x_0)$ are only nonzero for $x \ge x_0$ for initial-value problems. Finally, since we are only interested in $y(x) = u_1(x)$ in Equation (1), the top row of Equation (7) provides the general solution to Equation (1):
$$y^{I}(x) = u_1(x) = \Phi_{11}^{I}(x, x_0)\,y_0 + \Phi_{12}^{I}(x, x_0)\,\dot{y}_0 + \int_{x_0}^{x} \Phi_{12}^{I}(x, x')\,f_y(x')\,dx'. \tag{9}$$
Here, the u ( x 0 ) initial conditions have been replaced by those for y 0 and y 0 ˙ , as per Equation (4). Hence, whenever a state transition matrix can be found for a system, its two top row elements Φ 11 I ( x , x 0 ) and Φ 12 I ( x , x 0 ) provide the solution to Equation (1) by means of Equation (9).

2.2. System Two

A more general second-order linear system description for the SISO case compared to Equation (5) has the following state space description (with u 1 ( x ) = y ( x ) ):
$$\begin{bmatrix} \dot{u}_1(x) \\ \dot{u}_2(x) \end{bmatrix} = \begin{bmatrix} a_{11}(x) & a_{12}(x) \\ a_{21}(x) & a_{22}(x) \end{bmatrix} \begin{bmatrix} u_1(x) \\ u_2(x) \end{bmatrix} + \begin{bmatrix} b_1(x) \\ b_2(x) \end{bmatrix} f_u(x), \tag{10}$$
and
$$y(x) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} u_1(x) \\ u_2(x) \end{bmatrix}. \tag{11}$$
A solution for this system through Equations (7) and (8) is not available in the general case of arbitrary a i j ( x ) , where i , j = 1 ,   2 . However, since this general system has an overabundance of variable attributes for the purposes of matching System One, modifications can be implemented that guarantee the obtainability of a state transition matrix, as in Equation (8). Therefore, further simplification is imposed, as shown in Figure 2.
The most important change for System Two is that $a_{21}(x)$ has been set to zero. The resulting triangular form for $\Phi^{II}(x, x_0)$ ensures that it can be readily calculated from its definition as a fundamental matrix, $\dot{\Phi}^{II}(x, x_0) = A^{II}(x)\,\Phi^{II}(x, x_0)$, where the roman numeral II denotes System Two. A more extensive comparison of Systems One and Two reveals that for $a_{21}(x) = 0$, the specific value of component $a_{12}(x)$ drops out of the analysis and can thereby be replaced by a nonzero constant, chosen to be one for simplicity. Similarly, the general components $b_1(x)$ and $b_2(x)$ provide an unnecessary complication for solutions to System One and are replaced by constant values of zero and one, respectively.

2.3. Solutions for System Two

The differential equation solutions to System Two are obtained next, followed by the derivation of conditions necessary for their transference or application to System One.
The state-space description of System Two then becomes
$$\begin{bmatrix} \dot{u}_1(x) \\ \dot{u}_2(x) \end{bmatrix} = \begin{bmatrix} a_{11}(x) & 1 \\ 0 & a_{22}(x) \end{bmatrix} \begin{bmatrix} u_1(x) \\ u_2(x) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} f_u(x), \tag{12}$$
or
$$\dot{u}(x) = A^{II}(x)\,u(x) + b\,f_u(x). \tag{13}$$
The resulting state-transition matrix elements required for solutions, as previously noted in Equations (7) and (9), are summarized in the following theorem:
Theorem 1.
A general $2 \times 2$ fundamental matrix $\Phi^{II}(x, x_0)$ for the second-order system of Equation (12), valid over the interval $(x_0, x)$ and to be used with initial conditions $u_1(x_0) = u_{10}$ and $u_2(x_0) = u_{20}$, is given by its four matrix elements:
$$\Phi_{11}^{II}(x, x_0) = e^{A_{11}(x, x_0)}, \tag{14}$$
where
$$A_{11}(x, x_0) = \int_{x_0}^{x} a_{11}(x')\,dx', \tag{15}$$
$$\Phi_{22}^{II}(x, x_0) = e^{A_{22}(x, x_0)}, \tag{16}$$
where
$$A_{22}(x, x_0) = \int_{x_0}^{x} a_{22}(x')\,dx', \tag{17}$$
$$\Phi_{21}^{II}(x, x_0) = 0, \tag{18}$$
and
$$\Phi_{12}^{II}(x, x_0) = e^{A_{11}(x, x_0)} \int_{x_0}^{x} e^{A_{22}(\lambda, x_0) - A_{11}(\lambda, x_0)}\,d\lambda. \tag{19}$$
Proof of Theorem 1 follows in Appendix A.
In summary, for System Two,
$$\Phi^{II}(x, x_0) = \begin{bmatrix} \Phi_{11}^{II}(x, x_0) & \Phi_{12}^{II}(x, x_0) \\ \Phi_{21}^{II}(x, x_0) & \Phi_{22}^{II}(x, x_0) \end{bmatrix} = \begin{bmatrix} e^{A_{11}(x, x_0)} & e^{A_{11}(x, x_0)} \displaystyle\int_{x_0}^{x} e^{A_{22}(\lambda, x_0) - A_{11}(\lambda, x_0)}\,d\lambda \\ 0 & e^{A_{22}(x, x_0)} \end{bmatrix}. \tag{20}$$
From Equation (20), the state-space solution for System Two analogous to Equation (7) for System One is
$$\begin{bmatrix} u_1(x) \\ u_2(x) \end{bmatrix} = \begin{bmatrix} \Phi_{11}^{II}(x, x_0) & \Phi_{12}^{II}(x, x_0) \\ 0 & \Phi_{22}^{II}(x, x_0) \end{bmatrix} \begin{bmatrix} u_{10} \\ u_{20} \end{bmatrix} + \int_{x_0}^{x} \begin{bmatrix} \Phi_{11}^{II}(x, x') & \Phi_{12}^{II}(x, x') \\ 0 & \Phi_{22}^{II}(x, x') \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} f_u(x')\,dx', \tag{21}$$
for the elements defined by Equations (14) to (19). Since we are primarily interested in $y(x) = u_1(x)$, the top row of Equation (21) shows the final desired solution for System Two:
$$y^{II}(x) = u_1(x) = \Phi_{11}^{II}(x, x_0)\,u_{10} + \Phi_{12}^{II}(x, x_0)\,u_{20} + \int_{x_0}^{x} \Phi_{12}^{II}(x, x')\,f_u(x')\,dx'. \tag{22}$$
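Before proceeding, the Theorem 1 elements can be spot-checked numerically (an illustration, not part of the paper): with the sample choices $a_{11}(x) = x$, $a_{22}(x) = -x$, and $x_0 = 0$, Equations (15) and (17) give $A_{11}(x) = x^2/2$ and $A_{22}(x) = -x^2/2$, and $\Phi_{12}^{II}$ of Equation (19) must satisfy the first row of $\dot{\Phi}^{II} = A^{II}\,\Phi^{II}$, i.e., $\dot{\Phi}_{12}^{II} = a_{11}\Phi_{12}^{II} + \Phi_{22}^{II}$:

```python
import math

# Sanity check of Theorem 1 for the illustrative choices a11(x) = x and
# a22(x) = -x with x0 = 0, so A11(x) = x**2/2 and A22(x) = -x**2/2 in
# closed form; the remaining integral in Eq. (19) is done numerically.
def A11(x): return x * x / 2
def A22(x): return -x * x / 2

def trapezoid(f, a, b, n=2000):
    # composite trapezoid rule on [a, b]
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, n)) + 0.5 * f(b))

def phi12(x):
    # Eq. (19): Phi12 = exp(A11) * integral of exp(A22 - A11)
    return math.exp(A11(x)) * trapezoid(lambda lam: math.exp(A22(lam) - A11(lam)), 0.0, x)

def phi22(x):
    return math.exp(A22(x))

x, h = 0.7, 1e-5
lhs = (phi12(x + h) - phi12(x - h)) / (2 * h)  # numerical derivative of Phi12
rhs = x * phi12(x) + phi22(x)                  # a11(x)*Phi12 + Phi22
err = abs(lhs - rhs)
print(err)  # small; limited by quadrature and finite-difference error
```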

2.4. Equalization of Systems One and Two

A direct comparison is made between Figure 1 and Figure 2 in order to apply the results of Equations (20)–(22) to System One. Using System Two, we see that $u_1(x) = y(x)$, $\dot{u}_1(x) = \dot{y}(x)$, and
$$\dot{u}_1(x) = \dot{y}(x) = a_{11}(x)\,y(x) + u_2(x), \tag{23}$$
$$\dot{u}_2(x) = a_{22}(x)\,u_2(x) + f_u(x). \tag{24}$$
From Equation (23),
$$u_2(x) = \dot{y}(x) - a_{11}(x)\,y(x). \tag{25}$$
The differentiation of Equation (25) and comparison with Equation (24) shows that
$$\ddot{y}(x) - a_{11}(x)\,\dot{y}(x) - \dot{a}_{11}(x)\,y(x) = a_{22}(x)\left[\dot{y}(x) - a_{11}(x)\,y(x)\right] + f_u(x), \tag{26}$$
from which we have a System Two description in terms of its output y ( x ) and its derivatives
$$\ddot{y}(x) - \left(a_{11}(x) + a_{22}(x)\right)\dot{y}(x) - \left(\dot{a}_{11}(x) - a_{11}(x)\,a_{22}(x)\right)y(x) = f_u(x). \tag{27}$$
In order for System Two of Figure 2 to provide the same input/output behavior as System One and Equation (1), the following equivalences must hold from Equation (27):
$$p(x) = -\left(a_{11}(x) + a_{22}(x)\right), \tag{28}$$
$$q(x) = -\left(\dot{a}_{11}(x) - a_{11}(x)\,a_{22}(x)\right), \tag{29}$$
and
$$f_y(x) = f_u(x). \tag{30}$$
If the component values of System One are to be brought into System Two, then, from Equations (28) and (29), for known p ( x ) and q ( x ) ,
$$a_{22}(x) = -a_{11}(x) - p(x), \tag{31}$$
and
$$\dot{a}_{11}(x) + p(x)\,a_{11}(x) + a_{11}^2(x) = -q(x). \tag{32}$$
Therefore, the determination of the proper choices for System Two coefficients a 11 ( x ) and a 22 ( x ) that ensure equivalence with System One comes from the Riccati equation solution of Equation (32), together with Equation (31).
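Since Equation (32) keeps the same Riccati form for every target equation, it can also be integrated numerically when no closed-form solution is apparent. The sketch below (an illustration, not from the paper) integrates Equation (32) by classical fourth-order Runge–Kutta for the constant-coefficient sample $p(x) = 0$, $q(x) = -1$, for which the exact solution with $a_{11}(0) = 0$ is $a_{11}(x) = \tanh(x)$:

```python
import math

# Numerical treatment of the Riccati connection, Eq. (32):
#   a11' = -q(x) - p(x)*a11 - a11**2.
# For the sample p = 0, q = -1 with a11(0) = 0, the exact solution is tanh(x).
def p(x): return 0.0
def q(x): return -1.0

def rhs(x, a):
    # right-hand side of the Riccati equation solved for a11'
    return -q(x) - p(x) * a - a * a

def rk4(a0, x0, x1, n=1000):
    # classical fourth-order Runge-Kutta integration
    h = (x1 - x0) / n
    x, a = x0, a0
    for _ in range(n):
        k1 = rhs(x, a)
        k2 = rhs(x + h / 2, a + h * k1 / 2)
        k3 = rhs(x + h / 2, a + h * k2 / 2)
        k4 = rhs(x + h, a + h * k3)
        a += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return a

err = abs(rk4(0.0, 0.0, 1.5) - math.tanh(1.5))
print(err)  # agrees with tanh(1.5) to high accuracy
```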

2.5. Application of System Two Solutions to System One

The general solution to the differential equation describing System Two, Equation (22), shows the homogeneous solution with arbitrary constants in terms of the state-variable initial conditions and the particular response due to the driving function. For given $a_{11}(x)$ and $a_{22}(x)$, the resulting matrix elements are presented in Equations (14) through (19) and summarized in the complete state transition matrix of Equation (20). This general solution will then also apply to System One if the appropriate equivalent coefficients $a_{11}(x)$ and $a_{22}(x)$ resulting from the given $p(x)$ and $q(x)$ of Equations (31) and (32) are incorporated in the $\Phi_{11}^{II}(x, x_0)$ and $\Phi_{12}^{II}(x, x_0)$ matrix element calculations of Equations (14) through (19). These must be combined with the driving function equivalence of Equation (30) and the conversion from state-variable initial conditions to those originally posed for the output $y(x)$ and its derivative in System One. Hence,
$$u_1(x_0) = u_{10} = y(x_0) = y_0, \tag{33}$$
and, from Equation (25),
$$u_2(x_0) = u_{20} = \dot{y}(x_0) - a_{11}(x_0)\,y(x_0) = \dot{y}_0 - a_{11}(x_0)\,y_0. \tag{34}$$
With these equivalents, together with that of Equation (30), Equation (22) becomes
$$y^{II}(x) = \left[\Phi_{11}^{II}(x, x_0) - a_{11}(x_0)\,\Phi_{12}^{II}(x, x_0)\right]y_0 + \Phi_{12}^{II}(x, x_0)\,\dot{y}_0 + \int_{x_0}^{x} \Phi_{12}^{II}(x, x')\,f_y(x')\,dx'. \tag{35}$$
For matrix elements calculated appropriately, that is, through the additional relations of Equations (31) and (32), the System Two solution will match that of System One. If a final comparison is made between Equation (35) and Equation (9), then solutions for System One can be readily identified as depending on System Two matrix elements as
$$\Phi_{11}^{I}(x, x_0) = \Phi_{11}^{II}(x, x_0) - a_{11}(x_0)\,\Phi_{12}^{II}(x, x_0) \tag{36}$$
and
$$\Phi_{12}^{I}(x, x_0) = \Phi_{12}^{II}(x, x_0). \tag{37}$$
Therefore, for resulting system equivalence, the impulse response functions Φ 12 ( x , x 0 ) for both systems are seen to be identical, and the Φ 11 ( x , x 0 ) matrix elements differ whenever the initial value of the System Two component a 11 ( x ) resulting from the Riccati equation equivalence process of Equation (32) is nonzero.
In summary, when the System Two matrix elements Φ 11 I I ( x , x 0 ) and Φ 12 I I ( x , x 0 ) are calculated via Equations (14) and (19) using the appropriately chosen functions a 11 ( x ) and a 22 ( x ) that result from the System One, System Two equalization process of Equations (28) to (32), they provide the System One counterpart matrix elements necessary and sufficient for solutions to that system. Upon completion of this process in what follows, y I ( x ) = y I I ( x ) and Equation (35) solutions apply to either system.

2.6. System One Solutions from the Riccati Equation Connection

It has been shown that System One solutions can be constructed from readily calculable System Two matrix elements as per Equations (36) and (37) and Equations (14) through (19). Therefore, we must now search for appropriate System Two coefficients $a_{11}(x)$ and $a_{22}(x)$ that correspond to the System One $p(x)$ and $q(x)$ through Equations (31) and (32). An important aspect of Equation (32) is the recognition that various functional forms, such as $a_{11}(x) = \text{constant} = a$ or $a_{11}(x) = 1/x$ used in Equations (20) and (35), present solutions to differential equations in the form of Equation (1) for the specific coefficient interrelations $q(x) = -a\,p(x) - a^2$ or $q(x) = -p(x)/x$, respectively. Note that the former case of constant $a_{11} = a$ has been previously analyzed from a related yet different viewpoint in [1]. Similarly, other choices, such as $a_{11}(x) = \tanh(x)$ or $\tan(x)$, also lead to comparable coefficient interrelationships and equation solutions. However, it is advantageous and more systematic for some situations to deal with the Riccati equation connection for $a_{11}(x)$ of Equation (32) by means of a transformation to a new function, $v(x) = a_{11}(x) + p(x)/2$, which results in $\dot{v}(x) + v^2(x) = \dot{p}(x)/2 + p^2(x)/4 - q(x)$. For known $p(x)$ and $q(x)$, and by defining
$$f_v(x) = \frac{\dot{p}(x)}{2} + \frac{p^2(x)}{4} - q(x), \tag{38}$$
this becomes
$$\dot{v}(x) + v^2(x) = f_v(x). \tag{39}$$
The quantity $f_v(x)$ is the negative of the Schwarzian derivative for Equation (1) and plays an important role in more generalized studies of differential equations [14].
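The transformation to $v(x)$ can be verified by direct substitution into the Riccati equation of Equation (32); the cross terms $\pm pv$ cancel, leaving only the stated right-hand side (a routine check, shown for completeness):

```latex
\dot{a}_{11} + p\,a_{11} + a_{11}^{2} = -q ,
\qquad a_{11} = v - \tfrac{p}{2}
\;\Longrightarrow\;
\Bigl(\dot{v} - \tfrac{\dot{p}}{2}\Bigr)
+ p\Bigl(v - \tfrac{p}{2}\Bigr)
+ \Bigl(v - \tfrac{p}{2}\Bigr)^{2} = -q
\;\Longrightarrow\;
\dot{v} + v^{2} = \tfrac{\dot{p}}{2} + \tfrac{p^{2}}{4} - q .
```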
Solutions for v ( x ) or a 11 ( x ) are the main links between Systems One and Two, and the relations among the feedback coefficients are
$$a_{11}(x) = v(x) - \frac{p(x)}{2} \tag{40}$$
and, from Equation (31),
$$a_{22}(x) = -v(x) - \frac{p(x)}{2}. \tag{41}$$
Assuming that v ( x ) solutions are obtained from Equation (39), we can recast the matrix elements from Equations (14) through (19) in terms of the calculated v ( x ) and known p ( x ) by using
$$A_{11}(x, x_0) = \int_{x_0}^{x} \left[v(x') - \frac{p(x')}{2}\right]dx' = V(x, x_0) - \frac{1}{2}P(x, x_0). \tag{42}$$
Here,
$$V(x, x_0) = \int_{x_0}^{x} v(x')\,dx' \tag{43}$$
and
$$P(x, x_0) = \int_{x_0}^{x} p(x')\,dx' \tag{44}$$
follow the previous format of small letter-capital letter integral definitions, as in Equations (15) and (17). Similarly,
$$A_{22}(x, x_0) = \int_{x_0}^{x} \left[-v(x') - \frac{p(x')}{2}\right]dx' = -V(x, x_0) - \frac{1}{2}P(x, x_0), \tag{45}$$
and the System Two state transition matrix Φ I I ( x , x 0 ) in Equation (20) is
$$\Phi^{II}(x, x_0) = \begin{bmatrix} \Phi_{11}^{II}(x, x_0) & \Phi_{12}^{II}(x, x_0) \\ \Phi_{21}^{II}(x, x_0) & \Phi_{22}^{II}(x, x_0) \end{bmatrix} = \begin{bmatrix} e^{V(x, x_0) - \frac{1}{2}P(x, x_0)} & e^{V(x, x_0) - \frac{1}{2}P(x, x_0)} \displaystyle\int_{x_0}^{x} e^{-2V(\lambda, x_0)}\,d\lambda \\ 0 & e^{-V(x, x_0) - \frac{1}{2}P(x, x_0)} \end{bmatrix}. \tag{46}$$
When the top two matrix elements of Equation (46) for calculated $v(x)$ are substituted into Equation (35), the System One solution $y^{I}(x)$ results:
$$y^{I}(x) = e^{V(x, x_0) - \frac{1}{2}P(x, x_0)}\left[1 - a_{11}(x_0)\int_{x_0}^{x} e^{-2V(\lambda, x_0)}\,d\lambda\right]y_0 + e^{V(x, x_0) - \frac{1}{2}P(x, x_0)}\left[\int_{x_0}^{x} e^{-2V(\lambda, x_0)}\,d\lambda\right]\dot{y}_0 + \int_{x_0}^{x}\left[e^{V(x, x') - \frac{1}{2}P(x, x')}\int_{x'}^{x} e^{-2V(\lambda, x')}\,d\lambda\right]f_y(x')\,dx'. \tag{47}$$
Note that, here, $a_{11}(x_0)$ for Equations (35) and (47) is given from Equation (40) as
$$a_{11}(x_0) = v(x_0) - \frac{p(x_0)}{2}. \tag{48}$$
An additional note of concurrence for the $\Phi_{11}^{II}(x, x_0)$ and $\Phi_{12}^{II}(x, x_0)$ matrix elements of Equation (46) is provided by the reduction of order technique for independent solutions, since $y_{h1}(x) = \Phi_{11}^{II}(x, x_0)$ and $y_{h2}(x) = \Phi_{12}^{II}(x, x_0)$ exactly adhere to the well-known interconnecting result [15] (pp. 171–172) of
$$y_{h2}(x) = y_{h1}(x)\left[\int_{x_0}^{x} \frac{k_1\,e^{-P(x', x_0)}}{y_{h1}^2(x')}\,dx' + k_2\right]. \tag{49}$$
Here, the constants can be seen to be $k_2 = \Phi_{12}^{II}(x_0, x_0) = 0$ and $k_1 = \dot{\Phi}_{12}^{II}(x_0, x_0) = \Phi_{22}^{II}(x_0, x_0) = 1$ from Equation (A2) of Appendix A, thereby providing agreement for the upper-right matrix element expression.

3. Generation of Second-Order Linear Differential Equations for System One Solvable by Quadrature

Although solutions v ( x ) to the Riccati equation of Equation (39) clearly exist for certain forms of f v ( x ) , we can reverse our emphasis in that equation and search for possible coefficients q ( x ) that result from arbitrary choices for continuous p ( x ) and v ( x ) . That is, from Equations (38) and (39),
$$q(x) = \frac{\dot{p}(x)}{2} - \dot{v}(x) + \frac{p^2(x)}{4} - v^2(x), \tag{50}$$
determines those $q(x)$ values with given $p(x)$ and arbitrarily chosen $v(x)$ that define the specific System One whose solutions are known from Equation (47). Functional possibilities for $v(x)$, together with given $p(x)$, then define a family of differential equations exhibiting this form of solution. An example of such a system and its specific second-order linear differential equation follows.
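A minimal numerical illustration of this generation step (with choices not taken from the paper): picking $p(x) = 0$ and $v(x) = x$ in Equation (50) gives $q(x) = -1 - x^2$, and the homogeneous solution $y_{h1}(x) = \Phi_{11}^{II}(x, 0) = e^{V - P/2} = e^{x^2/2}$ should then satisfy the generated equation $\ddot{y} - (1 + x^2)\,y = 0$:

```python
import math

# Generated-equation check (illustrative choices): p(x) = 0, v(x) = x gives
# q(x) = -1 - x**2 via Eq. (50), and y_h1(x) = exp(x**2/2) should then solve
# y'' + q(x) y = 0.
def q(x): return -1.0 - x * x
def y(x): return math.exp(x * x / 2)

x, h = 0.8, 1e-4
ypp = (y(x + h) - 2.0 * y(x) + y(x - h)) / (h * h)  # central second difference
residual = abs(ypp + q(x) * y(x))
print(residual)  # near zero, up to finite-difference error
```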

3.1. Exponential and Sinusoidal Coefficients

Example 1.
To illustrate the process of generating equations in the form of Equation (1) with the quadrature solution described by Theorem 1 and Equation (47), consider an example of combined exponential and sinusoidal coefficients chosen to be $p(x) = 2(\cos x + e^{-x})$ and $v(x) = \cos x - e^{-x}$. From Equation (50), $q(x) = 2e^{-x}(2\cos x - 1)$, and hence, the resulting differential equation that is generated by this process is
$$\ddot{y}(x) + 2(\cos x + e^{-x})\,\dot{y}(x) + 2e^{-x}(2\cos x - 1)\,y(x) = f_y(x). \tag{51}$$
This is assumed to hold for the interval $(0, x)$ and general but unspecified initial conditions $y_0$, $\dot{y}_0$, and driving function $f_y(x)$. From Equation (40), $a_{11}(x) = -2e^{-x}$, from which $a_{11}(x_0) = -2e^{-x_0}$. From Equations (42) through (46), we can deduce that $V(x, x_0) = (\sin x - \sin x_0) + (e^{-x} - e^{-x_0})$, $P(x, x_0) = 2(\sin x - \sin x_0) - 2(e^{-x} - e^{-x_0})$, and $A_{11}(x, x_0) = V(x, x_0) - \frac{1}{2}P(x, x_0) = 2(e^{-x} - e^{-x_0})$. Then, from Equation (46),
$$\Phi_{11}^{II}(x, x_0) = e^{2(e^{-x} - e^{-x_0})}, \tag{52}$$
and
$$\Phi_{12}^{II}(x, x_0) = e^{2(e^{-x} - e^{-x_0})} \int_{x_0}^{x} e^{-2(\sin\lambda - \sin x_0) - 2(e^{-\lambda} - e^{-x_0})}\,d\lambda. \tag{53}$$
From Equation (47), for $x_0 = 0$ and $a_{11}(x_0) = -2$, this can be simplified to the quadrature solution for Equation (51):
$$y^{I}(x) = e^{2(e^{-x} - 1)}\left[y_0 + (2y_0 + \dot{y}_0)\int_{0}^{x} e^{-2\sin\lambda - 2e^{-\lambda} + 2}\,d\lambda\right] + \int_{0}^{x}\left[e^{2(e^{-x} - e^{-x'})}\int_{x'}^{x} e^{-2(\sin\lambda - \sin x') - 2(e^{-\lambda} - e^{-x'})}\,d\lambda\right]f_y(x')\,dx'. \tag{54}$$
Hence, the System One general solution for Equation (51) is given by Equation (54). To further verify the related behavior of Systems One and Two under the connecting conditions of Equations (30)–(32), (36), and (37), MATLAB Simulink simulation software has been used for this example and others to examine and compare the homogeneous and particular solution components, as well as the total solutions resulting from their two respective feedback system diagrams. Particular solution comparisons have included the choices of step, sinusoidal, and exponential functions for the driving term $f_y(x)$. Maximum numerical differences on the order of a few thousandths of a percent were found near points of rapid variation in the system response, but were usually much lower. These differences were further reduced whenever it was possible to decrease the simulation step size.
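The coefficient algebra of Example 1 can also be spot-checked in a few lines (a supplementary check using exact analytic derivatives; the paper's own verification used MATLAB Simulink):

```python
import math

# Check Example 1: with p(x) = 2(cos x + e^{-x}) and v(x) = cos x - e^{-x},
# Eq. (50), q = p'/2 - v' + p**2/4 - v**2, should reproduce
# q(x) = 2 e^{-x} (2 cos x - 1).
def p(x): return 2 * (math.cos(x) + math.exp(-x))
def pdot(x): return 2 * (-math.sin(x) - math.exp(-x))
def v(x): return math.cos(x) - math.exp(-x)
def vdot(x): return -math.sin(x) + math.exp(-x)

def q_from_50(x):
    return pdot(x) / 2 - vdot(x) + p(x) ** 2 / 4 - v(x) ** 2

def q_stated(x):
    return 2 * math.exp(-x) * (2 * math.cos(x) - 1)

err = max(abs(q_from_50(x) - q_stated(x)) for x in [0.0, 0.5, 1.3, 2.7])
print(err)  # zero up to floating-point rounding
```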

4. Equations Solvable by Quadrature Resulting from Particular Riccati Equation Solutions

Although it is presently not possible to solve the Riccati equation of Equation (39) in closed form for any function f v ( x ) , i.e., for any choice of coefficients p ( x ) and q ( x ) , particular solutions exist for specific forms of the function f v ( x ) defined by Equation (38). Furthermore, any such particular forms for f v ( x ) also interrelate and hence limit the resulting q ( x ) and p ( x ) possibilities that allow a corresponding quadrature solution. Groups of second-order, linear differential equations characterized by the same form for that function then share a common quadrature solution description determined by Equations (38) through (48).
An alternative but related view of Riccati equation solutions is provided by the transformation of $v(x)$ in Equation (39) to another function $w(x)$ through $v(x) = \dot{w}(x)/w(x)$, which results in
$$\ddot{w}(x) - f_v(x)\,w(x) = 0. \tag{55}$$
If two linearly independent solutions $w_1(x)$ and $w_2(x)$ can be found for Equation (55), then the general form for Equation (39) for $v(x)$ is determined by [16] (pp. 239–242)
$$v(x) = \frac{c_1\dot{w}_1(x) + c_2\dot{w}_2(x)}{c_1 w_1(x) + c_2 w_2(x)} = \frac{\dot{w}_1(x) + c_3\dot{w}_2(x)}{w_1(x) + c_3 w_2(x)}, \tag{56}$$
where $c_3 = c_2/c_1$. Although finding exact solutions to Equation (55) in the general case is usually as difficult as solving Equation (39) directly, simultaneous observations for both the Riccati equation and the corresponding Helmholtz equation can be insightful, as discussed with the normal form approach in Section 7.
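As a small illustration of the $w$-substitution (with an $f_v$ chosen for convenience, not taken from the paper): for $f_v(x) = 2/x^2$, Equation (55) has the independent solutions $w_1 = x^2$ and $w_2 = 1/x$, and Equation (56) then supplies a one-parameter family of Riccati solutions:

```python
# Riccati solutions from the w-substitution (illustrative f_v = 2/x**2):
# w1 = x**2 and w2 = 1/x solve w'' = f_v w, so Eq. (56) gives
# v(x) = (w1' + c3 w2') / (w1 + c3 w2), which must satisfy v' + v**2 = f_v.
c3 = 0.5

def v(x):
    return (2.0 * x - c3 / x ** 2) / (x ** 2 + c3 / x)

x, h = 1.7, 1e-5
vdot = (v(x + h) - v(x - h)) / (2.0 * h)  # central first difference
residual = abs(vdot + v(x) ** 2 - 2.0 / x ** 2)
print(residual)  # near zero, up to finite-difference error
```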
In the following, three groups of linear, second-order, variable coefficient, ordinary differential equations are presented. The first corresponds to the function f v ( x ) of Equation (38) chosen to be constant, and hence readily admitting a solution v ( x ) to Equation (39), and the next two groups each assume a special form for the v ( x ) or a 11 ( x ) Riccati equation solution. In all cases, either function f v ( x ) or v ( x ) then determines the relationship between coefficients, that is, initially chosen p ( x ) and subsequently calculated q ( x ) , through Equations (38) and (39). This resulting coefficient pair then defines the explicit version of Equation (1) describable by the quadrature solution of Equations (38) through (48).

4.1. Group One: Quadrature Solution to Equation (1) Corresponding to the f v ( x ) Constant

If the function f v ( x ) of Equation (38) is constant, then the Riccati equation of Equation (39) is readily solvable for functions v ( x ) , as summarized in the ensuing proposition.
Proposition 1.
For the coefficients $p(x)$ and $q(x)$ of Equation (1) assuming values such that the function $f_v(x) = \dot{p}(x)/2 + p^2(x)/4 - q(x)$ of Equation (38) is constant, the following results for the Riccati equation solution $v(x)$ are valid over the interval $(x_0, x)$ with real constant $c$.
Category 1(a): For $f_v(x) = c^2 > 0$,
$$v(x) = c\left[\frac{v(x_0)\cosh c(x - x_0) + c\sinh c(x - x_0)}{c\cosh c(x - x_0) + v(x_0)\sinh c(x - x_0)}\right]. \tag{57}$$
Category 1(b): For $f_v(x) = c^2 = 0$,
$$v(x) = \frac{v(x_0)}{1 + v(x_0)(x - x_0)}. \tag{58}$$
Category 1(c): For $f_v(x) = -c^2 < 0$,
$$v(x) = c\left[\frac{v(x_0)\cos c(x - x_0) - c\sin c(x - x_0)}{c\cos c(x - x_0) + v(x_0)\sin c(x - x_0)}\right]. \tag{59}$$
Proof of Proposition 1.
For category 1(a), $f_v = c^2 > 0$. In Equation (39), the separation of the variables $v$ and $x$, followed by partial fraction expansion and integration over $(x_0, x)$, leads to Equation (57). Alternatively, the transformation of $v(x)$ to $w(x)$ resulting in Equation (55) leads to cosh and sinh function solutions for that equation. Equation (57) then results from Equation (56).
For category 1(b), $f_v = c^2 = 0$. The direct integration of either Equation (39) or Equation (55) leads to Equation (58).
For category 1(c), $f_v = -c^2 < 0$. As in case 1(a) above, the separation of variables with partial fraction expansion and the direct integration of Equation (39) leads to Equation (59). Alternatively, from Equation (55), the resulting cos and sin solutions inserted in Equation (56) reproduce Equation (59).
The families of equations solvable from this methodology for the function f v ( x ) constant are determined in each category by the Equation (38) definition. For each choice of coefficient p ( x ) , the corresponding second coefficient q ( x ) is determined as follows:
Category 1(a), $f_v(x) = c^2 > 0$:
$$q(x) = \frac{\dot{p}(x)}{2} + \frac{p^2(x)}{4} - c^2. \tag{60}$$
Category 1(b), $f_v(x) = c^2 = 0$:
$$q(x) = \frac{\dot{p}(x)}{2} + \frac{p^2(x)}{4}. \tag{61}$$
Category 1(c), $f_v(x) = -c^2 < 0$:
$$q(x) = \frac{\dot{p}(x)}{2} + \frac{p^2(x)}{4} + c^2. \tag{62}$$
Then, each p ( x ) and q ( x ) pair defined by these relations results in a form of Equation (1) whose quadrature solution is given by Equation (35), as determined by the following matrix elements calculated from Equations (40) through (46) for the v ( x ) results of Equations (57)–(59).
Category 1(a):
\Phi_{11}^{II}(x,x_0) = e^{-\frac{1}{2}P(x,x_0)} \left[ \cosh(c(x-x_0)) + \frac{v(x_0)}{c}\sinh(c(x-x_0)) \right],
\Phi_{12}^{II}(x,x_0) = e^{-\frac{1}{2}P(x,x_0)}\, \frac{1}{c}\sinh(c(x-x_0)).
Category 1(b):
\Phi_{11}^{II}(x,x_0) = e^{-\frac{1}{2}P(x,x_0)} \left[ 1 + v(x_0)(x-x_0) \right],
\Phi_{12}^{II}(x,x_0) = e^{-\frac{1}{2}P(x,x_0)} (x-x_0).
Category 1(c):
\Phi_{11}^{II}(x,x_0) = e^{-\frac{1}{2}P(x,x_0)} \left[ \cos(c(x-x_0)) + \frac{v(x_0)}{c}\sin(c(x-x_0)) \right],
\Phi_{12}^{II}(x,x_0) = e^{-\frac{1}{2}P(x,x_0)}\, \frac{1}{c}\sin(c(x-x_0)).
Here, as in Equation (44), P(x,x_0) is the integral of the arbitrarily chosen p(x). These elements are used in each case in Equation (35) to solve Equation (1) over (x_0, x) for initial conditions y(x_0) = y_0 and ẏ(x_0) = ẏ_0. Note that the dependence on v(x_0) cancels algebraically from the final result of Equation (35), so that p(x), q(x), c, and x_0 are the only elements directly determining the final solution for differential equations of Group One.
Despite the straightforward nature of the resulting matrix elements of Equations (63) to (68), the form of the solution generally provided by this method for this group can introduce computational difficulties due to the logarithmic functions that are present. For example, for Category 1(c) and depending on the solution interval, the argument of V(x,x_0) = \ln\left[\cos(c(x-x_0)) + \frac{v(x_0)}{c}\sin(c(x-x_0))\right] will have multiple zeros and negative regions, which can halt the calculation. Hence, alternative computational strategies may be required at times. □
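As a quick consistency sketch (not part of the formalism above), the Category 1(a) matrix element Φ_{11}^{II} can be verified symbolically to solve the homogeneous form of Equation (1). The coefficient p(x) = 2x, the constant c = 1, and v(x_0) = 0 are hypothetical sample choices, from which q(x) = ṗ/2 + p²/4 − c² = x²:

```python
import sympy as sp

x = sp.symbols('x')
c = 1
p = 2*x                                  # sample coefficient choice
q = sp.diff(p, x)/2 + p**2/4 - c**2      # Category 1(a): q = p'/2 + p^2/4 - c^2
P = sp.integrate(p, (x, 0, x))           # P(x, 0)

# Phi_11 of Equation (63) with x0 = 0 and v(x0) = 0
Phi11 = sp.exp(-P/2)*sp.cosh(c*x)

# Phi_11 should satisfy the homogeneous equation y'' + p y' + q y = 0
residual = sp.simplify(sp.diff(Phi11, x, 2) + p*sp.diff(Phi11, x) + q*Phi11)
print(residual)
```

The residual simplifies to zero, confirming that this member of the generated family is solved by the stated matrix element.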

Constant Coefficient Equations

A last point of significance for this section is that all constant-coefficient ordinary differential equations in the form of Equation (1) are included within Group One. That is, for constants p(x) = p_0 and q(x) = q_0, the quantity f_v(x) = p_0^2/4 - q_0 determines which of the three previous categories provides the solution, according to whether this constant is positive, zero, or negative.
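A minimal sketch of this classification rule, with hypothetical constants: the sign of f_v = p_0^2/4 - q_0 matches the sign of the characteristic-equation discriminant, so it selects between hyperbolic, repeated-root, and oscillatory behavior:

```python
# Classify constant-coefficient equations y'' + p0 y' + q0 y = 0 into the
# three Group One categories via the constant f_v = p0^2/4 - q0.
def group_one_category(p0: float, q0: float) -> str:
    fv = p0**2/4 - q0
    if fv > 0:
        return "1(a)"   # f_v = c^2 > 0: cosh/sinh solutions (distinct real roots)
    if fv == 0:
        return "1(b)"   # f_v = 0: repeated characteristic root
    return "1(c)"       # f_v = -c^2 < 0: cos/sin solutions (complex roots)

print(group_one_category(3.0, 2.0))   # characteristic roots -1, -2
print(group_one_category(2.0, 1.0))   # repeated root -1
print(group_one_category(0.0, 4.0))   # purely imaginary roots
```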

4.2. Group Two: Quadrature Solution to Equation (1) Corresponding to a 11 ( x ) = α ( x ) q ( x )

Given the Riccati equation for v(x), Equation (39), and the structure of the function f_v(x) of Equation (38), the fact that v(x) = p(x)/2 is an immediate solution of Equation (39) for q(x) = 0 suggests a trial solution of the form
v(x) = \frac{p(x)}{2} + \alpha(x)\, q(x)
that will also satisfy this equation for known, real, nonzero function α ( x ) , which is arbitrary within some limitations to be discussed. From Equation (40), this is equivalent to a 11 ( x ) = α ( x ) q ( x ) . Due to its arbitrary character, the function α ( x ) will serve to define families of possible real coefficients q ( x ) of Equation (1) amenable to the System Two solutions of Equations (40) to (48). The coefficient p ( x ) is assumed to be given. Upon substitution of the v ˙ ( x ) and v 2 ( x ) terms from Equation (69) into Equation (39), it is seen that coefficients q ( x ) for which Equation (69) holds must satisfy the additional Riccati equation
\dot{q}(x) + \left[ p(x) + \frac{\dot{\alpha}(x)}{\alpha(x)} + \frac{1}{\alpha(x)} \right] q(x) + \alpha(x)\, q^2(x) = 0.
A solution to this Riccati equation leads to the following theorem.
Theorem 2.
For a given real coefficient p(x) of Equation (1) and the trial solution of Equation (69), a corresponding functional form for feedback element a_{11}(x) that provides particular solutions for v(x) of Equation (39), and hence direct application of the solution methodology of Equations (35) to (37) and Equations (40) to (48) to Equation (1), is obtained as
a_{11}(x) = \alpha(x) q(x) = \frac{\alpha(x_0) q(x_0)\, e^{-P(x,x_0) - I_\alpha(x,x_0)}}{1 + \alpha(x_0) q(x_0) \int_{x_0}^{x} e^{-P(x',x_0) - I_\alpha(x',x_0)}\, dx'},
from which coefficient q(x) follows as q(x) = a_{11}(x)/\alpha(x). Here, \alpha(x_0) and q(x_0) are initial values, P(x,x_0) is defined by Equation (44), and I_\alpha(x,x_0) is defined by
I_\alpha(x,x_0) = \int_{x_0}^{x} \frac{1}{\alpha(x')}\, dx'.
Furthermore, the complementary state-variable matrix elements for utilization in Equation (35) (or Equations (46) and (47)) are
\Phi_{11}^{II}(x,x_0) = 1 + \alpha(x_0) q(x_0) \int_{x_0}^{x} e^{-P(x',x_0) - I_\alpha(x',x_0)}\, dx'
and
\Phi_{12}^{II}(x,x_0) = \Phi_{11}^{II}(x,x_0) \int_{x_0}^{x} \frac{e^{-P(\lambda,x_0)}}{\left[\Phi_{11}^{II}(\lambda,x_0)\right]^2}\, d\lambda = \left[ 1 + \alpha(x_0) q(x_0) \int_{x_0}^{x} e^{-P(x',x_0)-I_\alpha(x',x_0)}\, dx' \right] \int_{x_0}^{x} \frac{e^{-P(\lambda,x_0)}}{\left[ 1 + \alpha(x_0) q(x_0) \int_{x_0}^{\lambda} e^{-P(x',x_0)-I_\alpha(x',x_0)}\, dx' \right]^2}\, d\lambda.
The value of a_{11}(x_0) to be used with these matrix elements in Equation (35) is a_{11}(x_0) = \alpha(x_0) q(x_0). The Proof of Theorem 2 follows in Appendix A.
In summary, a family of equations in the form of Equation (1) has been established in this section for arbitrary coefficient p(x) and the second coefficient q(x) determined from Equation (71), together with the matrix elements of Equations (73) and (74) comprising each family's solution from Equation (35). Individual family members are ascertained by the specific choices for the initial parameter q(x_0) and the function of proportionality \alpha(x) of Equations (69) and (72). The great latitude in choosing \alpha(x) presents a wide variety of possibilities for interrelating the coefficients q(x) = a_{11}(x)/\alpha(x) and p(x) through Equation (71). For example, the choice \alpha(x) = 1/p(x) leads to versions of the q(x) = p(x)/x result discussed in Section 2.6, with specific details dependent upon the value of x_0. However, as seen in Equations (71) and (72), choices for \alpha(x) should exclude those that would introduce singular points within the solution interval. Conversely, the solution interval (x_0, x) should be adjusted accordingly for a specific \alpha(x) to avoid problematic regions for \alpha(x), I_\alpha(x,x_0), q(x), \Phi_{11}^{II}(x,x_0), and \Phi_{12}^{II}(x,x_0).
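The Theorem 2 construction can be spot-checked symbolically. The choices p(x) = 1, \alpha(x) = 1, x_0 = 0, and q(x_0) = 1 below are hypothetical (so that P = x and I_\alpha = x); Equation (71) then generates q(x), Equation (73) generates \Phi_{11}^{II}, and the latter should solve the resulting homogeneous equation:

```python
import sympy as sp

x = sp.symbols('x')
p = 1                         # hypothetical constant coefficient p(x) = 1
# With alpha(x) = 1, q(0) = 1, x0 = 0: P(x,0) = x and I_alpha(x,0) = x
E = sp.exp(-2*x)              # integrand e^{-P - I_alpha}
q = E / (1 + sp.integrate(E, (x, 0, x)))      # Equation (71) with alpha = 1
Phi11 = 1 + sp.integrate(E, (x, 0, x))        # Equation (73)

# Phi11 should satisfy y'' + p y' + q y = 0 for the generated q(x)
residual = sp.simplify(sp.diff(Phi11, x, 2) + p*sp.diff(Phi11, x) + q*Phi11)
print(residual)
```

The residual vanishes identically, consistent with the family construction.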

4.3. Group Three: Quadrature Solution to Equation (1) Corresponding to a_{11}(x) = \alpha(x)\sqrt{q(x)}

Another group of solutions for v ( x ) of the Riccati Equation (39) is obtained from a trial solution of the form
v(x) = \frac{p(x)}{2} + \alpha(x)\sqrt{q(x)},
which is equivalent to a_{11}(x) = \alpha(x)\sqrt{q(x)}. Again, this is motivated by the fact that v(x) = p(x)/2 is an exact solution to Equation (39) for the case of q(x) = 0. The coefficient p(x) is real-valued and arbitrary, but assumed to be known, and here, \alpha(x) is again a real-valued and arbitrary known function of proportionality that will be assumed to be strictly positive. Moreover, only positive real resulting q(x) functions are included. This restricts consideration in this case to equations of form (1) with real coefficients only.
In parallel to what was seen in Section 4.2, the use of Equation (75) in Equation (39) results in the Bernoulli equation [17] (p. 49)
\dot{q}(x) + 2\left( p(x) + \frac{\dot{\alpha}(x)}{\alpha(x)} \right) q(x) + 2\left( \alpha(x) + \frac{1}{\alpha(x)} \right) q^{3/2}(x) = 0,
of power \gamma = 3/2 for the nonlinear q(x) term. The solution of this equation leads to the next theorem.
Theorem 3.
For a given real coefficient p ( x ) of Equation (1) and the trial solution of Equation (75), a corresponding functional form for feedback element a 11 ( x ) that provides particular solutions for v ( x ) of Equation (39) and hence direct application of the solution methodology of Equations (35) to (37) and Equations (40) to (48) to Equation (1) is obtained as
a_{11}(x) = \alpha(x)\sqrt{q(x)} = \frac{\alpha(x_0)\sqrt{q(x_0)}\, e^{-P(x,x_0)}}{1 + \alpha(x_0)\sqrt{q(x_0)} \int_{x_0}^{x} \frac{\alpha^2(x')+1}{\alpha^2(x')}\, e^{-P(x',x_0)}\, dx'},
from which q(x) follows as q(x) = \left( a_{11}(x)/\alpha(x) \right)^2. Additionally, the corresponding state-variable matrix elements for Equation (35) are
\Phi_{11}^{II}(x,x_0) = e^{A_{11}(x,x_0)} = \exp\left\{ \int_{x_0}^{x} \frac{\alpha(x_0)\sqrt{q(x_0)}\, e^{-P(x',x_0)}}{1 + \alpha(x_0)\sqrt{q(x_0)} \int_{x_0}^{x'} \frac{\alpha^2(x'')+1}{\alpha^2(x'')}\, e^{-P(x'',x_0)}\, dx''}\, dx' \right\}
and
\Phi_{12}^{II}(x,x_0) = \Phi_{11}^{II}(x,x_0) \int_{x_0}^{x} \frac{e^{-P(\lambda,x_0)}}{\left[\Phi_{11}^{II}(\lambda,x_0)\right]^2}\, d\lambda = \Phi_{11}^{II}(x,x_0) \int_{x_0}^{x} e^{-P(\lambda,x_0)} \exp\left\{ -2 \int_{x_0}^{\lambda} \frac{\alpha(x_0)\sqrt{q(x_0)}\, e^{-P(x',x_0)}}{1 + \alpha(x_0)\sqrt{q(x_0)} \int_{x_0}^{x'} \frac{\alpha^2(x'')+1}{\alpha^2(x'')}\, e^{-P(x'',x_0)}\, dx''}\, dx' \right\} d\lambda.
The Proof of Theorem 3 follows in the Appendix A.
As in Section 4.2, the arbitrariness of α ( x ) is limited to strictly positive continuous functions not introducing singularities to any of the differential equation quantities of Equations (77)–(79) over the solution range ( x 0 , x).
In summary, given the real coefficient p(x), the adjunct (positive-only) function \alpha(x), and the positive initial value q(x_0), all arbitrary within the restrictions cited, Equation (77) provides the resulting form of the positive-only q(x) coefficients corresponding to the assumption of Equation (75) and thereby appropriate for Equation (1) to be solvable by means of the matrix elements of Equations (78) and (79) in Equation (35). Note that the initial value constant a_{11}(x_0) takes on the value \alpha(x_0)\sqrt{q(x_0)} from Equations (40) and (75). As in Section 4.2, and despite limitations, the range of possibilities encompassed by the arbitrary function \alpha(x) and parameter q(x_0) offers a wide array of equations in the form of Equation (1) with quadrature solutions.
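A symbolic spot-check of Theorem 3, under the hypothetical choices p(x) = 1, \alpha(x) = 1, x_0 = 0, and q(x_0) = 1 (so (\alpha^2+1)/\alpha^2 = 2 and P = x). Since the a_{11} of Equation (77) then has the form D'(x)/(2D(x)), Equation (78) integrates in closed form to \sqrt{D(x)}:

```python
import sympy as sp

x = sp.symbols('x')
p = 1                                              # hypothetical constant coefficient
# Denominator of Equation (77): 1 + int_0^x 2 e^{-x'} dx' = 3 - 2 e^{-x}
D = 1 + sp.integrate(2*sp.exp(-x), (x, 0, x))
a11 = sp.exp(-x) / D                               # a11 = alpha*sqrt(q), Equation (77)
q = a11**2                                         # q = (a11/alpha)^2 with alpha = 1
Phi11 = sp.sqrt(D)                                 # Equation (78): a11 = D'/(2D) integrates to ln sqrt(D)

# Phi11 should satisfy y'' + p y' + q y = 0 for the generated q(x)
residual = sp.simplify(sp.diff(Phi11, x, 2) + p*sp.diff(Phi11, x) + q*Phi11)
print(residual)
```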

5. Application to the One-Dimensional Helmholtz Equation

The one-dimensional Helmholtz equation is of importance to many branches of physics and engineering, often representing time-independent wave behavior that occurs in quantum mechanics, electromagnetics, and optics [18] (pp. 31–44), [19] (pp. 206–229). This equation is often dealt with through the WKB approximation in the physical sciences [20] (pp. 27–37), which can provide accurate results comparable to those from more exact methods [21]. The Helmholtz equation and its solution also have bearing on the diffusion equation and related studies [22].
Since the nonhomogeneous version of this equation is of the form
\ddot{y}(x) + q(x)\, y(x) = f_y(x),
the families of specific differential equations with quadrature solutions obtained in Section 3 and Section 4 apply directly for the choice of coefficient p(x) = 0, thereby supplying additional non-approximate solutions. Although wave investigations using this equation are usually posed as boundary value problems, the initial value version developed here can be interpreted as a portrayal of unidirectional wave propagation over the region (x_0, x) in infinite media with spatially varying characteristics, as indicated by non-constant q(x). With this interpretation as the background, exact propagation results for Equation (80) can be taken from Section 3 and Section 4 and are summarized in the following.

5.1. Results from Section 3.1

Given that p ( x ) = 0 in Equation (1) here, the relevant Helmholtz coefficients q ( x ) leading to the subsequent quadrature solution are determined from Equations (38) and (39) by
q(x) = -\left( \dot{v}(x) + v^2(x) \right),
for given, real arbitrary v ( x ) . The corresponding System Two feedback elements of Figure 2, from Equations (40) and (41), are
a_{11}(x) = -a_{22}(x) = v(x),
from which constant a 11 ( x 0 ) = v ( x 0 ) . The quantity V ( x , x 0 ) of Equation (43) then determines the two pertinent matrix elements of Equation (46) to be
\Phi_{11}^{II}(x,x_0) = e^{V(x,x_0)},
\Phi_{12}^{II}(x,x_0) = e^{V(x,x_0)} \int_{x_0}^{x} e^{-2V(\lambda,x_0)}\, d\lambda.
The accompanying quadrature solution for Equations (83) and (84) follows as in Equation (47) by inserting these matrix elements into Equation (35) to give
y_I(x) = e^{V(x,x_0)} \left[ 1 - v(x_0) \int_{x_0}^{x} e^{-2V(x',x_0)}\, dx' \right] y_0 + e^{V(x,x_0)} \left[ \int_{x_0}^{x} e^{-2V(x',x_0)}\, dx' \right] \dot{y}_0 + \int_{x_0}^{x} \left[ e^{V(x,x')} \int_{x'}^{x} e^{-2V(\lambda,x')}\, d\lambda \right] f_y(x')\, dx'.
There are many potentially useful versions of the Helmholtz equation available, with solutions from Equations (81) to (85) stemming from the arbitrary nature of v ( x ) within Equation (81).
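A sketch of one such version, with the hypothetical choice v(x) = x and x_0 = 0: Equation (81) gives q(x) = -(1 + x^2), V(x,0) = x^2/2, and the matrix elements of Equations (83) and (84) should both solve the homogeneous Helmholtz equation:

```python
import sympy as sp

x, lam = sp.symbols('x lam')
v = x                                    # sample arbitrary v(x)
q = -(sp.diff(v, x) + v**2)              # Equation (81): q = -(v' + v^2) = -(1 + x^2)
V = sp.integrate(v, (x, 0, x))           # V(x,0) = x^2/2

Phi11 = sp.exp(V)                                                    # Equation (83)
Phi12 = sp.exp(V)*sp.integrate(sp.exp(-2*(lam**2/2)), (lam, 0, x))   # Equation (84), V(lam,0) = lam^2/2

# Both elements should satisfy y'' + q y = 0
r11 = sp.simplify(sp.diff(Phi11, x, 2) + q*Phi11)
r12 = sp.simplify(sp.diff(Phi12, x, 2) + q*Phi12)
print(r11, r12)
```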

5.2. Results from Section 4.1

For this Group One case, from Equation (38) for p ( x ) = 0 ,
\left[ f_v(x) = \mathrm{constant} \right] = -q(x).
Constant q(x) represents a simple and elementary Helmholtz equation for which the feedback diagram method provides a relatively complicated solution. Nevertheless, with parameter c being any real constant, the formalism provides three possibilities for the constant of Equation (86):
Category 1(a), q(x) = -c^2 < 0: v(x) given by Equation (57);
Category 1(b), q(x) = 0: v(x) given by Equation (58);
Category 1(c), q(x) = c^2 > 0: v(x) given by Equation (59).
The corresponding matrix elements Φ 11 I I ( x , x 0 ) and Φ 12 I I ( x , x 0 ) are given by the three pairs of Equations (63) and (64), (65) and (66), and (67) and (68) for P ( x , x 0 ) = 0 from Equation (44). The System Two feedback elements are then
a_{11}(x) = -a_{22}(x) = v(x)
from Equations (40) and (41) for p ( x ) = 0 , and hence initial constant a 11 ( x 0 ) = v ( x 0 ) . Finally, the System One solution y I ( x ) is obtained from Equation (35) using the matrix elements whose values are outlined above.
Clearly, direct and elementary methods for solving the Helmholtz equation under Group One conditions of constant coefficient q ( x ) are preferable to the equivalent results obtained here.

5.3. Results from Section 4.2

The relevant Helmholtz q(x) coefficient is taken from Equation (71) as q(x) = a_{11}(x)/\alpha(x) with p(x) = 0 and P(x,x_0) = 0, where
a_{11}(x) = v(x) = \alpha(x) q(x) = \frac{\alpha(x_0) q(x_0)\, e^{-I_\alpha(x,x_0)}}{1 + \alpha(x_0) q(x_0) \int_{x_0}^{x} e^{-I_\alpha(x',x_0)}\, dx'},
and I α ( x , x 0 ) is defined by Equation (72). Real function α ( x ) , assumed to be nonzero over ( x 0 , x ) , and initial constant q ( x 0 ) are both assumed to be arbitrary, provided that no singularities are introduced for that region in the denominator of equations such as Equation (88) by these choices. The System Two feedback elements are as in Equation (87), with a 11 ( x 0 ) = α ( x 0 ) q ( x 0 ) . The matrix elements Φ 11 I I ( x , x 0 ) and Φ 12 I I ( x , x 0 ) are determined by Equations (73) and (74) with P ( x , x 0 ) = 0 :
\Phi_{11}^{II}(x,x_0) = 1 + \alpha(x_0) q(x_0) \int_{x_0}^{x} e^{-I_\alpha(x',x_0)}\, dx',
\Phi_{12}^{II}(x,x_0) = \left[ 1 + \alpha(x_0) q(x_0) \int_{x_0}^{x} e^{-I_\alpha(x',x_0)}\, dx' \right] \int_{x_0}^{x} \left[ 1 + \alpha(x_0) q(x_0) \int_{x_0}^{\lambda} e^{-I_\alpha(x',x_0)}\, dx' \right]^{-2} d\lambda.
As before, the substitution of these matrix element expressions into Equation (35) generates the y I ( x ) System One solution family for the corresponding family of Helmholtz equations.
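A brief symbolic sketch of this family, under the hypothetical choices \alpha(x) = 1, q(x_0) = 1, and x_0 = 0 (so I_\alpha(x,0) = x): Equation (89) gives \Phi_{11}^{II} = 2 - e^{-x}, which should solve the homogeneous Helmholtz equation with the generated coefficient q(x):

```python
import sympy as sp

x = sp.symbols('x')
# alpha(x) = 1, q(0) = 1, x0 = 0 -> I_alpha(x,0) = x and P = 0
Phi11 = 1 + sp.integrate(sp.exp(-x), (x, 0, x))   # Equation (89): 2 - e^{-x}
q = sp.exp(-x) / Phi11                            # q = a11/alpha from Equation (88)

# Phi11 should satisfy the Helmholtz equation y'' + q y = 0
residual = sp.simplify(sp.diff(Phi11, x, 2) + q*Phi11)
print(residual)
```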

5.4. Results from Section 4.3

In this case, q(x) for Helmholtz Equation (80) is obtained from Equation (77) as q(x) = \left( a_{11}(x)/\alpha(x) \right)^2 with P(x,x_0) = 0 for arbitrary but strictly positive function \alpha(x), where
a_{11}^2(x) = v^2(x) = \alpha^2(x) q(x) = \frac{\alpha^2(x_0) q(x_0)}{\left[ 1 + \alpha(x_0)\sqrt{q(x_0)} \int_{x_0}^{x} \frac{\alpha^2(x')+1}{\alpha^2(x')}\, dx' \right]^2}.
The initial constant a_{11}(x_0) is then a_{11}(x_0) = \alpha(x_0)\sqrt{q(x_0)}, which is used with the following matrix elements in Equation (35) to provide the y_I(x) solution. The first matrix element is from Equation (78) with P(x,x_0) = 0:
\Phi_{11}^{II}(x,x_0) = \exp\left\{ \int_{x_0}^{x} \frac{\alpha(x_0)\sqrt{q(x_0)}}{1 + \alpha(x_0)\sqrt{q(x_0)} \int_{x_0}^{x'} \frac{\alpha^2(x'')+1}{\alpha^2(x'')}\, dx''}\, dx' \right\}.
Using Equation (92) in Equation (79), again with P ( x , x 0 ) = 0 , gives the second matrix element:
\Phi_{12}^{II}(x,x_0) = \Phi_{11}^{II}(x,x_0) \int_{x_0}^{x} \exp\left\{ -2 \int_{x_0}^{\lambda} \frac{\alpha(x_0)\sqrt{q(x_0)}}{1 + \alpha(x_0)\sqrt{q(x_0)} \int_{x_0}^{x'} \frac{\alpha^2(x'')+1}{\alpha^2(x'')}\, dx''}\, dx' \right\} d\lambda.
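A short symbolic sketch of this Group Three Helmholtz family, again with the hypothetical choices \alpha(x) = 1, q(x_0) = 1, and x_0 = 0: the inner integral becomes 2x, a_{11}(x) = 1/(1+2x), the generated coefficient is q(x) = (1+2x)^{-2}, and Equation (93) integrates in closed form to \sqrt{1+2x}:

```python
import sympy as sp

x = sp.symbols('x')
# alpha(x) = 1, q(0) = 1, x0 = 0: inner integral of (alpha^2+1)/alpha^2 is 2x
D = 1 + 2*x                   # denominator in Equation (92)
q = (1/D)**2                  # q = (a11/alpha)^2 with a11 = 1/(1+2x)
Phi11 = sp.sqrt(D)            # Equation (93): exp{int 1/(1+2x') dx'} = sqrt(1+2x)

# Phi11 should satisfy the Helmholtz equation y'' + q y = 0
residual = sp.simplify(sp.diff(Phi11, x, 2) + q*Phi11)
print(residual)
```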

6. Application of Series Methods for Riccati Equations without Known Particular Solutions

For known p ( x ) and q ( x ) of Equation (1), the Riccati Equation (39) connection between Systems One and Two only differs from one equation or system under study to another by the (negative) Schwarzian derivative Equation (38). Hence, it is effectively universal in both form and solution. For Riccati equations without known particular solutions that would otherwise permit and guide further analysis, a potentially useful alternative comes from a series approach, such as the single-variable Adomian Decomposition Method (ADM) [8]. This versatile method compartmentalizes the analysis of nonlinear ODEs into the general form
L[v] + R[v] + N[v] = f_v(x),
where L[v] is a chosen invertible linear operator, R[v] is the remaining part of the linear operations, and N[v] is the nonlinear operator, all of which act on v(x), together with the driving function taken from Equation (38). Due to the simplicity of Equation (39), L[v] = dv/dx, R[v] = 0, and N[v] = v^2, and the solution is abstractly
v(x) = v_h(x) + L^{-1}[f_v(x)] - L^{-1}[N[v(x)]],
where the inverse operator is L^{-1}[\cdot] = \int_{x_0}^{x} [\cdot]\, dx' and the homogeneous solution for L[v] is v_h(x), which is equal to the constant v(x_0). We also define the integral F_v(x,x_0) as the (negative) Schwarzian derivative integral
F_v(x,x_0) = L^{-1}[f_v(x)] = \int_{x_0}^{x} f_v(x')\, dx'.
The solution v(x) is assumed to be determined by the infinite series v(x) = \sum_{n=0}^{\infty} v_n(x). Similarly, the nonlinear component is assumed to be describable by N(v) = \sum_{n=0}^{\infty} A_n for the Adomian polynomials A_n. These are defined for all n = 0, 1, \ldots by
A_n = \frac{1}{n!} \frac{d^n}{d\lambda^n} \left[ N\left[ \sum_{k=0}^{\infty} v_k(x)\, \lambda^k \right] \right]_{\lambda=0}.
Although the higher Adomian polynomials become quite lengthy and calculationally intensive, n-term approximations for them can provide accurate results with relatively rapid convergence for a judicious choice of operators L and R in the general case [9,10]. However, for N ( v ) = v 2 , only two N ( v ) derivatives are nonzero. In the original Adomian recursion scheme [8], the initial series element for v ( x ) is chosen to be
v_0(x) = v_h(x) + L^{-1}[f_v(x)] = v(x_0) + F_v(x,x_0).
Note that the initial constant v ( x 0 ) is either found from Equation (48) for known a 11 ( x 0 ) or is left undetermined as an arbitrary constant. From Equation (95) and the N [ v ] Adomian polynomial expansion above, the v ( x ) function series components are
v_1(x) = -L^{-1}[A_0], \quad v_2(x) = -L^{-1}[A_1], \quad \ldots, \quad v_n(x) = -L^{-1}[A_{n-1}].
For Equation (39), the first six Adomian polynomials are
A_0 = v_0^2, \quad A_1 = 2v_0 v_1, \quad A_2 = 2v_0 v_2 + v_1^2, \quad A_3 = 2v_0 v_3 + 2v_1 v_2, \quad A_4 = 2v_0 v_4 + 2v_1 v_3 + v_2^2, \quad A_5 = 2v_0 v_5 + 2v_1 v_4 + 2v_2 v_3.
From the uncomplicated nature of Equation (39), the higher Adomian polynomials are simple in form, since for
n odd: A_n = \sum_{i+j=n} v_i v_j, \quad and for n even: A_n = \sum_{i+j=n} \left( v_i v_j (1-\delta_{ij}) + v_i^2 \delta_{ij} \right),
where a Kronecker δ i j demarcates the i = j terms. The convergence of the Adomian Decomposition series has been examined and confirmed by several authors [23,24,25,26]. For Equation (39), from the recursion scheme of Equation (99), the first four v n   ( x ) elements using the Adomian polynomials of Equation (100) are
v_0(x) = v(x_0) + F_v(x,x_0),
v_1(x) = -\int_{x_0}^{x} \left[ v(x_0) + F_v(x',x_0) \right]^2 dx',
v_2(x) = \int_{x_0}^{x} \left\{ 2\left[ v(x_0) + F_v(x',x_0) \right] \int_{x_0}^{x'} \left[ v(x_0) + F_v(x'',x_0) \right]^2 dx'' \right\} dx',
v_3(x) = -\int_{x_0}^{x} \left\{ 2\left[ v(x_0) + F_v(x',x_0) \right] v_2(x') + \left[ \int_{x_0}^{x'} \left[ v(x_0) + F_v(x'',x_0) \right]^2 dx'' \right]^2 \right\} dx'.
Therefore, an overall solution v(x) is shown to be a series of terms composed of increasingly multiple integrals of the negative Schwarzian derivative plus a constant for the differential equation. This series solution is most appropriate for Riccati equations without identifiable particular solutions for v(x), unlike those shown in the three categories of Section 4. The convergence of the v(x) series can be examined through a ratio test [25] by checking whether \|v_{n+1}(x)\| / \|v_n(x)\| < 1 for an appropriately chosen norm, which can also determine the radius of convergence of the series.
In summary, the implementation of the algorithm of Equations (99) through (102) provides a general solution mechanism for Equation (39), the link between a system under study and a second system that provides solutions to it through Equation (47) and the matrix elements of Equation (46). This link is unchanging in form, varying only through the details of the negative Schwarzian derivative. Hence, the series expansion v(x) = \sum_{n=0}^{\infty} v_n(x) and its corresponding integral series V(x,x_0), defined by Equation (43) and used directly in Equation (47), extend the applicability of the feedback diagram method in principle to all differential equations in the form of Equation (1) for which a (uniformly) convergent series can be obtained over the interval (x_0, x). However, this formal solution will have practical utility that is highly dependent upon series convergence being adequately rapid and regions of convergence being largely unrestricted.

ADM Implementation

Example 2.
A simple example illustrates these results. Consider
\ddot{y}(x) + 2x\,\dot{y}(x) + x^2 y(x) = f_y(x),
for unspecified driving function f_y(x), initial conditions y_0 and \dot{y}_0, and solution interval (0, x). The negative of the Schwarzian derivative of Equation (38) is simply f_v(x) = 1. Note that the solution to Equation (103) falls within the Group One category presented in Section 4.1 for constant c = 1, in which the Equation (39) solution is v(x) = \tanh(x). As such, the exact solution to Equation (103) is obtainable either from Equation (35) together with Equations (63) and (64) or from Equation (47) together with Equations (43) and (44), with the final result in either case being
y(x) = \left( e^{-x^2/2}\cosh(x) \right) y_0 + \left( e^{-x^2/2}\sinh(x) \right) \dot{y}_0 + \int_{0}^{x} e^{-(x^2-(x')^2)/2}\, \sinh(x-x')\, f_y(x')\, dx'.
For series approximation, the ADM algorithm under these conditions, together with v ( x 0 ) = 0 , leads to the following sequence of Adomian polynomials:
A_0 = \left( v_0^2(x) = F_v^2(x,0) \right) = x^2, \quad A_1 = -\frac{2x^4}{3}, \quad A_2 = \frac{17x^6}{45}, \quad A_3 = -\frac{62x^8}{315}, \quad A_4 = \frac{1382\, x^{10}}{14175}.
From these Adomian polynomials, six v n ( x ) elements follow from Equation (99):
v_0(x) = x, \quad v_1(x) = -\frac{x^3}{3}, \quad v_2(x) = \frac{2x^5}{15}, \quad v_3(x) = -\frac{17x^7}{315}, \quad v_4(x) = \frac{62x^9}{2835}, \quad v_5(x) = -\frac{1382\, x^{11}}{155925},
which are seen to duplicate the first six terms of the Maclaurin series for \tanh(x). Note, however, that the region of convergence for this series is tightly restricted to |x| < \pi/2. The integration of this last set of elements as per Equation (43) gives the first six elements of V(x, x_0 = 0),
V(x,0) = \frac{x^2}{2} - \frac{x^4}{12} + \frac{x^6}{45} - \frac{17x^8}{2520} + \frac{62x^{10}}{28350} - \frac{1382x^{12}}{1871100} + \cdots,
for subsequent usage in the \Phi_{11}^{II}(x,0) and \Phi_{12}^{II}(x,0) matrix elements of Equation (46). From the known \tanh(x) result, Equation (107) converges to V(x,0) = \ln(\cosh(x)) within the region of convergence. Ultimately, this appears as powers of \cosh(x) through e^{V} and e^{-2V} in Equations (46) and (47) describing the two matrix elements.
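The recursion of Equations (99) and (100) is compact enough to automate. A sketch in sympy (assuming x_0 = 0 and v(x_0) = 0, as in this example) generates the six v_n elements and confirms that their sum matches the Maclaurin series of \tanh(x) through x^{11}:

```python
import sympy as sp

x = sp.symbols('x')
f_v = sp.Integer(1)                     # negative Schwarzian derivative for Eq. (103)
v = [sp.integrate(f_v, (x, 0, x))]      # v_0 = v(0) + F_v(x,0) = x, with v(0) = 0
for n in range(5):
    # Adomian polynomial A_n = sum_{i+j=n} v_i v_j for N(v) = v^2 (Equation (100))
    A_n = sum(v[i]*v[n - i] for i in range(n + 1))
    v.append(-sp.integrate(A_n, (x, 0, x)))   # v_{n+1} = -L^{-1}[A_n]

approx = sp.expand(sum(v))
print(approx)

# Compare against the Maclaurin series of tanh(x) through x^11
exact = sp.tanh(x).series(x, 0, 12).removeO()
print(sp.expand(approx - exact))
```

The difference expands to zero, reproducing Equation (106) term by term.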
Some numerical details more fully characterize this illustration. The six-term approximate solution obtained by substituting Equation (107) into Equation (47) is contrasted with the exact homogeneous solution of Equation (103) in Figure 3. Also included is a nine-term approximate solution. For this comparison, the initial values y_0 and \dot{y}_0 were both chosen to be one.
From the figure, for the interval (0, 1.57), the three solutions are well within one percent of each other up to x = 1.2. At x = 1.5, the deviation of the nine-term approximation is about 2.2% above the exact solution, and that of the six-term approximation is about 4.1% below. The positive versus negative deviations are generated by the sign of the highest-order term in each approximation, hence the bracketing of the exact result, and the series method ceases to apply at x = \pi/2. The accuracy of these approximate results is mediocre, with improvement requiring the generation of additional terms or possibly the utilization of an alternate method.
Hence, although there is agreement for the Adomian Decomposition Method with previously determined results for this simple example, limitations on its practicality for solutions of Riccati Equation (39) not connected to particular solutions are seen to arise from issues of series convergence. The ADM has been chosen due to its power and flexibility to encompass a wide variety of equations. However, an alternative, such as Picard’s Iteration Method, also leads to a comparable series solution of multiple integrals of the Schwarzian derivative. Other choices are the Homotopic Analysis Method (HAM) [27] and the Homotopic Perturbation Method (HPM) [28], which often prove to be highly effective. These related approaches are widely used for nonlinear differential equations, and both employ a series of calculated functions in a series expansion with a homotopic expansion or imbedding parameter. Much as for the ADM, both methods employ linear and nonlinear equation components within their analyses. As the imbedding parameter varies from zero to one (analogous to a homotopy or continuous deformation from one topological surface or function to another), the series solution is altered from serving as a solution to the linear component equation to solving the nonlinear one. Of further significance are the facts that a small parameter is not required and that the series functions are calculated from a succession of linear equations. Additionally, the application of HAM or HPM can often result in convergence requiring only a few iterations [29]. Nevertheless, for any iteration scheme, the practical application of the feedback diagram method for an infinite series solution for v ( x ) of Riccati Equation (39) is strongly contingent upon both sufficiently unrestricted regions of convergence and a sufficiently rapid rate of series convergence.

7. Conclusions and Discussion

In summary, a novel feedback diagram-based methodology has been constructed and presented for the generation of equations solvable by quadrature, together with their corresponding solutions, for those adhering to the standard form for second-order, linear differential equations, as in Equation (1), with its coefficients p ( x ) and q ( x ) and initial conditions. Solutions of these equations (or systems) can be acquired from another system, specifically designed to produce such results upon the determination of solutions for feedback elements v ( x ) or a 11 ( x ) of the Riccati equation link between the two systems. In Section 3, families of differential equations with solutions were generated by calculating q ( x ) coefficients from choices for arbitrary p ( x ) and v ( x ) . In Section 4, particular Riccati equation solutions for v ( x ) or a 11 ( x ) were determined by either assuming constant values for the Schwarzian derivative or specific assumed forms for these feedback elements in terms of p ( x ) and q ( x ) . Under either set of assumptions, q ( x )   and   p ( x ) relations were determined, which describe families of solvable equations, together with their corresponding quadrature solutions. In each case, members of these families were specified by choices for an arbitrary function and parameter. In Section 5, the outcomes of the previous two sections were extended to the physically important Helmholtz equation by limiting p ( x ) to zero. Finally, in Section 6, the Adomian Decomposition Method was employed to provide a Riccati equation series solution for all differential equations (1) not overtly dependent upon particular Riccati solutions for v ( x ) or a 11 ( x ) . The overall result, which is an infinite series in terms of multiple integrals of the Schwarzian derivative, formally provides a quadrature solution to any linear, second-order differential equation (1). 
However, important practical limitations may arise due to restrictions on solution regions of applicability and/or slow rates of convergence.
Two parallel and related developments, which lead to results quite similar to those of the feedback diagram method, will now be compared and considered. The first method comes from the normal form of Equation (1) [2,3], in which a product form of solution y ( x ) = z ( x ) w ( x ) is assumed with the further assumption of
z(x) = e^{-\frac{1}{2}\int_{x_0}^{x} p(x')\, dx'} = e^{-\frac{1}{2}P(x,x_0)},
using Equation (44). If these are substituted in Equation (1), its normal form is derived as
\ddot{w}(x) - f_v(x)\, w(x) = e^{\frac{1}{2}P(x,x_0)} f_y(x),
together with appropriate initial conditions. The negative Schwarzian derivative definition for f v ( x ) of Equation (38) has been utilized, and the solution to Equation (1) follows from Equations (108) and (109) as
y(x) = e^{-\frac{1}{2}P(x,x_0)}\, w(x).
The normal form approach is usually presented as relying upon a solution to the nonhomogeneous Helmholtz equation of Equation (109). However, for arbitrary coefficient f v ( x ) , this is often as difficult to solve as the original Equation (1), although some of the Helmholtz equation results of Section 5 derived here may be helpful. Two independent homogeneous solutions to Equation (109), w h 1 ( x ) and w h 2 ( x ) , would be required, or at least one would need to be determined, followed by a reduction of order to generate the second. These two could then be used to construct an impulse response or Green’s function to encompass the nonhomogeneous Equation (109) through the relation [16] (p. 113)
G_w(x,x') = \frac{ \begin{vmatrix} w_{h1}(x') & w_{h2}(x') \\ w_{h1}(x) & w_{h2}(x) \end{vmatrix} }{ \begin{vmatrix} w_{h1}(x') & w_{h2}(x') \\ \dot{w}_{h1}(x') & \dot{w}_{h2}(x') \end{vmatrix} }.
This is the role played by Φ 12 I I ( x , x ) in Section 2 for the feedback diagram method.
An alternative to solving the homogeneous version of Equation (109) follows the Riccati equation procedure of Equation (39) utilized in Section 2. That is, with the substitution (recall Equation (55)) of v(x) = \dot{w}(x)/w(x), this homogeneous version is seen to revert to Equation (39), since
\dot{v}(x) = \frac{\ddot{w}(x)}{w(x)} - \left( \frac{\dot{w}(x)}{w(x)} \right)^2 = f_v(x) - v^2(x).
Any resulting solutions for v ( x ) could then be converted to homogeneous w ( x ) solutions by
w_h(x) = w_h(x_0)\, e^{V(x,x_0)},
using the V ( x , x 0 ) definition of Equation (43). A reduction of order for a second solution and the impulse response of Equation (111) would then accommodate the nonhomogeneous version of Equation (109), and a complete y ( x ) solution would result as per Equation (110). Overall, Equation (112) serves as a connection between the normal form and the feedback diagram methods, making the Riccati equation solutions of Section 4.2, Section 4.3, Section 5, and Section 6 available to each approach.
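This connection can be sketched concretely with the Example 2 values (f_v = 1, w_h = \cosh x), checking both the substitution v = \dot{w}/w and the reconstruction of Equation (112):

```python
import sympy as sp

x = sp.symbols('x')
w_h = sp.cosh(x)                 # homogeneous solution of w'' - w = 0 (f_v = 1)
v = sp.diff(w_h, x)/w_h          # substitution v = w'/w = tanh(x)

# v satisfies the Riccati equation v' + v^2 = f_v of Equation (39)
print(sp.simplify(sp.diff(v, x) + v**2))    # -> 1

# Equation (112): w_h(x) = w_h(0) e^{V(x,0)} with V = int_0^x v dx'
V = sp.integrate(v, (x, 0, x))              # log(cosh(x))
print(sp.simplify(sp.exp(V) - w_h))         # -> 0
```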
A second method, which parallels the development here, is that of Badani [5], who reduces the general form of Equation (1) to a corresponding first-order nonhomogeneous equation through an insightful grouping of terms and defining of functions. The derived equation can be solved directly by an integrating factor, which includes a key term that is itself a solution to a Riccati equation. Through a comparison to the method presented here, that key term can be identified as a_{11}(x) and the associated Riccati equation as Equation (32). Hence, the Badani method and the method presented here arrive at the same result, but by different routes. An advantage of the feedback diagram method developed here is that in utilizing Equation (39) for v(x) rather than Equation (32) for a_{11}(x), the Schwarzian derivative can be used to systematically generate equations and their quadrature solutions, as in Section 3.1; serve as a compact and useful marker to classify or characterize second-order linear differential equations with solutions, as in Section 4.1; or develop series solutions formally extendable to all versions of Equation (1), as established in Section 6. Badani presents several well-chosen examples illustrating his approach for particular choices of the coefficients p(x) and q(x) leading to particular Riccati solutions. However, the use of the function v(x), Equation (39), and the negative Schwarzian derivative of Equation (38) can together provide a broader, more systematic perspective, as demonstrated by the generation of families of equations with solutions in Section 3, Section 4, Section 5 and Section 6.
Finally, the presentation of the feedback diagram method here outlines a pathway for extending these results so as to obtain comparable families of equations with quadrature solutions for third-order and even higher-order linear differential equations.

Author Contributions

Conceptualization, R.P.; methodology, R.P.; formal analysis, R.P.; software, C.C.; validation, R.P. and C.C.; writing—original draft preparation, R.P.; writing—review and editing, R.P. and C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Theorem 1.
As a fundamental matrix, $\Phi^{II}(x,x_0)$ satisfies $\dot{\Phi}^{II}(x,x_0) = A^{II}(x)\,\Phi^{II}(x,x_0)$ with initial condition $\Phi^{II}(x_0,x_0) = I$. In detail, using $A^{II}(x)$ of Equations (12) and (13), we have the following four equations:
$$\frac{d\Phi_{11}^{II}(x,x_0)}{dx} = a_{11}(x)\,\Phi_{11}^{II}(x,x_0) + \Phi_{21}^{II}(x,x_0), \tag{A1}$$
$$\frac{d\Phi_{12}^{II}(x,x_0)}{dx} = a_{11}(x)\,\Phi_{12}^{II}(x,x_0) + \Phi_{22}^{II}(x,x_0), \tag{A2}$$
$$\frac{d\Phi_{21}^{II}(x,x_0)}{dx} = a_{22}(x)\,\Phi_{21}^{II}(x,x_0), \tag{A3}$$
$$\frac{d\Phi_{22}^{II}(x,x_0)}{dx} = a_{22}(x)\,\Phi_{22}^{II}(x,x_0). \tag{A4}$$
Starting with Equation (A3), we see that $\Phi_{21}^{II}(x,x_0) = \Phi_{21}^{II}(x_0,x_0)\,e^{\int_{x_0}^{x} a_{22}(x)\,dx} = 0$, since $\Phi_{21}^{II}(x_0,x_0) = 0$, giving Equation (18). Similarly, for $\Phi_{22}^{II}(x_0,x_0) = 1$, the integration of Equation (A4) gives $\Phi_{22}^{II}(x,x_0) = e^{\int_{x_0}^{x} a_{22}(x)\,dx}$, resulting in Equations (16) and (17). Then, with $\Phi_{21}^{II}(x,x_0) = 0$ and $\Phi_{11}^{II}(x_0,x_0) = 1$, Equation (A1) leads to $\Phi_{11}^{II}(x,x_0) = e^{\int_{x_0}^{x} a_{11}(x)\,dx}$, giving Equation (14) together with Equation (15). Finally, using the integrating factor $e^{-A_{11}(x,x_0)}$ in Equation (A2) leads to
$$\frac{d}{dx}\left[e^{-A_{11}(x,x_0)}\,\Phi_{12}^{II}(x,x_0)\right] = e^{-A_{11}(x,x_0)}\,\Phi_{22}^{II}(x,x_0) = e^{A_{22}(x,x_0) - A_{11}(x,x_0)}, \tag{A5}$$
from which Equation (19) results upon integration over $(x_0, x)$, since $\Phi_{12}^{II}(x_0,x_0) = 0$. □
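As an independent numerical sanity check of Theorem 1 (illustrative only, not part of the paper), the closed-form entries $\Phi_{11}^{II} = e^{A_{11}}$, $\Phi_{21}^{II} = 0$, $\Phi_{22}^{II} = e^{A_{22}}$, and $\Phi_{12}^{II} = e^{A_{11}}\int_{x_0}^{x} e^{A_{22}-A_{11}}\,dx$ can be compared against a direct integration of $\dot{\Phi}^{II} = A^{II}(x)\,\Phi^{II}$; the coefficient choices below are arbitrary:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Arbitrary sample coefficients for System Two (illustrative only)
a11 = lambda x: np.sin(x)
a22 = lambda x: -0.5 * x
x0, xf = 0.0, 1.0

A11 = lambda x: quad(a11, x0, x)[0]   # A11(x, x0)
A22 = lambda x: quad(a22, x0, x)[0]   # A22(x, x0)

# Closed-form entries of the fundamental matrix, per Equations (14)-(19)
Phi11 = np.exp(A11(xf))
Phi21 = 0.0
Phi22 = np.exp(A22(xf))
Phi12 = np.exp(A11(xf)) * quad(lambda x: np.exp(A22(x) - A11(x)), x0, xf)[0]

# Direct integration of Phi' = A(x) Phi with Phi(x0) = I
def rhs(x, P):
    P = P.reshape(2, 2)
    A = np.array([[a11(x), 1.0], [0.0, a22(x)]])   # a12 = 1, a21 = 0
    return (A @ P).ravel()

sol = solve_ivp(rhs, (x0, xf), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
Pnum = sol.y[:, -1].reshape(2, 2)
assert np.allclose(Pnum, [[Phi11, Phi12], [Phi21, Phi22]], atol=1e-6)
```

The upper-triangular structure of $A^{II}(x)$ (with $a_{21} = 0$, $a_{12} = 1$) is what makes each entry solvable by quadrature alone, as the proof shows.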
Proof of Theorem 2.
Since the Riccati equation of Equation (70) is homogeneous, it is readily transformable to a first-order linear differential equation by $w = 1/q$, giving
$$\dot{w}(x) - \left[p(x) + \frac{\dot{\alpha}(x)}{\alpha(x)} + \frac{1}{\alpha(x)}\right] w(x) = \alpha(x). \tag{A6}$$
Defining the function
$$I(x,x_0) = \int_{x_0}^{x}\left(p(x) + \frac{\dot{\alpha}(x)}{\alpha(x)} + \frac{1}{\alpha(x)}\right)dx = P(x,x_0) + \ln\!\left(\frac{\alpha(x)}{\alpha(x_0)}\right) + I_{\alpha}(x,x_0), \tag{A7}$$
which incorporates the previous definition of Equation (72). Using this within the integrating factor $e^{-I(x,x_0)}$, we obtain
$$w(x) = e^{I(x,x_0)}\,w(x_0) + e^{I(x,x_0)}\int_{x_0}^{x}\alpha(x)\,e^{-I(x,x_0)}\,dx. \tag{A8}$$
Since $q(x) = 1/w(x)$ and $q(x_0) = 1/w(x_0)$, Equation (71) is produced. For Equation (73), recall from Equation (42) that, here, $A_{11}(x,x_0)$ is equal to
$$A_{11}(x,x_0) = \int_{x_0}^{x}\alpha(x)\,q(x)\,dx. \tag{A9}$$
Given that the resulting $\alpha(x)\,q(x)$ of Equation (71) is the derivative of a logarithm,
$$A_{11}(x,x_0) = \ln\!\left[1 + \alpha(x_0)\,q(x_0)\int_{x_0}^{x} e^{-P(x,x_0) - I_{\alpha}(x,x_0)}\,dx\right]. \tag{A10}$$
From this and Equation (14), $\Phi_{11}^{II}(x,x_0) = e^{A_{11}(x,x_0)}$, and Equation (73) is verified. Finally, for Equation (74), we have, from Equations (42) and (45), $A_{22}(x,x_0) = -P(x,x_0) - A_{11}(x,x_0)$, from which
$$A_{22}(x,x_0) - A_{11}(x,x_0) = -P(x,x_0) - 2A_{11}(x,x_0), \tag{A11}$$
for $A_{11}(x,x_0)$ given by Equation (A10). When Equations (A10) and (A11) are inserted into the original definition of $\Phi_{12}^{II}(x,x_0)$ of Equation (19), Equation (74) is the outcome. The $a_{11}(x_0)$ value of $\alpha(x_0)\,q(x_0)$ follows directly from Equations (40) and (69). □
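A numerical spot-check of this construction (again illustrative, not from the paper): the substitution $w = 1/q$ together with Equation (A6) implies the homogeneous Riccati form $\dot{q} = -\left[p + \dot{\alpha}/\alpha + 1/\alpha\right]q - \alpha q^2$, so the closed-form $q(x) = 1/w(x)$ obtained from Equation (A8) should match a direct integration of that Riccati equation. The coefficient choices below are arbitrary:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Illustrative choices (not from the paper)
p = lambda x: 0.2 * x
alpha = lambda x: 2.0 + np.sin(x)
dalpha = lambda x: np.cos(x)
x0, xf, q0 = 0.0, 1.0, 0.5

# I(x, x0) = P(x,x0) + ln(alpha(x)/alpha(x0)) + I_alpha(x,x0), Eq. (A7)
coef = lambda x: p(x) + dalpha(x) / alpha(x) + 1.0 / alpha(x)
I = lambda x: quad(coef, x0, x)[0]

# Closed form from Eq. (A8): w(x) = e^I [w(x0) + int alpha e^{-I} dx], q = 1/w
w0 = 1.0 / q0
integral = quad(lambda x: alpha(x) * np.exp(-I(x)), x0, xf)[0]
q_closed = 1.0 / (np.exp(I(xf)) * (w0 + integral))

# Direct integration of the homogeneous Riccati equation for q(x)
riccati = lambda x, q: -coef(x) * q - alpha(x) * q ** 2
sol = solve_ivp(riccati, (x0, xf), [q0], rtol=1e-10, atol=1e-12)
assert np.isclose(sol.y[0, -1], q_closed, atol=1e-6)
```

This is exactly the generation mechanism of Theorem 2: choosing $\alpha(x)$ and $q(x_0)$ freely produces a coefficient function $q(x)$ in quadrature form for which the corresponding Equation (1) is solvable.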
Proof of Theorem 3.
For the Bernoulli equation of Equation (76), the transformation $q = z^{1/(1-\gamma)} = z^{-2}$ for $\gamma = \tfrac{3}{2}$ converts it to the linear equation
$$\dot{z}(x) - \left(p(x) + \frac{\dot{\alpha}(x)}{\alpha(x)}\right) z = \alpha(x) + \frac{1}{\alpha(x)}. \tag{A12}$$
Application of the integrating factor $e^{-\int_{x_0}^{x}\left[p(x) + \dot{\alpha}(x)/\alpha(x)\right]dx} = e^{-P(x,x_0)}\,\dfrac{\alpha(x_0)}{\alpha(x)}$ for strictly positive $\alpha(x)$ and $\alpha(x_0)$ provides the solution
$$z(x) = e^{P(x,x_0)}\,\frac{\alpha(x)}{\alpha(x_0)}\,z(x_0) + e^{P(x,x_0)}\,\alpha(x)\int_{x_0}^{x}\frac{\alpha^{2}(x)+1}{\alpha^{2}(x)}\,e^{-P(x,x_0)}\,dx. \tag{A13}$$
The substitution of Equation (A13) into the $q$–$z$ relationship leads to the result of Equation (77). For Equation (78), from Equations (40) and (42),
$$A_{11}(x,x_0) = \int_{x_0}^{x}\sqrt{\alpha^{2}(x)\,q(x)}\,dx. \tag{A14}$$
The last integral, inserted into the definition of $\Phi_{11}^{II}(x,x_0)$ shown in Equation (78), provides the result there. Similarly, Equation (79) proceeds from the defining relation for $\Phi_{12}^{II}(x,x_0)$ repeated in Equation (79) by using Equation (78). □
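A corresponding spot-check for Theorem 3 (illustrative, with the Bernoulli normalization inferred from Equation (A12) under $q = z^{-2}$, since Equation (76) is not reproduced here): differentiating $q = z^{-2}$ against (A12) gives $\dot{q} = -2\left(p + \dot{\alpha}/\alpha\right)q - 2\left(\alpha + 1/\alpha\right)q^{3/2}$, whose direct integration should match the closed-form $q(x)$ built from Equation (A13):

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Illustrative coefficients (not from the paper); alpha kept strictly positive
p = lambda x: 0.3
alpha = lambda x: 1.5 + 0.5 * np.cos(x)
dalpha = lambda x: -0.5 * np.sin(x)
x0, xf, q0 = 0.0, 1.0, 1.0

# Linear equation (A12): z' - (p + alpha'/alpha) z = alpha + 1/alpha
f = lambda x: p(x) + dalpha(x) / alpha(x)
g = lambda x: alpha(x) + 1.0 / alpha(x)
F = lambda x: quad(f, x0, x)[0]     # F(x) = P(x,x0) + ln(alpha(x)/alpha(x0))

z0 = q0 ** -0.5                     # q = z**-2, so z(x0) = q(x0)**(-1/2)
z_closed = np.exp(F(xf)) * (
    z0 + quad(lambda x: g(x) * np.exp(-F(x)), x0, xf)[0])
q_closed = z_closed ** -2

# Bernoulli form implied by (A12) under q = z**-2 (gamma = 3/2)
bern = lambda x, q: -2 * f(x) * q - 2 * g(x) * q ** 1.5
sol = solve_ivp(bern, (x0, xf), [q0], rtol=1e-10, atol=1e-12)
assert np.isclose(sol.y[0, -1], q_closed, atol=1e-6)
```

Note that $e^{F(x)} = e^{P(x,x_0)}\,\alpha(x)/\alpha(x_0)$, so the closed-form $z(x)$ computed here is term-by-term the right-hand side of Equation (A13).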

References

  1. Pascone, R. On a General Class of Second-Order, Linear, Ordinary Differential Equations Solvable as a System of First-Order Equations. Differ. Equ. Appl. 2018, 131–146. [Google Scholar] [CrossRef]
  2. Kossovskiy, I.; Zaitsev, D. Normal form for second-order differential equations. J. Dyn. Control Syst. 2018, 24, 541–562. [Google Scholar] [CrossRef]
  3. Ghorai, S. Lecture Notes. Available online: http://home.iitk.ac.in/~sghorai/Teaching/MTH203/TEACHING/MTH203/ode.html (accessed on 6 June 2018).
  4. Bougoffa, L. On the exact solutions for initial value problems of second order differential equations. Appl. Math. Lett. 2009, 22, 1248–1251. [Google Scholar] [CrossRef]
  5. Badani, D. On closed-form solutions to a class of ordinary differential equations. Int. J. Adv. Math. Sci. 2014, 2, 57–70. [Google Scholar] [CrossRef]
  6. Bohner, M.; Li, T.X. Kamenev-type criteria for nonlinear damped dynamic equations. Sci. China Math. 2015, 58, 1445–1452. [Google Scholar] [CrossRef]
  7. Adomian, G. Nonlinear Stochastic Systems Theory and Application to Physics; Springer Netherlands: Dordrecht, The Netherlands, 1989; ISBN 9781402003387. [Google Scholar]
  8. Duan, J.S.; Rach, R.; Baleanu, D.; Wazwaz, A.M. A review of the Adomian decomposition method and its applications to fractional differential equations. Commun. Fract. Calc. 2012, 3, 73–99. [Google Scholar]
  9. Wazwaz, A.-M. A reliable modification of Adomian decomposition method. Appl. Math. Comput. 1999, 102, 77–86. [Google Scholar] [CrossRef]
  10. Hosseini, M.M.; Nasabzadeh, H. Modified Adomian decomposition method for specific second order ordinary differential equations. Appl. Math. Comput. 2007, 186, 117–123. [Google Scholar] [CrossRef]
  11. Schwarz, R.; Friedland, B. Linear Systems, 1st ed.; McGraw-Hill: New York, NY, USA, 1965; ISBN 9780070557789. [Google Scholar]
  12. Coddington, E.; Levinson, N. Theory of Ordinary Differential Equations; McGraw-Hill: New York, NY, USA, 1955; ISBN 9780898747553. [Google Scholar]
  13. Taylor, M.E. Introduction to Differential Equations; Pure and Applied Undergraduate Texts; American Mathematical Society: Providence, RI, USA, 2001. [Google Scholar]
  14. Osgood, B. Old and New on the Schwarzian Derivative. In Quasiconformal Mappings and Analysis; Springer: New York, NY, USA, 1998; pp. 275–308. [Google Scholar]
  15. Boyce, W.E.; DiPrima, R.C. Elementary Differential Equations and Boundary Value Problems, 8th ed.; Wiley: New York, NY, USA, 2005; ISBN 9780471433385. [Google Scholar]
  16. Leighton, W. Ordinary Differential Equations, 3rd ed.; Wadsworth Publishing Company: Belmont, CA, USA, 1970; ISBN 9780534335762. [Google Scholar]
  17. Davis, H. Introduction to Nonlinear Differential Equations and Integral Equations; Dover Publications, Inc.: New York, NY, USA, 1962; ISBN 9780486609713. [Google Scholar]
  18. Schiff, L.I. Quantum Mechanics; International Series in Pure and Applied Physics, 3rd ed.; McGraw-Hill: New York, NY, USA, 1968; ISBN 9780070856431. [Google Scholar]
  19. Diament, P. Wave Transmission and Fiber Optics; Macmillan: New York, NY, USA, 1990; ISBN 9780023287619. [Google Scholar]
  20. Mathews, J.; Walker, R.L. Mathematical Methods of Physics, 2nd ed.; W. A. Benjamin: New York, NY, USA, 1970; ISBN 9780805370027. [Google Scholar]
  21. Hartog, A.H.; Adams, M.J. On the accuracy of the WKB approximation in optical dielectric waveguides. Opt. Quantum Electron. 1977, 9, 223–232. [Google Scholar] [CrossRef]
  22. Li, T.X.; Pintus, N.; Viglialoro, G. Properties of solutions to porous medium problems with different sources and boundary conditions. Z. Angew. Math. Phys. 2019, 70, 1–18. [Google Scholar] [CrossRef]
  23. Cherruault, Y.; Adomian, G. Decomposition methods: A new proof of convergence. Math. Comput. Model. 1993, 18, 103–106. [Google Scholar] [CrossRef]
  24. Abbaoui, K.; Cherruault, Y. Convergence of Adomian’s method applied to differential equations. Comput. Math. Appl. 1994, 28, 103–109. [Google Scholar] [CrossRef]
  25. Hosseini, M.M.; Nasabzadeh, H. On the convergence of Adomian decomposition method. Appl. Math. Comput. 2006, 182, 536–543. [Google Scholar] [CrossRef]
  26. Abdelrazec, A.; Pelinovsky, D. Convergence of the Adomian decomposition method for initial-value problems. Numer. Methods Partial Differ. Equ. 2011, 27, 749–766. [Google Scholar] [CrossRef]
  27. Liao, S.J. Beyond Perturbation Theory: Introduction to the Homotopy Analysis Method; Chapman&Hall/CRC Press: Boca Raton, FL, USA, 2003; ISBN 9781584884071. [Google Scholar]
  28. He, J.H. Homotopy Perturbation Technique. Comput. Methods Appl. Mech. Eng. 1999, 178, 257–262. [Google Scholar] [CrossRef]
  29. Momani, S.; Odibat, Z. Homotopy perturbation method for nonlinear partial differential equations of fractional order. Phys. Lett. A 2007, 365, 345–350. [Google Scholar] [CrossRef]
Figure 1. System One representation of linear, second-order, ordinary differential equations, Equation (1) or Equation (6). $y^{I}(x)$ indicates a System One output.
Figure 2. System Two modifications from the most general system include $a_{21}(x) = 0$, $a_{12}(x) = 1$, and $b_1 = 0$, $b_2 = 1$. $y^{II}(x)$ indicates a System Two output.
Figure 3. Exact–approximate solution comparison. Exact homogeneous solution to Equation (103) compared to solutions with six-term and nine-term expansions for $V(x, 0)$. Initial values are $y_0 = 1$ and $\dot{y}_0 = 1$.