Article

Variational Iteration and Linearized Liapunov Methods for Seeking the Analytic Solutions of Nonlinear Boundary Value Problems

1 Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2 School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(3), 354; https://doi.org/10.3390/math13030354
Submission received: 26 December 2024 / Revised: 14 January 2025 / Accepted: 20 January 2025 / Published: 22 January 2025
(This article belongs to the Section E: Applied Mathematics)

Abstract: The boundary shape function method (BSFM) and the variational iteration method (VIM) are merged to seek the analytic solutions of nonlinear boundary value problems. The BSFM transforms the boundary value problem into an initial value problem (IVP) for a new variable. A modified variational iteration method (MVIM) is then created by applying the VIM to the resultant IVP, which achieves a good approximate solution that automatically satisfies the prescribed mixed-boundary conditions. Using the Picard iteration method, the existence of a solution is proven under the assumption of the Lipschitz condition. The MVIM is equivalent to the Picard iteration method after a back substitution. Either by solving the nonlinear equations or by minimizing the error of the solution or of the governing equation, we can determine the unknown values of the parameters in the MVIM. A nonlocal BSFM is developed, which then uses the MVIM to find the analytic solution of a nonlocal nonlinear boundary value problem. In the second part of this paper, a new splitting-linearizing method is developed to expand the analytic solution in powers of a dummy parameter. After adopting the Liapunov method, linearized differential equations are solved sequentially to derive an analytic solution. Accurate analytical solutions are attainable through a few computations, and some examples involving two boundary layer problems confirm the efficiency of the proposed methods.

1. Introduction

Many computational methods are available to solve boundary value problems (BVPs) [1,2,3,4,5]. The existence of a solution to a BVP requires a priori bounds on the derivatives of the potential solution. Lower and upper solutions provide a priori bounds on the solutions under different Nagumo growth conditions [6,7,8,9,10,11].
The Picard iteration method is a conventional functional iteration method that is simply used to seek the analytical solution of nonlinear ordinary differential equations (ODEs). In order to improve the slow convergence of the Picard iteration method and enhance its accuracy, the variational iteration method (VIM) was proposed and modified [12,13,14,15,16]. The VIM as a modification of the Picard iteration method is a powerful method for analytically solving nonlinear scientific and engineering problems [17,18].
As stated in [19], Liapunov developed a dummy parameter technique to investigate the conditions of stability of the Hill equation [20]:
$$\ddot{y}(t) + p(t)\,y(t) = 0, \quad y(0) = 1, \quad \dot{y}(0) = 0, \qquad (1)$$
where $p(t+T) = p(t)$ for some $T > 0$ is a periodic function. Liapunov recast Equation (1) as:
$$\ddot{y}(t) = \mu\, p(t)\, y(t), \qquad (2)$$
where $\mu \in \mathbb{R}$ is a dummy parameter. When $\mu = -1$, Equation (2) recovers Equation (1). The analytic solution can be determined as the sum of a convergent power series in the parameter $\mu$:
$$y(t) = \sum_{k=0}^{\infty} \mu^k \varphi_k(t). \qquad (3)$$
Substituting Equation (3) for $y(t)$ into Equation (2) and equating equal powers of $\mu$ yields the following:
$$\ddot{\varphi}_0(t) = 0, \quad \ddot{\varphi}_k(t) = p(t)\,\varphi_{k-1}(t), \quad k = 1, 2, \ldots, \qquad (4)$$
which is a recurrence formula to sequentially determine $\varphi_k(t)$ from the previous-step solution $\varphi_{k-1}(t)$. Starting from $\varphi_0(t) = 1$ and subject to $\varphi_k(0) = \dot{\varphi}_k(0) = 0$, Liapunov proved that
$$|\varphi_k(t)| \le \frac{M^k t^{2k}}{(2k)!}, \quad k = 1, 2, \ldots, \qquad (5)$$
where $M$ is an upper bound of $p(t)$, and the convergent solution of Equation (1) was obtained:
$$y(t) = \sum_{k=0}^{\infty} (-1)^k \varphi_k(t). \qquad (6)$$
We will call the above technique the Liapunov method.
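As a concrete illustration of the recurrence (4) and the bound (5), the following sketch (our own, not from the paper) takes $p(t) = 1$, so $M = 1$; the recurrence then gives $\varphi_k(t) = t^{2k}/(2k)!$, which saturates Liapunov's bound, and the series (6) becomes the Taylor series of $\cos t$, the exact solution of Equation (1):

```python
from fractions import Fraction
from math import cos

# Liapunov recurrence for the Hill equation with p(t) = 1 (so M = 1):
#   phi_0 = 1,  phi_k'' = p * phi_{k-1},  phi_k(0) = phi_k'(0) = 0.
# Polynomials are stored as coefficient lists: poly[i] is the coefficient of t^i.

def integrate_twice(poly):
    """Integrate a polynomial twice, with zero initial conditions."""
    once = [Fraction(0)] + [c / (i + 1) for i, c in enumerate(poly)]
    return [Fraction(0)] + [c / (i + 1) for i, c in enumerate(once)]

def liapunov_terms(n):
    phis = [[Fraction(1)]]                 # phi_0(t) = 1
    for _ in range(n):
        phis.append(integrate_twice(phis[-1]))   # p(t) = 1, so phi_k'' = phi_{k-1}
    return phis

def eval_poly(poly, t):
    return float(sum(c * t**i for i, c in enumerate(poly)))

phis = liapunov_terms(6)
# For p = 1 one finds phi_k(t) = t^{2k}/(2k)!, and the alternating sum (6)
# is the truncated cosine series.
t = 1.3
y_approx = sum((-1)**k * eval_poly(p, t) for k, p in enumerate(phis))
print(abs(y_approx - cos(t)))   # tiny truncation error
```

With only seven terms the truncation error at $t = 1.3$ is already below $10^{-8}$, illustrating the rapid convergence that the bound (5) guarantees.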
The homotopy perturbation method [21] is constructed via homotopy and is rather general and valid for nonlinear differential equations. Above all, the homotopy is constructed by introducing an embedding parameter. Even though the Liapunov method did not mention the concept of homotopy or apply it to nonlinear ODEs, its use of the perturbation parameter $\mu$ in the analysis of the series solution of $y(t)$ is indeed a precursor of the homotopy perturbation method. Currently, there exist many homotopy perturbation techniques for solving two-point boundary value problems [22,23,24].
Previously, the variational iteration method [12] was applied to solve the initial value problem (IVP). Khuri and Sayfy [25] extended the variational iteration method to Dirichlet-type boundary value problems by using two Lagrange multipliers in the correction functional. The specific advantage of the present approach, obtained by merging the boundary shape function method with the variational iteration method (VIM) and named the modified variational iteration method (MVIM), is that the specified boundary conditions are automatically and exactly satisfied. Where the method of Khuri and Sayfy is not applicable, the present approach can easily treat mixed and nonlocal boundary value problems, for which there is no simple way to identify the two Lagrange multipliers in the correction functional.
An extra advantage is that there exist free parameters for the new variable, which can be adopted to minimize the absolute error of the analytic solution, such that we can obtain a more accurate analytic solution. When the boundary shape function method (BSFM) is used in the nonlinear BVP [26], its numerical solution can be obtained. However, the analytic form of the solution cannot be obtained by the BSFM individually.
On the other hand, the VIM is designed for the IVP, not for the BVP. How to effectively apply the VIM to a large class of nonlinear BVPs endowed with complicated boundary conditions to derive the analytic solution is still a great challenge. This motivates us to develop a new version of the VIM as an effective method to seek the analytic solution of a nonlinear BVP. When the BSFM is combined with the VIM, we can readily derive the analytic solution of a nonlinear BVP.
We are going to cover a nonlocal boundary value problem based on the concept of the nonlocal boundary shape function method. In the field of variational iteration methods, work treating nonlocal boundary value problems using the VIM is rare.
The outline of this paper is given as follows. For second-order nonlinear boundary value problems (BVPs) with the Dirichlet boundary conditions, we clarify the existence of a solution in Section 2 by considering the transformation of the BVP to an initial value problem (IVP). We take an example in Section 3 to explore the slow convergence of the Picard iteration method. In Section 4, we derive the boundary shape function method for the nonlinear BVPs under mixed-boundary conditions. In Section 5, with the aid of the boundary shape function, we can transform the nonlinear BVP with the mixed-boundary and/or nonlocal boundary conditions to an initial value problem with unknown right-hand values appearing in the ODE for the new variable. Then, a modified VIM (MVIM) is developed, and the methods to compute the right-hand values are depicted. Some examples are tested in Section 6. In Section 7, we develop a linearized Liapunov method for seeking the analytic solutions. Some examples using the linearized Liapunov method are tested in Section 8. Finally, the conclusions are drawn in Section 9.

2. The Existence of Solutions

We consider a second-order nonlinear ordinary differential equation (ODE) endowed with the Dirichlet boundary conditions:
$$u''(x) + F(x, u(x), u'(x)) = 0, \quad x \in (0, 1), \qquad (7)$$
$$u(0) = c_1, \quad u(1) = c_2. \qquad (8)$$
Later, more complex boundary conditions will be considered in Section 4 and Section 5, nonlocal boundary conditions will be addressed in Section 5.4, and a multi-point BVP is solved in Example 7.

2.1. Existence Theorem

Theorem 1.
In $D := \{(x, y, z)\ |\ 0 \le x \le 1,\ -\infty < y < \infty,\ -\infty < z < \infty\}$, if $F(x, y, z)$ is continuous and satisfies
$$|F(x, y, z) - F(x, \hat{y}, \hat{z})| \le k\sqrt{(y - \hat{y})^2 + (z - \hat{z})^2}, \quad \forall (x, y, z), (x, \hat{y}, \hat{z}) \in D, \qquad (9)$$
then the nonlinear BVP (7) and (8) may have $m$ solutions when the following nonlinear equation:
$$R(\alpha) = \alpha \qquad (10)$$
has $m$ roots for $\alpha$, where $\alpha$ is a parameter denoting the right-hand value of a new variable $v(x)$, given by
$$u(x) = v(x) + (1 - x)c_1 + x(c_2 - \alpha) = v(x) + (c_2 - c_1 - \alpha)x + c_1, \qquad (11)$$
in which $\alpha = v(1)$. $R$ is a nonlinear function of $\alpha$ derived from Equation (7).
Proof. 
Suppose that $u(x)$ is transformed to $v(x)$ by Equation (11), where $v(0) = v'(0) = 0$ are given initial conditions for the new variable $v(x)$, and the right-hand value $v(1) = \alpha$ is to be determined. It is obvious that $u(x)$ automatically satisfies the boundary conditions $u(0) = c_1$ and $u(1) = c_2$ in Equation (8):
$$u(0) = v(0) + c_1 = c_1, \quad u(1) = v(1) + c_2 - c_1 - \alpha + c_1 = \alpha + c_2 - \alpha = c_2,$$
where $v(0) = 0$ and $v(1) = \alpha$ were used.
It follows from Equations (7) and (11) that
$$v''(x) = g(x, v(x), v'(x)) := -F\big(x,\ v(x) + (c_2 - c_1 - \alpha)x + c_1,\ v'(x) + c_2 - c_1 - \alpha\big), \qquad (12)$$
$$v(0) = 0, \quad v'(0) = 0, \qquad (13)$$
which is an IVP with the ODE continuously depending on an unknown value of the parameter α .
Let
$$v'(x) = w(x), \quad v(0) = 0, \qquad (14)$$
$$w'(x) = g(x, v(x), w(x)), \quad w(0) = 0 \qquad (15)$$
be a two-dimensional IVP system. We have
$$|g(x, v, w) - g(x, \hat{v}, \hat{w})| = |F(x, v, w) - F(x, \hat{v}, \hat{w})|; \qquad (16)$$
based on Equation (9), we can derive the Lipschitz condition:
$$\sqrt{(w - \hat{w})^2 + \big(g(x, v, w) - g(x, \hat{v}, \hat{w})\big)^2} \le \sqrt{(w - \hat{w})^2 + k^2\big[(v - \hat{v})^2 + (w - \hat{w})^2\big]} \le \sqrt{(k^2 + 1)\big[(v - \hat{v})^2 + (w - \hat{w})^2\big]}. \qquad (17)$$
According to [27,28], the existence of the solutions v ( x ) and w ( x ) is guaranteed. Now, applying the Picard iteration method to Equations (14) and (15) generates the following:
$$v_{n+1}(x) = \int_0^x w_n(s)\, ds, \qquad (18)$$
$$w_{n+1}(x) = \int_0^x g(s, v_n(s), w_n(s))\, ds. \qquad (19)$$
Considering $v_0(0) = 0$ and $w_0(0) = v_0'(0) = 0$ and starting from $v_0(x) = x^2$ and $w_0(x) = 2x$, the iteration converges to the true solution $v(x)$ and $w(x)$, which depends on the parameter $\alpha$ continuously through the function $g$ in Equation (12). Enforcing $v(1) = \alpha$, we obtain the nonlinear Equation (10) for $\alpha$. Solving this equation for $\alpha$, it is possible that there exist no solutions, one solution, two solutions, etc. □
To prove the existence of a solution of the BVP, it is of utmost importance to construct the nonlinear function R ( α ) , which might be an implicit function of α , by using the numerical integration method. However, by using the analytic method, the function R ( α ) is explicit. Then, we can solve the nonlinear equation R ( α ) = α to determine α and the number of analytic solutions. In Appendix A, we demonstrate this process by giving an example.
Zhou and Shen [29] employed the Nagumo condition and the lower and upper solutions technique to prove the existence of a unique solution to Equations (7) and (8). Generally, the existence of a solution was discussed using different Nagumo growth conditions, assuming that
$$|F(x, y_0, y_1)| \le h(|y_1|), \quad \forall (x, y_0, y_1) \in E,$$
on $E \subseteq [0, 1] \times \mathbb{R}^2$, and
$$\int_0^{\infty} \frac{s}{h(s)}\, ds = \infty, \qquad (20)$$
where $h(s) > 0$. The key point is finding a positive function $h(|y_1|)$ as the bound of $F(x, y_0, y_1)$ and proving Equation (20). Because Equations (7) and (8) are transformed to an IVP in Equations (12) and (13), according to the theory of ODEs [27,28], the Lipschitz condition guarantees the unique solution of $v(x)$. The Lipschitz condition is weaker than the Nagumo condition; hence, most practical boundary value problems can be treated by the presented technique to transform the BVP to an IVP.

2.2. Application to an Example

The following example demonstrates an application of Theorem 1:
$$u''(x) - 3u'(x) + 2u(x) = 0, \quad 0 < x < 1, \qquad (21)$$
$$u(0) = 1, \quad u(1) = 0, \qquad (22)$$
whose exact solution is
$$u(x) = \frac{e}{e-1}e^x - \frac{1}{e-1}e^{2x}. \qquad (23)$$
Upon letting
$$u(x) = v(x) - (\alpha + 1)x + 1, \qquad (24)$$
$u(0) = 1$ and $u(1) = 0$ are satisfied automatically, where $v(0) = 0$ and $v(1) = \alpha$. We take $\alpha = v(1)$ to simplify the notation. We arrive at an IVP for the new variable $v(x)$:
$$v''(x) - 3v'(x) + 2v(x) - 2(\alpha + 1)x + 2 + 3(\alpha + 1) = 0, \qquad (25)$$
$$v(0) = 0, \quad v'(0) = 0. \qquad (26)$$
The solution is
$$v(x) = (3 + \alpha)e^x - (\alpha + 2)e^{2x} + (\alpha + 1)x - 1. \qquad (27)$$
Imposing $v(1) = \alpha$ leads to
$$(3 + \alpha)e - (\alpha + 2)e^2 + \alpha + 1 - 1 = \alpha, \qquad (28)$$
$$\alpha = \frac{3 - 2e}{e - 1}. \qquad (29)$$
Inserting it into Equation (27) and then into Equation (24), we obtain the same exact solution of $u(x)$:
$$u(x) = \left(3 + \frac{3 - 2e}{e - 1}\right)e^x - \left(\frac{3 - 2e}{e - 1} + 2\right)e^{2x} = \frac{e}{e-1}e^x - \frac{1}{e-1}e^{2x}. \qquad (30)$$
Equation (28) is a special case of Equation (10) with $R(\alpha) = (3 + \alpha)e - (\alpha + 2)e^2 + \alpha$. In Appendix A, another comment on Equation (10) is given, and the MVIM solution of Equation (25) is provided.
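The algebra above is easy to verify numerically. The following check (a sketch we add here) confirms that $v(x)$ in Equation (27), with $\alpha$ from Equation (29), satisfies the IVP (25) and (26), the condition $v(1) = \alpha$, and reproduces the exact $u(x)$:

```python
from math import e, exp

# alpha = (3 - 2e)/(e - 1); v, v', v'' taken from the closed form (27).
alpha = (3 - 2 * e) / (e - 1)

def v(x):   return (3 + alpha) * exp(x) - (alpha + 2) * exp(2 * x) + (alpha + 1) * x - 1
def dv(x):  return (3 + alpha) * exp(x) - 2 * (alpha + 2) * exp(2 * x) + (alpha + 1)
def d2v(x): return (3 + alpha) * exp(x) - 4 * (alpha + 2) * exp(2 * x)

def u(x):       return v(x) - (alpha + 1) * x + 1          # back substitution (24)
def u_exact(x): return e / (e - 1) * exp(x) - exp(2 * x) / (e - 1)

xs = [i / 10 for i in range(11)]
# residual of the ODE (25)
residual = max(abs(d2v(x) - 3 * dv(x) + 2 * v(x) - 2 * (alpha + 1) * x + 2 + 3 * (alpha + 1))
               for x in xs)
gap = max(abs(u(x) - u_exact(x)) for x in xs)
print(residual, abs(v(1) - alpha), gap)   # all ~ 0 up to rounding
```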

3. Slow Convergence of the Picard Iteration Method

The Picard iteration method can be applied to the nonlinear ODE system [27]:
$$\dot{\mathbf{x}} = \mathbf{f}(t, \mathbf{x}), \quad \mathbf{x} \in \mathbb{R}^n,\ \mathbf{f} \in \mathbb{R}^n, \qquad (31)$$
where $\mathbf{x}(0) = \mathbf{x}_0 \in \mathbb{R}^n$. Note that any solution to Equation (31) satisfies the following integral equation:
$$\mathbf{x}(t) = \mathbf{x}_0 + \int_0^t \mathbf{f}(\tau, \mathbf{x}(\tau))\, d\tau. \qquad (32)$$
It is useful to prove the existence of a solution to Equation (31) via the successive approximations deduced from Equation (32):
$$\mathbf{x}_{k+1}(t) = \mathbf{x}_0 + \int_0^t \mathbf{f}(\tau, \mathbf{x}_k(\tau))\, d\tau, \quad k = 0, 1, \ldots. \qquad (33)$$
This iteration generates a sequence $\mathbf{x}_k$, $k = 1, 2, \ldots$, of successive approximations to the real solution $\mathbf{x}(t)$.
For the example in Equation (25), written as
$$v'(x) = w(x), \quad w'(x) = 3w(x) - 2v(x) + 2\gamma x - 2 - 3\gamma, \qquad (34)$$
where $\gamma = \alpha + 1$, starting from $v_0(x) = x^2$ and $w_0(x) = 2x$ and carrying out five iterations of the Picard method in Equations (18) and (19), we can derive
$$v(x) = -\frac{3\gamma x^2}{2} - \frac{7\gamma x^3}{6} - \frac{5\gamma x^4}{8} - \frac{31\gamma x^5}{120} + \frac{\gamma x^6}{10} - \frac{\gamma x^7}{180} - x^2 - x^3 - \frac{7x^4}{12} - \frac{x^5}{4} + \frac{59x^6}{360} - \frac{x^7}{35} + \frac{x^8}{720}; \qquad (35)$$
imposing $v(1) = \alpha$, $\gamma = \alpha + 1$ is obtained as follows:
$$\gamma = \alpha + 1 = \frac{-1 - \frac{7}{12} - \frac{1}{4} + \frac{59}{360} - \frac{1}{35} + \frac{1}{720}}{1 + \frac{3}{2} + \frac{7}{6} + \frac{5}{8} + \frac{31}{120} - \frac{1}{10} + \frac{1}{180}}.$$
Hence, an analytic solution of Equations (21) and (22) is given by
$$u(x) = v(x) - (\alpha + 1)x + 1, \qquad (36)$$
with $v(x)$ given by Equation (35).
Notice that the boundary conditions u ( 0 ) = 1 and u ( 1 ) = 0 are exactly satisfied by u ( x ) in Equation (36).
In Figure 1, we compare Equations (36) and (23); the maximum error (ME) is 5.72 × 10⁻². To enhance the accuracy, more iterations generating more terms are needed to compensate for the slow convergence of the Picard iteration method.
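The slow convergence can also be observed by carrying out the Picard iteration (18) and (19) programmatically. The sketch below is our own implementation (its iteration count convention may differ slightly from the one used to derive Equation (35)); it uses exact rational polynomial arithmetic, exploits the fact that $v(1)$ depends affinely on $\gamma$ to impose $v(1) = \gamma - 1$, and shows the maximum error shrinking as the number of iterations grows:

```python
from fractions import Fraction as F
from math import e, exp

# Picard iteration for v' = w, w' = 3w - 2v + 2*g*x - 2 - 3*g, v(0) = w(0) = 0,
# starting from v0 = x^2, w0 = 2x, with g = gamma = alpha + 1.
def integ(p):                      # antiderivative with zero constant
    return [F(0)] + [c / (i + 1) for i, c in enumerate(p)]

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else F(0)) + (q[i] if i < len(q) else F(0)) for i in range(n)]

def picard(g, n):
    v, w = [F(0), F(0), F(1)], [F(0), F(2)]        # x^2 and 2x
    for _ in range(n):
        rhs = padd([F(-2) - 3 * g, 2 * g], padd([3 * c for c in w], [-2 * c for c in v]))
        v, w = integ(w), integ(rhs)
    return v

def ev(p, x):
    return float(sum(c * x**i for i, c in enumerate(p)))

def solve_gamma(n):                # v(1) is affine in g: v(1) = A + B*g = g - 1
    A, B1 = ev(picard(F(0), n), 1.0), ev(picard(F(1), n), 1.0)
    return (A + 1) / (1 - (B1 - A))

def max_err(n):
    g = F(solve_gamma(n)).limit_denominator(10**12)
    v = picard(g, n)
    u_ex = lambda x: e / (e - 1) * exp(x) - exp(2 * x) / (e - 1)
    return max(abs(ev(v, x) - float(g) * x + 1 - u_ex(x)) for x in [i / 100 for i in range(101)])

print(max_err(5), max_err(20))     # the error shrinks slowly with more iterations
```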
In Appendix A, we will solve Equations (21) and (22) again by using a modified variational iteration method (MVIM).

4. Boundary Shape Function Method

Next, we impose the mixed-boundary conditions on Equation (7) as follows:
$$a_1 u(0) + b_1 u'(0) = c_1, \quad a_2 u(1) + b_2 u'(1) = c_2. \qquad (37)$$
From now on, we propose a newly modified VIM (MVIM) for seeking an analytic solution of Equations (7) and (37) using the following results.
Theorem 2.
There exist two shape functions $q_1(x), q_2(x) \in C^1[0,1]$ satisfying the following:
$$a_1 q_1(0) + b_1 q_1'(0) = 1, \quad a_2 q_1(1) + b_2 q_1'(1) = 0, \qquad (38)$$
$$a_1 q_2(0) + b_1 q_2'(0) = 0, \quad a_2 q_2(1) + b_2 q_2'(1) = 1. \qquad (39)$$
Proof. 
Refer to [30] for the proof of the existence of q 1 ( x ) and q 2 ( x ) . □
Theorem 3.
Given $q_1(x)$ and $q_2(x)$ via Equations (38) and (39), for any $y(x) \in C^1[0,1]$, the boundary shape function
$$u(x) = y(x) + q_1(x)\big[c_1 - a_1 y(0) - b_1 y'(0)\big] + q_2(x)\big[c_2 - a_2 y(1) - b_2 y'(1)\big] \qquad (40)$$
automatically satisfies the boundary conditions in Equation (37).
Proof. 
Refer to [30] for the proof of this theorem. □
The exact solution $u(x)$ must satisfy the mixed-boundary conditions in Equation (37); in this sense, $u(x)$ is one member of the family of boundary shape functions.
Based on Theorem 3, we can transform u ( x ) to a new variable y ( x ) by
$$u(x) = y(x) - G(x), \qquad (41)$$
where
$$G(x) := \big[a_1 y(0) + b_1 y'(0) - c_1\big] q_1(x) + q_2(x)\big[a_2 y(1) + b_2 y'(1) - c_2\big]. \qquad (42)$$
Inserting Equation (41) into Equation (7) generates a new ODE:
$$y''(x) + H(x, y(x), y'(x)) = 0, \qquad (43)$$
where
$$H(x, y(x), y'(x)) = F\big(x,\ y(x) - G(x),\ y'(x) - G'(x)\big) - G''(x). \qquad (44)$$
The initial values are given as follows:
$$y(0) = A, \quad y'(0) = B. \qquad (45)$$
In general, we take $A = B = 0$. Equations (43) and (45) constitute an initial value problem (IVP) with $\alpha := y(1)$ and $\beta := y'(1)$ as unknown parameters in $G$ given by Equation (42). We introduce $\alpha$ and $\beta$ to simplify the notation.
To demonstrate the transformation process step-by-step, we consider a benchmark BVP given as follows:
$$u''(x) - \frac{3}{2}u^2(x) = 0, \quad u(0) = 4, \quad u(1) = 1.$$
The first step determines the shape functions $q_1(x)$ and $q_2(x)$ by satisfying the following:
$$q_1(0) = 1, \quad q_1(1) = 0, \quad q_2(0) = 0, \quad q_2(1) = 1.$$
It is easily deduced that $q_1(x) = 1 - x$ and $q_2(x) = x$. The second step transforms $u(x)$ to $y(x)$ as follows:
$$u(x) = y(x) - q_1(x)\big[y(0) - 4\big] - q_2(x)\big[y(1) - 1\big] = y(x) - (1 - x)\big[y(0) - 4\big] - x\big[y(1) - 1\big].$$
The boundary values $u(0) = 4$ and $u(1) = 1$ are satisfied automatically. The third step derives a new ODE for $y(x)$:
$$y''(x) - \frac{3}{2}\Big(y(x) - (1 - x)\big[y(0) - 4\big] - x\big[y(1) - 1\big]\Big)^2 = 0, \qquad (46)$$
which, together with the given initial values in Equation (45), constitutes an initial value problem (IVP) for $y(x)$; $\alpha := y(1)$ is a parameter in the ODE (46). When the VIM is applied to the IVP (46) and (45), $\alpha$ remains a parameter while the trial solution $y(x)$ is derived sequentially, starting from the initial function $y_0(x) = A + Bx$.
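The second step can be checked directly: whatever trial function $y(x)$ is chosen, the transformed $u(x)$ hits the boundary values. A small sketch (using the closed-form solution $u(x) = 4/(1+x)^2$, which is well known for this benchmark and is a fixed point of the transformation):

```python
from math import sin

# u(x) = y(x) - (1 - x)[y(0) - 4] - x[y(1) - 1] meets u(0) = 4, u(1) = 1 for ANY y.
def bsf(y):
    return lambda x: y(x) - (1 - x) * (y(0) - 4) - x * (y(1) - 1)

for trial in (lambda x: sin(3 * x), lambda x: x**5 - 2.0, lambda x: 0.0):
    u = bsf(trial)
    assert abs(u(0) - 4) < 1e-12 and abs(u(1) - 1) < 1e-12

# the exact solution u_e(x) = 4/(1+x)^2 satisfies the boundary values already,
# so the correction terms vanish and bsf leaves it unchanged
u_e = lambda x: 4.0 / (1 + x)**2
assert abs(bsf(u_e)(0.37) - u_e(0.37)) < 1e-12
print("boundary conditions satisfied for every trial function")
```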

5. A Modified Variational Iteration Method

Based on the variational iteration method, we develop a new methodology for finding the analytic solution of Equations (7) and (37) in this section.

5.1. Variational Iteration Method

In order to accelerate the convergence of the Picard iteration method, we consider the variational iteration method (VIM) [12], which is more efficient. According to the VIM for the differential Equation (7), the correction functional is given as follows:
$$u_{n+1}(x) = u_n(x) + \int_0^x \lambda(x, \xi)\big[u_n''(\xi) + F(\xi, u_n(\xi), u_n'(\xi))\big]\, d\xi, \qquad (47)$$
where
$$\lambda(x, \xi) = \xi - x \qquad (48)$$
is a Lagrange multiplier.
However, we find that the iteration (47) cannot exactly fulfill Equation (37) unless some extra techniques are developed. We remedy this drawback below by transforming the BVP to an initial value problem (IVP) and then applying the VIM to the resultant IVP, yielding a modified variational iteration method (MVIM).

5.2. A Modified VIM

The following integral formula will be used later.
Lemma 1.
For any integrable function $f(\xi)$, we have the following:
$$\int_0^x (\xi - x) f(\xi)\, d\xi = -\int_0^x \int_0^\xi f(s)\, ds\, d\xi. \qquad (49)$$
Proof. 
Let
$$v(x) := \int_0^x f(\xi)\, d\xi.$$
Then, we have
$$\int_0^x (\xi - x) f(\xi)\, d\xi = \int_0^x \xi f(\xi)\, d\xi - x \int_0^x f(\xi)\, d\xi = \xi v(\xi)\Big|_{\xi=0}^{\xi=x} - \int_0^x v(\xi)\, d\xi - x \int_0^x f(\xi)\, d\xi = x v(x) - \int_0^x v(\xi)\, d\xi - x v(x) = -\int_0^x v(\xi)\, d\xi = -\int_0^x \int_0^\xi f(s)\, ds\, d\xi.$$
The proof is completed. □
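Lemma 1 is easy to confirm numerically; the sketch below checks it for $f(\xi) = \cos\xi$, for which both sides equal $\cos x - 1$:

```python
from math import cos

# Check: int_0^x (xi - x) f(xi) dxi = - int_0^x int_0^xi f(s) ds dxi, f = cos.
def trapz(vals, h):
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

x, n = 0.8, 4000
h = x / n
grid = [i * h for i in range(n + 1)]
f = [cos(t) for t in grid]

lhs = trapz([(t - x) * ft for t, ft in zip(grid, f)], h)

inner = [0.0]                      # cumulative trapezoid: int_0^xi f(s) ds
for i in range(1, n + 1):
    inner.append(inner[-1] + 0.5 * h * (f[i - 1] + f[i]))
rhs = -trapz(inner, h)

print(lhs, rhs)                    # both ~ cos(0.8) - 1 ~ -0.3033
```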
We consider the VIM in Equation (47) for y ( x ) governed by Equations (43) and (45) as an IVP:
$$y_{n+1}(x) = y_n(x) + \int_0^x (\xi - x)\big[y_n''(\xi) + H(\xi, y_n(\xi), y_n'(\xi))\big]\, d\xi. \qquad (50)$$
When y ( x ) is obtained by the MVIM, u ( x ) can be obtained by applying Equations (41) and (42), which automatically satisfies the mixed-boundary conditions in Equation (37).
Theorem 4.
For Equation (43), the MVIM starting from an initial guess y 0 ( x ) can be derived as follows:
$$y_{n+1}(x) = y_0(0) + x\, y_0'(0) + \int_0^x (\xi - x)\, H(\xi, y_n(\xi), y_n'(\xi))\, d\xi, \quad n = 0, 1, \ldots. \qquad (51)$$
Proof. 
Applying Lemma 1, we have the following integral relation:
$$\int_0^x (\xi - x)\, y_n''(\xi)\, d\xi = y_n(0) - y_n(x) + y_n'(0)\, x, \qquad (52)$$
such that Equation (50) can be written as follows:
$$y_{n+1}(x) = y_n(x) + y_n(0) - y_n(x) + y_n'(0)\, x + \int_0^x (\xi - x) H(\xi, y_n(\xi), y_n'(\xi))\, d\xi = y_n(0) + y_n'(0)\, x + \int_0^x (\xi - x) H(\xi, y_n(\xi), y_n'(\xi))\, d\xi.$$
Upon noting that $y_n(0) = y_0(0)$ and $y_n'(0) = y_0'(0)$, we prove Equation (51). □
Differentiating Equation (51) twice with respect to $x$ yields the following:
$$y_{n+1}''(x) + H(x, y_n(x), y_n'(x)) = 0. \qquad (53)$$
In the MVIM, $y_{n+1}(x)$ does not exactly satisfy the ODE in Equation (43), i.e., $y_{n+1}''(x) + H(x, y_{n+1}(x), y_{n+1}'(x)) = 0$, because $H(x, y_n(x), y_n'(x)) \ne H(x, y_{n+1}(x), y_{n+1}'(x))$; hence, Equation (53), and thus Equation (51), is an approximation of Equation (43).
Inserting Equation (48) into Equation (47) and adopting a derivation similar to that in Theorem 4, the VIM can be written as follows:
$$u_{n+1}(x) = u_0(0) + x\, u_0'(0) + \int_0^x (\xi - x)\, F(\xi, u_n(\xi), u_n'(\xi))\, d\xi, \quad n = 0, 1, \ldots. \qquad (54)$$
A strong form of the VIM, like Equation (53) for $y_{n+1}(x)$, is as follows:
$$u_{n+1}''(x) + F(x, u_n(x), u_n'(x)) = 0. \qquad (55)$$
Remark 1.
Upon comparing Equations (51) and (54), the current MVIM applied to Equation (43) is different from the VIM applied to Equation (7). In the MVIM of Equation (51), we have the freedom to choose $y_0(0)$ and $y_0'(0)$, inserting the resultant $y_n(x)$ into Equation (41) to obtain the solution $u(x)$, which fulfills Equation (37) automatically. In contrast, in the VIM of Equation (54), $u_0(0)$ and $u_0'(0)$ cannot be chosen freely; they must satisfy $a_1 u_0(0) + b_1 u_0'(0) = c_1$ in Equation (37). Moreover, the resultant $u_n(x)$ is not guaranteed to satisfy $a_2 u_n(1) + b_2 u_n'(1) = c_2$ in Equation (37). It may be necessary to solve some nonlinear algebraic equations to adjust the values of $u_0(0)$ and $u_0'(0)$ such that $a_2 u_n(1) + b_2 u_n'(1) - c_2 = 0$ is nearly satisfied. It is not an easy task to search for a higher-order analytic solution of the mixed-type BVP by applying the original VIM to Equations (7) and (37).
Remark 2.
Notice that the standard VIM [12] is applied to Equation (7) with the specified initial conditions $u(0) = C$ and $u'(0) = D$; hence, upon taking the initial guess function $u_0(x) = C + Dx$, we can find the subsequent analytic solutions by sequentially inserting $u_n(x)$, $n = 0, 1, \ldots$, into Equation (54). In contrast, the present MVIM works with Equation (51) for the new ODE in Equation (43), starting from any initial guess function $y_0(x) = A + Bx$. In doing so, the MVIM is easily tailored to automatically satisfy the boundary conditions given in Equation (37). When the standard VIM is applied to solve the nonlinear BVP, it encounters the difficulty that $u(0) = C$ and $u'(0) = D$ are unknown. To satisfy the boundary conditions given in Equation (37), in the VIM, some coupled nonlinear algebraic equations must be solved to determine the two unknowns $C$ and $D$. This process might be complicated and expensive. In this regard, the MVIM is better than the VIM.
Theorem 5.
For Equation (43), the MVIM is equivalent to the back substitution of the Picard iteration method.
Proof. 
Let $z(x) = y'(x)$; Equation (43) can be expressed as a two-dimensional ODE system:
$$y'(x) = z(x), \qquad (56)$$
$$z'(x) = -H(x, y(x), z(x)), \qquad (57)$$
where we suppose that the initial values are y ( 0 ) = 0 and z ( 0 ) = 0 .
Applying the Picard iteration method (33) to Equations (56) and (57) yields the following:
$$y_{n+1}(x) = \int_0^x z_n(s)\, ds, \qquad (58)$$
$$z_{n+1}(x) = -\int_0^x H(s, y_n(s), z_n(s))\, ds. \qquad (59)$$
A slight modification of the Picard iteration method via back substitution is that we can insert Equation (59) for $z_n$ into Equation (58), which results in the following:
$$y_{n+1}(x) = -\int_0^x \int_0^\xi H(s, y_n(s), z_n(s))\, ds\, d\xi. \qquad (60)$$
Using Lemma 1, this leads to the following:
$$y_{n+1}(x) = \int_0^x (\xi - x)\, H(\xi, y_n(\xi), z_n(\xi))\, d\xi. \qquad (61)$$
Replacing $z_n(\xi)$ with $y_n'(\xi)$, Equation (61) is just Equation (51) with $y_0(0) = y_0'(0) = 0$. □

5.3. Determination of the Right-Hand Values

In Equation (42), we let
$$\alpha := y(1), \quad \beta := y'(1) \qquad (62)$$
to simplify the notation. In view of Equations (43) and (51), the solution $y(x)$ depends on $\alpha$ and $\beta$; imposing the conditions in Equation (62) on the derived solution yields two nonlinear algebraic equations. Solving them, we can determine $\alpha$ and $\beta$, but this procedure is complicated when we seek a higher-order analytic solution.
To overcome the inefficiency in solving the nonlinear algebraic equations, we develop other efficient methods to determine α and β as follows. Because y ( x ) depends on α and β , by inserting
$$G(x) = \big[a_1 y(0) + b_1 y'(0) - c_1\big] q_1(x) + q_2(x)\big[a_2 \alpha + b_2 \beta - c_2\big]$$
into Equation (51), $y(1)$ and $y'(1)$ also depend on $\alpha$ and $\beta$. Then, $u(x)$ calculated from Equation (41) also depends on $\alpha$ and $\beta$. Next, we search for the optimal values of $\alpha$ and $\beta$ by minimizing the absolute error of the solution $u(x)$ compared to the exact one:
$$\min_{\alpha, \beta}\ \max_{x \in (0, 1)} \big|u(x) - u_e(x)\big|, \qquad (63)$$
where u e ( x ) denotes the exact solution. If u e ( x ) is not available, we can minimize the absolute error of the governing equation.
For the minimization problem in Equation (63), the 2D golden section search algorithm [31] is used to find the optimal values of $\alpha$ and $\beta$. Alternatively, we insert the derived analytic solution into the governing equation and define the following minimization problem:
$$\min_{\alpha, \beta}\ \max_{x \in (0, 1)} \big|u''(x) + F(x, u(x), u'(x))\big|, \qquad (64)$$
which measures the error in fitting the governing equation; the same 2D golden section search algorithm [31] is used to find the optimal values of $\alpha$ and $\beta$.
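Reference [31] may organize the reduction differently, but a simple way to sketch a 2D golden section search is to nest two one-dimensional searches, as below (the surrogate error surface is a hypothetical stand-in, used only to exercise the routine):

```python
from math import sqrt

G = (sqrt(5) - 1) / 2        # golden-ratio conjugate, ~0.618

def golden_1d(f, a, b, tol=1e-7):
    """Classical golden-section search for a unimodal f on [a, b]."""
    c, d = b - G * (b - a), a + G * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                      # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - G * (b - a); fc = f(c)
        else:                            # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + G * (b - a); fd = f(d)
    return 0.5 * (a + b)

def golden_2d(f, box, tol=1e-6):
    """Nested search: for each alpha, minimize over beta, then over alpha."""
    (a1, b1), (a2, b2) = box
    inner = lambda x: golden_1d(lambda y: f(x, y), a2, b2, tol)
    x_opt = golden_1d(lambda x: f(x, inner(x)), a1, b1, tol)
    return x_opt, inner(x_opt)

# demo on a smooth surrogate error surface with minimum at (0.4, -1.1)
err = lambda a, b: (a - 0.4)**2 + (b + 1.1)**2 + 0.5
a_opt, b_opt = golden_2d(err, ((-2, 2), (-2, 2)))
print(a_opt, b_opt)
```

In practice `f` would evaluate the maximum residual of Equation (63) or (64) on a grid of $x$ values for the candidate $(\alpha, \beta)$.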

5.4. Nonlocal Boundary Conditions

Next, more complicated nonlocal boundary conditions are considered for Equation (7):
$$a_1 u(0) + b_1 u'(0) - \int_0^1 w_1(x) u(x)\, dx = p_1, \qquad (65)$$
$$a_2 u(1) + b_2 u'(1) - \int_0^1 w_2(x) u(x)\, dx = p_2, \qquad (66)$$
where $w_1(x)$ and $w_2(x)$ are weight functions.
Upon defining the linear operators
$$L_1\{u(x)\} := a_1 u(0) + b_1 u'(0) - \int_0^1 w_1(x) u(x)\, dx, \quad L_2\{u(x)\} := a_2 u(1) + b_2 u'(1) - \int_0^1 w_2(x) u(x)\, dx,$$
Equations (65) and (66) can be written as follows:
$$L_1\{u(x)\} = p_1, \quad L_2\{u(x)\} = p_2. \qquad (67)$$
Theorem 6.
If there are nonlocal shape functions $q_1(x)$ and $q_2(x)$ satisfying
$$L_1\{q_1(x)\} = 1, \quad L_2\{q_1(x)\} = 0, \qquad (68)$$
$$L_1\{q_2(x)\} = 0, \quad L_2\{q_2(x)\} = 1, \qquad (69)$$
then for any free function $y(x) \in C^1[0,1]$,
$$u(x) = y(x) - q_1(x)\big[L_1\{y(x)\} - p_1\big] - q_2(x)\big[L_2\{y(x)\} - p_2\big] \qquad (70)$$
satisfies the nonlocal boundary conditions (65) and (66).
Proof. 
We first prove Equation (65). Applying $L_1$ to Equation (70) and using the linearity of $L_1$, we have
$$L_1\{u(x)\} = L_1\{y(x)\} - L_1\{q_1(x)\}\big(L_1\{y(x)\} - p_1\big) - L_1\{q_2(x)\}\big(L_2\{y(x)\} - p_2\big),$$
which, with the aid of the first equations in Equations (68) and (69), becomes
$$L_1\{u(x)\} = L_1\{y(x)\} - \big(L_1\{y(x)\} - p_1\big) = p_1.$$
Similarly, applying $L_2$ to Equation (70) and using the linearity of $L_2$ yields
$$L_2\{u(x)\} = L_2\{y(x)\} - L_2\{q_1(x)\}\big(L_1\{y(x)\} - p_1\big) - L_2\{q_2(x)\}\big(L_2\{y(x)\} - p_2\big),$$
which, with the aid of the second equations in Equations (68) and (69), becomes the following:
$$L_2\{u(x)\} = L_2\{y(x)\} - \big(L_2\{y(x)\} - p_2\big) = p_2.$$
We have proved Equation (67); thus, Equations (65) and (66) are proved. This means that when $u(x)$ is given by Equation (70), the nonlocal boundary conditions (65) and (66) are satisfied automatically. □
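Theorem 6 can be exercised on concrete data. In the sketch below, the weights, boundary coefficients, and right-hand values are hypothetical choices of ours ($w_1(x) = 1$, $w_2(x) = 0$, $a_1 = a_2 = 1$, $b_1 = b_2 = 0$), for which $q_1(x) = 2 - 2x$ and $q_2(x) = 1$ satisfy Equations (68) and (69):

```python
from math import sin

# L1{u} = u(0) - int_0^1 u dx,  L2{u} = u(1);  hypothetical p1 = 0.7, p2 = -1.2.
def simpson(f, n=2000):
    h = 1.0 / n
    s = f(0.0) + f(1.0) \
        + 4 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1)) \
        + 2 * sum(f(2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def L1(u): return u(0.0) - simpson(u)
def L2(u): return u(1.0)

p1, p2 = 0.7, -1.2
q1 = lambda x: 2 - 2 * x        # L1{q1} = 2 - 1 = 1, L2{q1} = 0
q2 = lambda x: 1.0              # L1{q2} = 1 - 1 = 0, L2{q2} = 1

def transform(y):               # Equation (70)
    r1, r2 = L1(y) - p1, L2(y) - p2
    return lambda x: y(x) - q1(x) * r1 - q2(x) * r2

u = transform(lambda x: sin(2 * x) + x**3)   # an arbitrary free function y
print(L1(u) - p1, L2(u) - p2)                # both ~ 0
```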

6. Example Testing for the MVIM

We assess the performance of the newly developed methodology of the MVIM by testing some examples.
Example 1.
We consider a nonlinear BVP with the mixed-boundary condition at the left boundary:
$$u''(x) + u^2(x) - x^4 - 2 = 0, \quad u(0) - u'(0) = 0, \quad u(1) = 1,$$
whose exact solution is
$$u(x) = x^2.$$
Let $q_1(x) = \frac{1}{2} - \frac{x}{2}$ and $q_2(x) = \frac{1}{2} + \frac{x}{2}$; then, based on Equations (41), (42) and (45), we have
$$u(x) = y(x) - \frac{1}{2}(\alpha - 1)(1 + x), \qquad (71)$$
where $y(0) = y'(0) = 0$ and $y(1) = \alpha$.
Starting from $y_0(x) = x^2$ and after inserting
$$H = \Big[y(x) - \frac{1}{2}(\alpha - 1)(1 + x)\Big]^2 - x^4 - 2$$
into Equation (51), we can derive
$$y_1(x) = (\alpha - 1)\left[\frac{x^5}{20} + \frac{x^4}{12}\right] - (\alpha - 1)^2\left[\frac{x^4}{48} + \frac{x^3}{12} + \frac{x^2}{8}\right] + x^2.$$
Imposing $y_1(1) = \alpha$, we can derive
$$(\alpha - 1)\left[\frac{1}{20} + \frac{1}{12}\right] - (\alpha - 1)^2\left[\frac{1}{48} + \frac{1}{12} + \frac{1}{8}\right] = \alpha - 1.$$
Obviously, $\alpha = 1$ is a solution, which leads to $y_1(x) = x^2$; then, the exact solution $u(x) = x^2$ is obtained via Equation (71). For this example, the MVIM yields the exact solution.
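The first MVIM iterate above can be reproduced with exact rational arithmetic; the sketch below performs one step of Equation (51) for this example and confirms that $\alpha = 1$ returns $y_1(x) = x^2$:

```python
from fractions import Fraction as F

# One MVIM step: H = (y - (alpha-1)/2*(1+x))^2 - x^4 - 2,
# y1(x) = int_0^x (xi - x) H dxi  (with y0(0) = y0'(0) = 0, y0 = x^2).
def pmul(p, q):
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else F(0)) + (q[i] if i < len(q) else F(0)) for i in range(n)]

def integ(p):
    return [F(0)] + [c / (i + 1) for i, c in enumerate(p)]

def mvim_step(alpha, y0):
    c = (alpha - 1) / 2
    inner = padd(y0, [-c, -c])                              # y0(x) - c*(1 + x)
    H = padd(pmul(inner, inner), [F(-2), 0, 0, 0, F(-1)])   # (...)^2 - 2 - x^4
    return [-a for a in integ(integ(H))]                    # int (xi - x) H = -double integral

y0 = [F(0), F(0), F(1)]                 # y0(x) = x^2
y1 = mvim_step(F(1), y0)                # alpha = 1
print(y1)                               # coefficients of x^2 only: y1(x) = x^2
```

Running `mvim_step` with other values of $\alpha$ reproduces the general $y_1(x)$ polynomial displayed above.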
Example 2.
Let us consider the following singular boundary value problem [32]:
$$u''(x) + \frac{1}{x}u'(x) + u(x) - \frac{5}{4} - \frac{x^2}{16} = 0, \quad u'(0) = 0, \quad u(1) = \frac{17}{16}, \qquad (72)$$
with the exact solution $u(x) = 1 + x^2/16$. The term $u'(x)/x$ is singular at $x = 0$.
The first-order approximate solution obtained by Lu [32] based on the VIM is as follows:
$$u_1(x) = \frac{83}{96} + \frac{37}{192}x^2 + \frac{x^4}{192}. \qquad (73)$$
We give a detailed analysis of this example when solving it using the MVIM. Let $q_1(x) = x - 1$ and $q_2(x) = 1$; based on Equations (45), (41), and (42) with $y'(0) = 0$, we have
$$u(x) = y(x) - G(x) := y(x) - \alpha + \frac{17}{16}, \qquad (74)$$
where we suppose that the unknown right-hand value is $y(1) = \alpha$, and $B = 0$ is taken in Equation (45). In the iteration, the starting initial guess is $y_0(x) = A$. Inserting Equation (74) into Equation (72), we can derive a new ODE:
$$y''(x) + \frac{1}{x}y'(x) + y(x) - \frac{3}{16} - \alpha - \frac{x^2}{16} = 0, \quad y(0) = A, \quad y'(0) = 0. \qquad (75)$$
The MVIM in Equation (51), starting from the restricted $y_0(x) = 1$ and with $B = 0$, leads to the following first-order approximation:
$$y_1(x) = A - \frac{13}{32}x^2 + \frac{\alpha}{2}x^2 + \frac{x^4}{192}. \qquad (76)$$
Imposing $y_1(1) = \alpha$, we can solve
$$\alpha = 2A - \frac{77}{96}.$$
Next, we determine $A$. It follows from Equations (74), (75), and (76) that
$$u_1(x) = -A - \frac{13}{32}x^2 + \left(A - \frac{77}{192}\right)x^2 + \frac{x^4}{192} + \frac{179}{96}. \qquad (77)$$
Inserting it into Equation (72), we search for the optimal value of $A$ by minimizing the absolute error of the governing equation in Equation (72):
$$\min_A\ \max_{x \in (0, 1)} \left|u_1''(x) + \frac{1}{x}u_1'(x) + u_1(x) - \frac{5}{4} - \frac{x^2}{16}\right|.$$
Notice that in this minimization problem with a single unknown value $A$, we can adopt the so-called interval reduction method to find the proper value of $A$. First, we select a large interval and tabulate the residual $|u_1'' + u_1'/x + u_1 - 5/4 - x^2/16|$. We observe where the minimal point is located and then reduce the interval to a smaller one containing that point. Carrying out the same procedure a few times yields an accurate value of $A$.
In Figure 2a, we plot the error of the governing equation against the value of $A$, which shows that there exists a minimum at $A = 0.8645833$; thus, $\alpha = 0.927083266$. Based on this value, we compare the solution obtained from the MVIM to the exact one in Figure 2b, whose ME is 1.302 × 10⁻³.
From Table 1, we can observe that the analytic solution obtained by the first-order MVIM is much more accurate than that obtained by Lu [32], whose results underestimate the exact solution.
Notice that $A$ and $B$ in Equation (51) can be regarded as free parameters, since they do not influence the new ODE: $y_{n+1}''(x) = -H(x, y_n(x), y_n'(x))$. For this example, we take $y_0(x) = 1$, not $y_0(x) = A$. In the latter case, we would derive Equation (73) again, which does not improve the accuracy of the first-order solution.
Next, we apply Equation (63) to determine the optimal value of α. We begin with y 0 = 1 , and up to the first-order, we obtain the following:
H = 1 x y ( x ) + y ( x ) 3 16 α x 2 16 , y 1 ( x ) = 1 13 32 x 2 + α 2 x 2 + x 4 192 , u 1 ( x ) = y 1 ( x ) y 1 ( 1 ) + 17 16 .
The optimal value is α = 0.925 , which is slightly different from the α = 0.927083266 obtained above. The ME is 1.04 × 10⁻³, which is more accurate than that calculated from Equation (77) with ME = 1.302 × 10⁻³.
Example 3.
We consider the Airy-type boundary value problem [32,33]:
u″(ξ) − a₀ξu(ξ) − 2 = 0, u(−1) = u(1) = 0,
where a₀ is a given constant. Due to the stiffness when a₀ ≥ 20, the Lie-group shooting method [5] fails to solve this problem. We consider a₀ = 10.
We seek a variable transformation as follows:
x = (ξ + 1)/2,
u″(x) − 4a₀(2x − 1)u(x) − 8 = 0, u(0) = u(1) = 0.
Let q₁(x) = 1 − x and q₂(x) = x. Then, it follows from Equations (45), (41) and (42) that
u(x) = y(x) − G(x) := y(x) − αx.
Inserting Equation (80) into Equation (79), we can derive a new ODE:
y″(x) − 4a₀(2x − 1)y(x) + 4a₀(2x − 1)xα − 8 = 0, y(0) = 0, y′(0) = 0.
Starting from y 0 ( x ) = 0 and from Equation (51), we can derive the following:
y₁(x) = 4x² + ∫₀ˣ 4(ξ − x)a₀(2ξ − 1)ξα dξ.
By using the formula (49), we can derive the following:
y₁(x) = 4x² − 4a₀α(x⁴/6 − x³/6),
By imposing y 1 ( 1 ) = α , we can obtain α = 4 .
Similarly, we can derive the following:
y₂(x) = 4x² − 4a₀α(x⁴/6 − x³/6) + (8a₀/5)x⁵ − (4a₀/3)x⁴ − 16a₀²α(x⁷/126 − x⁶/60 + x⁵/120).
By imposing y₂(1) = α, we can obtain α = (1260 + 84a₀)/(315 − 2a₀²). In Figure 3, we compare the solutions obtained from the MVIM to that obtained using the Lie-group shooting method (LGSM) [5].
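The closed form for α can be rederived with exact rational arithmetic (a Python sketch; it assumes the bracket x⁷/126 − x⁶/60 + x⁵/120 in Equation (81), which sums to −1/2520 at x = 1):

```python
from fractions import Fraction as F

def alpha_closed(a0):
    # Equation (82): alpha = (1260 + 84*a0) / (315 - 2*a0**2)
    return F(1260 + 84 * a0, 315 - 2 * a0 ** 2)

def alpha_from_condition(a0):
    # Imposing y2(1) = alpha on Equation (81) gives the linear equation
    #   alpha = 4 + (8/5 - 4/3)*a0 + (16*a0**2/2520)*alpha,
    # which is solved for alpha.
    c = F(16, 2520) * a0 ** 2            # coefficient of alpha
    rhs = 4 + (F(8, 5) - F(4, 3)) * a0   # constant part
    return rhs / (1 - c)

for a0 in (1, 5, 10):
    assert alpha_closed(a0) == alpha_from_condition(a0)
print(alpha_closed(10))   # 420/23, about 18.26 for a0 = 10
```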
When we apply Equation (63) to determine α in Equation (82), the optimal value is α = 16.222 . The current solution based on the second-order MVIM with the minimization technique has ME = 6.26 × 10 1 , which is more accurate than that calculated using Equation (82) with ME = 8.77 × 10 1 .
Example 4.
Let us consider [5,34,35]:
u″(x) − (3/2)u²(x) = 0,
where
u ( 0 ) = 4 , u ( 1 ) = 1 .
There are two solutions, of which
u(x) = 4/(1 + x)²
is the first solution.
As in Example 3, q₁(x) = 1 − x and q₂(x) = x. Then, it follows from Equations (45), (41), and (42) that
u(x) = y(x) − G(x) := y(x) − (α + 3)x + 4,
where α = y ( 1 ) .
Starting from y 0 ( x ) = 0 and by inserting
H = −(3/2)[y(x) − (α + 3)x + 4]²
into Equation (51), we can derive
y₁(x) = ((α + 3)²/8)x⁴ + 12x² − 2(α + 3)x³,
from which y₁(1) = α renders α = 9 − 2√6.
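Imposing y₁(1) = α on the first-order solution above gives (α + 3)²/8 − 2(α + 3) + 12 = α, i.e., α² − 18α + 57 = 0, whose smaller root is the quoted value (a quick Python check):

```python
import math

# alpha**2 - 18*alpha + 57 = 0, from imposing y1(1) = alpha
disc = 18 ** 2 - 4 * 57                      # = 96
alpha_minus = (18 - math.sqrt(disc)) / 2     # smaller root
alpha_plus = (18 + math.sqrt(disc)) / 2      # larger root

# the branch quoted in the text is alpha = 9 - 2*sqrt(6)
assert abs(alpha_minus - (9 - 2 * math.sqrt(6))) < 1e-12
print(alpha_minus)   # ≈ 4.10102
```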
Similarly, we can derive the following:
y 2 ( x ) = ( α + 3 ) 3 112 x 7 + 4 α 2 + 24 α + 72 5 x 6 3 ( α + 3 ) x 5 + ( α + 3 ) 2 8 x 4
2 ( α + 3 ) x 3 + 24 x 2 + ( α + 3 ) 4 3840 x 10 ( α + 3 ) 3 96 x 9 + 3 ( α + 3 ) 2 16 x 8 12 ( α + 3 ) 7 x 7 ,
u 2 ( x ) = y 2 ( x ) [ y 2 ( 1 ) + 3 ] x + 4 .
It can be checked that u 2 ( x ) exactly satisfies the boundary conditions u 2 ( 0 ) = 4 and u 2 ( 1 ) = 1 .
In y₂(x), there exists a free parameter α. Therefore, we apply Equation (63) to determine α in Equation (86). The optimal value is α = 4.4954. As shown in Figure 4, the current solution based on the second-order MVIM with the minimization technique is quite close to the exact one, whose ME is 2.55 × 10⁻². In Figure 4, we also compare the analytic solutions of the MVIM to the exact solution in Equation (85), from which we can observe that the second-order solution is more accurate than the first-order solution and tends to the exact solution.
To compare with the traditional VIM, we begin with
u 0 = 4 + B x ,
where B is determined to meet the right-hand condition in Equation (84). Through laborious work, we can derive the following:
u 2 ( x ) = 4 + B x + 12 x 2 + 2 B x 3 + B 2 8 + 12 x 4 + 3 B x 5 + B 2 4 + 36 5 x 6 + B 3 112 + 12 B 7 x 7 + 3 B 2 x 8 16 + B 3 x 9 96 + B 4 x 10 3840 .
B = −7.81759 is obtained by imposing u₂(1) = 1. The result is plotted in Figure 4 with ME = 0.115, which is less accurate than the ME = 2.55 × 10⁻² obtained by the MVIM by about one order.
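The value of B follows from the single polynomial equation u₂(1) = 1. A bisection sketch (Python; the coefficients are collected from the expression above, with minus signs restored on the assumption that they were lost in typesetting, so that the root is negative):

```python
def u2_at_1(B):
    # u2(1) from the traditional VIM solution above
    return (4 + B + 12 + 2 * B + B ** 2 / 8 + 12 + 3 * B
            + B ** 2 / 4 + 36 / 5 + B ** 3 / 112 + 12 * B / 7
            + 3 * B ** 2 / 16 + B ** 3 / 96 + B ** 4 / 3840)

def bisect(f, lo, hi, tol=1e-10):
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# u2(1) - 1 changes sign on [-9, -7] and has a single root there
B = bisect(lambda b: u2_at_1(b) - 1, -9.0, -7.0)
print(B)   # ≈ -7.8176
```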
Example 5.
We consider a more difficult nonlinear BVP with a mixed-boundary condition given on the right-boundary:
u″(x) − (3/2)u²(x) = 0, u(0) = 4, u(1) + u′(1) = 0,
of which the exact solution is given by Equation (85).
Let q₁(x) = (2 − x)/2 and q₂(x) = x/2. It follows from Equations (41) and (42) that
u(x) = y(x) + 4 − 2x − (x/2)[y(1) + y′(1)] = y(x) + 4 − 2x − xγ/2,
where γ := y(1) + y′(1) and y(0) = y′(0) = 0 are adopted. The new ODE for y(x) is Equation (43) with the following function:
H(x, y(x)) = −(3/2)[y(x) + 4 − 2x − xγ/2]².
Starting from y 0 ( x ) = 0 and inserting Equation (89) for H into Equation (51) yields
y₂(x) = (η²/8)x⁴ − 2ηx³ + 12x² + (η⁴/3840)x¹⁰ − (η³/96)x⁹ + (3η²/16)x⁸ − (12η/7)x⁷ + (36/5)x⁶ + (η²/4)x⁶ − 3ηx⁵ + 12x⁴ − (η³/112)x⁷,
u₂(x) = y₂(x) + 4 − 2x − (x/2)[y₂(1) + y₂′(1)],
where η = 2 + γ/2. Notice that in u₂(x), y₂(1) + y₂′(1) cannot be replaced by γ; otherwise, the right-boundary condition u₂(1) + u₂′(1) = 0 will not be satisfied. Indeed, y₂(1) and y₂′(1) are calculated from Equation (90).
For this problem, it is hard to solve the coupled nonlinear algebraic equations to obtain y₂(1) and y₂′(1). Therefore, we apply Equation (63) to determine γ in Equation (90). The optimal value is γ = 10.774. As shown in Figure 5 by a solid line, the current solution based on the second-order MVIM with the minimization of the error of the solution is quite close to the exact one, whose ME is 4.41 × 10⁻².
Next, we search for the optimal value of γ by minimizing the absolute error of the governing equation in Equation (88):
min_γ max_{x∈(0,1)} |u₂″(x) − (3/2)u₂²(x)|.
The optimal value is γ = 10.848 , which is slightly larger than γ = 10.774 in the above. As shown in Figure 5 by a dashed line, the solution based on the second-order MVIM with the minimization of the error of the governing equation is also close to the exact one, whose ME is 7.33 × 10⁻².

A Nonlocal BVP

For Equation (83), we extend the result to the following nonlocal BVP:
u ( 0 ) = 4 , 2 u ( 1 ) 0 1 u ( x ) d x = 0 .
The corresponding nonlocal shape functions are given as follows:
s 1 ( x ) = 1 2 x 3 , s 2 ( x ) = x 3 .
Let
u ( x ) = y ( x ) s 1 ( x ) [ y ( 0 ) 4 ] s 2 ( x ) 2 y ( 1 ) 0 1 y ( x )
be the variable transform. Consequently, Equation (83) is transformed to the following:
y ( x ) 3 2 y ( x ) s 1 ( x ) [ y ( 0 ) 4 ] s 2 ( x ) 2 y ( 1 ) 0 1 y ( x ) 2 = 0 .
For simplicity, we choose y ( 0 ) = 4 . By defining a constant
β = 2 y ( 1 ) 0 1 y ( x ) d x ,
we can reduce Equation (95) to
y ( x ) + H ( y ( x ) ) = y ( x ) 3 2 [ y ( x ) β s 2 ( x ) ] 2 = 0 .
We begin with
y 0 ( x ) = 4 + B x ,
where the value of B can be optimized to minimize the absolute error of the analytic solution. Then, we have the following:
β 0 = 4 + 3 B 2 ,
H ( y 0 ( x ) ) = 3 2 [ y 0 ( x ) β s 2 ( x ) ] 2 = 24 12 B x 3 B 2 x 2 2 3 β 0 2 x 2 18 + 4 β 0 x + β 0 B x 2 .
Inserting Equation (98) into Equation (51) yields the following:
y 1 ( x ) = 4 + B x + 12 x 2 + 2 B x 3 2 β 0 x 3 3 + B 2 x 4 8 + β 0 x 4 72 β 0 B x 4 12 .
Let
β 1 = 2 y 1 ( 1 ) 0 1 y 1 ( x ) d x ;
we come to the first-order analytic solution:
u 1 ( x ) = 4 + B x β 1 x 3 + 12 x 2 + 2 B x 3 2 β 0 x 3 3 + B 2 x 4 8 + β 0 x 4 72 β 0 B x 4 12 .
We can check that u 1 ( x ) satisfies the nonlocal boundary conditions in Equation (93).
When B is determined by
min B max x ( 0 , 1 ) u 1 ( x ) 4 ( 1 + x ) 2 ,
the optimal value is B = 8.15325 . From Table 2, we can observe that the first-order analytic solution obtained by the MVIM with a nonlocal modification is quite accurate. As shown in Figure 6, these two curves are close, with ME = 0.1369.

7. Linearized Liapunov Method for Seeking Analytic Solutions

Before the application of the Liapunov method to solve the nonlinear BVP, we are required to linearize the transformed IVP for Equation (43), which is assumed to have the following form:
y ( x ) + a y ( x ) + b y ( x ) + c + f ( y , y ) y ( x ) + g ( y , y ) y ( x ) = 0 ,
where f ( y , y ) and g ( y , y ) are nonlinear functions of ( y , y ) , and a, b, and c are constants.
According to Equation (45), we can take a suitable initial guess of y 0 ( x ) . To simplify the new analytic method, we decompose Equation (100) as follows:
y ( x ) + a y ( x ) + b y ( x ) + c + q 0 f ( y , y ) y ( x ) + q 0 g ( y , y ) y ( x ) = ( q 0 1 ) f ( y , y ) y ( x ) + ( q 0 1 ) g ( y , y ) y ( x ) ,
where q 0 is a constant weight factor. Starting from the initial guess y 0 ( x ) , one solves the following linearized ODE:
y ( x ) + a y ( x ) + b y ( x ) + c + q 0 f ( y 0 , y 0 ) y ( x ) + q 0 g ( y 0 , y 0 ) y ( x ) = ( q 0 1 ) f ( y 0 , y 0 ) y 0 ( x ) + ( q 0 1 ) g ( y 0 , y 0 ) y 0 ( x )
to seek a higher-order analytic solution. This technique is termed a decomposition–linearization method, advocated by Liu et al. [36] as a basis to treat the analytic solution of the nonlinear ODE.
Like Equation (2), we insert a dummy parameter μ into Equation (101) to obtain the following:
y ( x ) + μ a y ( x ) + μ b y ( x ) + μ c + μ q 0 f ( y 0 , y 0 ) y ( x ) + μ q 0 g ( y 0 , y 0 ) y ( x ) = μ ( q 0 1 ) f ( y 0 , y 0 ) y 0 ( x ) + μ ( q 0 1 ) g ( y 0 , y 0 ) y 0 ( x ) .
Inserting
y ( m ) ( x ) = y 0 ( x ) + k = 1 m μ k y k ( x ) ,
into Equation (102) and equating the coefficients preceding μ k , we can derive the following:
y 1 ( x ) + a y 0 ( x ) + b y 0 ( x ) + c + f ( y 0 , y 0 ) y 0 ( x ) + g ( y 0 , y 0 ) y 0 ( x ) = 0 , y 1 ( 0 ) = y 1 ( 0 ) = 0 , y 2 ( x ) + q 0 f ( y 1 , y 1 ) y 1 ( x ) + q 0 g ( y 1 , y 1 ) y 1 ( x ) = 0 , y 2 ( 0 ) = y 2 ( 0 ) = 0 , y k ( x ) + q 0 f ( y k 1 , y k 1 ) y k 1 ( x ) + q 0 g ( y k 1 , y k 1 ) y k 1 ( x ) = 0 , y k 1 ( 0 ) = y k 1 ( 0 ) = 0 .
We found that the solutions to the above series of ODEs are easily derived. Then, the analytic solution of the nonlinear BVP can be found quickly.
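Each correction in Equation (104) only requires integrating a known right-hand side twice with zero initial conditions, which is just as easy to do numerically. A minimal sketch (Python, trapezoidal rule; the constant right-hand side below is a toy stand-in for the bracketed terms):

```python
N = 2001
h = 1.0 / (N - 1)

def double_integrate(rhs):
    """Given rhs = y_k'' sampled on the grid, return y_k with
    y_k(0) = y_k'(0) = 0 (two trapezoidal integrations)."""
    yp = [0.0] * N          # y_k'
    y = [0.0] * N           # y_k
    for i in range(1, N):
        yp[i] = yp[i - 1] + 0.5 * h * (rhs[i - 1] + rhs[i])
        y[i] = y[i - 1] + 0.5 * h * (yp[i - 1] + yp[i])
    return y

# toy data: y0(x) = 1 with a = c = 0, b = 1, f = g = 0, so y1'' = -1
# and the exact correction is y1(x) = -x**2/2
y1 = double_integrate([-1.0] * N)
print(y1[-1])   # ≈ -0.5
```

For polynomial right-hand sides, as in the examples below, the two integrations can of course be carried out exactly instead.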

8. Examples Testing for the Linearized Liapunov Method

The background of the linearized Liapunov method is a decomposition–linearization technique, which is executed on the nonlinear ODE by simplifying it to a linear ODE around a zeroth-order solution y₀. Previously, the decomposition–linearization technique was combined with the homotopy perturbation method in [36] to treat nonlinear differential/integral equations and nonlinear jerk equations. Equation (104) can be used sequentially to find the analytic solutions step-by-step by merely performing two integrations, with the integrand being a function of the previous step's solution. The power of this method comes from introducing a dummy parameter μ into the linearized Equation (102) and then into the solution in Equation (103). Below, we test the performance of the linearized Liapunov method.
Example 6.
To explain the process of the decomposition–linearization technique, we rewrite Equation (83) to an equivalent form:
u″(x) − (3/2)(1 + q₀ − q₀)u²(x) = 0;
the term (3/2)(1 + q₀)u²(x) is moved to the right-hand side:
u″(x) + (3/2)q₀u²(x) = (3/2)(1 + q₀)u²(x).
We select a reference solution u 0 ( x ) , then we linearize the nonlinear term u 2 ( x ) on the left-hand side to a linear term u 0 ( x ) u ( x ) . At the same time, u 2 ( x ) on the right-hand side is approximated by u 0 2 ( x ) ; hence, we have the following:
u″(x) + (3/2)q₀u₀(x)u(x) = (3/2)(1 + q₀)u₀²(x).
Apparently, it is a second-order linear ODE when q 0 is a given constant, and u 0 ( x ) is a given function.
We consider the following boundary conditions for Equation (83):
2u(0) + u′(0) = 0, u(1) = 1.
Equation (85) is the first solution of this boundary value problem.
For this problem, q₁(x) = 1 − x and q₂(x) = 2x − 1, and we take
u(x) = y(x) − (1 − x)[2y(0) + y′(0)] − (2x − 1)[y(1) − 1],
such that u ( x ) automatically satisfies Equation (105).
We give the zeroth-order solution of y ( x ) as follows:
y 0 ( x ) = A + B x .
Let
u ( 0 ) = 2 A 0 ,
y ( 0 ) = A , y ( 0 ) = B ,
where A 0 = u ( 0 ) is to be determined, and A and B are parameters whose values are to be optimized.
Differentiating Equation (106) with respect to x, inserting x = 0 and using Equation (108), we can derive the following:
y ( 1 ) = 1 + A + B A 0 .
Then, we have
u ( x ) = y ( x ) + c 1 x + c 2 ,
where
c 1 = 2 A 0 B , c 2 = A A 0 .
Inserting Equation (109) into Equation (83) and directly considering its linearization around y 0 ( x ) , we have
y ( x ) + 3 2 q 0 y 0 ( x ) y ( x ) = 3 2 ( q 0 + 1 ) y 0 2 ( x ) + h ( x ) + ( 3 c 1 x + 3 c 2 ) y ( x ) ,
where
h ( x ) = 3 2 c 2 2 + 3 c 1 c 2 x + 3 2 c 1 2 x 2 .
Inserting Equation (103) for y ( x ) into
y ( x ) + 3 μ 2 q 0 y 0 ( x ) y ( x ) = 3 μ 2 ( q 0 + 1 ) y 0 2 ( x ) + μ h ( x ) + μ ( 3 c 1 x + 3 c 2 ) y ( x ) ,
we can derive
y 1 ( x ) = 3 2 y 0 2 ( x ) + ( 3 c 1 x + 3 c 2 ) y 0 ( x ) + h ( x ) , y 1 ( 0 ) = y 1 ( 0 ) = 0 ,
y 2 ( x ) = 3 q 0 2 y 0 ( x ) y 1 ( x ) + ( 3 c 1 x + 3 c 2 ) y 1 ( x ) , y 2 ( 0 ) = y 2 ( 0 ) = 0 ,
y 3 ( x ) = 3 q 0 2 y 0 ( x ) y 2 ( x ) + ( 3 c 1 x + 3 c 2 ) y 2 ( x ) , y 3 ( 0 ) = y 3 ( 0 ) = 0 .
Sequentially solving the above linear IVPs to derive y k ( x ) and inserting them into Equation (103) with μ = 1 , an analytic solution of y ( x ) can be achieved; then, u ( x ) is obtained from Equation (109).
For the first-order solution, we can derive
u ( 1 ) ( x ) = A + B x + a 12 x 2 + a 13 x 3 + a 14 x 4 + c 1 x + c 2 ,
where
a 12 = 3 A 2 4 + 3 c 2 2 4 + 3 c 2 A 2 , a 13 = A B 2 + c 1 A 2 + c 2 B 2 + c 1 c 2 2 , a 14 = B 2 8 + c 1 2 8 + c 1 B 4 .
The parameters are A = 0 , B = 1 , and A₀ = 2(√2 − 1), and the first-order solution is close to the exact one obtained by the Lie-group shooting/boundary shape function method [37], which is the second solution for Equations (83) and (105) and is different from that in Equation (85). As shown in Figure 7 by a dashed-dotted line, the first-order solution is quite accurate with ME = 6.24 × 10⁻³.
Through some manipulations, the second-order solution can be derived as follows:
u ( 2 ) ( x ) = A + B x + a 12 x 2 + a 13 x 3 + a 14 x 4 + c 1 x + c 2 + c 3 a 12 12 x 4 + c 3 a 13 + c 4 a 12 20 x 5 + c 3 a 14 + c 4 a 13 30 x 6 + c 4 a 14 42 x 7 ,
where
c 3 : = 3 c 2 3 q 0 A 2 , c 4 : = 3 c 1 3 q 0 B 2 .
We take q₀ = 1, A = B = 0, and A₀ = 0.82428, and the second-order solution denoted by a dashed line is quite close to the exact one. As shown in Figure 7, the second-order solution is more accurate than the first-order solution, with ME = 8.6 × 10⁻⁴.
In Table 3, we tabulate the values of u(x) governed by Equation (83) but subject to the boundary conditions in Equation (105), and we compare them to the exact ones. It can be seen that the second-order solution is more accurate than the first-order solution with ME = 6.24 × 10⁻³.
Note that if we do not consider the linearization in Equation (111), we can derive
y ( x ) = 3 μ 2 y 2 ( x ) + μ h ( x ) + μ ( 3 c 1 x + 3 c 2 ) y ( x ) ,
which, by equating the coefficients preceding μ k , k = 1 , 2 , 3 , leads to the following:
y 1 ( x ) = 3 2 y 0 2 ( x ) + ( 3 c 1 x + 3 c 2 ) y 0 ( x ) + h ( x ) , y 1 ( 0 ) = y 1 ( 0 ) = 0 ,
y 2 ( x ) = 3 y 0 ( x ) y 1 ( x ) + ( 3 c 1 x + 3 c 2 ) y 1 ( x ) , y 2 ( 0 ) = y 2 ( 0 ) = 0 ,
y 3 ( x ) = 3 y 0 ( x ) y 2 ( x ) + ( 3 c 1 x + 3 c 2 ) y 2 ( x ) + 3 2 y 1 2 ( x ) , y 3 ( 0 ) = y 3 ( 0 ) = 0 .
Obviously, Equation (119) is equivalent to Equation (113) by taking q₀ = 2, and Equation (120), with the extra term 3y₁²(x)/2, is more complex than Equation (114) with q₀ = 2. We find that the second-order solution obtained from Equations (118) and (119) with A = 1.99 and B = 0.56 is less accurate than that in Equation (116), with ME = 1.51 × 10⁻¹.
To compare with the homotopy perturbation method [21], Equation (83) is written in a homotopy form:
u″(x) − (3/2)pu²(x) = 0.
Inserting
u ( m ) ( x ) = u 0 ( x ) + k = 1 m p k u k ( x )
into Equation (121) and equating the coefficients preceding p k , we can derive the following:
u₁″(x) = (3/2)u₀²(x), u₁(0) = u₁′(0) = 0, u₂″(x) = 3u₀(x)u₁(x), u₂(0) = u₂′(0) = 0, u₃″(x) = (3/2)u₁²(x) + 3u₀(x)u₂(x), u₃(0) = u₃′(0) = 0, uₖ″(x) = (3/2)u²ₖ₋₂(x) + 3u₀(x)uₖ₋₁(x), uₖ(0) = uₖ′(0) = 0, k ≥ 4.
We found that the solutions to the above series of ODEs are not easily derived when k ≥ 4 because the number of terms in u₂(x) is greater than eight; hence, the computation of u₂²(x) would require significant work.
We begin with
u₀(x) = A − 2Ax,
which satisfies the left boundary condition in Equation (105). Through a lengthy derivation, we come to the following:
u⁽²⁾(x) = A − 2Ax + (3A²/4)x² − A²x³ + (A²/2)x⁴ + (3A³/16)x⁴ − (3A³/8)x⁵ + (A³/4)x⁶ − (A³/14)x⁷.
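The recursion above is just polynomial multiplication followed by two integrations, which can be checked with exact rational coefficients (a Python sketch; it assumes u₀ = A − 2Ax with the sample value A = 1 and the signs as reconstructed):

```python
from fractions import Fraction as F

def polymul(p, q):
    # polynomials as {power: coefficient} dictionaries
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, 0) + a * b
    return r

def dblint(p):
    # c*x**k  ->  c*x**(k+2) / ((k+1)*(k+2)), with zero initial conditions
    return {k + 2: c / ((k + 1) * (k + 2)) for k, c in p.items()}

def scale(p, s):
    return {k: c * s for k, c in p.items()}

u0 = {0: F(1), 1: F(-2)}                      # u0 = A - 2Ax with A = 1
u1 = dblint(scale(polymul(u0, u0), F(3, 2)))  # u1'' = (3/2) u0**2
u2 = dblint(scale(polymul(u0, u1), 3))        # u2'' = 3 u0 u1

assert u1 == {2: F(3, 4), 3: F(-1), 4: F(1, 2)}
assert u2 == {4: F(3, 16), 5: F(-3, 8), 6: F(1, 4), 7: F(-1, 14)}
print(u2)
```

The computed coefficients reproduce those of the second-order solution quoted above.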
To meet the right boundary condition in Equation (105), A = 1.0415 was obtained. As shown in Table 3, the results of u⁽²⁾(x) at some points computed using the homotopy perturbation method are not accurate compared to those computed using Equations (115) and (116).
Example 7.
Let us consider a three-point boundary value problem:
u″(x) − (1/8)[32 + 2x³ − u(x)u′(x)] = 0,
u(1) = 17, u(2) + u(3) = 79/3,
which has an exact solution:
u(x) = x² + 16/x.
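The exact solution can be checked directly against Equation (126) and the conditions in Equation (127) (a Python sketch; the equation is read as u″ − (1/8)[32 + 2x³ − u u′] = 0, which the exact solution satisfies identically):

```python
def u(x):
    return x * x + 16.0 / x

def up(x):                                   # u'
    return 2 * x - 16.0 / x ** 2

def upp(x):                                  # u''
    return 2 + 32.0 / x ** 3

def residual(x):
    # residual of Equation (126)
    return upp(x) - (32 + 2 * x ** 3 - u(x) * up(x)) / 8

for x in (1.0, 1.5, 2.0, 2.5, 3.0):
    assert abs(residual(x)) < 1e-10

# conditions of Equation (127)
assert abs(u(1.0) - 17.0) < 1e-9
assert abs(u(2.0) + u(3.0) - 79.0 / 3.0) < 1e-9
print("exact solution verified")
```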
The two shape functions are given by q₁(x) = (5 − 2x)/3 and q₂(x) = (x − 1)/3, and the variable transformation is given as follows:
u(x) = y(x) − q₁(x)[y(1) − 17] − q₂(x)[y(2) + y(3) − 79/3],
such that u ( x ) automatically satisfies Equation (127). Let
u′(1) = A₀, α = y(2) + y(3),
and inserting x = 1 into the derivative of Equation (129), we can obtain
α = 3 B + 2 A 3 A 0 23 3 ,
where A = y ( 1 ) and B = y ( 1 ) . Based on Equation (129), we have
u ( x ) = y ( x ) + c 1 x + c 2 ,
where
c 1 = 2 3 ( A 17 ) 1 3 α 79 3 , c 2 = 1 3 α 79 3 5 3 ( A 17 ) .
We give the zeroth-order solution of y ( x ) as follows:
y 0 ( x ) = A + B ( x 1 ) .
Inserting Equation (130) into Equation (126) and considering its linearization around y 0 ( x ) , we have
y ( x ) + μ q 0 8 y 0 ( x ) y ( x ) = μ ( q 0 1 ) 8 y 0 ( x ) y 0 ( x ) μ c 1 8 y ( x ) μ ( c 1 x + c 2 ) 8 y ( x ) + μ h ( x )
where
h ( x ) = 4 + x 3 4 c 1 8 ( c 1 x + c 2 ) .
By the same token, we can derive the following:
y 1 ( x ) = 1 8 y 0 ( x ) y 0 ( x ) c 1 8 y 0 ( x ) c 1 x + c 2 8 y 0 ( x ) + h ( x ) , y 1 ( 1 ) = y 1 ( 1 ) = 0 , y 2 ( x ) = q 0 8 y 0 ( x ) y 1 ( x ) c 1 8 y 1 ( x ) c 1 x + c 2 8 y 1 ( x ) , y 2 ( 1 ) = y 2 ( 1 ) = 0 .
For the second-order solution, we can derive
u ( 2 ) ( x ) = A + B ( x 1 ) + a 12 x 2 + a 13 x 3 + x 5 80 c 3 + a 22 x 2 + a 23 x 3 + a 24 x 4 + a 25 x 5 + a 26 x 6 + a 27 x 7 c 4 ,
where
a 12 = B ( B A ) 16 + c 1 ( B A ) 16 c 2 B 16 c 1 c 2 16 + 2 , a 13 = B 2 48 c 1 B 24 c 1 2 48 , c 3 = a 12 + a 13 + 1 80 , a 22 = ( q 0 B + c 1 ) c 3 16 , a 23 = c 2 a 12 24 , a 24 = ( q 0 B + c 1 ) a 12 96 c 2 a 13 32 c 1 a 12 48 , a 25 = ( q 0 B + c 1 ) a 13 160 3 c 1 a 13 160 , a 26 = c 2 3840 , a 27 = c 1 5376 q 0 B + c 1 26880 , c 4 = a 22 + a 23 + a 24 + a 25 + a 26 + a 27 .
We take A₀ = −14, A = 14.621 , and B = 5.999 . The second-order solution is close to the exact one in Equation (128), as shown in Figure 8, with the maximum relative error (MRE) = 1.246 × 10⁻². Table 4 reveals that the present analytic solutions are quite accurate.
Example 8.
Boundary layer theory explains very well the steady-state flow over a flat plate at zero incidence angle. Based on the assumptions of incompressibility and the conservation of momentum, the laminar flow satisfies the following:
∂U/∂X + ∂V/∂Y = 0, U ∂U/∂X + V ∂U/∂Y = (1/ρ) ∂τ_XY/∂Y.
In the above, X and Y are the coordinates attached to the plate in the horizontal and perpendicular directions, and U and V are the velocity components of the flow in the X and Y directions, respectively. The fluid density ρ is assumed to be constant.
The shear stress is governed by the Newtonian fluid:
τ_XY = K ∂U/∂Y,
where K > 0 is a constant. The corresponding boundary conditions are given by
U(X, 0) = U_w, U(X, +∞) = U_∞, V(X, 0) = V_w(X) = V₀X^(−1/2),
where the plate is moving at a constant speed U w in the direction parallel to an oncoming flow with a constant speed U . After introducing a similarity variable and a stream function:
η = BX^(−β)Y, ϕ(X, Y) = AX^σ f(η)
with
σ = 1/2, β = σ, B = (ρU_∞/(2K))^(1/2), A = U_∞/B,
we can obtain
f‴(η) + f(η)f″(η) = 0,
which is subject to the boundary conditions:
f(0) = C, f′(0) = ξ, f′(+∞) = 1.
In the above, ξ = U_w/U_∞ is the velocity ratio. When ξ < 0, we have a reverse flow attached near the boundary. When 0 < ξ < 1, the speed of the oncoming fluid is larger than that of the plate. When ξ > 1, the speed of the moving plate is faster than the speed of the oncoming fluid. The term C = 2BV₀/U_∞ is a constant related to suction if it is negative or injection if it is positive.
Since the boundary layer problem is an important issue in practical applications of fluid mechanics, we take it as an application of the proposed methods. We are concerned with those problems that can be transformed into second-order nonlinear boundary value problems; many more boundary layer problems are governed by third-order nonlinear boundary value problems, which cannot be treated as second-order ones.
In this case, we consider a third-order nonlinear boundary value problem [38] for depicting a simple boundary layer problem of a fluid:
f‴(η) + f(η)f″(η) = 0, η ∈ [0, ∞),
f(0) = 0, f′(0) = 0, f′(∞) = 1 (Blasius flow),
f(0) = 0, f′(0) = 1, f′(∞) = 0 (Sakiadis flow).
Previously, Chang et al. [38] employed a Lie-group-shooting method to solve the problem of Blasius flow, obtaining
f″(0) = 0.4696,
which was also obtained by Cortell [39].
Letting
u = f″(η), x = f′(η),
we can change Equation (131) to a second-order nonlinear ODE:
u(x)u″(x) + x = 0, x ∈ [0, 1],
u′(0) = 0, u(1) = 0 (Blasius flow),
u(0) = 0, u′(1) = 0 (Sakiadis flow).
No matter which case is considered, we are concerned with the following value:
α = f″(0),
which corresponds to
α = u(0) (Blasius flow),
α = u(1) (Sakiadis flow).
We begin with the zeroth-order solutions:
u₀(x) = α (Blasius flow), u₀(x) = βx (Sakiadis flow),
where α and β are determined, respectively, by matching the right-hand boundary conditions u(1) = 0 for Blasius flow and u′(1) = 0 for Sakiadis flow.
Equation (136) is re-written as
u″(x) + xu(x)/u²(x) = 0,
and can be linearized to
u″(x) + q₀xu(x)/u₀²(x) = (q₀ − 1)xu₀(x)/u₀²(x).
Inserting
u ( m ) ( x ) = u 0 ( x ) + k = 1 m μ k u k ( x ) ,
into
u″(x) + μq₀xu(x)/u₀²(x) = μ(q₀ − 1)x/u₀(x),
yields
u₁″(x) = −x/u₀(x), u₁(0) = u₁′(0) = 0,
u₂″(x) = −q₀xu₁(x)/u₀²(x), u₂(0) = u₂′(0) = 0,
uₖ″(x) = −q₀xuₖ₋₁(x)/u₀²(x), uₖ(0) = uₖ′(0) = 0.
After inserting u₀ = α into the above equations up to k = 4, for the Blasius flow, we can derive the following:
u⁽⁴⁾(x) = α − x³/(6α) + q₀x⁶/(180α³) − q₀²x⁹/(12960α⁵) + q₀³x¹²/(1710720α⁷).
By taking q₀ = −1.897, α = 0.46961215 is found by solving u⁽⁴⁾(1) = 0. This value of α is very close to that in Equation (134). Figure 9 compares the analytic solution to the exact one obtained by RK4, whose ME = 1.76 × 10⁻² is quite small.
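This step can be reproduced numerically (a Python sketch; the series signs, and the negative sign of q₀, are reconstructions chosen to be consistent with the recursion and the reported root, since the flattened source is ambiguous about minus signs):

```python
def u4_at_1(alpha, q0=-1.897):
    # u^(4)(1) from the fourth-order series above (Blasius flow)
    return (alpha - 1 / (6 * alpha) + q0 / (180 * alpha ** 3)
            - q0 ** 2 / (12960 * alpha ** 5)
            + q0 ** 3 / (1710720 * alpha ** 7))

def bisect(f, lo, hi, tol=1e-12):
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = bisect(u4_at_1, 0.3, 0.6)
print(alpha)   # ≈ 0.46961215, i.e. f''(0) for Blasius flow
```

The recovered root agrees with the classical Blasius value f″(0) ≈ 0.4696 quoted in Equation (134).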
After inserting u₀ = βx into Equations (144)–(147) up to k = 6, for the Sakiadis flow, we can derive the following:
u ( 6 ) ( x ) = β x x 2 2 β + q 0 x 3 12 β 3 q 0 2 x 4 144 β 4 + q 0 3 x 5 2880 β 5 q 0 4 x 6 86400 β 6 + q 0 5 x 7 3628800 β 7 .
By taking q 0 = 1.5 , β = 1.1213468658 is found by solving u ( 6 ) ( 1 ) = 0 . The value of α is obtained from Equation (148) by inserting x = 1 as follows:
α = β 1 2 β + q 0 12 β 3 q 0 2 144 β 4 + q 0 3 2880 β 5 q 0 4 86400 β 6 + q 0 5 3628800 β 7 = 0.596052 .
α = 0.6254 was obtained by Zheng et al. [40] through a more complicated process. Figure 10 compares the analytic solution to the exact one obtained by the Lie-group shooting/boundary shape function method [37], whose ME = 2.61 × 10⁻² is quite small.
To obtain a faster-converging solution for the Sakiadis boundary layer problem, Equation (142) is expressed for y(z) = u(x) by
y″(z) + (1 − z)y(z)/y²(z) = 0,
where
z = 1 − x, y′(0) = 0, y(1) = 0.
We attempt to compute y ( 0 ) = α .
We linearize Equation (149) with respect to y 0 = α to
y″(z) + μq₀(1 − z)y(z)/y₀² = μ(q₀ − 1)(1 − z)y₀/y₀².
It follows that by equating the coefficients preceding μ k , k = 1 , 2 , , k :
y₁″(z) = (z − 1)/α, y₁(0) = y₁′(0) = 0, y₂″(z) = q₀(z − 1)y₁(z)/α², y₂(0) = y₂′(0) = 0, yₖ″(z) = q₀(z − 1)yₖ₋₁(z)/α², yₖ(0) = yₖ′(0) = 0.
After inserting y₀ = α into the above equations, we can derive a faster-converging solution for the Sakiadis flow up to k = 3:
y⁽³⁾(z) = α − z²/(2α) + z³/(6α) + q₀z⁴/(24α³) − q₀z⁵/(30α³) + q₀z⁶/(180α³) − q₀²z⁶/(720α⁵) + 9q₀²z⁷/(5040α⁵) − 7q₀²z⁸/(10080α⁵) + q₀²z⁹/(12960α⁵).
By taking q₀ = −1.53, α = 0.6253392 is found by solving y⁽³⁾(1) = 0. The value α = 0.6253392 is very close to the α = 0.6254 obtained by Zheng et al. [40]. Here, with k = 3, we obtained a higher-order analytic solution up to z⁹; in Equation (148), only x⁷ was reached by taking k up to k = 6. Figure 10 compares the second analytic solution to the exact one obtained by the Lie-group shooting/boundary shape function method [37], whose ME = 9.8 × 10⁻³ is improved compared to that of the first analytic solution.
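A numerical check of this step (a Python sketch; the series signs follow the recursion above, and q₀ is taken as −1.53 on the assumption that the printed value lost its minus sign):

```python
def y3_at_1(alpha, q0=-1.53):
    # y^(3)(1): the k = 3 series above evaluated at z = 1
    return (alpha - 1 / (2 * alpha) + 1 / (6 * alpha)
            + q0 / (24 * alpha ** 3) - q0 / (30 * alpha ** 3)
            + q0 / (180 * alpha ** 3)
            - q0 ** 2 / (720 * alpha ** 5) + 9 * q0 ** 2 / (5040 * alpha ** 5)
            - 7 * q0 ** 2 / (10080 * alpha ** 5)
            + q0 ** 2 / (12960 * alpha ** 5))

def bisect(f, lo, hi, tol=1e-12):
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = bisect(y3_at_1, 0.5, 0.8)
print(alpha)   # ≈ 0.62534, close to the value 0.6254 of Zheng et al.
```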
Example 9.
To further clarify the versatility of the splitting–linearizing Liapunov method, we consider a highly nonlinear BVP:
u″(x) + 3u(x)u′(x) + u³(x) = 0, u(0) = 1, u(1) = 1,
whose exact solution is
u(x) = (2x + 1)/(x² + x + 1).
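The exact solution can be verified against Equation (150) (a Python sketch; with w = x² + x + 1 one has u = w′/w, so u″ + 3uu′ + u³ = w‴/w = 0, and the check below uses central differences):

```python
def u(x):
    return (2 * x + 1) / (x ** 2 + x + 1)

def residual(x, h=1e-4):
    # u'' + 3 u u' + u**3, with derivatives by central differences
    up = (u(x + h) - u(x - h)) / (2 * h)
    upp = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2
    return upp + 3 * u(x) * up + u(x) ** 3

assert all(abs(residual(x)) < 1e-5 for x in (0.1, 0.3, 0.5, 0.7, 0.9))
assert u(0) == 1 and u(1) == 1     # boundary conditions of Equation (150)
print("exact solution verified")
```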
Let
u′(x) = y(x), u(x) = 1 + ∫₀ˣ y(s) ds,
where the left condition u(0) = 1 is considered. We suppose that u′(0) = A is unknown, such that
y ( 0 ) = A ,
where A is to be determined. Equation (150) can be written as
y′(x) + 3[1 + ∫₀ˣ y(s) ds] y(x) = −[1 + ∫₀ˣ y(s) ds]³.
We suppose that
y₀(x) = (A + b)e^(λx) − be^(2λx),
satisfying y 0 ( 0 ) = A , where b and λ are parameters. Then, we have
1 + 0 x y 0 ( s ) d s = b + 2 λ + 2 A 2 λ A + b λ e λ x + b 2 λ e 2 λ x = a 1 e λ x + a 2 e 2 λ x ,
b = 2λ − 2A, a₁ := (A + b)/λ, a₂ := −b/(2λ),
where b is selected such that the constant term in Equation (155) is zero.
Now, we recast Equation (153) to
y ( x ) + 3 μ q 0 ( a 1 e λ x + a 2 e 2 λ x ) y ( x ) = 3 μ ( q 0 1 ) ( a 1 e λ x + a 2 e 2 λ x ) y 0 ( x ) μ ( a 1 e λ x + a 2 e 2 λ x ) 3 ,
where μ is a dummy parameter. Then, the analytic solution is determined by
y ( x ) = y 0 ( x ) + k = 1 m ( μ ) k y k ( x ) = y 0 ( x ) μ y 1 ( x ) + μ 2 y 2 ( x ) + ,
where y k ( x ) , k = 1 , 2 , , m are to be determined.
Inserting Equation (158) into Equation (157) and equating the coefficients preceding μ k , k = 1 , 2 , , m , we can derive the following:
y 1 ( x ) = 3 ( a 1 e λ x + a 2 e 2 λ x ) y 0 ( x ) + ( a 1 e λ x + a 2 e 2 λ x ) 3 , y 1 ( 0 ) = 0 ,
y k ( x ) = ( 3 q 0 a 1 e λ x + 3 q 0 a 2 e 2 λ x ) y k 1 ( x ) , y k ( 0 ) = 0 , k = 2 , , m .
For the first-order solution, inserting Equation (154) into Equation (159), we have
y 1 ( x ) = a 12 e 2 λ x + a 13 e 3 λ x + a 14 e 4 λ x + a 15 e 5 λ x + a 16 e 6 λ x ,
where
a 12 = 3 a 1 ( 1 + b ) , a 13 = 3 a 2 ( 1 + b ) + a 1 3 3 a 1 b , a 14 = 3 a 1 2 a 2 3 a 2 b , a 15 = 3 a 1 a 2 2 , a 16 = a 2 3 .
Let us define
E k ( x ) = 0 x e k λ s d s = 1 k λ ( 1 e k λ x ) , F k ( x ) = 0 x E k ( s ) d s = x k λ [ x E k ( x ) ] .
It follows from Equations (161), (152), and (158) with m = 1 and μ = 1 that
u ( x ) = a 1 e λ 1 x + a 2 e 2 λ 1 x + a 12 F 2 ( x ) + a 13 F 3 ( x ) + a 14 F 4 ( x ) + a 15 F 5 ( x ) + a 16 F 6 ( x ) ,
where we have replaced λ in the first two terms by λ₁ to control the rising part of the curve.
We take λ = 1.05 and λ₁ = 1.5 , and A = 0.781457 is obtained via the interval reduction method. The first-order approximate analytic solution is quite close to the exact one in Equation (151), as shown in Figure 11 with ME = 3.97 × 10⁻³.

9. Conclusions

We have proven that the back substitution of the Picard iteration method for the second-order nonlinear ODE is equivalent to the variational iteration method. Based on the idea of the boundary shape function method, we developed a novel modified variational iteration method (MVIM) for second-order nonlinear BVPs with mixed-boundary conditions. The main contributions of the present paper are the introduction of a boundary shape function and the transformation from u(x) to y(x) for seeking the analytic solution. Once y(x) is solved, u(x) automatically satisfies the specified mixed-boundary conditions. We transformed the nonlinear BVP with mixed-boundary conditions into the initial value problem of a nonlinear ODE whose parameters are the unknown right-hand values of the new variable. The unique solution of the transformed IVP was proven under the Lipschitz condition, and the MVIM can find the analytic solution very quickly. Two methods were developed to determine the unknown values of the parameters: one solves nonlinear algebraic equations, and the other minimizes the error of the solution or the error of the governing equation. We found that the minimization techniques were better than solving the nonlinear algebraic equations in terms of efficiency and accuracy. The examples showed that the novel MVIM and the Liapunov method, together with the splitting–linearizing method based on the boundary shape function, are accurate and effective. Up to the second-order analytic solution, the accuracy is acceptable.
There exist many nonlinear boundary value problems with analytic solutions; in the present paper, we only considered a very restricted type of nonlinear boundary value problem. Since boundary layer problems in fluid mechanics are an important issue, we took them as an application of the proposed method. We were only concerned with those that could be transformed into second-order nonlinear boundary value problems; many more boundary layer problems are governed by third-order nonlinear boundary value problems. Discontinuous-type nonlinear boundary value problems were also not addressed here; these might be pursued in the near future.

Author Contributions

Methodology, C.-S.L.; Validation, B.L. and C.-L.K.; Formal analysis, C.-L.K.; Writing—original draft, C.-S.L.; Writing—review & editing, B.L. and C.-L.K. All authors have read and agreed to the published version of the manuscript.

Funding

This study was partially supported by the National Science and Technology Council under grant NSTC 113-2221-E-019-043-MY3, which is gratefully acknowledged.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Some comments are given for Equation (10). It is known that even for a linear BVP with the Dirichlet boundary conditions, there are different situations for the existence of the solution. Like that for
u ( x ) + u ( x ) = 0 , 0 < x < b , u ( 0 ) = 0 , u ( b ) = c ,
with the following variable transformation:
u ( x ) = v ( x ) x b [ α c ] ,
where v ( 0 ) = v ( 0 ) = 0 and v ( b ) = α , we can derive a specific case of Equation (10):
$$(c - \alpha)\sin b = bc.$$
Depending on the given values of b > 0 and c, there are four possible cases for the existence of the solution of u ( x ) :
$$\begin{aligned}
&(a)\;\; c = 0,\; b = n\pi: \;\; \alpha \in \mathbb{R} \;\; \text{(infinitely many solutions)},\\
&(b)\;\; c = 0,\; b \neq n\pi: \;\; \alpha = 0 \;\; \text{(trivial solution)},\\
&(c)\;\; c \neq 0,\; b = n\pi: \;\; (c - \alpha)\sin(n\pi) = n\pi c \;\;\text{cannot hold} \;\; \text{(no solution)},\\
&(d)\;\; c \neq 0,\; b \neq n\pi: \;\; \alpha = c - \frac{bc}{\sin b} \;\; \text{(unique solution)}.
\end{aligned}$$
We can guarantee the unique solution of v(x) upon imposing the initial conditions v(0) = 0 and v′(0) = 0:
$$v(x) = \frac{c - \alpha}{b}\sin x + \frac{x}{b}[\alpha - c],$$
which depends continuously on the parameter α . However, the existence of u ( x ) depends on the value of α determined by b and c in Equation (A1).
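As a quick check of case (d), note that Equations (A2) and (A3) combine to u(x) = ((c − α)/b) sin x. The following sketch (plain Python; the sample values b = 1, c = 2 are arbitrary choices with c ≠ 0 and b ≠ nπ) verifies the boundary conditions and the ODE numerically:

```python
import math

def alpha_unique(b, c):
    """Case (d) of the existence analysis: alpha = c - b*c/sin(b)."""
    return c - b * c / math.sin(b)

b, c = 1.0, 2.0            # sample values with c != 0 and b != n*pi
alpha = alpha_unique(b, c)

def u(x):
    # u(x) = v(x) - (x/b)(alpha - c) = ((c - alpha)/b) * sin(x)
    return (c - alpha) / b * math.sin(x)

print(u(0.0), u(b) - c)    # both residuals ≈ 0
```

Choosing b = nπ with c ≠ 0 instead makes α undefined (division by sin(nπ) = 0), mirroring the no-solution case (c).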
For the example given in Equations (21) and (22), we have obtained the exact solution in Equation (30) by exactly solving Equations (25) and (26). Instead of the exact solution, we employ the MVIM to seek an approximate analytic solution. By taking q1(x) = 1 − x and q2(x) = x, we have H = 2y − 3y′ − 2γx + 3γ + 2. Starting from y0(x) = 0 and carrying out four iterations of the MVIM, we can derive
$$y_4(x) = -\frac{3\gamma x^2}{2} - \frac{7\gamma x^3}{6} - \frac{5\gamma x^4}{8} - \frac{31\gamma x^5}{120} + \frac{\gamma x^6}{4} - \frac{13\gamma x^7}{315} + \frac{\gamma x^8}{420} - \frac{\gamma x^9}{22680} - x^2 - x^3 - \frac{7 x^4}{12} - \frac{x^5}{4} + \frac{5 x^6}{36} - \frac{x^7}{70} + \frac{x^8}{2520},$$
where γ = α + 1. By imposing y4(1) = α, we can obtain the following:
$$\gamma = \alpha + 1 = \frac{-1 - \frac{7}{12} - \frac{1}{4} + \frac{5}{36} - \frac{1}{70} + \frac{1}{2520}}{1 + \frac{3}{2} + \frac{7}{6} + \frac{5}{8} + \frac{31}{120} - \frac{1}{4} + \frac{13}{315} - \frac{1}{420} + \frac{1}{22680}}.$$
Upon deriving y4(x), u4(x) is given as follows:
$$u_4(x) = y_4(x) - [y_4(1) + 1]x + 1.$$
In u4(x), we replace α by y4(1) to preserve the right-boundary condition.
In Figure 1, we compare it to the exact one in Equation (23); the ME is 3.89 × 10⁻², which is more accurate than that obtained using the Picard iteration method. Moreover, the MVIM converges faster than the Picard iteration method, which spent five iterations to reach ME = 5.72 × 10⁻².
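These four iterations can be reproduced symbolically. The sketch below (sympy and numpy assumed available) runs the Picard double integration, which the paper shows to be equivalent to the MVIM by back substitution, imposes y4(1) = γ − 1 to recover Equation (A4), and estimates the ME against the exact solution u_e(x) = (e^{x+1} − e^{2x})/(e − 1), which reproduces the "Exact" column of Table A1:

```python
import numpy as np
import sympy as sp

x, s, t, g = sp.symbols('x s t gamma')

# Picard double integration, equivalent to the MVIM by back substitution:
# y'' = 3y' - 2y + 2*gamma*x - 3*gamma - 2, with y(0) = y'(0) = 0
y = sp.Integer(0)
for _ in range(4):
    rhs = 3*sp.diff(y, x) - 2*y + 2*g*x - 3*g - 2
    y = sp.expand(sp.integrate(sp.integrate(rhs.subs(x, s), (s, 0, t)), (t, 0, x)))

gamma = sp.solve(sp.Eq(y.subs(x, 1), g - 1), g)[0]   # impose y4(1) = alpha = gamma - 1

# u4(x) = y4(x) - gamma*x + 1 automatically satisfies u(0) = 1 and u(1) = 0
u4 = sp.lambdify(x, (y - g*x + 1).subs(g, gamma), 'numpy')
xs = np.linspace(0.0, 1.0, 501)
ue = (np.exp(xs + 1) - np.exp(2*xs)) / (np.e - 1)    # exact solution of Eqs. (21) and (22)
print(float(gamma), np.max(np.abs(u4(xs) - ue)))     # gamma ≈ -0.3937, ME ≈ 3.9e-2
```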
It can be seen that α in Equation (A4) is quite complicated. Next, we apply Equation (63) to determine the optimal value of α . We begin with y 0 ( x ) = 0 and
$$H = 2 + 3(1 + \alpha) - 2(1 + \alpha)x + 2y(x) - 3y'(x).$$
Up to the third-order analytic solution, we can obtain the following:
$$y_3(x) = \frac{3\eta - 2}{2}x^2 + \frac{7\eta - 6}{6}x^3 + \frac{15\eta - 14}{24}x^4 + \frac{12 - 25\eta}{60}x^5 + \frac{3\eta - 2}{180}x^6 - \frac{\eta}{630}x^7, \qquad u_3(x) = y_3(x) - [y_3(1) + 1]x + 1,$$
where η = −1 − α, and the optimal value is α = −1.278. The ME is 2.85 × 10⁻², which is more accurate than the fourth-order solution in Equation (A5), as compared in Table A1. Both Equations (A6) and (A5) exactly fulfill the boundary conditions.
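The third-order solution in Equation (A6) is easy to evaluate numerically. The sketch below (assuming the exact solution u_e(x) = (e^{x+1} − e^{2x})/(e − 1) that underlies the "Exact" column of Table A1) reproduces the Equation (A6) column and the quoted ME:

```python
import numpy as np

alpha = -1.278
eta = -1.0 - alpha                # eta = 0.278

def y3(x):
    # third-order solution (A6) of the Liapunov/splitting-linearizing approach
    return ((3*eta - 2)/2*x**2 + (7*eta - 6)/6*x**3 + (15*eta - 14)/24*x**4
            + (12 - 25*eta)/60*x**5 + (3*eta - 2)/180*x**6 - eta/630*x**7)

def u3(x):
    # replacing alpha by y3(1) preserves the right-boundary condition
    return y3(x) - (y3(1.0) + 1)*x + 1

xs = np.linspace(0.0, 1.0, 1001)
ue = (np.exp(xs + 1) - np.exp(2*xs)) / (np.e - 1)
print(u3(0.5), np.max(np.abs(u3(xs) - ue)))   # ≈ 1.04222 and ME ≈ 2.85e-2
```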
Table A1. For the example in Equations (21) and (22), we compare the approximate solutions obtained using the MVIM with Equations (A5)–(A7) to the exact one.
x | Exact | Equation (A6) | Equation (A5) | Equation (A7)
0 | 1.00000 | 1.00000 | 1.00000 | 1.00000
0.1 | 1.03753 | 1.05255 | 1.03468 | 1.03025
0.2 | 1.06402 | 1.08885 | 1.05740 | 1.04716
0.3 | 1.07501 | 1.10347 | 1.06351 | 1.04601
0.4 | 1.06482 | 1.09023 | 1.04724 | 1.02132
0.5 | 1.02626 | 1.04222 | 1.00164 | 0.966968
0.6 | 0.950319 | 0.951926 | 0.918457 | 0.876260
0.7 | 0.825678 | 0.811284 | 0.788167 | 0.741986
0.8 | 0.638204 | 0.611764 | 0.599927 | 0.556525
0.9 | 0.370281 | 0.344429 | 0.341633 | 0.311926
1.0 | 0 | 0 | 0 | 2.12 × 10⁻⁸
Using the VIM in Equation (54) to solve Equations (21) and (22) is quite complicated, since we must start from a nonzero function:
$$u_0(x) = 1 + ax,$$
which satisfies the left-boundary condition in Equation (22), while a is to be determined by matching the right-boundary condition in Equation (22). Through a lengthy computation up to the third-order analytic solution, we can obtain the following:
$$u_3(x) = 1 + ax + \frac{b}{2}x^2 + \frac{3b - 2a}{6}x^3 + \frac{7b - 6a}{24}x^4 - \frac{7a + 6b}{60}x^5 + \frac{6a + b}{180}x^6 - \frac{a}{630}x^7,$$
where b = 3a − 2 and a = 3514/9887 = 0.3554162031 is solved from u3(1) = 0. The ME is 8.44 × 10⁻², which is less accurate than the third-order solution in Equation (A6), as compared in Table A1. The values of u(x) obtained using the VIM are underestimated, and the right-boundary condition is not precisely satisfied.
Instead of solving u 3 ( 1 ) = 0 to obtain a, which is quite complicated, we seek the optimal value of a by
$$\min_a \max_{x \in (0,1)} \left| u_3(x) - u_e(x) \right| \quad \text{or} \quad \min_a \max_{x \in (0,1)} \left| u_3''(x) - 3u_3'(x) + 2u_3(x) \right|.$$
When the optimal value a = 0.36976 is obtained, the ME reduces to 5.72 × 10⁻². However, at this moment, the error of the right-boundary condition increases to 5.72 × 10⁻².
Notice that in the minimization problem with a single unknown value a, we can adopt the so-called interval reduction method to find the proper value of a. First, we select a large interval and tabulate the values of max |u3(x) − ue(x)| over it. Observing where the minimal point is located, we reduce the interval to a smaller one containing that point. Repeating this procedure a few times yields a quite accurate value of a.
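This interval reduction can be sketched as follows (numpy assumed; the grid size and the number of reduction steps are arbitrary choices), minimizing the solution error of the VIM iterate (A7) against the exact solution u_e(x) = (e^{x+1} − e^{2x})/(e − 1) tabulated in Table A1:

```python
import numpy as np

def u3(x, a):
    # third-order VIM solution (A7), with b = 3a - 2
    b = 3*a - 2
    return (1 + a*x + b/2*x**2 + (3*b - 2*a)/6*x**3 + (7*b - 6*a)/24*x**4
            - (7*a + 6*b)/60*x**5 + (6*a + b)/180*x**6 - a/630*x**7)

xs = np.linspace(0.0, 1.0, 401)
ue = (np.exp(xs + 1) - np.exp(2*xs)) / (np.e - 1)     # exact solution of Eqs. (21) and (22)
me = lambda a: float(np.max(np.abs(u3(xs, a) - ue)))  # maximum error as a function of a

lo, hi = 0.0, 1.0
for _ in range(8):              # repeatedly shrink the interval around the best grid point
    grid = np.linspace(lo, hi, 41)
    a_best = min(grid, key=me)
    step = (hi - lo) / 40
    lo, hi = a_best - step, a_best + step
print(a_best, me(a_best))       # a ≈ 0.37, ME close to the reported 5.72e-2
```

Since the maximum of absolute values of functions affine in a is convex in a, the reduction converges to the unique minimizer.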
It is apparent that the MVIM is better than the VIM: while the MVIM satisfies the boundary conditions automatically, the VIM does not satisfy the right-boundary condition.

Appendix B

In this appendix, we apply the MVIM to solve a singularly perturbed BVP [41]:
$$\varepsilon u''(x) - u(x) = 0, \quad u(0) = 1, \quad u(1) = 0,$$
$$u(x) = \frac{e^{(1-x)/\sqrt{\varepsilon}} - e^{-(1-x)/\sqrt{\varepsilon}}}{e^{1/\sqrt{\varepsilon}} - e^{-1/\sqrt{\varepsilon}}}.$$
As in Example 4, q1(x) = 1 − x and q2(x) = x. Then, using Equations (41), (42) and (45), we have
$$u(x) = y(x) - \gamma x + 1,$$
where γ = α + 1.
Starting from y 0 ( x ) = 0 and using Equation (51), we can derive the fourth-order solution:
$$y_4(x) = \frac{1}{\varepsilon}\left(\frac{x^2}{2} - \frac{\gamma x^3}{6}\right) + \frac{1}{\varepsilon^2}\left(\frac{x^4}{24} - \frac{\gamma x^5}{120}\right) + \frac{1}{\varepsilon^3}\left(\frac{x^6}{720} - \frac{\gamma x^7}{5040}\right) + \frac{1}{\varepsilon^4}\left(\frac{x^8}{40320} - \frac{\gamma x^9}{362880}\right) + \frac{1}{\varepsilon^5}\left(\frac{x^{10}}{3628800} - \frac{\gamma x^{11}}{39916800}\right),$$
and by imposing y 4 ( 1 ) = α , we can obtain
$$\gamma = \alpha + 1 = \frac{1 + \frac{1}{2\varepsilon} + \frac{1}{24\varepsilon^2} + \frac{1}{720\varepsilon^3} + \frac{1}{40320\varepsilon^4} + \frac{1}{3628800\varepsilon^5}}{1 + \frac{1}{6\varepsilon} + \frac{1}{120\varepsilon^2} + \frac{1}{5040\varepsilon^3} + \frac{1}{362880\varepsilon^4} + \frac{1}{39916800\varepsilon^5}}.$$
We take ε = 0.1; upon comparing the analytic solution u4(x) = y4(x) − (α + 1)x + 1 to the exact solution (A10), the ME is 7.927 × 10⁻⁴.
When we apply Equation (63) to determine α in Equation (A11) with γ = 1 + α, the optimal value is α = 2.1726. The resulting fourth-order MVIM solution with the minimization technique is quite close to the exact one; its ME is 3.63 × 10⁻⁴, which is more accurate than that calculated using Equations (A11) and (A12) with ME = 7.927 × 10⁻⁴.
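The computations of this appendix can be verified in a few lines (plain Python; expressing the series coefficients through factorials is our observation, not the paper's notation):

```python
import math

eps = 0.1

# gamma from Eq. (A12); the coefficients are 1/(2k)! and 1/(2k+1)!
num = sum(1/(math.factorial(2*k) * eps**k) for k in range(6))
den = sum(1/(math.factorial(2*k + 1) * eps**k) for k in range(6))
gamma = num / den               # gamma = alpha + 1

def u4(x):
    # u4(x) = y4(x) - gamma*x + 1, with y4 from Eq. (A11)
    y = sum((x**(2*k)/math.factorial(2*k)
             - gamma*x**(2*k + 1)/math.factorial(2*k + 1)) / eps**k for k in range(1, 6))
    return y - gamma*x + 1

def u_exact(x):                 # Eq. (A10)
    r = math.sqrt(eps)
    return math.sinh((1 - x)/r) / math.sinh(1/r)

me = max(abs(u4(i/1000) - u_exact(i/1000)) for i in range(1001))
print(gamma - 1, me)            # alpha ≈ 2.173, ME close to the reported 7.927e-4
```

The boundary conditions u4(0) = 1 and u4(1) = 0 hold exactly by construction, since γ is chosen so that y4(1) = γ − 1.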

References

  1. Cash, J.R. On the numerical integration of nonlinear two-point boundary value problems using iterated deferred corrections, Part 1: A survey and comparison of some one-step formulae. Comput. Math. Appl. 1986, 12, 1029–1048. [Google Scholar] [CrossRef]
  2. Cash, J.R. On the numerical integration of nonlinear two-point boundary value problems using iterated deferred corrections, Part 2: The development and analysis of highly stable deferred correction formulae. SIAM J. Numer. Anal. 1988, 25, 862–882. [Google Scholar] [CrossRef]
  3. Cash, J.R.; Wright, R.W. Continuous extensions of deferred correction schemes for the numerical solution of nonlinear two-point boundary value problems. Appl. Numer. Math. 1998, 28, 227–244. [Google Scholar] [CrossRef]
  4. Ascher, U.M.; Mattheij, R.M.M.; Russell, R.D. Numerical Solution of Boundary Value Problems for Ordinary Differential Equations; SIAM: Philadelphia, PA, USA, 1995. [Google Scholar]
  5. Liu, C.S. The Lie-group shooting method for nonlinear two-point boundary value problems exhibiting multiple solutions. Comput. Model. Eng. Sci. 2006, 13, 149–163. [Google Scholar]
  6. Cabada, A.; Pouso, R.L. Existence theory for functional p-Laplacian equations with variable exponents. Nonlinear Anal. 2003, 52, 557–572. [Google Scholar] [CrossRef]
  7. Cabada, A.; O’Regan, D.; Pouso, R.L. Second order problems with functional conditions including Sturm-Liouville and multipoint conditions. Math. Nachr. 2008, 281, 1254–1263. [Google Scholar] [CrossRef]
  8. Mawhin, J.; Thompson, H.B. Bounding surfaces and second order quasilinear equations with compatible nonlinear functional boundary conditions. Adv. Nonlinear Stud. 2011, 11, 157–172. [Google Scholar] [CrossRef]
  9. Erbe, L.H. Nonlinear boundary value problems for second order differential equations. J. Diff. Equ. 1970, 7, 459–472. [Google Scholar] [CrossRef]
  10. Mawhin, J.; Schmitt, K. Upper and lower solutions and semilinear second order elliptic equations with non-linear boundary conditions. Proc. Roy. Soc. Edinburgh 1984, 97, 199–207. [Google Scholar] [CrossRef]
  11. De Coster, C.; Habets, P. Two-Point Boundary Value Problems: Lower And Upper Solutions; Elsevier: New York, NY, USA, 2006. [Google Scholar]
  12. He, J.H. Variational iteration method – a kind of non-linear analytical technique: Some examples. Int. J. Non-linear Mech. 1999, 34, 699–708. [Google Scholar] [CrossRef]
  13. He, J.H. Variational iteration method for autonomous ordinary systems. Appl. Math. Comput. 2000, 114, 115–123. [Google Scholar] [CrossRef]
  14. Herisanu, N.; Marinca, V. A modified variational iteration method for strongly nonlinear problems. Nonlinear Sci. Lett. A 2010, 1, 183–192. [Google Scholar]
  15. Turkyilmazoglu, M. An optimal variational iteration method. Appl. Math. Lett. 2011, 24, 762–765. [Google Scholar] [CrossRef]
  16. Wang, X.; Atluri, S.N. A unification of the concepts of the variational iteration, Adomian decomposition and Picard iteration methods; and a local variational iteration method. Comput. Model. Eng. Sci. 2016, 111, 567–585. [Google Scholar]
  17. Chang, S.H. Convergence of variational iteration method applied to two-point diffusion problems. Appl. Math. Model. 2016, 40, 6805–6810. [Google Scholar] [CrossRef]
  18. Chang, S.H. A variational iteration method involving Adomian polynomials for a strongly nonlinear boundary value problem. East Asian J. Appl. Math. 2019, 9, 153–164. [Google Scholar]
  19. Farkas, M. Periodic Motions; Springer: New York, NY, USA, 1994. [Google Scholar]
  20. Liapunov, A.M. Sur une série dans la théorie des équations différentielles linéaires du second ordre à coefficients périodiques. Zap. Akad. Nauk Fiz.-Mat. Otd. 8th series 1902, 13, 1–70. [Google Scholar]
  21. He, J.H. Homotopy perturbation method: A new nonlinear analytical technique. Appl. Math. Comput. 2003, 135, 73–79. [Google Scholar] [CrossRef]
  22. He, J.H. Homotopy perturbation method for solving boundary value problems. Phys. Lett. A 2006, 350, 87–88. [Google Scholar] [CrossRef]
  23. Noor, M.A.; Mohyud-Din, S.T. Homotopy perturbation method for solving sixth-order boundary value problems. Comput. Math. Appl. 2008, 55, 2953–2972. [Google Scholar] [CrossRef]
  24. Chun, C.; Sakthivel, R. Homotopy perturbation technique for solving two-point boundary value problems – comparison with other methods. Comput. Phys. Commun. 2010, 181, 1021–1024. [Google Scholar] [CrossRef]
  25. Khuri, S.A.; Sayfy, A. Generalizing the variational iteration method for BVPs: Proper setting of the correction functional. Appl. Math. Lett. 2017, 68, 68–75. [Google Scholar] [CrossRef]
  26. Liu, C.S.; Chang, C.W. Boundary shape function method for nonlinear BVP, automatically satisfying prescribed multipoint boundary conditions. Bound. Value Prob. 2020, 2020, 139. [Google Scholar] [CrossRef]
  27. Coddington, E.A.; Levinson, N. Theory of Ordinary Differential Equations; McGraw-Hill: New York, NY, USA, 1955. [Google Scholar]
  28. Reid, W.T. Ordinary Differential Equations; John Wiley & Sons: New York, NY, USA, 1971. [Google Scholar]
  29. Zhou, Z.; Shen, J. A second-order boundary value problem with nonlinear and mixed boundary conditions: Existence, uniqueness, and approximation. Abstr. Appl. Anal. 2010, 2010, 287473. [Google Scholar] [CrossRef]
  30. Liu, C.S.; Chang, J.R. Boundary shape functions methods for solving the nonlinear singularly perturbed problems with Robin boundary conditions. Int. J. Nonlinear Sci. Numer. Simul. 2020, 21, 797–806. [Google Scholar] [CrossRef]
  31. Rani, G.S.; Jayan, S.; Nagaraja, K.V. An extension of golden section algorithm for n-variable functions with MATLAB code. IOP Conf. Ser. Mater. Sci. Eng. 2018, 577, 012175. [Google Scholar] [CrossRef]
  32. Lu, J. Variational iteration method for solving two-point boundary value problems. J. Comput. Appl. Math. 2007, 207, 92–95. [Google Scholar] [CrossRef]
  33. Adomian, G.; Elrod, M.; Rach, R. A new approach to boundary value equations and application to a generalization of Airy’s equation. J. Math. Anal. Appl. 1989, 140, 554–568. [Google Scholar] [CrossRef]
  34. Ha, S.N. A nonlinear shooting method for two-point boundary value problems. Comput. Math. Appl. 2001, 42, 1411–1420. [Google Scholar] [CrossRef]
  35. Ha, S.N.; Lee, C.R. Numerical study for two-point boundary value problems using Green’s functions. Comput. Math. Appl. 2002, 44, 1599–1608. [Google Scholar] [CrossRef]
  36. Liu, C.S.; Kuo, C.L.; Chang, C.W. Decomposition-linearization-sequential homotopy methods for nonlinear differential/integral equations. Mathematics 2024, 12, 3557. [Google Scholar] [CrossRef]
  37. Liu, C.S.; Chang, C.W. Lie-group shooting/boundary shape function methods for solving nonlinear boundary value problems. Symmetry 2022, 14, 778. [Google Scholar] [CrossRef]
  38. Chang, C.W.; Chang, J.R.; Liu, C.S. The Lie-group shooting method for solving classical Blasius flat-plate problem. Comput. Mater. Contin. 2008, 7, 139–153. [Google Scholar]
  39. Cortell, R. Numerical solutions of the classical Blasius flat-plate problem. Appl. Math. Comput. 2005, 170, 706–710. [Google Scholar] [CrossRef]
  40. Zheng, L.C.; Chen, X.H.; Chang, X.X. Analytical approximants for a boundary layer flow on a stretching moving surface with a power velocity. Int. J. Appl. Mech. Eng. 2004, 9, 795–802. [Google Scholar]
  41. Khuri, S.A.; Sayfy, A. Self-adjoint singularly perturbed boundary value problems: An adaptive variational approach. Math. Meth. Appl. Sci. 2013, 36, 1070–1079. [Google Scholar] [CrossRef]
Figure 1. Equations (21) and (22) solved using the Picard iteration method and a modified variational iteration method (MVIM), comparing solutions and displaying the error obtained using the Picard iteration method.
Mathematics 13 00354 g001
Figure 2. For the singular equation in Example 2 solved using the modified variational iteration method, we show (a) the error of governing equation, and (b) a comparison of solutions and the errors.
Mathematics 13 00354 g002
Figure 3. For the stiff equation in Example 3 solved using the modified variational iteration method, both by solving a nonlinear algebraic equation and with a minimization technique, we compare the solutions to that obtained using the LGSM.
Mathematics 13 00354 g003
Figure 4. For a nonlinear equation of Example 4 solved by the modified variational iteration method with a minimization technique, we compare the solutions obtained using MVIM and VIM to the exact one.
Mathematics 13 00354 g004
Figure 5. For a nonlinear equation of Example 5 solved using the modified variational iteration method with the minimization of the error of solution and the minimization of the error of governing equation, we compare the solutions to the exact solution.
Mathematics 13 00354 g005
Figure 6. For a nonlinear nonlocal BVP solved using the modified variational iteration method with the minimization of the error of solution, we compare the first-order analytic solution to the exact one.
Mathematics 13 00354 g006
Figure 7. For Example 6 solved using the Liapunov technique and a splitting–linearizing method, we compare the first-order and second-order solutions to the exact solution.
Mathematics 13 00354 g007
Figure 8. For Example 7 of a three-point BVP solved using the Liapunov technique and a splitting–linearizing method, we compare the second-order solution to the exact solution.
Mathematics 13 00354 g008
Figure 9. For the Blasius boundary layer problem solved using the Liapunov technique and the linearization method, we compare the analytic solution to the exact solution.
Mathematics 13 00354 g009
Figure 10. For the Sakiadis boundary layer problem solved using the Liapunov technique and the linearization method, we compare the analytic solution to the exact solution.
Mathematics 13 00354 g010
Figure 11. For Example 9, we compare the first-order approximate analytic solution obtained using the linearized Lyapunov method to the exact one.
Mathematics 13 00354 g011
Table 1. For Example 2, comparing the approximate solutions obtained using the MVIM and by Lu [32] to the exact one.
x | Exact | Present | Lu [32]
0 | 1.00000 | 1.00000 | 0.8646
0.1 | 1.00063 | 1.00057 | 0.8665
0.2 | 1.00250 | 1.00230 | 0.8723
0.3 | 1.00563 | 1.00520 | 0.8820
0.4 | 1.01000 | 1.00930 | 0.8956
0.5 | 1.01563 | 1.01465 | 0.9131
0.6 | 1.02250 | 1.02130 | 0.9346
0.7 | 1.03062 | 1.02932 | 0.9603
0.8 | 1.04000 | 1.03880 | 0.9901
0.9 | 1.05062 | 1.04982 | 1.0241
1.0 | 1.06250 | 1.06250 | 1.0625
Table 2. For a nonlocal BVP, we compare the approximate solutions obtained by the MVIM to the exact one.
x | Exact | Present | Absolute Error
0 | 4.0000 | 4.0000 | 0.0
0.1 | 3.3058 | 3.3135 | 0.0077
0.2 | 2.7778 | 2.8073 | 0.0295
0.3 | 2.3669 | 2.4295 | 0.0626
0.4 | 2.0408 | 2.1372 | 0.0964
0.5 | 1.7778 | 1.8961 | 0.1183
0.6 | 1.5625 | 1.6810 | 0.1185
0.7 | 1.3841 | 1.4751 | 0.0910
0.8 | 1.2346 | 1.2706 | 0.0360
0.9 | 1.1080 | 1.0685 | 0.0395
1.0 | 1.0000 | 0.8785 | 0.1215
Table 3. For Example 6, we compare the values of u ( 1 ) ( x ) and u ( 2 ) ( x ) with different x values to the exact one.
x | Equation (115) | Equation (116) | Exact | Equation (125)
0.1 | −0.658247 | −0.654982 | −0.654333 | −0.826110
0.2 | −0.481409 | −0.479186 | −0.478722 | −0.600393
0.4 | −0.118469 | −0.119156 | −0.119050 | −0.136309
0.5 | 0.064340 | 0.061822 | 0.061794 | 0.095388
0.7 | 0.430574 | 0.424832 | 0.424726 | 0.540575
0.9 | 0.804495 | 0.799604 | 0.799879 | 0.902546
Table 4. For Example 7 with different x values, we list the exact solution, the second-order solution, and the RE.
x | 1.1 | 1.4 | 1.8 | 2.0 | 2.4 | 2.7 | 3
Exact | 15.755 | 13.389 | 12.129 | 12 | 12.427 | 13.216 | 14.333
Present | 15.883 | 13.513 | 12.053 | 11.875 | 12.394 | 13.346 | 14.458
RE | 8.08 × 10⁻³ | 9.31 × 10⁻³ | 6.30 × 10⁻³ | 1.04 × 10⁻² | 2.63 × 10⁻³ | 9.82 × 10⁻³ | 8.71 × 10⁻³

Liu, C.-S.; Li, B.; Kuo, C.-L. Variational Iteration and Linearized Liapunov Methods for Seeking the Analytic Solutions of Nonlinear Boundary Value Problems. Mathematics 2025, 13, 354. https://doi.org/10.3390/math13030354
