Adaptive Piecewise Poly-Sinc Methods for Ordinary Differential Equations

We propose a new method of adaptive piecewise approximation based on Sinc points for ordinary differential equations. The adaptive method is a piecewise collocation method that utilizes Poly-Sinc interpolation to reach a preset level of accuracy for the approximation. It extends our earlier adaptive piecewise Poly-Sinc method for function approximation, for which we derived an a priori error estimate and showed its exponential convergence in the number of iterations. In this work, we show the exponential convergence in the number of iterations of the a priori error estimate obtained from the piecewise collocation method, provided that a good estimate of the exact solution of the ordinary differential equation at the Sinc points exists. We use a statistical approach for partition refinement. The adaptive greedy piecewise Poly-Sinc algorithm is validated on regular and stiff ordinary differential equations.


Introduction
Numerous phenomena in engineering, physics, and mathematics are modeled either by initial value problems (IVPs) or by boundary value problems (BVPs) described by ordinary differential equations (ODEs). Accordingly, the numerical solution of IVPs for deterministic and random ODEs is a basic problem in the sciences. For a review of the state of the art in theory and algorithms for numerical initial value solvers, we refer to the monographs [1-7] and the references therein.
Exact solutions may not be available for some ODEs. This has led to the development of a number of methods to estimate the a posteriori error, which is based on the residual of the ODE [8] and forms the basis of adaptive methods for ODEs. A posteriori error estimates have been derived for different numerical methods, such as piecewise polynomial collocation methods [9,10] and Galerkin methods [11-14]. An a posteriori error estimate in connection with adjoint methods was developed in [15]. Kehlet et al. [16] incorporated numerical round-off errors in their a posteriori estimates. An a posteriori error estimate based on the variational principle was derived in [17]. Convergence rates for the adaptive approximation of ODEs using a posteriori error estimation were discussed in [18,19]. A less common form is the a priori error estimate [8,20]. Hybrid a priori-a posteriori error estimates for ODEs were developed in [21,22]. An advantage of the a priori error estimate over the a posteriori error estimate is that the former does not require the computation of the residual of the ODE; however, it does require some knowledge about the exact solution of the ODE. It was shown in [23,24] that the a priori error estimate of the Poly-Sinc approximation is exponentially convergent in the number of Sinc points, provided that the exact solution belongs to the set of analytic functions.
We propose an adaptive piecewise method in which the points of a given partition are used as partitioning points. This piecewise property allows greater flexibility in constructing polynomials of arbitrary degree in each partition. Recently, we developed an a priori error estimate for the adaptive method based on piecewise Poly-Sinc interpolation for function approximation [25]. In [25], we used a statistical approach for partition refinement in which we computed the fraction of a standard deviation [26-28] as the ratio of the mean absolute deviation to the sample standard deviation. It was shown in [29] that this ratio approaches √(2/π) ≈ 0.798 for an infinite number of normal samples. We extend the work of [25] to regular and stiff ODEs. In this paper, we discuss the adaptive piecewise Poly-Sinc method for regular and stiff ODEs and show that the exponentially convergent a priori error estimate for our adaptive method differs from that for function approximation [25] only by a small constant.
This paper is organized as follows. Section 2 provides an overview of the Poly-Sinc approximation, the residual computation, the indefinite integral approximation, and the collocation method. Section 3 discusses the piecewise collocation method, which is the cornerstone of the adaptive piecewise Poly-Sinc algorithm. In Section 4, we present the adaptive piecewise Poly-Sinc algorithm for ODEs and the statistical approach for partition refinement. We also demonstrate the exponential convergence of the a priori error estimate for our adaptive method. We validate our adaptive Poly-Sinc method on regular ODEs and ODEs whose exact solutions exhibit an interior layer, a boundary layer, and a shock layer in Section 5. Finally, we present our concluding remarks in Section 6.

Poly-Sinc Approximation
A novel family of polynomial approximations called Poly-Sinc interpolation, which interpolates data of the form {x_k, y_k}, k = −M, . . . , N, where the {x_k} are Sinc points, was derived in [23,30] and extended in [24]. The interpolation of this type of data is accurate provided that the function y with values y_k = y(x_k) belongs to the space of analytic functions [30,31]. For ease of presentation and discussion, we assume that M = N. Poly-Sinc approximation was developed to mitigate the poor accuracy associated with differentiating the Sinc approximation when approximating the derivative of a function [23]. Moreover, Poly-Sinc approximation is characterized by its ease of implementation. Theoretical frameworks for the error analysis of function approximation, quadrature, and the stability of the Poly-Sinc approximation were studied in [23,24,32,33]. Furthermore, Poly-Sinc approximation has been used to solve BVPs in ordinary and partial differential equations [31,34-38]. We start with a brief overview of Lagrange interpolation; then, we discuss the generation of Sinc points using conformal mappings.

Lagrange Interpolation
Lagrange interpolation is a polynomial interpolation scheme [39] constructed from the Lagrange basis polynomials b_k(x) = g(x)/((x − x_k) g'(x_k)), where {x_k}, k = 1, . . . , m, are the interpolation points and g(x) = ∏_{l=1}^{m} (x − x_l). The Lagrange basis polynomials satisfy the cardinality property b_k(x_l) = δ_{kl}. Hence, the polynomial approximation in Lagrange form can be written as y_h(x) = ∑_{k=1}^{m} y(x_k) b_k(x), (1) where y_h(x) is a polynomial of degree m − 1 that interpolates the function y(x) at the interpolation points, i.e., y_h(x_k) = y(x_k). For Sinc points, the polynomial approximation y_h(x) becomes y_h(x) = ∑_{k=−N}^{N} y(x_k) b_k(x), (2) where m = 2N + 1 is the number of Sinc points. If the coefficients y(x_k) are unknown, then we replace y(x_k) with c_k, and Equations (1) and (2) become y_c(x) = ∑_{k=1}^{m} c_k b_k(x) (3) and y_c(x) = ∑_{k=−N}^{N} c_k b_k(x), (4) respectively.
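As an illustration, the Lagrange basis and the interpolant above can be sketched in a few lines of Python; the node set and the test function e^x here are arbitrary choices for demonstration, not taken from the paper.

```python
import numpy as np

def lagrange_basis(x, nodes, k):
    """Evaluate the k-th Lagrange basis polynomial b_k(x) for the given nodes."""
    others = np.delete(nodes, k)
    return np.prod((x - others) / (nodes[k] - others))

def lagrange_interpolate(x, nodes, values):
    """Evaluate the Lagrange-form interpolant y_h(x) = sum_k y(x_k) b_k(x)."""
    return sum(values[k] * lagrange_basis(x, nodes, k) for k in range(len(nodes)))

nodes = np.array([0.0, 0.3, 0.7, 1.0])
values = np.exp(nodes)

# Cardinality property: b_k(x_l) = delta_{kl}
assert abs(lagrange_basis(nodes[1], nodes, 1) - 1.0) < 1e-12
assert abs(lagrange_basis(nodes[2], nodes, 1)) < 1e-12
# y_h reproduces the data exactly at the interpolation points
assert abs(lagrange_interpolate(0.7, nodes, values) - np.exp(0.7)) < 1e-12
```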

Conformal Mappings and Function Space
We introduce some notation related to Sinc methods [23,24,30]. Let ϕ : D → D_d be a conformal map that maps a simply connected region D ⊂ C onto the strip D_d = {z ∈ C : |Im z| < d}, where d is a given positive number. The region D has a boundary ∂D; let a and b be two distinct points on ∂D. Let ψ = ϕ^{−1}, ψ : D_d → D, be the inverse conformal map, and let Γ = {ψ(t) : t ∈ R} be an arc with a = ψ(−∞) and b = ψ(∞). For real finite numbers a, b and Γ ⊆ R, ϕ(x) = ln((x − a)/(b − x)), and x_k = ψ(kh) = (a + b e^{kh})/(1 + e^{kh}) are the Sinc points with spacing h = (πd/(β_s N))^{1/2}, β_s > 0 [30,40]. Sinc points can also be generated for semi-infinite or infinite intervals; for a comprehensive list of conformal maps, see [24,30]. We briefly discuss the function space for y. Let ρ = e^ϕ, let α_s be an arbitrary positive number, and let L_{α_s,β_s}(D) be the family of all functions g that are analytic in D = ϕ^{−1}(D_d) such that, for all z ∈ D, |g(z)| ≤ C |ρ(z)|^{α_s}/(1 + |ρ(z)|)^{α_s+β_s}. We next restrict α_s, β_s, and d such that 0 < α_s ≤ 1, 0 < β_s ≤ 1, and 0 < d < π. Let M_{α_s,β_s}(D) be the set of all functions g defined on D that have finite limits g(a) = lim_{z→a} g(z) and g(b) = lim_{z→b} g(z), where the limits are taken from within D, and such that y ∈ L_{α_s,β_s}(D), where y(z) = g(z) − (g(a) + ρ(z) g(b))/(1 + ρ(z)). This transformation guarantees that y vanishes at the endpoints of (a, b). We assume that y is analytic and uniformly bounded by B(y), i.e., |y(x)| ≤ B(y), in the larger region D_2 = ∪_{t∈(a,b)} B(t, r), where r > 0 and B(t, r) = {z ∈ C : |z − t| < r}.
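A minimal sketch of Sinc-point generation on a finite interval via the logistic inverse map ψ above. The spacing h = (πd/(β_s N))^{1/2} with d = π/2 and β_s = 1 is one standard choice and is an assumption here; the exact spacing rule varies across the Sinc literature.

```python
import numpy as np

def sinc_points(a, b, N, d=np.pi / 2, beta=1.0):
    """2N+1 Sinc points on (a, b): x_k = psi(k h) = (a + b e^{k h})/(1 + e^{k h}),
    with the (assumed) spacing h = sqrt(pi d / (beta N))."""
    h = np.sqrt(np.pi * d / (beta * N))
    k = np.arange(-N, N + 1)
    return (a + b * np.exp(k * h)) / (1.0 + np.exp(k * h))

pts = sinc_points(0.0, 1.0, 4)
assert len(pts) == 9                       # 2N + 1 points
assert np.all((pts > 0.0) & (pts < 1.0))   # points stay strictly inside (a, b)
assert abs(pts[4] - 0.5) < 1e-12           # k = 0 maps to the midpoint
assert np.all(np.diff(pts) > 0)            # strictly increasing
# points cluster toward the endpoints: outer gaps are smaller than the central gap
assert pts[1] - pts[0] < pts[5] - pts[4]
```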

Residual
The residual is used as a measure of the accuracy of the adaptive Poly-Sinc method. The general form of a second-order ODE can be expressed as [41] F(x, y, y', y'') = 0. (5)
An exact solution y satisfies (5). If the exact solution y is unknown, we replace it with the approximation y_c, and Equation (5) becomes F(x, y_c, y_c', y_c'') = R(x), (6) where R(x) is the residual. For the i-th iteration, i = 1, 2, . . . , κ, where κ is the number of iterations, we write the residual in (6) as R^{(i)}(x). We denote the residual for integral and differential equations by R_I and R_D, respectively. The residual is used as an indicator for partition refinement, as discussed in Algorithm 4 (see Section 4).

Error Analysis
We briefly discuss the error analysis for Poly-Sinc approximation over the global interval [a, b]. At the end of this section, we will discuss the error analysis of Poly-Sinc approximation for IVPs and BVPs.
For the Poly-Sinc approximation on a finite interval [23,24,32,33,38], it was shown that the approximation error satisfies an estimate of the form (7), where y(x) is the exact solution, y_h(x) is its Poly-Sinc approximation, A is a constant independent of N, m = 2N + 1 is the number of Sinc points in the interval, r is the radius of the ball containing the m Sinc points, and β > 0 is the convergence rate parameter. On a finite interval [a, b], it was shown in [24,38] that the maximum of the interpolation error is bounded in terms of a positive constant η, so that Inequality (7) can be rewritten accordingly. Next, we discuss the collocation method for IVPs and BVPs.

Collocation Method
A collocation method [42,43] is a technique in which a system of algebraic equations is constructed from the ODE via the use of collocation points. Here, we adopt the Poly-Sinc collocation method [36,44], in which the collocation points are the Sinc points and the basis functions are the Lagrange polynomials with Sinc points.

Initial Value Problem
The IVP is transformed into an integral equation. We briefly discuss the approximation of indefinite integrals using Poly-Sinc methods ([45] § 9.3). Define J(y, w)(x) = ∫_a^x y(t) w(t) dt, (8) where the weight function w(x) is positive on the interval (a, b) and has the property that the moments ∫_a^b x^j w(x) dx do not vanish for j = 0, 1, 2, . . .. Let A^+ be an m × m matrix whose entries are (A^+)_{jk} = ∫_a^{x_j} b_k(t) w(t) dt, where the b_k are the Lagrange basis polynomials at the Sinc points. Then, the indefinite integral (8) can be approximated as (J_m^+ y)(x) = ∑_{j=−N}^{N} b_j(x) (A^+ V_y)_j, where V_y = (y(x_{−N}), . . . , y(x_N))^T. We state the following theorem for IVPs [30].
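The construction of A^+ can be sketched as follows, assuming the weight w(x) = 1. Each column is the exact antiderivative of one Lagrange basis polynomial evaluated at the nodes; the function name and node set are illustrative. Since the Lagrange basis spans polynomials of degree < m, the matrix integrates such polynomials exactly.

```python
import numpy as np

def indefinite_integration_matrix(nodes, a):
    """A^+ with entries A_{jk} = int_a^{x_j} b_k(t) dt, where b_k are the
    Lagrange basis polynomials over `nodes` (weight w(x) = 1 assumed)."""
    m = len(nodes)
    A = np.zeros((m, m))
    for k in range(m):
        others = np.delete(nodes, k)
        # b_k in monomial coefficients (highest power first), then its antiderivative
        coeffs = np.poly(others) / np.prod(nodes[k] - others)
        B = np.polyint(coeffs)
        A[:, k] = np.polyval(B, nodes) - np.polyval(B, a)
    return A

# (J y)(x_j) = (A^+ V_y)_j is exact for polynomial integrands of degree < m
nodes = np.array([0.1, 0.35, 0.6, 0.85])
A = indefinite_integration_matrix(nodes, 0.0)
Vy = nodes**2                        # y(t) = t^2
exact = nodes**3 / 3.0               # int_0^{x_j} t^2 dt
assert np.allclose(A @ Vy, exact, atol=1e-12)
```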

Boundary Value Problem
For a BVP, the collocation method solves for the unknown coefficients c_k in (3) or (4) by setting R(x_k) = 0, k = −N, . . . , N. However, we replace the two equations corresponding to x_{−N} and x_N with the boundary conditions y(a) = y_a and y(b) = y_b, respectively. We state the following theorem for BVPs.

Piecewise Collocation Method
We discuss the piecewise collocation method, in which the domain I = [a, b] is discretized into K ∈ N nonoverlapping partitions I_n = [x_{n−1}, x_n), n = 1, 2, . . . , K. The space of piecewise discontinuous polynomials can be defined as the set of functions v such that v restricted to I_n belongs to P_k(I_n), where P_k(I_n) denotes the space of polynomials of degree at most k on I_n. The piecewise collocation method applies the collocation method of Section 2.4 on each partition. The approximate solution on the global interval [a, b] can be written as y_h(x) = ∑_{k=1}^{K} y_{h,k}(x) 1_{x∈I_k}, (9) where y_{h,k}(x) = ∑_{j=1}^{m_k} y_{j,k} u_{j,k}(x) is the Lagrange interpolation in the k-th partition. The basis functions are u_{j,k}(x) = g_k(x)/((x − x_{j,k}) g_k'(x_{j,k})), j = 1, 2, . . . , m_k, k = 1, 2, . . . , K, where {x_{j,k}}, j = 1, . . . , m_k, are the interpolation points in the k-th partition, g_k(x) = ∏_{l=1}^{m_k} (x − x_{l,k}), and m_k is the number of points in the k-th partition. The function 1_C is an indicator function, which outputs 1 if the condition C is satisfied and 0 otherwise. If the coefficients y_{j,k} are unknown, then we replace y_{j,k} with c_{j,k}, and Equation (9) becomes y_c(x) = ∑_{k=1}^{K} ∑_{j=1}^{m_k} c_{j,k} u_{j,k}(x) 1_{x∈I_k}. (10) The residual for the k-th partition can be written as R_k(x). The collocation method solves for the unknowns c_{j,k} by setting R_k(x_{j,k}) = 0, j = 1, 2, . . . , m_k, k = 1, 2, . . . , K, which we discuss next for IVPs and BVPs.
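The indicator-function assembly of the global approximation (9) can be sketched as follows; the breakpoints and the two local "interpolants" are hypothetical stand-ins for the per-partition Lagrange polynomials.

```python
def piecewise_eval(x, breakpoints, local_interps):
    """Evaluate y_c(x) = sum_k y_{h,k}(x) 1_{x in I_k}, where
    I_k = [x_{k-1}, x_k); the last partition is taken closed on the right."""
    for k in range(len(local_interps)):
        left, right = breakpoints[k], breakpoints[k + 1]
        last = (k == len(local_interps) - 1)
        if (left <= x < right) or (last and x == right):
            return local_interps[k](x)
    raise ValueError("x outside the global interval [a, b]")

# two partitions of [0, 2] with illustrative local polynomials
breaks = [0.0, 1.0, 2.0]
interps = [lambda x: 1.0 + x, lambda x: 3.0 * x - 1.0]
assert piecewise_eval(0.5, breaks, interps) == 1.5
assert piecewise_eval(1.5, breaks, interps) == 3.5
assert piecewise_eval(2.0, breaks, interps) == 5.0   # closed right endpoint
```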

Initial Value Problem
In this section, we provide examples for first-order and second-order IVPs.

Relaxation Problem
We discuss the piecewise collocation method for a first-order IVP in integral form. Consider the following relaxation, or decay, equation [46] on the interval [a, b]: y'(x) = −α y(x), y(a) = y_a, (11) where α > 0 is the relaxation parameter. The exact solution is y(x) = y_a exp(−α(x − a)). We transform the IVP (11) into the integral form y(x) = y_a − α ∫_a^x y(t) dt. The residual becomes R_I(x, a, y_a, y(x)) = y(x) − y_a + α ∫_a^x y(t) dt. We approximate the indefinite integral as discussed in Section 2.4.1, and the approximate residual becomes R_I(x, a, y_a, y_c(x)) = y_c(x) − y_a + α (J_m^+ y_c)(x). The domain [a, b] is partitioned as discussed in Section 3. For the k-th partition, k = 1, 2, . . . , K, we replace a with x_{k−1} and y_a with y_{c,k−1}(x_{k−1}). The approximate residual becomes R_{I,k}(x). We remove the equation corresponding to the leftmost Sinc point in each partition and replace it with the initial condition y_{c,1}(a) = y_a (12a) and the set of equations y_{c,k}(x_{k−1}) = y_{c,k−1}(x_{k−1}), k = 2, . . . , K. (12b) The set of equations (12b) is known as the continuity equations at the interior boundaries [47]. The remaining residual equations are R_{I,k}(x_{j,k}) = 0 at the remaining Sinc points. (13) The collocation algorithm for the IVP (11) is outlined in Algorithm 1.
Algorithm 1: Piecewise Poly-Sinc Algorithm (IVP (11)).
input: K: number of partitions; m_k: number of Sinc points in the k-th partition.
output: y_c(x): approximate solution.
Replace y(x) with the global approximate solution (10). Solve for the m_k K unknowns {c_{j,k}}, j = 1, . . . , m_k, k = 1, . . . , K, using the initial condition (12a), the continuity equations (12b), and the set of residual equations (13).
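The single-partition core of this procedure can be sketched as follows: collocating the integral form y(x_j) = y_a − α (J y)(x_j) at the Sinc points gives the linear system (I + α A^+) c = y_a 1, whose solution approximates y at the Sinc points. The spacing h = π/√(2N) and w(x) = 1 are assumed choices for this sketch.

```python
import numpy as np

def integration_matrix(nodes, a):
    # A_{jk} = int_a^{x_j} b_k(t) dt for the Lagrange basis b_k over `nodes`
    m = len(nodes)
    A = np.zeros((m, m))
    for k in range(m):
        others = np.delete(nodes, k)
        coeffs = np.poly(others) / np.prod(nodes[k] - others)
        B = np.polyint(coeffs)
        A[:, k] = np.polyval(B, nodes) - np.polyval(B, a)
    return A

# Sinc points on (0, 1) with an assumed spacing h = pi / sqrt(2 N)
N = 6
h = np.pi / np.sqrt(2.0 * N)
k = np.arange(-N, N + 1)
nodes = np.exp(k * h) / (1.0 + np.exp(k * h))

# Collocate y(x_j) = y_a - alpha * (J y)(x_j):  (I + alpha A) c = y_a * 1
alpha, y_a = 2.0, 1.0
A = integration_matrix(nodes, 0.0)
c = np.linalg.solve(np.eye(len(nodes)) + alpha * A, y_a * np.ones(len(nodes)))

# compare with the exact solution y(x) = y_a * exp(-alpha x) at the Sinc points
err = np.max(np.abs(c - y_a * np.exp(-alpha * nodes)))
assert err < 1e-6
```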

Hanging Bar Problem
We discuss the piecewise collocation method for a second-order IVP in integral form. Consider the following IVP: −(K̃(x) y'(x))' = f(x), y(a) = y_a, y'(a) = ỹ_a, (14) where y(x) is the sought-for solution. In the context of the hanging bar problem [48], y(x) and K̃(x) are the displacement and the material property of the bar at position x, respectively. For simplicity, we set K̃(x) = 1. Equation (14) can then be written as the system of first-order equations y'(x) = ỹ(x), ỹ'(x) = −f(x). (15) The integral form of (15) is y(x) = y_a + ∫_a^x ỹ(s) ds (16) with ỹ(s) = ỹ_a − ∫_a^s f(t) dt. (17) Plugging (17) into (16), we obtain y(x) = y_a + ỹ_a (x − a) − ∫_a^x ∫_a^s f(t) dt ds.

Using integration by parts, the double integral reduces to a single integral, ∫_a^x ∫_a^s f(t) dt ds = ∫_a^x (x − t) f(t) dt, where we rename s as t in the last integral. Thus, the integral form of the solution to (14) becomes y(x) = y_a + ỹ_a (x − a) − ∫_a^x (x − t) f(t) dt. (18) The residual can be written as R_I(x) = y(x) − y_a − ỹ_a (x − a) + ∫_a^x (x − t) f(t) dt. Approximating the indefinite integral in (18), we obtain the approximate residual. For the k-th partition, k = 1, 2, . . . , K, we replace a with x_{k−1}, y_a with y_{c,k−1}(x_{k−1}), and ỹ_a with y'_{c,k−1}(x_{k−1}). We remove the equations corresponding to the leftmost and rightmost Sinc points in each partition and replace them with the conditions (20a)-(20b) and the set of equations (20c)-(20d). Equations (20c)-(20d) are known as the continuity equations at the interior boundaries [47]. The collocation algorithm for the IVP (14) is outlined in Algorithm 2.

Boundary Value Problem
The collocation method for the BVP is similar to that of the IVP in Section 3.2, except that we replace the set of equations (20) with the boundary conditions at a and b together with the continuity equations, and the set of residual equations for the BVP becomes R_{D,k}(x_{j,k}) = 0, where R_{D,k} is the residual of the differential equation in the k-th partition. The piecewise Poly-Sinc collocation algorithm for the BVP is outlined in Algorithm 3.

Adaptive Piecewise Poly-Sinc Algorithm
This section introduces the greedy algorithmic approach used in the adaptive piecewise Poly-Sinc method. The core features used are the nonoverlapping property of the partitions and the uniform exponential convergence of the approximation on each partition of the approximation interval. Greedy algorithms seek the "best" candidate among possible solutions at a given step [50]; they have been applied, for example, to model order reduction for parametrized partial differential equations [51,52]. The adaptive piecewise Poly-Sinc algorithm is greedy in the sense that, at the current step, it makes the choice that aims to find the "best" approximation to the solution of the ODE [50]. The algorithm takes an iterative form in which it computes the L2 norm of the residual on each partition of the global interval I = [a, b]. At the i-th step, the algorithm refines the partitions for which the L2 norm values of the residual are relatively large. By refining the partitions in this manner, the mean of the L2 norm values over all partitions is expected to decrease at each step. As the iteration proceeds, the algorithm is expected to find the "best" polynomial approximation to the solution of the ODE.
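The greedy loop described above can be sketched on a function-approximation proxy: partitions whose error norm is at least the mean are bisected, and refinement concentrates where the target is hard to approximate. The equispaced local nodes, the bisection rule, and the tanh test function are simplifications for illustration; the paper's algorithm refines with Sinc points and a residual-based statistic.

```python
import numpy as np

def lagrange_eval(x, nodes, vals):
    out = np.zeros_like(x)
    for k in range(len(nodes)):
        others = np.delete(nodes, k)
        out += vals[k] * np.prod((x[:, None] - others) / (nodes[k] - others), axis=1)
    return out

def partition_error(f, a, b, m=5, fine=60):
    nodes = np.linspace(a, b, m)             # simplification: equispaced, not Sinc
    x = np.linspace(a, b, fine)
    err = f(x) - lagrange_eval(x, nodes, f(nodes))
    return np.sqrt(np.mean(err**2) * (b - a))  # discrete L2-norm proxy

def greedy_refine(f, a, b, tol=1e-7, max_iter=40):
    parts = [(a, b)]
    for _ in range(max_iter):
        errs = np.array([partition_error(f, *p) for p in parts])
        if errs.mean() < tol:
            break
        mean, new = errs.mean(), []
        for p, e in zip(parts, errs):
            if e >= mean:                     # greedy: split the worst partitions
                midpt = 0.5 * (p[0] + p[1])
                new += [(p[0], midpt), (midpt, p[1])]
            else:
                new.append(p)
        parts = new
    return parts

f = lambda x: np.tanh(50.0 * (x - 0.3))       # steep interior layer at x = 0.3
parts = greedy_refine(f, 0.0, 1.0)
errs = [partition_error(f, *p) for p in parts]
widths = np.array([b - a for (a, b) in parts])
i_min = int(np.argmin(widths))
mid = 0.5 * (parts[i_min][0] + parts[i_min][1])
assert np.mean(errs) < 1e-7                   # stopping criterion reached
assert 0.2 < mid < 0.4                        # finest partition sits near the layer
```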

Algorithm Description
We discuss the adaptive algorithm for the piecewise Poly-Sinc approximation. The steps of the adaptive algorithm are performed in an iterative loop [53], and the adaptive piecewise Poly-Sinc algorithm is outlined in Algorithm 4. The refinement strategy is performed as follows. For the i-th iteration, we compute the set of L2 norm values of the residual over the partitions, R, together with its mean absolute deviation and the sample standard deviation [54], and form the statistic ω_i as the ratio of the mean absolute deviation to the sample standard deviation. Using Hölder's inequality for sums with p = q = 2 ([55] § 3.2.8), one can show that ω_i ≤ √((K_i − 1)/K_i) < 1, where K_i is the number of partitions at the i-th iteration. We restrict ourselves to second-order moments only. The points in the partitions with the indices I_i are used as partitioning points, and m = 2N + 1 Sinc points are inserted in the newly created partitions. The algorithm terminates when the stopping criterion is satisfied. The approximate solution y_c^{(i)}(x), i = 1, . . . , κ, for the i-th iteration is computed using the collocation method outlined in Algorithms 1 and 2 for IVPs and Algorithm 3 for BVPs. We note that, for partition refinement, the residual is computed in its differential form R_D(x).
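The refinement statistic can be sketched as follows; the residual values and sample sizes are illustrative. The check against √(2/π) ≈ 0.798 mirrors the normal-sample limit cited above, and the deterministic bound follows from Hölder's (Cauchy-Schwarz) inequality.

```python
import numpy as np

def mad_std_ratio(r):
    """Fraction of a standard deviation: ratio of the mean absolute deviation
    to the sample standard deviation of the residual-norm values r."""
    r = np.asarray(r, dtype=float)
    mad = np.mean(np.abs(r - r.mean()))
    std = np.std(r, ddof=1)
    return mad / std

# for normal samples, the ratio tends to sqrt(2/pi) ~ 0.798 as the sample grows
rng = np.random.default_rng(0)
w = mad_std_ratio(rng.normal(size=200_000))
assert abs(w - np.sqrt(2.0 / np.pi)) < 0.01

# deterministic bound: omega <= sqrt((K - 1)/K) < 1 for any K values
for r in ([1.0, 2.0, 5.0], [0.1, 0.1, 0.2, 3.0]):
    K = len(r)
    assert mad_std_ratio(r) <= np.sqrt((K - 1) / K) + 1e-12
```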
The definite integral in the L2 norm [56] is numerically computed using a Sinc quadrature [30], i.e., ∫_a^b f(x) dx ≈ h ∑_{k=−N}^{N} f(x_k)/ϕ'(x_k), where {x_k}, k = −N, . . . , N, are the quadrature points, which are also Sinc points, and ϕ(x) is the conformal mapping in Section 2.1.2. The supremum norm on an interval I = [a, b] is approximated as ([57]) the maximum of |f(x_k)| over the Sinc points {x_k} on I, whose generation is discussed in Section 2.1.2.
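The Sinc quadrature can be sketched as follows for the finite-interval map ϕ(x) = ln((x − a)/(b − x)), for which 1/ϕ'(x) = (x − a)(b − x)/(b − a). The spacing h = π/√N is an assumed choice for this sketch.

```python
import numpy as np

def sinc_quadrature(f, a, b, N):
    """Approximate int_a^b f(x) dx by h * sum_k f(x_k) / phi'(x_k),
    with phi(x) = ln((x - a)/(b - x)) and assumed spacing h = pi/sqrt(N)."""
    h = np.pi / np.sqrt(N)
    k = np.arange(-N, N + 1)
    x = (a + b * np.exp(k * h)) / (1.0 + np.exp(k * h))   # Sinc points
    w = (x - a) * (b - x) / (b - a)                       # 1 / phi'(x_k)
    return h * np.sum(f(x) * w)

# e.g. a squared-residual integral: int_0^pi sin(x)^2 dx = pi/2
val = sinc_quadrature(lambda x: np.sin(x) ** 2, 0.0, np.pi, 40)
assert abs(val - np.pi / 2.0) < 1e-6
```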

Error Analysis
We state below the main theorem.
Theorem 3 (Estimate of Upper Bound [25]). Let y be in M α s ,β s (ϕ), analytic and bounded in D 2 , and let y For fitting purposes, we compute the mean value of the error estimate, i.e., We state the following theorem on collocation.
07618 is the Lebesgue constant for Poly-Sinc approximation [31,33,59]. On average, the term |c For fitting purposes, we compute the mean value of the error estimate, i.e., max x∈[a,b]

Results
The results in this section were computed using Mathematica [60]. We tested our adaptive algorithm on regular and stiff ODEs. The Sinc spacing is h = π/√N.

Norms
The supremum norm has a theoretical advantage. However, its computation is slower than that of the L 2 norm [25]. Hence, we use the L 2 norm in our computations.

Initial Value Problem
We test our adaptive piecewise Poly-Sinc algorithm on regular first-order and second-order IVPs.
Example 1 (Relaxation Problem). We start with the relaxation problem in Section 3.1. We set a = 0 and y_a = 1, so the exact solution becomes y(x) = exp(−α x). We set the exponential decay parameter α = 20 and confine the domain of the solution to the interval [0, 1]. The approximate solution y_c(x) is computed as discussed in Section 3.1.
We set the number of Sinc points to be inserted in all partitions to m = 2N + 1 = 5. The stopping criterion ε_stop = 10^{−6} was used. The algorithm terminates after κ = 7 iterations, and the number of points is |S| = 530. Figure 1a shows the approximate solution y_c^{(7)}(x). A proper subset of the set of points S is shown as red dots, projected onto the approximate solution y_c^{(7)}(x); this subset is used to visualize the approximate solution. We plot the statistic ω_i as a function of the iteration index i, i = 2, 3, . . . , 7, in Figure 1b. The oscillations are decaying, and the statistic ω_i converges to an asymptotic value. The mean value ω̄_i ≈ 0.6 is denoted by a horizontal line.
We perform the least-squares fitting of the logarithm of the set R to the logarithm of the upper bound (24). Figure 2a shows the least-squares fitted model (24) to the set R; the dots represent the set R, and the solid line represents the least-squares fitted model (24). Figure 2b shows the residual, the absolute local approximation error, and the mean value for the last iteration. The mean value R̄_7 is below the threshold value 10^{−6}. The L2 norm of the approximation error is ‖y(x) − y_c^{(7)}(x)‖ ≈ 1.5 × 10^{−7}.
Example 2 (Hanging Bar Problem). We apply the collocation method to the hanging bar problem (14) with the exact solution y(x) = e^x (x − 1)^2 [61]. The approximate solution y_c(x) is computed as discussed in Section 3.1.
We set the number of Sinc points to be inserted in all partitions to m = 2N + 1 = 7. The stopping criterion ε_stop = 10^{−6} was used. The algorithm terminates after κ = 3 iterations, and the number of points is |S| = 350. Figure 3 shows the approximate solution y_c^{(3)}(x). We perform the least-squares fitting of the logarithm of the set R to the logarithm of the upper bound (24). Figure 4a shows the least-squares fitted model (24) to the set R. Figure 4b shows the residual, the absolute local approximation error, and the mean value for the last iteration. The mean value R̄_3 is below the threshold value 10^{−6}.

Boundary Value Problem
We discuss a number of stiff BVPs [8] based on the general linear second-order BVP a(x) y''(x) + b(x) y'(x) + c(x) y(x) = f(x), (25) where a(x) > 0, b(x), and c(x) are the coefficients and f(x) is the source term.

We set the number of Sinc points to be inserted in all partitions to m = 2N + 1 = 5. The stopping criterion ε_stop = 10^{−6} was used. The algorithm terminates after κ = 10 iterations, and the number of points is |S| = 2055. Figure 5a shows the approximate solution y_c^{(10)}(x). A proper subset of the set of points S is shown as red dots, projected onto the approximate solution y_c^{(10)}(x). We plot the statistic ω_i as a function of the iteration index i, i = 2, 3, . . . , 10, in Figure 5b. It is observed that the oscillations are decaying and the statistic ω_i converges to an asymptotic value; the mean value is ω̄_i ≈ 0.64. We perform the least-squares fitting of the logarithm of the set R to the logarithm of the upper bound (24). Figure 6a shows the least-squares fitted model (24) to the set R. Figure 6b shows the residual, the absolute local approximation error, and the mean value for the last iteration. The plot of the L2 norm values of the residual over the partitions demonstrates that fine partitions are formed near x = 0 due to the presence of the boundary layer. The mean value R̄_10 is below the threshold value 10^{−6}. The L2 norm of the approximation error is ‖y(x) − y_c^{(10)}(x)‖ ≈ 1.12 × 10^{−8}; the threshold value in [8] is 0.05.

In the next example, the exact solution y(x) experiences a boundary layer near x = 0 and a slope change at approximately x = 0.5. Equation (25) is multiplied by the factor x so that the residual R(x) does not contain a singularity at x = 0.
We set the number of Sinc points to be inserted in all partitions to m = 2N + 1 = 5. The stopping criterion ε_stop = 10^{−6} was used. The algorithm terminates after κ = 9 iterations, and the number of points is |S| = 1630.
We perform the least-squares fitting of the logarithm of the set R to the logarithm of the upper bound (24). Figure 7a shows the least-squares fitted model (24) to the set R. Figure 7b shows the residual, the absolute local approximation error, and the mean value for the last iteration. The plot of the L2 norm values of the residual over the partitions shows that fine partitions are formed near x = 0 due to the presence of the boundary layer. The mean value R̄_9 is below the threshold value 10^{−6}. The L2 norm of the approximation error is ‖y(x) − y_c^{(9)}(x)‖ ≈ 1.6 × 10^{−6}; the threshold value in [8] is 0.01.
The approximating polynomial y_c^{(9)}(x) and a proper subset of the set of points S are shown in Figure 8a. The corresponding plot of the statistic ω_i is shown in Figure 8c. The oscillations are decaying, and the mean value is ω̄_i ≈ 0.66. As mentioned above, Equation (25) was multiplied by the factor x so that the residual R(x) does not contain a singularity at x = 0. We now replace the residual R(x) with the quantity y(x) − y_c(x) in Algorithm 4, and the BVP (25) then contains the term 1/x. We set the number of Sinc points to be inserted in all partitions to m = 2N + 1 = 5. The stopping criterion ε_stop = 10^{−6} was used. The algorithm terminates after κ = 8 iterations, and the number of points is |S| = 730, which is smaller than the one obtained from multiplying the residual by x. This is expected, since the exact solution y(x) is used. The approximating polynomial y_c^{(8)}(x) and a proper subset of the set of points S are shown in Figure 8b. The corresponding plot of the statistic ω_i is shown in Figure 8d. It is observed that the statistic ω_i oscillates around the mean value ω̄_i ≈ 0.61.

The exact solution of the next example involves the integral ∫_0^x e^{t²} dt [63] and ı² = −1. The solution y(x) has a boundary layer near x = 0. Equation (25) is multiplied by the factor √x so that the residual R(x) does not contain a singularity at x = 0.
We set the number of Sinc points to be inserted in all partitions to m = 2N + 1 = 5. The stopping criterion ε_stop = 10^{−6} was used. The algorithm terminates after κ = 7 iterations, and the number of points is |S| = 1183.
We performed least-squares fitting of the logarithm of the set R to the logarithm of the upper bound (24). Figure 9a shows the least-squares fitted model (24) to the set R. Figure 9b shows the residual, the absolute local approximation error, and the mean value for the last iteration. Fine partitions are formed near x = 0 due to the presence of the boundary layer, as seen in the plot of the L2 norm values of the residual over the partitions. The mean value R̄_7 is below the threshold value 10^{−6}, and the L2 norm of the approximation error y(x) − y_c^{(7)}(x) is of the same order as R̄_7. The approximating polynomial y_c^{(7)}(x) and a proper subset of the set of points S are shown in Figure 10a. The corresponding plot of the statistic ω_i is shown in Figure 10c. The statistic ω_i oscillates around the median value ω̃_i ≈ 0.49. As mentioned above, Equation (25) was multiplied by the factor √x so that the residual R(x) does not contain a singularity at x = 0. We now replace the residual R(x) with the quantity y(x) − y_c(x) in Algorithm 4, and the BVP (25) then contains the term 1/√x. We set the number of Sinc points to be inserted in all partitions to m = 2N + 1 = 5. The stopping criterion ε_stop = 10^{−6} was used. The algorithm terminates after κ = 6 iterations, and the number of points is |S| = 595, which is smaller than the one obtained by multiplying the residual by √x. This is expected, since the exact solution y(x) is used. The approximating polynomial y_c^{(6)}(x) and a proper subset of the set of points S are shown in Figure 10b. The corresponding plot of the statistic ω_i is shown in Figure 10d. The oscillations are decaying, and the statistic ω_i converges to an asymptotic value; the mean value is ω̄_i ≈ 0.58.

For the next example, we set the number of Sinc points to be inserted in all partitions to m = 2N + 1 = 5. The stopping criterion ε_stop = 10^{−6} was used. The algorithm terminates after κ = 8 iterations, and the number of points is |S| = 605. Figure 11a shows the approximate solution y_c^{(8)}(x).
A proper subset of the set of points S is shown as red dots, projected onto the approximate solution y_c^{(8)}(x). We plot the statistic ω_i as a function of the iteration index i, i = 2, 3, . . . , 8, in Figure 11b. The oscillations are decaying, and the statistic ω_i converges to an asymptotic value; the mean value is ω̄_i ≈ 0.65. We perform the least-squares fitting of the logarithm of the set R to the logarithm of the upper bound (24). Figure 12a shows the least-squares fitted model (24) to the set R. Figure 12b shows the residual, the absolute local approximation error, and the mean value for the last iteration. The mean value R̄_8 is below the threshold value 10^{−6}. The L2 norm of the approximation error is ‖y(x) − y_c^{(8)}(x)‖ ≈ 3.1 × 10^{−7}.

In the next example, the solution y(x) has a boundary layer near x = 1. We set the number of Sinc points to be inserted in all partitions to m = 2N + 1 = 5. The stopping criterion ε_stop = 10^{−6} was used. The algorithm terminates after κ = 9 iterations, and the number of points is |S| = 1055. Figure 13a shows the approximate solution y_c^{(9)}(x). A proper subset of the set of points S is shown as red dots, projected onto the approximate solution y_c^{(9)}(x). We plot the statistic ω_i as a function of the iteration index i, i = 2, 3, . . . , 9, in Figure 13b. The oscillations are decaying, and the statistic ω_i converges to an asymptotic value; the mean value is ω̄_i ≈ 0.62. We perform the least-squares fitting of the logarithm of the set R to the logarithm of the upper bound (24). Figure 14a shows the least-squares fitted model (24) to the set R. Figure 14b shows the residual, the absolute local approximation error, and the mean value for the last iteration. The plot of the L2 norm values of the residual over the partitions shows that fine partitions are formed near x = 1 due to the presence of the boundary layer. One observation is that the mean value R̄_9 is below the threshold value 10^{−6}.
The L2 norm of the approximation error is ‖y(x) − y_c^{(9)}(x)‖ ≈ 2.36 × 10^{−8}; the threshold value in [8] is 0.02. In this example, we increase the number of points per partition to m = 2N + 1 = 7 to examine the effect of the increase on the convergence of the algorithm. The algorithm then terminates after κ = 5 iterations, and the number of points is |S| = 350. Figure 15 shows the set R for m = 2N + 1 = 5 Sinc points and for m = 2N + 1 = 7 Sinc points. It is observed that increasing the number of Sinc points per partition leads to faster convergence and fewer iterations.

Example 8. We study the following BVP [28,64,65]: −(υ(x) y'(x))' = 2 [1 + α(x − x̄)(arctan(α(x − x̄)) + arctan(α x̄))] (26) with boundary conditions y(0) = y(1) = 0, where α > 0 and υ(x) = 1/α + α(x − x̄)². For large values of α, the BVP (26) has an interior layer close to x̄ [28]. The exact solution is given by y(x) = (1 − x)(arctan(α(x − x̄)) + arctan(α x̄)). We use the values reported in [64], i.e., α = 100 and x̄ = 0.36388. This value of x̄ was chosen so that lim_{α→∞} y(x̄⁺) ≈ 2 [64]. We set the number of Sinc points to be inserted in all partitions to m = 2N + 1 = 7. The stopping criterion ε_stop = 10^{−12} was used. The algorithm terminates after κ = 15 iterations, and the number of points is |S| = 21,469. Figure 16a shows the approximate solution y_c^{(15)}(x). A proper subset of the set of points S is shown as red dots, projected onto the approximate solution y_c^{(15)}(x). We plot the statistic ω_i as a function of the iteration index i, i = 2, 3, . . . , 15, in Figure 16b. One finding is that the oscillations are decaying and the statistic ω_i converges to an asymptotic value; the mean value is ω̄_i ≈ 0.46. We perform the least-squares fitting of the logarithm of the set R to the logarithm of the upper bound (24), where the parameter δ is multiplied by 10^{−12} and 10^{−12} δ = O(10^{−19}). Figure 17a shows the least-squares fitted model (24) to the set R. The residual, the absolute local approximation error, and the mean value for the last iteration are shown in Figure 17b.
Fine partitions are formed near x = x̄ due to the presence of the interior layer, as shown in the plot of the L2 norm values of the residual over the partitions. The mean value R̄_15 is below the threshold ε_stop = 10^{−12}.
We compare the L2 norm of the approximation error of our adaptive piecewise Poly-Sinc method with that of other methods in Table 1. The method of [28] requires a parameter for the construction of refinement intervals. Our L2 norm value of the approximation error is smaller than those reported in [28,65].

[Table 1: L2 norm of the approximation error, comparing the adaptive piecewise Poly-Sinc (PW PS) method with the methods of [28] and [65].]

Example 9. We consider the BVP [66,67] −ε y''(x) − x y'(x) = ε π² cos(π x) + π x sin(π x), x ∈ [−1, 1], with boundary conditions y(−1) = −2 and y(1) = 0, where ε > 0 is a parameter. The exact solution follows as y(x) = cos(π x) + erf(x/√(2ε))/erf(1/√(2ε)). The exact solution has a shock layer near x = 0 [66]. We set ε = 10^{−6} [67]. We set the number of Sinc points to be inserted in all partitions to m = 2N + 1 = 5. The stopping criterion ε_stop = 10^{−11} was used. The algorithm terminates after κ = 16 iterations, and the number of points is |S| = 18,530. Figure 18a shows the approximate solution y_c^{(16)}(x). A proper subset of the set of points S is shown as red dots, projected onto the approximate solution y_c^{(16)}(x). We plot the statistic ω_i as a function of the iteration index i, i = 2, 3, . . . , 16, in Figure 18b. The oscillations are decaying, and the statistic ω_i converges to an asymptotic value; the mean value is ω̄_i ≈ 0.55.
We perform the least-squares fitting of the logarithm of the set R to the logarithm of the upper bound (24), where the parameter δ is multiplied by the factor 10^{−9} and 10^{−9} δ = O(10^{−12}). Figure 19a shows the least-squares fitted model (24) to the set R. The residual, the absolute local approximation error, and the mean value for the last iteration are shown in Figure 19b. The plot of the L2 norm values of the residual over the partitions shows that fine partitions are formed near x = 0 due to the presence of the shock layer. The mean value R̄_16 is below the threshold ε_stop = 10^{−11}.
We compare the supremum norm of the approximation error of our adaptive piecewise Poly-Sinc method with that of other methods in Table 2; in [67], B-splines were used as basis functions. Our supremum norm result is of the same order as that of [67], and our adaptive method can reach a smaller value if we set ε_stop < 10^{−11}.

Table 2. Supremum norm of the approximation error for the shock-layer problem.

Method: Adaptive PW PS | [67]
‖y(x) − y_method(x)‖_∞: 1.215 × 10^{−10} | 4.3 × 10^{−10}

We observe that the processing times differ among the problems we consider. This is due to two factors: first, we aim for a very accurate result; second, the problems are of many different types. Listing the computing time in seconds is therefore not meaningful, since it would only reflect the computational power of our machine, which varies among users. Overall, it is evident that adaptive techniques take longer than a simple collocation approach or finite element methods run at lower accuracy. In our examples, we used a stopping criterion of 10^{−6} or less. As a result, our goal is to complete the computation as accurately as feasible rather than as quickly as possible. This inevitably lengthens the processing time for some problems, such as stiff or layer problems.

Conclusions
In this paper, we developed an adaptive piecewise collocation method based on Poly-Sinc interpolation for the approximation of solutions to ODEs. We showed the exponential convergence in the number of iterations of the a priori error estimate obtained from the piecewise collocation method, provided that a good estimate of the exact solution y(x) at the Sinc points exists. We used a statistical approach for partition refinement, in which we computed the fraction of a standard deviation as the ratio of the mean absolute deviation to the sample standard deviation. We demonstrated by several examples that an exponential error decay is observed for regular ODEs and for ODEs whose exact solutions exhibit an interior layer, a boundary layer, or a shock layer. We showed that our adaptive algorithm can deliver results with high accuracy at the expense of slower computation for stiff ODEs.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.