**The Fractional Differential Polynomial Neural Network for Approximation of Functions**

## **Rabha W. Ibrahim**

Institute of Mathematical Sciences, University of Malaya, Kuala Lumpur 50603, Malaysia; E-Mail: rabhaibrahim@yahoo.com; Fax: +60 3-7967 3535

*Received: 26 August 2013; in revised form: 5 September 2013 / Accepted: 24 September 2013 / Published: 29 September 2013* 

**Abstract:** In this work, we introduce a generalization of the differential polynomial neural network based on fractional calculus, taken in the sense of the Caputo differential operator. The network approximates a multi-parametric function with particular polynomials characterizing its functional output as a generalization of input patterns. The method can be employed on data to model complex systems. Furthermore, the total information is calculated using the fractional Poisson process.

**Keywords:** fractional calculus; fractional differential equations; fractional polynomial neural network

## **1. Introduction**

The Polynomial Neural Network (PNN) algorithm is one of the most important methods for extracting knowledge from experimental data and locating its best mathematical characterization. The algorithm can be utilized to analyze complex data sets with the objective of inferring internal data relationships and expressing knowledge about these relationships in the form of mathematical formulations (polynomial regressions). One of the most common types of PNN is the Group Method of Data Handling (GMDH) polynomial neural network, created in 1968 by Professor Ivakhnenko at the Institute of Cybernetics in Kyiv (Ukraine).

Based on GMDH, Zjavka developed a new type of neural network, called the Differential Polynomial Neural Network (D-PNN) [1–4]. It organizes and designs special partial differential equations, forming a complex system model of dependent variables. It builds a sum of fractional polynomial terms, determining partial mutual derivative changes of combinations of input variables. This treatment is based on learning generalized data relations. Furthermore, it offers dynamic system models a standard time-series prediction, as the character of relative data allows it to employ a wider range of input values than defined by the training data. In addition, the variety of differential equation solutions facilitates a wide range of model types. The principle is similar to artificial neural network (ANN) construction [5,6].

Fractional calculus is a branch of mathematical analysis that considers real- or complex-number powers of the differentiation and integration operators. The integrals are of convolution form and exhibit power-law kernels. It can be viewed as a testing ground for special functions and integral transforms [7–12]. It is well known that the physical interpretation of the fractional derivative remains an open problem. In [13], the author utilized fractional operators, in the sense of the Caputo differential operator, to define and study the stability of recurrent neural networks (NN). In [14], Gardner employed discrete fractional calculus to study artificial neural network augmentation. In [15], Almarashi used neural networks with a radial basis function method to solve a class of initial boundary value problems for fractional partial differential equations. Recently, Jalab *et al.* applied the neural network method to find numerical solutions of some special fractional differential equations [16]. Zhou *et al.* proposed a fractional time-domain identification algorithm based on a genetic algorithm [17], while Chen *et al.* studied the synchronization problem for a class of fractional-order chaotic neural networks [18].

Here, our aim is to introduce a generalization of the differential polynomial neural network utilizing fractional calculus, assumed in the sense of the Caputo differential operator. It approximates a multi-parametric function with particular polynomials characterizing its functional output as a generalization of input patterns. This method can be employed on data to model complex systems [19].

#### **2. Preliminaries**

This section presents some basic preliminaries and notations regarding fractional calculus. One of the most widely utilized tools in the theory of fractional calculus is the Caputo differential operator.

**Definition 2.1** The fractional (arbitrary) order integral of a function $h$ of order $\beta > 0$ is defined by:

$$I\_a^\beta h(t) = \int\_a^t \frac{(t-\tau)^{\beta-1}}{\Gamma(\beta)} h(\tau)\, d\tau \tag{1}$$

When $a = 0$, we write $I^\beta h(t) = h(t) * \phi\_\beta(t)$, where $(*)$ denotes the convolution product (see [7]), $\phi\_\beta(t) = \frac{t^{\beta-1}}{\Gamma(\beta)}$ for $t > 0$, $\phi\_\beta(t) = 0$ for $t \le 0$, and $\phi\_\beta \to \delta(t)$ as $\beta \to 0$, where $\delta(t)$ is the delta function.

**Definition 2.2** The Riemann-Liouville fractional derivative of a function $h$ of order $0 < \beta < 1$ is defined by:

$$D\_a^{\beta} \mathbf{h}(t) = \frac{d}{dt} \int\_a^t \frac{(t-\tau)^{-\beta}}{\Gamma(1-\beta)} \mathbf{h}(\tau) d\tau = \frac{d}{dt} I\_a^{1-\beta} \mathbf{h}(t) \tag{2}$$

**Remark 2.1** [7]


$$D^{\beta}t^{\mu} = \frac{\Gamma(\mu+1)}{\Gamma(\mu-\beta+1)}t^{\mu-\beta}, \mu > -1; \ 0 < \beta < 1\tag{3}$$

and:

$$I^{\beta}t^{\mu} = \frac{\Gamma(\mu+1)}{\Gamma(\mu+\beta+1)}t^{\mu+\beta}, \mu>-1; \ \beta>0\tag{4}$$
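The power rules (3) and (4) are straightforward to evaluate numerically. The following Python sketch (the function names are ours, for illustration) implements both with the standard-library gamma function:

```python
import math

def rl_deriv_power(mu: float, beta: float, t: float) -> float:
    """Riemann-Liouville fractional derivative of t**mu of order beta,
    Equation (3): D^beta t^mu = Gamma(mu+1)/Gamma(mu-beta+1) * t^(mu-beta)."""
    return math.gamma(mu + 1) / math.gamma(mu - beta + 1) * t ** (mu - beta)

def rl_integral_power(mu: float, beta: float, t: float) -> float:
    """Fractional integral of t**mu of order beta, Equation (4)."""
    return math.gamma(mu + 1) / math.gamma(mu + beta + 1) * t ** (mu + beta)

# beta = 1 recovers the classical derivative of t^2, i.e. 2t
print(rl_deriv_power(2, 1, 3.0))    # 6.0
# half-derivative of t at t = 4: (2/sqrt(pi)) * sqrt(t) ~ 2.2568
print(rl_deriv_power(1, 0.5, 4.0))
```

For `beta = 1` the gamma ratio collapses to the usual integer-order factor, which is a convenient sanity check.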

The Leibniz rule is:

$$\begin{aligned} D\_a^\beta [f(t)g(t)] &= \sum\_{k=0}^\infty \binom{\beta}{k} D\_a^{\beta-k} f(t)\, D\_a^k g(t) \\ &= \sum\_{k=0}^\infty \binom{\beta}{k} D\_a^{\beta-k} g(t)\, D\_a^k f(t) \end{aligned} \tag{5}$$

**Definition 2.3.** The Caputo fractional derivative of order $\beta > 0$, with $n-1 < \beta \le n$, is defined, for a smooth function *f*, by:

$$\begin{split} \, \, f^{\left(\beta\right)}\left(\mathbf{x}\right) & \coloneqq \frac{\partial^{\beta} \, f}{\partial \mathbf{x}^{\beta}} \coloneqq ^{c}D\_{\mathbf{x}}^{\beta} \, f(\mathbf{x}) \\ &= \frac{1}{\Gamma\left(n-\beta\right)} \int\_{0}^{\mathbf{x}} \frac{f^{\left(n\right)}\left(\mathbf{\tau}\right)}{\left(\mathbf{x}-\mathbf{\tau}\right)^{\beta-n+1}} d\mathbf{\tau}. \end{split} \tag{6}$$

The local fractional Taylor formula has been generalized by many authors [20–22]. This generalization admits the following formula:

$$\begin{split} f(\boldsymbol{x} + \Delta \boldsymbol{x}) &= f(\boldsymbol{x}) + \, ^{c}D\_\boldsymbol{x}^\beta f(\boldsymbol{x}) \, \frac{(\Delta \boldsymbol{x})^\beta}{\Gamma(\beta + 1)} + \, ^{c}D\_\boldsymbol{x}^\beta \, ^{c}D\_\boldsymbol{x}^\beta f(\boldsymbol{x}) \frac{(\Delta \boldsymbol{x})^{2\beta}}{\Gamma(2\beta + 1)} + \cdots \\ &\quad + \, ^{c}D\_\boldsymbol{x}^{n\beta} f(\boldsymbol{x}) \frac{(\Delta \boldsymbol{x})^{n\beta}}{\Gamma(n\beta + 1)} \end{split} \tag{7}$$

where ${}^{c}D\_x^{\beta}$ is the Caputo differential operator and:

$$\prescript{c}{}{D}^{n\beta}{}\_{x} = \underbrace{\prescript{c}{}{D}^{\beta}{}\_{x} \prescript{c}{}{D}^{\beta}{}\_{x} \dots \prescript{c}{}{D}^{\beta}{}\_{x}}\_{n-\text{times}} \tag{8}$$
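A minimal sketch of the local fractional Taylor formula (7) for polynomial inputs, applying the power rule termwise as a stand-in for the sequential Caputo derivatives (8). The helper names are illustrative assumptions, and the $\beta = 1$ case serves as a sanity check against the classical Taylor shift:

```python
import math

def caputo_monomial(coeff, mu, beta):
    """Caputo derivative of coeff*x**mu of order 0 < beta <= 1:
    constants are annihilated; otherwise the power rule applies."""
    if mu == 0:
        return (0.0, 0.0)
    return (coeff * math.gamma(mu + 1) / math.gamma(mu - beta + 1), mu - beta)

def frac_taylor(poly, x, dx, beta, terms):
    """Local fractional Taylor expansion (7) of a polynomial, given as a
    list of (coeff, power) monomials, using sequential Caputo derivatives (8)."""
    total, current = 0.0, list(poly)
    for n in range(terms):
        value = sum(c * x ** m for c, m in current)
        total += value * dx ** (n * beta) / math.gamma(n * beta + 1)
        current = [caputo_monomial(c, m, beta) for c, m in current]
    return total

# beta = 1 recovers the classical Taylor shift: f(x + dx) for f(x) = x^2 + x
poly = [(1.0, 2), (1.0, 1)]
print(frac_taylor(poly, 1.0, 0.5, 1.0, 4))  # (1.5)^2 + 1.5 = 3.75
```

For $0 < \beta < 1$ the same routine produces the truncated fractional expansion, provided the intermediate powers stay above $-1$.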

#### **3. Results**

#### *3.1. Proposed Method*

The fractional differential polynomial neural network (FD-PNN) is based on an equation of the form:

$$a + \sum\_{l=1}^{n} b\_l \frac{\partial^{\beta} u}{\partial \mathbf{x}\_l^{\beta}} + \sum\_{l=1}^{n} \sum\_{j=1}^{n} c\_{lj} \frac{\partial^{\beta} u}{\partial \mathbf{x}\_l^{\beta}} \frac{\partial^{\beta} u}{\partial \mathbf{x}\_j^{\beta}} + \sum\_{l=1}^{n} \sum\_{j=1}^{n} \sum\_{k=1}^{n} d\_{ljk} \frac{\partial^{\beta} u}{\partial \mathbf{x}\_l^{\beta}} \frac{\partial^{\beta} u}{\partial \mathbf{x}\_j^{\beta}} \frac{\partial^{\beta} u}{\partial \mathbf{x}\_k^{\beta}} + \dots = 0 \tag{9}$$

where $u = f(x\_1, x\_2, \ldots, x\_n)$ is the function of all input variables and $a, b\_l, c\_{lj}, d\_{ljk}, \ldots$ are the polynomial coefficients. Solutions of fractional differential equations can be expressed in terms of the Mittag-Leffler function:


$$E\_{\alpha}(\mathbf{z}) = \sum\_{n=0}^{\infty} \frac{\mathbf{z}^n}{\Gamma(1+n\alpha)}\tag{10}$$

Recently, numerical routines for Mittag-Leffler functions have been developed, e.g., by Freed *et al.* [23], Gorenflo *et al.* [24] (with MATHEMATICA), Podlubny [25] (with MATLAB), Seybold and Hilfer [26].
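For completeness, a truncated-series evaluation of Equation (10) takes only a few lines. This sketch is adequate for moderate arguments but is not a substitute for the cited routines [23–26]:

```python
import math

def mittag_leffler(alpha: float, z: float, terms: int = 80) -> float:
    """Truncated series for E_alpha(z) = sum_n z^n / Gamma(1 + n*alpha),
    Equation (10); 80 terms keep math.gamma within double-precision range."""
    return sum(z ** n / math.gamma(1 + n * alpha) for n in range(terms))

# Sanity checks: E_1(z) = exp(z), and E_2(z) = cosh(sqrt(z)) for z >= 0
print(mittag_leffler(1.0, 1.0))   # ~ 2.71828
print(mittag_leffler(2.0, 4.0))   # ~ cosh(2) = 3.76220
```

The term count is capped because `math.gamma` overflows near argument 172; for large `|z|` or small `alpha`, the dedicated algorithms cited above should be preferred.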

We proceed to form summation derivative terms, replacing the fractional partial differential Equation (9), by applying different mathematical techniques, e.g., fractional wave series [27]:

$$\begin{split} y\_l^{\beta} &= \frac{(a\_0 + a\_1\mathbf{x}\_1 + a\_2\mathbf{x}\_2 + \ldots + a\_n\mathbf{x}\_n + a\_{n+1}\mathbf{x}\_1\mathbf{x}\_2 + \ldots)^{\frac{m+\mu}{n}}}{b\_0 + b\_1\mathbf{x}\_1 + \ldots} \\ &= \frac{\partial^{m\beta} f(\mathbf{x}\_1, \mathbf{x}\_2, \ldots, \mathbf{x}\_n)}{\partial \mathbf{x}\_1^{\beta}\, \partial \mathbf{x}\_2^{\beta} \ldots \partial \mathbf{x}\_m^{\beta}} \end{split} \tag{11}$$

where $m$ refers to the combined degree of the $n$-input-variable polynomial of the numerator, $\mu$ indicates the combined degree of the denominator, $w$ denotes the weights of the terms, and $y\_l$ is the output neuron. Note that when $\beta = 1$, Equation (11) reduces to Equation (4) in [4]. The fractional polynomials of fractional power in Equation (11), determining relations of $n$ input variables, represent summation derivative terms (neurons) of a fractional differential equation. The numerator of Equation (11) is a complete $n$-variable polynomial, which realizes a new partial function $u$ of Equation (9). The denominator of Equation (11) is a fractional-derivative part, which implies a fractional partial change of some combination of the input variables. Equation (11) indicates a single output for a fixed fractional power. Each layer of the FD-PNN contains blocks, and these blocks carry fractional-derivative neurons: each fractional polynomial of fractional order formulates the fractional partial derivative depending on the change of some input variables. Each block forms a unique fractional polynomial whose output passes into the next hidden layer (Figure 1). For example, for a system consisting of an input layer, a first hidden layer, a second hidden layer, and an output layer, we may use $y^{1/4}$ to pass its output to the first hidden layer, $y^{1/2}$ to pass its output to the second hidden layer, and $y^{3/4}$ to carry out the last $y$ of the system in the output layer.
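The neuron structure described above can be sketched as follows; the coefficient values and the exponent pattern $(1+\beta)/2$ for a two-input neuron are illustrative assumptions, not trained values:

```python
def fd_pnn_neuron(x, a, b, w, exponent):
    """One FD-PNN summation-derivative term in the spirit of Equation (11):
    a complete polynomial numerator raised to a fractional power, divided by
    a lower-degree denominator that selects the differentiated variable.
    All parameter values here are illustrative placeholders."""
    x1, x2 = x
    numerator = a[0] + a[1] * x1 + a[2] * x2 + a[3] * x1 * x2
    denominator = b[0] + b[1] * x1
    return w * numerator ** exponent / denominator

# beta = 1/2 neuron of a two-input block: exponent (1 + beta)/2 = 3/4
y = fd_pnn_neuron((1.0, 1.0), a=(0.0, 0.5, 0.5, 0.0), b=(1.0, 0.0), w=1.0,
                  exponent=0.75)
print(y)  # (0.5 + 0.5)**0.75 / 1 = 1.0
```

Stacking such neurons, each with its own fractional power, reproduces the layered block scheme of Figure 1.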

#### **Figure 1.** GMDH-PNN.

Let there be a network with two inputs, formulating one functional output value $y$; then, for special values of $\beta$, the summation derivative terms are:

$$\begin{aligned} y^{1} &= w\_1 \frac{a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_1 \mathbf{x}\_2}{b\_0 + b\_1 \mathbf{x}\_1} + w\_2 \frac{a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_1 \mathbf{x}\_2}{b\_0 + b\_1 \mathbf{x}\_2} \ (\text{see } [4]), \\ y^{3/4} &= w\_1 \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_1 \mathbf{x}\_2)^{7/8}}{b\_0 + b\_1 \mathbf{x}\_1} + w\_2 \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_1 \mathbf{x}\_2)^{7/8}}{b\_0 + b\_1 \mathbf{x}\_2}, \\ y^{1/2} &= w\_1 \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_1 \mathbf{x}\_2)^{3/4}}{b\_0 + b\_1 \mathbf{x}\_1} + w\_2 \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_1 \mathbf{x}\_2)^{3/4}}{b\_0 + b\_1 \mathbf{x}\_2}, \\ y^{1/4} &= w\_1 \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_1 \mathbf{x}\_2)^{5/8}}{b\_0 + b\_1 \mathbf{x}\_1} + w\_2 \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_1 \mathbf{x}\_2)^{5/8}}{b\_0 + b\_1 \mathbf{x}\_2} \end{aligned} \tag{12}$$

We realize that $y$ includes only one block of two neurons, with terms of both fractional derivative variables $x\_1$ and $x\_2$. Table 1 shows approximation errors (y-axis) of the trained network, *i.e.*, differences between the true and estimated function, for random input vectors with dependent variables.
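To illustrate how approximation errors of the kind reported in Table 1 can arise, the following sketch evaluates a hand-picked two-input neuron against the true sum. The coefficients, sampling interval, and error measure are our assumptions, not the paper's trained values:

```python
import random

def neuron(x1, x2, beta):
    """Two-input FD-PNN neuron for f(x1, x2) = x1 + x2 with hand-picked
    coefficients a = (0, 1, 1, 0), b = (1, 0), w = 1 (illustrative)."""
    poly = x1 + x2                 # a1*x1 + a2*x2 with a1 = a2 = 1
    exponent = (1 + beta) / 2      # assumed fractional-power pattern
    return poly ** exponent        # denominator b0 + b1*x1 = 1

def average_error(beta, samples=1000, seed=0):
    """Average absolute difference between the neuron and the true sum."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x1, x2 = rng.uniform(0.4, 0.6), rng.uniform(0.4, 0.6)
        total += abs(neuron(x1, x2, beta) - (x1 + x2))
    return total / samples

print(average_error(1.0))    # 0.0: beta = 1 reproduces the sum exactly
print(average_error(0.75))   # small error near x1 + x2 = 1
```

The error shrinks as the sum approaches 1, mirroring the narrow convergence intervals reported in the Discussion.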


**Table 1.** Approximation values of $f(x\_1, x\_2) = x\_1 + x\_2$.

The 3-variable FD-PNN (Table 2) for linear true function approximation (e.g., $f(x\_1, x\_2, x\_3) = x\_1 + x\_2 + x\_3$) may involve one block of six neurons, FDE terms of all 1- and 2-combination derivative variables of the complete FDE, e.g.:

$$\begin{aligned} y\_1^{1} &= w\_1 \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_3 + a\_4 \mathbf{x}\_1 \mathbf{x}\_2 + a\_5 \mathbf{x}\_1 \mathbf{x}\_3 + a\_6 \mathbf{x}\_2 \mathbf{x}\_3 + a\_7 \mathbf{x}\_1 \mathbf{x}\_2 \mathbf{x}\_3)^{2/3}}{b\_0 + b\_1 \mathbf{x}\_1},\\ y\_1^{3/4} &= w\_1 \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_3 + a\_4 \mathbf{x}\_1 \mathbf{x}\_2 + a\_5 \mathbf{x}\_1 \mathbf{x}\_3 + a\_6 \mathbf{x}\_2 \mathbf{x}\_3 + a\_7 \mathbf{x}\_1 \mathbf{x}\_2 \mathbf{x}\_3)^{7/12}}{b\_0 + b\_1 \mathbf{x}\_1}, \\ y\_1^{1/2} &= w\_1 \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_3 + a\_4 \mathbf{x}\_1 \mathbf{x}\_2 + a\_5 \mathbf{x}\_1 \mathbf{x}\_3 + a\_6 \mathbf{x}\_2 \mathbf{x}\_3 + a\_7 \mathbf{x}\_1 \mathbf{x}\_2 \mathbf{x}\_3)^{1/2}}{b\_0 + b\_1 \mathbf{x}\_1}, \\ y\_1^{1/4} &= w\_1 \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_3 + a\_4 \mathbf{x}\_1 \mathbf{x}\_2 + a\_5 \mathbf{x}\_1 \mathbf{x}\_3 + a\_6 \mathbf{x}\_2 \mathbf{x}\_3 + a\_7 \mathbf{x}\_1 \mathbf{x}\_2 \mathbf{x}\_3)^{5/12}}{b\_0 + b\_1 \mathbf{x}\_1} \end{aligned} \tag{13}$$

and:

$$\begin{aligned} y\_4^{1} &= w\_2 \frac{a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_3 + a\_4 \mathbf{x}\_1 \mathbf{x}\_2 + a\_5 \mathbf{x}\_1 \mathbf{x}\_3 + a\_6 \mathbf{x}\_2 \mathbf{x}\_3 + a\_7 \mathbf{x}\_1 \mathbf{x}\_2 \mathbf{x}\_3}{b\_0 + b\_1 \mathbf{x}\_1 + b\_2 \mathbf{x}\_2 + b\_3 \mathbf{x}\_1 \mathbf{x}\_2} \ (\text{see } [4]), \\ y\_4^{3/4} &= w\_2 \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_3 + a\_4 \mathbf{x}\_1 \mathbf{x}\_2 + a\_5 \mathbf{x}\_1 \mathbf{x}\_3 + a\_6 \mathbf{x}\_2 \mathbf{x}\_3 + a\_7 \mathbf{x}\_1 \mathbf{x}\_2 \mathbf{x}\_3)^{11/12}}{b\_0 + b\_1 \mathbf{x}\_1 + b\_2 \mathbf{x}\_2 + b\_3 \mathbf{x}\_1 \mathbf{x}\_2}, \\ y\_4^{1/2} &= w\_2 \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_3 + a\_4 \mathbf{x}\_1 \mathbf{x}\_2 + a\_5 \mathbf{x}\_1 \mathbf{x}\_3 + a\_6 \mathbf{x}\_2 \mathbf{x}\_3 + a\_7 \mathbf{x}\_1 \mathbf{x}\_2 \mathbf{x}\_3)^{5/6}}{b\_0 + b\_1 \mathbf{x}\_1 + b\_2 \mathbf{x}\_2 + b\_3 \mathbf{x}\_1 \mathbf{x}\_2}, \\ y\_4^{1/4} &= w\_2 \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_3 + a\_4 \mathbf{x}\_1 \mathbf{x}\_2 + a\_5 \mathbf{x}\_1 \mathbf{x}\_3 + a\_6 \mathbf{x}\_2 \mathbf{x}\_3 + a\_7 \mathbf{x}\_1 \mathbf{x}\_2 \mathbf{x}\_3)^{3/4}}{b\_0 + b\_1 \mathbf{x}\_1 + b\_2 \mathbf{x}\_2 + b\_3 \mathbf{x}\_1 \mathbf{x}\_2} \end{aligned} \tag{14}$$


**Table 2.** Approximation values of $f(x\_1, x\_2, x\_3) = x\_1 + x\_2 + x\_3$.

We proceed to compute approximations for non-linear functions. Let $u = f(x\_1, x\_2)$ be a function with square-power variables; then we have:

$$F(\mathbf{x}\_1, \mathbf{x}\_2, u, \frac{\partial^\beta u}{\partial \mathbf{x}\_1^\beta}, \frac{\partial^\beta u}{\partial \mathbf{x}\_2^\beta}, \frac{\partial^{2\beta} u}{\partial \mathbf{x}\_1^{2\beta}}, \frac{\partial^{2\beta} u}{\partial \mathbf{x}\_2^{2\beta}}, \frac{\partial^{2\beta} u}{\partial \mathbf{x}\_1^{\beta} \partial \mathbf{x}\_2^{\beta}}) = 0 \tag{15}$$

For example, for $\beta = 1$ we get [4]:

$$\begin{split} y\_{10}^{1} &= \mathcal{W}\_{10} \frac{a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_1^2 + a\_4 \mathbf{x}\_2^2 + a\_5 \mathbf{x}\_1 \mathbf{x}\_2}{b\_0 + b\_1 \mathbf{x}\_1 + b\_2 \mathbf{x}\_1^2} \\ &= \frac{\partial^2 f(\mathbf{x}\_1, \mathbf{x}\_2)}{\partial \mathbf{x}\_1^2} \\ y\_{01}^{1} &= \mathcal{W}\_{01} \frac{a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_1^2 + a\_4 \mathbf{x}\_2^2 + a\_5 \mathbf{x}\_1 \mathbf{x}\_2}{b\_0 + b\_1 \mathbf{x}\_2 + b\_2 \mathbf{x}\_2^2} \\ &= \frac{\partial^2 f(\mathbf{x}\_1, \mathbf{x}\_2)}{\partial \mathbf{x}\_2^2} \end{split} \tag{16}$$

In general, for fractional power $\beta$ we have:

$$\begin{split} \mathbf{y}\_{10}^{\beta} &= \mathbf{w}\_{10} \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_1^2 + a\_4 \mathbf{x}\_2^2 + a\_5 \mathbf{x}\_1 \mathbf{x}\_2)^{\frac{1+\beta}{2}}}{b\_0 + b\_1 \mathbf{x}\_1 + b\_2 \mathbf{x}\_1^2} \\ &= \frac{\partial^{2\beta} f(\mathbf{x}\_1, \mathbf{x}\_2)}{\partial \mathbf{x}\_1^{2\beta}} \\ \mathbf{y}\_{01}^{\beta} &= \mathbf{w}\_{01} \frac{(a\_0 + a\_1 \mathbf{x}\_1 + a\_2 \mathbf{x}\_2 + a\_3 \mathbf{x}\_1^2 + a\_4 \mathbf{x}\_2^2 + a\_5 \mathbf{x}\_1 \mathbf{x}\_2)^{\frac{1+\beta}{2}}}{b\_0 + b\_1 \mathbf{x}\_2 + b\_2 \mathbf{x}\_2^2} \\ &= \frac{\partial^{2\beta} f(\mathbf{x}\_1, \mathbf{x}\_2)}{\partial \mathbf{x}\_2^{2\beta}} \end{split} \tag{17}$$

For example:

$$\begin{aligned} \mathcal{Y}\_{10}^{3/4} &= \mathcal{W}\_{10} \frac{(a\_0 + a\_1\mathbf{x}\_1 + a\_2\mathbf{x}\_2 + a\_3\mathbf{x}\_1^2 + a\_4\mathbf{x}\_2^2 + a\_5\mathbf{x}\_1\mathbf{x}\_2)^{\frac{7}{8}}}{b\_0 + b\_1\mathbf{x}\_1 + b\_2\mathbf{x}\_1^2} \\ \mathcal{Y}\_{10}^{1/2} &= \mathcal{W}\_{10} \frac{(a\_0 + a\_1\mathbf{x}\_1 + a\_2\mathbf{x}\_2 + a\_3\mathbf{x}\_1^2 + a\_4\mathbf{x}\_2^2 + a\_5\mathbf{x}\_1\mathbf{x}\_2)^{\frac{3}{4}}}{b\_0 + b\_1\mathbf{x}\_1 + b\_2\mathbf{x}\_1^2} \\ \mathcal{Y}\_{10}^{1/4} &= \mathcal{W}\_{10} \frac{(a\_0 + a\_1\mathbf{x}\_1 + a\_2\mathbf{x}\_2 + a\_3\mathbf{x}\_1^2 + a\_4\mathbf{x}\_2^2 + a\_5\mathbf{x}\_1\mathbf{x}\_2)^{\frac{5}{8}}}{b\_0 + b\_1\mathbf{x}\_1 + b\_2\mathbf{x}\_1^2} \end{aligned} \tag{18}$$
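Equations (17) and (18) can be evaluated directly. In this sketch the numerator coefficients encode $f(x\_1, x\_2) = (x\_1 + x\_2)^2$ exactly, while the weight and denominator are illustrative placeholders:

```python
def y10(x1, x2, beta, a, b, w=1.0):
    """Neuron y_10 of Equation (17): the complete two-variable quadratic
    polynomial raised to the power (1 + beta)/2 over a quadratic
    denominator. Coefficient values are illustrative, not trained."""
    poly = (a[0] + a[1] * x1 + a[2] * x2
            + a[3] * x1 ** 2 + a[4] * x2 ** 2 + a[5] * x1 * x2)
    return w * poly ** ((1 + beta) / 2) / (b[0] + b[1] * x1 + b[2] * x1 ** 2)

# numerator encodes (x1 + x2)^2 via a3 = a4 = 1, a5 = 2; denominator is 1
a = (0.0, 0.0, 0.0, 1.0, 1.0, 2.0)
b = (1.0, 0.0, 0.0)
# beta = 1 yields (1 + 1)^2 = 4 at (1, 1); smaller beta damps the output
for beta in (1.0, 0.75, 0.5, 0.25):
    print(beta, y10(1.0, 1.0, beta, a, b))
```

At $\beta = 1$ the exponent $(1+\beta)/2$ collapses to 1, recovering the un-rooted polynomial of Equation (16).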

#### *3.2. Modified Information Theory*

In this section, we measure the learning of the neurons of the system in Figure 1. We wish to develop an applicable measure of the information we obtain from observing the occurrence of an event having probability *p*. The approach depends on the probability of extinction, which is described by the fractional Poisson process as follows [28]:

$$P\_{\beta} \left\{ N, \chi \right\} = \frac{(\sigma \chi^{\beta})^{N}}{N!} \sum\_{n=0}^{\infty} \frac{(n+N)!}{n!} \frac{(-\sigma \chi^{\beta})^{n}}{\Gamma(\beta \{n+N\} + 1)} \tag{19}$$

where $\sigma > 0$ is a physical coefficient and $\beta \in (0,1]$. Let *N* be the number of neurons and *I* the average information, and suppose further that the source emits the symbols with probabilities *P*1, *P*2, *...*, *PN*, respectively, such that $P\_i = P\_\beta\{i, \chi\}$. Thus we may compute the total information as follows:

$$I = \sum\_{l=1}^{N} (N P\_l) \log\left(\frac{1}{P\_l}\right) \tag{20}$$

The last assertion modifies the work of Shannon [29]. For example, to compute the average information of the system with *N* = 3, for the last fractional derivative in Table 3, we have:

$$I = \sum\_{l=1}^{3} (N P\_l) \log\left(\frac{1}{P\_l}\right) = 3P\_1 \log\left(\frac{1}{P\_1}\right) + 3P\_2 \log\left(\frac{1}{P\_2}\right) + 3P\_3 \log\left(\frac{1}{P\_3}\right) \tag{21}$$

$$\simeq 0.2408 - 0.09 - 0.0998 = 0.051$$

where the $P\_i$ converge to a hypergeometric function, computed with the help of Maple.
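The probability (19) and the total information (20) can be evaluated as follows. The truncation length and the natural logarithm (Equation (20) does not specify a base) are our choices; for $\beta = 1$, Equation (19) reduces to the ordinary Poisson distribution with mean $\sigma\chi$, which serves as a check:

```python
import math

def frac_poisson(N, chi, sigma=1.0, beta=1.0, terms=60):
    """Fractional Poisson probability P_beta{N, chi} of Equation (19),
    with the series truncated after `terms` summands."""
    total = 0.0
    for n in range(terms):
        total += (math.factorial(n + N) / math.factorial(n)
                  * (-sigma * chi ** beta) ** n
                  / math.gamma(beta * (n + N) + 1))
    return (sigma * chi ** beta) ** N / math.factorial(N) * total

def total_information(probs):
    """Total information of Equation (20), I = sum_l N*P_l*log(1/P_l),
    using the natural logarithm."""
    N = len(probs)
    return sum(N * p * math.log(1 / p) for p in probs)

# beta = 1 collapses (19) to the Poisson law e^(-sigma*chi)*(sigma*chi)^N / N!
print(frac_poisson(2, 1.0))   # ~ exp(-1)/2 = 0.18394
probs = [frac_poisson(l, 1.0) for l in (1, 2, 3)]
print(total_information(probs))
```

The alternating series in (19) converges quickly for $\sigma\chi^{\beta} \le 1$; larger arguments would call for a more careful summation scheme.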


#### **4. Discussion**

The presented 2-variable FD-PNN (Table 1) is able to approximate any linear function, e.g., the simple sum $f(x\_1, x\_2) = x\_1 + x\_2$. The comparison with the D-PNN (normal case) showed that the proposed method converges rapidly to the exact values. For example, the case (1,1) implied ABE = 0.01 at $y^{1/4}$. In this experiment, we let $b\_0 = 1$ and $w\_1 = w\_2 = 1$. Figure 2 shows the approximation of the fractional derivative for the function $f(x\_1, x\_2) = x\_1 + x\_2$; the x-axis represents the values when $x\_1 = x\_2$. It is clear that the interval of convergence is [0.8, 1]. The endowed 3-variable FD-PNN (Table 2) is qualified to approximate any linear function, e.g., the simple sum $f(x\_1, x\_2, x\_3) = x\_1 + x\_2 + x\_3$. The comparison with the D-PNN displayed that the proposed 3-variable method converges swiftly to the exact values. For example, the cases (1,1,0), with $w\_2 = 9/8$, and (1,1,1), with $w\_2 = 1$, yield ABE = 0.063 and 0.0755, respectively, at $y^{1/4}$. Furthermore, Figure 3 shows the interval of convergence at [0.9, 1]; here, we let $x\_1 = x\_2 = x\_3$. A comparable argument can be drawn for the non-linear case, where Table 3 lists approximation values obtained by the FD-PNN. For example, the data (1,1) give the best approximation at $y^{3/4}$ when $w\_{10} = 1.9$. In Figure 4, the x-axis corresponds to the values when $x\_1 = x\_2$; obviously, the interval of convergence is [0.4, 2].

**Figure 2.** Selected fractional approximation derivative of f(x1,x2) = x1 + x2.

**Figure 3.** The fractional approximation y4 of the function f(x1,x2,x3) = x1 + x2 + x3.

**Figure 4.** The fractional approximation $y\_{10}$ of the function $f(x\_1, x\_2) = (x\_1 + x\_2)^2$.

#### **5. Conclusions**

Based on the GMDH-PNN (Figure 1) and modifying the work described in [4], we have suggested a generalized D-PNN, called the FD-PNN. The experimental results showed that the proposed method achieves a quick approximation to the exact values in comparison with the normal method. The generalization depended on the Caputo differential operator. This method can be employed on data to model complex systems. As a next step, we aim to extend this work by utilizing a mixed D-PNN and FD-PNN; e.g., one can consider a function of the form:

$$F(\mathbf{x}\_1, \mathbf{x}\_2, u, \frac{\partial u}{\partial \mathbf{x}\_1}, \frac{\partial u}{\partial \mathbf{x}\_2}, \dots, \frac{\partial^{\beta} u}{\partial \mathbf{x}\_1^{\beta}}, \frac{\partial^{\beta} u}{\partial \mathbf{x}\_2^{\beta}}, \frac{\partial^{2\beta} u}{\partial \mathbf{x}\_1^{2\beta}}, \frac{\partial^{2\beta} u}{\partial \mathbf{x}\_2^{2\beta}}, \frac{\partial^{2\beta} u}{\partial \mathbf{x}\_1^{\beta} \partial \mathbf{x}\_2^{\beta}}, \dots) = 0\tag{22}$$

#### **Acknowledgments**

The author would like to thank the reviewers for their comments on earlier versions of this paper. This research has been funded by the University of Malaya, under Grant No. RG208-11AFR.

#### **Conflicts of Interest**

The author declares no conflict of interest.

#### **References**


