Algorithms | Article | Open Access

17 January 2026

Algorithms for Solving Ordinary Differential Equations Based on Orthogonal Polynomial Neural Networks

Laboratory of Physical Process Modeling, Institute of Cosmophysical Research and Radio Wave Propagation FEB RAS, 684034 Paratunka, Russia

Abstract

This article proposes single-layer neural network algorithms for solving second-order ordinary differential equations based on the functional link principle. According to this principle, the hidden layer of the neural network is replaced by a functional expansion block that enhances the input patterns using orthogonal Chebyshev, Legendre, and Laguerre polynomials. The polynomial neural network algorithms were implemented in the Python programming language using the PyCharm environment. Their performance was tested by solving initial and boundary value problems for the nonlinear Lane–Emden equation. The results are compared with the exact solutions of the problems under consideration, as well as with the solution obtained using a multilayer perceptron. It is shown that polynomial neural networks can perform more efficiently than multilayer neural networks. Furthermore, a neural network based on Laguerre polynomials can, in some cases, be more accurate and faster than neural networks based on Legendre and Chebyshev polynomials. The issue of overfitting in polynomial neural networks, and scenarios for overcoming it, are also considered.

1. Introduction

In recent years, machine learning methods, particularly artificial neural networks (ANNs), have found widespread application beyond traditional data analysis tasks, entering the realm of scientific computing [1]. One promising direction is the use of ANNs for the numerical solution of differential equations, opening new possibilities for modeling complex physical and engineering systems [2]. Compared to classical numerical methods (such as the finite difference or finite element method), neural network approaches offer several potential advantages: they allow obtaining a solution in the form of a continuous and differentiable function over the entire domain, can work efficiently in high-dimensional spaces, and are well suited for problems with inverse parameters or incomplete data.
Various neural network architectures are used to solve ordinary differential equations (ODEs). The most common are multilayer networks, such as the classic multilayer perceptron (MLP) [3] or specialized Physics-Informed Neural Networks (PINNs), which explicitly incorporate governing laws into the loss function [4]. However, these architectures often require significant computational resources for training due to the large number of tunable parameters and can suffer from the vanishing gradient problem. An alternative is offered by single-layer neural networks based on the functional link principle (Functional Link Artificial Neural Network, FLANN) [5,6]. In the FLANN model, the hidden layer is replaced by a functional expansion block that increases the dimensionality of the input data through decomposition into orthogonal polynomial bases (e.g., Chebyshev, Legendre). This significantly reduces the number of network parameters and the number of iterations required for convergence, making FLANN a computationally efficient model with a high training speed.
FLANNs based on Chebyshev (ChNN) and Legendre (LeNN) polynomials have already been successfully applied to solve various ODEs [7,8,9], demonstrating competitive accuracy. However, the literature lacks a systematic investigation and direct comparison of the effectiveness of these two approaches. Moreover, the potential of neural networks based on other classical orthogonal polynomials, particularly Laguerre polynomials (LaNN), for solving initial and boundary value problems for ODEs remains virtually unexplored. Thus, there is a clear knowledge gap: there is no comparative analysis within the entire family of polynomial FLANNs, nor an assessment of their advantages compared to standard deep architectures.
The aim of this work is to fill this gap through a comprehensive study of the performance of the family of single-layer polynomial neural networks ChNN, LeNN, and LaNN. The test examples chosen are the Cauchy problem and the boundary value problem for a second-order nonlinear differential equation of the Lane–Emden type, for which an exact analytical solution is known. The Lane–Emden equation is frequently encountered in astrophysics, in theories of stellar structure, in theories of the thermal behavior of spherical gas clouds, and in theories of thermionic currents [8,10].
It should be noted that the system of Laguerre polynomials forms a complete orthogonal basis in (0, ∞). When restricted to the finite interval [0, 1], they retain linear independence and can effectively approximate functions, especially those exhibiting exponential or singular behavior near zero, which is characteristic of many physical models, including the Lane–Emden equation. One may therefore expect superior performance from the LaNN compared to the LeNN and ChNN.
This allows for an objective assessment of the accuracy of each method. The paper conducts a comparative analysis of the polynomial networks among themselves, as well as with the reference multilayer neural network MLP, based on two key criteria: the accuracy of the solution approximation and the algorithm execution time. It is demonstrated that properly configured polynomial networks can outperform MLP in computational efficiency while maintaining high accuracy, making them a promising tool for rapid numerical integration of differential equations.
The paper is organized as follows. Section 1 introduces the research problem and its objectives. Section 2 provides background on orthogonal polynomials. Section 3 describes the architecture of a neural network built on orthogonal polynomials. Section 4 formulates the Cauchy problem and the boundary value problem for the Lane–Emden equation. Section 5 presents algorithms for solving these problems using polynomial neural networks, organized as pseudocode styled after Python 3.13. Section 6 presents the results for the various polynomial neural networks and compares them with a multilayer neural network (MLP). Section 7 discusses overfitting of polynomial neural networks. Section 8 concludes the study.

2. Preliminaries

Orthogonal polynomials are widely used in approximation theory, in particular for constructing quadrature formulas for approximate calculation of integrals or function approximation [11].
Let f(x) and g(x) be real functions belonging to the class L²_w(a, b).
Definition 1.
Legendre polynomials of degree n for x ∈ [−1, 1] form an orthogonal system and can be computed by the recurrence formula
P_0(x) = 1,  P_1(x) = x,  P_{n+1}(x) = ((2n + 1)/(n + 1)) x P_n(x) − (n/(n + 1)) P_{n−1}(x).  (1)
Definition 2.
Chebyshev polynomials of degree n for x ∈ [−1, 1] form an orthogonal system and can be computed by the recurrence formula
T_0(x) = 1,  T_1(x) = x,  T_{m+1}(x) = 2x T_m(x) − T_{m−1}(x).  (2)
Definition 3.
Laguerre polynomials of degree n for x ∈ [0, ∞) form an orthogonal system and can be computed by the recurrence formula
L_0(x) = 1,  L_1(x) = 1 − x,  L_{m+1}(x) = ((2m + 1 − x) L_m(x) − m L_{m−1}(x)) / (m + 1).  (3)
Recall that orthogonality of the systems (1)–(3) means that the scalar product of any two distinct functions of the system is equal to zero. More details about the properties of orthogonal polynomials can be found in [11].
For simplicity, we introduce the notation OP_m(x), which denotes any of the families of orthogonal polynomials (1)–(3).
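The recurrences (1)–(3) translate directly into code. The following sketch (assuming NumPy; the function names are ours, for illustration) generates the first m polynomials of each family on a grid and numerically checks the orthogonality of two Legendre polynomials on [−1, 1]:

```python
import numpy as np

def legendre(m, x):
    # P_0 .. P_{m-1} via recurrence (1)
    P = [np.ones_like(x), x]
    for n in range(1, m - 1):
        P.append(((2*n + 1)*x*P[n] - n*P[n-1]) / (n + 1))
    return np.stack(P[:m])

def chebyshev(m, x):
    # T_0 .. T_{m-1} via recurrence (2)
    T = [np.ones_like(x), x]
    for n in range(1, m - 1):
        T.append(2*x*T[n] - T[n-1])
    return np.stack(T[:m])

def laguerre(m, x):
    # L_0 .. L_{m-1} via recurrence (3)
    L = [np.ones_like(x), 1 - x]
    for n in range(1, m - 1):
        L.append(((2*n + 1 - x)*L[n] - n*L[n-1]) / (n + 1))
    return np.stack(L[:m])

# orthogonality check: the integral of P_2 * P_3 over [-1, 1] should vanish
x = np.linspace(-1.0, 1.0, 20001)
f = legendre(5, x)[2] * legendre(5, x)[3]
inner = np.sum((f[:-1] + f[1:]) * np.diff(x)) / 2   # trapezoidal rule
print(abs(inner))   # close to zero (machine precision)
```

The same check applies to the Chebyshev and Laguerre families once their respective weight functions are included in the integral.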

3. Neural Network Model Based on Orthogonal Polynomials

Consider a single-layer neural network based on the functional-link principle (Figure 1).
Figure 1. Structure of a single-layer neural network based on orthogonal polynomials.
Figure 1 shows the structure of a single-layer neural network, which consists of one input node, one output layer, and a functional expansion block based on the Legendre, Chebyshev, or Laguerre orthogonal polynomials. By OPNN we denote any one of the three neural networks LeNN, LaNN, or ChNN. The hidden layer is eliminated by transforming the input pattern into a higher-dimensional space using the polynomials (1)–(3).
As input data, we consider a vector x = (x_1, x_2, …, x_n) of dimension n. The enhanced pattern is obtained using the orthogonal polynomials (1)–(3):
[1, OP_1(x_1), …, OP_{m−1}(x_1); 1, OP_1(x_2), …, OP_{m−1}(x_2); …; 1, OP_1(x_n), …, OP_{m−1}(x_n)].  (4)
According to (4), each component of the n-dimensional input x is expanded into m polynomial features. A weighted sum z of the expanded input data is then formed, which is written as
z = Σ_{j=1}^{m} w_j OP_{j−1}(x).  (5)
The weights w_j of the neural network are updated by a backpropagation learning algorithm. The nonlinear hyperbolic tangent function tanh(z) is used as the activation function. The base training procedure is gradient descent: the weights are initialized randomly and then updated along the negative gradient of the error function at each iteration,
w_j^{k+1} = w_j^k − η ∂E(x, w)/∂w_j^k,  (6)
where η is the learning rate taking values from 0 to 1, k is the iteration step used to update the weights, as usual in neural networks, and E(x, w) is the error function, w = {w_1, …, w_m}. In the implementation, the gradient step (6) is performed by the Adam (Adaptive Moment Estimation) optimizer, which provides faster convergence and adapts the effective learning rate.
The output layer N(x, w) is determined by the input vector x and the tunable parameters w_j of the selected neural network by the formula
N(x, w) = tanh(z).  (7)
The activation function (7) can be chosen differently depending on the problem [12].
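The forward pass described above amounts to a basis expansion, a weighted sum, and a tanh activation, as in (7). A minimal NumPy sketch for a Legendre-based network (the names `legendre_basis` and `opnn_output` are ours, not from the paper's implementation):

```python
import numpy as np

def legendre_basis(m, x):
    # enhanced pattern (4): columns [P_0(x), P_1(x), ..., P_{m-1}(x)]
    P = [np.ones_like(x), x]
    for n in range(1, m - 1):
        P.append(((2*n + 1)*x*P[n] - n*P[n-1]) / (n + 1))
    return np.stack(P[:m], axis=-1)       # shape (n_points, m)

def opnn_output(x, w):
    # output (7): N(x, w) = tanh(z), where z is the weighted sum of the basis
    z = legendre_basis(len(w), x) @ w
    return np.tanh(z)

x = np.linspace(0.1, 1.0, 11)             # 11 input points
w = np.zeros(5)                           # m = 5 tunable weights
print(opnn_output(x, w))                  # all zeros before training
```

Swapping in the Chebyshev or Laguerre recurrence changes only the basis function, leaving the rest of the network unchanged.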

4. Problem Statement

Consider the following problem for a second-order ODE for x ∈ [0, 1]:
y″(x) + p(x) y′(x) + q(x) y(x) = f(x),  y(0) = y_0,  y′(0) = y′_0.  (8)
Problem (8) is a Cauchy problem.
Definition 4.
The trial solution of the Cauchy problem (8) will be referred to as the function
y_t(x) = y_0 + y′_0 x + x² N(x, w).  (9)
Consider the following boundary value problem for a second-order ODE for x ∈ [a, b]:
y″(x) + p(x) y′(x) + q(x) y(x) = f(x),  y(a) = y_a,  y(b) = y_b.  (10)
Definition 5.
The trial solution of the boundary value problem (10) will be referred to as the function
y_t(x) = y_a + ((x − a)/(b − a)) (y_b − y_a) + (x − a)(b − x) N(x, w).  (11)
The error function for updating the weights according to Formula (6), taking into account representations (9) or (11), is given as a residual of the following form:
E(x, w) = (1/n) Σ_{i=1}^{n} [y_t″(x_i) + p(x_i) y_t′(x_i) + q(x_i) y_t(x_i) − f(x_i)]².  (12)
The derivatives (gradients) in Formula (12) are calculated taking into account Formulas (9) and (11). In the implementations of the polynomial networks, gradient computation was performed automatically in Python [13].
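To make the construction concrete, the sketch below assembles the trial solution (9) from a Legendre expansion and evaluates the error function (12) for a hypothetical linear test problem y″ + y = 0, y(0) = 0, y′(0) = 1 (that is, p = 0, q = 1, f = 0; this example is ours, not from the paper). Here the x-derivatives are written out by hand via the chain rule, whereas the paper's implementation obtains them by automatic differentiation:

```python
import numpy as np

def leg_basis_derivs(m, x):
    # Legendre P_j, P_j', P_j'' for j = 0..m-1, from recurrence (1)
    P   = [np.ones_like(x), x]
    dP  = [np.zeros_like(x), np.ones_like(x)]
    d2P = [np.zeros_like(x), np.zeros_like(x)]
    for n in range(1, m - 1):
        a, b = (2*n + 1)/(n + 1), n/(n + 1)
        P.append(a*x*P[n] - b*P[n-1])
        dP.append(a*(P[n] + x*dP[n]) - b*dP[n-1])
        d2P.append(a*(2*dP[n] + x*d2P[n]) - b*d2P[n-1])
    return np.stack(P[:m], -1), np.stack(dP[:m], -1), np.stack(d2P[:m], -1)

def trial_derivs(x, w, y0=0.0, dy0=1.0):
    # trial solution (9) with y(0) = y0, y'(0) = dy0, and its x-derivatives
    B, dB, d2B = leg_basis_derivs(len(w), x)
    z, dz, d2z = B @ w, dB @ w, d2B @ w
    N = np.tanh(z)
    s = 1 - N**2                          # tanh'(z)
    dN, d2N = s*dz, s*d2z - 2*N*s*dz**2   # chain rule
    y   = y0 + dy0*x + x**2 * N
    dy  = dy0 + 2*x*N + x**2 * dN
    d2y = 2*N + 4*x*dN + x**2 * d2N
    return y, dy, d2y

def loss(w, x):
    # error function (12) for the test problem y'' + y = 0 (p = 0, q = 1, f = 0)
    y, dy, d2y = trial_derivs(x, w)
    return np.mean((d2y + y)**2)

x = np.linspace(0.0, 1.0, 11)
print(loss(np.zeros(5), x))   # ~= 0.35: with w = 0 the trial solution is y_t = x
```

Minimizing this loss over w drives the trial solution toward sin x, the exact solution of the hypothetical test problem.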

5. Algorithms for Implementing Polynomial Neural Networks

The polynomial neural network algorithms were implemented in Python 3.13 [13] in the PyCharm environment [14] as the computer program OPNN. The OPNN program flowchart is shown in Figure 2.
Figure 2. (a) Stage 1: Data input and neural network initialization; (b) stage 2: training and output of simulation results.
It consists of two stages. The first stage defines the input parameters and helper functions: calculation of the orthogonal polynomials according to (1)–(3), automatic differentiation for calculating derivatives, and functions for visualizing and estimating errors; it also selects the appropriate polynomial neural network. The second stage contains the procedures for training the network and outputting the obtained results. The main procedures of the algorithms are presented as pseudocode (Algorithms 1–7).
PolynomialBasis(x) is a procedure for calculating a polynomial basis for a given input x. Depending on the type of neural network, it invokes different procedures according to definitions (1)–(3).
Algorithm 1 Neural Network Training for ODE Solution
  1: procedure MainTraining(network_type, params)
  2:     Generate training data x_data ∈ [0.1, 1]
  3:     Define exact solution: exact_solution(x) = −2 ln(1 + x²)
  4:
  5:     if network_type = 1 then
  6:         model ← LegendreNN(n_polynomials)
  7:     else if network_type = 2 then
  8:         model ← ChebyshevNN(n_polynomials)
  9:     else if network_type = 3 then
 10:         model ← LaguerreNN(n_polynomials)
 11:     else
 12:         model ← MLP(hidden_size, num_layers)
 13:     end if
 14:
 15:     start_time ← current time()
 16:     (trained_model, loss_history) ← TrainModel(model, x_data)
 17:     training_time ← current time() − start_time
 18:
 19:     Visualize loss history and results
 20:     Output comparative value table
 21:     return trained_model, loss_history, training_time
 22: end procedure
Algorithm 2 Model Training Procedure
  1: procedure TrainModel(net, x_data, epochs, η)
  2:     Initialize Adam optimizer with learning rate η
  3:     Initialize ReduceLROnPlateau scheduler
  4:     loss_history ← [ ]
  5:
  6:     for epoch ← 1 to epochs do
  7:         optimizer.zero_grad()
  8:         loss ← ComputeLoss(net, x_data)
  9:         loss.backward()
 10:         optimizer.step()
 11:         scheduler.step(loss.item())
 12:         loss_history.append(loss.item())
 13:         if epoch mod 100 = 0 then
 14:             Output current loss and learning rate
 15:         end if
 16:     end for
 17:
 18:     return net, loss_history
 19: end procedure
Algorithm 3 Polynomial Neural Network Forward Pass
  1: procedure Forward(x, weights)
  2:     polys ← PolynomialBasis(x)
  3:     output ← Σ_i weights[i] × polys[:, i]
  4:     return output
  5: end procedure
Algorithm 4 Loss Function Computation
  1: procedure ComputeLoss(net, x)
  2:     Set x.requires_grad ← True
  3:     (y, dy_dx, d2y_dx2) ← ComputeDerivatives(x, net)
  4:
  5:     Compute equation components:
  6:     term1 ← d2y_dx2
  7:     term3 ← 4 × (2 × exp(y) + exp(0.5 × y))
  8:
  9:     if x > 10⁻⁶ then
 10:         term2 ← (2/x) × dy_dx
 11:     else
 12:         term2 ← 0
 13:     end if
 14:
 15:     eq_residual ← term1 + term2 + term3
 16:     mask ← (x > 10⁻⁶)
 17:     loss ← mean(eq_residual[mask]²)
 18:
 19:     return loss
 20: end procedure
Algorithm 5 Trial Solution Derivative Computation
  1: procedure ComputeDerivatives(x, net)
  2:     x_tensor ← x (with requires_grad = True)
  3:     y ← TrialSolution(x_tensor, net)
  4:
  5:     Compute first derivative:
  6:     dy_dx ← ∂y/∂x (using automatic differentiation)
  7:
  8:     Compute second derivative:
  9:     d2y_dx2 ← ∂(dy_dx)/∂x (using automatic differentiation)
 10:
 11:     return y, dy_dx, d2y_dx2
 12: end procedure
Algorithm 6 Trial Solution with Boundary Conditions (The Cauchy Problem)
  1: procedure TrialSolution(x, net)
  2:     net_output ← net(x)
  3:     y ← x² × net_output
  4:     return y
  5: end procedure
A trial solution to the Cauchy problem according to Formula (9) is calculated using Algorithm 6.
Algorithm 7 Trial Solution with Boundary Conditions (Boundary Value Problem)
  1: procedure TrialSolution(x, net)
  2:     net_output ← net(x)
  3:     y ← y_a × (1 − x) + y_b × x + x(1 − x) × net_output
  4:     return y
  5: end procedure
A trial solution to the boundary value problem according to Formula (11) is calculated using Algorithm 7.
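Putting Algorithms 1–7 together, the following condensed, runnable sketch trains a LeNN-style network on the Cauchy problem (13) with trial solution y_t = x² N(x, w). It mirrors the structure of the pseudocode under simplifying assumptions of ours: NumPy instead of PyTorch, a hand-coded Adam update, and central-difference gradients in place of automatic differentiation (cheap here, since there are only m = 5 weights); all function names are illustrative:

```python
import numpy as np

def leg_basis_derivs(m, x):
    # Legendre P_j, P_j', P_j'' for j = 0..m-1, built from recurrence (1)
    P   = [np.ones_like(x), x]
    dP  = [np.zeros_like(x), np.ones_like(x)]
    d2P = [np.zeros_like(x), np.zeros_like(x)]
    for n in range(1, m - 1):
        a, b = (2*n + 1)/(n + 1), n/(n + 1)
        P.append(a*x*P[n] - b*P[n-1])
        dP.append(a*(P[n] + x*dP[n]) - b*dP[n-1])
        d2P.append(a*(2*dP[n] + x*d2P[n]) - b*d2P[n-1])
    return np.stack(P[:m], -1), np.stack(dP[:m], -1), np.stack(d2P[:m], -1)

def residual_loss(w, x):
    # trial solution y_t = x^2 tanh(z) (Definition 4); mean squared residual of (13)
    B, dB, d2B = leg_basis_derivs(len(w), x)
    z, dz, d2z = B @ w, dB @ w, d2B @ w
    N = np.tanh(z)
    s = 1 - N**2                      # tanh'(z)
    dN, d2N = s*dz, s*d2z - 2*N*s*dz**2
    y   = x**2 * N
    dy  = 2*x*N + x**2 * dN
    d2y = 2*N + 4*x*dN + x**2 * d2N
    r = d2y + (2.0/x)*dy + 4.0*(2.0*np.exp(y) + np.exp(0.5*y))
    return np.mean(r**2)

def num_grad(f, w, h=1e-6):
    # central-difference gradient (the paper uses automatic differentiation instead)
    g = np.zeros_like(w)
    for j in range(len(w)):
        e = np.zeros_like(w); e[j] = h
        g[j] = (f(w + e) - f(w - e)) / (2*h)
    return g

rng = np.random.default_rng(0)
x = np.linspace(0.1, 1.0, 11)         # 11 training points, as in Section 6
w = 0.1 * rng.standard_normal(5)      # m = 5 weights
m1, v = np.zeros_like(w), np.zeros_like(w)
eta, b1, b2, eps = 0.01, 0.9, 0.999, 1e-8
losses = []
for k in range(1, 2001):              # Adam updates (cf. Algorithm 2)
    g = num_grad(lambda u: residual_loss(u, x), w)
    m1 = b1*m1 + (1 - b1)*g
    v  = b2*v + (1 - b2)*g**2
    w = w - eta * (m1/(1 - b1**k)) / (np.sqrt(v/(1 - b2**k)) + eps)
    losses.append(residual_loss(w, x))

exact = -2*np.log(1 + x**2)           # exact solution of the Cauchy problem (13)
y_t = x**2 * np.tanh(leg_basis_derivs(5, x)[0] @ w)
print(losses[0], losses[-1], np.max(np.abs(y_t - exact)))
```

The loss should fall steeply in the early epochs and then flatten, matching the qualitative training behavior reported in Section 6.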

6. Research Results

Consider the operation of orthogonal polynomial networks on the example of solving the Cauchy problem for the singular nonlinear Lane–Emden differential equation
y″(x) + (2/x) y′(x) + 4 (2 e^{y(x)} + e^{y(x)/2}) = 0,  y(0) = 0,  y′(0) = 0,  (13)
as well as for the boundary value problem
y″(x) + (2/x) y′(x) + 4 (2 e^{y(x)} + e^{y(x)/2}) = 0,  y(0) = 0,  y(1) = −2 ln 2.  (14)
Remark 1.
It is known from refs. [8,10] that the Cauchy problem (13) has an exact solution of the form
y(x) = −2 ln(1 + x²).  (15)
Based on Definition 4 and Formula (9), the trial solution for the Cauchy problem (13) is written as
y_t(x) = x² N(x, w),  (16)
and for the boundary value problem (14), for which the exact solution (15) also holds, the trial solution takes, according to representation (11), the form
y_t(x) = −2 ln 2 · x + x(1 − x) N(x, w).  (17)
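The exact solution (15) is easy to verify numerically: substituting y = −2 ln(1 + x²), y′ = −4x/(1 + x²), y″ = −4(1 − x²)/(1 + x²)² into the left-hand side of (13) gives zero. A quick NumPy check:

```python
import numpy as np

x = np.linspace(0.1, 1.0, 11)
y   = -2*np.log(1 + x**2)           # exact solution (15)
dy  = -4*x/(1 + x**2)
d2y = -4*(1 - x**2)/(1 + x**2)**2
# residual of the Lane-Emden equation (13); vanishes to machine precision
residual = d2y + (2/x)*dy + 4*(2*np.exp(y) + np.exp(y/2))
print(np.max(np.abs(residual)))
# boundary value for problem (14): y(1) = -2 ln 2
print(y[-1], -2*np.log(2.0))
```

The same exact solution therefore serves as the reference for both the Cauchy problem (13) and the boundary value problem (14).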
The parameters of the orthogonal polynomial neural networks were set as follows: order of polynomials m = 5; number of training points from the segment [0.1, 1], 11; number of epochs, 10,000; initial learning rate η = 0.01. The calculation results for solving the Cauchy problem (13) are given in Table 1.
Table 1. Solution of the Cauchy problem (13) using polynomial neural networks ( m = 5 ).
From Table 1, it can be seen that the polynomial neural networks give approximately the same result. Figure 3 shows the solution graph obtained by the LaNN, and Figure 4 shows the loss graph during training.
Figure 3. Example of LaNN operation.
Figure 4. Loss graph during training with LaNN.
The graph in Figure 3 demonstrates the efficiency of the Laguerre NN in solving the Cauchy problem (13). The loss function graph on a logarithmic scale is shown in Figure 4.
From Figure 4, it can be seen that the loss decreases with an increase in the number of epochs, which indicates successful model training. At the beginning of training (first 200 epochs), a rapid decrease in loss is observed, then the rate of decrease slows down, which is typical for the neural network training process. Note that the MLP method exhibits vanishing gradients. However, the single-layer polynomial approach inherently avoids this effect, contributing to the stability observed in Figure 4.
The loss graph allows for adjusting the hyperparameters of the neural network: learning rate, number of epochs, etc. The solution of the boundary value problem (14) is given in Table 2.
Table 2. Solution of the boundary value problem (14) using polynomial neural networks ( m = 5 ).
Here, it can be noted that the LaNN solved the problem more accurately than the ChNN, but was less accurate than the LeNN.
Table 3 shows the work of the multilayer MLP neural network with two hidden layers of 20 and 40 neurons, respectively, for solving the Cauchy problem (13).
Table 3. Solution of the Cauchy problem (13) using the multilayer MLP network.
It can be seen from Table 3 that to achieve acceptable accuracy in MLP, it is necessary to increase the number of neurons in each layer. This leads, as we will show below, to an increase in the algorithm execution time.
Table 4 shows the solution of the boundary value problem (14) using the MLP network.
Table 4. Solution of the boundary value problem (14) using the multilayer MLP network.
From Table 4 it can be seen that increasing the number of neurons in each hidden layer leads to a deterioration in the accuracy of solving the boundary value problem (14). Let us show how the execution time of algorithms for solving problems (13) and (14) depends on the choice of neural network architecture. Neural network parameters: initial learning rate η = 0.01 , number of epochs 1000, order of polynomials m = 5 .
Table 5 shows the times spent on executing the algorithm for solving the Cauchy problem (13).
Table 5. Execution times of the algorithm for solving the Cauchy problem (13).
From Table 5 it can be seen that, in execution time, ChNN < MLP(20, 2) < LaNN < LeNN. It should be noted that the neural network based on Laguerre orthogonal polynomials coped with the task faster than the one based on Legendre orthogonal polynomials. Moreover, the LaNN and ChNN were faster in solving the Cauchy problem (13) than the multilayer MLP network.
The results of solving the boundary value problem (14) are given in Table 6.
Table 6. Execution times of the algorithm for solving the boundary value problem (14).
From Table 6, it follows that ChNN < MLP(20, 2) < LaNN < LeNN, and therefore the trend in the execution times of the algorithms for solving the boundary value problem (14) remains as in Table 5.
Here, a multilayer MLP neural network with three hidden layers of 20 neurons each was used. From Table 6, it also follows that single-layer orthogonal polynomial neural networks, for example the LeNN, can work faster than multilayer MLP-type neural networks.
It should be noted that increasing the order m of the polynomial can also lead to a deterioration in the result. Let us take the parameters for the LeNN from the previous example (Table 3) and find solutions to problems (13) and (14) for polynomial orders m = 5 and m = 7.
The research results are given in Table 7 and Table 8.
Table 7. Solution of the Cauchy problem (13) using the LeNN with m = 5 and m = 7 .
Table 8. Solution of the boundary value problem (14) using the single-layer LeNN network with m = 5 and m = 7 .
From Table 7, we see that with an increase in the order of the polynomial, the calculation error increased, although the algorithm execution time decreased from 10.01 s for m = 5 to 8.92 s for m = 7 .
From Table 8, we see that increasing the order of the Legendre polynomial leads to a deterioration in accuracy, while the algorithm execution time changes from 6.54 s at m = 5 to 8.66 s at m = 7.
Based on the above, we can conclude that the order of the polynomial requires careful tuning, as well as other hyperparameters of the neural network.

7. Overfitting Issues

It should be noted that the issue of overfitting remains relevant when using neural networks to solve differential equations, although it is understood differently in this context. Here, the "training sample" is the differential equation itself, which defines an infinite set of constraints that the solution function must satisfy, and the network tries to approximate a function satisfying these constraints everywhere in a given domain. There is no division of data into training and test samples: we seek a single solution for the entire domain, so the concept of "new data" from the same domain disappears, and pure "memorization of noise in the training data" is not applicable, since there are no discrete data initially. Instead, overfitting is understood mainly as overfitting at collocation points. Since fulfillment of the equation cannot be checked on the entire infinite domain, a finite set of collocation points is chosen at which the residual of the equation is calculated and minimized. Between collocation points the network may behave incorrectly, for example, oscillate: the residual at the training collocation points may be very low (sometimes almost zero), yet huge when the solution is checked on another, denser grid of points. This is a sign of overfitting. A similar effect can arise at the boundary (initial) points of the domain.
Overfitting can be combated by methods similar to those used against classical overfitting, for example, L1/L2 regularization or adaptive weights in the loss function. However, in our opinion, the most effective and common method is dynamic selection of collocation points: instead of using a fixed set, the collocation points are selected anew (often randomly) at each iteration or after several epochs. This prevents the network from "memorizing" their location and forces it to satisfy the equation over the entire domain. One must also keep in mind the danger of a non-optimal choice of collocation points: if the points at which the residual is calculated are chosen poorly (for example, uniformly over the entire domain), the network may learn well in some areas and poorly in others, such as areas of high gradients [15]. We also note article [4], in which the authors study overfitting of the multilayer PINN caused by gradient imbalance.
In this article, the selection of collocation points was static. However, dynamic selection algorithms are currently being developed using Chebyshev interpolation, random uniform sampling, and a number of other methods. The results of this research will be discussed in a future paper.
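The dynamic selection strategy can be sketched as follows: each epoch draws a fresh random set of collocation points before the gradient step, so the network cannot specialize to any fixed grid. The toy example below is ours, not from the paper: it applies the idea to the first-order problem y′ = −y, y(0) = 1 with trial solution y_t = 1 + x·N(x, w), using a simple monomial expansion and numerical gradients to keep the sketch short:

```python
import numpy as np

def basis(x, m):
    # simple monomial expansion and its derivative (illustrative choice)
    B  = np.stack([x**j for j in range(m)], -1)
    dB = np.stack([np.zeros_like(x)] + [j * x**(j - 1) for j in range(1, m)], -1)
    return B, dB

def loss(w, x):
    # residual of the toy problem y' = -y, y(0) = 1, trial solution y_t = 1 + x*N
    B, dB = basis(x, len(w))
    z, dz = B @ w, dB @ w
    N = np.tanh(z)
    dN = (1 - N**2) * dz
    y, dy = 1 + x*N, N + x*dN
    return np.mean((dy + y)**2)

def num_grad(f, w, h=1e-6):
    # central-difference gradient of the scalar loss
    g = np.zeros_like(w)
    for j in range(len(w)):
        e = np.zeros_like(w); e[j] = h
        g[j] = (f(w + e) - f(w - e)) / (2*h)
    return g

rng = np.random.default_rng(1)
w = np.zeros(4)
for epoch in range(300):
    x = rng.uniform(0.0, 1.0, 16)   # dynamic selection: fresh collocation points each epoch
    w = w - 0.05 * num_grad(lambda u: loss(u, x), w)

x_dense = np.linspace(0.0, 1.0, 201)
print(loss(w, x_dense))             # residual stays small on a dense, unseen grid
```

Evaluating the residual on a dense grid that was never used for training is exactly the check, described above, that distinguishes a genuinely small residual from overfitting at a fixed set of collocation points.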

8. Conclusions

To our knowledge, this work is the first investigation of the approximate solution of the Cauchy problem and the boundary value problem for the Lane–Emden differential equation using three single-layer neural networks based on Legendre, Chebyshev, and Laguerre orthogonal polynomials. It is shown that the new neural network based on Laguerre polynomials gives acceptable results and in some cases can be more accurate than the ChNN and faster than the LeNN. All three polynomial neural networks were also compared with the multilayer MLP neural network. The research has shown that single-layer polynomial neural networks can work better than the multilayer MLP. A key parameter for polynomial neural networks, in addition to the number of training epochs and the learning rate, is the order of the polynomial; this order must be chosen carefully, since increasing or decreasing it can degrade the results. Note that in multilayer neural networks it is necessary to adjust the number of hidden layers and the number of neurons in each of them. This means that single-layer polynomial neural networks have a simpler architecture than multilayer ones and, accordingly, are easier to implement on a computer. The choice of neural network architecture depends on the specific ordinary differential equation problem.
It should be noted that the effectiveness of the LaNN on stiff systems or on systems of integer-order partial differential equations has yet to be tested; this is a topic for further research on the LaNN method. Another direction of research is related to solving fractional differential equations and their systems [16,17,18,19,20], as well as partial differential equations, similar to the work [21].
From an applied research perspective, polynomial neural network methods can also be applied to large-scale economic problems in various regions, for example, in modeling the energy sector, similar to work [22].

Funding

The work was carried out within the framework of the state assignment of IKIR FEB RAS (reg. No. 124012300245-2).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ChNN    Chebyshev Neural Network
FLANN   Functional Link Artificial Neural Network
LeNN    Legendre Neural Network
LaNN    Laguerre Neural Network
MLP     Multilayer Perceptron
PINN    Physics-Informed Neural Network

References

  1. Abdolrasol, M.G.M.; Hussain, S.M.S.; Ustun, T.S.; Sarker, M.R.; Hannan, M.A.; Mohamed, R.; Ali, J.A.; Mekhilef, S.; Milad, A. Artificial Neural Networks Based Optimization Techniques: A Review. Electronics 2021, 10, 2689.
  2. Tan, L.S.; Zainuddin, Z.; Ong, P. Solving ordinary differential equations using neural networks. AIP Conf. Proc. 2018, 1974, 020070.
  3. Riedmiller, M.; Lernen, A. Multi layer perceptron. Mach. Learn. Lab Spec. Lect. Univ. Freibg. 2014, 24, 11–60.
  4. Wang, S.; Teng, Y.; Perdikaris, P. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM J. Sci. Comput. 2021, 43, A3055–A3081.
  5. Fan, Q.; Zhang, X.; Wen, Z.; Xu, L.; Zhang, Q. Nonlinear Compensation of the Linear Variable Differential Transducer Using an Advanced Snake Optimization Integrated with Tangential Functional Link Artificial Neural Network. Sensors 2025, 25, 1074.
  6. Pao, Y.H.; Phillips, S.M. The functional link net and learning optimal control. Neurocomputing 1995, 9, 149–164.
  7. Chaharborj, S.S.; Chaharborj, S.S.; See, P.P. Application of Chebyshev neural network to solve Van der Pol equations. Int. J. Basic Appl. Sci. 2021, 10, 7–19.
  8. Mall, S.; Chakraverty, S. Application of Legendre neural network for solving ordinary differential equations. Appl. Soft Comput. 2016, 43, 347–356.
  9. Yang, Y.; Hou, M.; Luo, J. A novel improved extreme learning machine algorithm in solving ordinary differential equations by Legendre neural network methods. Adv. Differ. Equ. 2018, 2018, 469.
  10. Yildirim, A.; Ozis, T. Solutions of singular IVPs of Lane–Emden type by the variational iteration method. Nonlinear Anal. 2009, 70, 2480–2484.
  11. Chihara, T.S. An Introduction to Orthogonal Polynomials; Courier Corporation: Mineola, NY, USA, 2011.
  12. Chaharborj, S.S.; Chaharborj, S.S.; Mahmoudi, Y. Study of fractional order integro-differential equations by using Chebyshev neural network. J. Math. Stat. 2017, 13, 1–13.
  13. Shaw, Z.A. Learn Python the Hard Way; Addison-Wesley Professional: Boston, MA, USA, 2024.
  14. Van Horn, B.M., II; Nguyen, Q. Hands-on Application Development with PyCharm: Build Applications Like a Pro with the Ultimate Python Development Tool; Packt Publishing Ltd.: Birmingham, UK, 2023.
  15. Nabian, M.A.; Gladstone, R.J.; Meidani, H. Efficient training of physics-informed neural networks via importance sampling. Comput. Civ. Infrastruct. Eng. 2021, 36, 962–977.
  16. Mall, S.; Chakraverty, S. Single layer Chebyshev neural network model for solving elliptic partial differential equations. Neural Process. Lett. 2017, 45, 825–840.
  17. Tverdyi, D.; Parovik, R. Application of the Fractional Riccati Equation for Mathematical Modeling of Dynamic Processes with Saturation and Memory Effect. Fractal Fract. 2022, 6, 163.
  18. Allahviranloo, T.; Jafarian, A.; Saneifard, R.; Ghalami, N.; Measoomy Nia, S.; Kiani, F.; Fernandez-Gamiz, U.; Noeiaghdam, S. An application of artificial neural networks for solving fractional higher-order linear integro-differential equations. Bound. Value Probl. 2023, 2023, 74.
  19. Tverdiy, D.A.; Makarov, E.O.; Parovik, R.I. Hereditary Mathematical Model of the Dynamics of Radon Accumulation in the Accumulation Chamber. Mathematics 2023, 11, 850.
  20. Nguyen, T.D. Neural network method for solving the boundary value problem for fractional differential equations. Comput. Methods Program. 2025, 26, 245–253.
  21. Ali, I. Advanced machine learning technique for solving elliptic partial differential equations using Legendre spectral neural networks. Electron. Res. Arch. 2025, 33, 826–848.
  22. Giannelos, S.; Konstantelos, I.; Pudjianto, D.; Strbac, G. The impact of electrolyser allocation on Great Britain's electricity transmission system in 2050. Int. J. Hydrogen Energy 2026, 202, 153097.
