
Differential Transform Method and Neural Network for Solving Variational Calculus Problems

by Rafał Brociek *,† and Mariusz Pleszczyński †
Department of Mathematics Applications and Methods for Artificial Intelligence, Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2024, 12(14), 2182; https://doi.org/10.3390/math12142182
Submission received: 25 June 2024 / Revised: 5 July 2024 / Accepted: 8 July 2024 / Published: 11 July 2024
(This article belongs to the Special Issue Applications of Symbolic and Soft Computations in Applied Sciences)

Abstract

The history of variational calculus dates back to the late 17th century, when Johann Bernoulli posed his famous problem concerning the brachistochrone curve. Since then, variational calculus has developed intensively, as many problems in physics and engineering are described by equations from this branch of mathematical analysis. This paper presents two non-classical, distinct methods for solving such problems. The first method is based on the differential transform method (DTM), which seeks an analytical solution in the form of a certain functional series. The second method, on the other hand, is based on the physics-informed neural network (PINN), where artificial intelligence in the form of a neural network is used to solve the differential equation. In addition to describing both methods, this paper also presents numerical examples along with a comparison of the obtained results. Comparing the two methods, DTM produced marginally more accurate results than PINNs. While PINNs exhibited slightly higher errors, their performance remained commendable. The key strengths of neural networks are their adaptability and ease of implementation. Both approaches discussed in the article are effective for addressing the examined problems.

1. Introduction

The issues in the calculus of variations have rich applications in many different problems [1]. Using these methods, one can, for example, search for curves connecting fixed points (also considering problems with a moving boundary) and satisfying certain conditions (such as minimal length or the shortest time to travel between them), surfaces meeting specific conditions (such as the surface with the smallest area connecting two fixed circles), problems concerning the propagation of light in inhomogeneous media, isoperimetric problems, geodesic curves, quantum mechanics, hydrodynamics, and many others [2,3,4,5].
The problem of variational calculus can be described in many different ways; this paper focuses on finding the extremum of the following functional:
J(y(x)) = \int_a^b F\left(x, y(x), y'(x), \ldots, y^{(n)}(x)\right) dx, \qquad (1)
where y(x) is the sought n-times differentiable function defined on the interval [a, b] that satisfies the following boundary conditions:
y^{(i)}(a) = A_i, \qquad y^{(i)}(b) = B_i, \qquad 0 \leq i \leq n-1, \qquad (2)
where a, b ∈ R, a < b, A_i, B_i ∈ R for 0 ≤ i ≤ n − 1, and F is a certain (n+1)-times differentiable function of (n+2) variables, defined on the set [a, b] × R^{n+1}.
For the problem formulated in this way, a necessary condition for the functional (1) to attain a local extremum at the sought function y(x) under the boundary conditions (2) is that y satisfies the following differential equation:
F_y - \frac{d}{dx} F_{y'} + \frac{d^2}{dx^2} F_{y''} - \cdots + (-1)^n \frac{d^n}{dx^n} F_{y^{(n)}} = 0, \qquad (3)
(this is the so-called Euler–Poisson equation) with the conditions (2) [6,7].
In the general case, the condition (3) is only a necessary condition for the existence of an extremum of the functional (1). However, in most technical problems, it is also a sufficient condition. Therefore, in this paper, we search for these extrema in this manner.
Equation (3) is a differential equation of order 2n and appears relatively rarely for larger values of n. It is more commonly encountered for n = 1 or n = 2. In these cases, it takes the following form:
F_y + \sum_{i=1}^{n} (-1)^i \frac{d^i}{dx^i} F_{y^{(i)}} =
\begin{cases}
F_y - \dfrac{d}{dx} F_{y'}, & \text{for } n = 1, \\
F_y - \dfrac{d}{dx} F_{y'} + \dfrac{d^2}{dx^2} F_{y''}, & \text{for } n = 2.
\end{cases}
In the special case of n = 1 , this equation is known as the Euler equation. Often, there are problems where the function F depends on many unknown functions, e.g., y i ( x ) (and their derivatives). In such cases, the necessary condition for the existence of an extremum of the functional (the Euler–Lagrange equations) reduces to a system of differential equations.
Unfortunately, very often, the differential equations (or their systems) obtained from the corresponding conditions are nonlinear or cannot be expressed in normal form (which is required by many methods for solving such problems, including classical ones like the Runge–Kutta method and less-known ones like the Adomian decomposition method [8,9]). For this reason, we need to seek new methods that can handle such forms of the problem.
Such methods include the differential transformation method (DTM) discussed in Section 2 and the physics-informed neural networks (PINNs) presented in Section 3. PINNs are a type of specifically designed neural network devoted to solving problems involving differential equations that describe physical laws. PINNs integrate knowledge of physical laws, allowing for the more accurate and efficient modeling of physical phenomena compared with traditional neural networks. PINNs can generalize better to unseen conditions because they are guided by underlying physical laws, which remain consistent across different scenarios. This contrasts with traditional neural networks, which might struggle to extrapolate beyond their training data. Section 4 is devoted to numerical examples illustrating the effectiveness of both methods and comparing them using selected examples.

2. DTM Method

The differential transform method (DTM) was proposed at the end of the 20th century and was quickly recognized and adapted to solve many different problems. Initially, it was used to solve ordinary differential equations, but its applications have expanded to the systems of such equations [10,11,12,13,14,15], partial differential equations [16,17], differential algebraic equations [18], integral equations [19,20], delay differential equations [21,22], fuzzy differential equations [23,24], and fractional differential equations [25,26,27,28]. This method has also been applied to certain important types of equations, such as Schrödinger equations and nonlinear Klein–Gordon equations, and in modeling predator–prey phenomena [29,30,31], among many other applications [32,33,34,35,36].
If a function, f, can be expanded into a Maclaurin series (such a function is called the original), then for this function, we can write the following:
f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!} x^k = \sum_{k=0}^{\infty} F(k) \, x^k,
where F(k) = f^{(k)}(0)/k! is called the differential transform of f.
Thus, knowing the properties of such a transformation and the transforms of basic functions, we can solve many of the problems mentioned in the previous paragraph.
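As a quick illustration (our sketch, not part of the original paper), these coefficients can be computed directly from the definition; the helper below uses SymPy and rebuilds the truncated series, which should reproduce f near x = 0.

```python
# A minimal sketch (ours): the differential transform F(k) = f^(k)(0)/k!
# computed with SymPy, illustrated on f(x) = e^(2x), whose transform is 2^k/k!.
import sympy as sp

x = sp.symbols('x')

def dtm(f, n):
    """Return the list [F(0), ..., F(n)] for the original f."""
    return [sp.diff(f, x, k).subs(x, 0) / sp.factorial(k) for k in range(n + 1)]

coeffs = dtm(sp.exp(2 * x), 6)
print(coeffs)  # [1, 2, 2, 4/3, 2/3, 4/15, 4/45], i.e., F(k) = 2**k / k!
approx = sum(c * x**k for k, c in enumerate(coeffs))
print(float(approx.subs(x, 0.1)), float(sp.exp(0.2)))  # both about 1.2214
```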
Below, we present the transforms of selected functions and selected properties of DTM (the included functions and properties were used in the examples presented in Section 4; proofs of these properties, among others, can be found in [16,25]).
Theorem 1. 
Let x be an element of the domain of the considered function, f, and let k ∈ Z, k ≥ 0.
If f(x) = x^m, m ∈ Z, m ≥ 0, then
F(k) = \delta(k - m) = \begin{cases} 1, & k = m, \\ 0, & k \neq m. \end{cases} \qquad (4)
If f(x) = e^{ax}, a ∈ R, then
F(k) = \frac{a^k}{k!}. \qquad (5)
If f(x) = \sin(ax), a ∈ R, then
F(k) = \frac{a^k}{k!} \sin\left(\frac{k\pi}{2}\right). \qquad (6)
If f(x) = \cos(ax), a ∈ R, then
F(k) = \frac{a^k}{k!} \cos\left(\frac{k\pi}{2}\right). \qquad (7)
If f(x) = u(x) ± v(x), then
F(k) = U(k) ± V(k). \qquad (8)
If f(x) = a · u(x), a ∈ R, then
F(k) = a · U(k). \qquad (9)
If f(x) = x^m · u(x), then
F(k) = \begin{cases} 0, & k < m, \\ U(k - m), & k \geq m. \end{cases} \qquad (10)
If f(x) = u(x) · v(x), then
F(k) = \sum_{i=0}^{k} U(i) V(k-i) = \sum_{i=0}^{k} V(i) U(k-i). \qquad (11)
If f(x) = u(x) · v(x) · w(x), then
F(k) = \sum_{i=0}^{k} \sum_{j=0}^{k-i} U(i) V(j) W(k-i-j) = \sum_{i=0}^{k} \sum_{j=0}^{i} U(j) V(i-j) W(k-i). \qquad (12)
If f(x) = u^{(n)}(x), then
F(k) = \frac{(k+n)!}{k!} U(k+n). \qquad (13)
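The properties above are easy to sanity-check numerically. For instance, the snippet below (our check, reusing the dtm helper from the sketch earlier in this section) confirms property (11): the transform of a product is the Cauchy convolution of the individual transforms.

```python
# Sketch (ours): verify property (11), F(k) = sum_{i=0}^k U(i) V(k-i),
# for u(x) = sin(x) and v(x) = e^x, using the dtm() helper defined earlier.
u, v = sp.sin(x), sp.exp(x)
U, V = dtm(u, 8), dtm(v, 8)
conv = [sum(U[i] * V[k - i] for i in range(k + 1)) for k in range(9)]
print(conv == dtm(u * v, 8))  # True
```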

3. Physics-Informed Neural Network

Physics-informed neural networks (PINNs) represent a novel approach to solving differential equations (both ordinary and partial) and integro-differential equations, harnessing the power of deep learning. Among the pioneering works on this approach are the papers by Raissi et al. [37] and Karniadakis et al. [38]. PINNs integrate the principles of physics in the form of differential equations with a neural network architecture. This methodology embeds differential equations and initial/boundary conditions into the loss function of the neural network, ensuring that predictions align with the equations describing the system. As a result, PINNs offer a flexible and efficient alternative to traditional numerical methods, which often face limitations related to computational costs and scalability, especially in high-dimensional spaces. Once the model is trained, it can provide solutions for any grid, making this approach advantageous over classical methods like finite difference schemes. Additionally, PINNs excel in handling complex boundary and initial conditions, and they can directly incorporate experimental data into the model. This capability not only enhances solution accuracy but also enables the model to learn and adapt to real-world scenarios where data may be sparse or noisy. Consequently, PINNs have achieved significant success in various applications, including fluid dynamics, materials science, and biological systems [39,40,41].
A PINN architecture consists of the classical components of a feedforward neural network trained by backpropagation: hidden layers, nodes per hidden layer, and activation functions. To train the network, an optimization method such as the Adam optimizer is used. The PINN system presented in this article consists of two main modules (see Figure 1): a fully connected neural network and a loss function module that computes the loss as the weighted L_2 norm of the residuals of both the governing ODE and the boundary conditions.
For the considered ordinary differential equation, the following applies:
F\left(y^{(n)}(x), \ldots, y''(x), y'(x), y(x), x\right) = 0, \qquad x \in [x_L, x_R],
With Dirichlet boundary conditions, the following apply:
y(x_L) = y_L, \qquad y(x_R) = y_R,
The loss function takes the following form:
\mathcal{L} = \sum_{i=1}^{N} \left[ F\left(y^{(n)}(x_i), \ldots, y''(x_i), y'(x_i), y(x_i), x_i\right) \right]^2 + \left( y(x_L) - y_L \right)^2 + \left( y(x_R) - y_R \right)^2,
where N is the number of collocation/training points inside domain [ x L , x R ] . After building the neural network and defining the loss function, the neural network is then trained to find the best parameters, Θ * (weights and biases), by minimizing the loss function, L .
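A minimal PyTorch sketch of this scheme is given below (our illustration; the helper names and training details are ours, not taken from the paper). It assembles the fully connected tanh network and minimizes the mean squared ODE residual over the collocation points plus the squared boundary errors.

```python
# A minimal PINN sketch (ours) for a second-order ODE F(y'', y', y, x) = 0
# with Dirichlet conditions y(xL) = yL, y(xR) = yR.
import torch

def make_net(width=10, hidden=3):
    layers = [torch.nn.Linear(1, width), torch.nn.Tanh()]
    for _ in range(hidden - 1):
        layers += [torch.nn.Linear(width, width), torch.nn.Tanh()]
    return torch.nn.Sequential(*layers, torch.nn.Linear(width, 1))

def train(residual, xL, xR, yL, yR, n_points=128, steps=5000, lr=1e-3):
    """residual(x, y, dy, d2y) -> tensor of ODE residuals at the points x."""
    net = make_net()
    xs = torch.linspace(xL, xR, n_points).reshape(-1, 1).requires_grad_(True)
    xb = torch.tensor([[xL], [xR]])                  # boundary points
    yb = torch.tensor([[yL], [yR]])                  # boundary values
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        y = net(xs)
        dy = torch.autograd.grad(y.sum(), xs, create_graph=True)[0]
        d2y = torch.autograd.grad(dy.sum(), xs, create_graph=True)[0]
        loss = (residual(xs, y, dy, d2y) ** 2).mean() + ((net(xb) - yb) ** 2).sum()
        loss.backward()
        opt.step()
    return net
```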

4. Numerical Examples

4.1. Example 1

Let us consider a functional,
J(y(x)) = \int_0^{\ln 3} \left[ \left( y'(x) - y(x) + x \right)^2 - x - y(x) \right] dx \qquad (15)
with boundary conditions,
y(0) = -3, \qquad y(\ln 3) = \ln 3. \qquad (16)
In this case,
F(x, y(x), y'(x)) = \left( y'(x) - y(x) + x \right)^2 - x - y(x),
and therefore, according to (3), for n = 1 ,
F_y - \frac{d}{dx} F_{y'} = -2\left( y'(x) - y(x) + x \right) - 1 - 2\left( y''(x) - y'(x) + 1 \right),
which, after some transformation, gives the following equation:
2 y''(x) - 2 y(x) + 2x + 3 = 0. \qquad (17)
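This reduction is straightforward to verify symbolically; for instance, the short SymPy check below (ours, not part of the paper) recomputes the left-hand side of the Euler equation for this F.

```python
# Sketch (ours): symbolic check that F from Example 1 leads to Equation (17).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
F = (y(x).diff(x) - y(x) + x) ** 2 - x - y(x)
euler = F.diff(y(x)) - sp.diff(F.diff(y(x).diff(x)), x)
# Prints (up to term order): -2*x + 2*y(x) - 2*Derivative(y(x), (x, 2)) - 3;
# multiplying by -1 gives Equation (17): 2y'' - 2y + 2x + 3 = 0.
print(sp.expand(euler))
```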
Taking into account the conditions (16), we therefore have to solve an ordinary second-order differential equation (a linear, non-homogeneous one with constant coefficients), where the exact solution to this equation is the following function:
y(x) = \frac{3}{2} + x - \frac{9}{2} e^{-x}.

4.1.1. DTM—Example 1

Applying the DTM to Equation (17) (utilizing properties (4), (8), (9), and (13)), for k ≥ 0, we obtain the following:
2 (k+1)(k+2) Y(k+2) - 2 Y(k) + 2 \delta(k-1) + 3 \delta(k) = 0, \qquad (18)
whereby from the condition y(0) = −3, we know that Y(0) = −3. Assuming k = 0 in (18), we obtain the following:
2 \cdot 2 \, Y(2) - 2 Y(0) + 2 \delta(-1) + 3 \delta(0) = 0 \quad \Longrightarrow \quad Y(2) = -\frac{9}{4}.
Notice that neither for k = 0 nor for k > 0 are we able to determine the value of Y(1); this is because the second condition (16) takes the form y(ln 3) = ln 3 (the desired form would be y'(0) = a ∈ R). Therefore, let us assume Y(1) = s ∈ R. Taking k ≥ 1, we sequentially obtain the following:
Y(3) = \frac{s-1}{6}, \quad Y(4) = -\frac{3}{16}, \quad Y(5) = \frac{s-1}{120}, \quad Y(6) = -\frac{1}{160}, \quad Y(7) = \frac{s-1}{5040}, \quad Y(8) = -\frac{1}{8960}, \quad Y(9) = \frac{s-1}{362880}, \quad Y(10) = -\frac{1}{806400}, \; \ldots
In the general case, we can note that Y(0) = −3 and Y(1) = s; for k > 1, we have Y(2) = −9/4 and Y(i+2) = Y(i)/((i+1)(i+2)) for even numbers, i, and Y(3) = (s−1)/6 and Y(j+2) = Y(j)/((j+1)(j+2)) for odd numbers, j.
Therefore, we can represent the sought-after function, y, as follows:
y(x) = x + (s-1) \sum_{i=1}^{\infty} \frac{x^{2i-1}}{(2i-1)!} - 3 - \frac{9}{2} \sum_{i=1}^{\infty} \frac{x^{2i}}{(2i)!},
which, after using a known Taylor series, takes the following form:
y(x) = x + (s-1) \sinh x + \frac{3}{2} - \frac{9}{2} \cosh x.
There is still an unknown value of s to find. Using the second condition (16), i.e., y ( ln 3 ) = ln 3 , we obtain the following:
\ln 3 = \ln 3 + (s-1) \cdot \frac{4}{3} + \frac{3}{2} - \frac{9}{2} \cdot \frac{5}{3} \quad \Longrightarrow \quad s = \frac{11}{2}.
This result allows for the final form of the following solution,
y(x) = \frac{3}{2} + x - \frac{9}{2} e^{-x},
which fully coincides with the exact solution. Of course, this is not the standard case—in most instances, we can only find an approximate solution, as we present in the subsequent examples.
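For readers who prefer a numeric check, the sketch below (ours) runs the recurrence (18) for a trial value of s, evaluates the partial sum at x = ln 3, and solves the scalar equation y(ln 3) = ln 3 by bisection; it recovers s = 11/2.

```python
# Sketch (ours): numeric DTM for Equation (18) with Y(0) = -3, Y(1) = s.
import math

def y_partial(s, x, n=30):
    Y = [0.0] * (n + 1)
    Y[0], Y[1] = -3.0, s
    for k in range(n - 1):
        rhs = 2.0 * Y[k]
        rhs -= 2.0 if k == 1 else 0.0        # the 2*delta(k-1) term
        rhs -= 3.0 if k == 0 else 0.0        # the 3*delta(k) term
        Y[k + 2] = rhs / (2.0 * (k + 1) * (k + 2))
    return sum(Y[k] * x**k for k in range(n + 1))

g = lambda s: y_partial(s, math.log(3.0)) - math.log(3.0)
lo, hi = 0.0, 10.0                           # g changes sign on this bracket
for _ in range(60):
    m = 0.5 * (lo + hi)
    lo, hi = (m, hi) if g(lo) * g(m) > 0 else (lo, m)
print(0.5 * (lo + hi))                       # 5.5, i.e., s = 11/2
```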

4.1.2. PINNs—Example 1

Now, we solve Equation (17) using neural networks. In this case, a fully connected neural network of a depth of 4 with 3 hidden layers and 10 neurons per layer is used. As an optimizer, the Adam method is used with a learning rate of 0.001 . The hyperbolic tangent is assumed as the activation function. Figure 2 shows the obtained approximate solution (left plot, black dots) along with the errors of this approximation (right plot). The obtained solution fits the exact solution quite well. The maximum error is approximately 0.04 .
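Using the train() sketch from Section 3, this experiment can be reproduced along the following lines (our hedged instantiation of Equation (17) with the conditions (16)):

```python
# Sketch (ours): PINN solution of Equation (17) via the helper from Section 3.
import math

residual_17 = lambda x, y, dy, d2y: 2 * d2y - 2 * y + 2 * x + 3
net = train(residual_17, xL=0.0, xR=math.log(3.0), yL=-3.0, yR=math.log(3.0))
```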

4.2. Example 2

Consider the following functional,
J(y(x)) = \int_0^5 \frac{\left(1 - y(x)^2\right)^2}{\left(y'(x)\right)^2} \, dx \qquad (19)
with conditions
y(0) = 0, \qquad y(5) = \frac{3}{4}. \qquad (20)
In this case, we have
F(x, y(x), y'(x)) = \frac{\left(1 - y(x)^2\right)^2}{\left(y'(x)\right)^2},
and therefore, according to (3) for n = 1, we can write
F_y - \frac{d}{dx} F_{y'} = -\frac{6 \left( y(x) - 1 \right)\left( y(x) + 1 \right) \left( y''(x)\left( y(x)^2 - 1 \right) - 2 y(x) \left( y'(x) \right)^2 \right)}{\left( y'(x) \right)^4},
which, after some transformation, leads to the equation
y''(x)\left( y(x)^2 - 1 \right) - 2 y(x) \left( y'(x) \right)^2 = 0 \qquad (21)
with conditions (20) (the other solutions, y(x) = 1 and y(x) = −1, are rejected because they do not satisfy the conditions (20)).

4.2.1. DTM—Example 2

Applying the DTM transformation to Equation (21) (here, we use properties (8), (9), (12), and (13)), for k ≥ 0, we obtain
\sum_{i=0}^{k} \sum_{j=0}^{i} (j+1)(j+2) Y(j+2) Y(i-j) Y(k-i) - (k+1)(k+2) Y(k+2) - 2 \sum_{i=0}^{k} \sum_{j=0}^{i} (j+1)(i-j+1) Y(j+1) Y(i-j+1) Y(k-i) = 0, \qquad (22)
and the condition y(0) = 0 implies that Y(0) = 0.
Similarly to before, taking successive values of k ≥ 0 in (22), we are not able to find the value of Y(1). Therefore, we assume Y(1) = s ∈ R; as a result, Y(0) = 0, Y(1) = s, and
Y(2) = 0, \quad Y(3) = -\frac{s^3}{3}, \quad Y(4) = 0, \quad Y(5) = \frac{2 s^5}{15}, \quad Y(6) = 0, \quad Y(7) = -\frac{17 s^7}{315}, \quad Y(8) = 0, \quad Y(9) = \frac{62 s^9}{2835}, \quad Y(10) = 0, \quad Y(11) = -\frac{1382 s^{11}}{155925}, \; \ldots
This time, we are not able to find a function defined by the Taylor series of the obtained terms. Therefore, we must limit ourselves to finding an approximate solution, y n ( x ) , of the form
y_n(x) = \sum_{i=0}^{n} Y(i) \, x^i,
which depends on the unknown value of parameter s. The value of this parameter can be found using the second of the conditions (20), i.e., y ( 5 ) = 3 4 . Proceeding in this way and taking n = 7 , an approximate solution, y 7 ( x ) , of the following form is obtained:
y_7(x) = s x - \frac{s^3}{3} x^3 + \frac{2 s^5}{15} x^5 - \frac{17 s^7}{315} x^7.
Solving the equation y_7(5) = 3/4, we unfortunately obtain three real solutions: s = 0.24919, s = 0.20286, and s = −0.35538. Therefore, it is necessary to choose the appropriate one. To achieve this, we examine the following error:
\int_0^5 \left| y_7''(x, s) \left( y_7(x, s)^2 - 1 \right) - 2 \, y_7(x, s) \left( y_7'(x, s) \right)^2 \right| dx.
In this case, it turns out that the best value is s = 0.20286 . For this value, we obtain an approximate solution:
y_7(x) = 0.2063 x - 0.0028 x^3 + 0.00004 x^5 - 7.63 \cdot 10^{-7} x^7.
Following a similar approach but taking n = 11 (again choosing the best value of s), we obtain the following solution:
y_{11}(x) = 0.1955 x - 0.0025 x^3 + 0.00004 x^5 - 5.88 \cdot 10^{-7} x^7 + 9.11 \cdot 10^{-9} x^9 - 1.41 \cdot 10^{-10} x^{11}.
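The whole procedure can be condensed into a short numeric sketch (ours, under the stated reading of the recurrence (22)): generate Y(k) for a trial s, bracket the roots of y_7(5) = 3/4, and keep the root with the smallest integral ODE residual.

```python
# Sketch (ours): DTM for Equation (21). Since Y(0) = 0, the Y(k+2) terms inside
# the convolution sums vanish, so the recurrence (22) can be solved explicitly.
def dtm_coeffs(s, n):
    Y = [0.0] * (n + 3)
    Y[1] = s
    for k in range(n - 1):
        cube = sum((j + 1) * (j + 2) * Y[j + 2] * Y[i - j] * Y[k - i]
                   for i in range(k + 1) for j in range(i + 1))
        grad = sum((j + 1) * (i - j + 1) * Y[j + 1] * Y[i - j + 1] * Y[k - i]
                   for i in range(k + 1) for j in range(i + 1))
        Y[k + 2] = (cube - 2.0 * grad) / ((k + 1) * (k + 2))
    return Y[: n + 1]

def mismatch(s, n=7):                      # y_n(5) - 3/4
    Y = dtm_coeffs(s, n)
    return sum(Y[k] * 5.0**k for k in range(n + 1)) - 0.75

def residual_norm(s, n=7, m=2000):         # midpoint rule for the error integral
    Y = dtm_coeffs(s, n)
    y   = lambda x: sum(Y[k] * x**k for k in range(n + 1))
    dy  = lambda x: sum(k * Y[k] * x**(k - 1) for k in range(1, n + 1))
    d2y = lambda x: sum(k * (k - 1) * Y[k] * x**(k - 2) for k in range(2, n + 1))
    h = 5.0 / m
    return h * sum(abs(d2y(t) * (y(t)**2 - 1) - 2 * y(t) * dy(t)**2)
                   for t in (i * h + h / 2 for i in range(m)))

roots, grid = [], [-1.0 + 0.01 * i for i in range(201)]
for a, b in zip(grid, grid[1:]):
    if mismatch(a) * mismatch(b) < 0:      # bracket found; refine by bisection
        for _ in range(60):
            c = 0.5 * (a + b)
            a, b = (c, b) if mismatch(a) * mismatch(c) > 0 else (a, c)
        roots.append(0.5 * (a + b))
print(sorted(roots, key=residual_norm))    # root with smallest residual first
```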
In Figure 3, we present the exact solution, y ( x ) , the approximate solutions, y 7 ( x ) and y 11 ( x ) , and the errors, Δ n ( x ) , of these solutions, where
\Delta_n(x) = \left| y(x) - y_n(x) \right|,
and the exact solution has the following form:
y(x) = \frac{1 - 7^{-x/5}}{1 + 7^{-x/5}}.

4.2.2. PINNs—Example 2

The same Equation (21) was also solved using a physics-informed neural network. In this example, a fully connected neural network of a depth of 4 with 3 hidden layers and 10 neurons per layer was used. The Adam optimizer with a learning rate of 0.001 was employed, and the hyperbolic tangent was used as the activation function. Inside the domain, 128 training points were considered. Figure 4 depicts the obtained approximate solution (left plot, black dots) along with the errors of this approximation (right plot). The obtained solution is very close to the exact solution. The maximum error does not exceed 0.016.

4.3. Example 3

In the third example, the following functional is considered,
J(y(x)) = \int_{-1}^{1} \left[ y'(x) \, y''(x) \, e^{-x} \sin x + \left( y''(x) \right)^2 - \left( y''(x) - y'(x) \right) y'(x) \right] dx, \qquad (23)
which is supplemented by the following conditions:
y(-1) = 1, \qquad y(1) = 1, \qquad y'(-1) = -1, \qquad y'(1) = -2. \qquad (24)
In this case,
F(x, y(x), y'(x), y''(x)) = y'(x) \, y''(x) \, e^{-x} \sin x + \left( y''(x) \right)^2 - \left( y''(x) - y'(x) \right) y'(x),
and therefore, according to (3), for n = 2 , we obtain the following,
F_y - \frac{d}{dx} F_{y'} + \frac{d^2}{dx^2} F_{y''} = e^{-x} \left( 2 \, y^{(4)}(x) \, e^{x} + y''(x) \left( \cos x - \sin x - 2 e^{x} \right) - 2 \, y'(x) \cos x \right),
which brings us to the following equation,
2 \, y^{(4)}(x) \, e^{x} + y''(x) \left( \cos x - \sin x - 2 e^{x} \right) - 2 \, y'(x) \cos x = 0 \qquad (25)
with conditions (24).

4.3.1. DTM—Example 3

Applying the DTM transformation to Equation (25) (we use properties (5)–(9), (11), and (13)), the following equation is obtained for k ≥ 0:
2 \sum_{i=0}^{k} \frac{(i+1)(i+2)(i+3)(i+4) \, Y(i+4)}{(k-i)!} + \sum_{i=0}^{k} (i+1)(i+2) \, Y(i+2) \, \frac{\cos\frac{\pi(k-i)}{2} - \sin\frac{\pi(k-i)}{2} - 2}{(k-i)!} - 2 \sum_{i=0}^{k} (i+1) \, Y(i+1) \, \frac{\cos\frac{\pi(k-i)}{2}}{(k-i)!} = 0. \qquad (26)
Given successive values of k ≥ 0 in (26), it is impossible to find the values Y(0), Y(1), Y(2), and Y(3), so we assume Y(0) = s_0, Y(1) = s_1, Y(2) = s_2, and Y(3) = s_3, where s_i ∈ R, 0 ≤ i ≤ 3. Then
Y(4) = \frac{s_1 + s_2}{24}, \quad Y(5) = \frac{-s_1 + 4 s_2 + 3 s_3}{120}, \quad Y(6) = \frac{s_1 - 11 s_2 + 36 s_3}{1440}, \quad Y(7) = \frac{11 s_1 + 12 s_2 - 69 s_3}{10080}, \quad Y(8) = \frac{-75 s_1 + 109 s_2 + 96 s_3}{161280}, \quad Y(9) = \frac{33 s_1 - 184 s_2 + 301 s_3}{483840}, \; \ldots
Again, based on the form of the found terms, we are unable to identify a function whose Taylor series is defined by these terms. We must therefore limit ourselves to finding an approximate solution, y_n(x), of the form
y_n(x) = \sum_{i=0}^{n} Y(i) \, x^i,
which additionally depends on the unknown parameters s_i, 0 ≤ i ≤ 3. These parameters are determined using the conditions (24) (this time, we obtain a unique set of exactly four constants s_i). Proceeding in this way and taking n = 4, we obtain an approximate solution of the form y_4(x):
y_4(x) = 1.2692 + 0.75 x - 0.2885 x^2 - 0.75 x^3 + 0.0192 x^4.
Proceeding similarly, but taking n = 7 (again, the conditions (24) imply a unique solution for the parameters s_i, 0 ≤ i ≤ 3), we obtain the following solution:
y_7(x) = 1.2398 + 0.7288 x - 0.2448 x^2 - 0.7024 x^3 + 0.0202 x^4 - 0.0318 x^5 - 0.0152 x^6 + 0.0053 x^7.
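Since Equation (25) is linear, each Y(k) is a linear combination of s_0, …, s_3, so the four conditions (24) yield a 4 × 4 linear system. A compact sketch (ours) propagates the coefficient vectors through the recurrence (26) and solves that system:

```python
# Sketch (ours): Example 3 via DTM. Y(k) is stored as its coefficient vector
# with respect to (s0, s1, s2, s3); the conditions (24) then give a 4x4 system.
import math

def coeff_vectors(n):
    Y = [[0.0] * 4 for _ in range(n + 5)]
    for k in range(4):
        Y[k][k] = 1.0                                    # Y(k) = s_k, k = 0..3
    c = lambda m: math.cos(math.pi * m / 2)
    s = lambda m: math.sin(math.pi * m / 2)
    for k in range(max(0, n - 3)):
        acc = [0.0] * 4
        for i in range(k):                               # known Y(i+4), i < k
            w = 2 * (i+1)*(i+2)*(i+3)*(i+4) / math.factorial(k - i)
            acc = [a + w * yv for a, yv in zip(acc, Y[i + 4])]
        for i in range(k + 1):
            w = (i+1)*(i+2) * (c(k-i) - s(k-i) - 2) / math.factorial(k - i)
            acc = [a + w * yv for a, yv in zip(acc, Y[i + 2])]
            w = -2 * (i+1) * c(k-i) / math.factorial(k - i)
            acc = [a + w * yv for a, yv in zip(acc, Y[i + 1])]
        d = -2 * (k+1)*(k+2)*(k+3)*(k+4)                 # isolate Y(k+4)
        Y[k + 4] = [a / d for a in acc]
    return Y[: n + 1]

def solve_example3(n=4):
    Y = coeff_vectors(n)
    def row(x, deriv):                                   # y or y' at x, as a row
        r = [0.0] * 4
        for k in range(deriv, n + 1):
            w = (k if deriv else 1) * x ** (k - deriv)
            r = [a + w * yv for a, yv in zip(r, Y[k])]
        return r
    A = [row(-1, 0), row(1, 0), row(-1, 1), row(1, 1)]
    b = [1.0, 1.0, -1.0, -2.0]                           # conditions (24)
    for col in range(4):                                 # Gauss-Jordan solve
        p = max(range(col, 4), key=lambda r_: abs(A[r_][col]))
        A[col], A[p], b[col], b[p] = A[p], A[col], b[p], b[col]
        for r_ in range(4):
            if r_ != col:
                f = A[r_][col] / A[col][col]
                A[r_] = [a - f * ac for a, ac in zip(A[r_], A[col])]
                b[r_] -= f * b[col]
    svals = [b[i] / A[i][i] for i in range(4)]
    return [sum(si * Yk[i] for i, si in enumerate(svals)) for Yk in Y]

print(solve_example3(4))  # approx [1.2692, 0.75, -0.2885, -0.75, 0.0192]
```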
Figure 5 shows the exact solution, y(x); the approximate solutions, y_4(x) and y_7(x); and the errors, \Delta_n(x), of these solutions. In the following error formula,
\Delta_n(x) = \left| y(x) - y_n(x) \right|,
y(x) denotes the exact solution obtained from Wolfram Mathematica 14.0 [42,43]. As can be seen in Figure 5, the difference in errors between the solutions y_4(x) and y_7(x) is significant. The maximum error for y_7(x) does not exceed 0.004, whereas for the solution y_4(x), this error is approximately 0.034.

4.3.2. PINNs—Example 3

Similar to previous examples, the following network architecture is assumed: 3 hidden layers with 10 neurons per layer. Inside the domain, 128 training points are used. The hyperbolic tangent is assumed as the activation function. The solution obtained after training the model is depicted in Figure 6. The solution using PINNs fits the exact solution quite well, with errors not exceeding 0.04 .

4.4. Comparison of DTM and PINNs Results

In this section, errors obtained from the two methods are compared for examples 2 and 3. In the case of example 1, the DTM method provides the exact solution. Figure 7 presents error plots obtained from both methods, specifically for example 2 (Figure 7a) and example 3 (Figure 7b). As observed in Figure 7, the smallest errors were obtained for DTM with n = 11 (for example 2) and n = 7 (for example 3). The errors of the solutions obtained using neural networks are slightly larger but still smaller than those from DTM with n = 7 (for example 2) and n = 4 (for example 3). Both methods produced solutions of good quality.

5. Conclusions

The paper presents two different methods for solving variational calculus problems involving differential equations.
The first method, the DTM, is characterized by its flexibility in both the form of the differential equation and the boundary conditions. One of its strengths is its scalability to handle approximate solutions of varying orders, and sometimes it even allows for a prediction of the exact solution based on the form of the coefficients found for Y ( k ) . However, there are some drawbacks to this method. One notable challenge is the difficulty in automating the process of solving a given problem. It is complex to develop a program that can generate an approximate solution solely based on the form of the differential equation and the boundary conditions. It is often easier to manually construct a discrete counterpart of the differential equation, develop a methodology for finding the unknown coefficients, Y ( k ) , and then utilize appropriate software tools for further refinement.
The second presented approach—PINNs (physics-informed neural networks)—utilizes neural networks to solve the relevant differential equation. One advantage of this method is its relatively straightforward implementation and flexibility. Once the model for the differential equation is trained, it can provide solutions for different grids (input points) without recalculating the problem each time.
Comparing both methods, it can be observed that slightly more accurate results were obtained using the DTM. The results from PINNs include slightly larger errors compared with those obtained using the DTM, but they are still of good quality. The advantage of neural networks lies in their flexibility and relatively straightforward implementation. Both methods presented in the article are suitable for solving the considered problems.
In the future, the authors plan to apply PINNs to solve direct and inverse problems in partial differential equations involving fractional-order derivatives.

Author Contributions

Conceptualization, R.B. and M.P.; methodology, R.B. and M.P.; software, R.B. and M.P.; validation, R.B. and M.P.; formal analysis, R.B. and M.P.; investigation, R.B. and M.P.; writing—original draft preparation, R.B. and M.P.; writing—review and editing, R.B. and M.P.; visualization, R.B. and M.P.; supervision, R.B. and M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Struwe, M. Variational Methods; Springer: New York, NY, USA, 2000; pp. 32–58. [Google Scholar]
  2. Borghi, R. The variational method in quantum mechanics: An elementary introduction. Eur. J. Phys. 2018, 39, 035410. [Google Scholar] [CrossRef]
  3. Esteban, M.; Lewin, M.; Séré, E. Variational methods in relativistic quantum mechanics. Bull. Am. Math. Soc. 2008, 45, 535–593. [Google Scholar] [CrossRef]
  4. Mihlin, S.G. Variational Methods of Solving Linear and Nonlinear Boundary Value Problems. Differential Equations and Their Applications; Publishing House of the Czechoslovak Academy of Sciences: Prague, Czech Republic, 1963; pp. 77–92. Available online: http://eudml.org/doc/220899 (accessed on 7 July 2024).
  5. Sysoev, D.V.; Sysoeva, A.A.; Sazonova, S.A.; Zvyagintseva, A.V.; Mozgovoj, N.V. Variational methods in relativistic quantum mechanics. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2021; Volume 1047, p. 012195. [Google Scholar]
  6. Courant, R.; Hilbert, D. Methods of Mathematical Physics; Wiley-VCH: New York, NY, USA, 1962; Volume I. [Google Scholar]
  7. Fox, C. An Introduction to the Calculus of Variations; Courier Corporation: North Chelmsford, MA, USA, 1987. [Google Scholar]
  8. Grzymkowski, R.; Pleszczynski, M.; Słota, D. Comparing the Adomian decomposition method and the Runge–Kutta method for solutions of the Stefan problem. Int. J. Comput. Math. 2006, 83, 409–417. [Google Scholar] [CrossRef]
  9. Grzymkowski, R.; Pleszczynski, M.; Słota, D. The two-phase Stefan problem solved by the Adomian decomposition method. In Proceedings of the 15th IASTED International Conference Applied Simulation and Modelling, Rhodos, Greece, 26–28 June 2006; pp. 511–516. [Google Scholar]
  10. Zhou, J.K. Differential Transformation and Its Applications for Electrical Circuits; Huazhong University Press: Wuhan, China, 1986. [Google Scholar]
  11. Ayaz, F. Solutions of the system of differential equations by differential transform method. Appl. Math. Comput. 2004, 147, 547–567. [Google Scholar] [CrossRef]
  12. Grzymkowski, R.; Pleszczynski, M. Application of the Taylor transformation to the systems of ordinary differential equations. In Information and Software Technologies, Proceedings of the 24th International Conference, ICIST 2018, Vilnius, Lithuania, 4–6 October 2018; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; Volume 24, pp. 379–387. [Google Scholar]
  13. Hetmaniok, E.; Pleszczyński, M. Comparison of the selected methods used for solving the ordinary differential equations and their systems. Mathematics 2022, 10, 306. [Google Scholar] [CrossRef]
  14. Keskin, Y.; Oturanc, G. Reduced differential transform method for partial differential equations. Int. J. Nonlinear Sci. Numer. Simul. 2009, 10, 741–750. [Google Scholar] [CrossRef]
  15. Mirzaee, F. Differential transform method for solving linear and nonlinear systems of ordinary differential equations. Appl. Math. Sci. 2011, 5, 3465–3472. [Google Scholar]
  16. Ayaz, F. On the two-dimensional differential transform method. Appl. Math. Comput. 2003, 143, 361–374. [Google Scholar] [CrossRef]
  17. Kanth, A.R.; Aruna, K. Differential transform method for solving linear and non-linear systems of partial differential equations. Phys. Lett. A 2008, 372, 6896–6898. [Google Scholar] [CrossRef]
  18. Ayaz, F. Applications of differential transform method to differential-algebraic equations. Appl. Math. Comput. 2004, 152, 649–657. [Google Scholar] [CrossRef]
  19. Biazar, J.; Eslami, M.; Islam, M.R. Differential transform method for special systems of integral equations. J. King Saud Univ.-Sci. 2012, 24, 211–214. [Google Scholar] [CrossRef]
  20. Celik, E.; Tabatabaei, K. Solving a Class of Volterra Integral Equation Systems by the Differential Transform Method. Int. J. Nonlinear Sci. Numer. Simul. 2013, 16, 87–91. [Google Scholar]
  21. Liu, B.; Zhou, X.; Du, Q. Differential transform method for some delay differential equations. Appl. Math. 2015, 6, 585. [Google Scholar] [CrossRef]
  22. Hetmaniok, E.; Pleszczyński, M.; Khan, Y. Solving the Integral Differential Equations with Delayed Argument by Using the DTM Method. Sensors 2022, 22, 4124. [Google Scholar] [CrossRef] [PubMed]
  23. Allahviranloo, T.; Kiani, N.A.; Motamedi, N. Solving fuzzy differential equations by differential transformation method. Inf. Sci. 2009, 179, 956–966. [Google Scholar] [CrossRef]
  24. Osman, M.; Almahi, A.; Omer, O.A.; Mustafa, A.M.; Altaie, S.A. Approximation Solution for Fuzzy Fractional-Order Partial Differential Equations. Fractal Fract. 2022, 6, 646. [Google Scholar] [CrossRef]
  25. Arikoglu, A.; Ozkol, I. Solution of fractional differential equations by using differential transform method. Chaos Solitons Fractals 2007, 34, 1473–1481. [Google Scholar] [CrossRef]
  26. Neelma; Eiman; Shah, K. Analytical and Qualitative Study of Some Families of FODEs via Differential Transform Method. Foundations 2022, 2, 6–19. [Google Scholar] [CrossRef]
  27. Odibat, Z.; Momani, S.; Erturk, V.S. Generalized differential transform method: Application to differential equations of fractional order. Appl. Math. Comput. 2008, 197, 467–477. [Google Scholar] [CrossRef]
  28. Rysak, A.; Gregorczyk, M. Differential Transform Method as an Effective Tool for Investigating Fractional Dynamical Systems. Appl. Sci. 2021, 11, 6955. [Google Scholar] [CrossRef]
  29. Kanth, A.R.; Aruna, K. Differential transform method for solving the linear and nonlinear Klein–Gordon equation. Comput. Phys. Commun. 2009, 180, 708–711. [Google Scholar] [CrossRef]
  30. Kanth, A.R.; Aruna, K. Two-dimensional differential transform method for solving linear and non-linear Schrödinger equations. Chaos Solitons Fractals 2009, 41, 2277–2281. [Google Scholar] [CrossRef]
  31. Tari, A. The Differential Transform Method for solving the model describing biological species living together. Iran. J. Math. Sci. Inform. 2012, 7, 63–74. [Google Scholar]
  32. Gupta, R.; Selvam, J.; Vajravelu, A.; Nagapan, S. Analysis of a Squeezing Flow of a Casson Nanofluid between Two Parallel Disks in the Presence of a Variable Magnetic Field. Symmetry 2023, 15, 120. [Google Scholar] [CrossRef]
  33. Kumar, R.S.V.; Sarris, I.E.; Sowmya, G.; Abdulrahman, A. Iterative Solutions for the Nonlinear Heat Transfer Equation of a Convective-Radiative Annular Fin with Power Law Temperature-Dependent Thermal Properties. Symmetry 2023, 15, 1204. [Google Scholar] [CrossRef]
  34. Zhang, L.; Han, M.; Zhang, Q.; Hao, S.; Zhen, J. Analysis of Dynamic Characteristics of Attached High Rise Risers. Appl. Sci. 2023, 13, 8767. [Google Scholar] [CrossRef]
  35. Demir, Ö. Differential Transform Method for Axisymmetric Vibration Analysis of Circular Sandwich Plates with Viscoelastic Core. Symmetry 2022, 14, 852. [Google Scholar] [CrossRef]
  36. Brociek, R.; Pleszczyński, M. Comparison of Selected Numerical Methods for Solving Integro-Differential Equations with the Cauchy Kernel. Symmetry 2024, 16, 233. [Google Scholar] [CrossRef]
  37. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  38. Karniadakis, G.E.; Kevrekidis, I.G.; Lu, L.; Perdikaris, P.; Wang, S.; Yang, L. Physics-informed machine learning. Nat. Rev. Phys. 2021, 3, 422–440. [Google Scholar] [CrossRef]
  39. Eivazi, H.; Wang, Y.; Vinuesa, R. Physics-informed deep-learning applications to experimental fluid mechanics. Meas. Sci. Technol. 2024, 35, 075303. [Google Scholar] [CrossRef]
  40. Cuomo, S.; Schiano Di Cola, V.; Giampaolo, F.; Rozza, G.; Raissi, M.; Piccialli, F. Scientific Machine Learning Through Physics–Informed Neural Networks: Where we are and What’s Next. J. Sci. Comput. 2022, 92, 88. [Google Scholar] [CrossRef]
  41. Ahmadi Daryakenari, N.; De Florio, M.; Shukla, K.; Karniadakis, G.E. AI-Aristotle: A physics-informed framework for systems biology gray-box identification. PLoS Comput. Biol. 2024, 20, 1–33. [Google Scholar] [CrossRef] [PubMed]
  42. Hastings, C.; Mischo, K.; Morrison, M. Hands-on Start to Wolfram Mathematica and Programming with the Wolfram Language, 3rd ed.; Wolfram Media, Inc.: Champaign, IL, USA, 2020. [Google Scholar]
  43. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media, Inc.: Champaign, IL, USA, 2003. [Google Scholar]
Figure 1. Diagram of the physics-informed neural network for an ODE with Dirichlet boundary conditions.
Figure 2. (a) The exact solution, y(x) (solid blue line), and the approximate solution (black dots); (b) the errors of the approximate solution. Example 1.
Figure 3. (a) Exact solution, y(x) (solid green line), and approximate solutions, y_7 (dashed orange line) and y_11 (dashed blue line); (b) errors, \Delta, of these approximations. Example 2.
Figure 4. (a) The exact solution, y(x) (solid blue line), and the approximate solution (black dots); (b) the errors of the approximate solution. Example 2.
Figure 5. (a) Exact solution, y(x) (solid green line), and approximate solutions, y_4 (dashed orange line) and y_7 (dashed blue line); (b) errors, \Delta, of these approximations. Example 3.
Figure 6. (a) The exact solution, y(x) (solid blue line), and the approximate solution (black dots); (b) the errors of the approximate solution. Example 3.
Figure 7. Comparison of absolute errors of the DTM and PINNs for Example 2 (a) and Example 3 (b).

