Article

Comparing PINN and Symbolic Transform Methods in Modeling the Nonlinear Dynamics of Complex Systems: A Case Study of the Troesch Problem

1 Department of Artificial Intelligence Modelling, Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
2 Department of Mathematical Methods in Technology and Computer Science, Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
3 Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
4 Department of Computer, Control, and Management Engineering, Sapienza University of Rome, Via Ariosto 25, 00185 Roma, Italy
5 Department of Artificial Intelligence, Czestochowa University of Technology, 42-201 Czestochowa, Poland
6 Department of Electrical, Electronics and Informatics Engineering, University of Catania, Viale Andrea Doria 6, 95125 Catania, Italy
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(18), 3045; https://doi.org/10.3390/math13183045
Submission received: 17 August 2025 / Revised: 15 September 2025 / Accepted: 18 September 2025 / Published: 22 September 2025
(This article belongs to the Special Issue Nonlinear Dynamics, 2nd Edition)

Abstract

Nonlinear complex systems exhibit emergent behavior, sensitivity to initial conditions, and rich dynamics arising from interactions among their components. A classical example of such a system is the Troesch problem—a nonlinear boundary value problem with wide applications in physics and engineering. In this work, we investigate and compare two distinct approaches to solving this problem: the Differential Transform Method (DTM), representing an analytical–symbolic technique, and Physics-Informed Neural Networks (PINNs), a neural computation framework inspired by physical system dynamics. The DTM yields a continuous form of the approximate solution, enabling detailed analysis of the system’s dynamics and error control, whereas PINNs, once trained, offer flexible estimation at any point in the domain, embedding the physical model into an adaptive learning process. We evaluate both methods in terms of accuracy, stability, and computational efficiency, with particular focus on their ability to capture key features of nonlinear complex systems. The results demonstrate the potential of combining symbolic and neural approaches in studying emergent dynamics in nonlinear systems.

1. Introduction

Troesch’s problem is closely related to mass and heat transfer phenomena. It serves as a benchmark model for challenging nonlinear differential equations, analogous to those encountered in continuum physics. Specifically, Troesch’s problem models transport through semipermeable membranes, diffusion with reactivity, or nonlinear behaviors in conductive processes—such as in plasma reactors or biological membranes.
It arises in contexts involving a balance between nonlinear diffusion and reaction, typically under Dirichlet boundary conditions. The nonlinearity may stem from equations describing electric field penetration, temperature gradients, or chemical concentration differences. In physical applications, the problem can represent diffusion models in semiconductors, heat conduction with nonlinear temperature dependence, mass transport through reactive biological membranes, or electric field modeling in plasmas.
In recent decades, there has been intensive development of analytical methods for solving ordinary differential equations (ODEs) and partial differential equations (PDEs), particularly in the context of nonlinear models. One effective and widely used technique in this area is the Differential Transform Method (DTM), an analytical method based on series expansions (most commonly the Maclaurin series). Unlike methods such as the Homotopy Perturbation Method (HPM) [1,2] or the Adomian Decomposition Method [3,4], the DTM does not require prior linearization or discretization of equations. As a result, it is naturally suited to complex, strongly nonlinear problems or problems that are difficult to address numerically. The DTM can be applied to the analysis of equations and systems of operator equations, including ordinary differential equations, partial differential equations, integral equations and their systems [5,6,7], dynamic systems modeling physical, biological, and engineering processes, and boundary and initial value problems [7,8].
The DTM allows us to transform a given problem into a system of recurrence equations for the coefficients of a power series. This approach enables us to obtain analytical solutions that maintain the continuity and differentiability of the approximate solution. Often, just a few initial terms of the expansion yield high-quality approximations, which makes the DTM computationally efficient. In many cases, the method even allows us to find the exact solution. An additional advantage of the method is its relatively simple implementation in computer-aided symbolic computation environments, such as the Mathematica computational platform used in this work, which has a wide range of applicability in various engineering problems [9,10,11].
In recent years, machine learning-based techniques have been playing an increasingly important role in computer simulations and mathematical modeling. One possible approach is the use of neural networks to solve initial and boundary value problems. An example of such networks is the Physics-Informed Neural Network (PINN). The advantages of this type of approach include mesh independence, low requirements for mathematical transformations, and the ability, once the model is trained, to instantly obtain results at any point of the domain without recomputation. Some of the earliest and most important works in this field are the papers co-authored by Raissi [12,13,14,15]. These works describe PINNs in the context of solving forward and inverse problems for differential equations and present the idea behind using neural networks in this way. The authors of [13] review PINNs, their variants, and their applications to different types of problems, and outline directions for further development. It is worth mentioning several works in which PINNs have been applied in various domains. For example, ref. [16] presents the use of PINNs for modeling biological and epidemiological dynamical systems; the authors consider a system of ordinary differential equations and solve an inverse problem involving parameter estimation of the model. Another interesting article is [17], in which the authors use PINN-type neural networks to predict 3D soft tissue dynamics from 2D imaging. An example of using PINNs in engineering problems is the paper [18], which considers the heat transfer equation. The predictions of the trained PINN were validated in several 1D and 2D heat transfer cases by comparison with finite element (FE) results. It was shown that both a standard neural network (NN) and a PINN accurately reproduced the finite element results during training; however, only the PINN with appropriately selected features was able to grasp the physical principles governing the problem and predict correct results even outside the range of the training data. More information on the applications of PINNs can be found in the articles [19,20,21,22].
In this paper, we focus on solving the Troesch problem. To this end, we use two fundamentally different methods: the DTM and PINNs. Section 2 describes the problem to be solved—a second-order ordinary differential equation. Section 3 and Section 4 are devoted to the descriptions of both methods—the DTM first, followed by PINNs. These sections present the main ideas behind each method. Section 5 describes the results obtained for the Troesch problem using the DTM and PINNs. A comparison of both methods is also provided. Finally, Section 6 presents the conclusions drawn from the research.

2. Problem Statement

In this study, we consider a boundary value problem defined by the following equation along with its associated conditions:
y''(x) = \lambda \sinh(\lambda y(x)), \qquad y(0) = 0, \quad y(1) = 1,
where λ ∈ ℝ₊. The above equation is known as the Troesch problem. Due to its strong nonlinearity, this problem is particularly challenging to solve numerically, especially when λ > 5. A distinctive feature of this equation is the rapid increase in nonlinearity as the parameter λ increases, which leads to numerical difficulties. In particular, the solution becomes increasingly steep as x approaches 1. The high stiffness of this differential equation can make many standard numerical methods unstable. For methods based on domain discretization, extremely fine meshes may be required to obtain reliable results.
The Troesch equation arises in the context of modeling various physical phenomena, including the study of plasma confinement under radiation pressure [23,24], as well as in the analysis of transport processes in gas-porous electrodes [25,26,27].
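As an illustrative point of reference (not part of the study itself), problem (1) can also be handed to a standard collocation solver. The sketch below assumes SciPy's solve_bvp together with a particular mesh size and initial guess; for larger λ both typically need to be chosen more carefully, which is exactly the stiffness issue discussed above.

```python
# Illustrative reference only: solving the Troesch problem with a standard
# collocation solver. The solver choice (SciPy's solve_bvp), the mesh, and the
# initial guess are assumptions made for this sketch.
import numpy as np
from scipy.integrate import solve_bvp

lam = 3.0  # try lam = 10.0 to see how much harder the problem becomes

def rhs(x, y):
    # y[0] = y(x), y[1] = y'(x)
    return np.vstack([y[1], lam * np.sinh(lam * y[0])])

def bc(ya, yb):
    # boundary conditions y(0) = 0, y(1) = 1
    return np.array([ya[0], yb[0] - 1.0])

x_mesh = np.linspace(0.0, 1.0, 200)                  # dense initial mesh
y_init = np.vstack([x_mesh, np.ones_like(x_mesh)])   # crude guess y ~ x, y' ~ 1

sol = solve_bvp(rhs, bc, x_mesh, y_init, tol=1e-8, max_nodes=100_000)
print(sol.status, sol.message)
print("y(0.5) ~", sol.sol(0.5)[0])
```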

3. The DTM in Practice: Solving a Stiff Nonlinear Equation

In this work, a function f that can be expanded into a Taylor series around a fixed point x₀ is referred to as an original. A special case of this expansion is the Maclaurin series, which arises when x₀ = 0. In the following discussion, we will focus exclusively on functions that meet the conditions of being originals in the above sense and will make use of the Maclaurin series.
In the adopted context, the notion of a Taylor series is understood in a broader sense than in the traditional definition. Assuming that the function f is an original, we can express it in the form
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, x^n,
which, under the assumption of a unique expansion, allows us to associate with each original a unique transformation in the form of a function F: \mathbb{N}_0 \to \mathbb{R}:
F(n) = \frac{f^{(n)}(0)}{n!}, \qquad n \in \mathbb{N}_0.
As a result, a function that is an original can be written as
f(x) = \sum_{n=0}^{\infty} F(n)\, x^n.
In the process of solving a given mathematical problem, it is often possible to use known expansions of elementary functions as well as the properties of the transformation itself, which reduces the problem to purely algebraic operations. Although the literature on the Differential Transform Method (DTM) describes a broad range of its properties, in this work, we will limit ourselves to presenting only those necessary for solving the problem under consideration. In what follows, we assume that the variable x belongs to the domain of the function f, which satisfies the conditions required for an original.
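As a small illustration of the transform pair defined above, the coefficients F(n) of a sample original can be computed directly from the definition and used to rebuild the function as a truncated series; the sketch below assumes SymPy, and the sample original and truncation order are arbitrary.

```python
# SymPy illustration of the transform pair: F(n) = f^(n)(0)/n! computed directly
# from the definition for a sample original f(x) = sinh(x), and the truncated
# series sum_n F(n) x^n that recovers f near x = 0.
import sympy as sp

x = sp.symbols('x')
f = sp.sinh(x)                                   # a sample original
N = 8
F = [sp.diff(f, x, n).subs(x, 0) / sp.factorial(n) for n in range(N)]
partial_sum = sum(F[n] * x**n for n in range(N))

print(F)            # [0, 1, 0, 1/6, 0, 1/120, 0, 1/5040]
print(partial_sum)  # x + x**3/6 + x**5/120 + x**7/5040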
Property 1.
If the function f is of the form f(x) = u(x) + v(x), then
F(n) = U(n) + V(n), \qquad n \in \mathbb{N}_0.
Property 2.
If the function f is of the form f(x) = c \cdot u(x), where c \in \mathbb{R}, then
F(n) = c \cdot U(n), \qquad n \in \mathbb{N}_0.
Property 3.
If the function f is of the form f(x) = \frac{d^m u(x)}{dx^m}, where m \in \mathbb{N}, then
F(n) = \frac{(n+m)!}{n!}\, U(n+m), \qquad n \in \mathbb{N}_0.
Property 4.
If the function f is of the form f(x) = e^{c \cdot y(x)}, where c \in \mathbb{R}, then
F(n) = \begin{cases} e^{c \cdot y(0)}, & n = 0, \\ c \sum_{k=0}^{n-1} \frac{k+1}{n}\, Y(k+1)\, F(n-k-1), & n \geq 1. \end{cases}
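Property 4 is the key ingredient of the solution presented in Section 5, so a quick sanity check is useful. The sketch below assumes SymPy, an arbitrary sample original y(x) = sin x, and an arbitrary constant c, and compares the recurrence with the Maclaurin coefficients of e^{c y(x)} computed directly.

```python
# Sanity check of Property 4 (illustration only): the recurrence for the
# transform of f(x) = exp(c*y(x)) must reproduce the Maclaurin coefficients of f.
import sympy as sp

x = sp.symbols('x')
c = sp.Rational(3, 2)
y = sp.sin(x)
f = sp.exp(c * y)
N = 8

# exact transforms: Y(n) = y^(n)(0)/n!, F_exact(n) = f^(n)(0)/n!
Y = [sp.diff(y, x, n).subs(x, 0) / sp.factorial(n) for n in range(N + 1)]
F_exact = [sp.diff(f, x, n).subs(x, 0) / sp.factorial(n) for n in range(N + 1)]

# F(n) from the recurrence of Property 4
F = [sp.exp(c * y.subs(x, 0))]                   # F(0) = e^{c*y(0)}
for n in range(1, N + 1):
    F.append(c * sum(sp.Rational(k + 1, n) * Y[k + 1] * F[n - k - 1]
                     for k in range(n)))

print([sp.simplify(F[n] - F_exact[n]) for n in range(N + 1)])  # all zeros
```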

4. Application of PINNs to a Stiff Boundary Problem

Neural networks have for several years been a powerful tool for solving a wide range of problems, such as image analysis, speech recognition, and others. The PINN-type network is designed to address forward and inverse problems based on various types of equations, including differential, integral, and integro-differential equations [7,28,29,30,31]. The use of neural networks for this class of problems is a relatively recent approach and fundamentally different from classical numerical methods.
The core idea behind Physics-Informed Neural Networks (PINNs) is the integration of physical laws into the deep learning model training process. During model training, the loss function incorporates the underlying physics, i.e., the governing equations along with the corresponding initial and boundary conditions. Unlike many traditional numerical methods, the PINN approach does not require meshing or domain discretization. This is made possible through a computational technique known as automatic differentiation, which enables the accurate computation of derivatives of composite functions based on elementary mathematical operations and the chain rule.
The structure of a PINN-type network typically consists of two essential components: a neural network (most commonly a fully connected feedforward network) and a physics-informed loss function. The loss function accounts for the approximation errors of the differential equation solution, initial and boundary conditions, and any additional data (e.g., measurements required for solving an inverse problem). A general schematic of a PINN architecture for forward problems is presented in Figure 1.
The input to the neural network (denoted as X in Figure 1) consists of the coordinates of points from the domain of the equation, while the network’s output represents the approximate value of the solution at the given points (denoted as Y in the figure). The diagram also illustrates blocks corresponding to the governing equation and the initial boundary conditions. The symbols I, D, and F denote, respectively, the integral operator, the differential operator, and the function defining the equation.
In the case of inverse problems (not shown in the diagram), additional data may also be considered (e.g., observations or measurements). The loss function accounts for the approximation errors of the equation and the initial boundary conditions:
L = \omega_1 L_{EQ} + \omega_2 L_{IBC},
where L_EQ denotes the approximation error within the domain, L_IBC corresponds to the error associated with the initial and boundary conditions, and ω₁, ω₂ are the weighting coefficients.
For the Troesch problem, we adopted a loss function composed solely of the approximation errors within the domain:
L = L_{EQ} = \frac{1}{M} \sum_{j=1}^{M} \big( F(y(x_j; \theta); x_j) \big)^2,
where M denotes the number of training (or collocation) points randomly sampled from the domain. In this case, the operator F is defined as follows:
F(y(x)) = y''(x) - \lambda \sinh(\lambda y(x)).
For the boundary conditions, we applied a technique known as a hard boundary condition. This involves embedding the boundary conditions directly into the structure of the trial solution instead of imposing them softly by adding a penalty term to the loss function. In practice, this is performed by transforming the output of the neural network through a function that enforces the boundary conditions. In the considered case, this function takes the form
\hat{y}(x) = x + x(1 - x)\, NN(x; \theta),
where NN(x; θ) denotes the output of the neural network with parameters θ. The trial solution ŷ(x), built from the network output, satisfies the boundary conditions exactly for any network parameters: ŷ(0) = 0 and ŷ(1) = 1. Consequently, during training the boundary conditions hold exactly rather than approximately, which also accelerates the learning process. It is also worth noting that the PINN framework focuses on training a model: once trained, it can provide the solution value at any point of the domain without recomputation, whereas classical grid-based numerical methods require recomputation whenever the grid is changed.
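A minimal sketch of this hard boundary condition construction is shown below; the framework (PyTorch) and the network shape are assumptions made only for illustration.

```python
# Minimal sketch of the hard boundary condition construction: the network output
# is wrapped so that y_hat(0) = 0 and y_hat(1) = 1 hold by construction, for any
# network parameters theta. Framework and architecture are illustrative choices.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 10), nn.Tanh(), nn.Linear(10, 1))  # NN(x; theta)

def y_hat(x):
    # trial solution: y_hat(x) = x + x*(1 - x)*NN(x; theta)
    return x + x * (1.0 - x) * net(x)

x0, x1 = torch.tensor([[0.0]]), torch.tensor([[1.0]])
print(y_hat(x0).item(), y_hat(x1).item())  # exactly 0.0 and 1.0
```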

5. Results and Discussion

In this section, we compare the solutions to the Troesch problem obtained using two different methods: the DTM and PINNs. The former allows obtaining the solution in the form of a continuous function. The latter, namely PINNs, is a relatively new approach in which information about the differential equation is incorporated into the training of the neural network, and the results are obtained in a discrete form.

5.1. Results Obtained from DTM

In order to decompose problem (1) into algebraic dependencies, we use the relations (2)–(5), whereby the equation in question is rewritten, for the purpose of the solution, in the form y''(x) = f(x), where
f(x) = \frac{\lambda}{2} \left( e^{\lambda y(x)} - e^{-\lambda y(x)} \right) = \frac{\lambda}{2} \left( f_1(x) - f_2(x) \right),
with f_1(x) = e^{\lambda y(x)} and f_2(x) = e^{-\lambda y(x)}. For n = 0, we then obtain
2\, Y(2) = \frac{\lambda}{2} \left( e^{\lambda Y(0)} - e^{-\lambda Y(0)} \right),
and since from the condition y(0) = 0 we have Y(0) = 0, it follows from Equation (10), knowing that F(0) = \frac{\lambda}{2}\left( F_1(0) - F_2(0) \right) = 0, that Y(2) = 0. Unfortunately, we do not know the value of Y(1) (since the boundary conditions of problem (1) do not include a condition for y'(0)), and so, for the purpose of the solution, we introduce a temporary assumption Y(1) = s, where s ∈ ℝ is an unknown constant.
For n ≥ 1, problem (1) takes the following recursive form (applying Property 3 with m = 2 to the left-hand side):
\frac{(n+2)!}{n!}\, Y(n+2) = \frac{\lambda}{2} \left( F_1(n) - F_2(n) \right),
where, for n ≥ 1,
F_i(n) = (-1)^{i+1}\, \lambda \sum_{k=0}^{n-1} \frac{k+1}{n}\, Y(k+1)\, F_i(n-k-1), \qquad i = 1, 2.
For example, for n = 1 we obtain
F_1(1) = \lambda\, Y(1)\, F_1(0) = \lambda s, \qquad F_2(1) = -\lambda\, Y(1)\, F_2(0) = -\lambda s, \qquad F(1) = \frac{\lambda}{2} \left( F_1(1) - F_2(1) \right) = \lambda^2 s,
and hence Y(3) = \frac{1}{6} \lambda^2 s; for n = 2 we obtain
F_1(2) = \lambda \left( \frac{1}{2} Y(1) F_1(1) + Y(2) F_1(0) \right) = \frac{\lambda^2 s^2}{2}, \qquad F_2(2) = -\lambda \left( \frac{1}{2} Y(1) F_2(1) + Y(2) F_2(0) \right) = \frac{\lambda^2 s^2}{2}, \qquad F(2) = \frac{\lambda}{2} \left( F_1(2) - F_2(2) \right) = 0,
and thus Y(4) = 0; for n = 3 we obtain
F_1(3) = \lambda \left( \frac{1}{3} Y(1) F_1(2) + \frac{2}{3} Y(2) F_1(1) + Y(3) F_1(0) \right) = \frac{\lambda^3 s}{6} \left( 1 + s^2 \right), \qquad F_2(3) = -\lambda \left( \frac{1}{3} Y(1) F_2(2) + \frac{2}{3} Y(2) F_2(1) + Y(3) F_2(0) \right) = -\frac{\lambda^3 s}{6} \left( 1 + s^2 \right), \qquad F(3) = \frac{\lambda}{2} \left( F_1(3) - F_2(3) \right) = \frac{\lambda^4 s}{6} \left( 1 + s^2 \right),
and hence Y(5) = \frac{\lambda^4 s}{120} \left( 1 + s^2 \right).
Therefore, if we aim to construct a degree-5 polynomial that serves as an approximate solution to problem (1), we obtain
y_5(x) = s\, x + \frac{1}{6} \lambda^2 s\, x^3 + \frac{\lambda^4 s}{120} \left( 1 + s^2 \right) x^5,
in which the unknown parameter s appears. The value of this parameter is determined for a fixed value of λ using the boundary condition y(1) = 1, i.e., by solving the equation y_5(1) = 1 with respect to the unknown s. For λ = 3, this gives s ≈ 0.308706, and the polynomial y_5 takes the form
y 5 ( x ) = 0.308706 x + 0.463059 x 3 + 0.228235 x 5 .
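The recurrences above are straightforward to script. The computations in this work were carried out in Mathematica; the sketch below is an equivalent SymPy illustration that rebuilds the transform coefficients with Y(1) = s left symbolic, imposes y_5(1) = 1, and reproduces the value s ≈ 0.308706 for λ = 3.

```python
# SymPy illustration of the DTM recurrences (an equivalent rewrite of the
# Mathematica computations): build Y(n) up to order N with Y(1) = s symbolic,
# then determine s from the condition y_N(1) = 1.
import sympy as sp

def dtm_polynomial(lam, N):
    s, x = sp.symbols('s x')
    Y = {0: sp.Integer(0), 1: s, 2: sp.Integer(0)}   # Y(0) = 0, Y(1) = s, Y(2) = 0
    F1 = {0: sp.Integer(1)}                          # F1(0) = e^{lam*Y(0)} = 1
    F2 = {0: sp.Integer(1)}                          # F2(0) = e^{-lam*Y(0)} = 1
    for n in range(1, N - 1):
        F1[n] = lam * sum(sp.Rational(k + 1, n) * Y[k + 1] * F1[n - k - 1]
                          for k in range(n))
        F2[n] = -lam * sum(sp.Rational(k + 1, n) * Y[k + 1] * F2[n - k - 1]
                           for k in range(n))
        Fn = sp.S(lam) / 2 * (F1[n] - F2[n])
        # transform of y''(x): (n + 2)!/n! * Y(n + 2) = F(n)
        Y[n + 2] = sp.expand(Fn / ((n + 1) * (n + 2)))
    yN = sum(Y[n] * x**n for n in range(N + 1))
    s_val = sp.nsolve(yN.subs(x, 1) - 1, s, 0.3)     # enforce y(1) = 1
    return sp.expand(yN.subs(s, s_val)), s_val

y5, s_val = dtm_polynomial(lam=3, N=5)
print(s_val)   # approx. 0.308706
print(y5)      # approx. 0.308706*x + 0.463059*x**3 + 0.228235*x**5
```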
We now compare the solution values obtained using the DTM with the exact values of y ( x ) for λ = 1 , 3 , 10 . The predicted values for n = 3 , 5 , 7 , 9 are summarized in Table 1, Table 2 and Table 3. For λ = 1 , the DTM solutions show excellent agreement with the exact values, with very small errors observed throughout the domain, even for lower-order approximations. The accuracy remains high for λ = 3 , with deviations decreasing steadily as n increases. In the more nonlinear case of λ = 10 , the errors become more pronounced, especially as the value of x increases. For instance, at x = 0.9 , the error at n = 9 reaches approximately 0.263, corresponding to a relative error of about 172.78%. This highlights the slower convergence rate of the DTM in highly nonlinear regimes. Nevertheless, the method still captures the overall behavior of the solution well, and accuracy improves significantly with increasing n, particularly for smaller values of x.
The error graphs shown in Figure 2, Figure 3 and Figure 4 reveal distinct patterns in how the accuracy of the DTM evolves with both the degree of nonlinearity and the position within the domain. For all values of λ, errors are minimal near x = 0 and increase monotonically with x, highlighting that the method performs best close to the initial point and gradually loses precision towards x = 1. This growth in error with respect to x becomes more pronounced as λ increases. In the nearly linear case (λ = 1), errors remain uniformly low and change slowly across x, while for λ = 3, the increase is more noticeable but still controlled. In contrast, for λ = 10, the rate of error escalation with x is steep, with a relatively sharp increase beyond x = 0.5, indicating a strong sensitivity to domain position under high nonlinearity. Furthermore, while increasing n reduces the error at all x, the improvement is more significant in regions with smaller x, suggesting that the DTM converges more rapidly in the early part of the domain and struggles to maintain accuracy near the boundary in nonlinear cases. This increase in error across the domain shows a key weakness of the DTM in nonlinear problems and suggests that higher-order approximations may be necessary to maintain consistent accuracy.

5.2. Results Obtained from PINN

At the outset, it should be emphasized that the Troesch problem considered in this study is known for its strong nonlinearity and stiffness, particularly for large values of λ ( λ > 5 ), which makes it challenging to solve using classical methods as well as Physics-Informed Neural Networks (PINNs).
After a preliminary analysis of the model's hyperparameters, the following configuration was established (an illustrative training sketch using these settings is shown after the list):
  • The neural network consisted of three hidden layers, each containing 10 neurons;
  • The training was performed using the Adam optimizer with a learning rate of 0.001 ;
  • The number of training points was set to 50.
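The sketch below illustrates a training run with these settings; the framework (PyTorch), the tanh activation, and the number of iterations are assumptions made for illustration, and the hard boundary condition trial solution from Section 4 is reused.

```python
# Illustrative PINN training run with the configuration listed above (3 hidden
# layers of 10 neurons, Adam with learning rate 0.001, 50 collocation points).
# Framework, activation, and iteration count are assumptions of this sketch.
import torch
import torch.nn as nn

torch.manual_seed(0)
lam = 3.0  # Troesch parameter; also try 1.0 or 10.0

net = nn.Sequential(nn.Linear(1, 10), nn.Tanh(),
                    nn.Linear(10, 10), nn.Tanh(),
                    nn.Linear(10, 10), nn.Tanh(),
                    nn.Linear(10, 1))

def y_hat(x):
    # hard boundary conditions: y_hat(0) = 0, y_hat(1) = 1 by construction
    return x + x * (1.0 - x) * net(x)

x_col = torch.rand(50, 1)  # 50 collocation points sampled randomly from (0, 1)
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)

for step in range(20_000):  # assumed number of iterations
    optimizer.zero_grad()
    x = x_col.clone().requires_grad_(True)
    y = y_hat(x)
    dy = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]
    d2y = torch.autograd.grad(dy, x, torch.ones_like(dy), create_graph=True)[0]
    # mean squared residual of y'' - lam*sinh(lam*y) at the collocation points
    loss = torch.mean((d2y - lam * torch.sinh(lam * y)) ** 2)
    loss.backward()
    optimizer.step()

x_test = torch.linspace(0, 1, 11).reshape(-1, 1)
print(y_hat(x_test).detach().flatten())  # approximate solution on a test grid
```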
Figure 5 presents solutions obtained with PINNs for different values of the parameter λ . The larger the value of the parameter λ , the more nonlinear the solution becomes, and the inflection point of the curve is closer to x = 1 . Higher values of λ also make Equation (1) more difficult to solve.
We now compare the solution values obtained using PINNs with the exact values taken from the paper [4] for λ = 1 , 3 , 10 . The results of this comparison can be found in Table 4 and Table 5. Figure 6, Figure 7 and Figure 8 show the error distribution over the entire domain for λ = 1 , 3 , 10 . In each case, the errors were small, with particularly good results obtained for the case λ = 3 . In the most computationally challenging case ( λ = 10 ), the approximation results are also very good, with errors not exceeding 0.021 . From the error distribution plots, it can also be observed that the maximum errors occurred for x close to the inflection point; e.g., in the case of λ = 10 , the maximum error is reached at approximately x 0.9 .

5.3. DTM vs. PINN Results Comparison

A direct comparison between the results obtained using the DTM and PINN methods for λ = 1, 3, 10 reveals notable differences in performance, especially as the degree of nonlinearity increases. For the nearly linear case (λ = 1), both methods yield almost identical results, with small and evenly distributed errors across the domain. This indicates that both approaches are equally effective in handling weakly nonlinear problems. For λ = 3, the accuracy of PINNs remains consistently high, with error values remaining below 1.1 × 10^-5 throughout the domain. In contrast, DTM errors begin to increase more noticeably with increasing x, although the method still captures the solution trend reasonably well. The distinction becomes even more pronounced for λ = 10, where the PINNs continue to produce accurate approximations, with the maximum error staying below 0.021. Meanwhile, the DTM solution exhibits a significant increase in error, especially in the latter half of the domain, reaching 0.263 at x = 0.9. This indicates a decline in the DTM's effectiveness under strong nonlinearity, unless much higher-order expansions are used. Overall, PINNs demonstrate superior stability and accuracy across all considered cases, particularly excelling in highly nonlinear scenarios where the DTM struggles to maintain precision without increasing computational cost (Figure 9, Figure 10 and Figure 11).

6. Conclusions

This work presented two different approaches: PINNs (Physics-Informed Neural Networks) and the DTM (Differential Transform Method). These methods were used to solve the Troesch problem, a nonlinear differential equation with boundary conditions. The two approaches are fundamentally different: the DTM yields the solution in the form of a polynomial, while the PINN method is based on neural networks and provides results in a discrete form. This article demonstrates the effectiveness of both methods. A comparison of the results showed that both methods perform well for small values of the parameter λ, but for larger values, the PINN method performs significantly better.
For λ = 1 , both methods produced very accurate and similar results. At λ = 3 , PINNs still maintained high accuracy, while the DTM began to show increasing errors in the second half of the domain. The biggest differences appear for λ = 10 , where PINNs preserved good precision and the DTM returned large errors, especially towards the end of the domain. It can therefore be concluded that PINNs are a more stable and accurate method, especially for strongly nonlinear problems, where the DTM requires much higher computational effort to achieve similar accuracy.

Author Contributions

Conceptualization, R.B., M.P., J.B. and M.C.; methodology, J.B., M.C., C.N. and G.C.; software, R.B., M.P., J.B. and M.C.; validation, C.N. and G.C.; investigation, R.B., M.P., J.B., M.C., C.N. and G.C.; writing—original draft preparation, R.B., J.B. and M.C.; writing—review and editing, R.B., J.B. and M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Grzymkowski, R.; Hetmaniok, E.; Slota, D. Application of the homotopy perturbation method for calculation of the temperature distribution in the cast-mould heterogeneous domain. J. Achiev. Mater. Manuf. Eng. 2010, 43, 299.
2. He, J.H. An elementary introduction to the homotopy perturbation method. Comput. Math. Appl. 2009, 57, 410–412.
3. Adomian, G. Solving Frontier Problems of Physics: The Decomposition Method; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 60.
4. Pleszczyński, M.; Kaczmarek, K.; Słota, D. Application of a Hybrid of the Different Transform Method and Adomian Decomposition Method Algorithms to Solve the Troesch Problem. Mathematics 2024, 12, 3858.
5. Abazari, R.; Ganji, M. Extended two-dimensional DTM and its application on nonlinear PDEs with proportional delay. Int. J. Comput. Math. 2011, 88, 1749–1762.
6. Brociek, R.; Pleszczyński, M. Comparison of selected numerical methods for solving integro-differential equations with the Cauchy kernel. Symmetry 2024, 16, 233.
7. Brociek, R.; Pleszczyński, M. Differential Transform Method (DTM) and Physics-Informed Neural Networks (PINNs) in Solving Integral–Algebraic Equation Systems. Symmetry 2024, 16, 1619.
8. Abazari, R.; Abazari, M. Numerical simulation of generalized Hirota–Satsuma coupled KdV equation by RDTM and comparison with DTM. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 619–629.
9. Lynch, S. Dynamical Systems with Applications Using Mathematica; Springer: Berlin/Heidelberg, Germany, 2007.
10. Sitek, G.; Pleszczyński, M. Inferring About the Average Value of Audit Errors from Sequential Ratio Tests. Entropy 2024, 26, 998.
11. Wolfram, S. The MATHEMATICA® Book, Version 4; Cambridge University Press: Cambridge, UK, 1999.
12. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707.
13. Cuomo, S.; Schiano Di Cola, V.; Giampaolo, F.; Rozza, G.; Raissi, M.; Piccialli, F. Scientific Machine Learning Through Physics–Informed Neural Networks: Where We Are and What’s Next. J. Sci. Comput. 2022, 92, 88.
14. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations. arXiv 2017, arXiv:1711.10561.
15. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations. arXiv 2017, arXiv:1711.10566.
16. Farea, A.; Yli-Harja, O.; Emmert-Streib, F. Using Physics-Informed Neural Networks for Modeling Biological and Epidemiological Dynamical Systems. Mathematics 2025, 13, 1664.
17. Movahhedi, M.; Liu, X.; Geng, B.; Fan, J.; Zhang, Z.; Ma, J.; Luo, X. Predicting 3D Soft Tissue Dynamics from 2D Imaging Using Physics-Informed Neural Networks. Commun. Biol. 2023, 6, 541.
18. Zobeiry, N.; Humfeld, K.D. A physics-informed machine learning approach for solving heat transfer equation in advanced manufacturing and engineering applications. Eng. Appl. Artif. Intell. 2021, 101, 104232.
19. Ren, Z.; Zhou, S.; Liu, D.; Liu, Q. Physics-Informed Neural Networks: A Review of Methodological Evolution, Theoretical Foundations, and Interdisciplinary Frontiers Toward Next-Generation Scientific Computing. Appl. Sci. 2025, 15, 8092.
20. Lu, Z.; Zhang, J.; Zhu, X. High-Accuracy Parallel Neural Networks with Hard Constraints for a Mixed Stokes/Darcy Model. Entropy 2025, 27, 275.
21. Li, S.; Feng, X. Dynamic Weight Strategy of Physics-Informed Neural Networks for the 2D Navier–Stokes Equations. Entropy 2022, 24, 1254.
22. Trahan, C.; Loveland, M.; Dent, S. Quantum Physics-Informed Neural Networks. Entropy 2024, 26, 649.
23. Feng, X.; Mei, L.; He, G. An efficient algorithm for solving Troesch’s problem. Appl. Math. Comput. 2007, 189, 500–507.
24. Weibel, E.S.; Landshoff, R. The plasma in magnetic field. In Proceedings of a Symposium on Magneto Hydrodynamics; Stanford University Press: Stanford, CA, USA, 1958; pp. 60–67.
25. Gidaspow, D.; Baker, B.S. A model for discharge of storage batteries. J. Electrochem. Soc. 1973, 120, 1005.
26. Markin, V.; Chernenko, A.; Chizmadehev, Y.; Chirkov, Y.G. Aspects of the theory of gas porous electrodes. In Fuel Cells: Their Electrochemical Kinetics; Springer: New York, NY, USA, 1966; pp. 22–33.
27. Vazquez-Leal, H.; Khan, Y.; Fernandez-Anaya, G.; Herrera-May, A.; Sarmiento-Reyes, A.; Filobello-Nino, U.; Jimenez-Fernandez, V.M.; Pereyra-Diaz, D. A general solution for Troesch’s problem. Math. Probl. Eng. 2012, 2012, 208375.
28. Bararnia, H.; Esmaeilpour, M. On the application of physics informed neural networks (PINN) to solve boundary layer thermal-fluid problems. Int. Commun. Heat Mass Transf. 2022, 132, 105890.
29. Lee, S.; Popovics, J. Applications of physics-informed neural networks for property characterization of complex materials. RILEM Tech. Lett. 2023, 7, 178–188.
30. Hu, H.; Qi, L.; Chao, X. Physics-informed Neural Networks (PINN) for computational solid mechanics: Numerical frameworks and applications. Thin-Walled Struct. 2024, 205, 112495.
31. Brociek, R.; Pleszczyński, M. Differential Transform Method and Neural Network for Solving Variational Calculus Problems. Mathematics 2024, 12, 2182.
Figure 1. PINN-type scheme for forward problems.
Figure 2. DTM method errors for λ = 1 and n = 9.
Figure 3. DTM method errors for λ = 3 and n = 9.
Figure 4. DTM method errors for λ = 10 and n = 9.
Figure 5. Solutions obtained using PINN for different values of λ.
Figure 6. PINN method errors for λ = 1.
Figure 7. PINN method errors for λ = 3.
Figure 8. PINN method errors for λ = 10.
Figure 9. PINN and DTM error comparison for λ = 1.
Figure 10. PINN and DTM error comparison for λ = 3.
Figure 11. PINN and DTM error comparison for λ = 10.
Table 1. Solution values obtained using DTM for λ = 1 compared with exact values y(x).

x     y(x)      n = 3      n = 5      n = 7      n = 9
0.0   0         0          0          0          0
0.1   0.0818    0.084817   0.084685   0.0846649  0.0846618
0.2   0.16453   0.170484   0.170219   0.170179   0.170173
0.3   0.24917   0.257867   0.257466   0.257405   0.257396
0.4   0.33673   0.347859   0.34732    0.347238   0.347225
0.5   0.42835   0.441398   0.440723   0.440619   0.440603
0.6   0.52527   0.53948    0.538683   0.538557   0.538538
0.7   0.62897   0.643173   0.642299   0.642156   0.642133
0.8   0.74117   0.753633   0.752785   0.752637   0.752613
0.9   0.86397   0.872117   0.871503   0.871387   0.871367
1.0   1         1          1          1          1
Table 2. Solution values obtained using DTM for λ = 3 compared with exact values y(x).

x     y(x)       n = 3      n = 5      n = 7     n = 9
0.0   0          0          0          0         0
0.1   0.025946   0.031336   0.02896    0.02779   0.02707
0.2   0.054248   0.065519   0.060552   0.05811   0.05661
0.3   0.087495   0.105669   0.097669   0.09372   0.09131
0.4   0.128777   0.155455   0.143771   0.13796   0.1344
0.5   0.182056   0.219368   0.203267   0.19509   0.19004
0.6   0.252747   0.302992   0.28201    0.27091   0.26391
0.7   0.348805   0.413283   0.387965   0.37367   0.36429
0.8   0.483138   0.558839   0.532055   0.51556   0.50401
0.9   0.680163   0.750176   0.729264   0.71497   0.70401
1.0   1          1          1          1         1
Table 3. Solution values obtained using DTM for λ = 10 compared with exact values y(x).

x     y(x)           n = 3          n = 5          n = 7          n = 9
0.0   0              0              0              0              0
0.1   4.211 × 10^-5  1.163 × 10^-3  3.925 × 10^-4  2.044 × 10^-4  1.423 × 10^-4
0.2   1.299 × 10^-4  3.564 × 10^-3  1.211 × 10^-3  6.307 × 10^-4  4.391 × 10^-4
0.3   3.589 × 10^-4  9.430 × 10^-3  3.326 × 10^-3  1.741 × 10^-3  1.213 × 10^-3
0.4   9.779 × 10^-4  2.297 × 10^-2  8.834 × 10^-3  4.725 × 10^-3  3.303 × 10^-3
0.5   2.659 × 10^-3  5.136 × 10^-2  2.250 × 10^-2  1.265 × 10^-2  8.958 × 10^-3
0.6   7.228 × 10^-3  1.057 × 10^-1  5.422 × 10^-2  3.306 × 10^-2  2.412 × 10^-2
0.7   1.966 × 10^-2  2.022 × 10^-1  1.228 × 10^-1  8.327 × 10^-2  6.399 × 10^-2
0.8   5.373 × 10^-2  3.628 × 10^-1  2.613 × 10^-1  2.004 × 10^-1  1.656 × 10^-1
0.9   1.521 × 10^-1  6.164 × 10^-1  5.249 × 10^-1  4.590 × 10^-1  4.149 × 10^-1
1.0   1              1              1              1              1
Table 4. Comparison of exact values of y(x) and PINN approximations for λ = 1 and λ = 3.

              λ = 1                                λ = 3
x     y(x) [4]   PINN       Δ PINN       y(x) [4]   PINN       Δ PINN
0.0   0          0          0            0          0          0
0.1   0.081797   0.084623   0.002826     0.025946   0.025947   9.9 × 10^-7
0.2   0.164531   0.170101   0.005570     0.054248   0.054252   4.625 × 10^-6
0.3   0.249167   0.257297   0.008130     0.087495   0.087502   7.87 × 10^-6
0.4   0.336732   0.347106   0.010374     0.128777   0.128781   4.05 × 10^-6
0.5   0.428347   0.440474   0.012127     0.182056   0.182053   2.52 × 10^-6
0.6   0.525274   0.538414   0.013140     0.252747   0.252743   3.55 × 10^-6
0.7   0.628971   0.642024   0.013053     0.348805   0.348798   6.75 × 10^-6
0.8   0.741168   0.752527   0.011359     0.483138   0.483128   1.03 × 10^-5
0.9   0.86397    0.871314   0.007344     0.680163   0.680158   4.6 × 10^-6
1.0   1          1          0            1          1          0
Table 5. Comparison of exact values of y(x) and PINN approximations for λ = 10.

x     y(x) [4]       PINN           Δ PINN
0.0   0              0              0
0.1   4.211 × 10^-5  5.203 × 10^-5  9.924 × 10^-6
0.2   1.299 × 10^-4  1.663 × 10^-4  3.648 × 10^-5
0.3   3.589 × 10^-4  3.010 × 10^-4  5.780 × 10^-5
0.4   9.779 × 10^-4  9.008 × 10^-4  7.703 × 10^-5
0.5   2.659 × 10^-3  2.905 × 10^-3  2.466 × 10^-4
0.6   7.228 × 10^-3  8.041 × 10^-3  8.132 × 10^-4
0.7   1.966 × 10^-2  2.197 × 10^-2  2.311 × 10^-3
0.8   5.373 × 10^-2  6.042 × 10^-2  6.698 × 10^-3
0.9   1.521 × 10^-1  1.730 × 10^-1  2.099 × 10^-2
1.0   1              1              0
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

