Article

Artificial Intelligence for Studying Interactions of Solitons and Peakons

Angela Slavova and Ventsislav Ignatov
Institute of Mechanics, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(1), 180; https://doi.org/10.3390/math14010180
Submission received: 21 November 2025 / Revised: 18 December 2025 / Accepted: 29 December 2025 / Published: 3 January 2026
(This article belongs to the Special Issue Applications of Differential Equations in Sciences)

Abstract

In this paper, an Artificial Intelligence (AI) approach based on Physics-Informed Cellular Neural Networks (PICNNs) is developed for studying the Boussinesq Paradigm equation and the so-called b-equation. The models studied here come from fluid dynamics. Machine learning through Physics-Informed Neural Networks (PINNs) is a powerful tool for solving complex problems governed by physical laws. Through optimization and automatic differentiation, the solutions of the models under consideration can be approximated accurately and obtained in real time. In this paper, we apply a new algorithm based on PICNNs for obtaining the interactions between solitons and peakons. The main advantages of the algorithm are its fast implementation and its ability to deliver solutions in real time. It is known that Cellular Neural Networks (CNNs) can approximate nonlinear partial differential equations (PDEs) very accurately and present their solutions in real time. By incorporating the physical laws into the learning process through PICNNs, we can solve various problems from fluid dynamics, materials science, and quantum mechanics.

1. Introduction

Artificial intelligence (AI) uses neural networks to train computers in a way inspired by the human brain. In recent years, we have witnessed the development of digital technologies used to collect multidimensional big data, which makes it necessary to develop new methods for modeling and analysis. The refinement of theoretical algorithms, together with their effective application in different sciences, is very important for this practice.
Physics-Informed Neural Networks (PINNs) are a special type of neural network that incorporates physical laws into the learning process. They differ from traditional neural networks because PINNs combine domain-specific knowledge with physical principles to increase their predictive capabilities in engineering and scientific contexts. PINNs combine data-driven learning with physical laws and, in this way, offer a robust tool for solving very complex partial differential equations (PDEs) and ordinary differential equations (ODEs) arising in mathematical physics [1,2,3,4,5,6]. In this context, PINNs are very good at accurately modeling physical phenomena such as fluid dynamics, predicting material characteristics, and solving inverse problems [2,6].
As is well known, different physical phenomena are described by different mathematical models and therefore, in general, by different nonlinear Partial Differential Equations (PDEs). The corresponding solutions can differ from each other in their structure, their behavior at infinity, etc. We are looking for traveling wave solutions, aiming to draw their profiles as well as to construct solutions with special physical meaning in laser optics and fluid dynamics. In many cases, these equations model waves that have different geometrical structures (traveling waves, kinks, solitons, loops, butterflies, ovals, peakons and anti-peakons, among others). In this paper we consider two equations, the Boussinesq Paradigm equation and the b-equation, as typical examples of PDEs that have solutions of a special form such as solitons, kinks, and peakons. Special attention is paid to the interactions of two solitons and to the interactions of peakons and kinks, obtained by reducing the original equations to a system of ordinary differential equations (ODEs) with a non-smooth right-hand side. In this way, we are able, in some cases, to obtain global first integrals of these systems and to study the interaction of the waves using methods of classical analysis. This is the novelty of the analytical results obtained in this paper.
In this paper, we first study the Cauchy problem for the Boussinesq Paradigm equation [7,8]:
$$\frac{\partial^2 \varphi}{\partial t^2} = \Delta\varphi + \gamma_1\,\Delta\frac{\partial^2 \varphi}{\partial t^2} - \gamma_2\,\Delta^2\varphi + \beta_0\,\Delta g(\varphi), \qquad x \in \mathbb{R}^n,\ \ 0 \le t \le T < \infty,$$
where $\varphi(x,0)=\varphi_0(x)$, $\varphi_t(x,0)=\varphi_1(x)$, and $\varphi(x,t)\to 0$, $\Delta\varphi(x,t)\to 0$ as $|x|\to\infty$.
The variable $\varphi$ stands for the surface elevation; $g(\varphi)=\varphi^p$, $p\in\mathbb{N}$, $p\ge 2$; $\gamma_1,\gamma_2\ge 0$ with $\gamma_1+\gamma_2>0$ are two dispersion coefficients; and $\beta_0$ is an amplitude parameter. Equation (1) can be derived by modeling surface waves in shallow water. It can also be found in other fields such as the theory of acoustic waves, ion-sound waves, plasma, and nonlinear lattice waves [8].
The dispersive multidimensional Boussinesq equation [7,8] and its generalizations, for instance, the $B(m,n)$ Boussinesq equation and the Boussinesq Paradigm equation, arise mainly in fluid mechanics via the formation of patterns in liquid drops, the vibrations of a one-dimensional lattice, etc. In the literature [7,8,9,10,11,12], there are many developments concerning the Boussinesq equation and its generalizations. A detailed study of the traveling wave solutions of the Boussinesq Paradigm Equation (1) can be found in Section 3 below. We obtain the solutions in integral form. Moreover, the solitons develop cusp-type singularities, and we study the interactions of two solitons as well.
This paper will deal with the interactions of peakon, anti-peakon, and peakon-kink solutions for the generalization of the b-equation [13], namely an evolution PDE containing both quadratic and cubic nonlinearities:
$$v_t - v_{xxt} = \beta v_x + \frac{\kappa_1}{2}\big[(v - v_{xx})(v^2 - v_x^2)\big]_x + \frac{\kappa_2}{2}\big(2(v - v_{xx})v_x + (v_x - v_{xxx})v\big),$$
where $\beta$, $\kappa_1$, $\kappa_2$ are constants.
When $\kappa_1=0$, $\kappa_2=2$, $\beta=0$, we obtain the Camassa–Holm equation, and when $\kappa_2=0$, $\kappa_1=2$, we have the cubic nonlinear evolution equation [9]. Usually, when studying the interactions of (anti-)peakons and kink waves, we look for solutions of (2) in a special form, which we call an Ansatz. In this way, we reduce the equation to a system of ODEs with jumps caused by the non-smooth right-hand side. Then, we obtain global first integrals. Under some restrictions, we can solve the obtained ODEs in quadratures or in terms of special functions (elliptic functions). We find the solutions in explicit form and study their interactions using classical analytical methods. A geometrical interpretation of the collision of these kinds of waves is given in Figure 1.
In this paper, we study Equations (1) and (2) by applying a new methodology, namely Physics-Informed Cellular Neural Networks (PICNNs). In this way, we are able to obtain soliton solutions of (1) and peakon-kink solutions of (2). We develop an artificial intelligence algorithm based on PICNNs for the interactions of the soliton and peakon-kink waves which arise from Equations (1) and (2). The algorithm for obtaining soliton and peakon-kink solutions and their interactions is presented in detail in Section 5.
We shall present a brief comparison with other investigations in the field of PINNs. Raissi et al. [4,5] used Gaussian process regression to construct representations of a linear operator functional, accurately derive the solution, and provide uncertainty estimates for various physical problems. This study was then extended in [6]. Various articles have been published in which the new concepts of PINNs are presented. For example, researchers have previously addressed the potential, limitations, and applications of forward and inverse problems for three-dimensional flows [1], as well as comparisons with other machine learning techniques. An introduction to PINNs that covers the basics of machine learning and neural networks can be found in the work of Kollmannsberger et al. [2]. In the literature, PINNs are also compared with other methods that can be applied to solve PDEs, such as those based on the Feynman–Kac theorem [3]. Finally, PINN codes have been extended to solve integro-differential equations and stochastic differential equations. In Ref. [14], a bilinear neural network method for solving nonlinear PDEs is presented. In Ref. [15], Lax pairs-informed neural networks (LPNNs) for finding wave solutions are provided. To the authors' knowledge, cellular neural networks have not previously been incorporated into the PINN architecture. This is the novelty of the AI algorithm obtained in this paper.
This paper is organized as follows. In Section 2, we introduce Physics-Informed Neural Networks. Section 3 deals with some analytical results related to Equation (1), as well as the interactions of the soliton waves which arise. In Section 4, we obtain analytically the interaction of peakon (anti-peakon) and kink waves. In Section 5, we develop the concept of Physics-Informed Cellular Neural Networks and apply them to study the models under consideration. We develop an artificial intelligence algorithm based on PICNNs. The obtained results are presented numerically by computer simulations using the NVIDIA package [16].

2. Physics-Informed Neural Networks

Introduced in 2017 in the papers [4,5] and then in 2019 in [6], PINNs are a new class of neural networks which can solve nonlinear partial differential equations. Raissi et al. present PINNs as a new method for finding solutions to the Allen–Cahn, Schrödinger, and Burgers equations. In these papers, they show that PINNs are able to handle both forward and inverse problems. For forward problems, they illustrate how to evaluate the solutions of the governing equations, while for inverse problems the parameters of the models are obtained through a learning process from the observation data. Since the introduction of PINNs, many papers have been published which present new concepts for solving different types of equations [1,2,3,14,15]. In these papers, the potential and applications of PINNs in both forward and inverse problems, as well as comparisons with existing machine learning algorithms, are described. In Ref. [2], Kollmannsberger et al. present an introductory course on PINNs, and PINNs are compared with different methods for solving differential equations. The PINN approach has been applied to various types of differential equations, such as integro-differential equations, stochastic differential equations, etc.
In Figure 2, we present the scheme of the main blocks of PINN architecture:
There are three main blocks: the neural network which approximates the solution; the physics-informed block which, through automatic differentiation, incorporates the residuals of the governing differential equations; and the loss-function minimization block.
Let us consider a general type of partial differential equation written in the following form,
$$F(v, x, \rho, v_x, \ldots) = 0, \qquad x \in \Omega,\ \ \rho \in \Omega_p,$$
where $v(x,t)$ is the solution, $x$ is the space variable in the one-dimensional domain $\Omega$, and $\rho$ is a physical parameter which takes values in $\Omega_p$. For parametric problems, $\rho$ can be considered as a second variable, while for inverse problems it is unknown. For forward problems we add boundary conditions (BCs) and initial conditions (ICs), whereas for inverse problems additional conditions should be provided, such as, for example, the known solution at some values of $x$.
We shall introduce the vector $\Gamma$ of all unknown parameters of the deep neural network for both forward and inverse problems, together with $\rho$ in the case of inverse problems. Therefore, the neural network should learn to approximate the governing differential equations by finding $\Gamma$, which is obtained by minimizing a loss function consisting of a differential-equation term $L_F$, a boundary-condition term $L_B$, and a data term $L_d$, all of which are weighted:
$$\Gamma = \arg\min_{\Gamma}\big(\omega_F L_F(\Gamma) + \omega_B L_B(\Gamma) + \omega_d L_d(\Gamma)\big).$$
Usually, in the PINN approach, the loss function L F can be defined as follows:
$$L_F(\Gamma) = \frac{1}{N_c}\sum_{i=1}^{N_c}\big|F\big(v_\Gamma(x_i)\big)\big|^2,$$
where the equation residual is evaluated at a set of $N_c$ points $x_i$, called collocation points. When the boundary data are added to the training data set, the neural network is able to learn to approximate the solution at the boundaries where the known solutions are available. Therefore, PINNs can solve PDEs in inverse problems in a data-driven manner.
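The following minimal sketch shows how such a collocation loss can be assembled in practice; it is not the code used for the results in Section 5. It relies on PyTorch automatic differentiation applied to a toy transport equation $v_t + c\,v_x = 0$, and the network size, the wave speed, and the number of collocation points are illustrative assumptions.

```python
# Illustrative sketch of the collocation loss (not the implementation used in this paper):
# PDE residual of the toy equation v_t + c*v_x = 0 evaluated at random collocation points.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(                      # small network approximating v_Gamma(x, t)
    torch.nn.Linear(2, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1))
c = 1.0                                         # wave speed (assumed)
N_c = 256                                       # number of collocation points (assumed)

x = torch.rand(N_c, 1, requires_grad=True)      # collocation points in space
t = torch.rand(N_c, 1, requires_grad=True)      # collocation points in time
v = net(torch.cat([x, t], dim=1))

ones = torch.ones_like(v)
v_t = torch.autograd.grad(v, t, grad_outputs=ones, create_graph=True)[0]
v_x = torch.autograd.grad(v, x, grad_outputs=ones, create_graph=True)[0]

residual = v_t + c * v_x                        # F(v_Gamma(x_i)) for this toy equation
L_F = (residual ** 2).mean()                    # mean squared residual over collocation points
L_F.backward()                                  # gradients with respect to the parameters Gamma
print(float(L_F))
```

The same pattern extends to Equations (1) and (2) by replacing the residual with the corresponding differential operator and adding the weighted boundary and data terms.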
PINNs have many advantages in comparison to traditional numerical methods. One of these advantages is numerical simplicity: in PINNs, we do not need the discretization used in finite-difference methods. Moreover, the number of collocation points required to guarantee the convergence of the training process is not high. Another advantage is that, after the training is complete, the solution can be predicted on any grid different from the collocation grid, unlike traditional numerical methods, where additional interpolation is needed.

3. Analytical Results Concerning Equation (1)

In this section, we shall present some analytical results concerning the Cauchy problem for the Boussinesq Paradigm Equation (1). Let us set $n = 1$ and $g(\varphi) = \varphi^{m+1}$ in Equation (1). We look for traveling wave solutions of the form $\varphi(x,t) = \psi(x - ct)$, $\tau = x - ct$, $c \neq 1$. We integrate (1) twice with respect to $\tau$ and take the integration constants equal to zero. So, we obtain
$$(c^2 - 1)\psi - (\gamma_1 c^2 - \gamma_2)\psi'' - \beta_0\,\psi^{m+1} = 0, \qquad m \ge 1,$$
i.e.,
$$\psi'' = B_1\psi + B_2\psi^{m+1}, \qquad B_1 > 0,\ B_2 < 0.$$
Below, we shall illustrate the tanh method [8]. For this purpose, we put
$$\psi = \kappa_2\big(\operatorname{sech}^2(\kappa_1 x)\big)^{\delta},$$
with $\kappa_1, \kappa_2, \delta > 0$ as unknown parameters.
Now, we substitute (8) into (7). It is known that $\operatorname{sech}^2(\kappa_1 x) = 1 - \tanh^2(\kappa_1 x)$, and it can be seen that $w = \tanh(\kappa_1 x)$ satisfies the ODE $w' = \kappa_1(1 - w^2)$. Therefore, for $\delta m = 1$, i.e., $\delta = \frac{1}{m}$, we get
$$-2\kappa_1^2\delta + 2\kappa_1^2\delta(1+2\delta)\,w^2 = B_1 + B_2\kappa_2^m - B_2\kappa_2^m w^2,$$
i.e.,
$$-2\kappa_1^2\delta = B_1 + B_2\kappa_2^m, \qquad 2\kappa_1^2\delta(1+2\delta) = -B_2\kappa_2^m.$$
Then,
$$\kappa_2 = \left(-\frac{B_1(m+2)}{2B_2}\right)^{\frac{1}{m}}, \qquad \kappa_1 = \pm\frac{m}{2}\sqrt{B_1}.$$
So, we obtain
$$\psi = \left[-\frac{B_1}{B_2}\,\frac{m+2}{2}\,\operatorname{sech}^2\!\left(\frac{m}{2}\sqrt{B_1}\,\xi\right)\right]^{\frac{1}{m}},$$
which, in fact, generates a traveling wave solution $\varphi(t,x) = \psi(x - ct)$ of (1) for $n = 1$.
In (11), $B_1^{-1} = \frac{\gamma_1 c^2 - \gamma_2}{c^2 - 1} > 0$ and $B_2^{-1} = -\frac{\gamma_1 c^2 - \gamma_2}{\beta_0} < 0$. Consequently, $\psi > 0$, $\psi(\pm\infty) = 0$, and
$$\varphi(t,x) = \left[\frac{c^2 - 1}{\beta_0}\,\frac{m+2}{2}\,\operatorname{sech}^2\!\left(\frac{m}{2}\sqrt{\frac{c^2 - 1}{\gamma_1 c^2 - \gamma_2}}\,(x - ct)\right)\right]^{\frac{1}{m}}, \qquad m \ge 1.$$
It is possible to obtain (11) using the fact that, for $0 < \psi < 1$ and $\delta > 0$, $\frac{d}{d\psi}\operatorname{arcsech}(\psi^{\delta}) = -\frac{\delta}{\psi\sqrt{1 - \psi^{2\delta}}}$, i.e., $x = -\int\frac{d\psi}{\psi\sqrt{1 - \psi^{2\delta}}} = \frac{\operatorname{arcsech}(\psi^{\delta})}{\delta}$, and evidently $\psi^{\delta} = \operatorname{sech}(\delta x)$, $\psi = (\operatorname{sech}(\delta x))^{1/\delta}$ with $\delta = \frac{m}{2}$, etc.
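The profile (11) can also be checked numerically; the short sketch below (assuming NumPy, with illustrative sample values $B_1 > 0$, $B_2 < 0$, $m = 2$) verifies that it satisfies $\psi'' = B_1\psi + B_2\psi^{m+1}$ up to the discretization error of a central-difference second derivative.

```python
# Sketch: numerical check of the sech-type profile against psi'' = B1*psi + B2*psi^(m+1).
# The values of B1, B2 and m are illustrative assumptions.
import numpy as np

B1, B2, m = 0.8, -1.2, 2                        # sample values with B1 > 0, B2 < 0
xi = np.linspace(-10, 10, 4001)
h = xi[1] - xi[0]

def sech(u):
    return 1.0 / np.cosh(u)

psi = (-B1 / B2 * (m + 2) / 2 * sech(0.5 * m * np.sqrt(B1) * xi) ** 2) ** (1.0 / m)
psi_xx = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / h ** 2      # central-difference psi''
residual = psi_xx - B1 * psi[1:-1] - B2 * psi[1:-1] ** (m + 1)
print(np.abs(residual).max())                   # small, of the order of the O(h^2) error
```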
Now, we shall study the problem of the interaction of solitary waves which satisfy the 1D version of (1) with $g(\varphi) = \varphi^2$ (i.e., $m = 1$), $\beta_0 = 3$, $\gamma_1 = 1.5$, and $\gamma_2 = 0.5$.
In [7], it is shown that (1) has traveling wave solutions represented by the solitons
$$\tilde{\psi}(x,t;x_0,c) = \frac{3}{2}\,\frac{c^2 - 1}{\beta_0}\,\operatorname{sech}^2\!\left(\frac{x - x_0 - ct}{2}\sqrt{\frac{c^2 - 1}{\gamma_1 c^2 - \gamma_2}}\right),$$
for $|c| > \max\!\left(1, \sqrt{\gamma_2/\gamma_1}\right)$ or $|c| < \min\!\left(1, \sqrt{\gamma_2/\gamma_1}\right)$. The maximum of $\tilde{\psi}$ is attained on the line $x - x_0 - ct = 0$.
We shall investigate the Cauchy problem for (1) with initial data
$$\varphi(x,0) = \tilde{\psi}(x,0;x_0^1,c_1) + \tilde{\psi}(x,0;x_0^2,c_2), \qquad \varphi_t(x,0) = \tilde{\psi}_t(x,0;x_0^1,c_1) + \tilde{\psi}_t(x,0;x_0^2,c_2), \qquad x \in \mathbb{R}^1.$$
Equation (1) is a nonlinear PDE, and therefore $\varphi(x,t) \neq \tilde{\psi}(x,t;x_0^1,c_1) + \tilde{\psi}(x,t;x_0^2,c_2)$ in the general case.
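The exact soliton $\tilde{\psi}$ and the superposed initial data are straightforward to evaluate numerically; the sketch below (assuming NumPy, and not part of the finite difference scheme used for Figure 3) uses the Case (i) parameter values given below.

```python
# Sketch: evaluating the exact soliton profile and the two-soliton initial data.
# Parameter values follow Case (i); NumPy is assumed.
import numpy as np

gamma1, gamma2, beta0 = 1.5, 0.5, 3.0

def soliton(x, t, x0, c):
    """sech^2 soliton of the Boussinesq Paradigm equation for m = 1."""
    amp = 1.5 * (c ** 2 - 1.0) / beta0
    width = 0.5 * np.sqrt((c ** 2 - 1.0) / (gamma1 * c ** 2 - gamma2))
    return amp / np.cosh(width * (x - x0 - c * t)) ** 2

x = np.linspace(-100.0, 150.0, 2501)
phi0 = soliton(x, 0.0, -40.0, 2.0) + soliton(x, 0.0, 50.0, -1.5)   # initial condition
print(phi0.max())
```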
We shall apply a new conservative finite difference scheme for (1) and we shall consider two cases:
Case (i). $\gamma_1 = 1.5$, $\gamma_2 = 0.5$, $\beta_0 = 3$, $x_0^1 = -40$, $x_0^2 = 50$, $c_1 = 2$, $c_2 = -1.5$, $0 \le t \le 90$.
Therefore, $\tilde{\psi}(x,0;x_0^1,c_1)$ and $\tilde{\psi}(x,0;x_0^2,c_2)$ are two solitons which move in opposite directions. At some point, they collide and a new wave arises. This wave is weaker than the initial waves in the sense that its amplitude is smaller. In time, the two initial solitons regain their amplitudes and continue to travel with the same shapes they had before the collision (see Figure 3).
Case (ii). $\gamma_1 = 1.5$, $\gamma_2 = 0.5$, $\beta_0 = 3$, $x_0^1 = -40$, $x_0^2 = 50$, $c_1 = -c_2 = 2.2$.
In this case, the solution $\varphi$ blows up after the collision and the absolute value of the amplitude increases. The blow-up time is $t^* \approx 27$.

4. Interaction of Kink-Peakon Solutions to the B-Equation (2)

We shall apply the following Ansatz for the solution to the b-equation [13]:
$$v = q_1(t)\,\operatorname{sgn}(x - r_1(t))\big(e^{-|x - r_1(t)|} - 1\big) + q_2(t)\,e^{-|x - r_2(t)|}.$$
In [13], it is proved that, when $\kappa_2 = 0$ and $\beta$ is arbitrary, the amplitudes $q_{1,2}(t)$ and the position functions $r_{1,2}(t)$ must satisfy a system of ODEs of the following type:
$$\begin{aligned}
q_1 &= \pm\sqrt{\beta/\kappa_1} \quad (\text{or } \pm\sqrt{-\beta/\kappa_1} \text{ if } \beta\kappa_1 < 0),\\
q_2' &= -\kappa_1 q_1^2\, q_2\, \operatorname{sgn}(r_2 - r_1)\, e^{-|r_1 - r_2|},\\
r_1' &= -\tfrac{1}{2}\beta - \kappa_1 q_1 q_2\, \operatorname{sgn}(r_2 - r_1)\, e^{-|r_1 - r_2|},\\
r_2' &= -\tfrac{1}{3}\kappa_1 q_2^2 + \tfrac{1}{2}\kappa_1 q_1^2 - \kappa_1\big(q_1^2 + q_1 q_2\, \operatorname{sgn}(r_2 - r_1)\big) e^{-|r_1 - r_2|} + \kappa_1\, \operatorname{sgn}(r_2 - r_1)\, q_1 q_2.
\end{aligned}$$
In the case of the interaction of a single kink and $N$ peakons, the Ansatz for the solution $v$ is similar to (14) (when $\kappa_2 = 0$, $\beta \neq 0$):
$$v = q_0(t)\,\operatorname{sgn}(x - r_0(t))\big(e^{-|x - r_0(t)|} - 1\big) + \sum_{j=1}^{N} q_j\, e^{-|x - r_j|}.$$
In Formula (16), $q_0 = \pm\sqrt{\beta/\kappa_1}$ and $r_0' = -\frac{1}{2}\kappa_1 q_0^2 + \kappa_1 q_0 \sum_{i=1}^{N} q_i\, \operatorname{sgn}(r_0 - r_i)\, e^{-|r_0 - r_i|}$. In this case, the ODEs satisfied by $q_j, r_j$, $1 \le j \le N$, are very complicated and we shall omit their formulation (see [13]).
Let us assume that in (15) we have $r_2 > r_1$. Put $A_1 = \pm\sqrt{\beta/\kappa_1}$, $A_2 = \kappa_1 q_1 = \pm\operatorname{sgn}(\kappa_1)\sqrt{\beta\kappa_1}$. Therefore, (15) takes the form
$$\begin{aligned}
q_2' &= -\beta\, q_2\, e^{r_1 - r_2},\\
r_1' &= -\tfrac{1}{2}\beta - A_2 q_2\, e^{r_1 - r_2},\\
r_2' &= -\tfrac{1}{3}\kappa_1 q_2^2 + \tfrac{1}{2}\beta - (\beta + A_2 q_2)\, e^{r_1 - r_2} + A_2 q_2.
\end{aligned}$$
From the first two equations, we obtain
$$A_2 q_2 - \beta r_1 = \frac{\beta^2 t}{2} + B_1, \qquad B_1 = \mathrm{const},$$
and from the second and the third equations, we get
$$r' = -\beta + \tfrac{1}{3}\kappa_1 q_2^2 - A_2 q_2 + \beta\, e^{r},$$
where $r = r_1 - r_2 < 0$.
We shall make the following change of variables: $r = \log \eta(q_2)$. In this way, $r'(t) = \frac{\eta'(q_2)\, q_2'}{\eta(q_2)} = -\beta\, \eta'(q_2)\, q_2$; the function $\eta$ is unknown, and $0 < \eta(q_2) < 1$ since $r < 0$. Then, (19) takes the following form
$$-\beta\, \eta'(q_2)\, q_2 = -\beta + \tfrac{1}{3}\kappa_1 q_2^2 - A_2 q_2 + \beta\, \eta(q_2).$$
Equation (20) is a linear ODE with respect to the unknown function $\eta$ of the independent variable $q_2$, so
$$\eta'(q_2) = \frac{1}{q_2} - \frac{\kappa_1}{3\beta}\, q_2 + \frac{A_2}{\beta} - \frac{\eta(q_2)}{q_2}.$$
Therefore, we obtain
$$\eta(q_2) = \frac{B_2}{q_2} + 1 - \frac{\kappa_1}{9\beta}\, q_2^2 + \frac{A_2}{2\beta}\, q_2, \qquad B_2 = \mathrm{const},$$
and certainly $\left(e^{r_1 - r_2} + \frac{\kappa_1}{9\beta}\, q_2^2 - \frac{A_2}{2\beta}\, q_2 - 1\right) q_2 = B_2$ is a first integral of (17). Let $-\frac{\kappa_1}{9\beta}\, q_2^2 + \frac{A_2}{2\beta}\, q_2 + 1 + \frac{B_2}{q_2} \in (0,1)$. We shall find $q_2(t)$ as a solution of the ODE $q_2' = -\beta q_2 e^{r} = -\beta q_2\, \eta(q_2)$, and therefore
$$I = \int \frac{dq_2}{-\beta B_2 - \beta q_2 + \frac{\kappa_1}{9} q_2^3 - \frac{A_2}{2} q_2^2} = \int dt; \qquad t + C = I(q_2).$$
We shall study the zeros of the cubic polynomial $Q_3(q_2) = \frac{\kappa_1}{9} q_2^3 - \frac{A_2}{2} q_2^2 - \beta q_2 - \beta B_2$. If $Q_3(q_2) = 0$ has three simple zeros $\mu_1, \mu_2, \mu_3$, then $I = c_1 \ln|q_2 - \mu_1| + c_2 \ln|q_2 - \mu_2| + c_3 \ln|q_2 - \mu_3|$ with real $c_i$. In the case when $Q_3(q_2) = 0$ has one simple real zero $\mu_1$ and two complex-conjugate zeros, then $I$ equals $c_1 \ln|q_2 - \mu_1|$ plus the logarithm of a non-vanishing second-order polynomial in $q_2$, plus $c_2$ multiplied by $\arctan$ of a linear function of $q_2$. Since the inversion $q_2 = I^{-1}(t + C)$ is not explicit, it is better to work with definite integrals such as, for example, $\int_{q_2^0}^{q_2} \frac{d\mu}{Q_3(\mu)} = t - t_0$, etc.
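A short numerical sketch of this definite-integral approach (assuming NumPy and SciPy, with illustrative parameter values that are not taken from the simulations in Section 5) computes the zeros of $Q_3$ and the elapsed time between two values of $q_2$ by quadrature.

```python
# Sketch: zeros of the cubic Q3 and the definite-integral form of the implicit solution,
# t - t0 = integral of d(mu)/Q3(mu). All parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

beta, kappa1 = 1.0, 2.0
A2, B2 = np.sqrt(beta * kappa1), -0.2

def Q3(q):
    return kappa1 / 9 * q ** 3 - A2 / 2 * q ** 2 - beta * q - beta * B2

print(np.roots([kappa1 / 9, -A2 / 2, -beta, -beta * B2]))   # zeros mu_1, mu_2, mu_3

q2_start, q2_end = 0.5, 0.3                   # an interval containing no zero of Q3
elapsed, _ = quad(lambda mu: 1.0 / Q3(mu), q2_start, q2_end)
print(elapsed)                                # time for q2 to evolve from 0.5 to 0.3
```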
The system (17) can thus be solved with $r_1 = \frac{1}{\beta}\left(A_2 q_2 - \frac{\beta^2}{2}\, t - B_1\right)$ and $r_2 = r_1 - \log \eta(q_2)$.
A qualitative picture of the interaction between kink and peakon waves, which is described by the b-Equation (2), is given in Figure 4.
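As a complementary numerical illustration (a sketch based on the reduced system (17), assuming SciPy and illustrative initial data, and independent of the PICNN computations in Section 5), one can also integrate (17) directly and check that the first integral obtained above is conserved along the trajectory.

```python
# Sketch: direct integration of the reduced ODE system (17) with SciPy and a check of the
# first integral (e^{r1-r2} + kappa1*q2^2/(9*beta) - A2*q2/(2*beta) - 1)*q2 = B2.
# The parameter values and initial data are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

beta, kappa1 = 1.0, 2.0
A2 = np.sqrt(beta * kappa1)              # A2 = kappa1*q1 with q1 = +sqrt(beta/kappa1)

def rhs(t, y):
    q2, r1, r2 = y
    e = np.exp(r1 - r2)                  # valid while r1 < r2
    dq2 = -beta * q2 * e
    dr1 = -0.5 * beta - A2 * q2 * e
    dr2 = -kappa1 * q2 ** 2 / 3.0 + 0.5 * beta - (beta + A2 * q2) * e + A2 * q2
    return [dq2, dr1, dr2]

def first_integral(q2, r1, r2):
    return (np.exp(r1 - r2) + kappa1 * q2 ** 2 / (9 * beta)
            - A2 * q2 / (2 * beta) - 1.0) * q2

y0 = [0.5, -5.0, 5.0]                    # peakon amplitude and positions (assumed)
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-10, atol=1e-12)
print(first_integral(*sol.y[:, 0]), first_integral(*sol.y[:, -1]))   # should coincide
```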

5. A Physics-Informed Cellular Neural Network Algorithm for Studying Interactions Between Solitons and Peakon Waves

In [17], it is shown that some autonomous cellular neural networks (CNNs) represent accurate approximations of nonlinear partial differential equations. This is possible because the CNN solutions of nonlinear PDEs are continuous in time, discrete in space, continuous in the parameters, and bounded in value.
In Figure 5, the architecture of a 2D grid of CNNs is presented. The squares are the cells $C(i,j)$; all cells are identical. The structure of a single cell is given in the figure: it consists of a feedback template, a control template, a bias, and the cell input and output. The interaction of each cell with its neighboring cells is obtained through feedback from the other cells. Usually, the cells are nonlinear dynamical systems, while the interaction between cells is linear; this means that the spatial structure of CNNs is linear, and for this reason they are very suitable for solving physics and engineering problems.
In [17,18], the state equation of CNN is described by:
$$\frac{dz_j}{dt} = -z_j + D_1 * z_j + D_1(z_j) + I_j, \qquad y_j = \tanh(z_j),$$
where $z_j$ is the state variable, $y_j$ is the output of the CNN, $D_1$ is the one-dimensional Laplace template, $D_1(z_j)$ is the one-dimensional nonlinear Laplace template, $*$ is the convolution operator defined in [18], and $I_j$ is a bias.
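For illustration, the sketch below (assuming NumPy; the Laplace template values, the bias, and the time step are illustrative assumptions, and the nonlinear template is omitted) performs explicit Euler time stepping of the linear part of this state equation on a one-dimensional grid of cells.

```python
# Sketch: explicit Euler time stepping of the linear part of the CNN state equation
# dz/dt = -z + D1 * z + I with the one-dimensional Laplace template D1 = [1, -2, 1].
# Template values, bias and time step are illustrative assumptions.
import numpy as np

D1 = np.array([1.0, -2.0, 1.0])           # linear 1D Laplace template
I_bias = 0.0
dt = 0.01

def cnn_step(z):
    lap = np.convolve(z, D1, mode="same") # convolution D1 * z over the cell grid
    return z + dt * (-z + lap + I_bias)   # one Euler step of the state equation

z = np.exp(-np.linspace(-5, 5, 101) ** 2) # initial cell states
for _ in range(100):
    z = cnn_step(z)
y = np.tanh(z)                            # cell outputs y_j = tanh(z_j)
print(y.max())
```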
In this paper, we develop a new AI algorithm for solving nonlinear PDEs: Physics-Informed Cellular Neural Networks (PICNNs). PICNNs are able to approximate the solutions of nonlinear PDEs with very high accuracy in real time. We define the loss function of the PICNN in the following integral form:
$$L_F(\Gamma) = \int_{\Omega}\big(F(\hat{v}_\Gamma(x)) - f(x)\big)^2\, dx,$$
where $\hat{v}_\Gamma(x)$ is the approximate solution of the PDE (3) and the function $f(x)$ defines the problem's data. This formulation is necessary both for the theoretical study and for the implementation of PICNNs. During the training of the PICNN, the network parameters $\Gamma$ are recovered.
In numerical analysis, we usually approximate the solution $v(x)$ of the PDE (3) with an algorithm which computes $\hat{v}_\Gamma(x)$. The estimation of the global error is the main problem for such algorithms. Here, we use the following global error:
$$E = \|\hat{v}_\Gamma(x) - v(x)\|.$$
We are looking for a set of parameters $\Gamma$ for which $E$ is as close to zero as possible.
In the process of numerical discretization, the most important properties are the stability, consistency, and convergence of the algorithm. Therefore, the discretization error is expressed in terms of consistency and stability, which is a basic aspect of study in numerical analysis. In PICNNs, the convergence and stability of the algorithm are assessed through the learning process of the neural network, connected to the data and the physical principles.
In [19], the authors consider a neural network $\hat{v}_\Gamma$ with a $\tanh$ activation function and only two hidden layers. In this setting, the distance to the function $v$ is bounded in a Sobolev space:
$$\|\hat{v}^N_\Gamma - v\|_W \le C\,\ln^k(cN)\, N^{k - s},$$
where N is the number of training points, c , C > 0 are known constants which are independent of N, and v belongs to the Sobolev space W.
In order to validate the accuracy of the predictions made by PICNNs, we compute the relative $L^2$ error between the predicted solution $\hat{v}_\Gamma(x)$ and the exact solution $v(x)$ by the following formula [20]:
$$R_{L^2} = \frac{\sqrt{\sum_{j=1}^{N}\big|\hat{v}_\Gamma(x_j,t_j) - v(x_j,t_j)\big|^2}}{\sqrt{\sum_{j=1}^{N}\big|v(x_j,t_j)\big|^2}},$$
where $(x_j, t_j)$ are the collocation points in the spatial-temporal domain.
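Once the predicted and exact solutions are sampled at the same points, this relative error is computed directly; the following one-function sketch assumes NumPy and synthetic data.

```python
# Sketch: relative L2 error between predicted and exact solution values on the same points.
import numpy as np

def relative_l2(v_pred, v_exact):
    """Relative L2 error of the prediction with respect to the exact solution."""
    return np.linalg.norm(v_pred - v_exact) / np.linalg.norm(v_exact)

# toy usage with synthetic values
v_exact = np.sin(np.linspace(0.0, np.pi, 100))
v_pred = v_exact + 1e-3 * np.random.default_rng(0).standard_normal(100)
print(relative_l2(v_pred, v_exact))
```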
In this paper, we develop a new AI algorithm for training PICNNs in the following way:
Input: BCs and ICs, collocation points for the coordinates.
1. Initialize the iteration counter: $j = 0$.
2. Non-dimensionalize the Boussinesq Paradigm Equation (1) (b-Equation (2), respectively).
3. Present the solution of (1) (respectively (2)) by a cellular neural network $v_\Gamma$.
4. Use the $\tanh$ activation function and initialize the cellular neural network.
5. Train the PICNN by minimizing the total loss function
$$\Gamma_{total}(\Gamma) = \omega_F L_F(\Gamma) + \omega_B L_B(\Gamma) + \omega_d L_d(\Gamma).$$
6. Update the model parameters:
$$\Gamma_i = \Gamma_{i-1} - \gamma\,\frac{\partial \Gamma_{total}}{\partial \Gamma}.$$
7. If $\Gamma_{total}(\Gamma_i) < \mathrm{minimum\_loss}$, then
(i) Save the model parameters.
(ii) Update the minimum loss: $\mathrm{minimum\_loss} = \Gamma_{total}(\Gamma_i)$.
(iii) Initialize the counter.
If the termination criterion is satisfied, then stop the training loop.
Otherwise, increase the counter.
End of algorithm.
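A minimal sketch of steps 5-7 of this loop is given below; it is not the implementation used for the results that follow. It assumes PyTorch, a plain gradient-descent update with learning rate $\gamma$, and a placeholder total loss; the network size and all numerical values are illustrative assumptions.

```python
# Illustrative sketch of steps 5-7: gradient-descent update of the parameters Gamma with a
# "best loss" checkpoint. The total loss here is a placeholder standing in for
# w_F*L_F + w_B*L_B + w_d*L_d assembled from PDE residual, boundary and data terms.
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 80), torch.nn.Tanh(),
                          torch.nn.Linear(80, 80), torch.nn.Tanh(),
                          torch.nn.Linear(80, 1))
gamma = 1e-3                                   # learning rate in step 6 (assumed)
w_F, w_B, w_d = 1.0, 1.0, 1.0                  # loss weights (assumed)
minimum_loss = float("inf")
best_state = None

def total_loss(model):
    # Placeholder total loss; in the actual algorithm this evaluates the weighted sum of
    # the PDE residual, boundary-condition and data losses at the collocation points.
    x = torch.rand(128, 2)
    return w_F * (model(x) ** 2).mean()

for i in range(1000):
    loss = total_loss(net)                     # step 5: total loss at the current parameters
    net.zero_grad()
    loss.backward()                            # gradient of the total loss w.r.t. Gamma
    with torch.no_grad():
        for p in net.parameters():             # step 6: Gamma_i = Gamma_{i-1} - gamma * grad
            p -= gamma * p.grad
    if loss.item() < minimum_loss:             # step 7: save parameters, update minimum loss
        minimum_loss = loss.item()
        best_state = {k: v.clone() for k, v in net.state_dict().items()}
print(minimum_loss)
```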
Applying the above PICNN algorithm in Python 3.8 on a computer equipped with an NVIDIA GeForce GTX 1080 graphics card from the NVIDIA Corporation [16], we obtain the following results.
In Figure 6, using a PICNN, we reproduce the interaction of the two solitons $\tilde{\psi}(x,0;x_0^1,c_1)$ and $\tilde{\psi}(x,0;x_0^2,c_2)$ of the Boussinesq Paradigm Equation (1) with parameters $\gamma_1 = 1.5$, $\gamma_2 = 0.5$, $\beta_0 = 3$, $x_0^1 = -40$, $x_0^2 = 50$, $c_1 = 2$, $c_2 = -1.5$, $0 \le t \le 90$. We apply a four-layer CNN with 80 neurons per layer. The network is trained with 20,000 iterations followed by 20,000 additional steps on the spatial-temporal domain. The resulting relative $L^2$ error is $8.52 \times 10^{-3}$. The solitons move in opposite directions and, when they collide, a new weaker wave arises. The two initial solitons continue to travel after the collision with the same shapes.
In Figure 7, the results obtained by the PICNN present the interaction of peakon-kink waves in the case $r_1 = \frac{1}{\beta}\left(A_2 q_2 - \frac{\beta^2}{2}\, t - B_1\right)$, $r_2 = r_1 - \log \eta(q_2)$. The CNN architecture consists of five layers with 80 neurons each. The network is trained for 20,000 iterations, and the relative $L^2$ error is $5.47 \times 10^{-3}$.
The main advantage of the PICNN algorithm is that it is very fast, due to the spatial structure of the CNN. Another advantage is that the solutions are obtained in real time. For the boundary conditions, we use Dirichlet boundary conditions. PICNNs are mesh-free and allow the computation of the solutions after a training stage. Moreover, PICNNs make the solutions differentiable by applying analytical gradients. Within the same optimization framework, PICNNs can solve both forward and inverse problems.

6. Discussion

PICNNs have many advantages, but they still have limitations and challenges. Here, we shall discuss some of the constraints and challenges connected to the application of PICNN algorithms. These include the computational complexity, data scarcity, the automatic differentiation of complex mathematical physics equations, as well as robustness and generalization.
In this discussion, we shall comment on some of the potential future research on PICNN which can lead to the advancement of the field. This could include, for instance, novel AI algorithms for training neural networks, as well as improving the efficiency of the algorithms. In particular, interdisciplinary collaborations are very helpful for exchanging ideas about problem solving, optimizing the stability and convergence of the algorithms, etc. The connection between physics and machine learning can lead to in-depth investigations of relevant problems and optimization of the algorithms. The link between engineering and data science contributes to specific knowledge in fluid dynamics, quantum mechanics, and feature engineering. PICNN algorithms contribute to materials science by optimizing new materials and contribute to robotics and nanotechnology by enhancing autonomous system and control techniques.
In conclusion, the interdisciplinary nature of PICNN algorithms significantly improves reliability and model transparency when solving real-world problems. The full potential of PICNNs in these applications lies in addressing very complex problems, because they can simplify our understanding of the models with regard to the relationships between physics and data. In this way, PICNNs can contribute to scientific discovery and to innovative AI engineering tools.

7. Conclusions

In this paper, we developed a new AI algorithm based on Physics-Informed Cellular Neural Networks (PICNNs) for studying the interactions between solitons and peakon-kink waves. These kinds of waves arise from the Boussinesq Paradigm equation and the b-equation, respectively. We presented some analytical results for the two equations under consideration. Such equations usually arise in fluid mechanics.
We presented a short overview of Physics-Informed Neural Networks. Then, we introduced PICNNs which were able to present solutions to nonlinear PDEs in real time. We developed an AI algorithm based on PICNNs and presented simulations created using the NVIDIA package. The computer simulations illustrated the theoretical results in the paper. The AI algorithm has many advantages, such as fast approximations due to automatic differentiation, real-time solutions, the simplicity of the algorithm, etc. In future work, this algorithm can be applied for solving different problems in the fields of quantum mechanics and materials science.

Author Contributions

Conceptualization, A.S. and V.I.; methodology, A.S.; software, A.S.; validation, A.S. and V.I.; formal analysis, A.S.; investigation, A.S.; resources, V.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors acknowledge the grant “Artificial intelligence for investigation and modeling of real processes”, KP-06-N 82/4. The authors acknowledge the bilateral project between Aristotle University, Greece, and the Bulgarian Academy of Sciences.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cai, S.; Mao, Z.; Wang, Z.; Yin, M.; Karniadakis, G.E. Physics-informed neural networks (PINNs) for fluid mechanics: A review. Acta Mech. Sin. 2021, 37, 1727–1738. [Google Scholar] [CrossRef]
  2. Kollmannsberger, S.; D’Angella, D.; Jokeit, M.; Herrmann, L. Physics-Informed Neural Networks. In Deep Learning in Computational Mechanics; Studies in Computational Intelligence; Springer International Publishing: Cham, Switzerland, 2021; pp. 55–84. [Google Scholar] [CrossRef]
  3. Karniadakis, G.E.; Kevrekidis, I.G.; Lu, L.; Perdikaris, P.; Wang, S.; Yang, L. Physics-informed machine learning. Nat. Rev. Phys. 2021, 3, 422–440. [Google Scholar] [CrossRef]
  4. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations. arXiv 2017, arXiv:1711.10561. [Google Scholar] [CrossRef]
  5. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations. arXiv 2017, arXiv:1711.10566. [Google Scholar] [CrossRef]
  6. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  7. Yan, Z. Similarity transformations and exact solutions for a family of higher-dimensional Boussinesq equations. Phys. Lett. A 2007, 361, 223–230. [Google Scholar] [CrossRef]
  8. Zhu, Y.; Lu, C. New solitary solutions with compact support for Boussinesq-like B(2n, 2n) equations with fully nonlinear dispersion. Chaos Solut. Fractals 2007, 32, 768–772. [Google Scholar] [CrossRef]
  9. Akhmediev, N.; Ankiewicz, A.; Taki, M. Waves that appear from nowhere and disappear without a trace. Phys. Lett. A 2009, 373, 675–678. [Google Scholar] [CrossRef]
  10. Boussinesq, J. Théorie de l'intumescence liquide appelée onde solitaire ou de translation se propageant dans un canal rectangulaire. Comptes Rendus 1871, 72, 755–759. [Google Scholar]
  11. Inc, M. New solitary wave solutions with compact support and Jacobi elliptic function solutions for the nonlinearly dispersive Boussinesq equation. Chaos Solut. Fractals 2008, 37, 792–798. [Google Scholar] [CrossRef]
  12. Jafari, M.; Mahdion, S. Analysis of generalized quasilinear hyperbolic and Boussinesq equations from the point of view of potential symmetry. J. Finsler Geom. Its Appl. 2024, 5, 1–10. [Google Scholar] [CrossRef]
  13. Qiao, Z.; Xia, B.; Li, J. Integrable system with peakon, weak kink, and kink peakon interactional solutions. arXiv 2012, arXiv:1205.2028. [Google Scholar] [CrossRef]
  14. Zhang, R.; Bilige, S.; Chaolu, T. Fractal Solitons, Arbitrary Function Solutions, Exact Periodic Wave and Breathers for a Nonlinear Partial Differential Equation by Using Bilinear Neural Network Method. J. Syst. Sci. Complex. 2021, 34, 122–139. [Google Scholar] [CrossRef]
  15. Pu, J.; Chen, Y. Darboux transformation-based LPNN generating novel localized wave solutions. Phys. D Nonlinear Phenom. 2024, 467, 134262. [Google Scholar] [CrossRef]
  16. NVIDIA Corporation (2021) Modulus User Guide. Release v21.06–9 November 2021. Available online: https://developer.nvidia.com/modulus-user-guide-v2106 (accessed on 27 December 2025).
  17. Slavova, A. Cellular Neural Networks: Dynamics and Modelling; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2003. [Google Scholar]
  18. Chua, L.O.; Yang, L. Cellular Neural Network: Theory and Applications. IEEE Trans. Circuits Syst. 1988, 35, 1257. [Google Scholar] [CrossRef]
  19. De Ryck, T.; Jagtap, A.D.; Mishra, S. Error estimates for physics informed neural networks approximating the Navier-Stokes Equations. arXiv 2022, arXiv:2203.09346. [Google Scholar] [CrossRef]
  20. Shin, Y.; Zhang, Z.; Karniadakis, G.E. Error estimates of residual minimization using neural networks for linear PDEs. arXiv 2020, arXiv:2010.08019. [Google Scholar] [CrossRef]
Figure 1. Collision of the waves.
Figure 2. General PINN architecture.
Figure 3. Interaction of two solitons.
Figure 4. Interaction and collision between kink and peakon waves.
Figure 5. CNN architecture.
Figure 6. Interaction of two solitons.
Figure 7. Interaction of peakon-kink waves.

