Article

Approximate Methods for Solving Problems of Mathematical Physics on Neural Hopfield Networks

1 Department of Higher and Applied Mathematics, Penza State University, 40 Krasnaya Str., 440026 Penza, Russia
2 Department of Computational Physics, Saint Petersburg State University, 1 Ulyanovskaya Str., 198504 Saint Petersburg, Russia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(13), 2207; https://doi.org/10.3390/math10132207
Submission received: 21 May 2022 / Revised: 21 June 2022 / Accepted: 21 June 2022 / Published: 24 June 2022
(This article belongs to the Special Issue Difference and Differential Equations and Applications)

Abstract

A Hopfield neural network is described by a system of nonlinear ordinary differential equations. We develop numerical schemes applicable to a wide range of computational problems. We review here our study on the approximate solution of Fredholm integral equations, and of linear and nonlinear singular and hypersingular integral equations, by a continuous method for solving operator equations. This method associates the original equation with a Cauchy problem for a system of ordinary differential equations on a Hopfield neural network. We present sufficient conditions for the stability of Hopfield networks, stated in terms of the coefficients of the systems of differential equations.

1. Introduction

Modeling problems of mathematical physics on Hopfield neural networks (HNN) is based on modeling an artificial neuron, implemented as an electronic circuit, by a nonlinear ordinary differential equation. Within this approach, the i-th neuron, connected to the N neurons of the network (including itself), is described by the i-th equation of the system
$$C_i \frac{du_i}{dt} = -\frac{u_i}{R_i} + \sum_{j=1}^{N} w_{ij} f(u_j) + I_i, \quad i = 1, 2, \ldots, N, \tag{1}$$
where $w_{ij}$ are the synaptic weights of the neurons, $I_i$ are the currents representing an external bias, $u_i$ are the induced local fields at the inputs of the nonlinear activation functions $f(u_i)$, and $R_i$ and $C_i$ are the leakage resistances and leakage capacitances, respectively.
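As a minimal illustration of the dynamics (1), the following sketch (ours, not part of the original paper) integrates a small network with the explicit Euler method; the weight matrix, bias currents, tanh activation and the unit values of $R_i$ and $C_i$ are illustrative assumptions.

```python
import numpy as np

def hopfield_rhs(u, W, I, R=1.0, C=1.0):
    # Right-hand side of (1): C du/dt = -u/R + W f(u) + I, with f = tanh.
    return (-u / R + W @ np.tanh(u) + I) / C

def integrate(u0, W, I, h=0.01, steps=5000):
    # Explicit Euler integration of the network dynamics.
    u = u0.copy()
    for _ in range(steps):
        u = u + h * hopfield_rhs(u, W, I)
    return u

rng = np.random.default_rng(0)
N = 4
W = -np.eye(N) + 0.1 * rng.standard_normal((N, N))  # assumed synaptic matrix
I = rng.standard_normal(N)                          # assumed bias currents
print(integrate(np.zeros(N), W, I))                 # equilibrium state
```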
Since the 1980s, methods of modeling numerical solutions on artificial neural networks have attracted serious attention [1,2,3,4,5,6,7,8,9,10,11,12]. A detailed bibliography of the works carried out in this direction can be found in [1,2,3,4,5,12]. The books [1,4,5] are of encyclopedic character. They describe, with different degrees of detail, the principal results concerning the architecture, training and practical applications of artificial neural networks (ANN). The article [13] is a pioneering one; it demonstrated the possibility of solving computational problems with devices assembled from a large number of quite simple standardized elements. The paradigm of Hopfield neural networks was also introduced in this work.
Some of the listed works are devoted to solving particular problems on the basis of artificial neural networks. For instance, the works [14,15] concern optimization problems, the works [16,17] describe solving linear and nonlinear algebraic equations with neural networks, and the works [3,7,9,10,11] exploit the application of ANN to ordinary and partial differential equations. The work [6] is devoted to the more difficult and special problem of solving Diophantine equations. General problems of information science are elucidated in the context of ANN in the book [2]. The problem of parametric identification of dynamical systems with HNNs is described in [18].
Along with modeling known numerical methods on artificial neural networks, it is of interest to develop special modeling methods designed to solve problems of mathematical physics with the ANNs and, first of all, with the HNNs. One of these methods is the continuous operator method [19]. Here is a brief overview of the works carried out in this direction.
The predecessors of the HNN are, to some extent, analog computers.
In the second half of the 20th century, a separate direction in computer technology associated with analog computers was actively developing. Analog computers had a high performance (operations performed at the speed of light), but low accuracy due to imperfection of the element base, and by the end of the century, they were ousted from many fields of application, in particular, from computational mathematics, by digital computers. Currently, analog computers are used as part of specialized control systems. However, we note that recently there has been a renaissance in relation to analog computers, albeit on a different element base [20].
Note that analog computers can be used to solve systems of nonlinear differential equations of the form
$$\frac{du_i}{dt} = \sum_{j=1}^{N} a_{ij} f_{ij}(t, u_j) + g_i(t), \quad i = 1, 2, \ldots, N. \tag{2}$$
Here, $a_{ij}$, $i, j = 1, 2, \ldots, N$, are real numbers, and $f_{ij}(t, v_j)$, $t \in [0, T]$, $\max_{1 \le j \le N} |v_j| \le H$, $i, j = 1, 2, \ldots, N$, and $g_i(t)$, $i = 1, 2, \ldots, N$, $t \in [0, T]$, are continuous functions. The parameter $T$ is determined by the problem to be solved and by the design of the machine, and $H$ is determined by the design of the machine.
From the comparison of Equations (1) and (2), it follows that the fields of applications of Hopfield’s neural networks and analog computers coincide. Therefore, the results presented in this article in terms of Hopfield neural networks are applicable to analog computers. A detailed presentation of the methods for solving systems of algebraic and ordinary differential equations on analog computers can be found in [21].
A review of works devoted to approximate methods for solving systems of algebraic equations and systems of ordinary differential equations on analog computers and HNNs shows that mostly regular (not singular) equations have been solved.
Meanwhile, a large class of problems from various branches of mathematics, physics and engineering has so far remained outside the scope of HNN applications. These include problems that are ill-posed in the Hadamard sense, spectral problems, inverse problems and many others. Applying the continuous operator method [19] to various ill-posed problems (coefficient problems for parabolic and hyperbolic equations, systems of linear algebraic equations with non-invertible matrices) has demonstrated the capabilities of the method.
Obviously, the continuous operator method can be implemented directly on HNNs of any technological origin. This extends the range of problems implemented on HNNs.
The main goal of this work is to demonstrate the relations between the continuous method for solving nonlinear operator equations and HNNs and, thereby, to extend the range of problems solved on HNNs.
In [13], J.J. Hopfield researched the application of “biological computers” to computer design. The architecture of machines built by analogy with biological objects is based on a very large number of interconnected and very simple computing nodes of the same type, called neurons.
In the article [14], J.J. Hopfield and D.W. Tank demonstrated the possibility of implementing such computers, called Hopfield neural networks, using simple circuits made up of resistors, capacitors and inductors. In [14], the energy function was introduced and the stability of neural networks was studied based on the second Lyapunov method. Note that the stability of Hopfield neural networks [22,23] has been studied based on the first Lyapunov method.
Starting from [13,14], Hopfield neural networks began to be widely used to solve optimization problems [15], matrix inversion [16], and parametric identification of dynamic systems [18]. In [17], Hopfield neural networks are applied to solve systems of nonlinear algebraic equations.
In works known to the authors, numerical methods for solving problems of mathematical physics on artificial neural networks (ANN) are based on methods for minimizing the corresponding functionals.
In this paper, we review the works of the authors devoted to methods for solving the equations of mathematical physics on the HNN. These methods are based on conditions for stability of the solutions to systems of ordinary differential equations.
Stability of the neural network is crucial for modeling numerical methods with the network. Usually, stability of the network is substantiated on the grounds of symmetry of the synaptic matrix, and the HNN energy is used as the Lyapunov function. Here, we obtain more general stability conditions that are applicable even for non-symmetric synaptic matrices.

2. Notation and Basic Definitions

Let us first introduce the notation used in the work.
Let $B$ be a Banach space, $a \in B$, let $K$ be a linear operator on $B$, let $\Lambda(K)$ be the logarithmic norm [24] of the operator $K$, let $K^*$ be the operator adjoint to $K$, and let $I$ be the identity operator.
$$B(a, r) = \{ z \in B : \| z - a \| \le r \}, \quad S(a, r) = \{ z \in B : \| z - a \| = r \},$$
$$\operatorname{Re}(K) = (K + K^*)/2, \quad \Lambda(K) = \lim_{h \downarrow 0} \frac{\| I + hK \| - 1}{h}.$$
The analytical expressions for logarithmic norms are known for operators in many spaces. We restrict ourselves to a description of the following three norms. Let $A = \{a_{ij}\}$, $i, j = 1, 2, \ldots, n$, be a matrix. In the $n$-dimensional space $R^n$ of vectors $x = (x_1, \ldots, x_n)$ the following norms are often used:
  • octahedral: $\|x\|_1 = \sum_{i=1}^{n} |x_i|$;
  • cubic: $\|x\|_2 = \max_{1 \le i \le n} |x_i|$;
  • spherical (Euclidean): $\|x\|_3 = \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2}$.
Here are the analytical expressions for the logarithmic norm of an $n \times n$ matrix $A = (a_{ij})$, consistent with the vector norms given above:
the octahedral logarithmic norm
$$\Lambda_1(A) = \max_{1 \le j \le n} \left( a_{jj} + \sum_{i \ne j} |a_{ij}| \right);$$
the cubic logarithmic norm
$$\Lambda_2(A) = \max_{1 \le i \le n} \left( a_{ii} + \sum_{j \ne i} |a_{ij}| \right);$$
the spherical (Euclidean) logarithmic norm
$$\Lambda_3(A) = \lambda_{\max}\left( \frac{A + A^*}{2} \right),$$
where $A^*$ is the matrix adjoint to $A$ and $\lambda_{\max}$ denotes the largest eigenvalue.
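As a quick reference, here is a small sketch (ours, not from the paper) that evaluates the three logarithmic norms above in Python; the test matrix is an arbitrary assumption.

```python
import numpy as np

def log_norm_1(A):
    # Octahedral: max over columns j of ( a_jj + sum_{i != j} |a_ij| ).
    n = A.shape[0]
    return max(A[j, j].real + sum(abs(A[i, j]) for i in range(n) if i != j)
               for j in range(n))

def log_norm_2(A):
    # Cubic: max over rows i of ( a_ii + sum_{j != i} |a_ij| ).
    n = A.shape[0]
    return max(A[i, i].real + sum(abs(A[i, j]) for j in range(n) if j != i)
               for i in range(n))

def log_norm_3(A):
    # Spherical: largest eigenvalue of the Hermitian part (A + A*) / 2.
    return np.linalg.eigvalsh((A + A.conj().T) / 2).max()

A = np.array([[-3.0, 1.0], [2.0, -4.0]])  # assumed test matrix
print(log_norm_1(A), log_norm_2(A), log_norm_3(A))
```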
Note that the logarithmic norm of the same matrix can be positive in one space and negative in another. Moreover, it is known that a linear combination with positive coefficients of any finite number of norms is also a norm.
The logarithmic norm has some properties very useful for numerical mathematics.
Let $A$, $B$ be $n \times n$ matrices with complex elements, and let $x = (x_1, \ldots, x_n)$, $y = (y_1, \ldots, y_n)$, $\xi = (\xi_1, \ldots, \xi_n)$, $\eta = (\eta_1, \ldots, \eta_n)$ be $n$-dimensional vectors with complex components. Let the systems of algebraic equations $Ax = \xi$ and $By = \eta$ be given. The vector norm and the subordinate operator norm of the matrices are assumed to be consistent, and the logarithmic norm $\Lambda(A)$ corresponds to this operator norm.
Theorem 1
([25]). If $\Lambda(A) < 0$, the matrix $A$ is non-singular and $\|A^{-1}\| \le 1 / |\Lambda(A)|$.
Theorem 2
([25]). Let $Ax = \xi$, $By = \eta$ and $\Lambda(A) < 0$, $\Lambda(B) < 0$. Then
$$\|x - y\| \le \frac{\|\xi - \eta\|}{|\Lambda(B)|} + \frac{\|A - B\|}{|\Lambda(A)\, \Lambda(B)|}.$$
Some properties of the logarithmic norm in a Banach space which are useful in numerical mathematics are given in [24].

3. Continuous Methods for Solving Operator Equations

Extensive literature is devoted to approximate methods for solving nonlinear operator equations, and a detailed bibliography on this subject can be found in the books [26,27]. At the same time, discrete methods have mainly been considered, among which, first of all, we should note the methods of simple iteration and Newton-Kantorovich. The study of continuous analogues of the Newton-Kantorovich method began, apparently, with the article [28]. Later, continuous analogs of the Newton-Kantorovich method were widely used for solving numerous problems in physics [29,30].
Let us present several statements about continuous methods for solving operator equations, which will be used below when substantiating computational methods.
Consider the nonlinear operator equation
$$A(x) = f, \tag{3}$$
where $A(x)$ is a nonlinear operator mapping the Banach space $B$ into $B$.
Consider the Cauchy problem in a Banach space B
$$\frac{dx(t)}{dt} = A(x(t)) - f, \quad x(0) = x_0. \tag{4}$$
We assume that the operator A has a continuous Gateaux (Frechet) derivative.
Theorem 3
([19]). Let Equation (3) have a solution $x^*$, and let, on any differentiable curve $g(t)$ in the Banach space $B$, the inequality
$$\lim_{t \to \infty} \frac{1}{t} \int_0^t \Lambda(A'(g(\tau)))\, d\tau \le -\alpha_g, \quad \alpha_g > 0, \tag{5}$$
hold. Then, the solution of the Cauchy problem (4) converges to the solution $x^*$ of Equation (3) for any initial value $x(0) \in B$.
Theorem 4
([19]). Let Equation (3) have a solution $x^*$, and let the following conditions be satisfied on any differentiable curve $g(t)$ in the ball $B(x^*, r)$:
(1) for any $t$ $(t > 0)$, the inequality
$$\int_0^t \Lambda(A'(g(\tau)))\, d\tau \le 0 \tag{6}$$
holds;
(2) the following equality is true:
$$\lim_{t \to \infty} \frac{1}{t} \int_0^t \Lambda(A'(g(\tau)))\, d\tau = -\alpha_g, \quad \alpha_g > 0. \tag{7}$$
Then, the solution of the Cauchy problem (4) converges to the solution $x^*$ of Equation (3) for any initial value $x(0) \in B(x^*, r)$.
If the conditions (6) and (7) are not satisfied, it is necessary to transform Equation (3) and the Cauchy problem (4). For this purpose, we employ a symmetrizing version of the operator. The symmetrization is performed by applying the adjoint of the derivative operator to Equation (3) from the left. As a result, the derivative of the transformed operator becomes symmetric and non-negative. Let $A'(x)$ be the Gateaux (Frechet) derivative of the operator $A(x)$. We transform Equation (3) to the form
$$(A'(x))^* A(x) - (A'(x))^* f = 0, \tag{8}$$
where $(A'(x))^*$ is the operator adjoint to $A'(x)$.
Equation (8) is associated with the Cauchy problem
$$\frac{dx(t)}{dt} = -\left[ (A'(x))^* A(x) - (A'(x))^* f \right], \tag{9}$$
$$x(0) = x_0. \tag{10}$$
The following statement is valid.
Theorem 5.
Let Equation (8) have a unique solution $x^*$ in the ball $B(x^*, r)$, $r > 0$. Let the following conditions be satisfied for any differentiable curve $g(t)$ lying in the ball $B(x^*, r)$:
(1) for any $t$ $(t > 0)$, the inequality
$$\int_0^t \Lambda\left( -(A'(g(\tau)))^* (A'(g(\tau))) \right) d\tau \le 0 \tag{11}$$
holds;
(2) the inequality
$$\lim_{t \to \infty} \frac{1}{t} \int_0^t \Lambda\left( -(A'(g(\tau)))^* (A'(g(\tau))) \right) d\tau \le -\alpha_g, \quad \alpha_g > 0, \tag{12}$$
is valid.
Then, the solution of the Cauchy problem (9) and (10) converges to the solution $x^*$ of Equation (8) for any initial value $x(0) \in B(x^*, r)$.
The proof of the theorem is based on the sufficient condition for stability of solutions of differential equations in Banach spaces.
Let the following differential equation be given in a Banach space $B$:
$$\frac{dx(t)}{dt} = A(x(t)), \tag{13}$$
where the nonlinear operator $A(x)$ has a Gateaux derivative, $A(0) = 0$, and the spectrum of the operator $A'(0)$ lies in the left half-plane of the complex plane and on the imaginary axis.
Theorem 6
([31]). Let the following conditions be satisfied on any differentiable curve $\varphi(t)$ lying in the ball $B(0, \rho)$: (1) $\int_0^t \Lambda(A'(\varphi(\tau)))\, d\tau \le 0$; (2) $\lim_{t \to \infty} \frac{1}{t} \int_0^t \Lambda(A'(\varphi(\tau)))\, d\tau \le -\kappa_\varphi < 0$.
Then, the trivial solution of Equation (13) is asymptotically stable.
The validity of Theorem 5 follows from this statement. Indeed, the spectrum of the operator $-(A'(x(t)))^* (A'(x(t)))$ lies in the left half-plane of the complex plane and on the imaginary axis. The asymptotic stability of the solution of Equation (9) for any initial value in the ball $B(x^*, r)$ follows from Theorem 6. Thus, $\lim_{t \to \infty} x(t) = x^*$ and $\lim_{t \to \infty} \frac{dx(t)}{dt} = 0$. Therefore, the solution of Equation (9) converges to the solution $x^*$.
If conditions (11) and (12) are not met, it is necessary to use regularization. Consider the Cauchy problem
$$\frac{dx(t)}{dt} = -\gamma x(t) - \left[ (A'(x))^* A(x) - (A'(x))^* f \right], \quad x(t_0) = x_0. \tag{14}$$
Here, $\gamma > 0$ is a regularization parameter.
The Cauchy problem (14) has a solution for any initial value. Moreover, this solution satisfies the equation
$$\gamma x(t) + \left[ (A'(x))^* A(x) - (A'(x))^* f \right] = 0, \tag{15}$$
and does not satisfy Equation (8).
The continuous method for solving nonlinear equations has the following advantages over the standard Newton-Kantorovich method:
(1) The existence of an inverse operator to the Gateaux (Frechet) derivative of the nonlinear operator is not required;
(2) If the inequality $\int_0^t \Lambda(A'(g(\tau)))\, d\tau < 0$ holds on any differentiable curve $g(t)$, then the convergence of the method does not depend on the initial conditions.
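To make the construction concrete, here is a small sketch (our illustration, not the paper's implementation) of the continuous method for a two-dimensional equation $A(x) - f = 0$: the Cauchy problem (4) is integrated by the explicit Euler method. The particular map, the matrix and the step size are assumptions chosen so that the logarithmic norm of the derivative stays negative.

```python
import numpy as np

A = np.array([[-4.0, 1.0],
              [1.0, -3.0]])            # Lambda_2(A) = -2 < 0
f = np.array([1.0, -2.0])

def residual(x):
    # A(x) - f with a mild nonlinearity; the Jacobian A + 0.5 diag(sech^2)
    # keeps a negative cubic logarithmic norm, so Theorem 3 applies.
    return A @ x + 0.5 * np.tanh(x) - f

x = np.zeros(2)
h = 0.05
for _ in range(2000):                  # Euler integration of dx/dt = A(x) - f
    x = x + h * residual(x)
print(x, residual(x))                  # residual is ~0 at the limit point
```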

4. Representation of Functions of Multiple Variables on Hopfield Neural Networks

In this section we study the methods of representation for functions of multiple variables and for localization of the minimum (maximum) of such functions using an artificial neural network.
When constructing neural networks for solving numerous problems in physics and engineering, the problem of representing functions of multiple variables on neural networks arises. The issue is due to the fact that Hopfield networks compute linear and nonlinear functions of one dynamic variable. They also perform superposition and addition operations when networks are cascaded [3,5]. Therefore, algorithms for representing functions of multiple variables by superpositions of functions of one variable and the addition operation are of great interest. Such representations were obtained by V.I. Arnold and A.N. Kolmogorov [32,33], but they have a rather complicated form, and their application to designing neural networks is difficult. Thus, the exact representation of functions of multiple variables on neural networks is hardly possible, and approximate methods are of considerable interest.
An approximate method for function representation on neural networks is based on Stone's theorem [2]. The possibility of approximating functions of multiple variables to any degree of accuracy by superpositions and linear combinations of functions of one variable was studied in [2,4]. The important questions of approximation accuracy, and especially of the choice of the best basis, remain open. A number of research results and an extensive literature on the representation of continuous functions with neural networks are given in [34].
The second approach to an approximate representation of functions of multiple variables by functions of one variable uses various sweep methods [35]. However, the application of these methods in neural networks seems to be problematic. Therefore, it is of interest to develop an easy-to-implement and sufficiently accurate approximate method for representing functions of multiple variables in neural networks. One such method is presented below. Note that this method also allows us to find extreme values of functions of multiple variables.
Today, the problem of finding the extreme values of functions of multiple variables is widely studied. A detailed literature review on the subject can be found in [35].
Let $f(x_1, x_2) \in \tilde{C}(D)$, $D = [0, 2\pi]^2$. Let $p_1, p_2$ be prime numbers, $p_1 \ne p_2$.
Let us show that the extreme values of the function $f(x_1, x_2)$ are approximated to a high degree of accuracy by the extreme values of the function $f(p_1 t, p_2 t)$, $0 \le t \le 2\pi$. It is sufficient to consider one type of extremum; for definiteness, we consider the minimum. We assume that the function $f(x_1, x_2)$ satisfies the Lipschitz condition
$$|f(x_1', x_2') - f(x_1'', x_2'')| \le |x_1' - x_1''| + |x_2' - x_2''|, \quad (x_1', x_2'), (x_1'', x_2'') \in D. \tag{16}$$
Without loss of generality, we assume that the function $f(x_1, x_2)$ has a unique minimum in $D$, attained at $(x_1^*, x_2^*)$. Let $\varphi(t) = f(p_1 t, p_2 t)$ attain its minimum at $t^*$.
Let $\bar{t}$ denote the residue of $t$ modulo $2\pi$. The points $(\overline{p_1 t}, \overline{p_2 t})$, $0 \le t \le 2\pi$, form a set of parallel line segments in $D$, which, for $p_2 > p_1$, intersect the segment $\{0 \le x \le 2\pi,\ y = 2\pi\}$ at the points $2\pi(k p_1 - j p_2)/p_2$, $k = \overline{1, p_2}$, $j = \overline{0, p_1 - 1}$. Obviously, the distance between adjacent points of this set does not exceed $2\pi |p_1 - p_2| / p_2$.
Let $\rho = \pi |p_1 - p_2| / p_2$. We want to prove that $|\varphi(t^*) - f(x_1^*, x_2^*)| \le \rho$.
Let $\bar{t}$ $(0 \le \bar{t} \le 2\pi)$ be a value of $t$ such that $\overline{p_2 \bar{t}} = x_2^*$ and $\overline{p_1 \bar{t}}$ minimizes $|\overline{p_1 t} - x_1^*|$ over $0 \le t \le 2\pi$. From (16), it follows that $|f(x_1^*, x_2^*) - \varphi(\bar{t})| = |f(x_1^*, x_2^*) - f(\overline{p_1 \bar{t}}, x_2^*)| \le |x_1^* - \overline{p_1 \bar{t}}| \le \rho$.
Thus, it has been shown that the extremum of $f(x_1, x_2)$ is localized within a $\rho$-neighborhood of the point $(\overline{p_1 t^*}, \overline{p_2 t^*})$.
Take several pairs of prime numbers $(p_{1i}, p_{2i})$, $i = 1, 2, \ldots, n$. Find the minima $\psi_i(t_i^*)$ of the functions $\psi_i(t) = f(p_{1i} t, p_{2i} t)$. Construct the $\rho_i = \pi |p_{1i} - p_{2i}| / p_{2i}$ neighborhoods $\Omega_i$, $i = 1, 2, \ldots, n$, of the points $(x_{1i}, x_{2i})$, $x_{1i} = \overline{p_{1i} t_i^*}$, $x_{2i} = \overline{p_{2i} t_i^*}$, $i = 1, 2, \ldots, n$.
The extreme point ( x 1 * , x 2 * ) is at the intersection of the neighborhoods.
This way, the problem of finding the extreme values of functions of multiple variables is reduced to the problem of finding the extreme values of functions of one variable. Numerous works are devoted to the solution of the latter problem on artificial neural networks (see [3,5]).
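A minimal sketch of this reduction (ours, with an assumed test function): the minimum of a $2\pi$-periodic function of two variables is localized by minimizing the one-variable trace $f(p_1 t, p_2 t)$ on $[0, 2\pi]$.

```python
import numpy as np

def f(x1, x2):
    return 2.0 - np.cos(x1 - 1.0) - np.cos(x2 - 2.0)  # minimum 0 at (1, 2)

p1, p2 = 29, 31                          # distinct primes; rho = pi*|p1 - p2|/p2
t = np.linspace(0.0, 2.0 * np.pi, 200001)
phi = f(p1 * t, p2 * t)                  # one-dimensional trace of f
t_star = t[np.argmin(phi)]
x1 = (p1 * t_star) % (2.0 * np.pi)       # candidate minimizer, accurate to ~rho
x2 = (p2 * t_star) % (2.0 * np.pi)
print(x1, x2, phi.min())
```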

5. Multiple Integrals’ Evaluations on Hopfield Neural Networks

This section is devoted to the evaluation of multiple integrals of periodic functions, $\int_0^{2\pi} \int_0^{2\pi} f(x_1, x_2)\, dx_1\, dx_2$, on Hopfield networks. The suggested method applies to integrals of any finite dimension; without loss of generality, we consider two-dimensional integrals. The method for evaluating integrals of functions of multiple variables $f(x_1, x_2, \ldots, x_l)$ was presented in [36]. There, the functions were expanded in Fourier series, and an accuracy estimate of the method on certain classes of functions was given.
The method is based on the following statement ([36,37]).
Lemma 1.
Let $f(x_1, x_2)$ be a continuous $2\pi$-periodic function with respect to each variable, let $N$ be a natural number, and let $q_1, q_2$ be prime, $q_1 \ne q_2$, $q_1, q_2 \ge 2N$. Then,
$$\int_0^{2\pi} \int_0^{2\pi} f(x_1, x_2)\, dx_1\, dx_2 = 2\pi \int_0^{2\pi} f(q_1 t, q_2 t)\, dt + R_N(f), \tag{17}$$
where
$$|R_N(f)| \le C E_{N,N}(f),$$
and $E_{N,N}(f)$ is the best approximation of $f(x_1, x_2)$ by trigonometric polynomials of degree $N$ with respect to each variable.
Here and below, $C$ stands for constants that do not depend on $N$; the class of functions $E^\alpha_l$ used in the estimate below is defined after formula (19).
Evaluating the integral on the right-hand side of (17) with the rectangle quadrature formula, we have
$$\int_0^{2\pi} \int_0^{2\pi} f(x_1, x_2)\, dx_1\, dx_2 = \frac{4\pi^2}{N} \sum_{k=0}^{N-1} f\left( \frac{2 k q_1 \pi}{N}, \frac{2 k q_2 \pi}{N} \right) + R_N(f), \tag{18}$$
where $q_1, q_2$ are prime numbers, $q_1, q_2 = O(N^{1/(1+2\alpha)})$.
The following estimate holds:
$$R_N[E^\alpha_2] = O(N^{-2\alpha/(1+2\alpha)}). \tag{19}$$
Recall the definition of the class of functions $E^\alpha_l$. Let the function $f(x_1, x_2, \ldots, x_l)$ be continuous in the $l$-dimensional cube $G^l$ defined by the inequalities $0 \le x_v \le 2\pi$ $(v = 1, 2, \ldots, l)$ and periodic with period $2\pi$ in each variable $x_1, x_2, \ldots, x_l$. Let $c(m_1, \ldots, m_l)$ stand for the Fourier coefficients of the function. We also introduce $\bar{m}_v$ defined as $\bar{m}_v = 1$ for $m_v = 0$ and $\bar{m}_v = |m_v|$ for $m_v \ne 0$. We say that $f(x_1, x_2, \ldots, x_l)$ belongs to the class $E^\alpha_l(A)$ if $|c(m_1, \ldots, m_l)| \le A \left( \bar{m}_1 \bar{m}_2 \cdots \bar{m}_l \right)^{-\alpha}$, where $\alpha > 1/2$ and the constant $A$ does not depend on $m_1, m_2, \ldots, m_l$.
One can evaluate integrals of periodic functions of multiple variables on Hopfield networks by formula (17). To do so, it is enough to solve the Cauchy problem
$$\frac{dy(t)}{dt} = f(q_1 t, q_2 t), \quad y(0) = 0.$$
Then,
$$y(2\pi) = \frac{1}{2\pi} \left( \int_0^{2\pi} \int_0^{2\pi} f(x_1, x_2)\, dx_1\, dx_2 - R_N(f) \right).$$
This method for integral evaluation can be used to calculate the Fourier coefficients of multiple variable functions on the basis of trigonometric functions.
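For illustration, a sketch (ours) of formula (18): the double integral of a $2\pi$-periodic function is approximated by a one-dimensional sum along the line $(q_1 t, q_2 t)$. The integrand and the primes $q_1, q_2$ are assumptions.

```python
import numpy as np

def f(x1, x2):
    # Trigonometric polynomial; exact integral over [0, 2*pi]^2 is 8*pi^2.
    return np.sin(x1) ** 2 + np.cos(x2) ** 2 + 1.0

N = 10007
q1, q2 = 101, 103                       # distinct primes
k = np.arange(N)
approx = (4.0 * np.pi ** 2 / N) * np.sum(
    f(2.0 * np.pi * k * q1 / N, 2.0 * np.pi * k * q2 / N))
print(approx, 8.0 * np.pi ** 2)         # the two values agree
```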

6. Approximate Solution of Systems of Algebraic Equations on Hopfield Neural Networks

Consider the system of linear algebraic equations
$$Ax = b, \tag{20}$$
where $A = \{a_{ij}\}$, $i, j = 1, 2, \ldots, n$, $x = (x_1, \ldots, x_n)^T$, $b = (b_1, \ldots, b_n)^T$.
Let the logarithmic norm Λ ( A ) of matrix A be negative.
We associate the system of algebraic Equations (20) with the Cauchy problem
$$\frac{dx(t)}{dt} = Ax(t) - b, \quad x(0) = 0. \tag{21}$$
From the results presented above, it follows that, as $t \to \infty$, the solution of the Cauchy problem (21) converges to the solution $x^* = A^{-1} b$ of the system of algebraic Equations (20) for any initial value $x(0)$.
Thus, by modeling the Cauchy problem (21) on Hopfield neural networks for sufficiently large values of $t$, one obtains a good approximation to the solution of the system (20). It is easy to see that the solution is stable with respect to perturbations of the initial values of the Cauchy problem (21). It is also stable with respect to perturbations of the coefficients and right-hand sides of Equation (20). The proof of this statement and the corresponding theorems can be found in [38].
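A short sketch (ours) of this procedure for a concrete system: the matrix below has a negative cubic logarithmic norm, so the Euler iterations for the Cauchy problem (21) converge to $A^{-1}b$ from the zero initial state.

```python
import numpy as np

A = np.array([[-5.0, 1.0, 0.0],
              [1.0, -4.0, 1.0],
              [0.0, 1.0, -3.0]])        # Lambda_2(A) < 0 (assumed example)
b = np.array([1.0, 0.0, -1.0])

x = np.zeros(3)
h = 0.05
for _ in range(3000):                   # Euler steps for dx/dt = A x - b
    x = x + h * (A @ x - b)
print(x)                                # approximates the solution of (20)
print(np.linalg.solve(A, b))            # reference solution
```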
Consider the application of the continuous operator method for solving systems of nonlinear algebraic equations on Hopfield neural networks. For generality, we assume operator equations in Banach spaces.
Consider the nonlinear operator equation
$$A(x) - f = 0, \tag{22}$$
where $A(x)$ is a nonlinear operator acting from the Banach space $X$ into $X$.
Let the Equation (22) have an isolated solution x * . Let us consider the Cauchy problem
$$\frac{dx(t)}{dt} = A(x(t)) - f, \quad x(0) = x_0. \tag{23}$$
By Theorem 4, if for any smooth curve $g(t)$ lying in the ball $B(x^*, r)$ of the Banach space $B$, the inequalities
$$\int_0^t \Lambda(A'(g(\tau)))\, d\tau \le 0, \quad \lim_{t \to \infty} \frac{1}{t} \int_0^t \Lambda(A'(g(\tau)))\, d\tau = -\alpha, \quad \alpha > 0, \tag{24}$$
hold, then $\lim_{t \to \infty} x(t) = x^*$.
Remark 1.
Examples of implementations can be found in [38].

7. Approximate Solutions for Fredholm Integral Equations on Hopfield Neural Networks

In this section, we solve Fredholm integral equations on Hopfield neural networks. For demonstration, we use the one-dimensional integral equation of the second kind
$$x(t) = \int_0^1 h(t, \tau, x(\tau))\, d\tau + f(t) \tag{25}$$
with a continuous kernel on the right-hand side.
Weakly singular integral equations and multidimensional integral equations are treated similarly.
Let Equation (25) have an isolated solution $x^*(t)$ in $B(x^*, r) \subset C[0, 1]$.
An approximate solution $x_N(t) = (x_N(t_1), \ldots, x_N(t_N))$ of (25) is obtained from the system of equations
$$x_N(t_k) = \sum_{l=1}^{N} \alpha_l h(t_k, t_l, x_N(t_l)) + f(t_k), \quad k = 1, 2, \ldots, N, \tag{26}$$
where $\alpha_l$ and $t_l \in [0, 1]$, $l = 1, 2, \ldots, N$, are the coefficients and nodes of the quadrature formula
$$\int_0^1 g(t)\, dt = \sum_{l=1}^{N} \alpha_l g(t_l) + R_N(g). \tag{27}$$
One can take, for example, $t_k = k/N$, $k = 1, 2, \ldots, N$.
In [27] (Theorem 19.5, Chapter 4), solvability conditions for the system (26) were presented, and the convergence of the approximate solutions $x_N^*(t)$ of (26) to the exact solution $x^*(t)$ of (25) at the nodes $t_l$, $l = 1, 2, \ldots, N$, was demonstrated.
We impose the following condition on the function $h(t, \tau, u)$: at any interior point of $B(x^*, r)$ in the space $C[0, 1]$, the derivatives $h'_3(t_k, t_l, u)$, $k, l = 1, 2, \ldots, N$, exist. Here, $h'_3(t, \tau, u)$ stands for the partial derivative with respect to the third variable.
Consider the matrix $C(u) = \{c_{ij}(u)\}$, $i, j = 1, 2, \ldots, N$, where $c_{ii}(u) = -1 + \alpha_i h'_3(t_i, t_i, u)$, $i = 1, 2, \ldots, N$, and $c_{ij}(u) = \alpha_j h'_3(t_i, t_j, u)$, $i, j = 1, 2, \ldots, N$, $i \ne j$.
It follows from Theorem 4 that if $\Lambda(C(u)) < 0$ for $u \in B(x^*, r)$, then the solution of the system of differential equations
$$\frac{dz_k(t)}{dt} = -z_k(t) + \sum_{l=1}^{N} \alpha_l h(t_k, t_l, z_l(t)) + f(t_k), \quad k = 1, 2, \ldots, N,$$
converges, as $t \to \infty$, to the solution $x_N^*$ of the system of Equations (26) at the nodes $t_k$, $k = 1, 2, \ldots, N$.
We illustrate the method described above with solving the following equation:
$$x(t) - \frac{1}{4} \int_{-1}^{1} (t^2 \tau + t \tau^2)\, x(\tau)\, d\tau = t - \frac{t^2}{6}. \tag{28}$$
The exact solution of the equation reads $x(t) = t$. Equation (28) has been approximated by a system of 10 ordinary differential equations, solved using Euler's method with a time step $h = 0.1$. In Table 1, we show the numerical error as a function of the number of iterations.
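The computation just described can be reproduced with the following sketch (ours; the midpoint quadrature nodes are an assumption, and the reported error includes the discretization error of the quadrature):

```python
import numpy as np

N = 10
t = -1.0 + (2.0 * np.arange(N) + 1.0) / N      # midpoint nodes on [-1, 1]
w = 2.0 / N                                    # quadrature weights
f = t - t ** 2 / 6.0
K = 0.25 * (np.outer(t ** 2, t) + np.outer(t, t ** 2))   # kernel values

def rhs(z):
    # dz_k/dt = -z_k + sum_l w K[k, l] z_l + f(t_k), cf. the system above
    return -z + w * (K @ z) + f

z = np.zeros(N)
h = 0.1
for _ in range(500):                           # Euler steps
    z = z + h * rhs(z)
print(np.abs(z - t).max())                     # deviation from x(t) = t
```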

8. Approximate Solution of Linear Hypersingular Integral Equations on Hopfield Neural Networks

Let us recall the Hadamard definition of hypersingular integrals [39]. The integral of the type
$$\int_a^b \frac{A(x)\, dx}{(b - x)^{p + \alpha}}$$
for an integer $p$ and $0 < \alpha < 1$ is defined as the limit of the sum
$$\int_a^x \frac{A(t)\, dt}{(b - t)^{p + \alpha}} + \frac{B(x)}{(b - x)^{p + \alpha - 1}}$$
as $x \to b$, if one assumes that $A(x)$ has $p$ derivatives in a neighborhood of the point $b$. Here, $B(x)$ is any function that satisfies the following two conditions:
(i) the above limit exists;
(ii) $B(x)$ has at least $p$ derivatives in the neighborhood of the point $x = b$.
The value of the limit in condition (i) is not affected by the arbitrary choice of $B(x)$. Condition (ii) fixes the values of the first $(p - 1)$ derivatives of $B(x)$ at the point $b$, so that the arbitrary additional term in the numerator is an infinitely small quantity of order at least $(b - x)^p$.
Chikin in [40] introduced the definition of the Cauchy–Hadamard-type integral that generalizes the notion of the singular integral in the Cauchy principal sense and in the Hadamard sense.
The Cauchy–Hadamard principal value of the integral
$$\int_a^b \frac{\varphi(\tau)\, d\tau}{(\tau - c)^p}, \quad a < c < b, \tag{29}$$
is defined as the limit of the following expression
$$\int_a^b \frac{\varphi(\tau)\, d\tau}{(\tau - c)^p} = \lim_{v \to 0} \left[ \int_a^{c - v} \frac{\varphi(\tau)\, d\tau}{(\tau - c)^p} + \int_{c + v}^{b} \frac{\varphi(\tau)\, d\tau}{(\tau - c)^p} + \frac{\xi(v)}{v^{p-1}} \right], \tag{30}$$
where ξ ( v ) is a function constructed so that the limit exists.
Consider the one-dimensional linear hypersingular integral equation
$$Kx \equiv a(t)\, x(t) + b(t) \int_{-1}^{1} \frac{x(\tau)\, d\tau}{(\tau - t)^p} + \int_{-1}^{1} h(t, \tau)\, x(\tau)\, d\tau = f(t), \quad p = 2, 4, \ldots. \tag{31}$$
We impose the following conditions on the coefficients and the right-hand side of Equation (31):
(1) $b(t) \ne 0$, $t \in [-1, 1]$;
(2) $a(t), b(t), f(t) \in W^r(1)$, $h(t, \tau) \in W^{r,r}(1)$, $r \ge p$;
(3) Equation (31) is uniquely solvable and its solution $x^*(t) \in W^r(M)$, $M = \mathrm{const}$.
Introduce the nodes $t_k = -1 + 2k/N$, $k = 0, 1, \ldots, N$, and $\bar{t}_k = t_k + 1/N$, $k = 0, 1, \ldots, N - 1$. Let $\Delta_k$ be the intervals $\Delta_k = [t_k, t_{k+1})$, $k = 0, 1, \ldots, N - 2$, $\Delta_{N-1} = [t_{N-1}, t_N]$.
An approximate solution of (31) is sought in the form of the piecewise constant function
$$x_N(t) = \sum_{k=0}^{N-1} \alpha_k \psi_k(t), \quad \psi_k(t) = \begin{cases} 1, & t \in \Delta_k, \\ 0, & t \notin \Delta_k. \end{cases} \tag{32}$$
The values $\{\alpha_k\}$, $k = 0, 1, \ldots, N - 1$, are determined from the system of linear algebraic equations
$$a(\bar{t}_k)\, \alpha_k + b(\bar{t}_k) \sum_{l=0}^{N-1} \alpha_l \int_{\Delta_l} \frac{d\tau}{(\tau - \bar{t}_k)^p} + \sum_{l=0}^{N-1} \alpha_l \int_{\Delta_l} h(\bar{t}_k, \tau)\, d\tau = f(\bar{t}_k), \quad k = 0, 1, \ldots, N - 1. \tag{33}$$
The following statement is true.
Theorem 7
([38,41]). Let the following conditions be satisfied: (1) Equation (31) has the unique solution $x^*(t) \in W^{p-1}([-1, 1], M)$; (2) the inequality $|b(t)| \ge b > 0$ holds for $t \in (-1, 1)$; (3) the function $h(t, \tau)$ satisfies the Lipschitz condition with respect to the second variable. Then, for sufficiently large $N$, the system of Equations (33) has the unique solution $x_N^*(t)$.
The conditions of Theorem 7 are sufficient for the system of Equations (33) to be solved on Hopfield neural networks.
System (33) should be represented in the form
$$\operatorname{sgn}(b(\bar{t}_k)) \left[ a(\bar{t}_k)\, \alpha_k + b(\bar{t}_k) \sum_{l=0}^{N-1} \alpha_l \int_{\Delta_l} \frac{d\tau}{(\tau - \bar{t}_k)^p} + \sum_{l=0}^{N-1} \alpha_l \int_{\Delta_l} h(\bar{t}_k, \tau)\, d\tau - f(\bar{t}_k) \right] = 0, \tag{34}$$
$k = 0, 1, \ldots, N - 1$.
When the conditions given in Section 2 are met, the solution of the system
$$\frac{d\alpha_k(t)}{dt} = \operatorname{sgn}(b(\bar{t}_k)) \left[ a(\bar{t}_k)\, \alpha_k(t) + b(\bar{t}_k) \sum_{l=0}^{N-1} \alpha_l(t) \int_{\Delta_l} \frac{d\tau}{(\tau - \bar{t}_k)^p} + \sum_{l=0}^{N-1} \alpha_l(t) \int_{\Delta_l} h(\bar{t}_k, \tau)\, d\tau - f(\bar{t}_k) \right], \tag{35}$$
$k = 0, 1, \ldots, N - 1$, as $t \to \infty$, converges to the solution of the system (34) for any initial value.
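The following sketch (ours) illustrates the scheme (33)-(35) for $p = 2$ in the simplest setting $a(t) \equiv 0$, $b(t) \equiv 1$, $h(t, \tau) \equiv 0$, with a manufactured right-hand side; the finite-part integrals over the cells are available in closed form.

```python
import numpy as np

N = 40
t = -1.0 + 2.0 * np.arange(N + 1) / N        # cell endpoints t_k
tb = t[:-1] + 1.0 / N                        # collocation midpoints

# int_{Delta_l} d tau / (tau - tb_k)^2 = 1/(t_l - tb_k) - 1/(t_{l+1} - tb_k);
# on the singular cell l = k this closed form equals the Hadamard value -2N.
B = 1.0 / (t[None, :-1] - tb[:, None]) - 1.0 / (t[None, 1:] - tb[:, None])

x_exact = 1.0 - tb ** 2                      # assumed solution
f = B @ x_exact                              # manufactured right-hand side

alpha = np.zeros(N)
h = 0.2 / N                                  # Euler step; here sgn(b) = +1
for _ in range(20000):                       # integrate (35)
    alpha = alpha + h * (B @ alpha - f)
print(np.abs(alpha - x_exact).max())         # converges to machine precision
```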
Note that the conditions of Theorem 7 guarantee just a unique solvability for the system (33). To prove the convergence of the approximate solution of Equation (31) to its exact solution, one has to construct a more complicated algorithm.
In doing so, let $p = 2$. Divide the interval $[-1, 1]$ into $2N$ subintervals at the points $t_k = -1 + k/N$, $k = 0, 1, \ldots, 2N$. We seek an approximate solution of (31) in the form of the piecewise continuous function
$$x_N(t) = \sum_{k=0}^{2N} \alpha_k \varphi_k(t), \tag{36}$$
where φ k ( t ) , k = 0 , 1 , , 2 N , is a family of basis functions.
For the nodes $t_k$, $k = 1, \ldots, 2N - 1$, the corresponding basis elements are defined by
$$\varphi_k(t) = \begin{cases} 0, & t_{k-1} \le t \le t_{k-1} + \frac{1}{N^2}, \\[2pt] \frac{N^2}{N - 2} \left( t - t_{k-1} - \frac{1}{N^2} \right), & t_{k-1} + \frac{1}{N^2} \le t \le t_k - \frac{1}{N^2}, \\[2pt] 1, & t_k - \frac{1}{N^2} \le t \le t_k + \frac{1}{N^2}, \\[2pt] -\frac{N^2}{N - 2} \left( t - t_{k+1} + \frac{1}{N^2} \right), & t_k + \frac{1}{N^2} \le t \le t_{k+1} - \frac{1}{N^2}, \\[2pt] 0, & t_{k+1} - \frac{1}{N^2} \le t \le t_{k+1}, \\[2pt] 0, & t \in [-1, 1] \setminus [t_{k-1}, t_{k+1}]. \end{cases} \tag{37}$$
For the boundary nodes $t_k$, $k = 0$ and $k = 2N$, the corresponding basis elements are defined as
$$\varphi_0(t) = \begin{cases} 1, & -1 \le t \le -1 + \frac{1}{N^2}, \\[2pt] -\frac{N^2}{N - 2} \left( t - t_1 + \frac{1}{N^2} \right), & -1 + \frac{1}{N^2} \le t \le t_1 - \frac{1}{N^2}, \\[2pt] 0, & t_1 - \frac{1}{N^2} \le t \le t_1, \\[2pt] 0, & t \in [-1, 1] \setminus [t_0, t_1]; \end{cases} \tag{38}$$
and
$$\varphi_{2N}(t) = \begin{cases} 0, & -1 \le t \le t_{2N-1} + \frac{1}{N^2}, \\[2pt] \frac{N^2}{N - 2} \left( t - t_{2N-1} - \frac{1}{N^2} \right), & t_{2N-1} + \frac{1}{N^2} \le t \le 1 - \frac{1}{N^2}, \\[2pt] 1, & 1 - \frac{1}{N^2} \le t \le 1. \end{cases} \tag{39}$$
The coefficients $\alpha_k$ in (36) are determined from the following system of linear algebraic equations, obtained by approximating the kernel $h(t, \tau)$ with a polygon and applying the collocation method:
$$a(t_k)\, \alpha_k + \sum_{l=0}^{2N} h(t_k, t_l)\, \alpha_l \int_{-1}^{1} \frac{\varphi_l(\tau)}{(\tau - t_k)^2}\, d\tau = f(t_k), \quad k = 0, 1, \ldots, 2N. \tag{40}$$
The system (40) can be rewritten as
$$a(t_k)\, \alpha_k - h(t_k, t_k) \frac{2 N^2 \ln(N-1)}{N-2}\, \alpha_k + \alpha_0 h(t_k, t_0) \int_{t_0}^{t_1} \frac{\varphi_0(\tau)\, d\tau}{(\tau - t_k)^2} + \sum_{l=1}^{2N-1}{}' \alpha_l h(t_k, t_l) \int_{t_{l-1}}^{t_{l+1}} \frac{\varphi_l(\tau)\, d\tau}{(\tau - t_k)^2} + \alpha_{2N} h(t_k, t_{2N}) \int_{t_{2N-1}}^{1} \frac{\varphi_{2N}(\tau)\, d\tau}{(\tau - t_k)^2} = f(t_k), \quad k = 1, \ldots, 2N-1,$$
$$a(t_0)\, \alpha_0 - h(t_0, t_0) \frac{N^2 \ln(N-1)}{N-2}\, \alpha_0 + \sum_{l=1}^{2N-1} \alpha_l h(t_0, t_l) \int_{t_{l-1}}^{t_{l+1}} \frac{\varphi_l(\tau)\, d\tau}{(\tau + 1)^2} + \alpha_{2N} h(t_0, t_{2N}) \int_{t_{2N-1}}^{1} \frac{\varphi_{2N}(\tau)\, d\tau}{(\tau + 1)^2} = f(t_0),$$
$$a(t_{2N})\, \alpha_{2N} - h(t_{2N}, t_{2N}) \frac{N^2 \ln(N-1)}{N-2}\, \alpha_{2N} + \sum_{l=1}^{2N-1} \alpha_l h(t_{2N}, t_l) \int_{t_{l-1}}^{t_{l+1}} \frac{\varphi_l(\tau)\, d\tau}{(\tau - 1)^2} + \alpha_0 h(t_{2N}, t_0) \int_{-1}^{t_1} \frac{\varphi_0(\tau)\, d\tau}{(\tau - 1)^2} = f(t_{2N}). \tag{41}$$
Here, the prime on the summation sign indicates summation over $l \ne k$. The system (41) is equivalent to the system
$$\operatorname{sgn}(h(t_k, t_k)) \left[ a(t_k)\, \alpha_k - h(t_k, t_k) \frac{2 N^2 \ln(N-1)}{N-2}\, \alpha_k + \alpha_0 h(t_k, t_0) \int_{t_0}^{t_1} \frac{\varphi_0(\tau)\, d\tau}{(\tau - t_k)^2} + \sum_{l=1}^{2N-1}{}' \alpha_l h(t_k, t_l) \int_{t_{l-1}}^{t_{l+1}} \frac{\varphi_l(\tau)\, d\tau}{(\tau - t_k)^2} + \alpha_{2N} h(t_k, t_{2N}) \int_{t_{2N-1}}^{1} \frac{\varphi_{2N}(\tau)\, d\tau}{(\tau - t_k)^2} \right] = \operatorname{sgn}(h(t_k, t_k))\, f(t_k), \quad k = 1, \ldots, 2N-1,$$
$$\operatorname{sgn}(h(t_0, t_0)) \left[ a(t_0)\, \alpha_0 - h(t_0, t_0) \frac{N^2 \ln(N-1)}{N-2}\, \alpha_0 + \sum_{l=1}^{2N-1} \alpha_l h(t_0, t_l) \int_{t_{l-1}}^{t_{l+1}} \frac{\varphi_l(\tau)\, d\tau}{(\tau + 1)^2} + \alpha_{2N} h(t_0, t_{2N}) \int_{t_{2N-1}}^{1} \frac{\varphi_{2N}(\tau)\, d\tau}{(\tau + 1)^2} \right] = \operatorname{sgn}(h(t_0, t_0))\, f(t_0),$$
$$\operatorname{sgn}(h(t_{2N}, t_{2N})) \left[ a(t_{2N})\, \alpha_{2N} - h(t_{2N}, t_{2N}) \frac{N^2 \ln(N-1)}{N-2}\, \alpha_{2N} + \sum_{l=1}^{2N-1} \alpha_l h(t_{2N}, t_l) \int_{t_{l-1}}^{t_{l+1}} \frac{\varphi_l(\tau)\, d\tau}{(\tau - 1)^2} + \alpha_0 h(t_{2N}, t_0) \int_{-1}^{t_1} \frac{\varphi_0(\tau)\, d\tau}{(\tau - 1)^2} \right] = \operatorname{sgn}(h(t_{2N}, t_{2N}))\, f(t_{2N}). \tag{42}$$
The system (42) can be written in the matrix form
$$DX = F,$$
where $D = \{d_{kl}\}$, $k, l = 0, 1, \ldots, 2N$, $X = (x_0, x_1, \ldots, x_{2N})^T$, $F = (f_0, f_1, \ldots, f_{2N})^T$. The values of $\{d_{kl}\}$, $\{x_k\}$ and $\{f_k\}$ are obtained by matching with the corresponding terms in (42).
The cubic logarithmic norm of the matrix D is estimated as
$$\Lambda_2(D) = \max\Bigg\{ \max_{1 \le k \le 2N-1} \bigg( d_{kk} + \sum_{l=1}^{2N-1}{}' |h(t_k, t_l)| \int_{t_{l-1}}^{t_{l+1}} \frac{\varphi_l(\tau)\, d\tau}{(\tau - t_k)^2} + |h(t_k, t_0)| \int_{-1}^{t_1} \frac{\varphi_0(\tau)\, d\tau}{(\tau - t_k)^2} + |h(t_k, t_{2N})| \int_{t_{2N-1}}^{1} \frac{\varphi_{2N}(\tau)\, d\tau}{(\tau - t_k)^2} \bigg),$$
$$d_{00} + \sum_{l=1}^{2N-1} |h(t_0, t_l)| \int_{t_{l-1}}^{t_{l+1}} \frac{\varphi_l(\tau)\, d\tau}{(\tau + 1)^2} + |h(t_0, t_{2N})| \int_{t_{2N-1}}^{1} \frac{\varphi_{2N}(\tau)\, d\tau}{(\tau + 1)^2},$$
$$d_{2N,2N} + \sum_{l=1}^{2N-1} |h(t_{2N}, t_l)| \int_{t_{l-1}}^{t_{l+1}} \frac{\varphi_l(\tau)\, d\tau}{(\tau - 1)^2} + |h(t_{2N}, t_0)| \int_{-1}^{t_1} \frac{\varphi_0(\tau)\, d\tau}{(\tau - 1)^2} \Bigg\}.$$
If $\Lambda_2(D) < 0$, then by Theorem 1 the system $DX = F$ has a unique solution and $\|D^{-1}\| \le 1 / |\Lambda_2(D)|$. It is obvious that the corresponding function $x_N^*(t)$ is a solution of the system of Equation (41).
Let $x^*(t)$ and $x_N^*(t)$ be the solutions of Equation (31) and of the system (40), respectively.
Theorem 8.
Let the following conditions be satisfied:
  • $p = 2$;
  • Equation (31) has the unique solution $x^*(t) \in W^2(M)$, $M = \mathrm{const}$;
  • for all $t \in [-1, 1]$, it holds that $h(t, t) \ne 0$;
  • $$\max\Bigg\{ \max_{1 \le k \le 2N-1} \bigg( d_{kk} + \sum_{l=1}^{2N-1}{}' |h(t_k, t_l)| \int_{t_{l-1}}^{t_{l+1}} \frac{\varphi_l(\tau)\, d\tau}{(\tau - t_k)^2} + |h(t_k, t_0)| \int_{-1}^{t_1} \frac{\varphi_0(\tau)\, d\tau}{(\tau - t_k)^2} + |h(t_k, t_{2N})| \int_{t_{2N-1}}^{1} \frac{\varphi_{2N}(\tau)\, d\tau}{(\tau - t_k)^2} \bigg),\ d_{00} + \sum_{l=1}^{2N-1} |h(t_0, t_l)| \int_{t_{l-1}}^{t_{l+1}} \frac{\varphi_l(\tau)\, d\tau}{(\tau + 1)^2} + |h(t_0, t_{2N})| \int_{t_{2N-1}}^{1} \frac{\varphi_{2N}(\tau)\, d\tau}{(\tau + 1)^2},\ d_{2N,2N} + \sum_{l=1}^{2N-1} |h(t_{2N}, t_l)| \int_{t_{l-1}}^{t_{l+1}} \frac{\varphi_l(\tau)\, d\tau}{(\tau - 1)^2} + |h(t_{2N}, t_0)| \int_{-1}^{t_1} \frac{\varphi_0(\tau)\, d\tau}{(\tau - 1)^2} \Bigg\} < 0,$$ where the prime indicates summation over $l \ne k$.
Then, the system of Equation (40) has a unique solution $x_N^*(t)$, and the following estimate holds: $\|x^* - x_N^*\|_2 \le C N^{-1} \ln N$.
The system of Equation (35) can be solved by any numerical method. Examples of solving linear hypersingular integral equations by the continuous operator method are given in [38].
Let us now study approximate methods for solving nonlinear hypersingular integral equations
$$a(t, x(t)) + \int_{-1}^{1} \frac{h(t, \tau, x(\tau))\, d\tau}{(\tau - t)^p} = f(t), \quad p = 2, 4, 6, \ldots. \tag{43}$$
Consider Equation (43) with p = 2 . An approximate solution of (43) is sought in the form of a piecewise constant function (32), in which coefficients are determined from the system of nonlinear algebraic equations
$$a(\bar{t}_k, \alpha_k) + \sum_{l=0}^{N-1} \int_{\Delta_l} \frac{h(\bar{t}_k, \bar{t}_l, \alpha_l)}{(\tau - \bar{t}_k)^2}\, d\tau = f(\bar{t}_k), \quad k = 0, 1, \ldots, N - 1, \tag{44}$$
where $\bar{t}_k = -1 + (2k + 1)/N$, $k = 0, 1, \ldots, N - 1$.
Write Equation (44) in the operator form $K_N x_N = F_N$, where $x_N = (\alpha_0, \ldots, \alpha_{N-1})^T$, $F_N = (f(\bar{t}_0), \ldots, f(\bar{t}_{N-1}))^T$, and $K_N$ is the corresponding nonlinear operator acting in $R^N$.
We assume that Equation (44) has a solution $x^*$ in a ball $B(x^*, R)$ of the space $R^N$ and that, for any differentiable curve $g(t) \subset B(x^*, R)$, the inequality $\int_0^t \Lambda(K_N'(g(\tau)))\, d\tau < 0$ holds.
Then, the solution of the system of differential equations
$$\frac{d\alpha_k(t)}{dt} = -\operatorname{sgn}\left( a'_2(\bar{t}_k, \alpha_k(t)) + \int_{\Delta_k} \frac{h'_3(\bar{t}_k, \bar{t}_k, \alpha_k(t))}{(\tau - \bar{t}_k)^2}\, d\tau \right) \left( a(\bar{t}_k, \alpha_k(t)) + \sum_{l=0}^{N-1} \int_{\Delta_l} \frac{h(\bar{t}_k, \bar{t}_l, \alpha_l(t))}{(\tau - \bar{t}_k)^2}\, d\tau - f(\bar{t}_k) \right),$$
$k = 0, 1, \ldots, N - 1$, where $a'_2(t, u)$ stands for the partial derivative of $a(t, u)$ with respect to the second variable, converges to the solution $x_N^* = (\alpha_0^*, \ldots, \alpha_{N-1}^*)$ of the system of Equations (44).
Remark 2.
If p is odd, the algorithms proposed and justified in [42] should be used as a basis for constructing the computational process.
Remark 3.
The study in this section is based on numerical results provided in [38].

9. Continuous Method for Solving Gravity Exploration Problems

Introduce a Cartesian rectangular coordinate system with the $Oz$ axis pointing downwards.
If an ore body is located at the depth $H$, its lower surface coincides with the plane $z = H$, and its upper surface is described by the function $z(x, y) = H - \varphi(x, y)$ with a non-negative function $\varphi(x, y)$, $\max \varphi(x, y) < H$, then the gravitational field on the Earth's surface is described by the equation
$$G \iint \int_{H - \varphi(\zeta, \eta)}^{H} \frac{\sigma(\zeta, \eta, \xi)\, \xi\, d\zeta\, d\eta\, d\xi}{\left( (x - \zeta)^2 + (y - \eta)^2 + \xi^2 \right)^{3/2}} = f(x, y, 0), \tag{45}$$
where G is the gravitational constant; σ ( ζ , η , ξ ) is the density of the body.
It is assumed that the density σ ( ζ , η , ξ ) 0 outside the body and that the density is differentiable with respect to ξ .
To simplify further calculations, we assume that the density does not depend on $\xi$. Then, we obtain the equation
$$G \iint \sigma(\zeta, \eta) \left[ \frac{1}{\left( (x - \zeta)^2 + (y - \eta)^2 + (H - \varphi(\zeta, \eta))^2 \right)^{1/2}} - \frac{1}{\left( (x - \zeta)^2 + (y - \eta)^2 + H^2 \right)^{1/2}} \right] d\zeta\, d\eta = f(x, y, 0). \tag{46}$$
Linearization of Equation (46) leads to
$$G \iint \frac{\sigma(\zeta, \eta)\, H\, \varphi(\zeta, \eta)}{\left( (x - \zeta)^2 + (y - \eta)^2 + H^2 \right)^{3/2}}\, d\zeta\, d\eta = f(x, y, 0). \tag{47}$$
Below, we assume that the density is constant and, for convenience, let $G \sigma(\zeta, \eta) = 1/(2\pi)$. Then Equation (47) takes the form
$$\frac{1}{2\pi} \iint \frac{H\, \varphi(\zeta, \eta)}{\left( (x - \zeta)^2 + (y - \eta)^2 + H^2 \right)^{3/2}}\, d\zeta\, d\eta = f(x, y, 0). \tag{48}$$
The problem of the logarithmic potential leads to the nonlinear integral equation
$$G \int_a^b \sigma(s) \ln \frac{(x - s)^2 + H^2}{(x - s)^2 + (H - z(s))^2}\, ds = f(x), \tag{49}$$
where $z(\zeta)$ is a function describing the surface of the body, and $H$ is the depth of the body location.
Linearization of Equation (49) leads [43] to the linear integral equation
$$2 G \sigma H \int_a^b \frac{z(\zeta)\, d\zeta}{(x - \zeta)^2 + H^2} = f(x). \tag{50}$$
A detailed review of the literature of approximate methods for solving inverse problems of gravity exploration is given in [44,45,46].
In [47], the nonlinear Equation (45) is approximated by the simpler nonlinear equation
$$\frac{1}{4\pi} \iint \frac{\sigma(\zeta, \eta) \left( \varphi^2(\zeta, \eta) - 2(H - z)\, \varphi(\zeta, \eta) \right) d\zeta\, d\eta}{\left( (x - \zeta)^2 + (y - \eta)^2 + (H - z)^2 \right)^{3/2}} = f(x, y, z). \tag{51}$$
Here, $H$ is the depth of the body location, $\sigma(x, y)$ is its density, and $H - \varphi(x, y)$ stands for the contact surface shape.
Similarly, Equation (49) is approximated by
$$f(x, z) = \frac{G}{4\pi} \int_{-l}^{l} \frac{\sigma(\xi) \left( 2(z - H)\, \varphi(\xi) + \varphi(\xi)^2 \right)}{(x - \xi)^2 + \left( z - H + \varphi(\xi) \right)^2}\, d\xi. \tag{52}$$
There are three unknowns in (51): the depth $H$ of the gravitating body, the density $\sigma(x, y)$ of the body, and the shape $H - \varphi(x, y)$ of the contact surface. To find them, it is necessary to have three independent sources of information.
Assume that the gravity field values are known on the surfaces $z = 0$, $z = -h_1$, $z = -h_2$. Denote $f(x, y, 0)$, $f(x, y, -h_1)$, $f(x, y, -h_2)$ by $f(x, y)$, $f_1(x, y)$, $f_2(x, y)$, respectively. Setting $z = 0$, $z = -h_1$, $z = -h_2$ in (51), we have
$$\frac{1}{4\pi} \iint \frac{\sigma(\xi, \eta) \left( 2H \varphi(\xi, \eta) - \varphi^2(\xi, \eta) \right) d\xi\, d\eta}{\left( (x - \xi)^2 + (y - \eta)^2 + H^2 \right)^{3/2}} = f(x, y),$$
$$\frac{1}{4\pi} \iint \frac{\sigma(\xi, \eta) \left( 2(H + h_1) \varphi(\xi, \eta) - \varphi^2(\xi, \eta) \right) d\xi\, d\eta}{\left( (x - \xi)^2 + (y - \eta)^2 + (H + h_1)^2 \right)^{3/2}} = f_1(x, y),$$
$$\frac{1}{4\pi} \iint \frac{\sigma(\xi, \eta) \left( 2(H + h_2) \varphi(\xi, \eta) - \varphi^2(\xi, \eta) \right) d\xi\, d\eta}{\left( (x - \xi)^2 + (y - \eta)^2 + (H + h_2)^2 \right)^{3/2}} = f_2(x, y). \tag{53}$$
Introduce the functions $\sigma(x, y)\, \varphi(x, y) = w_1(x, y)$ and $\sigma(x, y)\, \varphi^2(x, y) = w_2(x, y)$. This way, the system (53) turns into a system that is linear in the new variables:
$$\frac{1}{4\pi} \iint \frac{\left( 2H w_1(\xi, \eta) - w_2(\xi, \eta) \right) d\xi\, d\eta}{\left( (x - \xi)^2 + (y - \eta)^2 + H^2 \right)^{3/2}} = f(x, y),$$
$$\frac{1}{4\pi} \iint \frac{\left( 2(H + h_1) w_1(\xi, \eta) - w_2(\xi, \eta) \right) d\xi\, d\eta}{\left( (x - \xi)^2 + (y - \eta)^2 + (H + h_1)^2 \right)^{3/2}} = f_1(x, y),$$
$$\frac{1}{4\pi} \iint \frac{\left( 2(H + h_2) w_1(\xi, \eta) - w_2(\xi, \eta) \right) d\xi\, d\eta}{\left( (x - \xi)^2 + (y - \eta)^2 + (H + h_2)^2 \right)^{3/2}} = f_2(x, y). \tag{54}$$
Applying the Fourier transformation to (54), we have
$$4\pi e^{-H|\omega|}\, W_1(\omega_1, \omega_2) - \frac{2\pi}{H} e^{-H|\omega|}\, W_2(\omega_1, \omega_2) = F(\omega_1, \omega_2),$$
$$4\pi e^{-(H + h_1)|\omega|}\, W_1(\omega_1, \omega_2) - \frac{2\pi}{H + h_1} e^{-(H + h_1)|\omega|}\, W_2(\omega_1, \omega_2) = F_1(\omega_1, \omega_2),$$
$$4\pi e^{-(H + h_2)|\omega|}\, W_1(\omega_1, \omega_2) - \frac{2\pi}{H + h_2} e^{-(H + h_2)|\omega|}\, W_2(\omega_1, \omega_2) = F_2(\omega_1, \omega_2), \tag{55}$$
where $|\omega| = (\omega_1^2 + \omega_2^2)^{1/2}$ and the capital letters denote the Fourier transforms of the corresponding functions.
To solve the systems (54) and (55), we used a continuous method for solving nonlinear operator equations, similar to the methods described above for Fredholm integral equations and for systems of linear algebraic equations. A detailed description of the computations and their justification is given in [47]. More details on the logarithmic potential case can be found in [48].
In Figure 1, we show a numerical example of solving Equation (49) using the HNN. In the works cited above, the HNN has been applied to nonlinear approximations of the original equation. Here, we demonstrate an application of the HNN to the original nonlinear equation with the logarithmic potential. Consider Equation (49) with the following parameters: $a = -5$, $b = 5$, $G = 1$, $\sigma(s) = 1$, $H = 2$, $f(x) = \int_{-5}^{5} \ln\left( (x - s)^2 + 4 \right) ds - \int_{-5}^{5} \ln\left( (x - s)^2 + \left( 2 - 3(s^2 + 1)^{-1/4} \right)^2 \right) ds$. The exact solution reads $z(x) = 3(x^2 + 1)^{-1/4}$. Equation (49) has been solved numerically via the Cauchy problem for the following system of nonlinear differential equations:
$$\frac{d\alpha_k(t)}{dt} = -\frac{10}{N} \sum_{l=0}^{N-1} \ln \frac{(t_k - t_l)^2 + H^2}{(t_k - t_l)^2 + (H - \alpha_l(t))^2} + f(t_k), \quad k = 0, 1, \ldots, N - 1,$$
where $t_k = -5 + 10k/N$ and the initial state is $\alpha_k(0) = 0.01$. The system has been solved by Euler's method with the time step $h = 0.4$. In Figure 1a, we show the final state for $N = 180$ after 100 iterations together with the exact solution. The numerical and the exact solutions are indistinguishable on the scale of the figure. The corresponding numerical error is shown in Figure 1b.

10. Implementation and Numerical Experiments

We noted above that the continuous operator method has been applied for solving inverse coefficients problems [49], hypersingular integral equations [38], and inverse problems in astrophysics [50]. In each of these works, the obtained results were compared with the known ones.
The most significant advantages of the continuous method for solving nonlinear operator equations compared with iterative methods are as follows:
(1) Implementation of the basic method does not require the Gateaux or Frechet derivative of the operator of the equation;
(2) Implementation of the modified method does not require invertibility of the Gateaux or Frechet derivative of the operator of the equation;
(3) The continuous operator method is based on Lyapunov's stability theory for solutions of differential equations; therefore, the method is stable under perturbations of the coefficients and right-hand sides of the equations.
In this section, the method's efficiency is illustrated with the example of solving the inverse problem for the logarithmic potential. Here, we solve the exact equation without any simplifications. As far as the authors know, the inverse problem for the logarithmic potential has not previously been solved in such a formulation.
Let us return to Equation (49) and write it in a more convenient form:
$$G \int_0^1 \sigma(s) \ln \frac{(x - s)^2 + H^2}{(x - s)^2 + (H - u(s))^2}\, ds = f(x), \tag{56}$$
where u ( s ) is the function describing the surface of the body; σ ( s ) is the density of the body, and H is the occurrence depth of the body.
Recall the setting. There is a gravitating body infinitely extended along the $Y$-axis and homogeneous along $y$. In this case, the body can be treated as two-dimensional, and we restrict ourselves to considering $y = 0$. Thus, we consider a body lying in the region $G: H - u(x) \le z \le H$, $0 \le x \le 1$. Let $f(x)$, $0 \le x \le 1$, stand for the perturbation of the Earth's external gravitational field. It is required to recover the function $u(x)$ from the data $H$, $\sigma(x)$, $f(x)$.
Similar problems are of great practical importance.
It was mentioned above that there are extensive studies devoted to equations of the form (56). They considered either linearized equations or nonlinear approximations of Equation (56).
Below, we demonstrate the use of Hopfield neural networks for solving the original Equation (56).
Let $\Delta_k = [t_k, t_{k+1})$, $k = 0, 1, \ldots, N - 2$, and $\Delta_{N-1} = [t_{N-1}, t_N]$, where $t_k = k/N$, $k = 0, 1, \ldots, N$. Introduce the nodes $\bar{t}_k = (k + 1/2)/N$, $k = 0, 1, \ldots, N - 1$.
We will seek an approximate solution in the form of a piecewise constant function
$$u_N(x) = \sum_{k=0}^{N-1} \alpha_k \psi_k(x),$$
where
$$\psi_k(x) = \begin{cases} 1, & x \in \Delta_k, \\ 0, & x \in [0, 1] \setminus \Delta_k, \end{cases} \quad k = 0, 1, \ldots, N - 1.$$
The coefficients α k ,   k = 0 , 1 , , N 1 are determined from the system of nonlinear algebraic equations
$$\frac{G}{N} \sum_{l=0}^{N-1} \sigma(\bar{t}_l) \ln \frac{(\bar{t}_k - \bar{t}_l)^2 + H^2}{(\bar{t}_k - \bar{t}_l)^2 + (H - u_N(\bar{t}_l))^2} = f(\bar{t}_k), \quad k = 0, 1, \ldots, N - 1. \tag{57}$$
The system of Equation (57) is associated with the Cauchy problem
$$\frac{d\alpha_k(v)}{dv} = \gamma_k \left( \frac{G}{N} \sum_{l=0}^{N-1} \sigma(\bar{t}_l) \ln \frac{(\bar{t}_k - \bar{t}_l)^2 + H^2}{(\bar{t}_k - \bar{t}_l)^2 + (H - \alpha_l(v))^2} - f(\bar{t}_k) \right), \quad k = 0, 1, \ldots, N - 1. \tag{58}$$
The coefficients $\gamma_k = \pm 1$, $k = 0, 1, \ldots, N - 1$, are selected so that the logarithmic norm of the Jacobian of the right-hand side of the system (58) is negative.
The system (58) can be solved by any numerical method.
Application of the Euler method leads to the iterative scheme
$$\alpha_k^{(m+1)} = \alpha_k^{(m)} + h \gamma_k \left( \frac{G}{N} \sum_{l=0}^{N-1} \sigma(\bar{t}_l) \ln \frac{(\bar{t}_k - \bar{t}_l)^2 + H^2}{(\bar{t}_k - \bar{t}_l)^2 + (H - \alpha_l^{(m)})^2} - f(\bar{t}_k) \right), \tag{59}$$
$k = 0, 1, \ldots, N - 1$. Here, $h$ is the step of the Euler method.
The algorithm described above was applied to solve the problem of restoring a gravitating body with the data: σ = 2 ,   H = 1.5 ,   N = 180 ,   h = 0.4 ,   m = 100 .
The exact solution of the problem is $u(x) = 3/(2H(x^2 + 1))$. The function $f(x)$ has been computed by the quadrature rule
$$f(\bar{t}_k) = \frac{2}{N} \sum_{l=0}^{N-1} \ln \frac{(\bar{t}_k - \bar{t}_l)^2 + H^2}{(\bar{t}_k - \bar{t}_l)^2 + (H - u(\bar{t}_l))^2}. \tag{60}$$
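The scheme (57)-(60) can be reproduced with the following sketch (ours), using the data above; the choice $\gamma_k = -1$ for all $k$ is an assumption made so that the logarithmic norm of the Jacobian is negative for this kernel.

```python
import numpy as np

G, sigma, H = 1.0, 2.0, 1.5
N, h, m = 180, 0.4, 100
tb = (np.arange(N) + 0.5) / N                # nodes on [0, 1]
u_exact = 3.0 / (2.0 * H * (tb ** 2 + 1.0))  # exact solution
D2 = (tb[:, None] - tb[None, :]) ** 2        # (tbar_k - tbar_l)^2

def lhs(u):
    # (G/N) sum_l sigma * ln[ (D2 + H^2) / (D2 + (H - u_l)^2) ], cf. (57)
    return (G * sigma / N) * np.log(
        (D2 + H ** 2) / (D2 + (H - u[None, :]) ** 2)).sum(axis=1)

f = lhs(u_exact)                             # right-hand side, as in (60)
alpha = np.full(N, 0.01)                     # initial state
for _ in range(m):                           # iterative scheme (59)
    alpha = alpha - h * (lhs(alpha) - f)     # gamma_k = -1 for all k
print(np.abs(alpha - u_exact).max())         # error after m iterations
```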
Detailed computational results are shown in Table 2.

11. Conclusions

The paper is devoted to approximate methods for solving linear and nonlinear equations of mathematical physics. In doing so, we used a spline-collocation method based on a continuous method for solving nonlinear operator equations [19]. It has been shown that a continuous method computational scheme for solving nonlinear operator equations can be implemented with Hopfield neural networks. The efficiency and flexibility of the approach has been shown by evaluating multiple integrals, and solving Fredholm integral equations and hypersingular integral equations. In addition to the listed examples, the method was efficiently applied to a direct and inverse electromagnetic wave scattering problem [51], amplitude-phase problem [52,53], solving Ambartsumian’s systems of equations (astrophysics) [50], solving inverse problems of gravity and magnetic prospecting [47], and solving direct and inverse problems for parabolic and hyperbolic equations [54].
The authors intend to continue the study of the applicability of the continuous method for solving nonlinear operator equations, Hopfield neural networks, to new classes of equations. First, we want to use it for solving inverse problems in optics. Two points should be taken into account. First, the authors started to study the amplitude-phase problem [52,53]. Second, a couple of works devoted to applications of neural networks for inverse problems of restoration have been published recently [55,56]. It is of interest to investigate the possible use of the continuous operator method application for these issues.
As noted above, the continuous method for solving nonlinear operator equations is based on Lyapunov’s stability theory. The authors have studied the stability of Hopfield neural networks [22,57] and considered some issues of stabilization for dynamic systems [58,59]. A new statement of the problem of stabilization for dynamical systems was given in [60,61]. The authors intend to use this formulation in coming works.

Author Contributions

Conceptualization, I.B.; Data curation, V.R. and A.B.; Formal analysis, I.B. and V.R.; Investigation, I.B. and V.R.; Methodology, I.B. and V.R.; Software, V.R. and A.B.; Writing—original draft, I.B.; Writing—review & editing, V.R. and A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Galushkin, A.I. Theory of Neural Networks; IPRZHR: Moscow, Russia, 2000; 416p. [Google Scholar]
  2. Gorban, A.N.; Dunin-Barkovsky, V.L.; Kirdin, A.N.; Mirkes, E.M.; Novokhod’ko, A.Y.; Rossiev, D.A.; Terekhov, S.A.; Senashova, M.Y.; Tzargorodtzev, V.G. Neuroinformatics; Siberian Enterprise “Science”: Novosibirsk, Russia, 1998; 296p. [Google Scholar]
  3. Gorbachenko, V.I. Neurocomputers in Solving Boundary Value Problems of Field Theory; Radio Engineering: Moscow, Russia, 2003; 336p. [Google Scholar]
  4. Gupta, M.M.; Jin, L.; Hamma, N. Static and Dynamic Neural Networks from Fundamentals to Advanced Theory; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2005; 722p. [Google Scholar]
  5. Haykin, S. Neural Networks: A Comprehensive Foundation; Prentice Hall: Hoboken, NJ, USA, 1999; 842p. [Google Scholar]
  6. Joya, G.; Atencia, M.A.; Sandoval, F. Application of high-order Hopfield neural networks to the solution of Diophantine equations. Lect. Notes Comput. Sci. 1991, 540, 395–400. [Google Scholar]
  7. Lagaris, I.E.; Likas, A.C.; Fotiadis, D.I. Artificial neural networks for solving ordinary and partial differential equations. IEEE Trans. Neural Netw. 1998, 9, 987–1000. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Lagaris, I.E.; Likas, A.C.; Papageorgiou, D.G. Neural-network methods for boundary value problems with irregular boundaries. IEEE Trans. Neural Netw. 2000, 11, 1041–1049. [Google Scholar] [CrossRef] [Green Version]
  9. Lee, H.; Kang, I.S. Neural algorithm for solving differential equations. J. Comput. Phys. 1990, 91, 110–131. [Google Scholar] [CrossRef]
  10. Mehdi, D.; Mojtaba, N.; Menhaj, M.B. Numerical solution of Helmholtz equation by the modified Hopfield finite difference technique. Numer. Partial Differ. Equ. 2009, 25, 637–656. [Google Scholar]
  11. Nesterenko, B.B.; Novotarsky, M.A. Solution of boundary value problems on discrete cellular neural networks. Artif. Intell. 2008, 3, 568–578. [Google Scholar]
  12. Tarkhov, D.A. Neural Networks as a Means of Mathematical Modeling; Radio Engineering: Moscow, Russia, 2006; 48p. [Google Scholar]
  13. Hopfield, J.J. Neural Networks and Physical Systems with Emergent Collective Computational Abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef] [Green Version]
  14. Hopfield, J.J.; Tank, D.W. Neural Computation of decision in Optimization problems. Biol. Cybern. 1985, 52, 141–152. [Google Scholar] [CrossRef]
  15. Tank, D.W.; Hopfield, J. Simple Neural Optimization: An A/D Converter, a Single Decision Circuit and Linear Programming Circuit. IEEE Trans. Circuit Syst. 1991, 33, 137–142. [Google Scholar]
  16. Jang, J.S.; Lee, S.Y.; Shin, S.Y. An Optimization Network for Solving a Set of Simultaneous Linear Equations. In Proceedings of the IJCNN International Joint Conference on Neural Networks, Baltimore, MD, USA, 7–11 June 1992; pp. 516–521. [Google Scholar]
  17. Mishra, D.; Kalra, P.K. Modified Hopfield Neural Network Approach for Solving Nonlinear Algebraic Equations. Eng. Lett. 2007, 14, 135–142. [Google Scholar]
  18. Atencia, M.; Joya, G.; Sandoval, F. Hopfield Neural Networks for Parametric Identification of Dynamical Systems. Neural Process. Lett. 2005, 21, 143–152. [Google Scholar] [CrossRef]
  19. Boikov, I.V. On a continuous method for solving nonlinear operator equations. Differ. Equ. 2012, 48, 1308–1314. [Google Scholar] [CrossRef]
  20. Potapov, A.A.; Gilmutdinov, A.K.; Ushakov, P.A. Fractal Elements and Radio Systems: Physical Aspects; Radio Engineering: Moscow, Russia, 2009; 200p. [Google Scholar]
  21. Eterman, I.I. Analogue Computers; Pergamon Press: New York, NY, USA, 1960; 264p. [Google Scholar]
  22. Boikov, I.V. Stability of Hopfield neural networks. Autom. Remote Control 2003, 64, 1474–1487. [Google Scholar] [CrossRef]
  23. Boikov, I.V. Stability of Solutions of Differential Equations; Publishing House of Penza State University: Penza, Russia, 2008; 244p. [Google Scholar]
  24. Daletskii, Y.L.; Krein, M.G. Stability of Solutions of Differential Equations in Banach Space; Nauka: Moscow, Russia, 1970; 536p. [Google Scholar]
  25. Lozinskii, S.M. Note on a paper by V.S. Godlevskii. USSR Comput. Math. Math. Phys. 1973, 13, 232–234. [Google Scholar] [CrossRef]
  26. Kantorovich, L.V.; Akilov, G.P. Functional Analysis in Normed Spaces; Pergamon Press: Oxford, UK, 1982; 604p. [Google Scholar]
  27. Krasnoselskii, M.A.; Vainikko, G.M.; Zabreiko, P.P.; Rutitcki, J.B.; Stecenko, V.J. Approximated Solutions of Operator Equations; Walters and Noordhoff: Groningen, The Netherlands, 1972; 484p. [Google Scholar]
  28. Gavurin, M.K. Nonlinear functional equations and continuous analogues of iterative methods. Izv. Univ. Math. 1958, 5, 18–31. [Google Scholar]
  29. Puzynina, T.P. Modified Newtonian Schemes for the Numerical Study of Quantum Field Models. Abstract of. Doctoral Dissertation, Tver State University, Tver, Russia, 2003. [Google Scholar]
  30. Puzynin, I.V.; Boyadzhiev, T.L.; Vinitsky, S.I.; Zemlyanaya, E.V.; Puzynina, T.P.; Chuluunbaatar, O. On the methods of computational physics for the study of models of complex physical processes. Phys. Elem. Part. At. Nucl. 2007, 38, 144–232. [Google Scholar]
  31. Boikov, I.V. On the stability of solutions of differential and difference equations in critical cases. Sov. Math. Dokl. 1990, 42, 630–632. [Google Scholar]
  32. Arnold, V.I. On functions of three variables. Dokl. AN SSSR 1957, 144, 679–681. [Google Scholar]
  33. Kolmogorov, A.N. On the representation of continuous functions of several variables as superpositions of continuous functions of one variable and addition. Dokl. AN SSSR 1957, 114, 953–956. [Google Scholar]
  34. Kurkova, V.; Sanguineti, M. Bounds on rates of variable. Basis and neural network approximations. IEEE Trans. Inf. Theory 2001, 47, 2659–2665. [Google Scholar] [CrossRef]
  35. Strongin, R.G.; Sergeev, Y.D. Global Optimization with Non-Convex Constraints. Sequential and Parallel Algorithms; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2000; 728p. [Google Scholar]
  36. Boykov, I.V. Optimal Function Approximation Methods and Calculation of Integrals; Publishing House of Penza State University: Penza, Russia, 2007; 236p. [Google Scholar]
  37. Boikov, I.V.; Roudnev, V.A.; Boikova, A.I. Approximate solution of problems of mathematical physics on Hopfield neural networks. Neurocomput. Dev. Appl. 2013, 10, 13–22. [Google Scholar]
  38. Boykov, I.V.; Roudnev, V.A.; Boykova, A.I.; Baulina, O.A. New iterative method for solving linear and nonlinear hypersingular integral. Appl. Numer. Math. 2018, 127, 280–305. [Google Scholar] [CrossRef]
  39. Hadamard, J. Lectures on Cauchy’s Problem in Linear Partial Differential Equations; Dover Publication Inc.: New York, NY, USA, 1952; 334p. [Google Scholar]
  40. Chikin, L.A. Special cases of the Riemann boundary value problems and singular integral equations. Sci. Notes Kazan State Univ. 1953, 1953 113, 53–105. [Google Scholar]
  41. Boykov, I.V.; Ventsel, E.S.; Roudnev, V.A.; Boykova, A.I. An approximate solution of nonlinear hypersingular integral equations. Appl. Numer. Math. 2014, 86, 1–21. [Google Scholar] [CrossRef]
  42. Boykov, I.V.; Boykova, A.I. Approximate solution of hypersingular integral equations with odd singularities of integer order. Univ. Proc. Volga Reg. Phys. Math. Sci. Math. 2010, 3, 15–27. [Google Scholar]
  43. Strakhov, V.N. Some questions of the plane problem of gravimetry. Proc. Acad. Sci. USSR Phys. Earth 1970, 12, 32–44. [Google Scholar]
  44. Boikov, I.V.; Boikova, A.I. Approximate Methods for Solving Direct and Inverse Problems of Gravity Exploration; Publishing House of the Penza State University: Penza, Russia, 2013; 510p. [Google Scholar]
  45. Mudretsova, E.A.; Veselov, K.E. (Eds.) Gravity Exploration; Nedra: Moscow, Russia, 1990; 607p. [Google Scholar]
  46. Zhdanov, M.S. Integral Transforms in Geophysics; Springer: Berlin/Heidelberg, Germany, 1988; 350p. [Google Scholar]
  47. Boikov, I.V.; Ryazantsev, V.A. On Simultaneous Restoration of Density and Surface Equation in an Inverse Gravimetry Problem for a Contact Surface. Numer. Anal. Appl. 2020, 13, 241–257. [Google Scholar] [CrossRef]
  48. Boikov, I.V.; Boikova, A.I.; Baulina, O.A. Continuous Method for Solution of Gravity Prospecting Problems. In Practical and Theoretical Aspects of Geological Interpretation of Gravitational, Magnetic and Electric Fields; Nurgaliev, D., Khairullina, N., Eds.; Springer: Cham, Switzerland, 2019; pp. 55–68. [Google Scholar]
  49. Boikov, I.V.; Ryazantsev, V.A. An Approximate Method for Solving Inverse Coefficient Problem for the Heat Equation. J. Appl. Ind. Math. 2021, 15, 175–189. [Google Scholar] [CrossRef]
  50. Boykov, I.V.; Pivkina, A.A. Iterative methods of solution Ambartsumyan’s equations. Part 2. Univ. Proc. Volga Reg. Phys. Math. Sci. Math. 2021, 4, 71–87. [Google Scholar]
  51. Boykov, I.V.; Roudnev, V.A.; Boykova, A.I.; Stepanov, N.S. Continuous operator method application for direct and inverse scattering. Zhurnal SVMO 2021, 23, 247–272. [Google Scholar] [CrossRef]
  52. Boikov, I.V.; Zelina, Y.V. Approximate Methods of Solving Amplitude-Phase Problems for Continuous Signals. Meas. Tech. 2021, 64, 386–397. [Google Scholar] [CrossRef]
  53. Boikov, I.V.; Zelina, Y.V.; Vasyunin, D.I. Approximate methods for solving amplitude-phase problem for discrete signals. J. Phys. Conf. Ser. 2021, 2099, 012002. [Google Scholar] [CrossRef]
  54. Boikov, I.V.; Ryazantsev, V.A. On an iterative method for solution of direct problem for nonlinear hyperbolic differential equations. Zhurnal SVMO 2020, 22, 155–163. [Google Scholar] [CrossRef]
  55. Yin, W.; Yang, W.; Liu, H. A neural network scheme for recovering scattering obstacles with limited phaseless far-field. J. Comput. Phys. 2020, 417, 109594. [Google Scholar] [CrossRef]
  56. Gao, Y.; Hongyu, L.; Wang, X.; Zhang, K. On an artificial neural network for inverse scattering problems. J. Comput. Phys. 2022, 448, 110771. [Google Scholar] [CrossRef]
  57. Boykov, I.; Roudnev, V.; Boykova, A. Stability of Solutions to Systems of Nonlinear Differential Equations with Discontinuous Right-Hand Sides: Applications to Hopfield Artificial Neural Networks. Mathematics 2022, 10, 1524. [Google Scholar] [CrossRef]
  58. Boikov, I.V. The Brockett stabilization problem. Autom. Remote 2005, 66, 745–751. [Google Scholar] [CrossRef]
  59. Boykov, I.V.; Krivulin, N.P. Methods for Control of Dynamical Systems with Delayed Feedback. J. Math. Sci. 2021, 255, 561–573. [Google Scholar] [CrossRef]
  60. Halik, A.; Wumaier, A. Synchronization on the non-autonomous cellular neural networks with time delays. J. Nonlinear Funct. Anal. 2020, 2020, 51. [Google Scholar]
  61. Hao, J.; Zhu, W. Architecture self-attention mechanism: Nonlinear optimization for neural architecture search. J. Nonlinear Var. Anal. 2021, 5, 119–140. [Google Scholar]
Figure 1. Exact z(x) and numerical z_N(x) solutions of Equation (49): (a) the solutions; (b) the numerical error. The computation used 180 spatial grid points and time step h = 0.4 for 100 iterations.
Table 1. Convergence of the method to the solution of the linear integral Equation (28) with respect to the number of iterations N; ϵ = ‖x_N(t) − t‖_C.

N      ϵ
10     5.2698 × 10^−2
50     1.1581 × 10^−3
100    5.8614 × 10^−3
150    3.9232 × 10^−3
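As an illustration of how the iteration behind Table 1 can be reproduced, the following Python sketch applies the continuous operator method to a discretized linear Fredholm equation of the second kind. Since Equation (28) is not restated in this back matter, the kernel K(t, s) = e^{−ts}, the quadrature rule, and the step size h below are illustrative assumptions; the right-hand side is constructed so that the exact solution is x(t) = t, consistent with the error measure ϵ = ‖x_N(t) − t‖_C used in the table. Only the scheme itself — Euler integration of the auxiliary Cauchy problem dx/dτ = −(Ax − f) to its stable equilibrium — reflects the method of the paper.

import numpy as np

# Illustrative sketch: continuous operator method for a linear Fredholm
# equation of the second kind, x(t) + int_0^1 K(t,s) x(s) ds = f(t).
# The kernel K and right-hand side f are hypothetical stand-ins for
# Equation (28); f is built so that the exact solution is x(t) = t.

def continuous_method_linear(N=100, h=0.4, iterations=100):
    t = np.linspace(0.0, 1.0, N)
    w = np.full(N, 1.0 / N)               # simple quadrature weights
    K = np.exp(-t[:, None] * t[None, :])  # hypothetical kernel K(t, s)
    A = np.eye(N) + K * w                 # discretized operator of the equation
    f = A @ t                             # right-hand side consistent with x(t) = t
    x = np.zeros(N)                       # initial approximation x(0) = 0
    for _ in range(iterations):
        x = x - h * (A @ x - f)           # Euler step for dx/dtau = -(A x - f)
    return np.max(np.abs(x - t))          # error in the uniform (C) norm

print(continuous_method_linear())

For this contraction-type example the Euler iteration converges geometrically to the quadrature solution; the particular error values in Table 1 depend, of course, on the actual kernel of Equation (28).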
Table 2. Convergence of the continuous method to the solution of the nonlinear integral Equation (56); pointwise errors ϵ_k = |u(t_k) − U_N(t_k)| at the grid nodes t_k.

t_k    U_N(t_k)           u(t_k)             ϵ_k
0      5.011124 × 10^−1   5.000000 × 10^−1   1.112496 × 10^−3
1      5.065099 × 10^−1   5.055864 × 10^−1   9.235025 × 10^−4
10     5.582226 × 10^−1   5.586206 × 10^−1   3.980314 × 10^−4
20     6.220347 × 10^−1   6.230769 × 10^−1   1.042146 × 10^−3
30     6.913298 × 10^−1   6.923077 × 10^−1   9.778526 × 10^−4
40     7.635971 × 10^−1   7.641509 × 10^−1   5.537315 × 10^−4
50     8.349188 × 10^−1   8.350515 × 10^−1   1.326703 × 10^−4
60     9.000700 × 10^−1   9.000000 × 10^−1   7.007729 × 10^−5
70     9.529645 × 10^−1   9.529411 × 10^−1   2.333060 × 10^−5
80     9.876083 × 10^−1   9.878048 × 10^−1   1.965277 × 10^−4
90     9.995410 × 10^−1   1.000000           4.589433 × 10^−4
100    9.872545 × 10^−1   9.878048 × 10^−1   5.502841 × 10^−4
110    9.527123 × 10^−1   9.529411 × 10^−1   2.287820 × 10^−4
120    9.004980 × 10^−1   8.999999 × 10^−1   4.980633 × 10^−4
130    8.361412 × 10^−1   8.350515 × 10^−1   1.089730 × 10^−3
140    7.646853 × 10^−1   7.641509 × 10^−1   5.344244 × 10^−4
150    6.900979 × 10^−1   6.923077 × 10^−1   2.209774 × 10^−3
160    6.153842 × 10^−1   6.230769 × 10^−1   7.692711 × 10^−3
170    5.429360 × 10^−1   5.586207 × 10^−1   1.568470 × 10^−2
180    4.747546 × 10^−1   5.000000 × 10^−1   2.524530 × 10^−2
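The nonlinear experiment of Table 2 admits the same treatment: the linear residual Ax − f is replaced by the residual F(u) of the discretized nonlinear equation, and the Cauchy problem du/dτ = −F(u) is again integrated by Euler steps until it settles at a stable equilibrium, i.e., at a solution of F(u) = 0. Equation (56) is not restated here, so in the Python sketch below the Hammerstein-type kernel, the nonlinearity, the 181-node grid (echoing the t_k column above), and the exact solution are all hypothetical placeholders.

import numpy as np

# Hypothetical nonlinear (Hammerstein-type) test problem standing in for
# Equation (56): u(t) + 0.5 * int_0^1 K(t,s) u(s)^2 ds = g(t).

def continuous_method_nonlinear(N=181, h=0.1, iterations=2000):
    t = np.linspace(0.0, 1.0, N)
    w = np.full(N, 1.0 / N)                    # simple quadrature weights
    K = np.exp(-(t[:, None] + t[None, :]))     # hypothetical kernel K(t, s)
    u_exact = 0.5 + 0.5 * np.sin(np.pi * t)    # hypothetical exact solution
    g = u_exact + 0.5 * (K * w) @ u_exact**2   # right-hand side consistent with u_exact

    def F(u):                                  # residual of the discretized equation
        return u + 0.5 * (K * w) @ u**2 - g

    u = np.full(N, 0.5)                        # initial approximation
    for _ in range(iterations):
        u = u - h * F(u)                       # Euler step for du/dtau = -F(u)
    return np.max(np.abs(u - u_exact))         # error in the uniform (C) norm

print(continuous_method_nonlinear())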