Article

Comparative Analysis of Numerical Methods for Solving 3D Continuation Problem for Wave Equation

by Galitdin Bakanov 1, Sreelatha Chandragiri 2,*, Sergey Kabanikhin 2,3 and Maxim Shishlenin 2,3
1 Faculty of Natural Sciences, Khoja Akhmet Yassawi International Kazakh-Turkish University, Turkestan 161200, Kazakhstan
2 Sobolev Institute of Mathematics, 630090 Novosibirsk, Russia
3 Institute of Computational Mathematics and Mathematical Geophysics, 630090 Novosibirsk, Russia
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(18), 2979; https://doi.org/10.3390/math13182979
Submission received: 1 July 2025 / Revised: 6 August 2025 / Accepted: 10 September 2025 / Published: 15 September 2025
(This article belongs to the Section E: Applied Mathematics)

Abstract

In this paper, we develop an explicit finite difference method (FDM) to solve an ill-posed Cauchy problem for the 3D acoustic wave equation in the time domain with data given on part of the boundary (a continuation problem) in a cube. The FDM computes solutions of hyperbolic partial differential equations (PDEs) by discretizing the given domain into a finite number of regions, thereby reducing the given PDEs to a system of linear algebraic equations (SLAE). We present the theory and, using MATLAB Version 9.14.0.2286388 (R2023a), solve the resulting large, sparse system of equations efficiently by implementing several iterative techniques. We extend the formulation of the Jacobi, Gauss–Seidel, and successive over-relaxation (SOR) iterative methods for solving the linear system, examining computational efficiency and the convergence properties of the proposed approach. Numerical experiments are conducted, and we compare the analytical and numerical solutions at different times.

1. Introduction

For hyperbolic differential equations, problems with Cauchy data on non-space-like surfaces began to be studied by F. John (1960, 1961 [1,2]).
The Cauchy problem for hyperbolic equations arises in the study of many practical problems; in particular, the Cauchy problem for the acoustic wave equation has been investigated in many works, e.g., by Douglas (1960 [3]) and Cannon (1964, 1965 [4,5]).
According to Hadamard [6,7], the Cauchy problem for the acoustic wave equation is ill-posed: a solution exists only when smoothness or strong compatibility conditions are imposed on the initial data. Hadamard showed that a global solution cannot exist unless a certain compatibility relation holds among the Cauchy data. Moreover, even when the data are such that a classical solution exists, the solution does not depend continuously on the data. Such problems are well known to be ill-posed, and they have been investigated from various angles, including regularization, existence-uniqueness theorems, and least squares methods (Payne, 1975 [8]). The problem and its applications were subsequently investigated in [9,10,11,12,13,14,15,16,17].
Inverse and ill-posed problems for three-dimensional acoustic waves in the time domain have been studied theoretically with a number of methods of different hyperbolic equations [16,18,19,20]: hyperbolic systems [21], the Green function approach, and wave splitting [22,23].
The method of scales of Banach spaces of functions analytic in some of the variables was developed to study the Cauchy problem and was used to solve the inverse problem of determining the potential in a hyperbolic equation by V. Romanov (1996 [24,25]).
Helsing et al. (2011, [26]) rewrote the Cauchy problem as an operator equation using the Dirichlet-to-Neumann map on the boundary.
The singular values of the operator of continuation problems were investigated, and a comparative analysis of numerical methods was presented [27,28,29,30].
To model wave propagation in medicine, geophysics, and engineering, among others [31], the acoustic wave equation has been widely used with a non-zero point source function. Imaging these waves in the field of medicine was shown to provide very objective information about the biological tissue being examined. Acoustic wave equations arise widely in various applications such as seabed exploration and underground imaging.
Causon et al. (2010, [32]) provided an introduction to the finite difference method (FDM) to solve partial differential equations (PDEs) and the theory of Jacobi, Gauss–Seidel, and successive over-relaxation (SOR) iterative solution methods.
Due to simple implementation and high accuracy, finite difference methods have attracted great interest from many researchers from various areas of science and engineering over the past several decades. Alford et al. (1974, [33]), Tam et al. (1993, [34]), and Yang et al. (2012, [35]) introduced many FDMs that have been developed to solve the acoustic wave equations.
The FDM is a powerful tool for acoustic or seismic wave simulations due to its high accuracy, low memory, and fast computing speed, especially for models with complex geological structures studied by Liao et al. (2011, [36]), Finkelstein et al. (2017, [37]), and Zapata et al. (2018, [38]).
Alexandre et al. (2014 [39]) studied an explicit finite difference method for the acoustic wave equation using locally adjustable time steps; for stability, the time step size is initially determined by the medium with the highest wave propagation speed, so that, over the whole domain, the higher the speed, the smaller the time step needs to be to ensure stability.
Liu et al. (2009, 2010 [40,41,42]) and Liang et al. (2013 [43]) investigated the numerical solution of acoustic modeling in media with a vertical axis of symmetry (VTI), a new time-space domain dispersion-relation-based finite difference scheme for the acoustic wave equation, and an implicit staggered-grid finite difference method for seismic modeling, which play an important role in seismic wave propagation, seismic imaging, and full waveform inversion.
Liao et al. (2018, [44]) proposed a compact high-order FDM using a novel algebraic manipulation of finite difference operators for 2D acoustic wave equations with variable velocity.
Young (1971, [45]) found an iterative solution of large linear systems using the SOR iterative method. Dancis (1991, [46]) used the SOR iteration method to solve linear equations of large sparse systems and to approximate many of the PDEs that arise in engineering and showed that, using a polynomial acceleration together with a suboptimal relaxation factor, a smaller average spectral radius can be achieved.
Rigal (1979, [47]) applied the successive over-relaxation method to non-symmetric linear systems to give the convergence domain of this method with the SOR algorithm to find the best relaxation factor in this domain.
Hadjidimos (2000 [48]) studied the theory of the SOR method and some of its properties and discussed the role of the SOR and symmetric SOR methods as preconditioners for the class of semi-iterative methods.
Britt et al. (2018, [49]) introduced an energy method to derive the stability condition for the variable coefficient case using a finite difference scheme of high-order compact time–space for the wave equation.
Additionally, generalized finite difference schemes can be found, for example, in [50], where incompressible two-phase fluid flows, i.e., a conservative Allen–Cahn–Navier–Stokes system, are solved using a new numerical scheme based on the first-order time-splitting approach applied to the time variable.
Qu et al. (2018 [51]) solved the inhomogeneous modified Helmholtz equation using Krylov deferred correction (KDC) and a generalized FDM for highly accurate solutions of transient heat conduction problems; in [52], a hybrid numerical method for 3D heat conduction in functionally graded materials was developed that integrates the advantages of the generalized FDM and KDC techniques.
Belonosov et al. (2019, [53]) solved the continuation problem for the 1D parabolic equation with the data given on the boundary part through comparative analysis of numerical methods.
Chung et al. (2021 [54]) investigated a least squares formulation for inverse ill-posed problems. The existence, uniqueness, and continuity of the inverse solution were established for noisy data in $L^2$.
Desiderio et al. (2023, [55]) solved the 2D time-domain damped wave equation using a boundary element method (BEM) and a curved virtual element method (CVEM) for the simulation of scattered wave fields by obstacles immersed in infinite homogeneous media.
Bzeih et al. (2023, [56]) studied the 2D linear wave equation with dynamical control on the boundary using the finite element method.
The backward parabolic problem was investigated [57], and the error estimates of the method were proved with respect to the noise levels.
Using a space–time discontinuous Galerkin method, Burman et al. (2023 [58]) solved the unique continuation problem for the heat equation; the consistency error and discrete inf-sup stability were established, leading to a priori estimates on the residual.
Dahmen et al. (2023, [59]) solved ill-posed PDEs that are conditionally stable concerning the design and analysis of least squares solvers, and in view of the conditional stability assumption, a general error bound was established.
Helin (2024 [60]) studied statistical inverse learning theory with the classical regularization strategy of applying finite-dimensional projections and derived probabilistic convergence rates for the reconstruction error of the maximum likelihood (ML) estimator.
Epperly (2024, [61]) solved overdetermined linear least squares problems using iterative sketching and sketch-and-precondition randomized algorithms and showed that, for large problem instances, iterative sketching is stable and faster than QR-based solvers.
Qu et al. (2024 [62]) introduced a numerical framework with stability over long time intervals for dynamic crack problems, with spatial and temporal discretizations through the meshless generalized finite difference method and the arbitrary-order KDC method to numerically simulate the system of spatial PDEs generated at each time node.
Li (2025, [63]) proposed a novel iterative method, termed the Projected Newton method (PNT), to solve large-scale Bayesian linear inverse problems; this method can update the solution step and the regularization parameter simultaneously without decompositions and expensive matrix inversions.
Bakanov et al. (2025, [64]) proposed the Jacobi numerical method to solve the 3D continuation problem for a wave equation based on FDM.
In addition, three-dimensional problems in the time domain usually lead to large, sparse linear systems that must be solved iteratively at each time step. In this work, we extend the formulation of the Jacobi, Gauss–Seidel, and SOR iterative methods to solve these linear systems. This study concludes that all three iterative methods are accurate; however, the SOR iterative method is more efficient, requiring fewer iterations and shorter execution times and converging faster than the other two.
This paper is organized as follows: The three-dimensional acoustic wave equation in the time domain is formulated in Section 2, where we give the problem formulation and an overview of the three different iterative schemes, followed by the convergence analysis. In Section 3, finite difference approximations based on the seven-point centered formula are described and applied to discretize the three-dimensional acoustic wave equation with difference schemes. Section 4 presents the applicability of the proposed schemes through some numerical experiments. In Section 5, we discuss the results. Finally, we give our conclusions and remarks in Section 6.

2. Statement of the Problem

In this paper, we consider the three-dimensional hyperbolic acoustic wave equation.

2.1. Cauchy Problem (Continuation Problem)

Consider the ill-posed [65] continuation problem in which the unknown function $v(x,y,z,t)$ satisfies the following boundary value problem (BVP) in the domain $\Omega \times (0,T) = (0,1)^3 \times (0,T)$:
$$c^{-2} v_{tt} = \Delta v - \gamma(x,y,z)\,v + p(x,y,z,t), \quad (x,y,z) \in \Omega,\; t \in (0,T), \quad (1)$$
$$v(x,y,z,0) = a_1(x,y,z), \quad v_t(x,y,z,0) = a_2(x,y,z), \quad (2)$$
$$v(0,y,z,t) = b_1(y,z,t), \quad v(1,y,z,t) = b_2(y,z,t), \quad (3)$$
$$v(x,0,z,t) = c_1(x,z,t), \quad v(x,1,z,t) = c_2(x,z,t), \quad (4)$$
$$v_z(x,y,0,t) = g(x,y,t), \quad (5)$$
$$v(x,y,0,t) = f(x,y,t). \quad (6)$$
Here, $v(x,y,z,t)$ is a function of space and time, $c$ is the wave propagation velocity of the medium, $\gamma(x,y,z)$ is a given non-zero function characterizing the medium, and $p(x,y,z,t)$ is a source function, together with the initial conditions $v(x,y,z,0)$ and $v_t(x,y,z,0)$, $(x,y,z) \in \Omega$, and the aforementioned boundary conditions.
Suppose that $p(x,y,z,t)$, $f(x,y,t)$, and $g(x,y,t)$ are given functions and
$$\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}.$$

2.2. Inverse Problem

Let us introduce the direct (well-posed) problem (DP): (1)–(5) and the boundary condition
$$v(x,y,1,t) = q(x,y,t). \quad (7)$$
The direct problem (1)–(5), (7) is to find $v$ for given $p$, $g$, and $q$.
We assume that the function $q(x,y,t)$ is unknown. However, instead of $q(x,y,t)$, we have the following additional information concerning the solution $v(x,y,z,t)$ of the DP:
$$v(x,y,0,t) = f(x,y,t). \quad (8)$$
The inverse problem (IP) consists in finding the function $q(x,y,t)$ from (1)–(5), (7), and (8).
The inverse problem (1)–(5), (7), (8) is equivalent to the continuation problem (1)–(6) in the following sense. If we solve the continuation problem, then we can find $q(x,y,t) = v(x,y,1,t)$, i.e., the solution $q$ of the inverse problem. Conversely, if we solve the inverse problem and find $q$, we can set $v(x,y,1,t) = q(x,y,t)$, solve the direct problem (1)–(5), (7), and obtain $v(x,y,z,t)$ in $\Omega \times (0,T)$, the solution of the continuation problem.

2.3. Operator Form of the Inverse Problem

The inverse problem (1)–(5), (7), (8) can be formulated in the following operator form:
$$Aq = f, \quad (9)$$
where $A: q(x,y,t) \mapsto v(x,y,0,t) = f(x,y,t)$, $v(x,y,z,t)$ is the solution of the direct problem (1)–(5), (7), $q$ is an unknown function, and $f$ is the inverse problem data.

2.4. Linear Neural Network

$$u_{xx} = Au, \quad x \in (0,h),\; y \in (0,L),\; t \in (0,T),$$
$$u(0,y,t) = f(y,t), \quad y \in (0,L),\; t \in (0,T),$$
$$u_x(0,y,t) = g(y,t), \quad y \in (0,L),\; t \in (0,T).$$
Suppose that we have a system of sources $g$ and receivers $f$.
We construct a linear neural network that maps
$$T : (g, q) \mapsto f$$
and obtain the mapping
$$Tq = f;$$
therefore,
$$T = f q^{-1}.$$

2.5. General Formulation of Continuation Problem

Let us consider the following continuation problem:
$$u_{xx} = Au, \quad x \in (0,h),\; y \in (0,L),\; t \in (0,T), \quad (10)$$
$$u(0,y,t) = f(y,t), \quad y \in (0,L),\; t \in (0,T), \quad (11)$$
$$u_x(0,y,t) = g(y,t), \quad y \in (0,L),\; t \in (0,T), \quad (12)$$
with the operator $A(x)$ acting from $H((0,L)\times(0,T))$ to $H((0,L)\times(0,T))$ and having the form
$$A(x)u = u_{tt}(x,y,t) - u_{yy}(x,y,t) - \alpha^2 u(x,y,t), \quad x \in (0,h).$$

2.6. Ill-Posedness

The solution of the problem (10)–(12) is unique, but it does not depend continuously on the Cauchy data. The instability of the solution can be illustrated by the following example:
$$u_{tt} = u_{xx} + u_{yy}, \quad u(0,y,t) = \frac{1}{k}\cos(\sqrt{2}\,ky)\cos(kt), \quad u_x(0,y,t) = 0.$$
It is easy to see that, as $k \to \infty$, the data $u_k(0,y,t) = \frac{1}{k}\cos(\sqrt{2}\,ky)\cos(kt)$ tend to zero, while the solution
$$u_k(x,y,t) = \frac{1}{k}\,\frac{e^{kx} + e^{-kx}}{2}\cos(\sqrt{2}\,ky)\cos(kt)$$
increases indefinitely in an arbitrarily small neighborhood of $x = 0$.
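For clarity, this can be verified directly from the formulas above (a short added check):
$$(u_k)_{tt} = -k^2 u_k, \qquad (u_k)_{xx} = k^2 u_k, \qquad (u_k)_{yy} = -2k^2 u_k,$$
so $u_{tt} = u_{xx} + u_{yy}$ is satisfied, while $\max_{y,t} |u_k(0,y,t)| = 1/k \to 0$ and, for any fixed $x > 0$, $u_k(x,y,t)$ grows like $e^{kx}/(2k)$ as $k \to \infty$.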

2.7. Conditional Stability Estimate

If there exists a solution $u(x,y,t) \in C^2$ to the problem (10)–(12), we can show that
$$\|u(x,y,t)\|^2 \le e^{2x(h-x)} \left( \|q(y,t)\|^2 + \frac{1}{2}\langle Au(0,y,t), u(0,y,t)\rangle - \|u_x(0,y,t)\|^2 \right)^{\frac{x}{h}} \times \left( \|u(0,y,t)\|^2 + \frac{1}{2}\langle Au(0,y,t), u(0,y,t)\rangle - \|u_x(0,y,t)\|^2 \right)^{\frac{h-x}{h}} - \frac{1}{2}\langle Au(0,y,t), u(0,y,t)\rangle - \|u_x(0,y,t)\|^2,$$
where
$$Au(0,y,t) = f_{tt} - f_{yy} - \alpha^2 f,$$
$$\langle Au(0,y,t), u(0,y,t)\rangle = \langle f_{tt} - f_{yy} - \alpha^2 f,\; f\rangle,$$
$$\|u_x(0,y,t)\|^2 = \langle g, g\rangle, \quad \|u(0,y,t)\|^2 = \langle f, f\rangle.$$

2.8. Continuation Problem for the Helmholtz Equation

Let us investigate the stability of the continuation problem. We suppose $v(x,y,t) = u(x,y)e^{i\omega t}$ and consider the continuation problem for the Helmholtz equation [66,67,68,69,70]:
$$\Delta u + k^2 u = 0, \quad x \in (0,h),\; y \in (0,\pi), \quad (13)$$
$$u(0,y) = f(y), \quad y \in (0,\pi), \quad (14)$$
$$u_x(0,y) = 0, \quad y \in (0,\pi), \quad (15)$$
$$u(x,0) = u(x,\pi) = 0, \quad x \in (0,h). \quad (16)$$
Here,
$$k^2 = \varepsilon\omega^2 - i\sigma\omega,$$
where $\varepsilon$ is the permittivity, $\sigma$ is the conductivity, and $\omega$ is the frequency.
The continuation problem in (13)–(16) consists in finding the function u ( x , y ) in the domain x ( 0 , h ) , y ( 0 , π ) by the given boundary conditions (14)–(16).
Let us formulate the continuation problem in the form of an inverse problem. We introduce the following direct problem:
$$\Delta u + k^2 u = 0, \quad x \in (0,h),\; y \in (0,\pi), \quad (17)$$
$$u_x(0,y) = 0, \quad u(h,y) = q(y), \quad y \in (0,\pi), \quad (18)$$
$$u(x,0) = u(x,\pi) = 0, \quad x \in (0,h). \quad (19)$$
Inverse problem: find the function $q(y)$ using the following additional information:
$$u(0,y) = f(y), \quad y \in (0,\pi). \quad (20)$$
The operator statement of the inverse problem in (17)–(20) can be written in the form A q = f , where A : L 2 ( 0 , π ) L 2 ( 0 , π ) [18].
Let us find the solution to the direct problem (17)–(19). We suppose that $q(y)$ has the following form:
$$q(y) = \sum_{m=1}^{\infty} q^{(m)} \sin(my),$$
and seek the direct problem solution in the form
$$u(x,y) = \sum_{m=1}^{\infty} u^{(m)}(x) \sin(my),$$
solving the sequence of direct problems
$$u^{(m)}_{xx} + k_m^2 u^{(m)} = 0, \quad x \in (0,h), \quad (21)$$
$$u^{(m)}_x(0) = 0, \quad u^{(m)}(h) = q^{(m)}, \quad (22)$$
where
$$k_m^2 = \varepsilon\omega^2 - m^2 - i\sigma\omega.$$
The general solution of Equation (21) has the following form:
$$u^{(m)}(x) = C_1 e^{\lambda_m x} + C_2 e^{-\lambda_m x},$$
where $\sqrt{-k_m^2} = \pm\lambda_m$, $\lambda_m = \alpha_m + i\beta_m$, and
$$\alpha_m = \sqrt{\frac{\sqrt{(m^2 - \varepsilon\omega^2)^2 + \sigma^2\omega^2} + m^2 - \varepsilon\omega^2}{2}},$$
$$\beta_m = \sqrt{\frac{\sqrt{(m^2 - \varepsilon\omega^2)^2 + \sigma^2\omega^2} - m^2 + \varepsilon\omega^2}{2}}.$$
Therefore, the solution of the problem (21), (22) is given by the following formula:
$$u^{(m)}(x) = \frac{\cosh(\lambda_m x)}{\cosh(\lambda_m h)}\, q^{(m)}.$$
The direct problem solution (17)–(19) is then given by the following Fourier series:
$$u(x,y) = \sum_{m=1}^{\infty} \frac{\cosh(\lambda_m x)}{\cosh(\lambda_m h)}\, q^{(m)} \sin(my);$$
therefore, the solution of the inverse problem (17)–(20) is given by the Fourier series expansion
$$q(y) = \sum_{m=1}^{\infty} f^{(m)} \cosh(\lambda_m h) \sin(my).$$
Thus, the singular values of the operator $A$ have the following form:
$$\sigma_m(A) = \frac{1}{|\cosh(\lambda_m h)|} = \frac{\sqrt{2}}{\sqrt{\cosh(2\alpha_m h) + \cos(2\beta_m h)}}.$$
Consider particular cases of singular values of the operator A.
  • Laplace equation ($\varepsilon = 0$, $\sigma = 0$):
    $$\sigma_m(A) = \frac{1}{\cosh(mh)}.$$
  • Parabolic equation ($\varepsilon = 0$, $\sigma \neq 0$):
    $$\sigma_m(A) = \frac{\sqrt{2}}{\sqrt{\cosh(2\alpha_m h) + \cos(2\beta_m h)}},$$
    $$\alpha_m = \sqrt{\frac{\sqrt{m^4 + \sigma^2\omega^2} + m^2}{2}}, \quad \beta_m = \sqrt{\frac{\sqrt{m^4 + \sigma^2\omega^2} - m^2}{2}}.$$
  • Helmholtz equation ($\varepsilon \neq 0$, $\sigma = 0$):
    $$\sigma_m(A) = \begin{cases} \dfrac{1}{|\cos(k_m h)|}, & m^2 \le \varepsilon\omega^2, \\[2mm] \dfrac{1}{\cosh(k_m h)}, & \varepsilon\omega^2 < m^2, \end{cases}$$
    where $k_m^2 = |\varepsilon\omega^2 - m^2|$.
    According to [71], the singular values depend strongly on the wave number $k_m$. The singular values of $A$ are bounded below by 1 in the low-frequency domain $m^2 \le \varepsilon\omega^2$, whereas they decay exponentially in the high-frequency domain. In the low-frequency domain, the operator $A$ is continuously invertible, and this domain grows with $k_m$, which is the most important fact.
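These singular values can also be inspected numerically. The following MATLAB sketch (the parameter values for $\varepsilon$, $\sigma$, $\omega$, and $h$ are illustrative assumptions, not values used elsewhere in this paper) plots $\sigma_m(A)$ and makes the exponential decay in the high-frequency domain visible:

```matlab
% Decay of the singular values sigma_m(A) = 1/|cosh(lambda_m h)|.
% The parameter values below are illustrative only.
eps_ = 1; sig = 0.5; om = 20; h = 1;
m   = 1:60;
lam = sqrt(m.^2 - eps_*om^2 + 1i*sig*om);   % lambda_m = sqrt(-k_m^2)
sv  = 1 ./ abs(cosh(lam*h));
semilogy(m, sv, 'o-'); xlabel('m'); ylabel('\sigma_m(A)');
```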

2.9. Limitations of Direct Methods

As the size of the coefficient matrix increases, the number of operations required to solve the equation sets grows rapidly. For large linear systems, direct methods are computationally very expensive; iterative methods are usually the preferred choice.

2.10. Iterative Methods

We use iterative methods to solve the systems of linear equations that arise from finite difference approximations of PDEs, which have large, sparse coefficient matrices. For the unknown vector $q$ of $Aq = f$, the process starts with an initial approximation, and the successive approximations are improved iteratively:
$$q^{(n+1)} = Rq^{(n)} + C, \quad n = 0, 1, 2, \ldots \quad (23)$$
where $q^{(n)}$ and $q^{(n+1)}$ are the $n$th and $(n+1)$th approximations of the solution of the linear set of equations, $R$ is the non-singular iteration matrix depending on $A$, and $C$ is a constant column vector.
Given $q^{(0)}$, classical methods generate a sequence $q^{(n)}$ that converges to the solution $A^{-1}f$, where $q^{(n+1)}$ is calculated from $q^{(n)}$ by iterating (23). The iterative strategy generates a sequence of approximate solution vectors $q^{(0)}, q^{(1)}, q^{(2)}, \ldots, q^{(n)}$ for the system $Aq = f$. The process can be stopped when
$$\|q^{(n+1)} - q^{(n)}\| < \varepsilon. \quad (24)$$
In the limiting case $n \to \infty$, $q^{(n)}$ converges to the exact solution $q = A^{-1}f$. The exact solution $q$ is a stationary point of (23): if $q^{(n)}$ equals the exact solution of the set of equations, then $q^{(n+1)}$ also equals the exact solution.

2.11. Classical Iterative Methods

Based on [48], classical iterative methods are built on the principle that the matrix $A$ can be written as the sum of other matrices. There are many ways to split the matrix; two of them give rise to the Jacobi and Gauss–Seidel methods. The successive over-relaxation method is an improved version of the Gauss–Seidel method. Classical iterative methods generally have a rather low convergence rate.
The matrix $A$ is split into two matrices $M$ and $K$ such that $M + K = A$. Here, $M$ is a diagonal matrix whose main-diagonal entries are those of $A$, and $K$ has zeros on the diagonal, with off-diagonal entries equal to the remaining entries of $A$. Applying this splitting to the set of linear equations, we obtain
$$Aq = f,$$
$$(M + K)q = f,$$
where $M$, the preconditioner (or preconditioning matrix), is taken to be invertible. Solving for $q$, we obtain
$$q = Rq + C, \quad (25)$$
where $R = -M^{-1}K$ and $C = M^{-1}f$. Here, $R$ is called the iteration matrix.
Definition 1
([45,72]). The matrix $A = (a_{ij})$ is an M-matrix if $a_{ij} \le 0$ for all $i \neq j$, $A$ is non-singular, and $A^{-1} \ge 0$. If $A$ is irreducible and $|a_{ii}| \ge \sum_{j \neq i} |a_{ij}|$ with strict inequality for at least one $i$, then $A$ is an M-matrix.
We write (25) in component form, which gives the following expression:
$$q_i = \frac{1}{a_{ii}}\left(f_i - \sum_{\substack{j=1 \\ j \neq i}}^{k} a_{ij} q_j\right), \quad i = 1, 2, \ldots, k. \quad (26)$$
Lemma 1.
Let $\|\cdot\|$ be any operator norm, $\|R\| = \max_{q \neq 0} \frac{\|Rq\|}{\|q\|}$. If $\|R\| < 1$, then $q^{(n+1)} = Rq^{(n)} + C$ converges for any $q^{(0)}$.
Proof of Lemma 1. 
Using (25), we have
$$\|q^{(n+1)} - q\| = \|Rq^{(n)} + C - Rq - C\| = \|R(q^{(n)} - q)\| \le \|R\|\,\|q^{(n)} - q\| \le \cdots \le \|R\|^{n+1}\,\|q^{(0)} - q\|.$$
If $\|R\| < 1$, then $\|R\|^{n+1} \to 0$ as $n \to \infty$. Thus, $\|q^{(n+1)} - q\| \to 0$ as $n \to \infty$. For the converse, if $q^{(n+1)} \to q$, then $\|R\| < 1$.    □
The spectral radius of the matrix $A$ is denoted by
$$\rho(A) = \max\{|\lambda|\},$$
where the maximum is taken from the overall eigenvalues λ of A.
Lemma 2
([73]). For all operator norms $\|\cdot\|$, $\rho(R) \le \|R\|$.
Corollary 1.
The iteration $q^{(n+1)} = Rq^{(n)} + C$ converges to the solution of $Aq = f$ for all initial $q^{(0)}$ and all $f$ if $\rho(R) < 1$.
Proof of Corollary 1. 
This corollary follows from Lemmas 1 and 2.    □
Remark 1.
The convergence rate of the iterative scheme $q^{(n+1)} = Rq^{(n)} + C$ measures the number of iterations needed to converge to a given tolerance and is defined as $r(R) = -\log\rho(R)$. This means that the smaller $\rho(R)$, the higher the rate of convergence. Therefore, the method is said to be efficient if we choose a splitting $A = M + K$ such that
1. $R = -M^{-1}K$ and $C = M^{-1}f$ are easy to evaluate;
2. $\rho(R)$ is small.
The splittings of the methods discussed in this section share the following notation:
$$A = M + K = D + L + U,$$
where $D$ is the diagonal of the matrix $A$, $L$ is the strictly lower triangular part of $A$, and $U$ is the strictly upper triangular part of $A$.

2.11.1. Jacobi Iterative Method

In the Jacobi iterative method, all entries of the current approximation are updated on the basis of the values of the previous approximation. The splitting of the coefficient matrix $A$ for the Jacobi iterative method is $M = D$, $K = L + U$, and its iteration reads
$$q^{(n+1)} = R_J q^{(n)} + C_J, \quad (28)$$
where $R_J = -M^{-1}K = -D^{-1}(L+U)$ and $C_J = M^{-1}f = D^{-1}f$.
The component form of Equation (28) is (see [74,75])
$$q_i^{(n+1)} = \frac{1}{a_{ii}}\left(f_i - \sum_{\substack{j=1 \\ j \neq i}}^{k} a_{ij} q_j^{(n)}\right), \quad i = 1, 2, \ldots, k,\; n \ge 0, \quad (29)$$
where the initial guess q ( 0 ) = ( q 1 ( 0 ) , q 2 ( 0 ) , q 3 ( 0 ) , , q k ( 0 ) ) for the solution can be arbitrarily chosen.
We always start with the zero initial vector $q^{(0)} = (0, 0, 0, \ldots, 0)$. The Jacobi iteration starts with an initial approximation $q^{(0)}$ and repeatedly applies the Jacobi update to create a sequence $q^{(0)}, q^{(1)}, q^{(2)}, \ldots$ that converges to the exact (analytical) solution. To control the iteration, we check the residual: for the $n$th iterate $q^{(n)}$, the residual $\varepsilon^{(n)}$ is defined by
$$\varepsilon^{(n)} = Aq^{(n)} - f,$$
and the error is controlled using root mean square (RMS) normalization, $\|\varepsilon^{(n)}\|/\sqrt{k}$. When the true solution is unknown, the proper way to control and terminate an iteration is to monitor the residual.
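For illustration, a minimal MATLAB sketch of the Jacobi iteration (29) with the RMS residual control just described might look as follows; the function name, the zero initial guess, and the inputs tol and maxit are assumptions of this sketch, not the code used for the experiments below:

```matlab
% A minimal Jacobi iteration sketch for A*q = f with RMS residual control.
% Illustrative assumptions: A is strictly diagonally dominant; tol and
% maxit are user-chosen.
function q = jacobi_sketch(A, f, tol, maxit)
    k = length(f);
    q = zeros(k, 1);                 % initial guess q^(0) = 0
    d = diag(A);                     % diagonal entries a_ii
    for n = 1:maxit
        % Equation (29): q_i^(n+1) = (f_i - sum_{j~=i} a_ij q_j^(n)) / a_ii
        q = (f - (A*q - d.*q)) ./ d;
        res = A*q - f;               % residual eps^(n) = A q^(n) - f
        if norm(res)/sqrt(k) < tol   % RMS norm control
            return
        end
    end
    warning('Jacobi sketch: maximum number of iterations exceeded.');
end
```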
The difference between the approximate solution (29) and the exact solution (26) defines the error:
$$q_i^{(n+1)} - q_i = -\sum_{\substack{j=1 \\ j \neq i}}^{k} \frac{a_{ij}}{a_{ii}} \left(q_j^{(n)} - q_j\right), \quad i = 1, 2, \ldots, k,\; n \ge 0.$$
With $\varepsilon_i^{(n)} = q_i^{(n)} - q_i$, $n \ge 0$, denoting the error of the $n$th approximation, each component of the error satisfies
$$\varepsilon_i^{(n+1)} = -\sum_{\substack{j=1 \\ j \neq i}}^{k} \frac{a_{ij}}{a_{ii}}\, \varepsilon_j^{(n)}, \quad i = 1, 2, \ldots, k,\; n \ge 0,$$
so that
$$\|\varepsilon^{(n+1)}\| \le \max_{1 \le i \le k} \sum_{\substack{j=1 \\ j \neq i}}^{k} \left|\frac{a_{ij}}{a_{ii}}\right| \, \|\varepsilon^{(n)}\|,$$
$$\|\varepsilon^{(n+1)}\| \le \|R\|\, \|\varepsilon^{(n)}\|, \quad (30)$$
where
$$\|R\| = \max_{1 \le i \le k} \sum_{\substack{j=1 \\ j \neq i}}^{k} \left|\frac{a_{ij}}{a_{ii}}\right|.$$
This shows that the rate of convergence is linear. Equation (30) implies that
$$\|\varepsilon^{(1)}\| \le \|R\|\,\|\varepsilon^{(0)}\|,$$
$$\|\varepsilon^{(2)}\| \le \|R\|\,\|\varepsilon^{(1)}\| \le \|R\|^2\,\|\varepsilon^{(0)}\|,$$
$$\|\varepsilon^{(3)}\| \le \|R\|\,\|\varepsilon^{(2)}\| \le \|R\|^3\,\|\varepsilon^{(0)}\|,$$
and so on:
$$\|\varepsilon^{(n)}\| \le \|R\|^n\,\|\varepsilon^{(0)}\|.$$
If $\|R\| < 1$, then $\|R\|^n \to 0$ as $n \to \infty$, so $\|\varepsilon^{(n)}\| \to 0$ as $n \to \infty$ and the Jacobi iteration method converges.
Definition 2
([45]). The matrix $A$ is said to be diagonally dominant if and only if $\|R\| < 1$. For $\|R\| < 1$ to hold, the coefficient matrix $A$ must be diagonally dominant, that is,
$$\max_{1 \le i \le k} \sum_{\substack{j=1 \\ j \neq i}}^{k} \left|\frac{a_{ij}}{a_{ii}}\right| < 1 \;\Longleftrightarrow\; \sum_{\substack{j=1 \\ j \neq i}}^{k} \left|\frac{a_{ij}}{a_{ii}}\right| < 1, \quad i = 1, 2, \ldots, k,$$
$$|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{k} |a_{ij}|, \quad i = 1, 2, \ldots, k.$$
Therefore, if the given system of linear equations is strictly diagonally dominant by rows, the Jacobi method converges.

2.11.2. Gauss–Seidel Iterative Method

The Gauss–Seidel method uses the values already updated in the current approximation to find the remaining ones. The splitting of the matrix $A$ for the Gauss–Seidel (GS) method is $M = D + L$, $K = U$; following the same derivation as for Equation (28), its iteration reads
$$q^{(n+1)} = R_{GS}\, q^{(n)} + C_{GS}, \quad (31)$$
where $R_{GS} = -M^{-1}K = -(D+L)^{-1}U$ and $C_{GS} = M^{-1}f = (D+L)^{-1}f$.
The component form of Equation (31) is (see [74,75])
$$q_i^{(n+1)} = \frac{1}{a_{ii}}\left(f_i - \sum_{j=1}^{i-1} a_{ij} q_j^{(n+1)} - \sum_{j=i+1}^{k} a_{ij} q_j^{(n)}\right), \quad i = 1, 2, \ldots, k,\; n \ge 0, \quad (32)$$
where $q_i^{(n+1)}$ denotes the new value of the current iteration.
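A corresponding sketch of one Gauss–Seidel sweep (32): the only change relative to the Jacobi sketch above is that the entries of q are overwritten in place, so already-updated components enter the sums immediately (again an illustration, not the experimental code):

```matlab
% One Gauss-Seidel sweep for A*q = f (Equation (32)): the entries of q
% are overwritten in place as the sweep proceeds.
function q = gs_sweep(A, f, q)
    k = length(f);
    for i = 1:k
        s = A(i, 1:i-1) * q(1:i-1) ...   % new values q_j^(n+1), j < i
          + A(i, i+1:k) * q(i+1:k);      % old values q_j^(n),   j > i
        q(i) = (f(i) - s) / A(i, i);
    end
end
```

Calling gs_sweep repeatedly until the change in q falls below the tolerance reproduces the Gauss–Seidel iteration.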

2.11.3. SOR ( ω o p t ) Iterative Method

The successive over-relaxation method (SOR($\omega_{opt}$)) is an improvement of the Gauss–Seidel method obtained by anticipating future corrections to the approximation, making an over-correction at each iterative step. This method is based on the matrix splitting
$$\omega A = (D + \omega L) + ((\omega - 1)D + \omega U);$$
applying it to the set of linear equations, we obtain
$$\big((D + \omega L) + ((\omega - 1)D + \omega U)\big)\, q = \omega f,$$
where the matrices $D$, $L$, and $U$ are the same as for the Gauss–Seidel method and $\omega$ is the over-relaxation parameter. In the iterative method, this matrix splitting gives $(D + \omega L)\, q^{(n+1)} = -[(\omega - 1)D + \omega U]\, q^{(n)} + \omega f$, so $M = D + \omega L$, $K = (\omega - 1)D + \omega U$, and the iteration reads
$$q^{(n+1)} = R_{SOR(\omega)}\, q^{(n)} + C_{SOR(\omega)}, \quad (33)$$
where $R_{SOR(\omega)} = -M^{-1}K = -(D + \omega L)^{-1}[(\omega - 1)D + \omega U]$ and $C_{SOR(\omega)} = (D + \omega L)^{-1}\omega f$.
Here, $\omega$ is the relaxation parameter. If $\omega < 1$, the method is called under-relaxation, and if $\omega > 1$, it is called over-relaxation. The method is equivalent to the Gauss–Seidel method when we take $\omega = 1$.
The component form of Equation (33) is (see [74,76])
$$q_i^{(n+1)} = (1 - \omega)\, q_i^{(n)} - \frac{\omega}{a_{ii}}\left(\sum_{j=1}^{i-1} a_{ij} q_j^{(n+1)} + \sum_{j=i+1}^{k} a_{ij} q_j^{(n)} - f_i\right), \quad i = 1, 2, \ldots, k,\; n \ge 0, \quad (34)$$
where $q_i^{(n+1)}$ denotes the new value of the current iteration.
It can be shown that this converges for $0 < \omega < 2$ (see [77]). When $\omega = 1$, this is just the Gauss–Seidel method; $\omega < 1$ is under-relaxation (which slows the convergence), and $1 < \omega < 2$ is over-relaxation. The convergence rate depends on the value of $\omega$; choosing a value that is too large is as bad as choosing one that is too small, because the solution will overshoot the final value. SOR($\omega_{opt}$) is very easy to program but does require determining the relaxation parameter $\omega$ (although this can be estimated empirically, since if $\omega$ is too large, the solution will oscillate).
Remark 2.
Note that the iteration matrices of Jacobi, Gauss–Seidel, and SOR ( ω o p t ) are denoted as R J , R G S , and R S O R ( ω ) , respectively (see Algorithm 1).
Algorithm 1 Successive Over-Relaxation (SOR($\omega_{opt}$)) [48]
  • INPUT: the entries $a_{ij}$ of the matrix $A$; the entries $f_i$ of $f$; the entries $q_i^{(0)}$ of the initial guess $q^{(0)}$; the number of equations $k$; the relaxation parameter $\omega$; tolerance $Tol$; maximum number of iterations $N$.
  • OUTPUT: the approximate solution $q_1, q_2, \ldots, q_k$ or a message that the number of iterations was exceeded.
  • Step 1: Set $n = 1$.
  • Step 2: While $n \le N$, repeat Steps 3–6.
  • Step 3: For $i = 1, \ldots, k$, set $q_i = (1 - \omega)\, q_i^{(0)} - \frac{\omega}{a_{ii}}\left(\sum_{j=1}^{i-1} a_{ij} q_j + \sum_{j=i+1}^{k} a_{ij} q_j^{(0)} - f_i\right)$.
  • Step 4: If $\|q - q^{(0)}\| < Tol$, then OUTPUT ($q_1, q_2, \ldots, q_k$); STOP.
  • Step 5: Set $n = n + 1$.
  • Step 6: For $i = 1, \ldots, k$, set $q_i^{(0)} = q_i$.
  • Step 7: OUTPUT ('Maximum number of iterations exceeded'); STOP.
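A direct MATLAB transcription of Algorithm 1 might read as follows (a sketch; the function and variable names are ours):

```matlab
% A direct transcription of Algorithm 1 (SOR) for A*q = f.
% Inputs w (relaxation parameter), Tol, and N are user choices.
function q = sor_sketch(A, f, q0, w, Tol, N)
    k = length(f);
    q = q0;
    for n = 1:N                              % Step 2
        qold = q;
        for i = 1:k                          % Step 3
            s = A(i, 1:i-1) * q(1:i-1) + A(i, i+1:k) * qold(i+1:k);
            q(i) = (1 - w) * qold(i) + w * (f(i) - s) / A(i, i);
        end
        if norm(q - qold) < Tol              % Step 4: stopping test
            return
        end
    end                                      % Steps 5-6 via the loop
    disp('Maximum number of iterations exceeded');   % Step 7
end
```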

2.12. Convergence

The number of iterations required for an iterative method to find an approximation within a certain range of the exact solution indicates the convergence rate. The choice of an optimal over-relaxation parameter and the rate of convergence depend on finding the spectral radius of the iteration matrix, or at least an upper bound for it. In the previous sections, we discussed iterative methods of the following form:
$$q^{(n+1)} = Rq^{(n)} + f,$$
where the matrix $R$ is the iteration matrix and $f$ is a vector. Table 1 lists the iteration matrices of all three iterative methods.
The iterative methods converge if the spectral radius of the iteration matrix is less than one. For any matrix norm, the following inequality holds (see [74]):
$$\rho(R) \le \|R\|,$$
where $\rho(R)$ is the spectral radius of the matrix $R$.
Lemma 3
([78]). If $A$ is irreducible and weakly row diagonally dominant, then both the Jacobi and Gauss–Seidel methods converge, and $\rho(R_{GS}) < \rho(R_J) < 1$.
The spectral radius of the Gauss–Seidel iteration matrix is the square of the spectral radius of the Jacobi iteration matrix (see [45]):
$$\rho(R_{GS}) = (\rho(R_J))^2.$$
Lemma 4
([78]). If $R_{SOR(\omega)}$ converges, then $\rho(R_{SOR(\omega)}) \ge |\omega - 1|$. Thus, $0 < \omega < 2$ is required for convergence.
Lemma 5
([78]). If $A$ is symmetric positive definite, then $\rho(R_{SOR(\omega)}) < 1$ for all $0 < \omega < 2$.
In the following subsection, the spectral radius of the SOR ( ω o p t ) method is found and is dependent on the choice of the over-relaxation parameter.

2.13. Over-Relaxation Parameter

To achieve faster convergence than the standard Gauss–Seidel algorithm, the over-relaxation parameter of the SOR($\omega_{opt}$) algorithm must lie within a narrow range around the optimal value. The optimal value of the over-relaxation parameter can be found from the following analytical expression (see [79]):
$$\omega_{opt} = \frac{2}{1 + \sqrt{1 - \rho(R_{GS})}};$$
for the optimal choice of $\omega$, the spectral radius of the SOR($\omega_{opt}$) iteration matrix is
$$\rho(R_{SOR(\omega)}) = \frac{\rho(R_{GS})}{\left(1 + \sqrt{1 - \rho(R_{GS})}\right)^2}.$$
The over-relaxation parameter can also be found experimentally. Obtaining the spectral radius of the Gauss–Seidel iteration matrix analytically is not always possible; then the only choice is to find it numerically. Since $\rho(R_{GS}) = (\rho(R_J))^2$, the GS method converges twice as fast as the Jacobi method. The SOR($\omega_{opt}$) method is faster than both the Jacobi and Gauss–Seidel methods for the system of Equations (9).
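In practice, this can be carried out numerically. The sketch below (illustrative; it assumes the splitting $A = D + L + U$ introduced above and uses MATLAB's eigs to estimate the spectral radius) computes $\rho(R_J)$, converts it to $\rho(R_{GS})$ via $\rho(R_{GS}) = (\rho(R_J))^2$, and evaluates the two formulas above:

```matlab
% Numerical estimate of the optimal SOR parameter from the Jacobi
% spectral radius (illustrative; A = D + L + U as in the text).
D  = diag(diag(A));
RJ = -D \ (A - D);                          % Jacobi matrix -D^{-1}(L+U)
rhoJ   = abs(eigs(RJ, 1, 'largestabs'));    % spectral radius estimate
rhoGS  = rhoJ^2;                            % rho(R_GS) = rho(R_J)^2
w_opt  = 2 / (1 + sqrt(1 - rhoGS));         % optimal over-relaxation
rhoSOR = rhoGS / (1 + sqrt(1 - rhoGS))^2;   % spectral radius at w_opt
```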

3. Finite Difference Method

The basic idea of the FDM is to discretize the partial differential equation by replacing the partial derivatives with finite difference approximations (see [80]). We illustrate the scheme with an acoustic wave equation. The effectiveness of this method is tested on acoustic wave equations with known analytical solutions using MATLAB software, and the derived numerical results show that the method produces accurate results.
A three-dimensional region can be divided into small regions with increments in the $x$, $y$, and $z$ directions given as $\Delta x$, $\Delta y$, and $\Delta z$, with $\Delta t$ the time interval for the time discretization. Each nodal point is designated by a numbering scheme $i$, $j$, $k$, and $n$, where $i$, $j$, and $k$ index increases in $x$, $y$, and $z$, respectively, and $n$ indexes increases in $t$, as shown in Figure 1. The value of $v$ at each nodal point $(x_i, y_j, z_k, t_n)$ represents the average over the surrounding hatched region. A suitable finite difference equation for the interior nodes of the three-dimensional system is obtained by considering the acoustic wave equation at the nodal point $(i, j, k)$ with the current time index $n$ as
$$\left.\frac{\partial^2 v}{\partial t^2}\right|_{i,j,k,n} + \gamma(x_i, y_j, z_k)\, v(x_i, y_j, z_k, t_n) = \left.\frac{\partial^2 v}{\partial x^2}\right|_{i,j,k,n} + \left.\frac{\partial^2 v}{\partial y^2}\right|_{i,j,k,n} + \left.\frac{\partial^2 v}{\partial z^2}\right|_{i,j,k,n} + p(x_i, y_j, z_k, t_n).$$
The second-order central difference approximations at the nodal point $(i, j, k, n)$ are
$$\left.\frac{\partial^2 v}{\partial t^2}\right|_{i,j,k,n} \approx \frac{v_{i,j,k}^{(n+1)} - 2v_{i,j,k}^{(n)} + v_{i,j,k}^{(n-1)}}{(\Delta t)^2}, \quad \left.\frac{\partial^2 v}{\partial x^2}\right|_{i,j,k,n} \approx \frac{v_{i+1,j,k}^{(n)} - 2v_{i,j,k}^{(n)} + v_{i-1,j,k}^{(n)}}{(\Delta x)^2},$$
$$\left.\frac{\partial^2 v}{\partial y^2}\right|_{i,j,k,n} \approx \frac{v_{i,j+1,k}^{(n)} - 2v_{i,j,k}^{(n)} + v_{i,j-1,k}^{(n)}}{(\Delta y)^2}, \quad \left.\frac{\partial^2 v}{\partial z^2}\right|_{i,j,k,n} \approx \frac{v_{i,j,k+1}^{(n)} - 2v_{i,j,k}^{(n)} + v_{i,j,k-1}^{(n)}}{(\Delta z)^2}.$$
The finite difference approximation of the acoustic wave equation for interior regions can be expressed as
$$v_{i,j,k}^{(n+1)} = 2v_{i,j,k}^{(n)} - v_{i,j,k}^{(n-1)} + (\Delta t)^2 \left( \frac{v_{i+1,j,k}^{(n)} - 2v_{i,j,k}^{(n)} + v_{i-1,j,k}^{(n)}}{(\Delta x)^2} + \frac{v_{i,j+1,k}^{(n)} - 2v_{i,j,k}^{(n)} + v_{i,j-1,k}^{(n)}}{(\Delta y)^2} + \frac{v_{i,j,k+1}^{(n)} - 2v_{i,j,k}^{(n)} + v_{i,j,k-1}^{(n)}}{(\Delta z)^2} - \gamma(x_i, y_j, z_k)\, v(x_i, y_j, z_k, t_n) + p(x_i, y_j, z_k, t_n) \right). \quad (36)$$
Let $\Delta x = \Delta y = \Delta z = h$. We can write Equation (36) as
$$v_{i,j,k}^{(n+1)} = 2v_{i,j,k}^{(n)} - v_{i,j,k}^{(n-1)} + \left(\frac{\Delta t}{h}\right)^2 \left( v_{i+1,j,k}^{(n)} + v_{i-1,j,k}^{(n)} + v_{i,j+1,k}^{(n)} + v_{i,j-1,k}^{(n)} + v_{i,j,k+1}^{(n)} + v_{i,j,k-1}^{(n)} - 6v_{i,j,k}^{(n)} \right) - (\Delta t)^2 \gamma(x_i, y_j, z_k)\, v(x_i, y_j, z_k, t_n) + (\Delta t)^2 p(x_i, y_j, z_k, t_n). \quad (37)$$
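A vectorized MATLAB sketch of one explicit time step (37) on the interior nodes follows; the array names v, vm, gam, and pn and the index conventions are assumptions of this illustration:

```matlab
% One explicit time step of Equation (37) on the interior nodes.
% Illustrative names: v = v^(n), vm = v^(n-1); gam and pn are gamma and p
% sampled on the grid; arrays are 1-based, so interior indices are 2:N.
r = (dt/h)^2;                          % (Delta t / h)^2
i = 2:Nx; j = 2:Ny; k = 2:Nz;
lap = v(i+1,j,k) + v(i-1,j,k) + v(i,j+1,k) + v(i,j-1,k) ...
    + v(i,j,k+1) + v(i,j,k-1) - 6*v(i,j,k);
vp  = 2*v(i,j,k) - vm(i,j,k) + r*lap ...
    - dt^2 * gam(i,j,k) .* v(i,j,k) + dt^2 * pn(i,j,k);   % v^(n+1)
```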
In the same way, higher-order approximations with more accuracy for the boundary and interior nodes are also obtained.
The purpose of this paper is to develop numerical methods and investigate regularization techniques to solve certain ill-posed problems for the 3D acoustic wave equation.

3.1. Reducing to Cube Domains

We first discretize the BVP (1)–(5), (7) in all three $(x, y, z)$ dimensions on a uniform grid with $N_x \times N_y \times N_z$ grid points, considering a cube domain where $L = W = H$. With $\Delta x = \Delta y = \Delta z$, we can separate our region into subintervals $N_x = L/\Delta x$, $N_y = W/\Delta y$, and $N_z = H/\Delta z$ along the $x$, $y$, and $z$ axes with the current time frame $t$. The goal is to approximate all the solutions $v_{i,j,k}^{(n)}$, where $0 \le i \le N_x$, $0 \le j \le N_y$, $0 \le k \le N_z$, and $t > 0$.
As seen from Equation (37), any point $v_{i,j,k}^{(n)}$ in the region is related to the six points surrounding it. Consider a sketch of a region where $N_x = 3$, $N_y = 3$, and $N_z = 3$; the cross sections of our cube can be viewed at different $z$ values. Note that many of the values in this region are already defined. From the boundary conditions, it is known that $v_{0,j,k}^{(n)} = b_1(y_j, z_k, t_n)$, $v_{N_x,j,k}^{(n)} = b_2(y_j, z_k, t_n)$, $v_{i,0,k}^{(n)} = c_1(x_i, z_k, t_n)$, $v_{i,N_y,k}^{(n)} = c_2(x_i, z_k, t_n)$, $v_{i,j,0}^{(n)} = f(x_i, y_j, t_n)$, and $v_z(x_i, y_j, 0, t_n) = g(x_i, y_j, t_n)$. The remaining $(N_x - 1)(N_y - 1)N_z$ points will be approximated by building a linear system of equations. We create a system of $(N_x - 1)(N_y - 1)N_z = 12$ equations, one for each solution at an internal point of our cube, by iterating through all possible values of $i$, $j$, and $k$, where $0 < i < N_x$, $0 < j < N_y$, and $0 < k \le N_z$.
Corollary 2.
We use the forward finite difference with respect to time $t$ to solve the problem (1)–(5), (7) in the finite difference approximation (37):
$$v_t(x_i, y_j, z_k, 0) \approx \frac{v_{i,j,k}^{(1)} - v_{i,j,k}^{(0)}}{\Delta t} = a_2(x_i, y_j, z_k).$$
Corollary 3.
We use the forward finite difference with respect to $z$ to solve the problem (1)–(5), (7) in the finite difference approximation (37):
$$v_z(x_i, y_j, 0, t_n) \approx \frac{v_{i,j,1}^{(n)} - v_{i,j,0}^{(n)}}{\Delta z} = g(x_i, y_j, t_n).$$
For example, if we work with $N_x = 3$, $N_y = 3$, and $N_z = 3$ subintervals, the system of linear equations can be written with the corresponding matrices and vectors as
$$Av = f - h^2 p, \quad (38)$$
where v is the vector of approximate solutions at each point in the domain, A is the coefficient matrix of these solutions, f is the boundary and initial condition vector at these points, and p is the vector of the source function. Although Equation (38) is the same as for the two-dimensional case, the coefficient matrix A and the boundary and initial condition vector f will have some different patterns.

3.2. Time Adaptivity

Time-adaptivity algorithms adjust the time increment depending on the medium, and by employing intermediate time steps, stability limits can be satisfied on each subregion of the domain. These intermediate time intervals can be chosen depending on the algorithm used. According to [81], the discretization value $\Delta t$ of the subregion with the lowest propagation velocity ensures stability.

3.3. Direct or Iterative Solution

For a system of small unknowns ( N x × N y × N z ) , the direct Gaussian elimination method can be used to solve the above system of equations. Iterative methods achieve a better result for large linear systems. According to [82], the accuracy of the numerical results greatly depends on the computational grid for all numerical methods based on the grid. For accuracy, a grid-converged solution would be preferred (i.e., the solution does not change significantly when more grid points are used as one approaches a tolerance point). For this work, three different iterative techniques are proposed to be used. Details of each iterative technique are provided in the following.
If we apply Equation (28) or (29) to solve the system of finite difference equations for the 3D acoustic wave equation, we obtain the Jacobi iteration formula (see [32]):
$$v_{i,j,k}^{(n+1)} = 2v_{i,j,k}^{(n)} - v_{i,j,k}^{(n-1)} + \left(\frac{\Delta t}{h}\right)^2 \left( v_{i+1,j,k}^{(n)} + v_{i-1,j,k}^{(n)} + v_{i,j+1,k}^{(n)} + v_{i,j-1,k}^{(n)} + v_{i,j,k+1}^{(n)} + v_{i,j,k-1}^{(n)} - 6v_{i,j,k}^{(n)} \right) - (\Delta t)^2 \gamma(x_i, y_j, z_k)\, v(x_i, y_j, z_k, t_n) + (\Delta t)^2 p(x_i, y_j, z_k, t_n). \quad (39)$$
The superscript $n$ is an iterative index. To produce $v_{i,j,k}$ at $t \ge 0$, we set the initial iterative guess at $n = 0$, and the iteration improves it successively. From Equation (39), we find the next iterate $(n+1)$ for each grid point $(i, j, k, t)$ across all points of the grid in the horizontal rows. When the iteration over all interior grid points is completed, the difference between the vector of the next iterate $v^{n+1}$ and that of the previous iterate $v^n$ is calculated. We set a predefined condition (tolerance) for the iteration to converge; once the tolerance is met, the iteration ends and the solution of (39) is $v^{n+1}$; otherwise, the iterations continue, i.e.,
$$\|v^{n+1} - v^n\| < tolerance.$$
If we apply Equation (31) or (32) to solve the system of finite difference equations for the 3D acoustic wave equation, we obtain the Gauss–Seidel iteration formula (see [82]):
$$v_{i,j,k}^{(n+1)} = 2v_{i,j,k}^{(n)} - v_{i,j,k}^{(n-1)} + \left(\frac{\Delta t}{h}\right)^2 \left( v_{i+1,j,k}^{(n)} + v_{i-1,j,k}^{(n+1)} + v_{i,j+1,k}^{(n)} + v_{i,j-1,k}^{(n+1)} + v_{i,j,k+1}^{(n)} + v_{i,j,k-1}^{(n+1)} - 6v_{i,j,k}^{(n)} \right) - (\Delta t)^2 \gamma(x_i, y_j, z_k)\, v(x_i, y_j, z_k, t_n) + (\Delta t)^2 p(x_i, y_j, z_k, t_n). \quad (40)$$
As can be seen in Equation (40), the values $v_{i-1,j,k}$, $v_{i,j-1,k}$, and $v_{i,j,k-1}$ are already updated as one moves through the grid to reach the grid point $(i, j, k)$. The implementation of this iteration method follows the Jacobi method.
The most widely used iteration method is the SOR($\omega_{opt}$) method, which builds on the Gauss–Seidel method. With the aim of speedy convergence, a relaxation parameter $\omega$ is included in the Gauss–Seidel iteration. Using (33) or (34), we obtain the SOR($\omega_{opt}$) iteration formula (see [83]):
$$v_{i,j,k}^{n+1} = (1 - \omega)\, v_{i,j,k}^{n} + \omega \Bigg( 2v_{i,j,k}^{(n)} - v_{i,j,k}^{(n-1)} + \left(\frac{\Delta t}{h}\right)^2 \Big( v_{i+1,j,k}^{(n)} + v_{i-1,j,k}^{(n+1)} + v_{i,j+1,k}^{(n)} + v_{i,j-1,k}^{(n+1)} + v_{i,j,k+1}^{(n)} + v_{i,j,k-1}^{(n+1)} - 6v_{i,j,k}^{(n)} \Big) - (\Delta t)^2 \gamma(x_i, y_j, z_k)\, v(x_i, y_j, z_k, t_n) + (\Delta t)^2 p(x_i, y_j, z_k, t_n) \Bigg), \quad (41)$$
where the relaxation parameter is in the range $0 < \omega < 2$. The implementation of the SOR($\omega_{opt}$) method follows the first two iteration methods.
To solve Equation (9) using an iterative method, the main cost is performing the matrix-vector product $A \cdot q$. In practice, however, the method is made matrix-free: the matrix $A$ is never generated or stored. To produce the action of $A$ on $q$, a MATLAB code can be created using the finite difference algorithm.
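A sketch of such a matrix-free product is given below: instead of assembling $A$, the seven-point stencil is applied directly to $q$ reshaped as a 3D grid (the grid size n, the spacing h, and the zero Dirichlet padding are assumptions of this illustration, not the code used for the experiments):

```matlab
% Matrix-free action of the discretized operator: y = A*q without ever
% assembling A. Grid size n, spacing h, and zero Dirichlet padding are
% assumptions of this illustration.
function y = afun(q, n, h, gam)
    Q = zeros(n+2, n+2, n+2);             % pad with zero boundary values
    Q(2:n+1, 2:n+1, 2:n+1) = reshape(q, n, n, n);
    i = 2:n+1;
    AQ = (6*Q(i,i,i) - Q(i+1,i,i) - Q(i-1,i,i) - Q(i,i+1,i) ...
        - Q(i,i-1,i) - Q(i,i,i+1) - Q(i,i,i-1)) / h^2 ...
        + gam .* Q(i,i,i);                % gam sampled on the n^3 grid
    y = AQ(:);
end
```

A handle of this form can then be passed to an iterative solver, so that only a few grid-sized arrays are ever stored.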
MATLAB programs are developed for all three iterative techniques using the finite difference method, with the Dirichlet and Neumann boundary conditions applied at the boundary of the domain. The results of our discretization and iterative approximations for the sample problem can be examined with a larger mesh size in different time frames. Our Jacobi, Gauss–Seidel, and SOR($\omega_{opt}$) iterations use an RMS residual tolerance of $10^{-4}$. For the values $N_x = 100$, $N_y = 100$, and $N_z = 100$ in different time frames, the time interval $\Delta t = 0.00405$ is used for the Jacobi, Gauss–Seidel, and SOR($\omega_{opt}$) iterative methods for stability. For the values $N_x = 200$, $N_y = 200$, and $N_z = 200$ in different time frames, the time interval $\Delta t = 0.002$ is used for the SOR($\omega_{opt}$) iterative method for stability in test problem 1.
We compare the exact solution of the continuous problem with the solution of the discretized problem computed using the iteration techniques.

4. Numerical Experiments

4.1. Test Problem 1: A Known Analytical Solution

In test problem 1, we set
$$p(x,y,z,t) = (\gamma - 4)\cos(t)\sinh(x)\sinh(y)\sinh(z),$$
$$a_1(x,y,z) = \sinh(x)\sinh(y)\sinh(z),$$
$$a_2(x,y,z) = 0,$$
$$b_1(y,z,t) = 0,$$
$$b_2(y,z,t) = \cos(t)\sinh(1)\sinh(y)\sinh(z),$$
$$c_1(x,z,t) = 0,$$
$$c_2(x,z,t) = \cos(t)\sinh(x)\sinh(1)\sinh(z),$$
$$f(x,y,t) = 0,$$
$$g(x,y,t) = \cos(t)\sinh(x)\sinh(y),$$
$$q(x,y,t) = \cos(t)\sinh(x)\sinh(y)\sinh(1),$$
$$\gamma(x,y,z) = \sinh(x)\sinh(y)\sinh(z),$$
$$c = 1.$$
These functions are chosen such that the continuation problem (1)–(6) has the known analytical solution
$$v(x,y,z,t) = \cos(t)\sinh(x)\sinh(y)\sinh(z).$$
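This choice can be spot-checked: since $v_{tt} = -v$ and $\Delta v = 3v$ for this solution, Equation (1) with $c = 1$ forces $p = (\gamma - 4)v$. A short MATLAB check (the sample point is an arbitrary choice for the illustration):

```matlab
% Spot check of test problem 1: the residual of Equation (1) with c = 1
% should vanish at any sample point.
x = 0.3; y = 0.7; z = 0.5; t = 1.2;
v   = cos(t)*sinh(x)*sinh(y)*sinh(z);     % analytical solution
gam = sinh(x)*sinh(y)*sinh(z);            % gamma(x,y,z)
p   = (gam - 4)*cos(t)*sinh(x)*sinh(y)*sinh(z);
vtt = -v;                                 % exact v_tt
lap = 3*v;                                % exact Laplacian of v
residual = vtt - (lap - gam*v + p)        % equals 0 up to round-off
```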

4.2. Results

From Table 2, we can check the numerical performance of iterative methods, Jacobi, Gauss–Seidel, and SOR ( ω o p t ) (with relaxation parameter ω = 1.99 ) in three dimensions to solve the acoustic wave equation for the values of N x = 100 , N y = 100 , and N z = 100 in different time frames t.
In Table 2, the first column represents the mesh size, the second column shows the current time t, the third column represents the error of the numerical method, the fourth column shows the error of the inverse problem, and the fifth column shows the number of iterations taken by the Jacobi, Gauss–Seidel, and SOR ( ω o p t ) iterative methods for the chosen tolerance of 10 4 until convergence in the relative residual. The last column shows the wall clock time for each and every run.
From Table 3, we can check the L 1 , L 2 , and L norms of the Jacobi, Gauss–Seidel, and SOR ( ω o p t ) methods in three dimensions to solve the acoustic wave equation for the values of N x = 100 , N y = 100 , and N z = 100 in different time frames. Here, the norm L 2 is the spectral radius of the matrix A.
From Table 4, we can check the numerical performance of the 3D—SOR ( ω o p t ) method in three dimensions to solve the acoustic wave equation for the values of N x = 100 , N y = 100 , and N z = 100 with several relaxation parameters ω in time t = 3.5 and for the values of N x = 200 , N y = 200 , and N z = 200 with relaxation parameters ω = 1.99 in different time frames t.
In Table 4, the first column represents the mesh size, the second column is the relaxation parameter, the third column shows the current time t, the fourth column represents the error of the numerical method, the fifth column shows the error of the inverse problem, and the sixth column shows the number of iterations taken by the SOR ( ω o p t ) iterative method for the chosen tolerance of 10 4 until convergence in the relative residual. The last column shows the wall clock time for each and every run.
From Table 5, we can check the L 1 , L 2 , and L norms of the SOR ( ω o p t ) method in three dimensions to solve the acoustic wave equation for the values of N x = 200 , N y = 200 , and N z = 200 in different time frames. Here, the norm L 2 is the spectral radius of the matrix A.

4.3. Comparison of Iterative Methods

As mentioned above, we have shown that sophisticated iterative methods have converged in fewer iterations within a shorter run time of the wall clock. Now, our aim is to characterize the differences in convergence rate further in terms of iterations. To do that, we have taken the spectral radius of the iteration matrix, from each method with the same grid size of N x × N y × N z in different time frames t. Then we plot the relative residuals versus the iteration number and compare the results of each method, shown in Figure 2 and Figure 3 of this result with two visualizations. Figure 2a and Figure 3a depict the convergence of each iterative method with the grid size 100 × 100 × 100 at time t = 0.05 , t = 1 , and Figure 2b and Figure 3b show the same data, but only for the first 1000 iterations to take a closer look at the iterative methods, which converge much faster.
From Figure 2 and Figure 3, we can clearly see that the Jacobi and Gauss–Seidel iterative methods converge much more slowly than the SOR($\omega_{opt}$) iterative method and require more iterations for convergence.

4.4. Comparison of Analytical and Numerical Solutions

From Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8, we can see the sound pressure distribution from low and high levels in a three-dimensional cube for the analytical and numerical solutions of test problem 1.
We plot and compare the analytical and numerical solutions shown in Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 with two visualizations of the result. Figure 4a, Figure 5a, Figure 6a, Figure 7a, and Figure 8a show the sound pressure distribution from low to high level and from high to low level in a cube for the grid size $100 \times 100 \times 100$ at times $t = 0.05$, $t = 2.5$, $t = 4.25$, $t = 1$, and $t = 6$, and Figure 4b, Figure 5b, Figure 6b, Figure 7b, and Figure 8b show the same result, but for the numerical solution.

4.5. Error Graphs

We plot the error difference between the analytical and numerical solutions, shown in Figure 9 and Figure 10. Figure 9a,b and Figure 10a,b depict the error difference between the analytical and numerical solutions at time t = 0.05 , t = 1 , t = 2.5 , t = 4.25 for the size of the grid 100 × 100 × 100 .
From Table 6, we can check the maximum error between the analytical and numerical solution using the Jacobi, Gauss–Seidel, and SOR ( ω o p t ) methods in three dimensions to solve the acoustic wave equation for the values of N x = 100 , N y = 100 , and N z = 100 in different time frames t.
The results of our discretization and iterative approximations with a larger grid size in different time frames for test problem 2 can be examined. Our SOR($\omega_{opt}$) iterations use an RMS residual tolerance of $10^{-6}$. For the values $N_x = 100$, $N_y = 100$, and $N_z = 100$ in different time frames, the time interval $\Delta t = 0.00412$ is used for the SOR($\omega_{opt}$) iterative method for stability.
We compare the exact solution of the continuous problem with the solution of the discretized problem computed using the SOR($\omega_{opt}$) iteration technique.

4.6. Test Problem 2: A Known Analytical Solution

In test problem 2, we set
$$p(x,y,z,t) = (\gamma + \pi^2)\cos(\pi t)\sin(\pi x)\sin(\pi y)(z + 1),$$
$$a_1(x,y,z) = \sin(\pi x)\sin(\pi y)(z + 1),$$
$$a_2(x,y,z) = 0,$$
$$b_1(y,z,t) = 0,$$
$$b_2(y,z,t) = 0,$$
$$c_1(x,z,t) = 0,$$
$$c_2(x,z,t) = 0,$$
$$f(x,y,t) = \cos(\pi t)\sin(\pi x)\sin(\pi y),$$
$$g(x,y,t) = \cos(\pi t)\sin(\pi x)\sin(\pi y),$$
$$q(x,y,t) = 2\cos(\pi t)\sin(\pi x)\sin(\pi y),$$
$$\gamma(x,y,z) = \sin(\pi x)\sin(\pi y)\sin(\pi z),$$
$$c = 1.$$
These functions are chosen such that the continuation problem (1)–(6) has the known analytical solution
$$v(x,y,z,t) = \cos(\pi t)\sin(\pi x)\sin(\pi y)(z + 1).$$

4.7. Results

From Table 7, we can check the numerical performance of the 3D—SOR ( ω o p t ) method in three dimensions to solve the acoustic wave equation for the values of N x = 100 , N y = 100 , and N z = 100 in different time frames t.
The top panel of Table 7 shows the results of the 3D-SOR ( ω o p t ) iterative method for test problem 2. The first column represents the size of the mesh, the second column shows the relaxation parameter, the third column shows the current time t, the fourth column represents the error of the numerical method, the fifth column shows the error of the inverse problem, and the sixth column shows the number of iterations taken by the SOR ( ω o p t ) iterative method for the chosen tolerance of 10 6 until convergence in the relative residual. The last column shows the wall clock time for each and every run.
From Table 8, we can check the L 1 , L 2 , and L norms of the SOR ( ω o p t ) method in three dimensions to solve the acoustic wave equation for the values of N x = 100 , N y = 100 , and N z = 100 in different time frames. Here, the norm L 2 is the spectral radius of the matrix A.

4.8. Comparison of Analytical and Numerical Solutions

From Figure 11, Figure 12 and Figure 13, we can see the sound pressure distribution from low and high levels in a three-dimensional cube for the analytical and numerical solutions of test problem 2.
We plot and compare the analytical and numerical solutions shown in Figure 11, Figure 12 and Figure 13 with two visualizations of the result. Figure 11a, Figure 12a, and Figure 13a show the sound pressure distribution from low to high level and from high to low level in a cube for the size of the grid 100 × 100 × 100 at time t = 0.05 , t = 0.75 , t = 1.75 , and Figure 11b, Figure 12b, and Figure 13b show the same result, but for the numerical solution.

4.9. Error Graphs

We plot the error difference between the analytical and numerical solutions, shown in Figure 14 with two visualizations of this result. Figure 14a depicts the error difference between the analytical and numerical solutions at time t = 0.05 for the size of the grid 100 × 100 × 100 , and Figure 14b shows the same result, but at time t = 0.75 .
From Table 9, we can check the maximum error between the analytical and numerical solution using the SOR ( ω o p t ) method in three dimensions to solve the acoustic wave equation for the values of N x = 100 , N y = 100 , and N z = 100 in different time frames t.

5. Discussion

In this paper, using an explicit finite difference method in the time domain, we found the numerical solution of the 3D acoustic wave equation in a cube through three different iterative techniques. We compared our numerical results with the known analytical solution through numerical experiments, checked the results for stability with larger grid sizes, and identified the best iterative technique among the three.
From Table 2 and Table 3, we compare the numerical performance of the Jacobi, Gauss–Seidel, and SOR($\omega_{opt}$) iterative methods in three dimensions for solving the acoustic wave equation at different times for test problem 1. We observe that all three iterative methods are accurate; however, the Jacobi iterative method is much slower than the other two and does not converge in a reasonable amount of time for the higher values of $N_x$, $N_y$, and $N_z$. We checked the numerical results of the Gauss–Seidel iterative method for the same values of $N_x$, $N_y$, and $N_z$: compared with the Jacobi iterative method, the Gauss–Seidel iterative method requires about half as many iterations and a shorter execution time. However, the number of iterations required by this method is still unacceptable for a larger grid size.
The SOR($\omega_{opt}$) iterative method can solve large linear systems faster than the Jacobi and Gauss–Seidel methods, and its convergence for the higher values of $N_x$, $N_y$, and $N_z$ is significantly faster than that of the other two iterative methods. We saw that the SOR($\omega_{opt}$) method has a much lower iteration count and shorter runtime with relaxation parameter $\omega = 1.99$ than the Jacobi and Gauss–Seidel methods. Specifically, for $N_x = 100$, $N_y = 100$, and $N_z = 100$ at time $t = 3.5$, the Gauss–Seidel method took 10,891 iterations with an execution time of 405.187567 s, versus the SOR($\omega_{opt}$) method, which took only 647 iterations and 24.908182 s with the relaxation parameter $\omega = 1.99$. We checked the SOR($\omega_{opt}$) method at several values of the relaxation parameter $\omega$ at time $t = 3.5$ (see Table 4) and observed that, at $\omega = 1.99$ (see Table 2), this method requires the fewest iterations and the shortest execution times.
From Table 6, we compare the error between the analytical and numerical solutions at different times for the Jacobi, Gauss–Seidel, and SOR (ω_opt) iterative schemes applied to test problem 1. For all three schemes, the error is slightly smaller at time t = 1.5 than at the other times we checked.
From Table 7 and Table 8, we assess the numerical performance of the SOR (ω_opt) iterative method for the three-dimensional acoustic wave equation at different times for test problem 2. From Table 9, we compare the corresponding error between the analytical and numerical solutions; the error is slightly smaller at time t = 1.75 than at the other times we checked.
In this work, we presented numerical results for the cubic domain; we also checked the method on other geometric domains in MATLAB and observed the same efficiency. Based on our numerical results, the finite difference method is relatively simple to implement for other geometric domains.
The finite difference method (FDM) nevertheless has limitations related to complex geometries, boundary conditions, and accuracy: the accuracy depends on the grid spacing and the time step, and complex shapes with curved boundaries are difficult to represent on a rectangular grid.

6. Conclusions

This research addressed the reconstruction of the acoustic pressure in a cube through the numerical solution of the acoustic wave equation in the time domain. Within the finite difference method (FDM), three iterative schemes were compared: the Jacobi, Gauss–Seidel, and SOR (ω_opt) schemes. The results obtained in this work clearly show that all three iterative schemes are accurate, and that the SOR (ω_opt) scheme is the most efficient, requiring fewer iterations and shorter execution times and converging faster than the other two techniques.

Author Contributions

Methodology, G.B., S.C., S.K., and M.S.; software, S.C.; formal analysis, G.B., S.C., S.K., and M.S.; writing—original draft preparation, S.C.; writing—review and editing, G.B., S.C., S.K., and M.S.; supervision, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

Research by authors G.B., S.K., and M.S. has been funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP 19678469). The work of author S.C. is supported by the Mathematical Centre in Akademgorodok under the Agreement No. 075-15-2022-281 with the Ministry of Science and Higher Education of the Russian Federation.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

FDM     finite difference method
PDEs    partial differential equations
SLAE    system of linear algebraic equations
SOR     successive over-relaxation

References

  1. John, F. Continuous dependence on data for solutions of partial differential equations with a prescribed bound. Commun. Pure Appl. Math. 1960, 13, 551–585. [Google Scholar] [CrossRef]
  2. John, F. Differential Equations with Approximate and Improper Data; Lectures; New York University: New York, NY, USA, 1995. [Google Scholar]
  3. Douglas, J. A Numerical Method for Analytic Continuation, Boundary Value Problems in Differential Equations; University of Wisconsin Press: Madison, WI, USA, 1960; pp. 179–189. [Google Scholar]
  4. Cannon, J.R. Error estimates for some unstable continuation problems. J. Soc. Ind. Appl. Math. 1964, 12, 270–284. [Google Scholar] [CrossRef]
  5. Cannon, J.R.; Miller, K. Some problems in numerical analytic continuation. J. SIAM Numer. Anal. 1965, 2, 87–98. [Google Scholar] [CrossRef]
  6. Lavrentiev, M.M.; Romanov, V.G.; Shishatskii, S.P. Ill-Posed Problems of Mathematical Physics and Analysis; American Mathematical Society: Providence, RI, USA, 1986. [Google Scholar]
  7. Hadamard, J. Lectures on Cauchy’s Problem in Linear Partial Differential Equations; Yale Univ. Press: New Haven, CT, USA, 1923. [Google Scholar]
  8. Payne, L.E. Improperly Posed Problems in Partial Differential Equation; SIAM: Philadelphia, PA, USA, 1975. [Google Scholar]
  9. Finch, D.; Patch, S.K. Determining a function from its mean values over a family of spheres. SIAM J. Math. Anal. 2004, 35, 1213–1240. [Google Scholar] [CrossRef]
  10. Natterer, F. Photo-acoustic inversion in convex domains. Inverse Probl. Imaging. 2012, 6, 1–6. [Google Scholar] [CrossRef]
  11. Symes, W.W. A trace theorem for solutions of the wave equation, and the remote determination of acoustic sources. Math. Meth. Appl. Sci. 1983, 5, 131–152. [Google Scholar] [CrossRef]
  12. Romanov, V.G. An asymptotic expansion for a solution to viscoelasticity equations. Eurasian J. Math. Comput. Appl. 2013, 1, 42–62. [Google Scholar] [CrossRef]
  13. Romanov, V.G. Estimation of the solution stability of the Cauchy problem with the data on a time-like plane. J. Appl. Industr. Math. 2018, 12, 531–539. [Google Scholar] [CrossRef]
  14. Romanov, V.G. Regularization of a solution to the Cauchy problem with data on a timelike plane. Sib. Math. J. 2018, 59, 694–704. [Google Scholar] [CrossRef]
  15. Romanov, V.G.; Bugueva, T.V.; Dedok, V.A. Regularization of the solution of the Cauchy problem: The quasi-reversibility method. J. Appl. Industr. Math. 2018, 12, 716–728. [Google Scholar] [CrossRef]
  16. Kabanikhin, S.I.; Shishlenin, M.A. Theory and numerical methods for solving inverse and ill-posed problems. J. Inverse Ill-Posed Probl. 2019, 27, 453–456. [Google Scholar] [CrossRef]
  17. Kabanikhin, S.I.; Scherzer, O.; Shishlenin, M.A. Iteration methods for solving a two-dimensional inverse problem for a hyperbolic equation. J. Inverse Ill-Posed Probl. 2003, 11, 87–109. [Google Scholar] [CrossRef]
  18. Kabanikhin, S.I. Inverse and Ill-Posed Problems; De Gruyter: Berlin, Germany, 2012. [Google Scholar]
  19. Lavrent’ev, M.M.; Savel’ev, L.J. Operator Theory and Ill-Posed Problems; Walter de Gruyter: Berlin, Germany, 2006. [Google Scholar]
  20. Lions, J.L.; Magenes, E. Non-Homogeneous Boundary Value Problems and Applications; Springer: Berlin/Heidelberg, Germany, 1972. [Google Scholar]
  21. Romanov, V.G.; Kabanikhin, S.I. Inverse Problems of Geoelectrics; VNU Science: Utrecht, The Netherlands, 1994. [Google Scholar]
  22. Weston, V.H. Invariant imbedding and wave splitting in R3, Part II. The Green function approach to inverse scattering. Inverse Probl. 1992, 8, 919–947. [Google Scholar] [CrossRef]
  23. Weston, V.H.; He, S. Wave-splitting of the telegraph equation in R3 and its application to the inverse scattering. Inverse Probl. 1993, 9, 789–813. [Google Scholar] [CrossRef]
  24. Romanov, V.G. On a Numerical Method for Solving a Certain Inverse Problem for a Hyperbolic Equation. Sib. Math. J. 1996, 37, 633–655. [Google Scholar] [CrossRef]
  25. Romanov, V.G. A Local Version of the Numerical Method for Solving an Inverse Problem. Sib. Math. J. 1996, 37, 904–918. [Google Scholar] [CrossRef]
  26. Helsing, J.; Johansson, B. Fast reconstruction of harmonic functions from Cauchy data using the Dirichlet-to-Neumann map and integral equations. Inverse Probl. Sci. Eng. 2011, 19, 717–727. [Google Scholar] [CrossRef]
  27. Kabanikhin, S.I.; Nurseitov, D.; Shishlenin, M.A.; Sholpanbaev, B.B. Inverse Problems for the Ground Penetrating Radar. J. Inverse Ill-Posed Probl. 2013, 21, 885–892. [Google Scholar] [CrossRef]
  28. Kabanikhin, S.I.; Gasimov, Y.S.; Nurseitov, D.B.; Shishlenin, M.A.; Sholpanbaev, B.B.; Kasenov, S. Regularization of the continuation problem for elliptic equations. J. Inverse Ill-Posed Probl. 2013, 21, 871–884. [Google Scholar] [CrossRef]
  29. Kabanikhin, S.I.; Shishlenin, M.A. Regularization of the decision prolongation problem for parabolic and elliptic equations from border part. Eurasian J. Math. Comput. Appl. 2014, 2, 81–91. [Google Scholar]
  30. Kabanikhin, S.I.; Shishlenin, M.A.; Nurseitov, D.B.; Nurseitova, A.T.; Kasenov, S.E. Comparative analysis of methods for regularizing an initial boundary value problem for the Helmholtz equation. J. Appl. Math. 2014, 2014, 786326. [Google Scholar] [CrossRef]
  31. Keran, L.; Wenyuan, L. An efficient and high accuracy finite-difference scheme for the acoustic wave equation in 3D heterogeneous media. J. Comput. Sci. 2020, 40, 101063. [Google Scholar] [CrossRef]
  32. Causon, D.M.; Mingham, C.G. Introductory Finite Difference Methods for PDEs; Ventus Publishing ApS: London, UK, 2010; p. 144. [Google Scholar]
  33. Alford, R.; Kelly, K.; Boore, D.M. Accuracy of finite-difference modeling of the acoustic wave equation. Geophysics 1974, 39, 834–842. [Google Scholar] [CrossRef]
  34. Tam, C.K.; Webb, J.C. Dispersion-relation-preserving finite difference schemes for computational acoustics. J. Comput. Phys. 1993, 107, 262–281. [Google Scholar] [CrossRef]
  35. Yang, D.; Tong, P.; Deng, X. A central difference method with low numerical dispersion for solving the scalar wave equation. Geophys. Prospect 2012, 60, 885–905. [Google Scholar] [CrossRef]
  36. Liao, H.L.; Sun, Z.Z. Maximum norm error estimates of efficient difference schemes for second-order wave equations. J. Comput. Appl. Math. 2011, 235, 2217–2233. [Google Scholar] [CrossRef]
  37. Finkelstein, B.; Kastner, R. Finite difference time domain dispersion reduction schemes. J. Comput. Phys. 2017, 221, 422–438. [Google Scholar] [CrossRef]
  38. Zapata, M.; Balam, R.; Urquizo, J. High-order implicit staggered grid finite differences methods for the acoustic wave equation. Numer. Methods Partial Differ. Equ. 2018, 34, 602–625. [Google Scholar] [CrossRef]
  39. Alexandre, J.M.A.; Regina, C.P.L.; Otton, T.S.F.; Elson, M.T. Finite difference method for solving acoustic wave equation using locally adjustable time-steps. Procedia Comput. Sci. 2014, 29, 627–636. [Google Scholar] [CrossRef]
  40. Liu, Y.; Sen, M.K. An implicit staggered-grid finite-difference method for seismic modelling. Geophys. J. Int. 2009, 179, 459–474. [Google Scholar] [CrossRef]
  41. Liu, Y.; Sen, M.K. A new time-space domain high-order finite-difference method for the acoustic wave equation. J. Comput. Phys. 2009, 228, 8779–8806. [Google Scholar] [CrossRef]
  42. Liu, Y.; Sen, M. Acoustic VTI modeling with a time-space domain dispersion-relation-based finite-difference scheme. Geophysics 2010, 75, 11–17. [Google Scholar] [CrossRef]
  43. Liang, W.; Yang, C.; Wang, Y.; Liu, H. Acoustic wave equation modeling with new time-space domain finite difference operators. Chin. J. Geophys. 2013, 56, 3497–3506. [Google Scholar]
  44. Liao, W.; Yong, P.; Dastour, H.; Huang, J. Efficient and accurate numerical simulation of acoustic wave propagation in a 2D heterogeneous media. Appl. Math. Comput. 2018, 321, 385–400. [Google Scholar] [CrossRef]
  45. Young, D.M. Iterative Solution of Large Linear Systems; Academic Press: New York, NY, USA, 1971. [Google Scholar]
  46. Dancis, J. The optimal ω is not best for the SOR iteration method. Linear Algebra Its Appl. 1991, 154–156, 819–845. [Google Scholar] [CrossRef]
  47. Rigal, A. Convergence and optimization of successive overrelaxation for linear systems of equations with complex eigenvalues. J. Comput. Phys. 1979, 32, 10–23. [Google Scholar] [CrossRef]
  48. Hadjidimos, A. Successive overrelaxation (SOR) and related methods. J. Comput. Appl. Math. 2000, 123, 177–199. [Google Scholar] [CrossRef]
  49. Britt, S.; Turkel, E.; Tsynkov, S. A high order compact time/space finite difference scheme for the wave equation with variable speed of sound. J. Sci. Comput. 2018, 76, 777–811. [Google Scholar] [CrossRef]
  50. Mohammadi, V.; Dehghan, M.; Mesgarani, H. The localized RBF interpolation with its modifications for solving the incompressible two-phase fluid flows: A conservative Allen–Cahn–Navier–Stokes system. Eng. Anal. Bound. Elem. 2024, 168, 105908. [Google Scholar] [CrossRef]
  51. Qu, W.; Gu, Y.; Zhang, Y.; Fan, C.M.; Zhang, C. A combined scheme of generalized finite difference method and Krylov deferred correction technique for highly accurate solution of transient heat conduction problems. Int. J. Numer. Methods Eng. 2018, 117, 63–83. [Google Scholar] [CrossRef]
  52. Qu, W.; Fan, C.M.; Zhang, Y. Analysis of three-dimensional heat conduction in functionally graded materials by using a hybrid numerical method. Int. J. Heat Mass Transf. 2019, 145, 118771. [Google Scholar] [CrossRef]
  53. Belonosov, A.; Shishlenin, M.; Klyuchinskiy, D. A comparative analysis of numerical methods of solving the continuation problem for 1D parabolic equation with the data given on the part of the boundary. Adv. Comput. Math. 2019, 45, 735–755. [Google Scholar] [CrossRef]
  54. Chung, E.; Ito, K.; Yamamoto, M. Least squares formulation for ill-posed inverse problems and applications. Appl. Anal. 2021, 101, 5247–5261. [Google Scholar] [CrossRef]
  55. Desiderio, L.; Falletta, S.; Ferrari, M.; Scuderi, L. CVEM-BEM Coupling for the Simulation of Time-Domain Wave Fields Scattered by Obstacles with Complex Geometries. Comput. Methods Appl. Math. 2023, 23, 353–372. [Google Scholar] [CrossRef]
  56. Bzeih, M.; Arwadi, T.E.; Wehbe, A.; Madureira, R.L.R.; Rincon, M.A. A finite element scheme for a 2D-wave equation with dynamical boundary control. Math. Comput. Simul. 2023, 205, 315–339. [Google Scholar] [CrossRef]
  57. Duc, N.V.; Hao, D.N.; Shishlenin, M. Regularization of backward parabolic equations in Banach spaces by generalized Sobolev equations. J. Inverse Ill Posed Probl. 2024, 32, 9–20. [Google Scholar] [CrossRef]
  58. Burman, E.; Delay, G.; Ern, A. The unique continuation problem for the Heat equation discretized with a high-order space-time nonconforming method. SIAM J. Numer. Anal. 2023, 61, 2534–2557. [Google Scholar] [CrossRef]
  59. Dahmen, W.; Monsuur, H.; Stevenson, R. Least squares solvers for ill-posed PDEs that are conditionally stable. ESAIM Math. Model. Numer. Anal. 2023, 57, 2227–2255. [Google Scholar] [CrossRef]
  60. Helin, T. Least Squares approximations in linear statistical inverse learning problems. SIAM J. Numer. Anal. 2024, 62, 2025–2047. [Google Scholar] [CrossRef]
  61. Epperly, E.N. Fast and Forward stable randomized algorithms for linear least-squares problems. SIAM J. Matrix Anal. Appl. 2024, 45, 1782–1804. [Google Scholar] [CrossRef]
  62. Qu, W.; Gu, Y.; Fan, C.M. A stable numerical framework for long-time dynamic crack analysis. Int. J. Solids Struct. 2024, 293, 112768. [Google Scholar] [CrossRef]
  63. Li, H. Projected Newton method for large-scale bayesian linear inverse problems. SIAM J. Optim. 2025, 35, 1439–1468. [Google Scholar] [CrossRef]
  64. Bakanov, G.; Chandragiri, S.; Shishlenin, M.A. Jacobi numerical method for solving 3D continuation problem for wave equation. Sib. Electron. Math. Rep. 2025, 22, 428–442. [Google Scholar]
  65. Kabanikhin, S.I. Definitions and examples of inverse and ill-posed problems. J. Inverse Ill-Posed Probl. 2008, 16, 317–357. [Google Scholar] [CrossRef]
  66. Marin, L.; Elliott, L.; Heggs, P.J.; Ingham, D.B.; Lesnic, D.; Wen, X. Conjugate gradient-boundary element solution to the Cauchy problem for Helmholtz-type equations. Comput. Mech. 2003, 31, 367–377. [Google Scholar] [CrossRef]
  67. Marin, L.; Elliott, L.; Heggs, P.J.; Ingham, D.B.; Lesnic, D.; Wen, X. Comparison of regularization methods for solving the Cauchy problem associated with the Helmholtz equation. Int. J. Numer. Methods Eng. 2004, 60, 1933–1947. [Google Scholar] [CrossRef]
  68. Qin, H.; Wei, T.; Shi, R. Modified Tikhonov regularization method for the Cauchy problem of the Helmholtz equation. J. Comput. Appl. Math. 2009, 224, 39–53. [Google Scholar] [CrossRef]
  69. Reginska, T.; Reginski, K. Approximate solution of a Cauchy problem for the Helmholtz equation. Inverse Probl. 2006, 22, 975–989. [Google Scholar] [CrossRef]
  70. Reginiska, T.; Tautenhahn, U. Conditional stability estimates and regularization with applications to Cauchy problems for the Helmholtz equation. Numer. Funct. Anal. Optim. 2009, 30, 1065–1097. [Google Scholar] [CrossRef]
  71. Isakov, V.; Kindermann, S. Subspaces of stability in the Cauchy problem for the Helmholtz equation. Methods Appl. Anal. 2011, 18, 1–30. [Google Scholar] [CrossRef]
  72. Axelsson, O. Iterative Solution Methods; Cambridge University Press: Cambridge, UK, 1994. [Google Scholar]
  73. Strikwerda, J.C. Finite Difference Schemes and Partial Differential Equations, 2nd ed.; SIAM: Philadelphia, PA, USA, 2004. [Google Scholar]
  74. Saad, Y. Iterative Methods for Sparse Linear Systems; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2003. [Google Scholar]
  75. Smith, G.D. Numerical Solution of Partial Differential Equations Finite Difference Methods, 3rd ed.; Oxford University Press: New York, NY, USA, 1985. [Google Scholar]
  76. Burden, R.L.; Douglas, J.F. Numerical Analysis; Cengage Learning: Toronto, ON, Canada, 2010. [Google Scholar]
  77. Ames, W.F. Numerical Methods for Partial Differential Equations, 2nd ed.; Academic Press: New York, NY, USA, 1977. [Google Scholar]
  78. Demmel, J.W. Applied Numerical Linear Algebra; SIAM: Philadelphia, PA, USA, 1997. [Google Scholar]
  79. Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. Numerical Recipes, 3rd ed.; Cambridge University Press: New York, NY, USA, 2007. [Google Scholar]
  80. Nagel, J.R. Numerical solutions to Poisson equations using the Finite difference method. IEEE Antennas Propag. Mag. 2014, 56, 209–224. [Google Scholar] [CrossRef]
  81. Falk, J.; Tessmer, E.; Gajewski, D. Efficient finite-difference modelling of seismic waves using locally adjustable time steps. Geophys. Prospect. 1998, 46, 603–616. [Google Scholar] [CrossRef]
  82. Vendromin, C.J. Numerical Solutions of Laplace’s Equation for Various Physical Situations; Brock University: St. Catharines, ON, Canada, 2017. [Google Scholar]
  83. Yang, X.I.A.; Mittal, R. Acceleration of the Jacobi Iterative Method by factors exceeding 100 using Scheduled Relaxation. J. Comput. Phys. 2014, 274, 695–708. [Google Scholar] [CrossRef]
Figure 1. Domain Ω. The variable z denotes the depth; x and y are the horizontal variables.
Figure 2. Comparison of three different iterative methods in test problem 1: (a) the relative residual versus the iteration number for each iterative method on the 100 × 100 × 100 grid at time t = 0.05, on a semilog plot; (b) the same data as (a), but only the first 1000 iterations.
Figure 3. Comparison of three different iterative methods in test problem 1: (a) the relative residual versus the iteration number for each iterative method on the 100 × 100 × 100 grid at time t = 1, on a semilog plot; (b) the same data as (a), but only the first 1000 iterations.
Figure 4. For the grid 100 × 100 × 100: (a) analytical solution at t = 0.05; (b) numerical solution at t = 0.05.
Figure 5. For the grid 100 × 100 × 100: (a) analytical solution at t = 1; (b) numerical solution at t = 1.
Figure 6. For the grid 100 × 100 × 100: (a) analytical solution at t = 2.5; (b) numerical solution at t = 2.5.
Figure 7. For the grid 100 × 100 × 100: (a) analytical solution at t = 4.25; (b) numerical solution at t = 4.25.
Figure 8. For the grid 100 × 100 × 100: (a) analytical solution at t = 6; (b) numerical solution at t = 6.
Figure 9. Error between the analytical and numerical solutions (100 × 100 × 100): (a) at t = 0.05; (b) at t = 1.
Figure 10. Error between the analytical and numerical solutions (100 × 100 × 100): (a) at t = 2.5; (b) at t = 4.25.
Figure 11. For the grid 100 × 100 × 100: (a) analytical solution at t = 0.05; (b) numerical solution at t = 0.05.
Figure 12. For the grid 100 × 100 × 100: (a) analytical solution at t = 0.75; (b) numerical solution at t = 0.75.
Figure 13. For the grid 100 × 100 × 100: (a) analytical solution at t = 1.75; (b) numerical solution at t = 1.75.
Figure 14. Error between the analytical and numerical solutions (100 × 100 × 100): (a) at t = 0.05; (b) at t = 0.75.
Table 1. The iteration matrices of the classic iterative methods.

Method          | Iteration matrix
Jacobi          | −D⁻¹(L + U)
Gauss–Seidel    | −(D + L)⁻¹U
SOR (ω_opt)     | −(D + ωL)⁻¹((ω − 1)D + ωU)
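For small model problems, the iteration matrices in Table 1 can be formed explicitly and their spectral radii compared, which is one way to check the convergence ordering observed in our experiments. The sketch below is illustrative only: it uses a small one-dimensional Laplacian as the test matrix (our choice, not the system solved in this paper) and dense linear algebra, which is feasible only for small n.

```matlab
% Illustrative check of Table 1: spectral radii of the classic iteration
% matrices for a small symmetric test matrix (dense; small n only).
n = 50;
A = full(gallery('tridiag', n, -1, 2, -1));   % 1D Laplacian test matrix
D = diag(diag(A));  L = tril(A, -1);  U = triu(A, 1);

omega = 1.5;                                  % any value in (0, 2)
BJ   = -D \ (L + U);                          % Jacobi
BGS  = -(D + L) \ U;                          % Gauss-Seidel
BSOR = -(D + omega*L) \ ((omega - 1)*D + omega*U);   % SOR

fprintf('rho: Jacobi %.4f, Gauss-Seidel %.4f, SOR %.4f\n', ...
        max(abs(eig(BJ))), max(abs(eig(BGS))), max(abs(eig(BSOR))));
```

A smaller spectral radius means faster asymptotic convergence, which is consistent with the iteration counts in Table 2.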
Table 2. Numerical results obtained for test problem 1 using the Jacobi, Gauss–Seidel, and SOR (ω_opt) iterative methods.

3D Jacobi method:
Grid size | t | ‖v − v_exact‖ | ‖q − q_exact‖ | Iterations | Run time (s)
100 × 100 × 100 | 0.05 | 9.9990 × 10⁻⁵ | 1.4358 × 10⁻⁵ | 10,706 | 407.771762
100 × 100 × 100 | 0.5 | 9.9982 × 10⁻⁵ | 1.4194 × 10⁻⁵ | 12,728 | 476.778365
100 × 100 × 100 | 1 | 9.9999 × 10⁻⁵ | 1.4136 × 10⁻⁵ | 15,226 | 584.356846
100 × 100 × 100 | 1.5 | 9.9964 × 10⁻⁵ | 1.4117 × 10⁻⁵ | 16,967 | 612.186274
100 × 100 × 100 | 2.5 | 9.9980 × 10⁻⁵ | 1.4111 × 10⁻⁵ | 18,853 | 717.650728
100 × 100 × 100 | 3.5 | 9.9998 × 10⁻⁵ | 1.4113 × 10⁻⁵ | 19,077 | 729.772418
100 × 100 × 100 | 4.25 | 9.9994 × 10⁻⁵ | 1.4115 × 10⁻⁵ | 18,196 | 710.487008
100 × 100 × 100 | 5.5 | 1.0000 × 10⁻⁴ | 1.4152 × 10⁻⁵ | 14,241 | 548.987611
100 × 100 × 100 | 6 | 9.9980 × 10⁻⁵ | 1.4268 × 10⁻⁵ | 11,522 | 448.695514

3D Gauss–Seidel method:
Grid size | t | ‖v − v_exact‖ | ‖q − q_exact‖ | Iterations | Run time (s)
100 × 100 × 100 | 0.05 | 9.9952 × 10⁻⁵ | 1.4197 × 10⁻⁵ | 6505 | 233.037818
100 × 100 × 100 | 0.5 | 9.9958 × 10⁻⁵ | 1.4106 × 10⁻⁵ | 7563 | 282.721219
100 × 100 × 100 | 1 | 9.9931 × 10⁻⁵ | 1.4068 × 10⁻⁵ | 8869 | 324.869474
100 × 100 × 100 | 1.5 | 9.9954 × 10⁻⁵ | 1.4063 × 10⁻⁵ | 9780 | 352.769056
100 × 100 × 100 | 2.5 | 9.9943 × 10⁻⁵ | 1.4058 × 10⁻⁵ | 10,773 | 402.747680
100 × 100 × 100 | 3.5 | 9.9987 × 10⁻⁵ | 1.4064 × 10⁻⁵ | 10,891 | 405.187567
100 × 100 × 100 | 4.25 | 9.9990 × 10⁻⁵ | 1.4066 × 10⁻⁵ | 10,426 | 391.040137
100 × 100 × 100 | 5.5 | 9.9941 × 10⁻⁵ | 1.4078 × 10⁻⁵ | 8354 | 312.788946
100 × 100 × 100 | 6 | 9.9979 × 10⁻⁵ | 1.4151 × 10⁻⁵ | 6932 | 260.718670

3D SOR (ω_opt) method:
Grid size | t | ‖v − v_exact‖ | ‖q − q_exact‖ | Iterations | Run time (s)
100 × 100 × 100 | 0.05 | 9.9870 × 10⁻⁵ | 1.3004 × 10⁻⁵ | 462 | 18.067391
100 × 100 × 100 | 0.5 | 9.8292 × 10⁻⁵ | 1.2799 × 10⁻⁵ | 509 | 20.671934
100 × 100 × 100 | 1 | 9.9957 × 10⁻⁵ | 1.3021 × 10⁻⁵ | 562 | 21.994781
100 × 100 × 100 | 1.5 | 9.9063 × 10⁻⁵ | 1.2912 × 10⁻⁵ | 600 | 23.676971
100 × 100 × 100 | 2.5 | 9.8889 × 10⁻⁵ | 1.2903 × 10⁻⁵ | 642 | 24.651705
100 × 100 × 100 | 3.5 | 9.9105 × 10⁻⁵ | 1.2933 × 10⁻⁵ | 647 | 24.908182
100 × 100 × 100 | 4.25 | 9.9253 × 10⁻⁵ | 1.2945 × 10⁻⁵ | 627 | 24.489813
100 × 100 × 100 | 5.5 | 9.9756 × 10⁻⁵ | 1.2993 × 10⁻⁵ | 541 | 22.181539
100 × 100 × 100 | 6 | 9.8264 × 10⁻⁵ | 1.2795 × 10⁻⁵ | 482 | 18.853763
Table 3. L_1, L_2, and L_∞ norms obtained for test problem 1 using the Jacobi, Gauss–Seidel, and SOR (ω_opt) iterative methods.

3D Jacobi method:
Grid size | t | L_1 norm | L_2 norm | L_∞ norm
100 × 100 × 100 | 0.05 | 1.02 × 10⁻² | 1.66 × 10⁻² | 8.74 × 10⁻²
100 × 100 × 100 | 0.5 | 8.91 × 10⁻³ | 1.46 × 10⁻² | 7.69 × 10⁻²
100 × 100 × 100 | 1 | 5.44 × 10⁻³ | 8.95 × 10⁻³ | 4.74 × 10⁻²
100 × 100 × 100 | 1.5 | 5.42 × 10⁻⁴ | 9.83 × 10⁻⁴ | 5.89 × 10⁻³
100 × 100 × 100 | 2.5 | 8.78 × 10⁻³ | 1.42 × 10⁻² | 7.25 × 10⁻²
100 × 100 × 100 | 3.5 | 1.02 × 10⁻² | 1.66 × 10⁻² | 8.48 × 10⁻²
100 × 100 × 100 | 4.25 | 4.95 × 10⁻³ | 7.95 × 10⁻³ | 4.04 × 10⁻²
100 × 100 × 100 | 5.5 | 7.18 × 10⁻³ | 1.18 × 10⁻² | 6.21 × 10⁻²
100 × 100 × 100 | 6 | 9.76 × 10⁻³ | 1.60 × 10⁻² | 8.41 × 10⁻²

3D Gauss–Seidel method:
Grid size | t | L_1 norm | L_2 norm | L_∞ norm
100 × 100 × 100 | 0.05 | 1.02 × 10⁻² | 1.67 × 10⁻² | 8.76 × 10⁻²
100 × 100 × 100 | 0.5 | 9.01 × 10⁻³ | 1.47 × 10⁻² | 7.71 × 10⁻²
100 × 100 × 100 | 1 | 5.54 × 10⁻³ | 9.07 × 10⁻³ | 4.76 × 10⁻²
100 × 100 × 100 | 1.5 | 6.39 × 10⁻⁴ | 1.09 × 10⁻³ | 6.08 × 10⁻³
100 × 100 × 100 | 2.5 | 8.68 × 10⁻³ | 1.40 × 10⁻² | 7.22 × 10⁻²
100 × 100 × 100 | 3.5 | 1.01 × 10⁻² | 1.64 × 10⁻² | 8.45 × 10⁻²
100 × 100 × 100 | 4.25 | 4.85 × 10⁻³ | 7.83 × 10⁻³ | 4.01 × 10⁻²
100 × 100 × 100 | 5.5 | 7.28 × 10⁻³ | 1.19 × 10⁻² | 6.23 × 10⁻²
100 × 100 × 100 | 6 | 9.85 × 10⁻³ | 1.61 × 10⁻² | 8.43 × 10⁻²

3D SOR (ω_opt) method:
Grid size | t | L_1 norm | L_2 norm | L_∞ norm
100 × 100 × 100 | 0.05 | 1.03 × 10⁻² | 1.69 × 10⁻² | 8.78 × 10⁻²
100 × 100 × 100 | 0.5 | 9.11 × 10⁻³ | 1.48 × 10⁻² | 7.73 × 10⁻²
100 × 100 × 100 | 1 | 5.64 × 10⁻³ | 9.18 × 10⁻³ | 4.78 × 10⁻²
100 × 100 × 100 | 1.5 | 7.41 × 10⁻⁴ | 1.21 × 10⁻³ | 6.29 × 10⁻³
100 × 100 × 100 | 2.5 | 8.57 × 10⁻³ | 1.39 × 10⁻² | 7.20 × 10⁻²
100 × 100 × 100 | 3.5 | 1.00 × 10⁻² | 1.63 × 10⁻² | 8.43 × 10⁻²
100 × 100 × 100 | 4.25 | 4.75 × 10⁻³ | 7.71 × 10⁻³ | 3.99 × 10⁻²
100 × 100 × 100 | 5.5 | 7.38 × 10⁻³ | 1.20 × 10⁻² | 6.25 × 10⁻²
100 × 100 × 100 | 6 | 9.95 × 10⁻³ | 1.62 × 10⁻² | 8.45 × 10⁻²
Table 4. Numerical results obtained for test problem 1 using the SOR (ω_opt) iterative method.

3D SOR (ω_opt) method:
Grid size | ω | t | ‖v − v_exact‖ | ‖q − q_exact‖ | Iterations | Run time (s)
100 × 100 × 100 | 1.81 | 3.5 | 9.9531 × 10⁻⁵ | 1.3725 × 10⁻⁵ | 1925 | 75.360733
100 × 100 × 100 | 1.83 | 3.5 | 9.9821 × 10⁻⁵ | 1.3736 × 10⁻⁵ | 1780 | 61.252317
100 × 100 × 100 | 1.85 | 3.5 | 9.9812 × 10⁻⁵ | 1.3701 × 10⁻⁵ | 1637 | 56.894357
100 × 100 × 100 | 1.87 | 3.5 | 9.9936 × 10⁻⁵ | 1.3677 × 10⁻⁵ | 1495 | 50.280692
100 × 100 × 100 | 1.89 | 3.5 | 9.9302 × 10⁻⁵ | 1.3542 × 10⁻⁵ | 1355 | 44.859061
100 × 100 × 100 | 1.91 | 3.5 | 9.9894 × 10⁻⁵ | 1.3563 × 10⁻⁵ | 1214 | 38.342488
100 × 100 × 100 | 1.93 | 3.5 | 9.9898 × 10⁻⁵ | 1.3486 × 10⁻⁵ | 1074 | 33.096681
100 × 100 × 100 | 1.95 | 3.5 | 9.9435 × 10⁻⁵ | 1.3323 × 10⁻⁵ | 934 | 29.273534
100 × 100 × 100 | 1.97 | 3.5 | 9.9586 × 10⁻⁵ | 1.3204 × 10⁻⁵ | 792 | 25.753374

3D SOR (ω_opt) method:
Grid size | ω | t | ‖v − v_exact‖ | ‖q − q_exact‖ | Iterations | Run time (s)
200 × 200 × 200 | 1.99 | 0.05 | 9.9883 × 10⁻⁵ | 9.6933 × 10⁻⁶ | 2270 | 601.174088
200 × 200 × 200 | 1.99 | 0.5 | 9.9762 × 10⁻⁵ | 9.6755 × 10⁻⁶ | 2514 | 728.882570
200 × 200 × 200 | 1.99 | 1 | 9.9717 × 10⁻⁵ | 9.6703 × 10⁻⁶ | 2810 | 775.991934
200 × 200 × 200 | 1.99 | 1.5 | 9.9881 × 10⁻⁵ | 9.6876 × 10⁻⁶ | 3017 | 877.440963
200 × 200 × 200 | 1.99 | 2.5 | 9.9919 × 10⁻⁵ | 9.6946 × 10⁻⁶ | 3247 | 888.147237
200 × 200 × 200 | 1.99 | 3.5 | 9.9888 × 10⁻⁵ | 9.6922 × 10⁻⁶ | 3275 | 917.482363
200 × 200 × 200 | 1.99 | 4.25 | 9.9945 × 10⁻⁵ | 9.6957 × 10⁻⁶ | 3166 | 849.660135
200 × 200 × 200 | 1.99 | 5.5 | 9.9860 × 10⁻⁵ | 9.6840 × 10⁻⁶ | 2693 | 768.742178
200 × 200 × 200 | 1.99 | 6 | 9.9962 × 10⁻⁵ | 9.6976 × 10⁻⁶ | 2369 | 728.895564
Table 5. L_1, L_2, and L_∞ norms obtained for test problem 1 using the SOR (ω_opt) iterative method.

3D SOR (ω_opt) method:
Grid size | t | L_1 norm | L_2 norm | L_∞ norm
200 × 200 × 200 | 0.05 | 1.02 × 10⁻² | 1.66 × 10⁻² | 8.74 × 10⁻²
200 × 200 × 200 | 0.5 | 9.00 × 10⁻³ | 1.47 × 10⁻² | 7.69 × 10⁻²
200 × 200 × 200 | 1 | 5.57 × 10⁻³ | 9.07 × 10⁻³ | 4.75 × 10⁻²
200 × 200 × 200 | 1.5 | 7.28 × 10⁻⁴ | 1.19 × 10⁻³ | 6.25 × 10⁻³
200 × 200 × 200 | 2.5 | 8.48 × 10⁻³ | 1.38 × 10⁻² | 7.17 × 10⁻²
200 × 200 × 200 | 3.5 | 9.94 × 10⁻³ | 1.61 × 10⁻² | 8.40 × 10⁻²
200 × 200 × 200 | 4.25 | 4.70 × 10⁻³ | 7.62 × 10⁻³ | 3.98 × 10⁻²
200 × 200 × 200 | 5.5 | 7.29 × 10⁻³ | 1.19 × 10⁻² | 6.22 × 10⁻²
200 × 200 × 200 | 6 | 9.84 × 10⁻³ | 1.60 × 10⁻² | 8.41 × 10⁻²
Table 6. Maximum error between the analytical and numerical solutions obtained for test problem 1 using the Jacobi, Gauss–Seidel, and SOR (ω_opt) iterative methods.

3D Jacobi method:
Grid size | t | Max error
100 × 100 × 100 | 0.05 | 0.08742306
100 × 100 × 100 | 0.5 | 0.07688044
100 × 100 × 100 | 1 | 0.04736541
100 × 100 × 100 | 1.5 | 0.00589268
100 × 100 × 100 | 2.5 | 0.07245479
100 × 100 × 100 | 3.5 | 0.08476152
100 × 100 × 100 | 4.25 | 0.04036343
100 × 100 × 100 | 5.5 | 0.06212928
100 × 100 × 100 | 6 | 0.08407013

3D Gauss–Seidel method:
Grid size | t | Max error
100 × 100 × 100 | 0.05 | 0.08762827
100 × 100 × 100 | 0.5 | 0.07708416
100 × 100 × 100 | 1 | 0.04757004
100 × 100 × 100 | 1.5 | 0.00608207
100 × 100 × 100 | 2.5 | 0.07224184
100 × 100 × 100 | 3.5 | 0.08454893
100 × 100 × 100 | 4.25 | 0.04014474
100 × 100 × 100 | 5.5 | 0.06233324
100 × 100 × 100 | 6 | 0.08427443

3D SOR (ω_opt) method:
Grid size | t | Max error
100 × 100 × 100 | 0.05 | 0.08783366
100 × 100 × 100 | 0.5 | 0.07728949
100 × 100 × 100 | 1 | 0.04777679
100 × 100 × 100 | 1.5 | 0.00628667
100 × 100 × 100 | 2.5 | 0.07202712
100 × 100 × 100 | 3.5 | 0.08433334
100 × 100 × 100 | 4.25 | 0.03993213
100 × 100 × 100 | 5.5 | 0.06253916
100 × 100 × 100 | 6 | 0.08447975
Table 7. Numerical results obtained for test problem 2 using the SOR (ω_opt) iterative method.

3D SOR (ω_opt) method:
Grid size | ω | t | ‖v − v_exact‖ | ‖q − q_exact‖ | Iterations | Run time (s)
100 × 100 × 100 | 1.97 | 0.05 | 9.7753 × 10⁻⁷ | 7.2767 × 10⁻⁸ | 325 | 10.958864
100 × 100 × 100 | 1.97 | 0.75 | 9.9082 × 10⁻⁷ | 4.1724 × 10⁻⁸ | 480 | 18.453368
100 × 100 × 100 | 1.97 | 1 | 9.7063 × 10⁻⁷ | 3.8858 × 10⁻⁸ | 485 | 19.093781
100 × 100 × 100 | 1.97 | 1.75 | 9.6166 × 10⁻⁷ | 3.2720 × 10⁻⁸ | 421 | 16.910070
100 × 100 × 100 | 1.97 | 2 | 9.6401 × 10⁻⁷ | 3.2028 × 10⁻⁸ | 419 | 15.664194
100 × 100 × 100 | 1.97 | 3 | 9.9988 × 10⁻⁷ | 4.0454 × 10⁻⁸ | 484 | 18.827001
100 × 100 × 100 | 1.97 | 4.25 | 9.8244 × 10⁻⁷ | 3.3578 × 10⁻⁸ | 415 | 15.134811
100 × 100 × 100 | 1.97 | 5.75 | 9.8568 × 10⁻⁷ | 3.4404 × 10⁻⁸ | 417 | 15.342882
100 × 100 × 100 | 1.97 | 6 | 9.8729 × 10⁻⁷ | 7.0618 × 10⁻⁸ | 339 | 12.254305
Table 8. L_1, L_2, and L_∞ norms obtained for test problem 2 using the SOR (ω_opt) iterative method.

3D SOR (ω_opt) method:
Grid size | t | L_1 norm | L_2 norm | L_∞ norm
100 × 100 × 100 | 0.05 | 3.70 × 10⁻⁵ | 4.85 × 10⁻⁵ | 1.27 × 10⁻⁴
100 × 100 × 100 | 0.75 | 3.25 × 10⁻⁵ | 4.31 × 10⁻⁵ | 1.15 × 10⁻⁴
100 × 100 × 100 | 1 | 4.79 × 10⁻⁵ | 6.36 × 10⁻⁵ | 1.71 × 10⁻⁴
100 × 100 × 100 | 1.75 | 2.73 × 10⁻⁵ | 3.59 × 10⁻⁵ | 9.40 × 10⁻⁵
100 × 100 × 100 | 2 | 3.74 × 10⁻⁵ | 4.91 × 10⁻⁵ | 1.28 × 10⁻⁴
100 × 100 × 100 | 3 | 4.78 × 10⁻⁵ | 6.35 × 10⁻⁵ | 1.70 × 10⁻⁴
100 × 100 × 100 | 4.25 | 2.84 × 10⁻⁵ | 3.74 × 10⁻⁵ | 9.79 × 10⁻⁵
100 × 100 × 100 | 5.75 | 2.88 × 10⁻⁵ | 3.79 × 10⁻⁵ | 9.91 × 10⁻⁵
100 × 100 × 100 | 6 | 3.68 × 10⁻⁵ | 4.83 × 10⁻⁵ | 1.26 × 10⁻⁴
Table 9. Maximum error between the analytical and numerical solutions obtained for test problem 2 using the SOR (ω_opt) iterative method.

3D SOR (ω_opt) method:
Grid size | t | Max error
100 × 100 × 100 | 0.05 | 1.2657 × 10⁻⁴
100 × 100 × 100 | 0.75 | 1.1521 × 10⁻⁴
100 × 100 × 100 | 1 | 1.7082 × 10⁻⁴
100 × 100 × 100 | 1.75 | 9.3973 × 10⁻⁵
100 × 100 × 100 | 2 | 1.2796 × 10⁻⁴
100 × 100 × 100 | 3 | 1.7043 × 10⁻⁴
100 × 100 × 100 | 4.25 | 9.7877 × 10⁻⁵
100 × 100 × 100 | 5.75 | 9.9128 × 10⁻⁵
100 × 100 × 100 | 6 | 1.2597 × 10⁻⁴