Article

A High-Order Hybrid Approach Integrating Neural Networks and Fast Poisson Solvers for Elliptic Interface Problems

Department of Mathematics, University of Alabama, Tuscaloosa, AL 35487, USA
* Author to whom correspondence should be addressed.
Computation 2025, 13(4), 83; https://doi.org/10.3390/computation13040083
Submission received: 20 February 2025 / Revised: 12 March 2025 / Accepted: 20 March 2025 / Published: 23 March 2025

Abstract
A new high-order hybrid method integrating neural networks and corrected finite differences is developed for solving elliptic equations with irregular interfaces and discontinuous solutions. Standard fourth-order finite difference discretization becomes invalid near such interfaces due to the discontinuities and requires corrections based on Cartesian derivative jumps. In traditional numerical methods, such as the augmented matched interface and boundary (AMIB) method, these derivative jumps can be reconstructed via additional approximations and are solved together with the unknown solution in an iterative procedure. Nontrivial developments have been carried out in the AMIB method in treating sharply curved interfaces, which, however, may not work for interfaces with geometric singularities. In this work, machine learning techniques are utilized to directly predict these Cartesian derivative jumps without involving the unknown solution. To this end, physics-informed neural networks (PINNs) are trained to satisfy the jump conditions for both closed and open interfaces with possible geometric singularities. The predicted Cartesian derivative jumps can then be integrated in the corrected finite differences. The resulting discrete Laplacian can be efficiently solved by fast Poisson solvers, such as fast Fourier transform (FFT) and geometric multigrid methods, over a rectangular domain with Dirichlet boundary conditions. This hybrid method is both easy to implement and efficient. Numerical experiments in two and three dimensions demonstrate that the method achieves fourth-order accuracy for the solution and its derivatives.

1. Introduction

This paper addresses a two-dimensional (2D) or three-dimensional (3D) elliptic interface problem for the Poisson equation with a smooth function λ :
\Delta u(\mathbf{x}) + \lambda(\mathbf{x})\, u(\mathbf{x}) = f(\mathbf{x}), \qquad \mathbf{x} \in \Omega^- \cup \Omega^+.
It is subject to the Dirichlet boundary condition, given as follows:
u(\mathbf{x}) = u_b(\mathbf{x}), \qquad \mathbf{x} \in \partial\Omega.
The domain Ω is assumed to be rectangular and is partitioned into subdomains Ω = Ω⁺ ∪ Ω⁻ by an interface Γ. We denote the solution in each subdomain by u⁺ and u⁻. Across the interface Γ, the solution exhibits jump discontinuities governed by the following jump conditions:
[[u(\mathbf{x})]] = \gamma(\mathbf{x}), \qquad [[u_n(\mathbf{x})]] = \rho(\mathbf{x}), \qquad \mathbf{x} \in \Gamma,
where the notation [[·]] denotes the difference between the values of a quantity as it approaches Γ from Ω⁺ and Ω⁻ (i.e., the value from the Ω⁺ side minus the value from the Ω⁻ side). The term u_n represents the normal derivative ∇u · n, where n is the unit normal vector pointing from Ω⁻ to Ω⁺. The source term is piecewise continuous with [[f(x)]] = f⁺(x)|_Γ − f⁻(x)|_Γ ≠ 0.
Interface problems pose significant challenges for traditional numerical methods due to the loss of solution regularity across the interface. To address this issue and recover numerical accuracy near the interface, various methods have been developed by enforcing the jump conditions given in (3) into the numerical discretization. These approaches enable the design of accurate and robust numerical algorithms, including the coupling interface method [1], the piecewise-polynomial interface method [2], the kernel-free integral equation method [3], the finite volume method [4], and the immersed finite element method (IFEM) [5,6].
Over the past few decades, finite difference methods using Cartesian grids have been intensively studied for elliptic interface problems. Notable examples include the immersed interface method (IIM) [7,8,9], the ghost fluid method (GFM) [10], and the matched interface and boundary (MIB) method [11,12]. Building on the MIB methods, the augmented matched interface and boundary (AMIB) method has recently been developed for two- and three-dimensional elliptic interface problems [13,14,15]. In the AMIB method, finite difference discretizations near an irregular interface are corrected by Cartesian derivative jumps defined on the interface, which are reconstructed via additional approximations. Moreover, such jumps are treated as auxiliary variables in an augmented system, so that they can be solved together with the unknown solution in an iterative procedure. By using fast Poisson solvers including the fast Fourier transform [13,14] and multigrid method [15] to invert the discrete Laplacian operator, the AMIB method is highly efficient, achieving fourth-order accuracy for both the solution and its gradient. Nontrivial developments have been carried out in the AMIB method [14,16] in treating sharply curved interfaces. Nevertheless, the fourth-order AMIB method assumes the interface is C¹ continuous, so that it may not be applicable to C⁰ continuous interfaces involving geometric singularities.
In recent years, the application of neural networks, both shallow and deep, to solve elliptic interface problems has gained significant attention in the scientific computing community. Unlike traditional grid-based methods, neural networks are inherently mesh-free, making them highly effective for handling complex interfaces and irregular domains. Prominent approaches like the Deep Ritz method [17] and physics-informed neural networks (PINNs) [18] have been widely used for solving partial differential equations (PDEs) with smooth solutions. Building on these foundations, various neural network models have been developed specifically for elliptic interface problems [19,20,21,22,23,24]. While many of these methods utilize deep network architectures, recent advances have demonstrated that shallow networks can also effectively address interface problems, such as the discontinuity capturing shallow neural network (DCSNN) [25] and the cusp-capturing PINNs [22]. Despite their promise, these mesh-free neural network approaches face ongoing challenges in ensuring stability and achieving robust convergence rates. To address these limitations, a hybrid method combining neural network-based machine learning with the traditional FFT Poisson solver was developed in [26]. This approach incorporates interface conditions into a learned function while using regular finite difference discretizations, achieving second-order accuracy and effectively handling complex geometries.
In this work, we propose a novel fourth-order hybrid method, referred to as the corrected hybrid method, for solving the Poisson interface problem (1)–(3). This method combines neural network learning with corrected finite difference schemes, integrating elements from the machine learning framework of the hybrid method [26] and the corrected finite difference discretization with high-order fast Poisson solvers from the AMIB method [14]. The corrected hybrid method retains the advantages of the hybrid method, such as its implementation simplicity and ability to handle complex interface geometries, while preserving the high-order accuracy of the AMIB method.
In the AMIB method, the standard fourth-order finite difference scheme for the Laplacian operator is corrected using Cartesian derivative jumps, which require additional approximation treatments. To simplify this process, the corrected hybrid method employs neural networks to predict these derivative jumps directly. This approach reduces implementation complexity while maintaining accuracy. Unlike the original hybrid method, which uses shallow neural networks to approximate solution components with second-order accuracy, the corrected hybrid method utilizes neural networks to predict high-order jump quantities. These predictions are incorporated into a corrected finite difference scheme, enabling fourth-order accuracy.
It is important to emphasize that the objective of the corrected hybrid method is not to replace traditional numerical methods such as the AMIB or IIM or to compete with them across all dimensions. Instead, the goal is to provide an alternative approach that is particularly advantageous for its high accuracy and ease of implementation when addressing Poisson interface problems with non-homogeneous jump conditions. By leveraging fast Poisson solvers and machine learning, the corrected hybrid method achieves both computational efficiency and high-order accuracy. Moreover, the use of the corrected hybrid method for solving an open interface problem will also be explored.
The rest of the paper is organized as follows. In Section 2, a high-order hybrid method is proposed to solve the elliptic interface problems. The incorporation of FFT and multigrid solvers will be discussed in treating elliptic equations with constant coefficients and variable coefficients. Section 3 is dedicated to the numerical results to demonstrate the performance of the proposed algorithm in 2D and 3D. The generalization of the corrected hybrid method will be considered in Section 4, so that one can solve an open interface problem. Finally, this paper ends with a summary in Section 5.

2. Theory and Algorithm

To solve the Poisson interface problem, we first define a small region Ω_Γ ⊂ Ω that encloses the interface Γ. This region consists of all points within a certain distance d from Γ, where d is small. Here, the distance between a point x ∈ Ω and Γ is defined as ‖x − x_Γ‖₂, where x_Γ ∈ Γ and the line passing through these two points is perpendicular to Γ. We assume that u⁺, u⁻, f⁺, and f⁻ are smooth extensions of the solution u and the source term f to Ω_Γ, ensuring they are valid throughout this region. These extensions satisfy the Poisson equations for x ∈ Ω_Γ:
\Delta u^+(\mathbf{x}) + \lambda u^+(\mathbf{x}) = f^+(\mathbf{x}), \qquad \mathbf{x} \in \Omega_\Gamma,
\Delta u^-(\mathbf{x}) + \lambda u^-(\mathbf{x}) = f^-(\mathbf{x}), \qquad \mathbf{x} \in \Omega_\Gamma.
In the present study, u⁺ and u⁻ are assumed to be C⁶ continuous in Ω_Γ. In other words, the original solution u to the Poisson interface problem (1)–(3) is piecewise C⁶ continuous. The same regularity requirement is also assumed in the fourth-order AMIB method [13,14,15,27], and it is essentially required for achieving fourth-order accuracy in solving second-order PDEs with corrected finite differences.

2.1. Neural Network Approximation for the Difference Function

We will not seek to solve for u⁺(x) and u⁻(x) over Ω_Γ. Instead, we will consider the difference function, defined as D(x) = u⁺(x) − u⁻(x). By subtracting Equation (5) from Equation (4) and using the jump conditions in Equation (3), we derive that D(x) satisfies the following conditions along the interface Γ:
D(\mathbf{x}) = \gamma(\mathbf{x}), \qquad \partial_n D(\mathbf{x}) = \rho(\mathbf{x}), \qquad \Delta D(\mathbf{x}) + \lambda(\mathbf{x}) D(\mathbf{x}) = [[f(\mathbf{x})]], \qquad \mathbf{x} \in \Gamma.
Based on the regularity assumption for u⁺ and u⁻, D(x) is C⁶ continuous over Ω_Γ. According to the universal approximation theory, such a smooth function can be approximated by a shallow, fully connected neural network. We thus propose using a simple neural network with one hidden layer to approximate D(x). The network parameters are optimized using machine learning. Training data are collected at M points on the interface, {x_i^Γ ∈ Γ}_{i=1}^M, where the target values are γ(x_i^Γ), ρ(x_i^Γ), and [[f(x_i^Γ)]]. The proposed physics-informed neural network (PINN) minimizes the following mean squared error loss function:
\mathrm{Loss}(\mathbf{p}) = \frac{1}{M} \sum_{i=1}^{M} \Big[ \big( D(\mathbf{x}_i^\Gamma; \mathbf{p}) - \gamma(\mathbf{x}_i^\Gamma) \big)^2 + \big( \partial_n D(\mathbf{x}_i^\Gamma; \mathbf{p}) - \rho(\mathbf{x}_i^\Gamma) \big)^2 + \big( \Delta D(\mathbf{x}_i^\Gamma; \mathbf{p}) + \lambda(\mathbf{x}_i^\Gamma) D(\mathbf{x}_i^\Gamma; \mathbf{p}) - [[f(\mathbf{x}_i^\Gamma)]] \big)^2 \Big],
where p denotes the trainable parameters (weights and biases) of the NN. The Levenberg–Marquardt (LM) method [28] is used to minimize this loss function.
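To make this concrete, the loss in (7) can be assembled for a one-hidden-layer sigmoid network whose gradient and Laplacian are available in closed form. The sketch below is illustrative only: the parameters are random and untrained, the derivatives are hand-coded rather than obtained by automatic differentiation, and all helper names are ours, not the authors':

```python
import numpy as np

rng = np.random.default_rng(0)
m = 40                                   # hidden-layer width (illustrative)
W = rng.normal(size=(m, 2))              # input weights
b = rng.normal(size=m)                   # biases
v = rng.normal(size=m) / m               # output weights

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

def D(x, y):
    """Shallow-network surrogate for the difference function u+ - u-."""
    return v @ sig(W[:, 0] * x + W[:, 1] * y + b)

def grad_D(x, y):
    s = sig(W[:, 0] * x + W[:, 1] * y + b)
    ds = s * (1.0 - s)                   # sigma'
    return np.array([v @ (ds * W[:, 0]), v @ (ds * W[:, 1])])

def lap_D(x, y):
    s = sig(W[:, 0] * x + W[:, 1] * y + b)
    d2s = s * (1.0 - s) * (1.0 - 2.0 * s)  # sigma''
    return v @ (d2s * (W[:, 0] ** 2 + W[:, 1] ** 2))

def loss(pts, normals, gamma, rho, f_jump, lam):
    """Mean-squared residual of the three interface conditions in Eq. (6)."""
    res = 0.0
    for (x, y), nv, g, r, fj in zip(pts, normals, gamma, rho, f_jump):
        res += (D(x, y) - g) ** 2                          # [[u]] = gamma
        res += (grad_D(x, y) @ nv - r) ** 2                # [[u_n]] = rho
        res += (lap_D(x, y) + lam(x, y) * D(x, y) - fj) ** 2  # [[f]] condition
    return res / len(pts)
```

In practice these residuals would be driven toward zero by the Levenberg–Marquardt optimizer over the parameters (W, b, v); the closed-form σ′ and σ″ here simply stand in for what automatic differentiation provides.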
Unlike the hybrid method presented in [26], which decomposes the solution u ( x ) into singular and regular components, the proposed corrected hybrid method directly solves for the solution u ( x ) . This is achieved by incorporating the Cartesian derivative jumps across the interface, as predicted by the NN function D ( x ) . These predicted jump quantities are then utilized within a corrected finite difference discretization, enabling the accurate computation of u ( x ) while satisfying the necessary boundary conditions.

2.2. Prediction of Cartesian Derivative Jumps

For the finite difference discretization, we assume a uniform mesh spacing h in a 2D rectangular domain Ω = [ a , b ] × [ c , d ] . The grid points are defined as follows:
x_i = a + i h, \quad y_j = c + j h, \qquad i = 0, \ldots, n_x, \quad j = 0, \ldots, n_y.
The standard fourth-order finite difference discretization becomes invalid near interfaces due to the presence of jump discontinuities. To address this issue, corrections based on Cartesian derivative jumps can be used. For any interface point (α_x, α_y) ∈ Γ, the jump in the k-th x-derivative of u is defined as follows:
\left[ \frac{\partial^k u}{\partial x^k} \right]_{(\alpha_x, \alpha_y)} = \lim_{(x, \alpha_y) \to (\alpha_x^+, \alpha_y)} \frac{\partial^k u}{\partial x^k} \;-\; \lim_{(x, \alpha_y) \to (\alpha_x^-, \alpha_y)} \frac{\partial^k u}{\partial x^k},
where k = 0, 1, …, 4. Here, (x, α_y) → (α_x⁺, α_y) denotes the right-hand limit, and (x, α_y) → (α_x⁻, α_y) denotes the left-hand limit. Note that the jump notation [·] here is distinct from [[·]], which is used in the jump conditions in Equation (3).
In the proposed corrected hybrid method, the approach to determine the Cartesian derivative jumps differs from the previous AMIB method [13,14,15,27]. For the case k = 0, the zeroth-order Cartesian derivative jump is directly derived from the zeroth-order jump condition. For k ≥ 1, instead of relying on fictitious values generated by MIB schemes to reconstruct the Cartesian derivative jumps, the higher-order Cartesian derivative jumps are obtained with the aid of the difference function D(x) in this work.
Once the NN function D(x) is determined, the Cartesian derivative jump [∂^k u/∂x^k] can be expressed in terms of the partial derivative ∂^k D/∂x^k of the difference function at each interface point, scaled by a factor c:
\left[ \frac{\partial^k u}{\partial x^k} \right] = c\, \frac{\partial^k D}{\partial x^k}.
The partial derivatives of the neural network approximation D(x) are efficiently computed using automatic differentiation [29]. The scaling factor c depends on the orientation of the interface crossing at the interface point. Specifically, c = −1 if the grid line crosses the interface Γ from Ω⁺ to Ω⁻, and c = 1 otherwise. This formulation applies equivalently to y- and z-grid lines.
By leveraging machine learning to predict these derivative jumps, the implementation is significantly simplified compared to the series of numerical approximations required by AMIB methods [13,14,15,27]. These predicted Cartesian derivative jumps are directly incorporated into the fourth-order corrected central differences, as detailed in the following subsection.

2.3. Fourth-Order Corrected Central Differences

We denote the standard fourth-order central difference operator for the second-order derivatives along the x-direction as follows:
\delta_{xx}(x_i) = -\frac{1}{12 h^2}\, u(x_{i-2}) + \frac{4}{3 h^2}\, u(x_{i-1}) - \frac{5}{2 h^2}\, u(x_i) + \frac{4}{3 h^2}\, u(x_{i+1}) - \frac{1}{12 h^2}\, u(x_{i+2}).
In the AMIB methods [13,14,15,27], such a standard central difference needs to be corrected near the interface. The corrected fourth-order central difference includes correction terms that incorporate the Cartesian derivative jumps at points where grid lines intersect the interface Γ. To simplify the discussion, we temporarily ignore the dependence of the solution on y and z, assuming u = u(x). As in the AMIB methods [13,14,15,27], the solution u is assumed to be piecewise C⁶ continuous, i.e., u ∈ C⁶[x_i − h, α) ∪ C⁶(α, x_{i+1} + h], with x_i ≤ α ≤ x_{i+1}, and its derivatives extend continuously to the interface point α.
When a jump discontinuity exists at the interface, the central difference operator must be adjusted locally. Using fourth-order corrected central differences, the Laplacian in the x-direction can be approximated as follows [13]:
u_{xx}(x_{i-1}) \approx \delta_{xx}(x_{i-1}) + \frac{1}{12 h^2} \sum_{k=0}^{K} \frac{(h^+)^k}{k!} \left[ \frac{\partial^k u}{\partial x^k} \right],
u_{xx}(x_i) \approx \delta_{xx}(x_i) - \frac{4}{3 h^2} \sum_{k=0}^{K} \frac{(h^+)^k}{k!} \left[ \frac{\partial^k u}{\partial x^k} \right] + \frac{1}{12 h^2} \sum_{k=0}^{K} \frac{(h + h^+)^k}{k!} \left[ \frac{\partial^k u}{\partial x^k} \right],
u_{xx}(x_{i+1}) \approx \delta_{xx}(x_{i+1}) + \frac{4}{3 h^2} \sum_{k=0}^{K} \frac{(h^-)^k}{k!} \left[ \frac{\partial^k u}{\partial x^k} \right] - \frac{1}{12 h^2} \sum_{k=0}^{K} \frac{(h^- - h)^k}{k!} \left[ \frac{\partial^k u}{\partial x^k} \right],
u_{xx}(x_{i+2}) \approx \delta_{xx}(x_{i+2}) - \frac{1}{12 h^2} \sum_{k=0}^{K} \frac{(h^-)^k}{k!} \left[ \frac{\partial^k u}{\partial x^k} \right],
where h⁻ = x_i − α, h⁺ = x_{i+1} − α, and the Cartesian derivative jumps [∂^k u/∂x^k] are provided by Equation (10). These corrections focus on the x-direction, with analogous formulations for the y and z directions. The correction terms vanish at regular points and are applied only near interfaces at irregular points. By combining corrected differences across all directions, the Laplacian operator is effectively approximated. For fourth-order accuracy, we set K = 4, resulting in a third-order local truncation error, O(h³), at irregular points while maintaining a global fourth-order convergence rate. However, correcting fourth-order central differences along Cartesian directions near interfaces introduces challenges related to corners. These corner cases, which arise during the correction process, are rigorously addressed in Theorems 1–4 of [15].
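To make the correction concrete, the following sketch applies the corrected stencil at the irregular point x_i for a 1D model problem with u⁻ = sin(x), u⁺ = cos(x), so that all jumps are known analytically. The test function, mesh size, and interface location are our illustrative choices, not an example from the paper:

```python
import math
import numpy as np

h, alpha = 0.01, 0.503           # mesh size; interface lies between x_i and x_{i+1}
xi = 0.50                        # irregular grid point just left of alpha
K = 4                            # jump orders k = 0, ..., 4

def u(x):
    """Piecewise solution: u- = sin on the left of alpha, u+ = cos on the right."""
    return np.sin(x) if x < alpha else np.cos(x)

# analytic jumps [d^k u / dx^k] = (cos)^(k)(alpha) - (sin)^(k)(alpha)
d_sin = [np.sin(alpha), np.cos(alpha), -np.sin(alpha), -np.cos(alpha), np.sin(alpha)]
d_cos = [np.cos(alpha), -np.sin(alpha), -np.cos(alpha), np.sin(alpha), np.cos(alpha)]
J = [p - q for p, q in zip(d_cos, d_sin)]

# standard fourth-order central difference (polluted by the jump as-is)
vals = [u(xi + s * h) for s in (-2, -1, 0, 1, 2)]
delta_xx = (-vals[0] + 16 * vals[1] - 30 * vals[2] + 16 * vals[3] - vals[4]) / (12 * h**2)

# correction at x_i: Taylor sums of the jumps at distances h+ and h + h+
hp = (xi + h) - alpha            # h+ = x_{i+1} - alpha
S = lambda d: sum(d**k / math.factorial(k) * J[k] for k in range(K + 1))
uxx = delta_xx - 4.0 / (3 * h**2) * S(hp) + 1.0 / (12 * h**2) * S(h + hp)

err = abs(uxx - (-np.sin(xi)))   # exact value u-_xx(x_i) = -sin(x_i)
```

Without the correction terms, the raw stencil error is O(1/h²) times the solution jump; with them, the residual drops to the expected O(h³) truncation level.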

2.4. Fast Poisson Solvers

We introduce a 1D column vector Q of size 5N₂ × 1 to represent the Cartesian derivative jumps [∂^k u/∂x^k] and [∂^k u/∂y^k] at the N₂ intersection points between grid lines and the interface. The unknown function values at the N₁ interior grid points within the domain Ω are organized in a 1D column vector U of dimension N₁ × 1.
Let U i , j denote the discrete solution corresponding to the analytical solution u ( x i , y j ) at ( x i , y j ) . Based on the fourth-order corrected difference analysis discussed in the previous subsection, the resulting discretized Poisson equation in matrix form is expressed as follows:
(L_h + \lambda_{i,j})\, U_{i,j} + C_{i,j} = f_{i,j}, \qquad 1 \le i \le n_x - 1, \quad 1 \le j \le n_y - 1,
where C i , j represents the correction term, and  L h U i , j denotes the standard fourth-order finite difference approximation to the Laplace operator.
The matrix form of the discretized system (15) can be written as ÂU + BQ = F, where Â = A + D, with A representing an N₁-by-N₁ discrete Laplacian matrix and D being an N₁-by-N₁ diagonal matrix containing the entries λ_{i,j}. B is an N₁-by-5N₂ matrix containing coefficients from the correction terms, and F represents the known right-hand side after applying boundary conditions. Moving the known correction term BQ to the right-hand side yields
\hat{A} U = F - B Q.
The inversion of the matrix Â can be efficiently performed using fast Poisson solvers, such as the fast Fourier transform (FFT) method [13,14,27], geometric multigrid methods [15], or other fast techniques. By leveraging corrected central differences and Cartesian derivative jumps, these solvers ensure that solution discontinuities near the interface do not compromise accuracy or efficiency.
The fast Poisson solvers discussed in this work, with slight variations in the fourth-order finite difference schemes used for the Laplace operator and the discrete matrix Â, have been employed in our previous studies [13,14,15,27]. These solvers are designed to handle the PDE (1) with different cases of the coefficient λ.
  • Fast Fourier Transform (FFT):
    The FFT solver is applied for Poisson problems with a constant coefficient λ. In this case, the discrete matrix Â is constructed using a fourth-order central difference scheme. Following the approach outlined in [13,14,27], the matrix Â is efficiently inverted using the FFT algorithm. This method requires the solution to exhibit either an anti-symmetric property across the boundaries of the rectangular domain or periodic boundary conditions to ensure compatibility with the FFT framework.
  • Geometric Multigrid Method:
    The geometric multigrid method is employed for Poisson problems involving a variable coefficient λ. To maintain accuracy, a fourth-order geometric multigrid approach is applied to invert the matrix Â. As described in [15], the standard fourth-order central difference scheme is utilized at interior grid points far from the boundaries, while fourth-order one-sided finite difference approximations are employed near boundary-adjacent points to ensure high-order accuracy near the domain boundaries.
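As a simplified illustration of the transform-based inversion idea, the sketch below solves a Poisson problem with the standard second-order five-point Laplacian and homogeneous Dirichlet data (not the fourth-order scheme of the cited works): a type-I discrete sine transform diagonalizes the discrete operator, so the solve costs O(N log N). The test problem is an illustrative choice of ours:

```python
import numpy as np
from scipy.fft import dstn, idstn

n = 64                                 # subintervals on [0, 1]; interior grid (n-1) x (n-1)
h = 1.0 / n
x = np.arange(1, n) * h
X, Y = np.meshgrid(x, x, indexing="ij")

u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
F = -2.0 * np.pi**2 * u_exact          # f = Laplacian of u_exact

# DST-I eigenvalues of the 1D second-difference operator (u_{i-1}-2u_i+u_{i+1})/h^2
mu = (2.0 * np.cos(np.arange(1, n) * np.pi * h) - 2.0) / h**2
lam = mu[:, None] + mu[None, :]        # 2D eigenvalues, all nonzero (Dirichlet)

# forward transform, divide by eigenvalues, inverse transform
U = idstn(dstn(F, type=1) / lam, type=1)

err = np.abs(U - u_exact).max()        # O(h^2) discretization error here
```

The fourth-order FFT solver of the paper follows the same diagonalization principle, but with the eigenvalues of the fourth-order stencil and the anti-symmetric or periodic extensions noted above.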
Let us summarize the proposed fourth-order hybrid neural network and corrected finite difference method for solving the Poisson interface problem, as described in Algorithm 1.
Algorithm 1 Fourth-Order Hybrid Neural Network and Corrected Finite Difference Method
Step 1: Train the neural network function D by minimizing the loss function (7) using the Levenberg–Marquardt (LM) optimizer, and compute the partial derivatives ∂ⁿD/∂xⁿ and ∂ⁿD/∂yⁿ at the intersection points of grid lines with the interface Γ using automatic differentiation.
Step 2: Calculate the Cartesian derivative jumps at each intersection point on Γ using Equation (10) and store these values in a vector Q; then determine the coefficients of the correction terms in Equations (11)–(14) and store them in a matrix B.
Step 3: Apply a fast Poisson solver to solve the corrected finite difference Equation (16), yielding the solution U.

2.5. Comparison Remarks

A comparison of the proposed corrected hybrid method with the existing high-order numerical methods for elliptic interface problems is in order.
The proposed fourth-order corrected hybrid method is designed to solve a simple Poisson interface problem (1)–(3), and is inapplicable to more general interface problems considered in the fourth-order AMIB method [13,14,15,27], in which the derivative jump condition involves discontinuous material coefficients. However, the corrected hybrid method offers notable advantages for Poisson interface problems and is potentially easier to implement. Due to the mesh-free nature of the neural network component, the proposed corrected hybrid method can handle complex interface geometries with minimal effort. Implementing machine learning for Cartesian derivative jump approximation only requires providing the interface description, such as its parametric form and normal vector. In contrast, the AMIB methods [14,15,16] require introducing fictitious values to construct Cartesian derivative jumps. A recent AMIB development allows for representing a fictitious value by function values away from the interface and other previously solved fictitious values [16], so that a more complex geometry can be handled. However, the fictitious value generation becomes very challenging when using coarse meshes or dealing with geometric singularities [16]. The proposed corrected hybrid method eliminates the need for fictitious value generation, making it more straightforward and robust for handling complicated interface geometries. It is important to note that the neural network function D ( x ) is trained only once and can be evaluated at any interface location, allowing it to be applied across multiple grid resolutions, independent of discretization. Furthermore, the proposed method retains the advantageous features of the AMIB method, including the use of fast Poisson solvers and corrected finite difference schemes to ensure both efficiency and solution accuracy.
We note that for the present Poisson interface problem (1)–(3), the Cartesian derivative jumps can be calculated by the immersed interface method (IIM) [7,8,9]. In the context of handling interface conditions, the tangential jump [[u_τ]] can be computed from the condition [[u]] = γ(x). Coupled with the normal jump [[u_n(x)]] = ρ(x), this allows for the analytical derivation of the first-order Cartesian derivative jumps, namely [u_x] and [u_y]. For second-order derivative jumps, such as [u_xx] and [u_yy], the IIM [30] provides a framework for computing these values. This approach relies on the second-order tangential and normal derivative jumps [[u_ττ]] and [[u_nn]]. However, the implementation of this method can be complex due to the involvement of curvature terms. In contrast, the high-order IIM formulation [31] facilitates the derivation of higher-order derivative jumps. Nevertheless, the proposed neural network-based method presents an alternative approach that is both more straightforward and efficient, offering a simpler means to approximate Cartesian derivative jumps, including higher-order derivatives.

3. Numerical Results

In this section, we analyze the accuracy of the proposed corrected hybrid method in solving two- and three-dimensional Poisson interface problems. For comparison, we also examine the performance of the second-order hybrid method [26]. Additionally, we evaluate the ray-casting AMIB scheme introduced in [14,15], where Cartesian derivative jumps are constructed using fictitious values generated via the ray-casting MIB scheme. For both the AMIB method and the proposed corrected hybrid method, either the fourth-order FFT solver or the geometric multigrid method is employed to invert the discrete matrix Â in solving the resulting linear Equation (16).
In each test, the neural network function D ( x ) is represented using a shallow network with a sigmoid activation function. This network employs only a single hidden layer, which simplifies the architecture and ensures efficiency by requiring the training of only a few hundred parameters across all numerical examples. A few hundred training points are randomly sampled along the interface to effectively capture its physical features. However, for more complex problems, particularly in three-dimensional cases or those involving complex geometric features, a larger number of training points may be necessary to maintain accuracy. The source code accompanying this manuscript is available on GitHub (visited on 22 March 2025): https://github.com/szhao-ua/fourth-order-hybrid-method.
The accuracy of approximating solution gradients or fluxes is explored through several numerical examples. The discrete gradient u = ( u x , u y ) is computed using a fourth-order corrected central difference scheme. This method incorporates the predicted Cartesian derivative jumps provided by the neural network along with the computed solution to achieve high accuracy. For instance, the fourth-order central difference for the x component of the gradient, denoted as δ x , is given by
\delta_x(x_i) = \frac{1}{12 h}\, u(x_{i-2}) - \frac{2}{3 h}\, u(x_{i-1}) + \frac{2}{3 h}\, u(x_{i+1}) - \frac{1}{12 h}\, u(x_{i+2}).
The corrected gradients are then computed as follows:
u_x(x_{i-1}) \approx \delta_x(x_{i-1}) + \frac{1}{12 h} \sum_{k=0}^{K} \frac{(h^+)^k}{k!} \left[ \frac{\partial^k u}{\partial x^k} \right],
u_x(x_i) \approx \delta_x(x_i) - \frac{2}{3 h} \sum_{k=0}^{K} \frac{(h^+)^k}{k!} \left[ \frac{\partial^k u}{\partial x^k} \right] + \frac{1}{12 h} \sum_{k=0}^{K} \frac{(h + h^+)^k}{k!} \left[ \frac{\partial^k u}{\partial x^k} \right],
u_x(x_{i+1}) \approx \delta_x(x_{i+1}) - \frac{2}{3 h} \sum_{k=0}^{K} \frac{(h^-)^k}{k!} \left[ \frac{\partial^k u}{\partial x^k} \right] + \frac{1}{12 h} \sum_{k=0}^{K} \frac{(h^- - h)^k}{k!} \left[ \frac{\partial^k u}{\partial x^k} \right],
u_x(x_{i+2}) \approx \delta_x(x_{i+2}) + \frac{1}{12 h} \sum_{k=0}^{K} \frac{(h^-)^k}{k!} \left[ \frac{\partial^k u}{\partial x^k} \right].
The correction of fourth-order central differences for the gradient in all corner cases is addressed in detail in Theorems 1–4 of [15].
For simplicity, the computational domain Ω is assumed to be either a square (2D) or a cubic domain (3D), uniformly partitioned into subintervals in each direction (n = n_x = n_y = n_z). The accuracy and convergence of the numerical solutions are assessed for 2D problems by measuring errors in the maximum norm defined as follows:
L_\infty(u - u_h) = \max_{(x_i, y_j) \in \Omega^- \cup \Omega^+} \left| u(x_i, y_j) - u_h(x_i, y_j) \right|,
where u ( x i , y j ) and u h ( x i , y j ) are, respectively, the analytical and numerical solution. Similarly, error norms for the derivatives in the gradient can be defined using the same maximum norm. These definitions naturally extend to 3D problems.
The successive error, which measures the difference between solutions at two levels of grid refinement, is defined as
L_\infty(u_h - u_{h/2}) = \max_{(x_i, y_j) \in \Omega^- \cup \Omega^+} \left| u_h(x_i, y_j) - u_{h/2}(x_i, y_j) \right|,
where u_h and u_{h/2} are the numerical solutions on the coarser and finer grids, respectively.
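A mesh-refinement study then estimates the observed convergence order from errors on successively halved meshes, p ≈ log₂(E_h / E_{h/2}). The error values below are illustrative numbers consistent with fourth-order convergence, not results from the paper:

```python
import math

# hypothetical maximum-norm errors on grids with mesh sizes h and h/2
E_h, E_h2 = 2.1e-6, 1.3e-7

# halving h should divide the error by ~2^4 = 16 for a fourth-order method
order = math.log2(E_h / E_h2)
```

Here the ratio E_h / E_h2 ≈ 16 gives an observed order close to 4, matching the expected fourth-order rate.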
Example 1.
We begin by solving a two-dimensional Poisson interface problem and comparing the results of the proposed corrected hybrid method with those obtained using the AMIB method [15] and the original hybrid method [26]. The problem is defined on the square domain Ω = [ 1 , 1 ] 2 , with the embedded interface represented by an ellipse:
\Gamma: \; \left( \frac{x}{0.6} \right)^2 + \left( \frac{y}{0.4} \right)^2 = 1.
The exact solution is given by the following:
u(x, y) = \begin{cases} \sin(5x) \sin(5y), & (x, y) \in \Omega^-, \\ \cos(5x) \cos(5y), & (x, y) \in \Omega^+, \end{cases}
from which the corresponding right-hand side f(x, y) and the jump conditions γ(x, y) and ρ(x, y) used in the loss function can be calculated accordingly. In this example, the geometric multigrid method is used to invert the matrix Â in both the proposed scheme and the AMIB scheme [15].
The same neural network setup is employed for both the present and the original hybrid methods. The network for D ( x ) is designed with 40 neurons in the hidden layer and trained using 200 randomly sampled training points on the interface Γ . The training process terminates when the stopping condition Loss ( p ) < 10 12 is met or the maximum number of iterations (epoch = 1000) is reached.
The numerical solution for a mesh size of h = 1 / 128 is displayed in Figure 1a. In Figure 1b, we present the mesh refinement study, showing the maximum norm error L ( u u h ) as a function of the mesh size h. The results indicate that the original hybrid method (solid blue line with circular markers) achieves a second-order convergence rate, whereas both the proposed hybrid method (solid red line with pentagram markers) and the AMIB method (solid purple line with diamond markers) achieve a fourth-order convergence rate. In the present hybrid method, the computed solution is used to determine u = ( u x , u y ) by applying the corrected finite difference scheme (17)–(20) with the predicted Cartesian derivative jumps. The gradients are also tested among these three methods, shown in Figure 2. The maximum errors of the gradients computed using the original hybrid method attain a second-order convergence rate, while the proposed hybrid method and the AMIB method achieve fourth-order convergence in the gradients.
Next, we evaluate the performance of the proposed corrected method in solving the PDE (1) in the case of a variable coefficient λ ( x , y ) . The coefficient is set as λ ( x , y ) = sin ( x + y ) + 1 . The comparison of maximum errors for the solutions and gradients obtained using the AMIB method and the corrected hybrid method is presented in Figure 3 and Figure 4. The results demonstrate that both methods achieve a fourth-order convergence rate for both solutions and gradients.
The induced numerical error arises from two primary sources: the neural network approximation and the finite difference approximation. Although detailed results are not presented here, the final loss value achieved is approximately 10⁻¹²–10⁻¹⁴. This corresponds to a predictive error, denoted as ε⁽⁰⁾, of 10⁻⁶–10⁻⁷ for the target function D. However, the predictive error ε⁽ᵏ⁾ for the higher-order (k-th) derivatives of D can grow exponentially with k, as seen in (10), which is used in the corrected finite difference schemes (11)–(14). To maintain fourth-order accuracy, the predictive error of the correction terms in (11)–(14), given by (1/h²)ε⁽⁰⁾ + (1/h)ε⁽¹⁾ + ε⁽²⁾ + hε⁽³⁾ + h²ε⁽⁴⁾, must be smaller than the local truncation error O(h³). This indicates that the error is primarily dominated by the neural network approximation when the mesh size h is very small. To verify this observation, we conducted additional numerical tests with finer mesh widths. As expected, further refining the mesh did not lead to any significant reduction in the overall error, confirming that the neural network approximation is the limiting factor. A deeper or wider neural network architecture may achieve smaller predictive errors on the finer meshes.
Example 2.
The second example concerns the Poisson equation and compares the performances of the three methods on a square domain Ω = [−π/3, π/3]², with the embedded interface represented by a five-petaled flower-shaped curve:
Γ: r(θ) = 0.5 + 0.1 sin(5θ).
The exact solution is chosen as
u(x, y) = exp(x) cos(y) if (x, y) ∈ Ω⁻, and u(x, y) = sin(kx) sin(ky) if (x, y) ∈ Ω⁺,
from which the corresponding jump information used in the loss function can be calculated accordingly. With k = 6 , a periodic boundary condition is imposed so that the FFT is employed to invert the matrix A ^ in both the proposed corrected hybrid scheme and the AMIB scheme [14]. The same neural network setup as in Example 1 is employed.
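As a concrete sketch of the solver component, the snippet below inverts a periodic discrete Laplacian by diagonalizing it with the FFT. For brevity it uses the standard second-order five-point stencil rather than the paper's fourth-order corrected operator Â, on the same domain [−π/3, π/3]² with a periodic test function:

```python
import numpy as np

n = 64
Lx = 2*np.pi/3                       # length of the periodic domain [-pi/3, pi/3]
h = Lx / n
x1 = -np.pi/3 + h*np.arange(n)
X, Y = np.meshgrid(x1, x1, indexing='ij')

u_exact = np.sin(3*X) * np.sin(3*Y)  # periodic on the domain
f = -18.0 * u_exact                  # so that Delta u = f

# DFT eigenvalues (symbol) of the periodic 5-point discrete Laplacian
k = np.arange(n)
lam = (2*np.cos(2*np.pi*k/n) - 2) / h**2
sym = lam[:, None] + lam[None, :]
sym[0, 0] = 1.0                      # placeholder to avoid division by zero

uhat = np.fft.fft2(f) / sym
uhat[0, 0] = 0.0                     # fix the free constant (zero-mean solution)
u_num = np.real(np.fft.ifft2(uhat))

err = np.abs(u_num - u_exact).max()
```

A higher-order operator is handled identically, provided it is a constant-coefficient periodic stencil: only the eigenvalue symbol `lam` changes.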
In Figure 5a, the numerical solution of the proposed scheme is shown for a mesh size of h = 1 / 128 , where the discontinuity is sharply captured across the interface. The mesh refinement results for the numerical solution and its gradient among the three methods are tested again and presented in Figure 5b and Figure 6, respectively. These results demonstrate that the original hybrid method achieves a second-order convergence rate for both the solution and the gradient. In contrast, both the proposed hybrid method and the AMIB method achieve a fourth-order convergence rate for both the solution and the gradient.
In Figure 5b and Figure 6, the AMIB method fails to produce numerical solutions and gradients on a coarse grid, whereas the corrected hybrid method remains robust. This discrepancy arises because, on coarse meshes, the AMIB method encounters difficulties in generating fictitious values and constructing accurate Cartesian derivative jumps. In contrast, the corrected hybrid method bypasses this challenge by leveraging its mesh-free neural network component, which ensures stability and accuracy even on coarser grids.
Example 3.
In this example, we consider a complex geometry with geometric singularities, which are beyond the capability of AMIB methods [13,14,15,16]. The problem is defined on the square domain Ω = [−0.75, 0.5]², with the embedded interface given by the heart-shaped curve:
Γ: r(θ) = 0.3 − 0.3 sin θ + 0.3 sin θ √|cos θ| / (2 sin θ + 2.8).
The heart-shaped interface features a singularity at the origin and the bottom tip, making it computationally challenging. The analytical solution and setup are identical to those in Example 2, except that k = 4 is used. In Figure 7a, the numerical solution is shown for a mesh size of h = 5 / 1024 , demonstrating sharp discontinuity across the interface. Figure 7b presents the mesh refinement results for the numerical solution and its gradient. The proposed method exhibits fourth-order convergence for both the solution and the gradient.
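For reference, interface training points on the heart-shaped curve can be sampled directly from its polar parametrization (using the reconstructed formula above as an assumption; note that the cusp at θ = π/2, where r = 0, maps to the origin):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2*np.pi, 300)     # random interface parameters

# heart-shaped curve radius (scaled classical heart curve, assumed form)
r = (0.3 - 0.3*np.sin(theta)
     + 0.3*np.sin(theta)*np.sqrt(np.abs(np.cos(theta))) / (2*np.sin(theta) + 2.8))

# Cartesian training points on Gamma
pts = np.column_stack([r*np.cos(theta), r*np.sin(theta)])
```

The denominator 2 sin θ + 2.8 stays positive (minimum 0.8), so the parametrization is well defined for all θ.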
Example 4.
To assess the reliability of the hybrid method, we consider a scenario where the exact solution is unavailable. The problem is defined on the square domain Ω = [−1, 1]², with the interface Γ embedded as an ellipse described by the following:
Γ: (x/0.6)² + (y/0.4)² = 1.
The right-hand side function for the Poisson interface problem is chosen as follows:
f(x, y) = 2π² cos(kx) cos(ky) if (x, y) ∈ Ω⁻, and f(x, y) = 2π² sin(kx) sin(ky) if (x, y) ∈ Ω⁺,
where k = 2 π . The Dirichlet boundary condition u ( x , y ) = 0 is imposed along the boundary of Ω. With an unknown solution, the fourth-order geometric multigrid method is employed in this corrected hybrid method to invert A ^ . At the interface Γ, the following jump conditions are prescribed:
[[u(x, y)]] = γ(x, y) = cos(k(x + y)),
[[∂_n u(x, y)]] = ρ(x, y) = k sin(k(x + y)) (n₁ + n₂),
where n = (n₁, n₂) represents the outward normal direction of Γ pointing from Ω⁻ to Ω⁺.
For this test, the neural network D(x) is configured with 150 neurons in a single hidden layer and is trained on 300 randomly sampled points along the interface Γ. Since the exact solution is unavailable, we evaluate the maximum error by computing the successive error L^∞(u_h − u_{h/2}), where u_h denotes the solution obtained with grid size h. In Figure 8a, we display the numerical solution u at the finest resolution h = 1/128. In Figure 8b, the maximum norm errors for u and its gradient are presented. As anticipated, all quantities exhibit fourth-order convergence.
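Given successive errors on halved meshes, the observed convergence order is p = log₂(L^∞(u_h − u_{h/2}) / L^∞(u_{h/2} − u_{h/4})). A small helper, applied here to synthetic illustrative numbers (not the paper's data):

```python
import numpy as np

def observed_orders(errors):
    """Observed orders p_k = log2(E_k / E_{k+1}) for errors on successively halved meshes."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# synthetic successive errors mimicking fourth-order decay (illustrative only)
E = [1.6e-3, 1.0e-4, 6.3e-6, 3.9e-7]
orders = observed_orders(E)
print(orders)
```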
Example 5.
We consider a three-dimensional Poisson interface problem given in (1) with λ = 1 , where the interface is defined as an ellipsoid,
Γ: (x/0.5)² + (y/0.3)² + (z/0.2)² = 1,
embedded in a cubic domain Ω = [−π/3, π/3]³. The exact solution is given by
u(x, y, z) = exp(x + y + z) if (x, y, z) ∈ Ω⁻, and u(x, y, z) = sin(kx) sin(ky) sin(kz) if (x, y, z) ∈ Ω⁺,
from which the corresponding right-hand side f(x, y, z) and the jump information γ(x, y, z) and ρ(x, y, z) are derived accordingly. We use the same shallow network structure as in the 2D examples: a single hidden layer containing 40 neurons. The network is trained using 200 points randomly sampled on the interface Γ to learn the neural network function D(x, y, z).
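A shallow network of this size is tiny; a minimal NumPy sketch of the forward evaluation of D(x, y, z) is shown below (randomly initialized and untrained, with illustrative variable names; the paper trains the parameters with the LM method rather than using such a stub):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 40
W = rng.standard_normal((n_hidden, n_in))   # input-to-hidden weights
b = rng.standard_normal(n_hidden)           # hidden biases
a = rng.standard_normal(n_hidden)           # hidden-to-output weights
c = rng.standard_normal()                   # output bias

def D(x):
    """Evaluate the shallow tanh network at a batch of points x, shape (m, 3)."""
    return np.tanh(x @ W.T + b) @ a + c

pts = rng.uniform(-1.0, 1.0, (200, 3))      # e.g., 200 sampled training points
values = D(pts)
```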
With k = 6 , the anti-symmetry property is satisfied at the boundary, allowing the use of the 3D FFT solver in both the corrected hybrid method and the 3D AMIB solver [14]. The numerical solutions and the maximum errors of the proposed corrected hybrid method are visualized in Figure 9, where the solutions and errors are mapped onto the ellipsoid surface Γ from both the interior and exterior. Figure 10a presents the mesh refinement results, demonstrating that both the corrected hybrid method and the 3D AMIB solver [14] achieve fourth-order accuracy for the numerical solution. Notably, the proposed hybrid method exhibits higher accuracy than the AMIB solver. The corrected hybrid method utilizes a neural network to predict Cartesian derivative jumps in a straightforward, mesh-free manner, making it more robust and effective for handling complex interface geometries, especially in 3D. Similar to the 2D case, the AMIB solver fails for this 3D scenario with a mesh size of h = 1 / 16 , whereas the corrected hybrid method remains robust. Additionally, Figure 10b shows that the gradient of the proposed hybrid method also achieves fourth-order convergence.
Example 6.
We aim to solve a three-dimensional Poisson interface problem with geometric singularities. The interface is defined as an apple-shaped surface:
Γ: ρ = 0.2375 (1 − cos ϕ),
where ρ = √(x² + y² + z²) and ϕ = arccos(z/ρ), embedded in a cubic domain Ω = [−1, 1]³. The same neural network architecture and analytical solution with k = 2π as in Example 5 are employed.
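Surface training points for this geometry are conveniently generated in spherical coordinates (a hedged sketch with illustrative names; note that ϕ = 0 collapses to the origin, the geometric singularity):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.uniform(0.0, np.pi, 200)          # polar angle
theta = rng.uniform(0.0, 2*np.pi, 200)      # azimuthal angle

rho = 0.2375 * (1.0 - np.cos(phi))          # apple-shaped surface radius
pts = np.column_stack([rho*np.sin(phi)*np.cos(theta),
                       rho*np.sin(phi)*np.sin(theta),
                       rho*np.cos(phi)])
```

Since rho ≤ 0.475, all sampled points lie well inside the cube [−1, 1]³.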
The apple-shaped interface exhibits a singularity at the origin, which poses notable computational difficulties. The numerical solution and corresponding error of the corrected hybrid method on a mesh with n = 129 are shown in Figure 11. The mesh refinement results for the numerical solution and its gradient are presented in Figure 12. These results demonstrate that the proposed corrected hybrid method achieves fourth-order convergence for both the solution and its gradient, even in the presence of geometric singularities.

4. Elliptic Interface Problem with Open Interface

To further explore the potential of the proposed corrected hybrid method, we consider an elliptic interface problem with an open interface. Unlike a closed interface, an open interface extends to the boundary of the computational domain (see Figure 13a), introducing additional challenges for both neural network and numerical methods. In particular, open interface problems have not previously been studied using the original hybrid method [26] or the fourth-order AMIB methods [13,14,15].
Since the proposed corrected hybrid method relies on a neural network trained to approximate the difference function D(x), we first revisit the mathematical definition of D(x) for interface problems. Note that in Equations (4) and (5), the governing equation is assumed to extend over a small band Ω_Γ ⊂ Ω containing all points within a certain distance d from Γ. Thus, the third jump condition for D(x) in Equation (6) is valid not only on the interface Γ; it can actually be extended to the subdomain Ω_Γ as follows:
ΔD(x) + λ(x) D(x) = f⁺(x) − f⁻(x),  x ∈ Ω_Γ.  (21)
We note that in Equation (21), f⁺(x) and f⁻(x) are unavailable, respectively, in Ω⁻ ∩ Ω_Γ and Ω⁺ ∩ Ω_Γ, according to the original elliptic interface problem (1)–(3). In the present study, we assume that the function expressions of f⁺(x) and f⁻(x) are given, so that we can extend them to the other side of the interface Γ. When only nodal values of f⁺(x) and f⁻(x) are known, one can extrapolate these values across the interface Γ. Such approximations will be of high precision because the band width d is assumed to be small.
For a closed interface Γ, the three jump conditions given in Equation (6) over Γ provide sufficient constraints for training D(x), since D(x) is implicitly influenced by neighboring points from all directions around the interface. Note that the interface information enforced in the loss function (7) is defined over a lower-dimensional set (Γ is one-dimensional in 2D problems). Such information is sufficient on its own only when Γ is closed. However, in the case of an open interface, the three jump conditions no longer yield sufficient constraints in all directions near the interface, because information is lost at the open ends.
For open interface problems, we propose using Equation (21) to replace the third jump condition in Equation (6) for predicting D(x). This provides greater numerical stability for approximating higher-order derivatives of D(x). In fact, the present treatment aligns with the fundamental principle of finite difference stencils, where high-order derivative approximations achieve greater accuracy when computed over multiple points instead of a single location. To approximate D(x), training data are collected from M_Γ interface points {x_i^Γ ∈ Γ}_{i=1}^{M_Γ} and M_{Ω_Γ} band points {x_i^{Ω_Γ} ∈ Ω_Γ}_{i=1}^{M_{Ω_Γ}}, where the target values are γ(x_i^Γ), ρ(x_i^Γ), and f⁺(x_i^{Ω_Γ}), f⁻(x_i^{Ω_Γ}). The neural network is trained by minimizing the following MSE loss function:
Loss(p) = (1/M_Γ) Σ_{i=1}^{M_Γ} { [D(x_i^Γ; p) − γ(x_i^Γ)]² + [∂_n D(x_i^Γ; p) − ρ(x_i^Γ)]² } + (1/M_{Ω_Γ}) Σ_{i=1}^{M_{Ω_Γ}} [ΔD(x_i^{Ω_Γ}; p) + λ(x_i^{Ω_Γ}) D(x_i^{Ω_Γ}; p) − f⁺(x_i^{Ω_Γ}) + f⁻(x_i^{Ω_Γ})]².  (22)
To minimize the loss function, we employ the LM method. Once the neural network function is obtained, the subsequent computational procedures remain the same as in the corrected hybrid method.
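For a shallow tanh network, every term in the loss (22) has a closed form, so no automatic differentiation is strictly needed. The sketch below (illustrative function names; a 2D problem is assumed) assembles the loss and verifies the closed-form Laplacian against central finite differences; in practice the stacked residuals would be passed to an LM optimizer rather than evaluated as one scalar:

```python
import numpy as np

rng = np.random.default_rng(1)
nh = 40                                    # hidden neurons
W = 0.5 * rng.standard_normal((nh, 2))     # input weights (2D problem)
b = rng.standard_normal(nh)                # hidden biases
a = 0.1 * rng.standard_normal(nh)          # output weights

def D(x):                                  # x: (m, 2)
    return np.tanh(x @ W.T + b) @ a

def D_normal(x, n):                        # directional derivative n . grad D
    t = np.tanh(x @ W.T + b)
    return ((1.0 - t**2) * (n @ W.T)) @ a

def D_laplacian(x):                        # closed-form sum of second partials
    t = np.tanh(x @ W.T + b)
    return (-2.0 * t * (1.0 - t**2) * np.sum(W**2, axis=1)) @ a

def loss(xg, gamma, rho, n, xb, lam, fp, fm):
    """MSE loss (22): jump residuals on Gamma plus the band residual on Omega_Gamma."""
    r_gamma = D(xg) - gamma
    r_rho = D_normal(xg, n) - rho
    r_band = D_laplacian(xb) + lam * D(xb) - fp + fm
    return np.mean(r_gamma**2 + r_rho**2) + np.mean(r_band**2)

# sanity check: closed-form Laplacian vs central finite differences
x0 = np.array([[0.3, -0.2]])
h = 1e-4
fd = sum((D(x0 + h*e) - 2*D(x0) + D(x0 - h*e)) / h**2
         for e in (np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```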
Example 7.
Consider an open interface given by the following equation:
Γ: y(x) = 0.15 tanh( exp(1/(1 − x)) − exp(1/(1 + x)) ),
defined on the domain [−1, 1] × [−1, 1] (see Figure 13a). The exact solution is the same as in Example 2 with k = 2π.
For the neural network training, we employ a shallow network structure with M_Γ = 200 points on the interface and M_{Ω_Γ} = 200 points in the small region Ω_Γ. The solution of the corrected hybrid method, where the neural network is trained with the loss function in (22), is plotted in Figure 13a. The performance of the corrected hybrid method is evaluated by comparing the maximum errors of the solutions and gradients obtained with the loss functions in (7) and (22). These comparisons are presented in Figure 13b and Figure 14, respectively. Using 40 neurons in the hidden layer, we observe that the loss function in (7) (with the third constraint applied directly on the interface) yields numerically unstable high-order Cartesian derivative jumps, and consequently, the solution is inaccurate. In contrast, when the loss function in (22) is employed to train D(x), with the third constraint applied in the small region Ω_Γ, the solution and gradient achieve fourth-order convergence. This example demonstrates that the present corrected hybrid method is effective in solving elliptic interface problems with open interfaces.
However, extending the original hybrid method [26] to open interfaces might be challenging. Specifically, its neural network function is defined in Ω⁻, and since the open interface extends to the boundary of the domain, the information around the interface is incomplete. The missing information at the open ends makes it difficult to apply the jump conditions accurately, as there is no neighboring domain to provide the necessary context or constraints beyond the interface. Additionally, the region Ω_Γ around the open interface may not have a well-defined structure to enforce the smoothness conditions of the regular solution.

5. Conclusions

In this paper, we introduced a novel numerical method for solving elliptic interface problems, where the solution and its derivatives exhibit jump discontinuities across an interface. The proposed approach leverages machine learning to predict Cartesian derivative jumps by incorporating the given jump information into the loss function of a neural network. These predicted derivative jumps are then integrated into a corrected finite difference discretization of the Laplace operator. The resulting linear system is efficiently solved using a fast direct solver based on this corrected discretization.
We conducted numerical comparisons among the proposed corrected hybrid method, the original hybrid method [26], and the AMIB methods [14,15] for solving Poisson interface problems. Both the AMIB methods and the corrected hybrid method achieve fourth-order accuracy for both the solution and its derivatives, while the original hybrid method delivers a second-order convergence rate for both. Notably, the corrected hybrid method demonstrated the highest accuracy among the three approaches. Thanks to its mesh-free neural network framework, the corrected hybrid method excels in handling interfaces with geometric singularities, a capability not supported by the AMIB methods. Additionally, it can be extended to open interfaces, where the interface extends to the boundary of the domain, an area in which the original hybrid method might face challenges. Thus, the proposed method not only maintains high accuracy but also effectively addresses the challenges posed by geometric singularities and open interfaces.
However, as the mesh size h decreases, the overall error of the corrected hybrid method becomes primarily dominated by the neural network approximation. Further mesh refinement does not lead to a significant reduction in error, indicating that the neural network serves as the limiting factor in achieving higher-order accuracy.
Although the examples presented focus on cases with a single embedded interface, the method can be readily extended to accommodate multiple interfaces. Future work will explore extending this methodology to general elliptic interface problems with discontinuous material coefficients involved in the flux jump condition.

Author Contributions

Conceptualization, S.Z.; methodology, Y.R. and S.Z.; software, Y.R.; validation, Y.R.; writing—original draft preparation, Y.R.; writing—review and editing, S.Z.; visualization, Y.R.; supervision, S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the National Science Foundation (NSF) of USA under grants DMS-2110914 and DMS-2306991.

Data Availability Statement

The source code accompanying this manuscript is available on GitHub (visited on 22 March 2025): https://github.com/szhao-ua/fourth-order-hybrid-method.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chern, I.L.; Shu, Y.C. A coupling interface method for elliptic interface problems. J. Comput. Phys. 2007, 225, 2138–2174. [Google Scholar] [CrossRef]
  2. Chen, T.; Strain, J. Piecewise-polynomial discretization and Krylov-accelerated multigrid for elliptic interface problems. J. Comput. Phys. 2008, 227, 7503–7542. [Google Scholar]
  3. Ying, W.; Wang, W.C. A kernel-free boundary integral method for implicitly defined surfaces. J. Comput. Phys. 2013, 252, 606–624. [Google Scholar]
  4. Bochkov, D.; Gibou, F. Solving elliptic interface problems with jump conditions on Cartesian grids. J. Comput. Phys. 2020, 407, 109269. [Google Scholar] [CrossRef]
  5. Gong, Y.; Li, B.; Li, Z. Immersed-interface finite-element methods for elliptic interface problems with non-homogeneous jump conditions. SIAM J. Numer. Anal. 2008, 46, 472–495. [Google Scholar]
  6. He, X.; Lin, T.; Lin, Y. Immersed finite element methods for elliptic interface problems with non-homogeneous jump conditions. Int. J. Numer. Anal. Model. 2011, 8, 284–301. [Google Scholar]
  7. LeVeque, R.J.; Li, Z.L. The immersed interface method for elliptic equations with discontinuous coefficients and singular sources. SIAM J. Numer. Anal. 1994, 31, 1019–1044. [Google Scholar]
  8. Li, Z.L. A fast iterative algorithm for elliptic interface problem. SIAM J. Numer. Anal. 1998, 35, 230–254. [Google Scholar]
  9. Li, Z.L.; Ji, H.; Chen, X. Accurate solution and gradient computation for elliptic interface problems with variable coefficients. SIAM J. Numer. Anal. 2017, 55, 670–697. [Google Scholar]
  10. Fedkiw, R.P.; Aslam, T.; Merriman, B.; Osher, S. A non-oscillatory Eulerian approach to interfaces in multimaterial flows (the ghost fluid method). J. Comput. Phys. 1999, 152, 457–492. [Google Scholar]
  11. Yu, S.; Wei, G. Three-dimensional matched interface and boundary (MIB) method for treating geometric singularities. J. Comput. Phys. 2007, 227, 602–632. [Google Scholar] [CrossRef]
  12. Zhou, Y.C.; Zhao, S.; Feig, M.; Wei, G.W. High order matched interface and boundary method for elliptic equations with discontinuous coefficients and singular source. J. Comput. Phys. 2006, 213, 1–30. [Google Scholar] [CrossRef]
  13. Feng, H.; Zhao, S. A fourth order finite difference method for solving elliptic interface problems with the FFT acceleration. J. Comput. Phys. 2020, 419, 109677. [Google Scholar] [CrossRef]
  14. Ren, Y.; Zhao, S. A FFT accelerated fourth order finite difference method for solving three-dimensional elliptic interface problems. J. Comput. Phys. 2023, 477, 111924. [Google Scholar] [CrossRef]
  15. Ren, Y.; Zhao, S. A multigrid-based fourth order finite difference method for elliptic interface problems with variable coefficients. Int. J. Numer. Anal. Model. 2025; submitted. [Google Scholar]
  16. Li, C.; Zhao, S.; Pentecost, B.; Ren, Y.; Guan, Z. A spatially fourth-order Cartesian grid method for fast solution of elliptic and parabolic problems on irregular domains with sharply curved boundaries. J. Sci. Comput. 2025; submitted. [Google Scholar]
  17. E, W.; Yu, B. The deep Ritz method: A deep learning-based numerical algorithm for solving variational problems. Commun. Math. Stat. 2018, 6, 1–12. [Google Scholar] [CrossRef]
  18. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  19. Guo, H.; Yang, X. Deep unfitted Nitsche method for elliptic interface problems. Commun. Comput. Phys. 2022, 31, 1162–1179. [Google Scholar]
  20. He, C.; Hu, X.; Mu, L. A mesh-free method using piecewise deep neural network for elliptic interface problems. J. Comput. Appl. Math. 2022, 412, 114358. [Google Scholar] [CrossRef]
  21. Jiang, X.; Wang, Z.; Bao, W.; Xu, Y. Generalization of PINNs for elliptic interface problems. Appl. Math. Lett. 2024, 157, 109175. [Google Scholar]
  22. Tseng, Y.H.; Lin, T.S.; Hu, W.F.; Lai, M.C. A cusp-capturing PINN for elliptic interface problems. J. Comput. Phys. 2023, 491, 112359. [Google Scholar]
  23. Wu, S.; Lu, B. INN: Interfaced neural networks as an accessible meshless approach for solving interface PDE problems. J. Comput. Phys. 2022, 470, 111588. [Google Scholar]
  24. Ying, J.; Hu, J.; Shi, Z.; Li, J. An Accurate and Efficient Continuity-Preserved Method Based on Randomized Neural Networks for Elliptic Interface Problems. SIAM J. Sci. Comput. 2024, 46, C633–C657. [Google Scholar] [CrossRef]
  25. Hu, W.F.; Lin, T.S.; Lai, M.C. A discontinuity capturing shallow neural network for elliptic interface problems. J. Comput. Phys. 2022, 469, 111576. [Google Scholar] [CrossRef]
  26. Hu, W.F.; Lin, T.S.; Tseng, Y.H.; Lai, M.C. An efficient neural-network and finite-difference hybrid method for elliptic interface problems with applications. Commun. Comput. Phys. 2023, 33, 1090–1105. [Google Scholar]
  27. Ren, Y.; Feng, H.; Zhao, S. A FFT accelerated high order finite difference method for elliptic boundary value problems over irregular domains. J. Comput. Phys. 2022, 448, 110762. [Google Scholar] [CrossRef]
  28. Marquardt, D. An algorithm for least-squares estimation of nonlinear parameters. SIAM J. Appl. Math. 1963, 11, 431–441. [Google Scholar]
  29. Baydin, A.G.; Pearlmutter, B.A.; Radul, A.A.; Siskind, J.M. Automatic differentiation in machine learning: A survey. J. Mach. Learn. Res. 2018, 18, 1–43. [Google Scholar]
  30. Li, Z.; Ito, K. The Immersed Interface Method: Numerical Solutions of PDEs Involving Interfaces and Irregular Domains; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2006. [Google Scholar]
  31. Pan, K.; He, D.; Li, Z. A high order compact FD framework for elliptic BVPs involving singular sources, interfaces, and irregular domains. J. Sci. Comput. 2021, 88, 67. [Google Scholar]
Figure 1. Example 1. (a) Numerical solution obtained by present hybrid method with mesh size h = 1 / 128 . (b) Comparison of maximum norm errors of u among original hybrid method [26], corrected hybrid method, and AMIB method [15].
Figure 2. Comparison of maximum norm errors of u x and u y among original hybrid method [26], corrected hybrid method, and AMIB method [15] in Example 1.
Figure 3. Comparison of maximum norm errors of u for corrected hybrid method and AMIB method [15] in Example 1 with λ ( x , y ) = sin ( x + y ) + 1 .
Figure 4. Comparison of maximum norm errors of u_x and u_y between corrected hybrid method and AMIB method [15] in Example 1 with λ(x, y) = sin(x + y) + 1.
Figure 5. Example 2. (a) Numerical solution obtained by hybrid method with mesh size h = 1/128. (b) Comparison of maximum norm errors of u among original hybrid method [26], corrected hybrid method, and AMIB method [14].
Figure 6. Comparison of maximum norm errors of gradients among original hybrid method [26], corrected hybrid method, and AMIB method [14] in Example 2.
Figure 7. Example 3. (a) The numerical solution obtained by the present hybrid method with a mesh size of h = 5 / 1024 . (b) The maximum norm errors of u , u x , and u y for the corrected hybrid method.
Figure 8. Example 4. (a) The numerical solution u obtained by the present hybrid method with a mesh size of h = 1 / 128 . (b) The maximum norm errors of u , u x , and u y for the corrected hybrid method.
Figure 9. The numerical solution (left) and error (right) of the corrected hybrid method for Example 5 on a mesh with n = 129 .
Figure 10. Example 5. (a) The comparison of maximum norm errors of u among the corrected hybrid method and the AMIB method. (b) The maximum norm errors of u x , u y , and u z for the corrected hybrid method.
Figure 11. The numerical solution (left) and error (right) of the corrected hybrid method for Example 6 on a mesh with n = 129 .
Figure 12. The maximum norm errors of u , u x , u y , and u z for the proposed hybrid method for Example 6.
Figure 13. Example 7. (a) The numerical solution obtained by the present hybrid method with a mesh size of h = 2 / 128 . (b) A comparison of the maximum errors of the solutions for the corrected hybrid method, where the neural network is trained using the loss function in Equations (7) and (22).
Figure 14. A comparison of the maximum errors of the gradients for the corrected hybrid method in Example 7, where the neural network is trained using the loss function in Equations (7) and (22).