Article

Progressive Domain Decomposition for Efficient Training of Physics-Informed Neural Network

1 Department of Digital Economics, Changzhou College of Information Technology, Changzhou 213164, China
2 Department of Industrial & Information Systems Engineering, Jeonbuk National University, 567, Baekje-daero, Deokjin-gu, Jeonju-si 54896, Republic of Korea
3 Department of Mechanical, Robotics, and Energy Engineering, Dongguk University, Seoul 04620, Republic of Korea
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(9), 1515; https://doi.org/10.3390/math13091515
Submission received: 25 March 2025 / Revised: 25 April 2025 / Accepted: 28 April 2025 / Published: 4 May 2025
(This article belongs to the Special Issue Advanced Modeling and Design of Vibration and Wave Systems)

Abstract

This study proposes a strategy for decomposing the computational domain to solve differential equations using physics-informed neural networks (PINNs) and progressively saving the trained model in each subdomain. The proposed progressive domain decomposition (PDD) method segments the domain based on the dynamics of residual loss, thereby indicating the complexity of different sections within the entire domain. By analyzing residual loss pointwise and aggregating it over specific intervals, we identify critical regions requiring focused attention. This strategic segmentation allows for the application of tailored neural networks in identified subdomains, each characterized by varying levels of complexity. Additionally, the proposed method trains and saves the model progressively based on performance metrics, thereby conserving computational resources in sections where satisfactory results are achieved during the training process. The effectiveness of PDD is demonstrated through its application to complex PDEs, where it significantly enhances accuracy and conserves computational power by strategically simplifying the computational tasks into manageable segments.

1. Introduction

Over recent decades, deep learning has profoundly influenced various applications such as image processing and natural language processing. Recent advancements in deep learning have extended the application scope to address a broad spectrum of problems including scientific computation. Notably, physics-informed neural networks (PINNs) have emerged as a potent tool for solving partial differential equations (PDEs). PINNs offer a novel approach by eliminating the need for traditional discretization, characteristic of conventional numerical solvers. Unlike traditional methods, PINNs optimize model parameters based on the loss at each collocation point, thus operating entirely mesh-free. This methodology allows the training to be governed solely by physical laws, boundary conditions, and initial conditions, circumventing the dependence on extensive high-fidelity datasets typical of other deep learning models. Due to their mesh-free nature and high computational efficiency, PINNs have attracted increasing attention in scientific computation domains.
Introduced by Raissi et al. [1], PINNs have revolutionized the solution of supervised learning tasks by incorporating laws of physics described by general nonlinear PDEs. Significant research in PINNs has focused on weighting loss terms, optimizing activation functions, gradient techniques, neural network architectures, and loss function structures. Haghighat et al. [2] enhanced PINNs’ effectiveness through a multi-network model that accurately represents field variables in solid mechanics, employing separate networks for each variable for improved inversion and surrogate modeling. Innovative methods to enforce physical constraints have also been developed. Lu et al. [3] introduced hard constraints using penalty and augmented Lagrangian methods to solve topology optimization, resulting in simpler and smoother designs for problems with non-unique solutions. Similarly, Basir and Senocak [4] advocated for physics- and equality-constrained neural networks using augmented Lagrangian methods to better integrate residual loss and boundary conditions. Wang et al. [5] outlined best practices that significantly enhance the training efficiency and accuracy of PINNs. These practices are benchmarked against challenging problems to gauge their effectiveness. Moreover, advanced sampling methods have received much attention, as the performance of trained PINNs is highly related to the selection of collocation points in the computational spatial and temporal space. Wu et al. [6] explored the impact of various sampling strategies on inverse problems, particularly emphasizing the differential effects of non-uniform sampling on problem-solving efficacy. Wight and Zhao [7] proposed an adaptive PINN approach to solve the Allen–Cahn and Cahn–Hilliard equations, utilizing adaptive strategies in space and time alongside varied sampling techniques. Similarly, Tang et al. [8] developed a deep adaptive sampling method that employs deep generative models to dynamically generate new collocation points, thereby refining the training dataset based on the distribution of residuals. The field has seen the introduction of several specialized software packages designed to facilitate the application of PINNs to both forward and inverse PDE problems, such as SciANN [9] and DeepXDE [10]. These packages support a range of PINN variants like conservative PINNs (cPINNs) [11] and finite basis PINNs (FBPINNs) [12], each tailored for specific scientific and engineering applications.
PINNs have demonstrated significant capabilities in addressing both forward and inverse modeling challenges. These networks effectively integrate data-driven learning with physics-based constraints, allowing them to handle complex problems in various scientific and engineering domains. Zhang et al. [13] introduced a general framework utilizing PINNs to analyze internal structures and defects in materials, focusing on scenarios with unknown geometric and material parameters. This approach showcases PINNs’ utility in detailed material analysis and defect identification, which are critical in materials science and engineering. PINNs are increasingly recognized for their potential to expedite inverse design processes. As described by Wiecha et al. [14], PINNs serve as ultra-fast predictors within optimization workflows, effectively replacing slower conventional simulation methods. This capability is pivotal in accelerating design cycles and enhancing efficiency in engineering tasks. Yago et al. [15] successfully applied PINNs to the design of acoustic metamaterials, tackling complex topological challenges that require significant computational resources. This underscores PINNs’ effectiveness in domains demanding high computational power and intricate problem-solving capabilities. Meng et al. [16] combined PINNs with the first-order reliability method to tackle the computationally intensive field of structural reliability analysis, highlighting the method’s potential to reduce computational burdens substantially.
Despite their successes, PINNs face several challenges such as high computational demands, difficulties in achieving convergence, nonlinearity, unstable performance, and susceptibility to trivial solutions. These issues often complicate their application in more complex scenarios. In response to these challenges, the classical numerical strategy of ‘divide and conquer’ [17], traditionally employed to simplify complex analytical problems, has been adeptly adapted for use in PINNs. This approach involves decomposing the computational domain into smaller, more manageable subdomains. Each subdomain is modeled by a distinct neural network specifically tasked with approximating the PDE solution within that localized area. This method of localization substantially mitigates spectral bias inherent to neural networks, effectively reducing the complexity of the learning process by confining it to smaller domains. A foundational approach was outlined by Quarteroni and Valli, who provided deep insights into domain decomposition for PDEs [18]. Building on these principles, Li et al. [19] introduced the deep domain decomposition method (D3M) with a variational principle. Their methodology involves dividing the computational domain into several overlapping subdomains using the Schwarz alternating method [17], enhanced by an innovative sampling method at junctions and a smooth boundary function. Jagtap and Karniadakis expanded on this by proposing spatial domain decomposition-based PINNs, specifically cPINNs and extended PINNs (XPINNs), which are tailored for various differential equations [20]. Further advancements were made by Das and Tesfamariam, who developed a parallel PINN framework that optimizes hyperparameters within each subdomain [21]. The utility of XPINNs in modeling multiscale and multi-physics problems was further emphasized by Hu et al. [22] and Shukla et al. [23], who also introduced augmented PINNs (APINNs) with soft domain decomposition to refine the XPINN approach. Additionally, domain decomposition has been successfully applied beyond traditional PINN applications. Bandai and Ghezzehei [24] used it in PINNs to model water flow in unsaturated soils with discontinuous hydraulic conductivities. Meanwhile, Meng et al. [25] explored the use of domain decomposition in a parallel PINN (PPINN) for time-dependent PDEs, employing a coarse-grained solver for initial model initialization followed by iterative refinements in each subdomain. Dwivedi et al. [26] presented a distributed PINN approach, which entails dividing the computational domain into uniformly distributed non-overlapping cells, each equipped with a PINN. All PINNs are trained simultaneously by minimizing the total loss through a gradient descent algorithm. Stiller et al. [27] trained individual neural networks to act as gates for distributing resources among expert networks in a Gated-PINN that utilizes domain decomposition. Li et al. [28] introduced a deep-learning-based Robin–Robin domain decomposition method for Helmholtz equations, utilizing an efficient plane wave activation-based neural network to discretize the subproblems. They demonstrated that suitable Robin parameters on different subdomains maintain a nearly constant convergence rate despite an increasing wave number.
Notwithstanding the success of domain decomposition methods, significant challenges persist, particularly in dynamically adjusting interfaces and the number of subdomains. Currently, identifying the optimal decomposition strategy remains a largely unresolved issue, and intuitive engineering judgment often plays a crucial role alongside algorithms for efficient automatic decomposition [29]. Moreover, state-of-the-art (SOTA) models are trained across the entire domain simultaneously, which can lead to significant computational waste if the models fail to converge in certain areas, resulting in considerable inefficiencies. This area remains ripe for further research and development, suggesting a critical direction for future investigations aimed at optimizing the efficiency and adaptability of these methods. This paper proposes a progressive domain decomposition (PDD) method that can effectively mitigate the aforementioned challenges by strategically and progressively partitioning the domain in accordance with the dynamics of residual loss. For static problems, the domain can be divided based on residual behaviors. In cases involving time evolution, it is also necessary to respect causality inherent in physics. The PDD method makes two contributions. First, residual dynamics precisely identify the key areas that need dense collocation samples, high computational power, and refined algorithms due to their complex physics; this simplifies the management of distinct models for the various subdomains and reduces the need for sweeping hyperparameter adjustments across the entire model. Second, by progressively saving the trained models in individual subdomains, the training effort already invested in successful sections is preserved. The remainder of this paper is organized as follows: Section 2 reviews the principles of PINNs and domain decomposition methods. Section 3 describes the methodologies employed in this work, detailing the computational framework and algorithmic strategies. In Section 4, the proposed domain decomposition approach, PDD, is applied to three distinct PDE problems, demonstrating its effectiveness and generalization. Section 5 discusses the limitations and possible extensions of the approach, and Section 6 concludes the research and outlines future research directions.

2. Related Works

2.1. Physics-Informed Neural Network

PINNs represent a synergistic integration of classical mathematical physics and contemporary machine learning techniques. This hybrid approach harnesses the computational power of neural networks to approximate complex functions, coupled with the foundational principles of physics articulated through PDEs, to address problems that are both computationally intensive and of significant physical relevance. By embedding the governing laws of physics—commonly expressed as PDEs—directly into the neural network architecture, PINNs are capable of learning solutions to these equations even in the absence of observational data.
The process of using PINNs to solve PDEs begins by clearly defining the PDEs that encapsulate the system under study, including all necessary conditions such as initial conditions, boundary conditions, and any intermediate conditions relevant to the problem. A typical PDE problem takes the following general form:
$$F(u, \nabla u, \nabla^2 u, \ldots, x, t) = 0, \quad x \in \Omega,\ t > 0, \tag{1}$$
$$u(x_b, t) = g(x_b, t), \quad x_b \in \partial\Omega, \tag{2}$$
where F(·) is the governing differential equation, x is the spatial variable, t is the temporal variable, Ω is the domain, ∂Ω is the boundary of the domain, and u(x,t) is the target solution. These equations describe how the values of a function u(x,t) change over space and time. PINNs integrate these PDEs directly into the loss function of a neural network u_NN(x,t; θ), where θ represents the parameters of the network. The loss function L_total(θ) is formulated from several components evaluated at sets of collocation points: a PDE residual term L_F(θ), initial- and boundary-condition terms L_IC(θ) and L_BC(θ), and a data mismatch term L_U(θ) (if observational data are available). These terms are expressed as follows:
$$L_{total}(\theta) = L_F(\theta) + L_{IC}(\theta) + L_{BC}(\theta) + L_U(\theta), \tag{3}$$
where
$$L_F(\theta) = \frac{1}{N_F} \sum_i \left\| F\!\left(u_{NN}, \nabla u_{NN}, \nabla^2 u_{NN}, \ldots, x_i, t_i\right) \right\|^2, \tag{4}$$
$$L_{IC}(\theta) = \frac{1}{N_{IC}} \sum_j \left\| u_{NN}(x_j, 0) - u_0(x_j) \right\|^2, \tag{5}$$
$$L_{BC}(\theta) = \frac{1}{N_{BC}} \sum_k \left\| u_{NN}(x_k, t_k) - g(x_k, t_k) \right\|^2, \quad x_k \in \partial\Omega, \tag{6}$$
$$L_U(\theta) = \frac{1}{N_U} \sum_m \left\| u_{NN}(x_m, t_m) - U_m \right\|^2, \quad x_m \in \Omega. \tag{7}$$
Here, N is the number of collocation points evaluated in the region designated by the subscript, and U_m represents the given observational data. By finding the network parameters θ that minimize this loss, the network u_NN(x,t) approximates the target solution u(x,t).
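For concreteness, the sketch below assembles this composite loss for a generic one-dimensional, time-dependent PDE in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: the network size, the placeholder residual (a simple heat equation), and all names are assumptions.

```python
import torch
import torch.nn as nn

# A small fully connected surrogate u_NN(x, t; theta); width and depth are arbitrary choices here.
class PINN(nn.Module):
    def __init__(self, width=20, depth=3):
        super().__init__()
        layers, dim = [], 2                      # two inputs: (x, t)
        for _ in range(depth):
            layers += [nn.Linear(dim, width), nn.Tanh()]
            dim = width
        layers.append(nn.Linear(dim, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))

def grad(outputs, inputs):
    """d(outputs)/d(inputs) by automatic differentiation (inputs need requires_grad=True)."""
    return torch.autograd.grad(outputs, inputs,
                               grad_outputs=torch.ones_like(outputs),
                               create_graph=True)[0]

def pde_residual(model, x, t):
    """Placeholder residual F(u_NN, ...) for Equation (4); here a heat equation u_t - u_xx."""
    u = model(x, t)
    return grad(u, t) - grad(grad(u, x), x)

def total_loss(model, xf, tf, x0, u0, xb, tb, gb):
    """Composite loss of Equation (3) without the optional data term L_U.
    xf, tf: collocation points (created with requires_grad=True);
    x0, u0: initial-condition points and values; xb, tb, gb: boundary points and values."""
    loss_F  = pde_residual(model, xf, tf).pow(2).mean()             # Equation (4)
    loss_IC = (model(x0, torch.zeros_like(x0)) - u0).pow(2).mean()  # Equation (5)
    loss_BC = (model(xb, tb) - gb).pow(2).mean()                    # Equation (6)
    return loss_F + loss_IC + loss_BC
```

A training loop would simply evaluate total_loss on sampled collocation, initial, and boundary points and minimize it with a gradient-based optimizer such as Adam.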

2.2. Domain Decomposition

Despite their innovative integration of physical laws into machine learning frameworks, PINNs face some challenges. Convergence is particularly problematic, and various techniques have been devised to address this issue. For example, adaptive collocation points, loss weight adjustments, and network structure improvements are some of these methods [5,6]. Another well-known method for improving convergence is domain decomposition. Domain decomposition methods strategically partition complex computational domains into smaller, more manageable subdomains, facilitating the efficient resolution of PDEs and other mathematical challenges [30,31]. This approach not only facilitates parallel processing but also significantly improves convergence rates of iterative solution techniques for elliptic, parabolic, and hyperbolic equations [30,32].
The main idea is to decompose the domain Ω into N_sd non-overlapping subdomains Ω_q such that
$$\Omega = \bigcup_{q=1}^{N_{sd}} \Omega_q, \qquad \Omega_i \cap \Omega_j = \partial\Omega_{ij}, \quad i \neq j. \tag{8}$$
Each subdomain, denoted as Ω_q, is associated with a distinct sub-network u_NN,q designed to approximate the solution of the PDE within that specific area. The loss function in Equation (3) can be leveraged in each of the subdomains. Moreover, it is crucial to manage the interface conditions between the subdomains, typically by ensuring the continuity of both the solution and the residual across these interfaces [20]. In other words, the neighboring subdomains communicate through interfaces by enforcing continuity. Furthermore, if the phenomena described by the PDE adhere to a conservation law, the continuity of flux must also be enforced [11]. These interface conditions, formulated as L_I(θ) in Equation (10), are incorporated into the total loss function defined in Equation (9).
$$L_{total}(\theta) = L_F(\theta) + L_{IC}(\theta) + L_{BC}(\theta) + L_U(\theta) + L_I(\theta), \tag{9}$$
$$L_I(\theta) = L_{u_{ct}}(\theta) + L_{F_{ct}}(\theta) + L_{f_{ct}}(\theta), \quad x_k \in \partial\Omega_{ij}. \tag{10}$$
Here,
$$L_{u_{ct}}(\theta) = \frac{1}{N_I} \sum_p \left\| u_{NN}^-(x_p, t_p) - u_{NN}^+(x_p, t_p) \right\|^2, \tag{11}$$
$$L_{F_{ct}}(\theta) = \frac{1}{N_I} \sum_p \left\| F\!\left(u_{NN}^-, \nabla u_{NN}^-, \nabla^2 u_{NN}^-, \ldots, x_p, t_p\right) - F\!\left(u_{NN}^+, \nabla u_{NN}^+, \nabla^2 u_{NN}^+, \ldots, x_p, t_p\right) \right\|^2, \tag{12}$$
$$L_{f_{ct}}(\theta) = \frac{1}{N_I} \sum_p \left\| k \nabla u_{NN}^-(x_p, t_p) - k \nabla u_{NN}^+(x_p, t_p) \right\|^2. \tag{13}$$
In domain-decomposed neural network solvers, one approach prioritizes flux continuity, making it straightforward to implement and directly aligned with local conservation laws, especially when governing equations can be written in a flux–divergence form. Another strategy enforces residual continuity by matching the PDE residuals or solution derivatives, making it more general and versatile in multi-physics or multi-media settings where strict conservation may not be the primary concern. In practice, these continuity conditions are often imposed via separate penalty terms. The first term L_uct(θ) ensures solution continuity by minimizing the difference between the predicted solutions u_NN^−(x_p, t_p) and u_NN^+(x_p, t_p) at the interface points x_p ∈ ∂Ω_ij, ensuring no artificial discontinuities between subdomains. The second term, L_Fct(θ), ensures residual continuity by aligning the PDE residuals F(u, ∇u, ∇²u, …, x_p, t_p), ensuring the consistent satisfaction of the governing equations across the interface. The third term, L_fct(θ), enforces flux continuity by matching the flux k∇u, where k is a material property, between adjacent subdomains at x_p ∈ ∂Ω_ij, preserving conservation laws such as those for mass or energy [20]. In this study, the flux-matching procedure is sufficient to maintain global conservation principles while keeping the implementation relatively simple across the three test cases, effectively showcasing the strengths of the proposed PDD method.
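A minimal sketch of these three interface penalties between two neighbouring sub-networks is given below, reusing the grad helper and PyTorch conventions from the sketch in Section 2.1; the function signature and the scalar material property k are assumptions.

```python
def interface_loss(model_minus, model_plus, xi, ti, residual_fn=None, k=1.0):
    """Interface penalties of Equations (11)-(13) at shared points (xi, ti);
    xi must be created with requires_grad=True so the flux term can be differentiated."""
    u_m, u_p = model_minus(xi, ti), model_plus(xi, ti)

    # Solution continuity (Equation (11)): both sides predict the same interface values.
    loss = (u_m - u_p).pow(2).mean()

    # Residual continuity (Equation (12)): the governing equation is satisfied consistently.
    if residual_fn is not None:
        loss = loss + (residual_fn(model_minus, xi, ti)
                       - residual_fn(model_plus, xi, ti)).pow(2).mean()

    # Flux continuity (Equation (13)): k * du/dx matches across the interface.
    loss = loss + (k * grad(u_m, xi) - k * grad(u_p, xi)).pow(2).mean()
    return loss
```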
Upon achieving a satisfactory residual loss throughout all subdomains, the individual models are aggregated to reconstruct the comprehensive solution over the entire original domain, formalized as
$$u_{NN}(x) = \sum_{q=1}^{N_{sd}} u_{NN,q}(x)\, \mathbf{1}_{\Omega_q}(x). \tag{14}$$
Here, the indicator function 1_Ωq(x) is defined as
$$\mathbf{1}_{\Omega_q}(x) =
\begin{cases}
0, & x \notin \Omega_q, \\
1, & x \in \Omega_q \setminus \bigcup_j \partial\Omega_{qj}, \\
\dfrac{1}{j}, & x \in \bigcup_j \partial\Omega_{qj},
\end{cases} \tag{15}$$
where j represents the number of subdomains intersecting along the common interface, Ω_q denotes the subdomain, and ∂Ω_qj specifies the local interface. This formulation ensures continuity by designating the solution at interfaces as the average of adjacent subdomains, a mechanism commonly employed in domain decomposition to enforce nodal equilibrium while preserving numerical stability. A crucial aspect of domain decomposition is the precise management of interface placement and boundary conditions between subdomains. The domain should be divided into a reasonable number of subdomains, considering computational complexity, parallel computing resources, and training dynamics. Interface conditions must be carefully handled to ensure continuity and accuracy throughout the entire domain. Iterative methods often employ boundary exchanges between neighboring subdomains until convergence is achieved across all interfaces.
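As a small illustration of this composition rule for two subdomains separated by a single interface, a sketch is given below; the one-dimensional split at x = x_if and the argument names are assumptions.

```python
def composite_prediction(model_left, model_right, x, t, x_if):
    """Sketch of Equations (14)-(15) for two subdomains split at x = x_if: each side keeps
    its own sub-network, and points on the interface take the average of both predictions."""
    with torch.no_grad():
        left  = (x < x_if).squeeze(-1)            # points owned by the left subdomain
        right = (x > x_if).squeeze(-1)            # points owned by the right subdomain
        face  = ~(left | right)                   # points lying exactly on the interface
        u = torch.zeros(x.shape[0])
        u[left]  = model_left(x[left],  t[left]).squeeze(-1)
        u[right] = model_right(x[right], t[right]).squeeze(-1)
        u[face]  = 0.5 * (model_left(x[face], t[face])
                          + model_right(x[face], t[face])).squeeze(-1)
    return u
```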

3. Progressive Domain Decomposition

While domain decomposition in neural networks offers a promising pathway for enhancing PINN methodologies, it also poses several challenges. Firstly, there is no clear criterion for decomposition. Secondly, when training fails, the resources spent on the failed run are wasted, and this consumption continues until successful hyperparameters or a successful decomposition are identified. To address these issues, this section proposes a method called progressive domain decomposition (PDD).

3.1. Overall Procedure of PDD

Figure 1 presents an overview of the suggested method. It starts with the assignment of a PINN to the target domain of the problem as in regular PINN training. The neural network is then trained to model the underlying physical phenomena within this domain. Upon completion of the training phase, the model undergoes an evaluation to assess residual loss. If the residual loss is small, it means that the network has been successfully trained with the initial setup. However, training is not always successful for various reasons [32]. When training fails, it is common to re-evaluate and adjust the hyperparameters or network architecture and continue the training process. This cycle is repeated until the training is successfully completed, but all these steps contribute to increasing the overall cost of training.
On the other hand, when training fails, apart from cases where the model converges to a completely incorrect solution, it is common for some regions to achieve successful training while others do not. The proposed method leverages models that have successfully converged in these partial regions to improve overall training efficiency and accuracy. To achieve this, the computational domain is classified into Ω_s, where training has succeeded, and Ω_f, where training has failed, based on residual error analysis. The criteria for this classification will be discussed in the next section. For regions where training has succeeded, predictions are made using the existing neural network. Conversely, for regions where training has failed, a new neural network is assigned and trained. This adaptive strategy ensures that the learning process prioritizes regions with higher discrepancies, thereby enhancing the overall predictive accuracy. Moreover, it alleviates the challenge of hyperparameter tuning for the entire domain and eliminates the need to discard computational resources from the initial training phase. If a high-loss subdomain persists, it is further subdivided, and the iterative refinement process continues until all regions meet the desired accuracy threshold. Once this process is complete, the trained networks responsible for each subdomain are seamlessly integrated to form a unified global model. The existing networks and the newly trained networks communicate through three mechanisms. First, during training, the predictions from the existing networks at the interface serve as boundary conditions for the newly trained network, as the predictions of the existing networks are sufficiently accurate. Second, residual continuity and flux continuity are enforced to maintain consistency with the underlying physical principles. Third, once the newly created networks satisfy the convergence criteria, predictions at the interfaces between subdomains are computed as the weighted average of outputs from neighboring subdomains, as formulated in Equations (14) and (15). This approach ensures smooth transitions and consistency across the composite model. By mitigating boundary discrepancies, this framework enables the model to function as a cohesive and robust whole.

3.2. Criteria for Domain Decomposition

In the PDD framework, specific areas within the computational domain that disproportionately contribute to the overall error are identified, enabling targeted refinement and the more efficient allocation of computational resources. This identification process relies on a predefined threshold of residual loss, aimed at delineating areas where losses are concentrated relative to the entire domain.
For this purpose, residual loss data are collected from uniformly distributed points within the domain following model training. The domain is segmented into N_s subdomains Ω_i (i = 1, …, N_s), and the residual loss for each subdomain is evaluated. The subdomain with the highest residual loss is first selected as a test domain Ω_sd, and it is checked whether this subdomain satisfies the following three conditions:
$$C_1 = \frac{L_{\Omega_{sd}}}{L_{total}} \geq m, \tag{16}$$
$$C_2 = \frac{A_{\Omega_{sd}}}{A_{\Omega}} \leq n, \tag{17}$$
$$C_3 = \frac{C_1}{C_2} \geq t, \tag{18}$$
where L_Ωsd represents the aggregated residual loss in Ω_sd, and A represents the area of the domain indicated with the subscript. Thus, the first condition C_1 indicates the proportion of the total residual loss contributed by Ω_sd, while the second condition C_2 represents the proportion of Ω_sd in the entire domain, and the third condition C_3 illustrates the concentration of residual loss. If Ω_sd does not satisfy the conditions, an expanded test subdomain is formed by including neighboring subdomains. The new subdomain is determined by selecting the neighboring subdomain that shares a boundary and has the highest residual. This domain is then designated as the new test subdomain Ω_sd. This process is repeated until the test subdomain meets the criteria. If the test subdomain satisfies the criteria, it is classified as a failed domain Ω_f, and the rest of the domain is classified as a successful domain Ω_s. The failed domain necessitates intensive analysis and additional training. A new network is assigned to the failed domain Ω_f for retraining. The subsequent process follows the method described in the previous section.
The reason for considering both the proportion of loss and the proportion of area occupied by the test subdomain in defining the failed domain Ω_f is to determine whether the loss is concentrated in a specific region. The proposed progressive domain decomposition method is appropriate only when the residual loss is concentrated in one area, and this criterion is used to measure that. If no region meets the criterion, this suggests a uniform distribution of residual losses across the domain, indicating that the model could benefit from comprehensive refinement instead of targeted intervention. In other words, hyperparameters or the network architecture should be adjusted, and the training process should be restarted to achieve better results.
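The identification-and-expansion loop described above can be sketched as follows; the per-subdomain loss and area arrays and the neighbour lists are assumed to have been computed beforehand (one way of producing them for a one-dimensional split is sketched after Algorithm 1).

```python
import numpy as np

def find_failed_region(sub_losses, sub_areas, neighbors, m=0.4, n=0.5, t=1.5):
    """Grow a test region around the worst subdomain until C1 >= m, C2 <= n, and C3 >= t
    (Equations (16)-(18)); returns the set of subdomain indices forming Omega_f,
    or None when the residual loss is spread too uniformly for a targeted split.
    sub_losses[i]: aggregated residual loss of subdomain i
    sub_areas[i]:  area of subdomain i
    neighbors[i]:  indices of subdomains sharing a boundary with i (assumed given)."""
    L_total, A_total = sub_losses.sum(), sub_areas.sum()
    region = {int(np.argmax(sub_losses))}              # start from the worst subdomain

    while True:
        C1 = sub_losses[list(region)].sum() / L_total
        C2 = sub_areas[list(region)].sum() / A_total
        if C1 >= m and C2 <= n and C1 / C2 >= t:
            return region                              # failed domain Omega_f identified
        frontier = {j for i in region for j in neighbors[i]} - region
        if not frontier or C2 > n:                     # nothing left to add, or the region
            return None                                # is already too large: refine globally
        region.add(max(frontier, key=lambda j: sub_losses[j]))
```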
Once the failed subdomain is identified, the objective is to reduce its residual loss to a level comparable to that of the other subdomains under a predefined convergence criterion. In this study, the thresholds for the conditions, m, n, and t, are set to 0.4, 0.5, and 1.5, respectively. The convergence criterion on the average residual loss, L_target, is set to 10 times the average pointwise residual loss l_i of the successful region, as given in Equation (19). A higher criterion is set for the retrained region because the failed subdomains often suffer from high complexity, discontinuities, or other challenging conditions that make achieving convergence more difficult. By setting this criterion, we account for these challenges and provide a more realistic and achievable target for convergence.
$$L_{target} = \frac{10}{N_s} \sum_{i \in \Omega_s} l_i. \tag{19}$$
Algorithm 1 provides a detailed definition of the PDD algorithm.
Algorithm 1 Progressive Domain Decomposition
1. Input: Collect residual losses l_i from N uniformly distributed points in the domain after training.
2. Segmentation: Divide the domain into N_s subdomains and calculate the total residual loss L_total = Σ_{i=1}^{N} l_i.
3. High-loss detection: Identify the subdomain Ω_sd with the highest residual loss. Classify Ω_sd as high loss if it satisfies L_Ωsd / L_total ≥ m, A_Ωsd / A_total ≤ n, and (L_Ωsd / L_total) / (A_Ωsd / A_total) ≥ t (Equations (16)–(18)), where L_Ωsd represents the residual loss in Ω_sd and A denotes the corresponding area.
4. Expanded search (if needed): If Ω_sd does not meet the conditions, expand the test subdomain by including the neighboring subdomain Ω_j with the highest residual. Repeat until the conditions are met.
5. Classification: If Ω_sd satisfies the conditions, classify it as the failed domain Ω_f; the remaining regions are classified as the successful domain Ω_s. Establish the convergence criterion L_target = (10 / N_s) Σ_{i ∈ Ω_s} l_i as in Equation (19), where N_s is the number of points in the successful domain.
6. Uniformity check: If no region is flagged, consider global model refinement.
7. Neural network assignment: Assign a new neural network to Ω_f; continue using the existing network for Ω_s.
8. Targeted training: Train the new network in Ω_f until the residual loss satisfies (1 / N_f) Σ_{i ∈ Ω_f} l_i ≤ L_target.
9. Re-evaluation: If (1 / N_f) Σ_{i ∈ Ω_f} l_i > L_target, subdivide Ω_f and repeat steps 3–7 for each new subdomain.
10. Model composition: Combine the trained networks from all subdomains into a single model; at subdomain interfaces, compute predictions by averaging the outputs of adjacent subnetworks to ensure smooth transitions.
11. Output: Stop when all subdomains satisfy (1 / N_f) Σ_{i ∈ Ω_f} l_i ≤ L_target.
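To make the segmentation and detection steps concrete for a one-dimensional split along x, the sketch below bins pointwise residual losses into equal-width subdomains; the binning scheme and all names are illustrative assumptions, and steps 3–4 then reuse the find_failed_region helper sketched in Section 3.2.

```python
import numpy as np

def subdomain_losses(x, point_loss, n_sub=10, x_min=-1.0, x_max=1.0):
    """Steps 1-2 of Algorithm 1 for a split along x: bin pointwise residual losses
    into n_sub equal-width subdomains and aggregate them per subdomain."""
    edges = np.linspace(x_min, x_max, n_sub + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_sub - 1)
    sub_losses = np.zeros(n_sub)
    np.add.at(sub_losses, idx, point_loss)                    # per-subdomain loss totals
    sub_areas = np.full(n_sub, (x_max - x_min) / n_sub)
    neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < n_sub] for i in range(n_sub)}
    return sub_losses, sub_areas, neighbors

# Assumed usage, with x and point_loss obtained by evaluating the trained PINN on a uniform grid:
# sub_losses, sub_areas, neighbors = subdomain_losses(x, point_loss)
# failed = find_failed_region(sub_losses, sub_areas, neighbors, m=0.4, n=0.5, t=1.5)
```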

4. Case Study

In this section, three numerical experiments are conducted to demonstrate the effectiveness of the proposed PDD method, including the viscous Burgers’ equation, Helmholtz equation, and Korteweg–de Vries equation. These examples are chosen for their distinct characteristics and widespread applications in various fields of science and engineering, making them ideal candidates to evaluate the robustness and versatility of the proposed method.

4.1. Viscous Burgers’ Equation

Burgers’ equation represents a fundamental model for nonlinear convection–diffusion processes, often used to test numerical methods for handling both nonlinearity and dissipation. We exemplify the proposed method with the one-dimensional viscous Burgers’ equation known for its role in studying turbulence and shock wave phenomena. The equation is expressed as follows:
$$u_t + u u_x = \nu u_{xx},$$
where u(x,t) denotes the fluid velocity at time t and position x; subscripts x and t denote the derivatives; and ν represents the fluid’s viscosity, set to 0.01/π in this case study. The initial condition for the viscous Burgers’ equation is set to be a sinusoidal function, defined as
$$u(0, x) = \sin(\pi x), \quad x \in [-1, 1].$$
The boundary conditions are Dirichlet conditions, setting the fluid velocity to zero at both domain ends, which models a scenario when the fluid is static at the boundaries:
$$u(t, -1) = u(t, 1) = 0.$$
To solve Burgers’ equation with the proposed method, a conventional PINN, shown in Figure 2, was first adopted. The network, u_NN(t,x; θ), is a feed-forward neural network with three hidden layers of 20 neurons each. We sampled 5000 collocation points in the domain, 200 points on the spatial boundary, and 100 points for the initial condition. The collocation points are shown in Figure 3a. The model was trained for 50,000 epochs with the Adam optimizer. Figure 3b shows the residual loss after the training.
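A sketch of this setup is given below, reusing the PINN class and grad helper from the Section 2.1 sketch; the sampling distributions, learning rate, random seed, and the time horizon t ∈ [0, 1] are assumptions not stated in the text.

```python
import torch

torch.manual_seed(0)
model = PINN(width=20, depth=3)                            # same class as in the Section 2.1 sketch

# Collocation, boundary, and initial-condition points (5000 / 200 / 100 as in the text).
xf = (2 * torch.rand(5000, 1) - 1).requires_grad_(True)    # x in [-1, 1]
tf = torch.rand(5000, 1).requires_grad_(True)              # t in [0, 1] (assumed horizon)
xb = torch.cat([torch.full((100, 1), -1.0), torch.full((100, 1), 1.0)])
tb = torch.rand(200, 1)
x0 = 2 * torch.rand(100, 1) - 1

def burgers_residual(model, x, t, nu=0.01 / torch.pi):
    """Residual u_t + u*u_x - nu*u_xx of the viscous Burgers' equation."""
    u = model(x, t)
    u_x = grad(u, x)
    return grad(u, t) + u * u_x - nu * grad(u_x, x)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(50_000):
    optimizer.zero_grad()
    loss = (burgers_residual(model, xf, tf).pow(2).mean()                                 # L_F
            + (model(x0, torch.zeros_like(x0)) - torch.sin(torch.pi * x0)).pow(2).mean()  # L_IC
            + model(xb, tb).pow(2).mean())                                                # L_BC: u = 0 at x = +-1
    loss.backward()
    optimizer.step()
```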
The residual loss of the trained model was then evaluated by calculating the residuals at each point, as depicted in Figure 3c. For this study, thresholds of m = 0.4, n = 0.5, and t = 1.5 were set to identify the high-loss region. Following the identification process in Section 3.2, the central part along the x-axis was recognized as the high-loss region, where 81.59% of the total residual loss was concentrated within merely 10% of the region, x ∈ [−0.1, 0.1], and C_3 = 8.159. Subsequently, the domain was partitioned into two subdomains, as illustrated in Figure 4a. The central high-loss region Ω_f was assigned a new neural network, which was trained using sampled points identical to those in the initial model training. Note that the architecture of the newly assigned network was kept the same as the original one to specifically demonstrate the effectiveness of the proposed PDD approach. In contrast, the surrounding area was classified as the low-loss subdomain Ω_s and continued to utilize the previously trained model. Additionally, with the domain now decomposed, interface conditions, as specified in Equation (9), were incorporated into the loss function to ensure continuity and enhance model integration across the subdomains. In the case of the viscous Burgers’ equation, the flux term u²/2 should ideally be continuous across interfaces to prevent physical and numerical inconsistencies; this continuity is enforced by including the flux term in the loss function. The neural network in the identified region was trained for 50,000 iterations, with predictions from the successfully trained regions serving as boundary conditions for the new neural network’s training. Upon completion of the training of the new neural network, a further evaluation of residual loss, as depicted in Figure 4c, was performed to compare the residual loss with the target L_target. This assessment confirmed that the model met the required criteria, enabling the neural network to accurately predict solutions within the centrally complex region of the domain.
This approach has successfully reduced residual loss and minimized errors due to decreased complexity. Figure 4c depicts the reduction in residual loss following the implementation of domain decomposition. It is evident that by employing the PDD method, the residual loss in the central part along the x-axis decreased by a factor of 20. The relative L2 error across the entire domain was reduced by 16.67%, from 0.006 to 0.005. The prediction in Figure 4b by the PDD is accurate and smooth due to the enforcement of continuity at the interfaces. Compared with the residual loss distribution shown in Figure 3c, the residual loss after retraining decreased significantly, with the region’s share of the total residual loss dropping from over 81% to 35.19%. The mean residual loss in the retrained subdomain successfully meets the target, achieving a value of 0.000264. This value is lower than 10 times the mean residual loss in the successful region (0.000054), thereby satisfying the convergence criterion of the PDD method. Consequently, we can rely on the dynamics of residual loss to precisely segment the domain and promptly focus on challenging areas, even without extensive knowledge of the underlying physics. This demonstrates that PDD is a straightforward and universally applicable method within the framework of PINNs.

4.2. Helmholtz Equation

The Helmholtz equation with a source term is widely used across various fields such as acoustics, electromagnetics, and optics. This case provides a rigorous test of the method’s ability to handle oscillatory solutions. The 2D Helmholtz equation describes how scalar fields such as acoustic pressure or electromagnetic potentials vary in two-dimensional space under harmonic oscillations. The equation is written as
$$\nabla^2 u(x, y) + k^2 u(x, y) = q(x, y), \quad x \in [-1, 1],\ y \in [-1, 1],$$
where ∇2 is the Laplace operator in two dimensions, u(x,y) represents the scalar field such as acoustic pressure or electromagnetic potential, k is the wave number associated with the medium, and q(x,y) is the source function that describes the distribution and strength of the source within the domain. We set up the source equation and boundary conditions as follows according to [33]:
$$q(x, y) = -(a_1 \pi)^2 \sin(a_1 \pi x) \sin(a_2 \pi y) - (a_2 \pi)^2 \sin(a_1 \pi x) \sin(a_2 \pi y) + k^2 \sin(a_1 \pi x) \sin(a_2 \pi y),$$
where the parameters a1 and a2 are set to 1 and 4, respectively. The boundary has Dirichlet conditions defined by
$$u(x, y) = \sin(\pi x) \sin(4 \pi y), \quad (x, y) \in \partial\Omega.$$
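For illustration, the source term and residual can be written as below, again reusing the grad helper from the Section 2.1 sketch and a two-input network analogous to the earlier PINN class, with (x, y) in place of (x, t); the wave number k = 1 is an assumption, since its value is not stated here.

```python
import torch

def helmholtz_source(x, y, a1=1.0, a2=4.0, k=1.0):
    """Source term q(x, y) consistent with the assumed solution sin(a1*pi*x)*sin(a2*pi*y)."""
    s = torch.sin(a1 * torch.pi * x) * torch.sin(a2 * torch.pi * y)
    return (-(a1 * torch.pi) ** 2 - (a2 * torch.pi) ** 2 + k ** 2) * s

def helmholtz_residual(model, x, y, k=1.0):
    """Residual of the 2D Helmholtz equation; x and y need requires_grad=True."""
    u = model(x, y)
    u_xx = grad(grad(u, x), x)
    u_yy = grad(grad(u, y), y)
    return u_xx + u_yy + k ** 2 * u - helmholtz_source(x, y, k=k)
```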
The network consists of four layers, each with 128 neurons, and employs the Adam optimizer. Within the domain, 20,000 collocation points and 256 boundary points were sampled, as shown in Figure 5a. Figure 5b illustrates the prediction across the whole domain with uniformly distributed samples after training for 1000 iterations. Subsequently, the model’s performance was assessed by evaluating the residual loss, as depicted in Figure 5c. A high-loss region was identified using Equations (16) to (18). Notably, 94.73% of the residual loss was concentrated in the segment from y = −1 to y = 0, which comprises 50% of the total domain, and C3 = 1.8946. This meets the established criteria and effectively excludes the successfully modeled subdomain. Following this analysis, we applied PDD to simplify the domain complexity by partitioning the domain into two subdomains at the interface y = 0, as shown in Figure 5a. At the interface, we enforced the continuity of solution u(x,y) to ensure communication between the models in the newly identified region and the successfully trained model. Figure 6c demonstrates that post PDD, the residual in the more complex region decreased significantly, underscoring the effectiveness of precise identification and decomposition using the PDD method. Before domain decomposition, the mean residual loss in the whole domain was 5.8402, with the mean residual loss in the lower half of the domain y∈ [−1, 0] being 11.0419 and that in the upper half being 0.6124. After domain decomposition, the same architecture of the network reduced the mean residual loss to 0.31, 0.0078, and 0.6128, respectively. The mean residual loss in the new trained region was significantly smaller than ten times the mean residual loss in the successfully modeled region, satisfying the convergence criteria effectively.
Figure 5c and Figure 6c compare the mean residual loss before and after PDD. By stabilizing the model in the upper half of the domain and updating the model in the lower half, this work has significantly enhanced the prediction accuracy in the more complex lower region. Notably, the relative L2 error decreased from 0.266 to 0.077 in this complex subdomain. Figure 5b and Figure 6b depict the predictions in the whole domain before and after using the PDD method, respectively. These results lead us to a robust conclusion: the PDD method markedly improves accuracy by substantially reducing residual loss.

4.3. Korteweg–De Vries Equation

The Korteweg–de Vries (KdV) equation is a nonlinear PDE that describes the evolution of solitary waves in shallow water channels. The integrability of the KdV equation leads to a rich mathematical structure and numerous applications in both theoretical and applied sciences. In this work, it challenges the proposed method’s capability to accurately capture complex wave behaviors. The one-dimensional KdV equation has the following form:
$$u_t + u u_x + \nu u_{xxx} = 0, \quad x \in [-1, 1],\ t > 0,$$
where ν = 0.0025, and u = u(x,t) represents the wave profile [11]. The initial condition is u(0,x) = cos(πx), and boundary conditions are periodic. First, a conventional PINN was adopted to solve the KdV equation. In this scenario, an MLP was designed with 10 layers, each containing 20 neurons, and the Adam optimizer was employed for training. The tanh activation function was employed throughout to predict the solution. In the analysis, 18,000 collocation points, along with 256 initial and 256 boundary points, were sampled, as shown in Figure 7a. The model was trained for 300,000 epochs, and Figure 7b illustrates the prediction after training. The relative L2 error is 0.0211 across the whole domain. Then, the model was validated across the entire domain, shown in Figure 7c, revealing that the residual loss was prominently concentrated in a specific region.
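The KdV residual, with its third-order spatial derivative obtained by nesting automatic differentiation, can be sketched analogously (reusing the grad helper from the Section 2.1 sketch; the 10-layer MLP described above would be passed as model).

```python
def kdv_residual(model, x, t, nu=0.0025):
    """Residual u_t + u*u_x + nu*u_xxx of the KdV equation; x and t need requires_grad=True."""
    u = model(x, t)
    u_x = grad(u, x)
    return grad(u, t) + u * u_x + nu * grad(grad(u_x, x), x)
```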
By accumulating the grid residual loss across distinct intervals, the PDD algorithm identified a high-loss region between x = 0.3 and x = 0.5, shown in Figure 8a. This region contributed 45.58% of the total residual loss while covering only 10% of the total area, indicating a significant concentration of residual loss and highlighting the need for further refinement in this area. The values of the three conditions were 0.4558, 0.1, and 4.558, respectively, satisfying the criterion for domain decomposition. The remaining portions of the domain were deemed adequately modeled, and the model was subsequently saved. Notably, the collocation points in the newly identified subdomain for training were sampled with the same density as the entire domain. The MLP architecture was kept consistent with the previous model, and the training for this specific region was conducted for 100,000 epochs. At the interface between the retrained subdomain and the successfully modeled subdomains, both solution continuity and flux continuity were strictly enforced to ensure that the solution was smoothly joined across subdomains and to avoid introducing artificial discontinuities. The application of the PDD method resulted in a substantial reduction in residual loss, as illustrated in Figure 8c, demonstrating its efficacy in solving the KdV equations. The average residual loss in the retrained area was 4.8337 × 10−6, while the average residual loss in the previously modeled area was 9.0325 × 10−5, indicating that the convergence criteria were satisfied completely. The relative L2 error across the whole domain decreased from 0.0211 to 0.0142, further confirming that the PDD approach effectively captures wave behaviors and addresses challenges with KdV equations.

5. Discussion

5.1. Limitations of the Proposed Neural Network Framework

While the PDD method has shown promising results in solving low-dimensional PDE problems, there are still challenges to address when scaling this approach to high-dimensional problems. One of the primary limitations is the computational expense that comes with finer segmentation of the domain, which can increase as the complexity of the problem grows. Additionally, managing communication between subdomains in high-dimensional settings can become increasingly difficult, especially when domains exhibit highly irregular geometries or complex boundary conditions. The effectiveness of the PDD method in such cases is still an open area for exploration. Future work should focus on refining domain segmentation techniques and improving interface management between subdomains to ensure smooth transitions and efficient model training for larger-scale problems.

5.2. Extensions of the PDD Approach

The PDD approach demonstrates great potential for extending the applicability of PINNs to more complex problems, particularly in higher-dimensional spaces. Combining PDD with other algorithms like cPINNs and XPINNs could significantly enhance the method’s ability to handle these challenges. Several state-of-the-art algorithms are designed to manage high-dimensional PDEs more effectively, offering a promising avenue for extending the scope of PDD beyond low-dimensional problems. Future research should focus on integrating these advanced neural network architectures and algorithms with PDD, ensuring that the method can handle a wider range of scientific and engineering applications while maintaining computational efficiency through dynamic subdomain pruning and parallel training protocols. Furthermore, while the current work focuses on static problems, the modular architecture of PDD provides a foundation for addressing time-dependent systems. By coupling the progressive training mechanism with the causal time-marching strategies in [34], PDD can effectively model temporal evolution while maintaining its core advantages in domain decomposition. This hybrid paradigm would enable causal loss propagation through temporal residual evolution analysis, allowing the targeted refinement of subdomains during critical time intervals, and memory-efficient incremental updates to avoid retraining on full-domain data.

5.3. Special Challenges with the PDD Method

As the PDD method scales to more complex problems, several challenges remain. One major issue is the efficient identification and management of high-loss subdomains. While the method provides a strategy for handling regions with complex computational requirements, the dynamic adjustment of interfaces and the optimal partitioning of the domain remain unresolved. The increased complexity of high-dimensional problems further complicates interface communication, as the choice of continuity (solution, flux, or residual) must be adapted based on the specific characteristics of each subdomain. Developing more sophisticated strategies for managing these interfaces will be critical for improving the method’s performance in more intricate computational scenarios.

5.4. Methods for Communication at the Interface

Communication between subdomains is a critical factor for the success of the PDD method. The method must handle different types of continuity depending on the problem’s physical nature. In some cases, solution continuity is required to ensure consistency at the boundaries. For problems governed by conservation laws, such as fluid dynamics or heat transfer, flux continuity may be more appropriate. In other instances, residual continuity helps to reduce discrepancies at the interfaces, leading to more accurate and stable results. The dynamic selection of the appropriate type of continuity at each interface is vital for maintaining smooth transitions and ensuring the accuracy and efficiency of the method as it scales to more complex, higher-dimensional problems. However, through extensive experimentation, we observed that enforcing residual continuity at interfaces had a negligible impact on the final prediction accuracy while significantly increasing computational overhead during training. Consequently, our study intentionally omits residual continuity as a loss term to prioritize training efficiency without compromising accuracy. Instead, we enforced solution continuity and flux continuity at subdomain interfaces, which sufficed to ensure global accuracy, as evidenced by the <1% error deviation across all tested cases compared to scenarios with residual continuity. This design choice not only streamlined training but also isolated PDD’s contributions by avoiding confounding factors from mixed boundary constraints. While residual continuity could enhance local smoothness, our results demonstrate that PDD’s adaptive loss prioritization already achieves superior accuracy to baseline PINNs without additional continuity enforcement. We acknowledge the potential value of residual continuity for specific applications and will explore its integration in future work to address higher-order interface requirements.

5.5. Hyperparameter Tuning

In the current work, the values of m, n, and t (i.e., m = 0.4, n = 0.5, t = 1.5) were determined through iterative numerical experimentation to balance residual minimization and interface coupling. These parameters were validated through systematic performance evaluation across multiple PDE benchmarks, demonstrating stable convergence and accuracy within the problem regimes studied.
Looking ahead, we are actively investigating automated hyperparameter optimization to systematize parameter determination using methods such as genetic algorithms and Bayesian optimization. Furthermore, we propose to explore adaptive mechanisms that dynamically adjust m, n, and t during training based on gradient variance monitoring. These efforts will extend our framework’s capability toward self-optimizing domain decomposition in PINNs, ensuring broader applicability across complex engineering problems.

6. Conclusions

The PDD method effectively identifies regions of complex computation by aggregating the interval residual loss during the evaluation of the trained model. This approach not only significantly enhances prediction accuracy but also preserves computational resources by strategically saving the model corresponding to each specific subdomain. To the best of our knowledge, this is the first time that criteria for domain decomposition in PINNs have been proposed. The method demonstrates broad applicability across various complex PDEs, establishing it as a versatile tool in computational science. Despite its success, further research is warranted to explore the relationship between the complexity of the problems and their specific locations within the domain. Such studies are essential for refining the efficacy of the PDD method and expanding its applicability in more nuanced computational scenarios.
The integration of domain decomposition into PINN frameworks aligns well with traditional numerical methods, offering a promising avenue for simplifying the computational challenges associated with large and complex domains. This synergy between simple neural network architectures and established numerical techniques paves the way for more robust and efficient solutions in computational science and engineering. The PDD method demonstrates superior performance compared to single-domain computations by effectively balancing computational accuracy and efficiency. This method is broadly applicable across a variety of complex PDEs, making it a versatile tool. By segmenting the computational domain and leveraging trained models within specific subdomains, the PDD method significantly reduces computational effort. This strategic partitioning not only conserves resources but also enhances the precision and speed of problem-solving processes.
This work makes several significant contributions to the field. First, we enhance the training process through strategic subdomain segmentation. During training, the domain is divided into subdomains based on a rigorously defined criterion that evaluates the complexity of different regions. This approach allows for precise solutions in specific sections, enabling targeted model updates in regions with higher computational challenges. It simplifies the management of distinct models for each subdomain and reduces the need for sweeping adjustments to hyperparameters across the entire model.
Second, the method enables the progressive saving of training results within each subdomain, effectively conserving training effort. By treating training failures as valuable assets for future phases, this approach allows for more focused research on the most complex areas of the domain. Advanced architectural designs and innovative training methodologies are employed to overcome these challenges.
Furthermore, we integrate a sequential training methodology with PDD, optimizing the learning process and enabling more efficient progress through challenging regions.
Finally, we expand the applicability of neural networks in scientific computing. By applying this method, we address and solve problems that have traditionally been difficult for conventional neural networks. This not only demonstrates the versatility of neural networks in scientific applications but also broadens their scope in solving complex real-world problems.
Future research will explore the implementation of different network architectures and hyperparameters to further enhance the robustness and global applicability of the proposed method, paving the way for its use in a wider range of scientific and engineering problems.

Author Contributions

Conceptualization, D.L. and T.K.; methodology, D.L. and T.K.; software, T.K. and D.L.; validation, T.K., S.-H.J. and D.L.; formal analysis, T.K.; investigation, D.L.; writing—original draft preparation, D.L.; writing—review and editing, T.K.; visualization, D.L.; supervision, S.-H.J. and T.K.; funding acquisition, S.-H.J. and T.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Technology Innovation Program (RS-2024-00417417, Development of lightweight carbon fiber composite material components for mobility) funded by the Korea government (MOTIE); and partly by the Scientific Research Foundation of Changzhou College of Information Technology (SGA070300020447). The project name is “The application research of physical information neural network based on multi-source heterogeneous data of industrial interconnection in enterprise digital transformation”.

Data Availability Statement

No data were used in the preparation of this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PINN: Physics-Informed Neural Network
PDD: Progressive Domain Decomposition
SOTA: State-of-the-Art
PDE: Partial Differential Equation
cPINN: Conservative PINN
XPINN: Extended PINN
APINN: Augmented PINN

References

  1. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations. arXiv 2017, arXiv:1711.10561. [Google Scholar]
  2. Haghighat, E.; Raissi, M.; Moure, A.; Gomez, H.; Juanes, R. A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics. Comput. Methods Appl. Mech. Eng. 2021, 379, 113741. [Google Scholar] [CrossRef]
  3. Lu, L.; Pestourie, R.; Yao, W.J.; Wang, Z.C.; Verdugo, F.; Johnson, S.G. Physics-Informed Neural Networks With Hard Constraints for Inverse Design. SIAM J. Sci. Comput. 2021, 43, B1105–B1132. [Google Scholar] [CrossRef]
  4. Basir, S.; Senocak, I. Physics and equality constrained artificial neural networks: Application to forward and inverse problems with multi-fidelity data fusion. J. Comput. Phys. 2022, 463, 111301. [Google Scholar] [CrossRef]
  5. Wang, S.; Sankaran, S.; Wang, H.; Perdikaris, P. An Expert’s Guide to Training Physics-informed Neural Networks. arXiv 2023, arXiv:2308.08468. [Google Scholar]
  6. Wu, W.; Daneker, M.; Jolley, M.A.; Turner, K.T.; Lu, L. Effective data sampling strategies and boundary condition constraints of physics-informed neural networks for identifying material properties in solid mechanics. Appl. Math. Mech.-Engl. Ed. 2023, 44, 1039–1068. [Google Scholar] [CrossRef] [PubMed]
  7. Wight, C.L.; Zhao, J. Solving Allen-Cahn and Cahn-Hilliard Equations using the Adaptive Physics Informed Neural Networks. Commun. Comput. Phys. 2021, 29, 930–954. [Google Scholar] [CrossRef]
  8. Tang, K.; Wan, X.; Yang, C. DAS-PINNs: A deep adaptive sampling method for solving high-dimensional partial differential equations. J. Comput. Phys. 2023, 476, 111868. [Google Scholar] [CrossRef]
  9. Haghighat, E.; Juanes, R. SciANN: A Keras/TensorFlow wrapper for scientific computations and physics-informed deep learning using artificial neural networks. Comput. Methods Appl. Mech. Eng. 2021, 373, 113552. [Google Scholar] [CrossRef]
  10. Lu, L.; Meng, X.; Mao, Z.; Karniadakis, G.E. DeepXDE: A deep learning library for solving differential equations. SIAM Rev. 2021, 63, 208–228. [Google Scholar] [CrossRef]
  11. Jagtap, A.D.; Kharazmi, E.; Karniadakis, G.E. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. Comput. Methods Appl. Mech. Eng. 2020, 365, 113028. [Google Scholar] [CrossRef]
  12. Moseley, B.; Markham, A.; Nissen-Meyer, T. Finite basis physics-informed neural networks (FBPINNs): A scalable domain decomposition approach for solving differential equations. Adv. Comput. Math. 2023, 49, 62. [Google Scholar] [CrossRef]
  13. Zhang, E.R.; Dao, M.; Karniadakis, G.E.; Suresh, S. Analyses of internal structures and defects in materials using physics-informed neural networks. Sci. Adv. 2022, 8, 7. [Google Scholar] [CrossRef]
  14. Wiecha, P.R.; Arbouet, A.; Girard, C.; Muskens, O.L. Deep learning in nano-photonics: Inverse design and beyond. Photonics Res. 2021, 9, B182–B200. [Google Scholar] [CrossRef]
  15. Yago, D.; Sal-Anglada, G.; Roca, D.; Cante, J.; Oliver, J. Machine learning in solid mechanics: Application to acoustic metamaterial design. Int. J. Numer. Meth. Eng. 2024, 125, e7476. [Google Scholar] [CrossRef]
  16. Meng, Z.; Qian, Q.C.; Xu, M.Q.; Yu, B.; Yildiz, A.R.; Mirjalili, S. PINN-FORM: A new physics-informed neural network for reliability analysis with partial differential equation. Comput. Methods Appl. Mech. Eng. 2023, 414, 116172. [Google Scholar] [CrossRef]
  17. Schwarz, H.A. Ueber einen Grenzübergang durch alternirendes Verfahren. Vierteljahresschr. Naturforsch. Ges. Zürich 1870, 15, 272–286. [Google Scholar]
  18. Quarteroni, A.; Valli, A. Domain Decomposition Methods for Partial Differential Equations; Oxford University Press: Melbourne, Australia, 1999. [Google Scholar]
  19. Li, K.; Tang, K.J.; Wu, T.F.; Liao, Q.F. D3M: A Deep Domain Decomposition Method for Partial Differential Equations. IEEE Access 2020, 8, 5283–5294.
  20. Jagtap, A.D.; Karniadakis, G.E. Extended Physics-Informed Neural Networks (XPINNs): A Generalized Space-Time Domain Decomposition Based Deep Learning Framework for Nonlinear Partial Differential Equations. Commun. Comput. Phys. 2020, 28, 2002–2041.
  21. Das, S.; Tesfamariam, S. State-of-the-art review of design of experiments for physics-informed deep learning. arXiv 2022, arXiv:2202.06416.
  22. Hu, Z.; Jagtap, A.D.; Karniadakis, G.E.; Kawaguchi, K. When Do Extended Physics-Informed Neural Networks (XPINNs) Improve Generalization? SIAM J. Sci. Comput. 2022, 44, A3158–A3182.
  23. Shukla, K.; Jagtap, A.D.; Karniadakis, G.E. Parallel physics-informed neural networks via domain decomposition. J. Comput. Phys. 2021, 447, 110683.
  24. Bandai, T.; Ghezzehei, T.A. Forward and inverse modeling of water flow in unsaturated soils with discontinuous hydraulic conductivities using physics-informed neural networks with domain decomposition. Hydrol. Earth Syst. Sci. 2022, 26, 4469–4495.
  25. Meng, X.H.; Li, Z.; Zhang, D.K.; Karniadakis, G.E. PPINN: Parareal physics-informed neural network for time-dependent PDEs. Comput. Methods Appl. Mech. Eng. 2020, 370, 113250.
  26. Dwivedi, V.; Parashar, N.; Srinivasan, B. Distributed physics informed neural network for data-efficient solution to partial differential equations. arXiv 2019, arXiv:1907.08967.
  27. Stiller, P.; Bethke, F.; Böhme, M.; Pausch, R.; Torge, S.; Debus, A.; Vorberger, J.; Bussmann, M.; Hoffmann, N. Large-scale neural solvers for partial differential equations. In Proceedings of the Driving Scientific and Engineering Discoveries Through the Convergence of HPC, Big Data and AI: 17th Smoky Mountains Computational Sciences and Engineering Conference, SMC 2020, Oak Ridge, TN, USA, 26–28 August 2020; pp. 20–34.
  28. Li, W.Y.; Wang, Z.M.; Cui, T.; Xu, Y.X.; Xiang, X.S. Deep Domain Decomposition Methods: Helmholtz Equation. Adv. Appl. Math. Mech. 2022, 15, 118–138.
  29. Gosselet, P.; Rey, C. Non-overlapping domain decomposition methods in structural mechanics. Arch. Comput. Methods Eng. 2006, 13, 515–572.
  30. Heinlein, A.; Klawonn, A.; Lanser, M.; Weber, J. Combining machine learning and domain decomposition methods for the solution of partial differential equations—A review. GAMM-Mitteilungen 2021, 44, e202100001.
  31. Shao, J.P.; Chan, T.F. Domain decomposition algorithms. Acta Numer. 1994, 3, 61–143.
  32. Basir, S. Investigating and Mitigating Failure Modes in Physics-Informed Neural Networks (PINNs). Commun. Comput. Phys. 2023, 33, 1240–1269.
  33. Wang, S.; Teng, Y.; Perdikaris, P. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM J. Sci. Comput. 2021, 43, A3055–A3081.
  34. Krishnapriyan, A.; Gholami, A.; Zhe, S.; Kirby, R.; Mahoney, M.W. Characterizing possible failure modes in physics-informed neural networks. Adv. Neural Inf. Process. Syst. 2021, 34, 26548–26560.
Figure 1. Overall procedure of progressive domain decomposition.
Figure 2. Conventional PINN solving Burgers’ equation.
Figure 3. Performance of conventional PINN for Burgers’ equation. (a) Calculation domain, (b) prediction with conventional PINN, and (c) residual loss using conventional PINN.
Figure 4. Process and performance of PDD for Burgers’ equation. (a) Decomposed domains, (b) retraining result, and (c) residual loss.
Figure 5. Performance of conventional PINN for Helmholtz equation. (a) Calculation domain, (b) prediction with conventional PINN, and (c) residual loss using conventional PINN.
Figure 6. Process and performance of PDD for Helmholtz equation. (a) Decomposed domains, (b) retraining result, and (c) residual loss.
Figure 7. Performance of conventional PINN for KdV equation. (a) Calculation domain, (b) prediction with conventional PINN, and (c) residual loss using conventional PINN.
Figure 8. Process and performance of PDD for KdV equation. (a) Decomposed domains, (b) retraining result, and (c) residual loss.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
